The recent explosion in artificial intelligence (AI) has led to some disturbing allegations about the development and use of this powerful new tool. In fact, OpenAI, Meta, Microsoft and others have already come under scrutiny for allegedly exploiting established creative intellectual property rights and violating privacy rights during the model training process.

Large language models (LLMs), the cornerstone of generative AI, are trained on human-generated data that inevitably carries implicit biases and errors. AI models and solutions inherit those same biases and errors, which can lead to harmful mistakes and undesirable outcomes. This is why global ethicists recommend a holistic, interdisciplinary approach to building Responsible AI frameworks that counter pervasive implicit bias and human fallibility.

The United Nations has weighed in on these growing concerns. In its "Recommendation on the Ethics of Artificial Intelligence," the U.N. Educational, Scientific and Cultural Organization (UNESCO) asserts that "AI technologies can be of great service to humanity […] but also raise fundamental ethical concerns […] regarding the biases they can embed and exacerbate." 

To address the growing concerns, UNESCO has released 11 policy considerations for fostering Responsible AI — the practice of designing, developing and deploying AI systems in ways that are safe, ethical and fair.

11 policy considerations for implementing Responsible AI

WWT has nearly a decade of experience creating guardrails to manage AI products. We've helped clients safely leverage AI to achieve outcomes such as power consumption savings of 17 to 30 percent, improved employee health and safety management through fine-tuned LLMs, a 180-ton reduction in carbon footprint, and cost reductions from optimized data center cooling capacity. Our practical approach to AI solution development and delivery integrates Responsible AI throughout each of these real-world engagements.

We are committed to helping our clients understand how to apply UNESCO's 11 policy considerations for pursuing Responsible AI when developing, testing and deploying their own AI solutions.

Source: The United Nations Educational, Scientific and Cultural Organization

These policy considerations comprise a constructive framework of values, principles and actions that governments and organizations can take in developing laws, policies and procedures around AI. Close consideration of these principles can help minimize AI ecosystem risks and ensure AI systems and solutions are designed to prevent harm and work for the benefit of humanity, individuals, societies and the greater environment.

  1. Ethical impact assessment: Conduct assessments that facilitate citizen participation and encourage auditability, traceability and explainability.
  2. Ethical governance and stewardship: Establish governance that supports innovation and business growth without imposing undue administrative burdens.
  3. Data policy: Develop governance strategies that ensure continual evaluation of the quality of AI systems and their data.
  4. Development and international cooperation: Invest in R&D infrastructure that promotes AI ethics research and its application across industries and capabilities.
  5. Environment and ecosystem: Measure direct and indirect environmental impacts and adhere to the latest compliance policies.
  6. Gender: Implement strategic solutions that ensure gender equality and avoid gender bias in AI systems.
  7. Culture: Develop AI solutions that retain the cultural integrity of the product (e.g., voice bots).
  8. Education and research: Provide organizational change management (OCM) training to improve AI literacy across the organization.
  9. Communication and information: Ensure freedom of expression and access to curated, reliable information.
  10. Economy and labor: Assess the impact of AI on the labor force and invest in education and training where needed.
  11. Health and social wellbeing: Factor in the physical and mental impacts of AI solutions on all social groups within the community.

The UNESCO framework can provide some reassurance to the many audiences who are troubled by the negative implications of AI, including: 

  • CEOs concerned about their valuable data and brand reputation
  • MIS leaders looking for the right compliance policy guardrails for effective data governance and adoption
  • Managers concerned about employee performance, productivity and sentiments
  • Model stewards concerned about possible bias and unfairness in the model output
  • Artists and performers worried about losing their IP, being exploited and losing income
  • Teachers worried about plagiarism and AI's long-term impact on learning
  • Regulators tasked with addressing the impact of AI on an overwhelmed citizenry

Corporate leaders, in particular, can benefit from paying close attention to the principles of ethical governance, stewardship and data policy. These principles can help companies design and deploy AI systems that benefit their customers, employees and community stakeholders. 

A multi-perspective approach with a human-centered focus is key to avoiding the most harmful risks. We must account for known biases and safeguard against unintended negative consequences for the groups and individuals who will be most affected by AI systems.
Responsible AI can help ensure that AI systems are built and used in safe and ethical ways. Despite the many obstacles to the full adoption of Responsible AI, it's critical that we as a society fully commit ourselves to this challenge. 

By implementing the 11 policy considerations outlined above, we stand a much better chance of realizing AI solutions that are transparent and trustworthy while mitigating the very real harms this new technology can cause.
