# OECD AI Principles
As artificial intelligence is applied to an ever wider range of tasks, guiding its development in a responsible direction becomes ever more important. To that end, the [OECD AI Principles](https://oecd.ai/en/ai-principles), adopted in 2019 and updated in 2024, aim to promote innovative and trustworthy AI through a unified set of guidelines.
The five values outlined in the OECD AI Principles are:
1. Inclusive growth, sustainable development, and well-being;
2. Human rights and democratic values, including fairness and privacy;
3. Transparency and explainability;
4. Robustness, security, and safety;
5. Accountability.
These principles matter for several reasons.
First, they **foster public trust**: transparency, safety, and accountability are the cornerstones of a resilient society in which the public can be confident that AI systems behave as intended.
Second, they **give policymakers a basis** for creating regulations that put citizens' interests first, much as the GDPR has helped protect citizens' privacy.
Third, they **drive responsible innovation**, fostering the development of AI systems for objectives aligned with the United Nations' [Sustainable Development Goals](https://sdgs.un.org/goals).