Principles

What principles should guide AI's lifecycle?

  • Accountability

    Accountability is the professional practice whereby companies identify the human beings responsible for designing, developing, deploying, and monitoring an artificial intelligence system at each stage of its lifecycle. Internal monitoring bodies and independent auditors conduct impact assessments and allow external monitors to verify and replicate their findings, which elevates the company's trustworthiness. Accountability also means that companies must provide legal and monetary remedies when their products cause harm or violate human rights; in turn, companies that foster a culture of human rights earn social and economic incentives that bolster their reputation and trustworthiness. By establishing accountability points at every stage of the AI lifecycle, companies and the monitors who oversee them help ensure that their products align with human rights principles and that the resulting AI systems are safe, fair, and trustworthy (a minimal sketch of such an accountability register appears after this list).

  • Agency

    [Insert shortcode]

  • Autonomy

    [Insert shortcode]

  • Beneficence

    [Insert shortcode]

  • Consent

    [Insert shortcode]

  • Dignity

    [Insert shortcode]

  • Explainability

    [Insert shortcode]

  • Fairness

    [Insert shortcode]

  • Freedom

    [Insert shortcode]

  • Nonmaleficence

    [Insert shortcode]

  • Notice

    [Insert shortcode]

  • Predictability

    [Insert shortcode]

  • Privacy

    [Insert shortcode]

  • Remedy

    [Insert shortcode]

  • Reproducibility

    [Insert shortcode]

  • Safety

    [Insert shortcode]

  • Security

    [Insert shortcode]

  • Solidarity

    [Insert shortcode]

  • Sustainability

    [Insert shortcode]

  • Transparency

    [Insert shortcode]

  • Trust

    [Insert shortcode]

  • Verifiability

    [Insert shortcode]
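
To make "accountability points at every stage" concrete, here is a minimal sketch of a register that records a named owner and independent auditor for each lifecycle stage, as described under Accountability above. Everything in it (the stage names, `AccountabilityRecord`, `AccountabilityRegister`, and the people and auditor named) is a hypothetical illustration, not an established standard or API.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List


class LifecycleStage(Enum):
    """Lifecycle stages at which a responsible human must be named."""
    DESIGN = "design"
    DEVELOPMENT = "development"
    DEPLOYMENT = "deployment"
    MONITORING = "monitoring"


@dataclass
class AccountabilityRecord:
    """Names the accountable owner and independent auditor for one stage."""
    stage: LifecycleStage
    owner: str                      # human responsible inside the company
    independent_auditor: str        # external party who can verify findings
    impact_assessment_done: bool = False


@dataclass
class AccountabilityRegister:
    """Tracks accountability points across the whole AI lifecycle."""
    records: Dict[LifecycleStage, AccountabilityRecord] = field(default_factory=dict)

    def assign(self, record: AccountabilityRecord) -> None:
        self.records[record.stage] = record

    def missing_stages(self) -> List[LifecycleStage]:
        """Return the stages that still lack a named accountable owner."""
        return [s for s in LifecycleStage if s not in self.records]


# Usage: refuse to sign off while any stage lacks an accountability point.
register = AccountabilityRegister()
register.assign(AccountabilityRecord(LifecycleStage.DESIGN, "A. Chen", "ExampleAudit LLC", True))
register.assign(AccountabilityRecord(LifecycleStage.MONITORING, "B. Okafor", "ExampleAudit LLC"))

gaps = register.missing_stages()
if gaps:
    print("Unassigned stages:", [s.value for s in gaps])
    # -> Unassigned stages: ['development', 'deployment']
```

A real register would live inside a governance process rather than a script; the sketch only shows that accountability at every stage becomes a checkable property once each stage carries a named record.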
