The responsible tech movement and the field of AI ethics often point to two guiding principles: that AI systems should prevent harm (nonmaleficence) and promote good (beneficence).
But ethical AI is not the result of a single decision by a single actor, company, or country. It requires sustained ethical decision-making throughout the AI lifecycle, in every sector of every society.
We achieve this by co-creating and co-maintaining Sustained Ethical Ecosystems (SEE) in every society, across all sectors, including arts, business, defense, education, energy, finance, health, law, social services, technology, transportation, and beyond.
This dynamic and collective process is not a matter of checking a compliance box or conducting a one-time review. Rather, our ethical commitments are only as good as our ability to consistently cultivate our collective conscience, to raise one another to higher ground.
In this spirit, we invite all stakeholders to participate in a sustained ethical decision-making process, recognizing that each of us has shifting commitments and interests. It is a process that integrates morals and values with safety protocols, legal obligations, and humanitarian principles, measured against fair and objective standards.
Sustained Ethical Ecosystems also emphasize the impacts on users and non-users, with a particular duty of care toward vulnerable populations and the environment.
Through the lived experiences of users and non-users alike, we evaluate the efficacy of our sustained ethical decision-making throughout the AI lifecycle, from design and financial investment to development, deployment, use, maintenance, and monitoring.
This dynamic collective process calls on us to take shared responsibility for ensuring that AI systems not only prevent harm but also benefit all.