Sustainability

Sustainability, in the context of AI ethics and law, is the principle that artificial intelligence should be designed, developed, and deployed to protect the environment, promote ecological balance, and contribute to societal well-being over the long term. This principle emphasizes minimizing AI’s ecological footprint, enhancing energy efficiency, and creating systems that remain effective and relevant over time. Beyond environmental concerns, sustainability addresses broader social impacts, such as fostering equity, reducing systemic inequities, and promoting peace and stability.

Achieving sustainability in AI requires intentional effort at every stage of an AI system’s lifecycle. Technically, this means adopting energy-efficient algorithms, reducing resource consumption, and using sustainable data-processing methods. Organizations are encouraged to align their AI practices with global sustainability frameworks, such as the United Nations’ Sustainable Development Goals (SDGs), so that AI technologies contribute positively to ecosystems and biodiversity. On a societal level, corporations are urged to mitigate disruptions caused by AI, such as job displacement, and to treat these challenges as opportunities to innovate and to develop solutions whose benefits are shared equitably.
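To make the idea of tracking resource consumption concrete, the sketch below shows one common back-of-the-envelope way to estimate a training run’s energy use and carbon emissions from hardware power draw, training time, data-center efficiency (PUE), and grid carbon intensity. It is a minimal illustration in Python; the function name and all default figures are assumptions chosen for the example, not values drawn from this entry or from any standard.

def estimate_training_footprint(
    gpu_count: int,
    avg_power_watts_per_gpu: float,
    training_hours: float,
    pue: float = 1.5,                  # data-center Power Usage Effectiveness (assumed)
    grid_kg_co2_per_kwh: float = 0.4,  # grid carbon intensity in kg CO2e per kWh (assumed)
) -> dict:
    """Estimate energy (kWh) and emissions (kg CO2e) for a single training run."""
    energy_kwh = gpu_count * avg_power_watts_per_gpu * training_hours * pue / 1000
    emissions_kg = energy_kwh * grid_kg_co2_per_kwh
    return {"energy_kwh": round(energy_kwh, 1), "co2e_kg": round(emissions_kg, 1)}

# Example: 8 GPUs at roughly 300 W each, trained for 72 hours (hypothetical workload).
print(estimate_training_footprint(gpu_count=8, avg_power_watts_per_gpu=300, training_hours=72))

In practice, measured power draw and the local grid’s actual carbon intensity would replace these placeholder figures, and tools that log consumption directly give more reliable numbers than a static estimate.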

Governance is critical for embedding sustainability into AI practices. Transparent reporting on AI’s energy consumption, resource usage, and societal impacts can build public trust and drive responsible innovation. Accountability frameworks should hold developers, organizations, and policymakers responsible for minimizing environmental harm and promoting social equity. By prioritizing sustainability, stakeholders can ensure that AI technologies address present needs while safeguarding ecosystems, societal well-being, and the interests of future generations.
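As one illustration of what transparent reporting could look like, the sketch below defines a minimal disclosure record covering the kinds of information this paragraph mentions: energy consumption, resource usage, and societal impacts. The field names and structure are hypothetical, offered only as an example; they do not represent an established reporting schema.

from dataclasses import dataclass, asdict
import json

@dataclass
class SustainabilityReport:
    """Minimal, illustrative disclosure record for an AI system (not a standard schema)."""
    system_name: str
    reporting_period: str
    energy_kwh: float           # measured or estimated energy consumption
    co2e_kg: float              # estimated emissions in kg CO2-equivalent
    water_liters: float         # cooling-related water use, if tracked
    societal_impact_notes: str  # qualitative notes, e.g., labor or equity effects

# Hypothetical example values for illustration only.
report = SustainabilityReport(
    system_name="example-model",
    reporting_period="2025-Q1",
    energy_kwh=259.2,
    co2e_kg=103.7,
    water_liters=1200.0,
    societal_impact_notes="Pilot deployment; no workforce changes observed.",
)
print(json.dumps(asdict(report), indent=2))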

Sustainability highlights the potential of AI to foster a more harmonious and resilient future. By embedding this principle into AI’s ethical foundation, developers and policymakers can create technologies that align with global values, support collective well-being, and contribute to a thriving, equitable planet.

Recommended Reading

Anna Jobin, Marcello Ienca, and Effy Vayena. "The Global Landscape of AI Ethics Guidelines." Nature Machine Intelligence 1 (2019): 389–399.

 


Disclaimer: Our global network of contributors to the AI & Human Rights Index is currently writing these articles and glossary entries. This particular page is currently in the recruitment and research stage. Please return later to see where this page is in the editorial workflow. Thank you! We look forward to learning with and from you.
