AI & Society
Rutgers University | Digital Studies | Department of Philosophy & Religion
Spring 2025 | Mondays, 8:00 AM to 10:50 AM | Dr. Nathan C. Walker
The dawn of artificial intelligence (AI) has sparked pressing ethical and legal questions, to which societies worldwide have responded in distinct ways. In this global communities & ethics course, students hone their critical thinking skills by examining societies’ relationships to the development and deployment of various AI technologies. Case studies reveal the social impact of the multinational, national, and corporate regulatory systems that seek to govern the artificial intelligence movement. Students will explore AI’s impact across the professions and analyze the ethical and legal implications of AI systems that seek to emulate human learning, reasoning, self-correction, and perception. This exploration aims to illuminate these rapidly changing innovations and foster students’ nuanced understanding of these technologies, with an eye on AI’s influence on humanity and the natural world.
These rapidly evolving technologies run the gamut from foundational algorithms—sequential instructions used in all AI systems—to sophisticated machine learning models that use neural networks to process vast amounts of data. Large language models and other deep learning systems build on these innovations to generate language and to power image, multimedia, and speech recognition applications. These advancements range from narrow AI used for highly specific tasks, such as chatbots, language translation, medical imaging, navigation apps, and self-driving cars, to the theoretical realm of general AI, which seeks to someday simulate broad human intelligence. While general AI, also referred to as Artificial General Intelligence, remains aspirational, understanding this distinction is necessary to evaluate the current ethical and legal landscape.
Specifically, students will ask how stakeholders legally define “artificial intelligence” and which core ethics (highlighted in the questions below) are used to evaluate these definitions. For instance:
- What are the moral and legal relationships between the ethic of human dignity and AI systems, and how does this relate to human rights law?
- How can stakeholders in the AI movement apply principles of explainability and interpretability to ensure transparency and foster public trust—aware that fully explaining how some AI technologies work may not be possible?
- What regulations are necessary to ensure that AI systems protect people’s privacy, curb government surveillance, and guard against abusing civil liberties and human rights while maximizing freedom and autonomy?
- How can technology prevent harm (nonmaleficence) and do good (beneficence), ensuring that AI systems are just, fair, and equitable?
- What strategies can AI developers use to maximize accuracy in defending the human right to civic engagement, combating misinformation and disinformation, and protecting against propaganda, authoritarianism, and extremism?
- How can AI prevent, monitor, and mitigate bias and invidious discrimination and promote inclusion and equality?
- How can AI be used responsibly and with integrity?
- What accountability systems are needed to ensure AI will benefit humanity and the environment and provide a more equitable and sustainable future? What testing, operational, and monitoring tools are necessary to protect people, societies, and the environment?
This in-depth review of legal and ethical frameworks helps Rutgers students meet the Ethics & Values (EVA) requirement. The course requires no previous experience in computer science, philosophy, or law.
AI & Society is made possible by the Chancellor’s Grant for Pedagogical Innovation.