AI Ethics & Law: Honors Seminar
Rutgers Honors Seminar with Dr. Nathan C. Walker
EAV 50:525:155 or GCM 50:525:153
Mondays, September 9 to December 9, 2024, from 12:30 to 3:20 PM
The dawn of artificial intelligence (AI) has raised new ethical and legal questions. These rapidly evolving technologies run the gamut from foundational algorithms—sequential instructions used in all AI systems—to sophisticated machine learning models that leverage neural networks to digest vast amounts of data. Large language models built on deep learning extend these innovations to generate text, create images and multimedia, and power speech recognition programs. These advancements range from narrow AI designed for highly specified tasks, such as chatbots, language translation, medical imaging, navigation apps, and self-driving cars, to the theoretical realm of general AI, which seeks to someday simulate broad human intelligence. While Artificial General Intelligence remains aspirational, understanding this distinction is necessary to evaluate the current ethical and legal landscape.
In this interactive research seminar, honors students will hone their critical thinking skills as they examine the multinational, national, local, and corporate regulatory systems that seek to govern the development and deployment of AI technologies. Case studies will surface moral dilemmas that people experience across the professions so that students can analyze the ethical and legal implications of AI systems designed to emulate human learning, reasoning, self-correction, and perception. This exploration aims to illuminate these rapidly changing innovations and foster students’ nuanced understanding of them, with an eye on AI’s influence on humanity and the natural world.
Specifically, students will ask how stakeholders legally define “artificial intelligence” and what core ethics are used to evaluate these definitions. For instance,
- What are the moral and legal relationships between the ethic of human dignity and AI systems?
- How can stakeholders in the AI movement apply principles of explainability and interpretability to ensure transparency and foster public trust—aware that fully explaining how some AI technologies work may not be possible?
- What regulations are necessary to ensure that AI systems protect people’s privacy, curb government surveillance, and guard against abusing civil liberties and human rights while maximizing freedom and autonomy?
- How can technology prevent harm (nonmaleficence) and do good (beneficence), ensuring that AI systems are just, fair, and equitable?
- What strategies can AI developers use to maximize accuracy to defend democracies, combat mis- and disinformation, and protect against propaganda, authoritarianism, and extremism?
- How can AI prevent, monitor, and mitigate bias and invidious discrimination and promote inclusion and equality?
- How can AI be used responsibly and with integrity?
- What accountability systems are needed to ensure AI will benefit humanity and the environment and provide a more equitable and sustainable future? What testing, operational, and monitoring tools are necessary to protect people, societies, and the environment?
Given its multinational focus, this course fulfills the Global Communities (GCM) requirement by examining technology law through a global lens. Given its in-depth review of legal and ethical frameworks, the course also meets the Ethics & Values (EAV) requirement. This course requires no previous experience in computer science, philosophy, or law.
AI Ethics & Law is made possible by the Chancellor’s Grant for Pedagogical Innovation.