
Humans First Fund
  • About
    • Students
    • People
    • Our Values
    • Programs
  • Human Rights Index
    • Purpose
    • Human Rights
    • Principles
    • Instruments
    • Sectors
    • Glossary
    • CHARTER
    • Editors’ Desk
  • Project Insight
  • Publications
    • AI & Human Rights Index
    • Moral Imagination
    • Human Rights in Global AI Ecosystems
  • Courses
    • AI & Society
    • AI Ethics & Law
    • AI & Vulnerable Humans
  • News
  • Opportunities

Catastrophic Forgetting

Catastrophic forgetting is a problem in artificial intelligence in which a model abruptly loses previously learned knowledge as it learns new information. It occurs because training on new data updates the model’s internal parameters, and those updates can overwrite the representations that encoded earlier knowledge. Unlike human learning, where new understanding usually builds on prior knowledge, many AI models struggle to preserve what they already know while adapting to new tasks.
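The dynamic described above can be demonstrated with a minimal, self-contained sketch in plain Python. The one-parameter linear model and all numbers below are illustrative assumptions, not any particular production system: the model first learns task A, then ordinary gradient descent on task B overwrites the parameter, and performance on task A collapses.

```python
# Toy demonstration of catastrophic forgetting (illustrative only):
# a single-parameter model y = w * x, trained by gradient descent
# first on task A (true rule y = 2x), then on task B (true rule y = -2x).

def mse(w, data):
    """Mean squared error of the model y = w*x over (x, y) pairs."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def train(w, data, lr=0.1, steps=200):
    """Full-batch gradient descent; nothing protects earlier knowledge."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

task_a = [(x, 2.0 * x) for x in (-2.0, -1.0, 1.0, 2.0)]
task_b = [(x, -2.0 * x) for x in (-2.0, -1.0, 1.0, 2.0)]

w = train(0.0, task_a)          # learn task A: w converges to ~2.0
err_a_before = mse(w, task_a)   # near zero: task A is mastered
w = train(w, task_b)            # learn task B: w is pulled to ~-2.0
err_a_after = mse(w, task_a)    # large: task A has been "forgotten"

print(err_a_before, err_a_after)
```

Because the same parameter must serve both tasks and nothing anchors it to the old solution, learning task B erases task A entirely; real neural networks exhibit the same failure across millions of shared parameters.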

Catastrophic forgetting has serious implications for AI ethics and law because systems that forget essential information cannot be considered reliable or trustworthy. When an AI system loses earlier insights related to fairness, privacy, or safety, its decisions may drift in ways that harm individuals or communities.

Forgetting can also distort ethical performance: when a model loses what it learned from data about underrepresented groups, it risks amplifying unfairness. It undermines accountability as well, since it becomes harder to trace why a system’s behavior has changed over time.

Preventing catastrophic forgetting is a matter of human rights because systems that handle sensitive information or influence people’s opportunities must maintain stable, rights-protecting knowledge throughout the entire period they are in use.
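One family of mitigations penalizes parameter changes that would destroy earlier knowledge, an idea developed in the consolidation literature cited below. The sketch here extends the same hypothetical one-parameter toy model: during training on task B, a quadratic penalty anchors the parameter near its task-A value. The penalty strength (`lam = 5.0`) and all other numbers are illustrative assumptions.

```python
# Hedged sketch of a consolidation-style mitigation on a toy model:
# a quadratic anchor penalty resists drifting away from the task-A solution
# while task B is learned. Illustrative only; real methods (e.g., elastic
# weight consolidation) weight each parameter by its estimated importance.

def mse(w, data):
    """Mean squared error of the model y = w*x over (x, y) pairs."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def train(w, data, w_anchor=None, lam=0.0, lr=0.1, steps=200):
    """Gradient descent with an optional anchor penalty lam*(w - w_anchor)^2."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        if w_anchor is not None:
            grad += 2 * lam * (w - w_anchor)  # pull back toward old solution
        w -= lr * grad
    return w

task_a = [(x, 2.0 * x) for x in (-2.0, -1.0, 1.0, 2.0)]
task_b = [(x, -2.0 * x) for x in (-2.0, -1.0, 1.0, 2.0)]

w_a = train(0.0, task_a)                                 # task-A solution, ~2.0
w_plain = train(w_a, task_b)                             # naive: forgets task A
w_anchored = train(w_a, task_b, w_anchor=w_a, lam=5.0)   # consolidated update

forgot = mse(w_plain, task_a)       # large error on the old task
retained = mse(w_anchored, task_a)  # much smaller error on the old task
print(forgot, retained)
```

The anchored model retains far more of task A, at the cost of a worse fit on task B; choosing that trade-off deliberately, rather than letting old knowledge vanish silently, is what makes such techniques relevant to the reliability concerns above.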


For Further Study

Goodfellow, I. et al. (2013). "An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks." Proceedings of the International Conference on Machine Learning.

Kirkpatrick, J. et al. (2017). "Overcoming Catastrophic Forgetting in Neural Networks." Proceedings of the National Academy of Sciences.


Disclaimer: Our global network of contributors to the AI & Human Rights Index is currently writing these articles and glossary entries. This particular page is currently in the recruitment and research stage. Please return later to see where this page is in the editorial workflow. Thank you! We look forward to learning with and from you.


Dr. Nathan C. Walker
Principal Investigator, AI Ethics Lab

Rutgers University-Camden
College of Arts & Sciences
Department of Philosophy & Religion

AI Ethics Lab at the Digital Studies Center
Cooper Library in Johnson Park
101 Cooper St, Camden, NJ 08102

Copyright ©2025, Rutgers, The State University of New Jersey
