
Humans First Fund
  • About
    • Students
    • People
    • Our Values
    • Programs
  • Human Rights Index
    • Purpose
    • Human Rights
    • Principles
    • Instruments
    • Sectors
    • Glossary
    • CHARTER
    • Editors’ Desk
  • Project Insight
  • Publications
    • AI & Human Rights Index
    • Moral Imagination
    • Human Rights in Global AI Ecosystems
  • Courses
    • AI & Society
    • AI Ethics & Law
    • AI & Vulnerable Humans
  • News
  • Opportunities

Audit Points

Audit Points in the AI Lifecycle

Draft: Audit points are critical checkpoints in the AI lifecycle where assessments can be made to ensure compliance, fairness, accountability, transparency, and alignment with ethical standards. These points allow for external or internal review and help ensure that AI systems remain trustworthy, reliable, and lawful. Each audit point addresses potential risks at a different stage of the AI lifecycle, providing opportunities to mitigate harm and strengthen oversight.

Key Audit Points

  • Data Collection
    • Objective: Ensure that data is collected ethically, legally, and without bias. Review consent forms and verify that data subjects are aware of how their data will be used.
    • Key Concerns: Privacy violations, consent issues, biased data sources, potential discrimination, and inclusivity.
    • Audit Activities: Review data collection policies, check for GDPR compliance, and verify data source authenticity.
  • Data Preprocessing
    • Objective: Check how raw data is cleaned, filtered, and transformed to ensure that preprocessing does not introduce bias or remove critical information.
    • Key Concerns: Data imputation errors, removal of relevant outliers, introduction of new biases, and data anonymization issues.
    • Audit Activities: Analyze the methods used to clean and preprocess the data, and ensure that appropriate balancing techniques are applied to avoid biased outcomes.
  • Model Selection
    • Objective: Ensure that the selected model is appropriate for the task and does not inherently favor any particular outcome.
    • Key Concerns: Choosing models that unintentionally embed bias or lack explainability.
    • Audit Activities: Review model selection criteria, ensure alignment with fairness principles, and assess any pre-trained models for their ethical implications.
  • Training Phase
    • Objective: Validate that the model is trained on representative, balanced, and ethically sourced data.
    • Key Concerns: Model overfitting, underfitting, and bias introduced during training.
    • Audit Activities: Monitor training progress, evaluate hyperparameters, and ensure that proper methods are used to mitigate bias.
  • Validation and Tuning
    • Objective: Ensure that hyperparameter tuning does not skew results toward biased or unethical outcomes.
    • Key Concerns: Over-optimization for performance at the expense of fairness or interpretability.
    • Audit Activities: Analyze validation methods and ensure that performance metrics weigh ethical and legal compliance alongside technical metrics.
  • Testing
    • Objective: Test the model against diverse, realistic datasets to ensure generalization and fairness under different conditions.
    • Key Concerns: Failure to generalize, discrimination against specific groups, and potential for harmful outputs.
    • Audit Activities: Conduct stress testing and adversarial testing, and evaluate the system's performance across demographic groups to ensure fairness.
  • Deployment and Monitoring
    • Objective: Ensure that deployed models are continuously monitored for ethical violations, performance degradation, or emerging bias.
    • Key Concerns: Model drift, real-world ethical violations, and unseen biases in live environments.
    • Audit Activities: Set up real-time monitoring systems, define ethical triggers for system review, and ensure transparency in how decisions are made.
  • Retraining and Updates
    • Objective: Ensure that model updates do not introduce new ethical issues and that retraining is done responsibly with updated data.
    • Key Concerns: Drift in performance, emerging biases, and degradation of fairness over time.
    • Audit Activities: Review retraining processes, validate that new data remains ethically sourced, and assess post-update model behavior.

Ethical Considerations

  • Transparency: Audits should ensure that AI systems are transparent about how decisions are made.
  • Bias and Fairness: Auditors must actively assess and correct for biases in data and algorithms.
  • Privacy: Regular audits should check for compliance with privacy laws and data protection regulations, such as the GDPR.
  • Accountability: Audits help establish a chain of responsibility for decisions made by AI systems, holding developers and organizations accountable for failures.

Conclusion

By embedding audit points throughout the AI lifecycle, organizations can proactively mitigate risks, protect user rights, and ensure that AI systems operate within ethical and legal frameworks.
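The data collection and preprocessing audit activities above can be illustrated with a minimal representation check over a dataset. This is only a sketch of the idea, not a method endorsed by the entry: the `representation_report` helper, the `region` attribute, and the 10% minimum-share threshold are all hypothetical choices an auditor would set for their own context.

```python
from collections import Counter

def representation_report(records, attribute, min_share=0.10):
    """Audit helper (hypothetical): flag groups whose share of the
    dataset falls below a minimum threshold, one possible sign of
    biased or non-inclusive sampling at data collection time."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = {
            "count": n,
            "share": round(share, 3),
            "underrepresented": share < min_share,
        }
    return report

# Toy dataset: 100 records with an invented "region" attribute.
records = (
    [{"region": "north"}] * 45
    + [{"region": "south"}] * 50
    + [{"region": "east"}] * 5
)
print(representation_report(records, "region"))
```

In a real audit the flagged groups would prompt a review of the data sources and collection policies, rather than automatic rebalancing.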
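The testing audit point calls for evaluating performance across demographic groups. One common fairness measure for that activity (not named in the entry itself) is the demographic parity gap, the largest difference in favorable-outcome rates between groups. The function name, the toy decision lists, and the 0.1 review threshold below are all hypothetical.

```python
def demographic_parity_gap(outcomes):
    """outcomes: dict mapping group name -> list of binary model
    decisions (1 = favorable). Returns per-group favorable rates
    and the largest rate difference between any two groups."""
    rates = {g: sum(d) / len(d) for g, d in outcomes.items()}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

rates, gap = demographic_parity_gap({
    "group_a": [1, 1, 0, 1, 0],  # 3 of 5 favorable -> rate 0.6
    "group_b": [1, 0, 0, 0, 0],  # 1 of 5 favorable -> rate 0.2
})
print(rates, gap)  # a gap of 0.4 would exceed a hypothetical 0.1 audit threshold
```

An auditor would typically pair a statistical gap like this with qualitative review, since a small gap does not by itself establish fairness.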
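The deployment-and-monitoring audit point asks for continuous monitoring of model drift. One widely used drift statistic (again, not prescribed by the entry) is the Population Stability Index (PSI), which compares a model's score distribution at deployment with the distribution observed in production. This sketch assumes the scores have already been binned into proportions; the baseline and live numbers are invented.

```python
import math

def population_stability_index(expected, actual):
    """PSI between two binned distributions (proportions summing to 1).
    A common audit rule of thumb: PSI > 0.2 signals significant drift."""
    psi = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)  # guard against log(0) for empty bins
        a = max(a, 1e-6)
        psi += (a - e) * math.log(a / e)
    return psi

baseline = [0.25, 0.25, 0.25, 0.25]  # score distribution at deployment
live = [0.10, 0.20, 0.30, 0.40]      # distribution observed in production
psi = population_stability_index(baseline, live)
print(psi)  # above the 0.2 rule of thumb, so a drift review would be triggered
```

In practice this check would run on a schedule, with the "ethical triggers" the entry mentions wired to alerts when the statistic crosses its threshold.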


Disclaimer: Our global network of contributors to the AI & Human Rights Index is currently writing these articles and glossary entries. This particular page is currently in the recruitment and research stage. Please return later to see where this page is in the editorial workflow. Thank you! We look forward to learning with and from you.


Dr. Nathan C. Walker
Principal Investigator, AI Ethics Lab

Rutgers University-Camden
College of Arts & Sciences
Department of Philosophy & Religion

AI Ethics Lab at the Digital Studies Center
Cooper Library in Johnson Park
101 Cooper St, Camden, NJ 08102

Copyright ©2025, Rutgers, The State University of New Jersey

Rutgers is an equal access/equal opportunity institution. Individuals with disabilities are encouraged to direct suggestions, comments, or complaints concerning any accessibility issues with Rutgers websites to accessibility@rutgers.edu or complete the Report Accessibility Barrier / Provide Feedback Form.