Abstract

Project Insight seeks to address challenges in online behavior and technology use by introducing Self-Reflection Technology. The project features the Insight Assistant, an AI agent that supports users in creating custom Insight Cards—personalized ethical frameworks—that empower them to make intentional decisions and self-regulate their online behavior and technology use. Insight Cards can also be shared with other users, who can draw on them to develop their own Living Constitutions—dynamic, evolving articulations of their values that can be customized over time. The ultimate objectives are to elevate human agency, enhance self-efficacy, and promote a meaningful and responsible digital culture.

Table of Contents

I. The Problem
II. The Plan
III. Opportunities
IV. Objectives
V. Roles
VI. Summary
VII. About

I. The Problem

States and corporations are the primary governors of the digital landscape, setting basic legal and ethical standards for monitoring online behavior and technology use. These generalized, top-down regulatory practices curb measurable harms, such as the spread of violence, illegal activity, discrimination, and exploitation. Content moderation policies shape a wide range of digital experiences, including social media, smartphones, computers, online shopping, media streaming, and education, affecting people’s lives at home, work, and school. While these regulatory frameworks establish a foundation for digital interactions, they serve only as the floor upon which those interactions occur. In contrast, the aspiration for a meaningful digital culture has no ceiling, and there is no technical infrastructure to help individuals scaffold their ethical values and raise humanity to higher ground.

The core problem is the absence of widely available Self-Reflection Technology (SRT) that enables users to participate actively in shaping the moral character of their own digital consumption. Currently, no mechanism allows people to articulate their values and ethical commitments and use them to customize their digital interactions, resulting in the following challenges.

Technology users are often passive participants, subject to rules and penalties without the opportunity to build their own ethical guardrails. This lack of agency can foster frustration and disengagement, as individuals may feel constrained by regulations they did not help create. They may also become disillusioned when the regulatory floor on which digital culture rests falls short of their own standards.

Without reflective guidance built into technology, users may feel isolated and struggle with impulse control, making decisions based on immediate emotions rather than intentional engagement. These patterns can lead to regrettable actions, such as impulsive social media posts, unprofessional emails, ill-considered purchases, and unmindful consumption practices.

In response to these challenges, schools, workplaces, and families implement technology-free zones to prevent distractions and curb digital addictions. While these top-down strategies aim to mitigate negative behaviors, they assume users lack the intention or capability to make healthy decisions. This approach fails to empower individuals to develop their own self-regulation skills and can lead to resistance or dependence on external controls.

There is currently no mechanism for individuals to define and implement their own ethical standards in digital interactions. This absence hinders their ability to practice self-regulation that aligns with their values and cultural norms.

Existing content moderation strategies are uniform and based on the lowest common denominator of ethical behavior. They cannot account for diverse ethical perspectives and distinct cultural standpoints. This approach can leave users feeling parented by corporations and governments rather than empowered to take shared responsibility for the environments they co-create.

Without Self-Reflection Technology, technology companies miss an opportunity to empower users to cultivate a more empathetic, meaningful, and constructive digital culture; users have no way to articulate their values and personalize their digital experiences accordingly.

II. The Plan

Project Insight seeks to resolve these issues by introducing AI tools that empower users to articulate their values, make intentional decisions, and self-regulate their online behavior and technology use. By developing Insight Cards—personalized ethical frameworks—and engaging the Insight Assistant, an AI agent, users can live out their values across digital contexts. The ultimate objective is to elevate human agency, enhance self-efficacy, and promote a meaningful and responsible culture of online engagement.

III. Opportunities

Imagine an Insight Assistant engaging users by asking them about their beliefs, intentions, and aspirations for their relationship with technology. This process would result in a personalized Insight Inventory of values and practices. Insight Cards (ICs) would serve as distinct clauses of a user’s Living Constitution—a dynamic document that can be customized and curated over time. Users could even visit the Insight Fair to share and adopt other people’s Insight Cards, seeing the world through another’s eyes. Here are some examples of an Insight Assistant in action.
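
Before turning to those examples, the minimal sketch below suggests how Insight Cards and a Living Constitution might be represented in software; the class and field names are illustrative assumptions, not a specification of the eventual system.

```python
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class InsightCard:
    """One clause of a user's Living Constitution: a named value and the
    concrete commitments that express it."""
    title: str                      # e.g., "Mindful Consumption"
    values: list[str]               # principles the user affirms
    commitments: list[str]          # observable practices tied to those values
    author: str = "self"            # "self" or the user who shared it at the Insight Fair
    adopted_on: datetime = field(default_factory=datetime.now)


@dataclass
class LivingConstitution:
    """A dynamic, evolving collection of Insight Cards that a user curates
    and customizes over time."""
    owner: str
    cards: list[InsightCard] = field(default_factory=list)

    def adopt(self, card: InsightCard) -> None:
        """Add a card the user authored or adopted from another user."""
        self.cards.append(card)

    def customize(self, title: str, new_commitments: list[str]) -> None:
        """Revise the commitments of an existing card as values evolve."""
        for card in self.cards:
            if card.title == title:
                card.commitments = new_commitments


# Example: adopting a shared card from the Insight Fair, then tailoring it.
constitution = LivingConstitution(owner="avery")
constitution.adopt(InsightCard(
    title="Mindful Consumption",
    values=["sustainability", "intentional spending"],
    commitments=["Wait 24 hours before non-essential purchases."],
    author="jordan",
))
constitution.customize("Mindful Consumption",
                       ["Wait 48 hours before non-essential purchases."])
```

Under this assumption, sharing a card at the Insight Fair would amount to copying such an object into another user’s Living Constitution, where it can then be freely edited.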

For instance, the Insight Assistant could assist with setting up and evaluating online profiles and with analyzing users’ posts against their Insight Cards, flagging content that may not align with their stated values.

The Insight Assistant may provide constructive feedback on communication, offering options such as time-delayed publishing to allow users to reflect on and revise draft emails, texts, social media posts, or customer reviews.
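
As a rough sketch of how such a review-and-delay step might work, the example below uses a simple keyword check as a stand-in for whatever model-based analysis the Insight Assistant would actually perform; the function names, fields, and delay period are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional


@dataclass
class ReviewResult:
    flagged_cards: list[str]           # titles of cards the draft may conflict with
    publish_after: Optional[datetime]  # None means publish immediately


def review_draft(draft: str, cards: list[dict], delay_hours: int = 12) -> ReviewResult:
    """Compare a draft message against each card's (hypothetical) flag terms
    and, if anything matches, suggest time-delayed publishing rather than
    blocking the user outright."""
    text = draft.lower()
    flagged = [
        card["title"]
        for card in cards
        if any(term in text for term in card.get("flag_terms", []))
    ]
    delay = datetime.now() + timedelta(hours=delay_hours) if flagged else None
    return ReviewResult(flagged_cards=flagged, publish_after=delay)


# Example: a heated draft trips a "Respectful Dialogue" card and is held
# for a cooling-off period instead of being posted immediately.
cards = [{"title": "Respectful Dialogue", "flag_terms": ["idiot", "shut up"]}]
result = review_draft("You'd have to be an idiot to believe this.", cards)
print(result.flagged_cards)   # ['Respectful Dialogue']
print(result.publish_after)   # a timestamp roughly 12 hours from now
```

The same pattern could route a flagged draft to the Private Venting Vault or the Chill Chair described below, with the user, rather than the system, deciding whether and when to publish.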

The Insight Assistant could apply the principles outlined in a user’s Mindful Consumption IC to customize and filter online shopping recommendations or help them reflect on their news consumption patterns.

In cases where users need a space to express their thoughts and emotions without immediate publication, the Insight Assistant might activate a Private Venting Vault, allowing users to release their raw thoughts in a non-judgmental space or send the draft to the Chill Chair for a cooling-off period.

Users may also adopt Insight Cards focused on emotional intelligence or Cognitive Behavioral Therapy, enabling them to learn from their experiences, articulate and reflect on their insights, and study their own emotions and thought patterns.

Additionally, the Insight Assistant could draw upon a Restorative Justice IC to guide users in clarifying misunderstandings and crafting apologies to restore strained relationships.

A Privacy IC might alert users to security matters, such as data anonymization and encryption practices, keeping them aware of privacy concerns in their digital interactions.

Overall, Self-Reflection Technology is designed to cultivate users’ self-worth and avoid self-shaming. The Insight Assistant would be programmed with a Growth Mindset, employing positive reinforcement strategies that encourage learning and growth rather than perfectionism. The goal is to strengthen users’ self-efficacy and self-confidence.

When Self-Reflection Technology is integrated into daily routines, AI agents can function as assistants, helping users align their digital interactions with their values. Like athletic training, this process requires practice and commitment. By leveraging custom Insight Cards and experimenting with those of others, users can contribute to a larger movement to build a more empathetic, understanding, healthy, and inclusive online community, starting with the values that define their own technology use.

IV. Objectives

Project Insight is a research initiative that addresses the challenges identified above and leverages the opportunities presented by advanced AI technologies. The project operates on the assumption that AI models will continue to evolve, becoming increasingly sophisticated and allowing users to personalize their technological experiences in innovative and intuitive ways. In preparation for these advancements, Project Insight seeks to collaborate with experts across disciplines to craft and evaluate the diverse ethical choices humans may make when interacting with technology.

V. Roles

To achieve its goals, Project Insight will engage expert and generalist AI Trainers in the following consultancy roles: Indexer, Creator, Reviewer, and User. Each trainer will be compensated for producing deliverables according to their assigned roles.

Indexers are responsible for co-creating a comprehensive taxonomy of ethics, values, virtues, cultural norms, laws, spiritual practices, wellness approaches, and professional standards that users might incorporate into their Insight Cards and Living Constitution. This taxonomy will provide the framework for the AI’s classification system, enabling it to integrate and present diverse ethical perspectives effectively. The Indexers’ work will directly inform the development of the Insight Cards and the Living Constitution, ensuring these tools are inclusive and adaptable to a wide range of ethical frameworks and cultural practices. This dynamic and comprehensive index will also support the design of the Insight Assistant’s feedback mechanisms and the presentation of Insight Cards at the Insight Fair, where users can exchange and adopt cards into their Living Constitutions.
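
For illustration only, one branch of such a taxonomy might be organized along the lines sketched below; the categories and entries are placeholders rather than the Indexers’ actual deliverable.

```python
# A hypothetical fragment of the Indexers' taxonomy, expressed as nested
# categories. Real entries would be far broader and curated by experts.
TAXONOMY_FRAGMENT = {
    "ethical_traditions": {
        "virtue_ethics": ["honesty", "temperance", "compassion"],
        "deontology": ["duty of care", "respect for persons"],
        "consequentialism": ["harm reduction", "well-being"],
    },
    "wellness_approaches": {
        "digital_wellbeing": ["screen-time boundaries", "mindful notifications"],
        "cognitive_behavioral": ["thought records", "reframing"],
    },
    "professional_standards": {
        "journalism": ["source verification", "right of reply"],
        "education": ["academic integrity"],
    },
}


def find_paths(taxonomy: dict, term: str, prefix: tuple = ()) -> list:
    """Return every category path that contains a given entry, so that a
    user's stated value can be classified against the index."""
    paths = []
    for key, value in taxonomy.items():
        if isinstance(value, dict):
            paths.extend(find_paths(value, term, prefix + (key,)))
        elif term in value:
            paths.append(prefix + (key,))
    return paths


print(find_paths(TAXONOMY_FRAGMENT, "honesty"))
# [('ethical_traditions', 'virtue_ethics')]
```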

Creators will develop the Insight Inventory by crafting the questions that the Insight Assistant will use to help users articulate their values, virtues, and cultural practices. Using this inventory process, Creators will produce sample Insight Cards on specific topics and provide model responses for the Insight Assistant, demonstrating how AI can support users in various scenarios. The Creators’ work will shape the user experience by providing structured, meaningful interactions with the Insight Assistant. The sample Insight Cards and model responses will serve as prototypes for training the AI, ensuring it can deliver relevant, constructive feedback and guidance aligned with diverse user needs.
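
A Creator’s deliverable might, for example, pair inventory questions with a sample card and a model assistant response in a structured format along the lines below; the fields and wording are assumptions offered for illustration.

```python
import json

# A hypothetical Creator deliverable: inventory questions that elicit a value,
# a sample Insight Card drafted from the answers, and a model response showing
# how the Insight Assistant might apply the card in conversation.
SAMPLE_DELIVERABLE = {
    "inventory_questions": [
        "When you look back on a week of technology use, what makes you proud?",
        "What kind of online purchase have you most often regretted, and why?",
    ],
    "sample_card": {
        "title": "Mindful Consumption",
        "values": ["intentional spending", "sustainability"],
        "commitments": [
            "Wait 24 hours before completing any non-essential purchase.",
            "Check whether an item duplicates something I already own.",
        ],
    },
    "model_response": (
        "Before you check out, your Mindful Consumption card asks you to wait "
        "24 hours on non-essential purchases. Would you like me to save this "
        "cart and remind you tomorrow?"
    ),
}

print(json.dumps(SAMPLE_DELIVERABLE, indent=2))
```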

Reviewers will conduct peer reviews of the Creators’ deliverables, evaluating the quality and effectiveness of the Insight Inventory, Insight Cards, and model responses and providing feedback for refinement. The Reviewers’ insights are essential to ensuring the AI’s guidance is accurate, empathetic, and culturally sensitive. Their feedback will refine the AI training process, improving the quality and accuracy of the Insight Assistant’s responses. This role is crucial for validating the models and scenarios created by the Creators, ensuring they are robust and effective for a diverse user base.

Users are AI Trainers who assume the role of end-users, providing feedback based on their experiences engaging with the Insight Assistant and applying and customizing their Insight Cards and Living Constitutions. Users’ feedback will help refine the AI’s capabilities and ensure that the tools meet the diverse needs and expectations of a broad user population. These user focus groups will offer real-world data to assess the effectiveness and usability of the Insight Cards and the Insight Assistant, forming a critical feedback loop that continuously enhances the AI’s ability to support personalized, values-based digital interactions.

VI. Summary

Project Insight represents a pioneering approach to integrating AI technology with personalized ethical frameworks, empowering users to navigate the digital landscape with greater intentionality and self-awareness. By developing Insight Cards and the Insight Assistant, this project aims to introduce Self-Reflection Technology, providing users with the resources to articulate their values, make thoughtful decisions, and cultivate a more empathetic, inclusive, and meaningful online community. Through collaboration with AI Trainers serving as Indexers, Creators, Reviewers, and Users, Project Insight will create a robust infrastructure that supports diverse ethical perspectives and promotes continuous learning and growth. Ultimately, this initiative strives to elevate human agency, enhance digital self-efficacy, and foster a responsible, values-driven engagement with our interconnected and interdependent world.

VII. About

Dr. Nathan C. Walker is the president of 1791 Delegates, a nonprofit organization named after the year the U.S. Bill of Rights was ratified. He is an award-winning instructor of First Amendment and human rights law at Rutgers University, where he teaches AI Ethics & Law as an Honors College faculty fellow. He is also an Expert AI Trainer for OpenAI’s Human Data Team, providing expertise in First Amendment and human rights law to ensure the safety and accuracy of frontier models.

Currently, Dr. Walker is a contributing researcher to the Munich Declaration of AI & Human Rights and a research fellow at Stellenbosch University in South Africa, affiliated with the Centre for Applied Ethics and the School for Data Science and Computational Thinking. He has previously served as a visiting academic at the Institute for Ethics in AI at the University of Oxford and as a resident research fellow in law and religion at Harvard University.

He is the author of five books on law, education, and religion, and has presented his research at the United Nations Human Rights Council, the Italian Ministry of Foreign Affairs, and the U.S. Senate. In November 2016, Publishers Weekly listed his book Cultivating Empathy as one of “six books for a post-election spiritual detox.”

Nate has three learning disabilities and earned his doctorate in First Amendment law from Columbia University, where he also completed two master’s degrees in higher education administration with a focus on finance and education technology. An ordained Unitarian Universalist minister, Reverend Nate holds a Master of Divinity degree from Union Theological Seminary.

Born in Munich, Germany, and raised in the Lake Tahoe area of Northern Nevada, U.S., Nate enjoys learning American Sign Language. He lives in Philadelphia, Pennsylvania, with his husband, Vikram Paralkar.

sites.rutgers.edu/walker

linkedin.com/in/drnatewalker

Ayoub Saidi is a graduate of Rutgers University – Camden, where he studied Religion and Philosophy. As a first-generation Muslim Moroccan American, he has spent the majority of his career studying the Abrahamic faiths and has worked to develop and bolster modalities of interfaith discourse to be implemented across professional, academic, and social environments. He has worked on projects related to the history of interfaith cooperation and spiritual resilience, the role of law in religious life, and the refinement of religious literacy education. He hopes to bring his experience to Project Insight and serve as a researcher in an effort to nurture emerging AI technologies and contribute to a digital culture of positivity and growth.

Adhithi Uppalapati is a Research Assistant at 1791 Delegates, a nonprofit organization that advances the public’s legal literacy through First Amendment and human rights education. In her role, Adhithi provides research support for projects related to artificial intelligence, ethics, and law. 

Additionally, Adhithi is an accomplished student at South Brunswick High School, where her academic focus is on computer science. There, she actively participates in state-wide science and legal competitions. 

Upon graduation, Adhithi aims to continue exploring the development of artificial intelligence algorithms and to create new ways to apply these models to everyday life. She also seeks to study the legal implications of artificial intelligence usage.

© 2024 Nathan C. Walker