AI, Ethics, and Safety in Suicide Prevention

On April 15, Dr. Nathan C. Walker and research assistants Amisha Rastogi, Jason Entrekin, and Akshay Wadhavkar will lead a one-hour workshop on “AI, Ethics, and Safety in Suicide Prevention” as part of the Riverside Trauma Center’s SOS Signs of Suicide learning series.

They will explore how AI-related harm can stem from the moral distance, and the lack of moral imagination, between those who build and regulate AI systems and the people whose lives those products affect. The session will examine the risks, opportunities, and ethical considerations that arise when AI systems are used across public and clinical settings.

Dr. Walker will conclude by introducing the AI Ethics Lab’s governance framework, “Dignity by Design: AI and the Duty of Care.”

The program will also feature Annika Marie Schoene, Ph.D., Assistant Professor in the Department of Public Health and Health Sciences and Technical Lead for the Responsible AI Practice at Northeastern University.

Together, they will take a deep dive into how AI is used to detect and understand suicide risk; the ethical issues that arise, particularly with large language models; and practical policy recommendations focused on privacy, transparency, and human oversight. The session highlights best practices from suicide prevention to help protect children, identity groups, and at-risk communities.