Autonomous Research and Responsible Knowledge Production

Fall 2027 | Dr. Nathan C. Walker | Department of Computer Science

Students enrolled in Algorithmic Science examine the growing role of artificial intelligence as a scientific collaborator in contemporary research workflows. The course focuses on the ethical and legal implications of the evolving field of algorithmic science, in which autonomous artificial intelligence agents perform tasks previously reserved for human researchers, and applies insights from AI ethics and law to evaluate the roles of AI agents and human researchers across the full scientific lifecycle. Students will study the strengths and limitations of autonomous research agents and AI-enabled workflows in performing literature reviews across millions of publications; evaluating public and private datasets; identifying knowledge gaps; drafting research questions; designing and evaluating experiments; accelerating iteration cycles from hypothesis to validation; preparing IRB protocols and budgets; allocating lab resources; running experiments in self-driving labs; conducting AI-based peer review of their own outputs; and generating print-ready manuscripts for human scientists to evaluate. The course avoids anthropomorphic language such as “AI scientists,” given the risk that such terms encourage perceiving AI agents as alive or conscious. Instead, the course uses the terms autonomous research agents and AI-enabled scientific workflows.

Unit I. Introduction to Algorithmic Science. In the first three weeks, students survey the current landscape of artificial intelligence’s impact on scientific discovery. They review literature on both the promises and perils of using AI in knowledge production.

Unit II. Ethical & Legal Frameworks. Students reflect on the ethical and legal implications of using AI in the sciences, with special attention to principles including accountability, beneficence, cognitive liberty, explainability, nonmaleficence, predictability, privacy, reproducibility, safety, security, transparency, trust, and verifiability. Students examine the role of AI systems in data collection and evaluation, focusing on sycophantic behavior, fabrication, overconfidence, omitted constraints, and unverifiable research assertions. Students critically assess how AI systems can fail or be misused, including by fabricating or omitting sources and overstating findings. Attention is given to the scientific and ethical significance of the human-in-the-loop at each stage of the AI science workflow, the coordination of hybrid AI stacks that combine general-purpose models with domain-specific tools, and the qualifications humans need to evaluate the efficacy of AI-generated methods and outputs.

Unit III. Case Studies. Students explore case studies examining the impact of algorithmic science across specific disciplines. They analyze the intellectual objectives for using AI, the potential efficiencies gained through AI-enabled research, and trends in budgeting for AI agents in research protocols, including public, private, and public-private funding sources.

Unit IV. The Future of Academic Research. In weeks 11 and 12, students consider the implications of algorithmic science for research universities, government funding, and scientific training. They examine the societal role of research universities in the age of co-intelligence and their reliance on government funding not only to produce scientific knowledge but also to train the next generation of scientists. Students ask how undergraduate and graduate researchers will gain experience conducting their own experiments, and develop the expertise required to serve as the human-in-the-loop overseeing AI tools, if funding increasingly flows through for-profit public-private partnerships. They also consider the ethical and scientific risks of students and early-career researchers spending their time supervising automated systems rather than developing their own intellectual authority as experts. The unit concludes by examining the cultural consequences for scientific fields that become dependent on automated pipelines charting discovery pathways faster than humans can reflect on their wisdom.

Unit V. Final Project. In the final two weeks, students prepare a manifesto that articulates their ethical framework for the responsible use of AI in the sciences. This statement may be used in future job or graduate school applications. By clearly articulating their positions on responsible AI use in scientific research, students will be better prepared for graduate study, professional work across industries, and democratic engagement with questions surrounding public-private partnerships and the future of scientific discovery.

Course Assessments: The course will be sensitive to a variety of learning styles and will combine the effective use of multimedia, lectures, weekly readings, class discussions, short analytical essays, and case studies. Students will be graded on their participation and contributions in class (40 percent), the analysis and oral presentation of four case studies designed to assess their understanding (40 percent), and the quality of their final project (20 percent).

General Education Categories: The course will fulfill the Ethics and Values (EVA) requirement due to its comprehensive review of ethical and legal frameworks related to scientific discovery, and the Physical and Life Sciences (PLS) requirement due to its focus on the scientific method, scientific principles, and the ways scientists in particular disciplines conduct research.