
The Student Code of Conduct (p. 62) and Plagiarism Statement (p. 70) address AI tools (e.g., ChatGPT).
The resources listed below discuss bias in AI. Bias can be built into AI tools when algorithms learn from data and text that contain errors or distortions that reinforce inequalities in society.
Black Artists See Clear Bias in A.I. The New York Times, 5 July 2023
Towards a standard for identifying and managing bias in artificial intelligence
Chatbots can unintentionally produce plausible-sounding answers that are false. In November 2023, The New York Times reported that these 'hallucinations' occur in roughly 3% to 30% of generative AI queries. See the article links below for more information.
More specifically, when ChatGPT is asked to generate citations, it may create links to sources that are not real. For example, a real author might be attached to a made-up journal, or an actual title might be paired with the wrong facts and the wrong dates.
Hallucinations by ChatGPT and other generative models are accidental, but AI images, audio, and text can also be created deliberately to spread false information. See the links below for more information:
Chatbots and other forms of AI consume large amounts of processing power. As the technology expands, carbon emissions may rise. The articles here discuss these concerns and potential solutions.
Aligning artificial intelligence with climate change mitigation. A research paper from Nature Climate Change about measuring and minimizing greenhouse gas emissions from AI and machine learning.
Many AI tools depend on the labor of underpaid workers in the United States and around the world.
The links below include groups that are studying the ethical implementation of AI technology. Other groups on this list are working to change AI policy or pioneering more ethical uses of the technology. Some of the links included here are publications by or about these organizations.
HAI: Human-Centered Artificial Intelligence The Stanford Institute for Human-Centered Artificial Intelligence (HAI) works to advance AI research, education, policy and practice to improve the human condition.
The AI Index Report: Measuring Trends in Artificial Intelligence. Stanford University, Human-Centered Artificial Intelligence.
RAISE: Responsible AI for Social Empowerment and Education An initiative at MIT to innovate learning and education in the era of AI.
Latimer. A large language model trained with diverse histories and inclusive voices.
The Black GPT: Introducing The AI Model Trained With Diversity And Inclusivity In Mind An October 2023 article from People of Color in Tech about Latimer.
UNESCO Recommendations on the Ethics of Artificial Intelligence