Artificial Intelligence (AI)

An ethical examination of AI and of FTCC policies on AI tools

The Student Code of Conduct (p. 62) and Plagiarism Statement (p. 70) address AI tools (e.g., ChatGPT).

The resources listed below discuss bias in AI. Bias can be built into AI tools when algorithms learn from data and text that contain errors or distortions that reinforce inequalities in society.

Misinformation and Hallucinations

Chatbots can inadvertently generate plausible-sounding answers that are false. The New York Times reported in November 2023 that these 'hallucinations' can occur in 3% to 30% of generative AI queries. See the article links below for more information.

More specifically, when ChatGPT is asked to generate citations, it may create links to sources that do not exist. For example, a real author might be attached to a made-up journal, or an actual title might be paired with the wrong facts or dates.

Hallucinations by ChatGPT and other generative models are accidental. AI images, audio, and text can also be created with the deliberate intention of spreading false information. See the links below for more information:

Additional Concerns

Climate Concerns:

Chatbots and other forms of AI use large amounts of processing power. As the technology expands, carbon emissions may rise. The articles here discuss these concerns and potential solutions.

Exploitation of Workers:

Many AI tools function at the expense of underpaid workers in the United States and around the world. 

Research and Advocacy

The links below include groups that are studying the ethical implementation of AI technology. Other groups on this list are working to change AI policy or pioneering more ethical uses of the technology. Some of the links included here are publications by or about these organizations.
