Artificial intelligence (AI) refers to the capacity of computers or other machines to perform tasks that typically require human intelligence such as reasoning, problem-solving, and decision-making. AI systems use algorithms and computational techniques to process large volumes of data, extract patterns, and make predictions or decisions based on those patterns.
Generative artificial intelligence is a specific subset of AI focused on creating content such as text, images, video, music, and other outputs in response to user input (or prompts). Generative AI models are designed to learn the patterns and structure of their input training data and to generate new data with similar characteristics. Because generative AI tools can quickly and easily generate a wide variety of human-like outputs, they have the potential to radically transform the way we approach content creation across many domains and industries. However, because AI outputs may be derived from undocumented data sources, may infringe on intellectual property, and are prone to error, these tools also come with a number of important limitations.
Despite their broad potential, generative AI models have several important limitations. Understanding these limitations is critical to using these technologies ethically and effectively.
Ethical Concerns
Quality and Reliability
Data Privacy and Security
Energy Consumption and Environmental Impact
Human Dependency, De-skilling, and Displacement
What is artificial intelligence (AI)?
Artificial intelligence refers to the capacity of computers or other machines to perform tasks that typically require human intelligence such as reasoning, problem-solving, and decision-making.
Generative AI is a type of artificial intelligence that can be used to create new content such as text, images, music, and other outputs in response to user input (also known as prompts). Popular examples include ChatGPT, Bard, DALL-E 2, and Midjourney.
What is ChatGPT?
ChatGPT is an AI chatbot trained on vast amounts of data to understand human language and generate conversational responses to a wide range of queries.
ChatGPT is trained on a wide range of sources including books, articles, websites, and other publicly available data. It uses natural language processing to parse and analyze texts and machine learning algorithms to generate query responses by predicting the word or sentence most likely to follow another in a sequence.
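To give a sense of what "predicting the word most likely to follow another" means, here is a deliberately simplified toy sketch in Python. It is not how ChatGPT actually works internally (real models use neural networks trained on billions of words); it only illustrates the next-word-prediction idea with a bigram count over a short example sentence of our own invention.

```python
from collections import Counter, defaultdict

# Toy illustration only: count how often each word follows each other
# word in a tiny sample text, then "predict" the most frequent follower.
training_text = "the cat sat on the mat and the cat slept"

following = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1  # record that `nxt` followed `current`

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = following.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(predict_next("the"))  # "cat" (follows "the" twice, vs. "mat" once)
```

Large language models do something conceptually similar at vastly greater scale, using learned statistical representations rather than simple counts, which is why their outputs are fluent but not guaranteed to be factually accurate.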
ChatGPT can be used in a wide variety of text-based applications, such as:
How can I use AI in my teaching, research, or coursework?
Common applications of AI in higher education include:
Note: Use of AI is fraught with complications involving accuracy, bias, academic integrity, and intellectual property, and may not be appropriate in all academic settings. Students and faculty are strongly advised to consult with their instructor, department chair, or publisher before using AI-generated content in their teaching, research, or coursework.
Is student use of AI considered a violation of academic integrity?
Rutgers' Academic Integrity Policy (10.2.13) states that students must ensure “that all work submitted in a course, academic research, or other activity is the student’s own and created without the aid of impermissible technologies, materials, or collaborations.” Whether use of a technology is permissible depends on the specific policies and learning objectives of the course. Students are strongly advised to consult with their instructor before using generative AI in their research or coursework.
What are some ways instructors can address student misuse of AI?
Although tools for detecting AI-generated content exist, they are not fully reliable and are easy to evade. Instead of relying on detection tools such as Turnitin or GPTZero, it is recommended that instructors:
What are the limitations of AI?
Notable limitations of AI include:
Limitation | Description
Accuracy | AI outputs may include false, inaccurate, or misleading information
Bias | AI outputs may reflect, amplify, and perpetuate social biases found in training data
Verifiability | AI outputs do not include source citations that can be used to verify or validate claims
Intellectual property | AI platforms may create derivative content from existing works without permission, attribution, or compensation to their creators
Academic integrity | Most AI platforms do not have safeguards in place to prevent users from passing off AI-generated content as their own
Privacy | AI platforms may collect and retain personal data that could be used for purposes other than what was originally intended or disclosed to the user
Equity | Disparities in access could arise as AI providers move to fee- or subscription-based access models
What is AI literacy?
AI literacy refers to the ability to understand, evaluate, and responsibly interact with artificial intelligence technologies. It involves not just using AI tools but also comprehending their underlying principles, ethical implications, and limitations. For examples of how to integrate AI literacy skills into your teaching, see AI Literacy.
Who owns the content generated by AI?
Generative AI raises questions concerning the copyrightability of AI-generated works (outputs), ownership of outputs, and use of copyrighted works ingested into AI systems as training data (inputs). Policies and legislative solutions are being developed across the globe to address regulation of AI. Where they already exist, rules vary from country to country.
In the U.S., only works “created by a human being” are eligible for copyright protection. Current guidance suggests that simply writing the prompt that generates the output is not by itself an act of authorship. However, in works consisting of both human-authored and AI-generated material, copyright may be claimed in the eligible human-authored contributions. Whether a work created with the assistance of AI is copyrightable depends on the degree and nature of human involvement, and this is assessed on a case-by-case basis. A small number of countries currently provide protection to computer-generated works where there is no human author.
As for copyright ownership of AI-generated content, in the U.S., if the output has no human authorship, there is no copyright to be owned. Users should exercise caution in interpreting the “terms of use” of generative AI services that may be confusing. For example, OpenAI (the creator of ChatGPT and DALL-E) purports to assign users copyright ownership over content created with their tools and the right to use it for any purpose, including sale or publication. However, this language is meaningless when the AI service holds no copyright in the generated content.
Finally, OpenAI's terms of use also state that “to the extent permitted by applicable law, you own all Input,” narrowly defining “Input” as user prompts. The user does not own third-party inputs ingested to train AI systems. The use of copyrighted works as inputs for AI technology is highly contentious. Numerous lawsuits have been filed to date against AI companies for using copyrighted works as training data without permission, attribution, or compensation. These lawsuits are winding their way through the courts and will determine whether outputs incorporating copyrighted content are lawful or whether they are infringing derivative works. Use of AI-generated content is thus potentially subject to claims of copyright infringement. For more information, see Copyright and Artificial Intelligence.
How do I cite AI-generated output?
When writing an academic paper, it is important to cite your sources any time you incorporate words, ideas, data, or information that is not your own. This includes any content created by or with the assistance of artificial intelligence. Below are some guidelines on how to cite AI-generated output in various citation formats. For more information, consult the publication manual for your academic discipline.