Is AI the New Homework Machine? Understanding AI and Its Impact on Higher Education
Published by: WCET | 1/5/2023
Tags: Academic Integrity, Artificial Intelligence, Higher Education Trends
By now you’ve likely seen the hubbub over ChatGPT, OpenAI’s new chatbot built on its GPT-3.5 large language model, along with some of the more provocative announcements about the impact of artificial intelligence.
The focus of much of this discussion about AI has been on academic integrity, specifically academic dishonesty. But bigger issues—digital literacy, pedagogical practices, equity—are also at play.
In 2023, WCET will look at Artificial Intelligence (AI) and provide support and resources to help you break through the rhetoric and understand both the promises and perils of AI in higher education.
To begin, this introductory blog post will focus on an overview of large language model AIs and their potential impact on higher education.
In coming months, we will take a number of deeper dives into AI and higher education, including a Frontiers Podcast episode; a February brief that explores selected AI tools; several blog posts on AI’s impact on pedagogical practices, what a new digital literacy in the age of AI might look like, and the equity implications of large language model AI; and a summer white paper that examines the pedagogical and policy implications of AI in greater depth.
Before we delve into the impact of AI on higher education, it’s worth taking a few minutes to consider artificial intelligence and its most recent manifestations.
Artificial Intelligence (AI): Stanford University’s Human-Centered Artificial Intelligence group notes that the term, coined by emeritus Stanford Professor John McCarthy in 1955, was defined by him as “the science and engineering of making intelligent machines.” They add: “Much research has humans program machines to behave in a clever way, like playing chess, but, today, we emphasize machines that can learn, at least somewhat like human beings do.”
Machine learning: Machine learning was defined by AI pioneer Arthur Samuel in the 1950s as “the field of study that gives computers the ability to learn without explicitly being programmed.” Today this often happens via neural networks.
Neural networks and deep learning: Neural networks are, simply put, computing systems structured to mimic the human brain, with each node connected to many others, much as the brain is a complex collection of networked neurons. Neural networks enable deep learning, which allows a computer to learn from large amounts of data in a way that “creates an adaptive system that computers use to learn from their mistakes and improve continuously.”
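To make “learning from mistakes” concrete, here is a toy illustration in Python. It trains a single artificial neuron, the basic unit of a neural network, to reproduce the logical OR function by nudging its weights a little after each error. Everything in this sketch is invented for illustration and bears no resemblance to the scale or sophistication of a real deep learning system:

```python
def train_neuron(examples, epochs=20, lr=0.5):
    """Train one artificial neuron on (inputs, target) pairs."""
    w = [0.0, 0.0]  # connection weights, adjusted as the neuron learns
    b = 0.0         # bias (threshold adjustment)
    for _ in range(epochs):
        for (x1, x2), target in examples:
            # The neuron "fires" (outputs 1) if its weighted sum clears zero.
            output = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - output
            # Learn from the mistake: nudge weights toward the right answer.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# Teach the neuron logical OR: output 1 unless both inputs are 0.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_neuron(examples)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in examples])  # [0, 1, 1, 1]
```

Deep learning stacks many layers of such units and adjusts millions or billions of weights the same basic way: measure the error, then nudge the weights to reduce it.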
Large language model (LLM): A language model trained on an enormous amount of text so that it can predict the next words in a given sequence. This capability allows the model to “understand” inquiries and replicate human language in a largely coherent way.
Much of the recent discussion centers on large language model AI. This type of AI relies on a neural network to find patterns in a massive amount of data. The larger the neural network (i.e., the number of nodes), the larger the data input can be and, in theory, the more capably the AI can interact with humans. Many large language model AIs, such as ChatGPT from OpenAI (a research and development company), rely on large neural networks that allow the AI to be refined through its interactions with humans and improve its responses.
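The core trick of a language model, predicting what word is likely to come next, can be illustrated with a deliberately tiny Python sketch. This “bigram” model simply counts which word follows which in a training text; it is purely illustrative and is nothing like GPT-3.5, which learns far richer patterns from billions of documents:

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count, for each word, which words tend to follow it."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word):
    """Return the word most frequently seen after `word`, if any."""
    candidates = model.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once
```

Scale that counting idea up to a neural network with billions of parameters trained on much of the public internet, and next-word prediction starts to produce whole coherent essays.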
Last month OpenAI released ChatGPT, a chat interface for its GPT-3.5 large language model. Using the chat interface, users can pose a seemingly endless array of queries to GPT-3.5, often with stunning results. For example, when prompted to explain how ChatGPT works, the program responded:
ChatGPT is a large language model that is trained to generate human-like text. It does this by being fed vast amounts of text data, which it uses to learn the patterns and structures of language. To generate text, ChatGPT uses a type of artificial intelligence called a transformer, which allows it to process language more efficiently and effectively. It also utilizes a technique called self-attention, which enables it to consider the relationships between words in a sentence and the context in which they are used. When given a prompt, ChatGPT uses all of this information to generate a response that is coherent and resembles natural language.
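The “self-attention” the model describes can itself be sketched in a few lines of Python. In this simplified illustration (tiny hand-made vectors, no learned parameters, and none of the separate query/key/value projections a real transformer uses), each word’s vector is replaced by a blend of all the words’ vectors, weighted by how similar they are, which is how the model accounts for context:

```python
import math

def self_attention(X):
    """Scaled dot-product self-attention over a list of word vectors.

    X is a list of equal-length vectors, one per word. Each output vector
    is a weighted average of all input vectors, with weights set by
    pairwise similarity (a softmax over dot products).
    """
    d = len(X[0])
    outputs = []
    for query in X:
        # Similarity of this word to every word in the sequence.
        scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
                  for key in X]
        exps = [math.exp(s) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]  # softmax: weights sum to 1
        # Blend all word vectors according to the attention weights.
        outputs.append([sum(w * v[i] for w, v in zip(weights, X))
                        for i in range(d)])
    return outputs

words = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # three toy 2-d "word" vectors
context = self_attention(words)
print(len(context), len(context[0]))  # 3 2
```

A real transformer repeats this operation across many layers and many parallel attention “heads,” with learned projections at every step; the sketch shows only the central weighted-average idea.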
And when asked to write a poem in the style of iambic pentameter on the superiority of cats, the program responded:
In iambic pentameter, the most common type of meter in English poetry, each line contains ten syllables and has a regular pattern of unstressed and stressed syllables. The pattern is typically written as “da-DUM” and can be thought of as the beat of the poem. Here is a short poem about the superiority of cats written in iambic pentameter:
With grace and stealth they prowl the night
Unseen, until they choose to strike
Their sharpened claws and piercing eyes
A hunter's soul within their guise
Superior, sleek, and oh so wise
The feline reigns, the ruler of all
In every home, they rule the hall
And all who dare defy their might
Shall feel the wrath of feline fright
For cats are king, and dogs must bow
To their majestic, feline pow'r
Long live the cats, in all their glory
Their reign will never, ever be over.
And, finally, when asked to write a college application essay about working in a soup kitchen, reflecting on what was learned and including an interaction with a small child, ChatGPT responded in a matter of seconds with a coherent, personalized, five-paragraph essay. I believe many college admissions committees would be unable to tell that the essay was written by AI rather than a live person.
Why, as a society, should we care about the release of ChatGPT? Very simply put, the release of ChatGPT and the development of other large language model AIs is a seminal moment in human history, akin to perhaps the single most important technological development in history: Johannes Gutenberg’s creation of the movable type printing press in the mid-15th century. Just as the printing press changed our relationship with information by making it available to a wider audience, large language model AI is changing our relationship with information by blurring the lines between human and machine. It forces us to reconsider what is distinctly human about intelligence if a machine can generate human language complete with analysis.
It is clear that the development of large language model AI, and its growing availability to a general audience, could significantly change higher education. It will call into question the ways we have used writing as, in Daniel Herman’s words, “a benchmark for aptitude and intelligence.” Generative LLMs will force us to think about what we assess and how we assess it, shifting from a reliance on writing to more creative assessments that require students to demonstrate the application of knowledge rather than simply the ability to reproduce information.
Higher education is being called upon to rethink what we assess and why we assess it. We are being called upon to rethink the relationship between information, knowledge, and wisdom. When an AI can create passable prose with accurate information (something that ChatGPT and other LLM AIs still cannot do consistently), is it enough to ask our students to “prove” that they know the information? Or does our assessment shift to asking students to apply information, demonstrating knowledge of the subject at hand?
Higher education must rethink digital literacy and how we prepare our students for this new world of large language model AI. As we move closer to a world of hybrid work where more and more jobs involve the use of generative AI for everything from discovering new drug molecules to developing ad copy, we will need to help our students understand how to partner with AI. How do they craft a request? How do they evaluate the AI’s results? How can they leverage AI to more deeply understand the world around them? This is a new digital literacy, and it goes beyond using a statistical software application or crafting a Google search request.
In September of last year, before the release of ChatGPT, Jeff Schatten wrote in The Chronicle of Higher Education, “It won’t be long before GPT-3, and the inevitable copycats, infiltrate the university. The technology is just too good and too cheap not to make its way into the hands of students who would prefer not to spend an evening perfecting the essay I routinely assign on the leadership style of Elon Musk.”
That time, that technology—it’s here, and higher education must decide how to respond.
In coming months we’ll do a much deeper dive on how you can respond to large language model AI but, in the interim, we would urge you to take the steps that John Warner suggests in his recent Inside Higher Ed blog, “Freaking Out About ChatGPT—Part I.”
We also need to begin thinking about how we define academic integrity in this new age of ChatGPT and other large language model AIs. This should lead to deeper conversations with our students about academic integrity.
As Melinda Zook, a Purdue history professor, puts it, “The fact is the professoriate cannot teach the way we used to. Today’s students have to take ownership over every step of the learning experience. No more traditional five paragraph essays, no more ‘read the book and write about it.’” We must radically rethink our pedagogical practices for the 21st century.
In 1958, Jay Williams and Raymond Abrashkin published Danny Dunn and the Homework Machine, a children’s book about three junior high schoolers who decide to use a computer prototype to do their homework for them. When their teacher discovers their ruse and confronts Danny, he passionately defends their decision to program the computer with all of the information in their textbooks and use it to produce their homework exclaiming,
“It’s just another tool. Lots of kids do their homework on typewriters. In high school and college they teach kids to do some of their homework on slide rules. And scientists use all kinds of computers as tools for their work. So why pick on us? We’re just…just going along with the times.”
Junior high school hijinks ensue, including a jealous classmate’s sabotage of the computer and Danny’s heroic discovery and repair of it just as a representative from the federal government is about to leave in disgust. In the end, Danny and his friends recognize that in programming the computer to do their homework they have, in reality, been learning and doing their homework themselves, leading Danny to resolve not to use the computer for homework anymore. He does, however, close the story by wondering what a teaching machine would look like.
Reading Danny Dunn and the Homework Machine in light of ChatGPT was eerie. The story, written when Dwight Eisenhower was president, reflects current discussions about the ethics of students leveraging the latest AI innovations, especially ChatGPT and GPT-3.5.
In the coming months, we’ll explore these larger issues around AI and higher education. Meanwhile, we would love to hear your thoughts on ChatGPT and other AI tools and their impact on higher education. You can send any thoughts or questions to Van Davis at vdavis@wiche.edu.