By now you’ve likely seen the hubbub over ChatGPT, OpenAI’s new chatbot built on its GPT-3.5 large language model, and the provocative pronouncements about the impact of artificial intelligence that have accompanied it.

The focus of much of this discussion about AI has been on academic integrity, specifically academic dishonesty. But bigger issues—digital literacy, pedagogical practices, equity—are also at play.

In this article:

  • Introduction to WCET’s AI Work
  • Definitions
  • OpenAI’s ChatGPT
  • Some Poetry About a Cat
  • Impact on Higher Education
  • What You Can Do Now
  • Concluding Thoughts on Danny Dunn and the Homework Machine

In 2023, WCET will look at Artificial Intelligence (AI) and provide support and resources to help you break through the rhetoric and understand both the promises and perils of AI in higher education.

To begin, this introductory blog post will focus on an overview of large language model AIs and their potential impact on higher education.

In coming months, we will take a number of deeper dives into AI and higher education, including a Frontiers Podcast episode; a February brief that explores selected AI tools; blog posts on AI’s impact on pedagogical practices, what a new digital literacy in the age of AI might look like, and the equity implications of large language model AI; and a summer white paper on the pedagogical and policy implications of AI.

A Quick Primer and Glossary on Large Language Model Artificial Intelligence

Before we delve into the impact of AI on higher education, it’s worth taking a few minutes to consider artificial intelligence and its most recent manifestations.

Definitions

Artificial Intelligence (AI): Stanford University’s Human-Centered Artificial Intelligence group notes that artificial intelligence, “a term coined by emeritus Stanford Professor John McCarthy in 1955, was defined by him as ‘the science and engineering of making intelligent machines.’ Much research has humans program machines to behave in a clever way, like playing chess, but, today, we emphasize machines that can learn, at least somewhat like human beings do.”

Machine learning: AI pioneer Arthur Samuel defined machine learning in the 1950s as “the field of study that gives computers the ability to learn without explicitly being programmed.” Today this learning often happens via neural networks.
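
To make Samuel’s definition concrete, here is a minimal sketch (our illustration, not part of the original definition) of a program that “learns” the slope and intercept of a line from data rather than being explicitly programmed with them. The data and learning rate are invented for the example:

```python
import numpy as np

# Toy data drawn from y = 2x + 1 with a little noise (invented for illustration).
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2 * x + 1 + rng.normal(0, 0.05, size=x.shape)

# The program is never told "slope = 2, intercept = 1"; it learns them by
# repeatedly nudging its guesses to reduce prediction error (gradient descent).
w, b = 0.0, 0.0
learning_rate = 0.5
for _ in range(2000):
    error = (w * x + b) - y
    w -= learning_rate * (2 * error * x).mean()  # gradient of squared error w.r.t. w
    b -= learning_rate * (2 * error).mean()      # gradient w.r.t. b

print(f"learned: y = {w:.2f}x + {b:.2f}")  # close to y = 2.00x + 1.00
```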

Neural networks and deep learning: Neural networks are, simply put, computer systems built to mimic the human brain, with each node leading to other nodes much as the brain is a complex collection of networked neurons. Neural networks enable deep learning, which allows a computer to learn from large amounts of data in a way that “creates an adaptive system that computers use to learn from their mistakes and improve continuously.”
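
To picture “nodes leading to other nodes,” consider this tiny illustrative network: signals flow from three inputs through a hidden layer of four nodes to a single output. Everything below is our sketch, not from the original post, and the weights are random stand-ins where a real network would learn them from data:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    # Each node sums its weighted inputs, then applies a simple
    # nonlinearity (ReLU), loosely mimicking a neuron firing.
    return np.maximum(0, inputs @ weights + biases)

x = np.array([0.5, -1.2, 3.0])                 # three input values
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)  # 3 inputs -> 4 hidden nodes
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # 4 hidden nodes -> 1 output

hidden = layer(x, W1, b1)
output = hidden @ W2 + b2  # final output node, no nonlinearity
print(output)
```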

Large language model (LLM): A language model trained on an enormous amount of text so that it can predict the next word in a sequence. This capability allows the model to “understand” inquiries and replicate human language in a largely coherent way.
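
A crude way to see next-word prediction in action is a bigram model, which simply counts which word tends to follow which in its training text. The toy sketch below (ours, trained on a single invented sentence) captures in miniature what LLMs do at the scale of billions of parameters and terabytes of text:

```python
from collections import Counter, defaultdict

# A tiny "training corpus," invented for illustration.
text = "the cat sat on the mat and the cat slept on the mat"
words = text.split()

# Count, for each word, which words follow it.
following = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def predict_next(word):
    # Predict the follower seen most often in training.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" ("cat" and "mat" tie; first seen wins)
```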

Much of the recent discussion centers on large language model AI. This type of AI relies on a neural network to find patterns in a massive amount of data. The larger the neural network (i.e., the more nodes it has), the larger the data input can be and, in theory, the greater the AI’s capability to interact with humans. Much large language model AI, such as ChatGPT from OpenAI (an AI research and development company), relies on very large neural networks, which allow the AI to continuously learn from its interactions with humans and improve its responses.
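
For readers who want to experiment beyond the chat window, OpenAI also exposes its models through a programmatic API. The sketch below is a minimal, illustrative example using the openai Python package as it looked in its 0.x releases; the model name and interface are our assumptions for illustration, and the details have changed across versions:

```python
import openai  # pip install openai (0.x-era interface shown)

openai.api_key = "YOUR_API_KEY"  # placeholder; use your own key

# Ask a GPT-3.5 chat model a question, much as one would in the
# ChatGPT web interface.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": "Explain in two sentences how a large language model works."}
    ],
)

print(response.choices[0].message.content)
```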

The Buzziest LLM AI Right Now: OpenAI’s ChatGPT and GPT-3.5

Last month OpenAI released ChatGPT, a chat interface for its GPT-3.5 large language model AI. Using the chat interface, users can pose a seemingly endless array of queries to GPT-3.5, often with stunning results. For example, when prompted to explain how ChatGPT works, the program responded:

ChatGPT is a large language model that is trained to generate human-like text. It does this by being fed vast amounts of text data, which it uses to learn the patterns and structures of language. To generate text, ChatGPT uses a type of artificial intelligence called a transformer, which allows it to process language more efficiently and effectively. It also utilizes a technique called self-attention, which enables it to consider the relationships between words in a sentence and the context in which they are used. When given a prompt, ChatGPT uses all of this information to generate a response that is coherent and resembles natural language.
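
The “self-attention” ChatGPT mentions can be sketched in a few lines: every word’s vector is scored against every other word’s vector, and those scores determine how much context each word draws from the rest. The numpy sketch below is our illustration of scaled dot-product attention with made-up sizes, not OpenAI’s implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Four "words," each represented by an 8-dimensional vector (made-up sizes).
x = rng.normal(size=(4, 8))

# Query/key/value projections are learned in a real model; random here.
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
Q, K, V = x @ Wq, x @ Wk, x @ Wv

# Every word scores every other word; softmax turns scores into weights.
scores = Q @ K.T / np.sqrt(K.shape[-1])
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)

# Each word's new representation blends all words, weighted by attention.
attended = weights @ V
print(weights.round(2))  # each row sums to 1: how much each word attends to the others
```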

And when asked to write a poem in the style of iambic pentameter on the superiority of cats, the program responded:

In iambic pentameter, the most common type of meter in English poetry, each line contains ten syllables and has a regular pattern of unstressed and stressed syllables. The pattern is typically written as “da-DUM” and can be thought of as the beat of the poem. Here is a short poem about the superiority of cats written in iambic pentameter:

With grace and stealth they prowl the night
Unseen, until they choose to strike
Their sharpened claws and piercing eyes
A hunter's soul within their guise
Superior, sleek, and oh so wise
The feline reigns, the ruler of all
In every home, they rule the hall
And all who dare defy their might
Shall feel the wrath of feline fright
For cats are king, and dogs must bow
To their majestic, feline pow'r
Long live the cats, in all their glory
Their reign will never, ever be over.

And, finally, when asked to write a college application essay about working in a soup kitchen, reflecting on what was learned and including an interaction with a small child, ChatGPT responded in a matter of seconds with a coherent, personalized, five-paragraph essay. I fully believe that many college admissions committees would be unable to detect that the essay was written by AI rather than a live person.

Why, as a society, should we care about the release of ChatGPT? Very simply put, the release of ChatGPT and the development of other large language model AIs is a seminal moment in human history. It is akin to what is probably the single most important technological development in history: Johannes Gutenberg’s creation of the movable-type printing press in the mid-15th century. Just as the printing press changed our relationship with information by making it available to a wider audience, large language model AI is changing our relationship with information by blurring the lines between human and machine. It forces us to reconsider what is distinctly human about intelligence if a machine can generate human language complete with analysis.

What Does All of This Mean for Higher Education?

It is clear that the development of large language model AI, and its growing availability to a general audience, could significantly change higher education. It calls into question the ways in which we have used writing as, in Daniel Herman’s words, “a benchmark for aptitude and intelligence.” Generative LLMs will force us to think about what we assess and how we assess it, shifting reliance on writing toward more creative assessments that require students to demonstrate application of knowledge rather than simply the ability to produce information.

Higher education is being called upon to rethink what we assess and why we assess it. We are being called upon to rethink the relationship between information, knowledge, and wisdom. When an AI can create passable prose with accurate information (something that ChatGPT and other LLM AIs cannot yet do consistently), is it enough to ask our students to “prove” that they know the information? Or does our assessment shift to asking students to apply information, demonstrating knowledge of the subject at hand?

Higher education must rethink digital literacy and how we prepare our students for this new world of large language model AI. As we move closer to a world of hybrid work where more and more jobs involve the use of generative AI for everything from discovering new drug molecules to developing ad copy, we will need to help our students understand how to partner with AI. How do they craft a request? How do they evaluate the results of the AI? How can they leverage AI to more deeply understand the world around them? This is a new digital literacy, and it goes beyond using a statistical software package or crafting a Google search.

What You Can Do Right Now

In September of last year, before the release of ChatGPT, Jeff Schatten wrote in The Chronicle of Higher Education, “It won’t be long before GPT-3, and the inevitable copycats, infiltrate the university. The technology is just too good and too cheap not to make its way into the hands of students who would prefer not to spend an evening perfecting the essay I routinely assign on the leadership style of Elon Musk.”

That time, that technology—it’s here, and higher education must decide how to respond.

In coming months we’ll do a much deeper dive on how you can respond to large language model AI but, in the interim, we would urge you to take the steps that John Warner suggests in his recent Inside Higher Ed blog, “Freaking Out About ChatGPT—Part I.”

  • Give students learning experiences that they are interested in and value so they are less inclined to use AI as a way of “doing an end run.”
  • Move away from using a single artifact, like a single exam or essay, as a measure of learning. Instead, create assessments that “take into consideration the processes and experiences of learning.”
  • Ask students to engage in metacognitive reflection that has them articulate what they have learned, how they have learned it, and why the knowledge is valuable.
  • Create assignments that require students to synthesize what they have learned and bring their own perspectives to the subject.
  • And, finally, create assignments that integrate the technology into learning.

We also need to begin thinking about how we define academic integrity in this new age of ChatGPT and other large language model AIs. This should lead to deeper conversations with our students about academic integrity.

As Melinda Zook, a Purdue history professor, puts it, “The fact is the professoriate cannot teach the way we used to. Today’s students have to take ownership over every step of the learning experience. No more traditional five-paragraph essays, no more ‘read the book and write about it.’” We must radically rethink our pedagogical practices for the 21st century.

In Conclusion: Danny Dunn and the Homework Machine

In 1958, Jay Williams and Raymond Abrashkin published Danny Dunn and the Homework Machine, a children’s book about three junior high schoolers who decide to use a prototype computer to do their homework for them. When their teacher discovers their ruse and confronts Danny, he passionately defends their decision to program the computer with all of the information in their textbooks and use it to produce their homework, exclaiming,

“It’s just another tool. Lots of kids do their homework on typewriters. In high school and college they teach kids to do some of their homework on slide rules. And scientists use all kinds of computers as tools for their work. So why pick on us? We’re just…just going along with the times.”

Junior high school hijinks ensue, including the sabotage of the computer by a jealous classmate and Danny’s heroic discovery and repair of the damage just as a representative from the federal government is about to leave in disgust. In the end, Danny and his friends recognize that in programming the computer to do their homework they have, in reality, been learning and doing the homework themselves, leading Danny to resolve not to use the computer for homework anymore. He does, however, close the story by wondering what a teaching machine might look like.

Reading Danny Dunn and the Homework Machine in light of ChatGPT was eerie. The story (written when Dwight Eisenhower was president) anticipates the very questions we are now asking about students leveraging the latest AI innovations, especially ChatGPT and GPT-3.5:

  • What is the purpose of homework?
  • What types of assistance should students be allowed to use?
  • What is academic integrity, and how does AI fit into discussions about it?
  • Are there ways for students to use AI that do not compromise academic integrity?
  • What does it mean to learn?
  • And, finally, what is the role of the teacher in this new age of AI?

In the coming months, we’ll explore these larger issues around AI and higher education. Meanwhile, we would love to hear your thoughts on ChatGPT and other AI tools and their impact on higher education. You can send any thoughts or questions to Van Davis at vdavis@wiche.edu.

Van Davis

Chief Strategy Officer, WCET


vdavis@wiche.edu
