A general perusal of Inside Higher Ed, the Chronicle of Higher Education, or the internet at large turns up countless fears that generative AI, especially in the form of large language models such as ChatGPT, will increase attacks on academic integrity.


Take Jeremy Weissman’s opinion piece in Inside Higher Ed, in which he compares ChatGPT with the early days of the COVID pandemic.

Calling ChatGPT and generative AI a “plague upon education,” Weissman opines “In these early days of the GPT spread, we are largely defenseless against this novel threat to human intelligence and academic integrity. A return to handwritten and oral in-class assignments—a lockdown response—may be the only immediate effective solution as we wait for more robust protections to arise.”

As a result of such fears, most of the discussion around generative AI and institutional policy has revolved around academic integrity, but there are myriad other areas that institutions need to be aware of and address through policy.

Academic Integrity and Artificial Intelligence

Recently, WCET conducted a survey of college and university leaders regarding the use of generative AI on their campuses. Of the more than 600 respondents, only 8 percent indicated that their institutions had implemented policies around artificial intelligence, and the largest share of those policies (21 percent) addressed academic integrity. Of the 57 percent of respondents at institutions planning or developing policies:

  • 70 percent were planning academic integrity policies,
  • 51 percent were planning policies around instructional use,
  • 32 percent were planning policies around data security,
  • 27 percent were planning intellectual property policies,
  • 26 percent were planning privacy policies, and
  • .04 percent were planning accessibility policies.

Note: Full analysis of this survey, including institutional recommendations, will be published in June.

That lack of institutional policy is borne out in research conducted by Primary Research Group earlier this year. That research found that only 14 percent of college administrators reported the existence of institutional guidelines and only 18 percent of instructors reported having policies and guidelines on the use of generative AI in their classes.

A Perusal of Current Policy Standings

A lack of other policies notwithstanding, academic integrity is often the first policy area that institutions and faculty address, and such policies run the gamut from completely outlawing any use of generative AI to allowing its use with appropriate attribution. For example, the University of Missouri’s general academic dishonesty policy states, “Academic honesty is fundamental to the activities and principles of the University… Any effort to gain an advantage not given to all students is dishonest whether or not the effort is successful.” The institution’s informational page on AI usage goes on to state, “Students who use ChatGPT and similar programs improperly are seeking to gain an unfair advantage, which means they are committing academic dishonesty.”

The Ohio State University takes a slightly different tack by outlawing the use of generative AI tools unless an instructor explicitly gives permission for students to use such tools. The institution’s academic integrity and artificial intelligence page states, “To maintain a culture of integrity and respect, these generative AI tools should not be used in the completion of course assignments unless an instructor for a given course specifically authorizes their use… [T]hese tools should be used only with the explicit and clear permission of each individual instructor, and then only in the ways allowed by the instructor.” Most institutions crafting generative AI academic integrity policy appear to be adapting existing academic integrity policies as well as ceding the development of such policy to instructors.

Although there is currently no comprehensive directory of course-level AI usage policies, Lance Eaton has begun to crowdsource examples of such policies. A perusal of this collection of classroom policies indicates that most policies fall into two categories—bans on generative AI and use of generative AI with attribution. One such policy outlawing the use of AI reads, “Some student work may be submitted to AI or plagiarism detection tools in order to ensure that student work product is human created. The submission of AI generated answers constitutes plagiarism and is violation of CSCC’s student code of conduct.” Or, as one instructor from Northeast Lakeview College submitted for that institution’s ENGL 1301, 1302, 2322, 2323, and 2338 courses—“Unless otherwise explicitly instructed, students are not allowed to use any alternative generation tools for any type of submission in this course. Every submission should be an original composition that the student themselves wholly created for this course.”

Most of the sample policies catalogued by Eaton treat AI-generated content like any other non-student-generated content and require attribution if it is used. For example, for theater courses at one small liberal arts college, syllabi contain the following policy: “All work submitted in this course must be your own. Contributions from anyone or anything else—including AI sources, must be properly quoted and cited every time they are used. Failure to do so constitutes an academic integrity violation, and I will follow the institution’s policy to the letter in those instances.” Some policies go further. Ethan Mollick of the Wharton School at the University of Pennsylvania proclaims, “I expect you to use AI (ChatGPT and image generation tools, at a minimum), in this class. In fact, some assignments will require it. Learning to use AI is an emerging skill.” Mollick goes on to warn students, “Be aware of the limits of ChatGPT: If you provide minimum effort prompts, you will get low quality results… Don’t trust anything it says. If it gives you a number or fact, assume it is wrong unless you either know the answer or can check in with another sources… AI is a tool but one that you need to acknowledge using. Please include a paragraph at the end of any assignment that uses AI explaining what you used the AI for and what prompts you used to get the results… Be thoughtful about when this tool is useful. Don’t use it if it isn’t appropriate for the case or circumstances.”

Other Institutional Policy Areas

There are numerous policy areas beyond academic integrity that institutions need to take into consideration when determining AI usage on their campuses. Perhaps chief among these is data privacy and data security. Large language models are built on the ingestion of massive amounts of data, and data entered into current generative models (such as ChatGPT) could be stored. Thus, faculty and staff must be cautioned against providing generative AI with FERPA-protected student data that might compromise student data privacy. Institutions may also want to develop intellectual property policies that address generative AI-assisted works; there is currently considerable discussion around whether AI-generated work can be copyrighted. Finally, institutions should consider the ways in which generative AI can affect accessibility. Although generative AI can function as an accommodation for some students, not all generative AI tools are currently accessible to all users. As faculty begin to incorporate generative AI into their courses, institutions should consider what to do when an AI tool does not meet ADA accessibility requirements.

What Your Institution Can Do

Institutions cannot afford to wait to address generative AI and should begin developing policies now. As Daniel Dolan and Ekin Yasin write in their March 23, 2023 Inside Higher Ed piece, “A Guide to Generative AI Policy Making,” institutions should respond to generative AI with speed, strategic purpose, and “inclusive focus on equitable student value.”

Although WCET will be providing our members with more specific recommendations in the coming months, some general recommendations include:

  • Create an institutional taskforce composed of all campus stakeholders, including faculty, instructional design staff, educational technology professionals, IT representatives, and students.
  • Determine your institution’s greatest challenges and biggest questions regarding generative AI. Once you have determined these, you can make an informed decision as to what challenges should be addressed with institutional policies versus what should be addressed with course level policies.
  • Review what other institutions are doing.
  • Make sure to take equity into consideration. For a general overview of ethics and equity in generative AI, consult WCET’s April 20, 2023 blog post “Equity in a World of Artificial Intelligence.”
  • Don’t pretend that the challenges surrounding generative AI are going away. Artificial intelligence is here to stay, and institutions need to address the challenges that it poses head on.

As one respondent in the recent WCET generative AI survey put it, “It’s the wild, wild west. And we don’t have any horses.”

Generative AI isn’t going anywhere. We have already seen its use and complexity grow by leaps and bounds in just the last six months. Just as institutions have developed intellectual property, privacy, data security, academic integrity, and accessibility policies, they now need to revisit those policies in light of generative AI.

We cannot afford to stick our collective heads in the sand. It’s time to saddle up and ride into the wild, wild west.


Van Davis

Chief Strategy Officer, WCET


vdavis@wiche.edu
