As we approach the fall semester, we also approach the start of the first full academic year in a generative AI, ChatGPT world. We’ve heard that many institutions have used the summer to examine how they will integrate AI into their campuses (or whether they will at all). In March and April, WCET surveyed higher education leaders around the United States.

That report, Supporting Instruction and Learning Through Artificial Intelligence: A Survey of Institutional Practices and Policies, found that only four percent of respondents reported that their institution had an overall institutional strategy for approaching AI, and only seven percent had strategies at the department or college level. In fact, the majority (52 percent) reported that their institution had no strategy at all. Clearly, campuses are struggling to make sense of AI and its impact on higher education.

Our report closed with a number of recommendations, which you can find outlined in our July 20th blog. But as institutions prepare for what is sure to be an AI-filled fall semester, we wanted to share what we believe are the top five things you can do to prepare for artificial intelligence on campus as the new academic year nears.

1. Put an AI Statement in Your Syllabus

Every syllabus should include a statement that addresses whether and how AI can be used in your class and what academic integrity looks like in the context of artificial intelligence.

Why this is important

Students need transparency when it comes to faculty expectations around AI use in class. This transparency should go beyond an academic integrity statement to include guidance on the acceptable use of AI by students as well as the ways that you as an instructor may leverage AI. Since expectations around AI use will vary from instructor to instructor and discipline to discipline, it is especially important to provide students with clear expectations at the beginning of the term.

What this could look like

Lance Eaton has collected a number of excellent examples of AI syllabus statements that run the gamut from disallowing AI entirely to instructing students on its proper use. Three statements (one disallowing the use of AI, one allowing for minimal use of AI, and one allowing for significant use of AI) are highlighted below.

1) No use of AI:

“All work submitted in this course must be your own. Contributions from anyone or anything else, including AI sources, must be properly quoted and cited every time they are used. Failure to do so constitutes an academic integrity violation, and I will follow the institution’s policy to the letter in those instances.” A theater course at a small liberal arts college.

2) Minimal use of AI:

“You might be permitted to use generative AI tools for specific assignments or class activities. However, assignments created with AI should not exceed 25% of the work submitted and must identify the AI-generated portions. Presenting AI-generated work as your own will have consequences according to university policies. Importantly, while AI programs like ChatGPT can help with idea generation, they are not immune to inaccuracies and limitations. Further, overreliance on AI can hinder independent thinking and creativity. Note that, in the spirit of this policy, it was written in part by ChatGPT.” A marketing course at a public university.

3) Significant use of AI:

“Within this course, you are welcome to use generative artificial intelligence (Ai) models (ChatGPT, DALL-E, GitHub Copilot, and anything after) with acknowledgment. However, you should note that all large language models have a tendency to make up incorrect facts and fake citations, they may perpetuate biases, and image generation models can occasionally come up with offensive products. You will be responsible for any inaccurate, biased, offensive, or otherwise unethical content you submit regardless of whether it originally comes from you or an Ai model.

If you use an Ai model, its contribution must be cited and discussed:

  • What was your prompt?
  • Did you revise the Ai model’s original output for your submission?
  • Did you ask follow-up questions?
  • What did you learn?

Having said all these disclaimers, the use of Ai models is encouraged, as it may make it possible for you to submit assignments and your work in the field with higher quality and in less time.” A graduate-level library sciences course at a public university.

2. Create a Campus Task Force

Hopefully, by now your institution has created a campus task force to examine the use and impact of artificial intelligence on your campus.

Why this is important

Any development of a campus-wide AI policy must take into consideration multiple stakeholders beyond academic affairs—AI is neither a strictly academic nor a strictly IT issue.

Our institutions are often siloed; by creating a task force that represents all campus stakeholders, your institution will be well positioned to create a comprehensive set of AI policies.

What this could look like

It is critical that your campus task force include all stakeholders, including students. A comprehensive task force should include:

  • IT,
  • faculty,
  • instructional designers,
  • student support services (especially your campus accessibility office),
  • faculty development,
  • student affairs (especially your conduct office if your campus has one),
  • general counsel, and
  • students.

A great example of a comprehensive task force is the one created by the University of Alberta, which includes students, instructional design, faculty, IT, and the Dean of Students.

3. Create an AI Assessment Plan

Just as any pedagogical practice should be assessed for effectiveness, so should the use of artificial intelligence.

Why this is important

AI usage on campuses can be vast. Faculty may choose to use AI in their classes in a number of ways. Institutions may incorporate AI through predictive analytics programs. Student academic support services and student affairs may use AI-driven chatbots to help students with academic and personal needs. And the business office and student recruitment may use informational AI-driven chatbots to respond to basic student questions and requests. Institutions need to create an assessment plan to help determine which AI efforts are effective and worth continuing. This will be especially important as AI-driven software and apps become more prevalent in the marketplace.

What this could look like

Institutions that have research and assessment offices should consider tapping into that resource to develop and implement a campus-wide AI assessment plan. Any plan should include multiple assessment practices, including end-user surveys. Institutions without assessment and research offices should consider leveraging their campus task force to develop an assessment plan.

4. Review Your Data Privacy and Security Policies

Data privacy and security policies will become especially important as campuses begin to leverage more and more AI tools. Current policies may not be applicable to these new tools.

Why this is important

Large language models are trained on vast amounts of data scraped from numerous sources. Although ChatGPT now has a setting that allows users to opt out of having their interactions become part of the training set, that option is not enabled by default. Other large language models, such as Bard, currently don’t have such a privacy option. It is critical that campuses ensure that FERPA and other data privacy and security regulations are followed in any AI implementation. Additionally, faculty planning to use AI in their courses need to have a privacy discussion with students on the first day of class so that students can make informed decisions regarding the use of AI and the sovereignty of their data.

What this could look like

Institutions already have data privacy and security policies in place that will need to be reviewed in the context of artificial intelligence. Considerations institutions should address include:

  • how FERPA data and other identifiable information should be handled;
  • students’ ability to opt out of using generative AI; and
  • faculty and staff downloads of various AI tools.

One example of a policy statement from Oregon State University is:

Because OSU representatives have no recourse for holding externally hosted AI platforms accountable for data storage or use, and because these platforms may be hosted outside of OSU’s legal jurisdiction, the accidental or deliberate introduction of protected data could result in organizational, legal, or even regulatory risks to OSU and university employees. Unlike vendors who have undergone vetting before implementation in support of OSU business needs, OSU administrators lack the authority to enforce standard data governance, risk management, and compliance requirements upon publicly available AI platforms. Pursuant to these concerns, the Office of Information Security (OIS) strongly recommends that OSU employees who wish to utilize externally hosted artificial intelligence tools for research, instruction, or administration, reach out to OIS for a brief consultation prior to proceeding.

The introduction of Sensitive/IRB Level II (e.g., FERPA-protected or proprietary) or Confidential/IRB Level III (e.g., PII or PHI) data to AI platforms is strictly prohibited. OSU instructors who assign AI-enabled assignments should also remind their students that they should avoid providing sensitive data to AI prompts. OIS recommends that instructors who wish to direct their students to utilize Internet-based AI tools include the following language in their syllabi:

“Because OSU does not control the online AI tools associated with the curriculum of this course, the Office of Information Security advises students to avoid entering Personally Identifiable Information (PII) or otherwise sensitive data into any AI prompt. For additional information, contact the Office of Information Security, or visit the OIS website at https://uit.oregonstate.edu/infosec/ .”

Other policy examples include Iowa State University’s policy on the secure and ethical use of artificial intelligence and the University of Michigan’s policy.

5. Develop an Ongoing Professional Development Plan

With the rapid evolution of this technology, an ongoing plan for professional development will be critical for faculty and staff. As a participant in a recent meeting put it, “AI won’t replace faculty, but faculty that use AI will replace faculty (that don’t).”

Why this is important

Generative AI is already significantly changing pedagogical and assessment practices, and many faculty will need assistance navigating those changes.

It is becoming particularly critical that faculty rethink traditional assessment practices, as ChatGPT and other AI tools can create essays, answer problem sets, write code, and answer multiple-choice and fill-in-the-blank assessment questions.

What this could look like

Institutions should invest in frequent formal and informal faculty development activities that involve both general instruction on generative AI and discipline-specific training. In addition to one-time seminars, synchronous activities might also include the development of disciplinary communities of practice that discuss AI throughout the term or academic year.

Although it is not specifically aimed at artificial intelligence, Every Learner Everywhere’s Communities of Practice: A Playbook for Centering Equity, Digital Learning, and Continuous Improvement is a wonderful place to start.

Institutions might also consider the development of asynchronous resources such as the asynchronous faculty development course developed by Auburn University. Finally, institutions that have not already built a faculty resource site should consider creating one, such as those found at Northwestern University, The Ohio State University, Texas A&M University, Arizona State University, and Pima Community College.

Conclusions

Generative artificial intelligence is becoming prevalent on our campuses whether we are prepared for it or not. It is critical that campuses prepare for the use of AI in as holistic a way as possible. We cannot confine our discussion to academic integrity; we must begin developing data privacy and security processes as well as addressing intellectual property and accessibility, among other things. And the development of these processes and policies cannot take place in a vacuum; they must involve all campus stakeholders. WCET continues to develop new resources to assist campuses as they enter into these conversations and develop AI policies and practices. You can always find WCET resources on our AI page. We will also be hosting an AI pre-conference workshop at our Annual Meeting in New Orleans, LA, October 25-27.


Van Davis

Chief Strategy Officer, WCET


vdavis@wiche.edu

LinkedIn Profile
