The Five Things You Should be Doing to Prepare for AI’s First Full Year at College
Published by: WCET | 8/3/2023
As we approach the fall semester, we also approach the start of the first full academic year in a generative AI, ChatGPT world. We’ve heard that many institutions have used the summer to examine how they will integrate (or not) AI into their campuses. In March and April, WCET surveyed higher education leaders around the United States.
That report, Supporting Instruction and Learning Through Artificial Intelligence: A Survey of Institutional Practices and Policies, found that only four percent of respondents reported that their institution had an overall institutional strategy for approaching AI, and only seven percent had strategies at the department or college level. In fact, the majority (52 percent) reported that their institution had no strategy at all. Clearly, campuses are struggling to make sense of AI and its impact on higher education.
Our report closed with a number of recommendations, which you can find outlined in our July 20th blog. But as institutions prepare for what is sure to be an AI-filled fall semester, we wanted to share what we believe are the top five things you can do to prepare for artificial intelligence on campus as the new academic year nears.
Every syllabus should include a statement that addresses whether and how AI may be used in your class and what academic integrity means in your class in the context of artificial intelligence.
Students need transparency when it comes to faculty expectations around AI use in class. This sort of transparency should go beyond an academic integrity statement to include guidance on the acceptable use of AI in your class, as well as the ways that you as an instructor may leverage AI yourself. Since expectations around AI use will vary from instructor to instructor and discipline to discipline, it is especially important that we provide students with clear expectations at the beginning of the term.
Lance Eaton has collected a number of excellent examples of AI syllabus statements that run the gamut from disallowing the use of AI to instructing students on the proper use of AI. Three statements (one disallowing the use of AI, one allowing for minimal use of AI, and one allowing for significant use of AI) are highlighted below.
1) No use of AI:
“All work submitted in this course must be your own. Contributions from anyone or anything else, including AI sources, must be properly quoted and cited every time they are used. Failure to do so constitutes an academic integrity violation, and I will follow the institution’s policy to the letter in those instances.” A theater course at a small liberal arts college.
2) Minimal use of AI:
“You might be permitted to use generative AI tools for specific assignments or class activities. However, assignments created with AI should not exceed 25% of the work submitted and must identify the AI-generated portions. Presenting AI-generated work as your own will have consequences according to university policies. Importantly, while AI programs like ChatGPT can help with idea generation, they are not immune to inaccuracies and limitations. Further, overreliance on AI can hinder independent thinking and creativity. Note that, in the spirit of this policy, it was written in part by ChatGPT.” A marketing course at a public university
3) Significant use of AI:
“Within this course, you are welcome to use generative artificial intelligence (AI) models (ChatGPT, DALL-E, GitHub Copilot, and anything after) with acknowledgment. However, you should note that all large language models have a tendency to make up incorrect facts and fake citations, they may perpetuate biases, and image generation models can occasionally come up with offensive products. You will be responsible for any inaccurate, biased, offensive, or otherwise unethical content you submit regardless of whether it originally comes from you or an AI model.
If you use an AI model, its contribution must be cited and discussed.
Having said all these disclaimers, the use of AI models is encouraged, as it may make it possible for you to submit assignments and your work in the field with higher quality and in less time.” A graduate-level library sciences course at a public university
Hopefully by now your institution has created a campus task force to examine the use and impact of artificial intelligence on your campus.
Any development of a campus-wide AI policy must take into consideration multiple stakeholders beyond academic affairs: AI is neither a strictly academic nor an IT issue.
Oftentimes our institutions are siloed; by creating a task force that represents all campus stakeholders, your institution will be well positioned to create a comprehensive set of AI policies.
It is critical that your campus task force include all stakeholders, including students. A great example of a comprehensive task force is the one created by the University of Alberta, which includes students, instructional designers, faculty, IT, and the Dean of Students.
Just as any pedagogical practice should be assessed for effectiveness, so should artificial intelligence.
AI usage on campuses can be vast. Faculty may choose to use AI in their classes in a number of ways. Institutions may incorporate AI through predictive analytics programs. Student academic support services and student affairs may use AI-driven chatbots to help students with academic and personal needs. And the business office and student recruitment may use informational AI-driven chatbots to respond to basic student questions and requests. Institutions need to create an assessment plan to help determine which AI efforts are effective and worth continuing. This will be especially important as AI-driven software and apps become more prevalent in the marketplace.
Institutions that have research and assessment offices should consider tapping into that resource to develop and implement a campus-wide AI assessment plan. Any plan should include multiple assessment practices, including end-user surveys. Institutions without assessment and research offices should consider leveraging their campus task force to develop an assessment plan.
Data privacy and security policies will become especially important as campuses begin to leverage more and more AI tools. Current policies may not be applicable to these new tools.
Large language models are trained on vast amounts of data scraped from numerous sources. Although ChatGPT now has a setting that allows users to opt out of having their interactions become part of the training set, that option is not enabled by default. Other large language models, such as Bard, currently don’t offer such a privacy option. It is critical that campuses ensure that FERPA and other data privacy and security regulations are followed in any AI implementation. Additionally, faculty planning to use AI in their courses need to have a privacy discussion with students on the first day of class so students can make informed decisions regarding the use of AI and the sovereignty of their data.
Institutions already have data privacy and security policies in place that will need to be reviewed in the context of artificial intelligence. One example of such a policy statement, from Oregon State University:
Because OSU representatives have no recourse for holding externally hosted AI platforms accountable for data storage or use, and because these platforms may be hosted outside of OSU’s legal jurisdiction, the accidental or deliberate introduction of protected data could result in organizational, legal, or even regulatory risks to OSU and university employees. Unlike vendors who have undergone vetting before implementation in support of OSU business needs, OSU administrators lack the authority to enforce standard data governance, risk management, and compliance requirements upon publicly available AI platforms. Pursuant to these concerns, the Office of Information Security (OIS) strongly recommends that OSU employees who wish to utilize externally hosted artificial intelligence tools for research, instruction, or administration, reach out to OIS for a brief consultation prior to proceeding.
The introduction of Sensitive/IRB Level II (e.g., FERPA-protected or proprietary) or Confidential/IRB Level III (e.g., PII or PHI) data to AI platforms is strictly prohibited. OSU instructors who assign AI-enabled assignments should also remind their students that they should avoid providing sensitive data to AI prompts. OIS recommends that instructors who wish to direct their students to utilize Internet-based AI tools include the following language in their syllabi:
“Because OSU does not control the online AI tools associated with the curriculum of this course, the Office of Information Security advises students to avoid entering Personally Identifiable Information (PII) or otherwise sensitive data into any AI prompt. For additional information, contact the Office of Information Security, or visit the OIS website at https://uit.oregonstate.edu/infosec/.”
With the rapid evolution of this technology, an ongoing plan for professional development will be critical for faculty and staff. As a participant in a recent meeting put it, “AI won’t replace faculty, but faculty that use AI will replace faculty (that don’t).”
Generative AI is already significantly changing pedagogical and assessment practices, and many faculty will need assistance adapting.
It is becoming particularly critical that faculty rethink traditional assessment practices, as ChatGPT and other AI tools can create essays, answer problem sets, write code, and answer multiple-choice and fill-in-the-blank assessment questions.
Institutions should invest in frequent formal and informal faculty development activities that involve both general instruction on generative AI and discipline-specific training. In addition to one-time seminars, synchronous activities might also include the development of disciplinary communities of practice that discuss AI throughout the term or academic year.
Although it is not specifically aimed at artificial intelligence, Every Learner Everywhere’s Communities of Practice: A Playbook for Centering Equity, Digital Learning, and Continuous Improvement is a wonderful place to start.
Institutions might also consider developing asynchronous resources, such as the asynchronous faculty development course created by Auburn University. Finally, institutions that have not already built a faculty resource site should consider doing so; examples can be found at Northwestern University, The Ohio State University, Texas A&M University, Arizona State University, and Pima Community College.
Generative artificial intelligence is becoming prevalent on our campuses whether we are prepared for it or not. It is critical that campuses prepare for the use of AI in as holistic a way as possible. We cannot confine our discussion to academic integrity; we must begin developing data privacy and security processes as well as address intellectual property and accessibility, among other things. And the development of these processes and policies cannot take place in a vacuum; they must involve all campus stakeholders. WCET continues to develop new resources to assist campuses as they enter into these conversations and develop AI policies and practices. You can always find WCET resources on our AI page. We will also be hosting an AI pre-conference workshop at our Annual Meeting in New Orleans, LA, October 25-27.