Developing Institutional Level AI Policies and Practices: A Framework
Published by: WCET | 12/7/2023
Tags: Artificial Intelligence, ChatGPT, Distance Education, Student Success, WCET
ChatGPT recently turned one, and what a wild first year it has been. Over the last twelve months, institutions have scrambled not only to better understand generative Artificial Intelligence (AI) and its impact on teaching and learning, but also to determine the best ways to provide guardrails and guidance for faculty, staff, and students. Many institutions have struggled to develop institutional-level policies.
In a spring survey administered by WCET, only eight percent of respondents reported that their institution had developed and/or implemented at least one AI-related policy.
The majority of respondents, 65 percent, indicated that their institutions are developing or plan to develop policies but have not yet done so. Campus policy discussions have initially centered on academic integrity. And while these discussions are critical, they cannot be the end of the AI conversation on campuses. Institutions must also consider additional areas such as data security and privacy, promotion and tenure practices, professional development planning, and many other policy and practice areas.
WCET has developed an AI policy and practice framework to help institutions identify the policy areas that they need to address and develop policies and guidelines for those areas.
In 2023, Cecilia Ka Yuk Chan conducted research on perceptions and implications of text generative AI technologies in order to develop an AI policy for higher education. Based on the findings, she proposed an AI Ecological Education Policy Framework to address “the multifaceted implications of AI integration in university teaching and learning.” The WCET framework adapts Chan’s framework and categorizes institutional AI policy needs into three areas: Governance, Operations, and Pedagogy.
Undergirding all three areas of our policy and practice framework is the ethical and responsible use of AI. All policy decisions at colleges and universities should be grounded in ethical considerations of AI. Doing so ensures the most effective and responsible use of, and teaching about, these technologies. And it is often institutional administrators who lead this work. Failing to develop and implement AI policies within the context of ethical considerations exposes the institution, and thus its leaders, at best to inefficient use of resources that often include taxpayer funds and, at worst, to serious breaches of privacy, security, transparency, and equity.
This dimension emphasizes the governance considerations surrounding AI usage in higher education. Governance refers to the senior management at an institution, including such positions and roles as Chancellor/President, Chief Academic Officer, Chief Information Officer, Vice President for Student Services, Vice President for Institutional Research/Effectiveness, and others depending on the campus context. Governance may also encompass managers such as Deans and Chairs of academic discipline units. Members of senior leadership will be the initiators for the Governance dimension of the framework. As they hold decision-making authority, they should set the tone for effective and innovative AI use across campus and ensure that all AI policies and practices support the mission and goals of the institution and foster an equitable and inclusive environment.
Here we highlight six areas of responsibility:
Data governance refers to an institution’s policies and processes that ensure effective and responsible management, including security, throughout the complete lifecycle of the data, and that implement data controls supporting business objectives.
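To make “data controls” concrete, here is a minimal sketch of one such control: a default-deny check that only permits an AI tool to process data at or below the sensitivity tier for which it has been vetted. Everything in this example (the tier names, the tool registry, and the check_ai_tool_use function) is a hypothetical illustration, not part of the framework or any particular institution’s policy.

```python
# A minimal, hypothetical sketch of one automated data control: a
# default-deny check that an AI tool is approved for a given data
# sensitivity tier. Tier and tool names are illustrative only.

# Classification tiers, ordered from least to most sensitive.
SENSITIVITY_LEVELS = ["public", "internal", "confidential", "restricted"]

# Highest tier each AI tool has been vetted to handle (hypothetical).
APPROVED_AI_TOOLS = {
    "campus-hosted-llm": "confidential",
    "public-chatbot": "public",
}


def check_ai_tool_use(tool: str, data_sensitivity: str) -> bool:
    """Return True only if the tool is approved for data at this tier."""
    approved_tier = APPROVED_AI_TOOLS.get(tool)
    if approved_tier is None:
        return False  # Unvetted tools are denied by default.
    return (SENSITIVITY_LEVELS.index(data_sensitivity)
            <= SENSITIVITY_LEVELS.index(approved_tier))


# Student records (restricted) must not go to a public chatbot:
assert not check_ai_tool_use("public-chatbot", "restricted")
# A tool vetted for confidential data may handle internal data:
assert check_ai_tool_use("campus-hosted-llm", "internal")
```

The default-deny posture mirrors the point of the paragraph above: a control exists at each point in the data lifecycle, and anything not explicitly approved is blocked.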
Campus administrators should also oversee (working in concert with such units as Institutional Research and Information Technology) the evaluation of the effectiveness of AI in every use. The information and data collected should be harnessed for continuous improvement of AI planning, policies, and practices. By regularly collecting feedback from all users, including students, colleges and universities can make informed decisions about how to improve AI implementation. Evaluating the effectiveness of AI tools in enhancing learning outcomes is also vital to determine their value and make adjustments as needed.
Where appropriate, institutional governance should work to encourage campus personnel, including faculty, as well as students to utilize AI technologies. It may be important to continue to emphasize that, even in AI use, faculty remain centered as the subject matter experts (SMEs) and that AI technologies can support their ongoing role as SMEs. Along with this, though, comes the responsibility of monitoring that use, including in the conduct of research, to ensure that it is ethical, effective, and appropriate.
Ensuring equitable access to AI technologies is crucial for fostering an inclusive learning environment. Universities should work to provide the appropriate technologies and support to all students, faculty, administrators, and staff, regardless of their background or access to technology. By promoting equal access to AI technologies, universities can help level the playing field and ensure that all students and staff have the opportunity to benefit from the advantages offered by AI integration. Not doing so widens the digital divide.
Leaders at institutions will need to consider how intellectual property, including research, course materials, and student-produced work, is defined and, where needed, protected when created using AI, either fully or in part. However, these policies must be developed in accordance with U.S. and international copyright laws (which are scrambling to keep up with the new technologies) and, thus, likely should involve collaboration with the institution’s legal counsel.
Institutional leaders should also consider how works produced using AI are considered in promotion, tenure, and reappointment of faculty. These processes can be used to reward and incentivize innovative research and teaching, but they also should guard against plagiarism of content in portfolios and dossiers.
This dimension assists in the understanding and implementation of AI across the institution and includes staff in key areas such as Academic Affairs, Information Technology, and Centers for Teaching & Learning Effectiveness/Excellence. Here we highlight three areas of responsibility:
Training and support on AI technologies should be offered to all who use or may use AI, including administrators, staff, faculty, and students. Effective training and support can go a long way toward alleviating the often extensive (and legitimate) concerns about integrating AI into work, instruction, and learning. Investing in training, support, and resources can help educators, their students, and others feel more confident and capable in navigating the complexities and ever-changing landscape of AI technologies.
The responsibility for developing and maintaining an institution’s AI infrastructure will likely fall primarily to Information Technology in consultation with other units to determine needs and evaluate costs and efficacy of tools.
All operational units should be engaged in scanning the AI landscape to review and recommend platforms and tools that can enhance the efficiency and effectiveness of the institution’s operations, whether for student services and support, instruction and learning, admissions, recruitment and marketing, staff workflows, or resource planning, among other areas.
This dimension emphasizes the practical implementation of AI to support instruction and learning in the classroom. Faculty are the initiators of this dimension, working closely with those in Operations to actualize policy and planning from the Governance level while always considering ethical dimensions. Instructors are ultimately responsible for designing and implementing curricula, activities, and assessments that utilize AI technologies. They will need to gain some expertise to determine how AI can best support and enhance students’ learning experiences while assisting learners in understanding the implications for academic integrity. Here we highlight seven areas of responsibility:
Generative AI has raised concerns that students will misuse these technologies to plagiarize. The clearer and more consistent policies are, the more likely students are to understand and follow them, reducing the chances of misuse. Policies and guidelines may range from those that ban the use of AI in the classroom altogether to those that allow and even encourage its use. Policies regarding appropriate attribution and acknowledgment of AI technologies used to create assignments and other products of learning are crucial as well. There may be an institutional policy regarding this; if not, faculty should develop their own.
Assessing the effectiveness of learning is a hallmark of education; however, it has been historically fraught and intertwined with ensuring academic integrity. The increasing ubiquity of Generative AI has further complicated these practices, necessitating reconsideration of assessment methods to balance the benefits of AI with the need to maintain academic integrity.
Faculty should clearly state in the syllabus how students will be expected to use AI in the class and should also verbally communicate those expectations on the first day of class. Being clear about how a faculty member will leverage AI in the course allows students to make informed decisions about whether to stay in the course.
The increasing ubiquity of Generative AI in the workplace calls for a new digital literacy. This need makes it imperative for institutions to prepare students for this complex technological working landscape, equipping them with the skills and knowledge to successfully navigate not only the current landscape but a rapidly evolving one as well. Faculty should therefore teach not only the basic skills students need to integrate AI into their work, but also how to determine when it is appropriate to use AI, how to evaluate the tools, and how to understand AI’s role in professional settings.
Instructors should make students aware that discrimination can be programmed into AI, since fallible humans develop the inputs these systems rely on and may themselves perpetuate discriminatory practices through the data.
The use of AI to augment or even replace certain instructional and related support practices, such as information delivery, responding to questions, assessment, tutoring, and personalized learning and guidance, could have a significant impact on norms and expectations around interactions between students and instructors. Institutions should ensure that they address the extent to which faculty are allowed to automate instruction through the use of artificial intelligence and the aspects of instruction that can leverage artificial intelligence. For most institutions, this will mean revising existing policies on regular and substantive interaction.
It is important to consider the ways in which some generative AI tools might not be accessible to students with disabilities and other learning challenges, while others may support accessibility, including for users of assistive technology. All learners using assistive technology must be able to meaningfully engage and independently interact with AI interfaces and outputs.
Putting aside fears of AI surpassing human intelligence and achieving singularity – a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible – legitimate concerns remain a year after the appearance of ChatGPT, including in education.
Issues surrounding academic integrity, the quality of knowledge produced by AI tools, the replacement of instructors by AI, mitigating a new “digital divide,” and how to prepare students for an AI-infused workforce, among others, are real.
Institutions continue to grapple with security and privacy, equity and access, and other challenges that these technologies present.
In his Substack AI + Education = Simplified, Lance Eaton suggests that the all-too-common reinvention of wheels in higher education – “the thing that contributes to institutions being so slow” – is stymieing effective use of AI in the sector. WCET is committed to addressing this challenge by bringing institutions together to share knowledge and providing resources to support the community. One of the resources that we are most excited about is the development of our AI Policy and Practice Toolkit which we will release later this month. This WCET members-only resource builds out our AI Policy and Practice Framework and includes sample policies and/or guidelines for each of the outlined areas.
If your institution is not a WCET member, you can join now. In celebration of our 35th anniversary, WCET is offering 35% off new memberships through the end of the year. You can find more information about it here. And if you are wondering whether your institution is a member of WCET, you can access a list of members here.