What We Built at UCF


When reviewing a tool for accessibility, we ask whether it will work for students in real course use. In practice, that question usually runs through a Voluntary Product Accessibility Template (VPAT®). VPATs are widely available and a standard part of many review processes, but they are often highly technical and difficult to interpret without specialized training. Too often, teams spend time translating conformance language instead of completing the judgment work of interpreting what claims mean in context, identifying risks, and addressing findings. To navigate this tension, we focused on two questions: how could we add clarity to accessibility reviews, and where could generative AI add value without replacing human oversight?

At the University of Central Florida (UCF), the Center for Distributed Learning (CDL) supports the adoption of educational technologies with a mission to make high-quality education available to anyone, anywhere, at any time. To clarify accessibility evaluation while preserving reviewer oversight, CDL developed the VPAT Evaluator, a generative AI tool that translates VPAT documentation into structured, narrative reports to support human judgment.

Image: Promotional graphic for "VPAT Evaluator," featuring the tagline "Easy-to-read accessibility reports" and a button labeled "Evaluate a VPAT."

Reviewers upload a VPAT, and the tool generates a structured narrative that reframes technical documentation into plain-text descriptions of how a feature may function in the course environment. It surfaces claims, ratings, and explanations that indicate limitations, then explains how they may introduce barriers to student learning, such as when students cannot reach required materials, cannot complete activities, or cannot access key functions. The report organizes details in a consistent format so reviewers can more easily compare products and decide what to verify, which questions to ask vendors, and where to focus follow-up efforts. The tool accelerates interpretation, but verification and final decisions remain with reviewers, aligned with institutional policies.
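As a rough illustration of the reframing step described above (the VPAT Evaluator's actual implementation is not published, so the function and field names here are hypothetical), the core idea is a consistent mapping from the standard VPAT conformance terms to plain-language notes a reviewer can act on:

```python
# Hypothetical sketch: translating VPAT conformance claims into
# plain-language notes for reviewers. The conformance levels are the
# standard VPAT terms; everything else is illustrative only.

CONFORMANCE_NOTES = {
    "Supports": "No barrier expected; spot-check in the course environment.",
    "Partially Supports": "Possible barrier; verify which functions are "
                          "affected and ask the vendor about remediation.",
    "Does Not Support": "Likely barrier; students may be unable to reach "
                        "materials, complete activities, or access key functions.",
    "Not Applicable": "Criterion does not apply to this product.",
}

def summarize_claim(criterion: str, level: str, remarks: str = "") -> str:
    """Turn one VPAT row into a consistent, reviewer-facing sentence."""
    note = CONFORMANCE_NOTES.get(
        level, "Unrecognized claim; flag for manual review."
    )
    line = f"{criterion}: {level}. {note}"
    if remarks:
        line += f" Vendor remarks: {remarks}"
    return line

report = [
    summarize_claim("1.1.1 Non-text Content", "Partially Supports",
                    "Some charts lack alt text."),
    summarize_claim("2.1.1 Keyboard", "Supports"),
]
print("\n".join(report))
```

Because every row is rendered the same way, reviewers can scan down a report, compare products side by side, and immediately see which claims need verification or vendor follow-up.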

Since its release in May 2025, the VPAT Evaluator has supported more than 1,200 requests from 300+ users across 200+ institutions. We share these numbers as a signal that many campuses face a similar challenge: interpreting accessibility documentation at a scale that outpaces the specialists available to do that interpretation.

What We’re Learning

  1. Clarity Saves Time

Generative AI can perform an initial scan, turning technical documentation into clear narrative summaries. A faster first review helps teams move beyond basic interpretation and focus on deeper evaluation, vendor follow-up, and instructional planning.

  2. Structure Guides Reliable Output

AI performs best with structured workflows. During development, we iterated prompt strategies and output formats to align results with established accessibility standards as well as the needs of procurement and instructional technology teams. A consistent structure makes it easier to compare products and communicate findings across roles and levels of expertise.
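To make the idea of a consistent structure concrete (this is a sketch under assumptions, not the VPAT Evaluator's published schema; all names are illustrative), one option is to fix the fields every evaluated product must produce, so reports are directly comparable across products and reviewers:

```python
# Illustrative only: a fixed report structure so every evaluated product
# yields the same sections for reviewers. Field names are assumptions.
from dataclasses import dataclass, field, asdict

@dataclass
class ReportSection:
    criterion: str               # accessibility criterion cited in the VPAT
    claim: str                   # vendor's stated conformance level
    plain_language_summary: str  # reviewer-facing explanation of the claim
    follow_up_questions: list = field(default_factory=list)

section = ReportSection(
    criterion="1.4.3 Contrast (Minimum)",
    claim="Partially Supports",
    plain_language_summary=(
        "Some interface text may not meet minimum contrast, which can "
        "make content hard to read for low-vision students."
    ),
    follow_up_questions=["Which screens fall below the required contrast ratio?"],
)
print(asdict(section)["claim"])
```

Prompting the model to fill a schema like this, rather than write free-form prose, is one way to keep output aligned with established standards and easy to hand off across procurement and instructional technology roles.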

  3. Human Expertise Stays Central

AI-generated output is a starting point, not a final verdict. Reviewers verify the narrative, interpret claims in context, and determine what additional information is needed. Time saved on interpretation becomes time reinvested in judgment, validation, and planning for real course use.

AI-Supported Workflows

Higher education faces increasing expectations to provide accessible digital learning environments. Generative AI offers a practical way to support these requirements while also streamlining the everyday work of technology evaluation. Our experience shows that when AI manages the first pass of complex documentation, reviewers have more time to focus on the nuanced decisions that shape student learning environments.

For institutions looking to integrate generative AI into operational practices, three key takeaways stand out:

  • Begin with clarity: Use AI to create an initial review that prepares decision makers for deeper investigation and planning.
  • Structure the process: Provide clear prompts or checklists so AI output aligns with established accessibility standards and institutional needs.
  • Expand human review: Route the first AI-generated pass to the right people so all stakeholders can verify initial claims, add course context, and shape follow-up before decisions are finalized.

Accessibility is everyone’s responsibility, and new tools make it easier for more people to contribute their expertise.

Authors: Rebecca McNulty & Ahmad Altaher Alfayad, Center for Distributed Learning, University of Central Florida

Rebecca McNulty

Instructional Designer, Center for Distributed Learning at the University of Central Florida

Ahmad Altaher Alfayad

Applications Programmer, Center for Distributed Learning at the University of Central Florida
