Equity in a World of Artificial Intelligence
Published by: WCET | 4/20/2023
Tags: Artificial Intelligence, Digital Divide, Diversity/Equity/Inclusion, Technology
Since our last blog in January on generative artificial intelligence (AI), the field has changed by leaps and bounds.
The higher education press continues to increase its coverage with multiple articles, blogs, and op-ed pieces in The Chronicle of Higher Education and Inside Higher Ed, among other outlets. Much of that coverage, though, continues to focus on the pedagogical implications of generative AI, including academic integrity concerns. Equally important, but discussed far less, are the equity considerations around how AI can and should be leveraged in higher education. Today we’ll look at both the positive and potentially negative aspects of generative AI in higher education as it relates to educational equity.
According to the U.S. Department of Education’s Office of Educational Technology, “Technology can be a powerful tool for transforming learning. It can help affirm and advance relationships between educators and students, reinvent our approaches to learning and collaboration, shrink longstanding equity and accessibility gaps, and adapt learning experiences to meet the needs of all learners.” Artificial intelligence, when used deliberately and carefully, can advance equity by improving educational accessibility and assisting second-language learners, among others.
AI can be especially powerful in addressing learner accessibility. For example, students with dyslexia can benefit from AI, as a December 10, 2022, Washington Post article demonstrated when it described how a British landscaper with dyslexia uses ChatGPT to rewrite his emails so that they are more professional and more easily understood. Additionally, students and faculty with ADHD are finding generative AI useful in approaching research and writing. As Maggie Melo wrote in her February 28, 2023, op-ed in Inside Higher Ed, “My thinking and writing processes are not linear. ChatGPT affords me a controlled chaos.” Melo goes on to describe how the need to create an abstract can feel overwhelming despite having done so numerous times. After asking ChatGPT “How to write an abstract,” however, she received an outline that, as she put it, “provides my mind with an anchor to focus on and return to.” Much like our British landscaper, non-native English speakers may also benefit from ChatGPT’s ability to revise and rephrase text for grammatical correctness and clarity.
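For readers curious about what this kind of rewriting looks like in practice, here is a minimal sketch that scripts it against the ChatGPT API using the openai Python package (the pre-1.0 interface current as of this writing). The model name, system prompt, and sample draft are our own illustrative assumptions, not details from the articles above.

```python
# A minimal sketch: asking ChatGPT to revise a rough draft for clarity
# and professionalism. Assumes the openai package's pre-1.0 interface
# and a valid API key; the prompt and draft text are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

draft = (
    "hi prof, i cant make office hours tmrw bc of work, "
    "can we meet another time? thx"
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "system",
            "content": "Rewrite the user's text so it is professional, "
                       "grammatically correct, and clear. Keep the meaning.",
        },
        {"role": "user", "content": draft},
    ],
)

print(response.choices[0].message.content)
```

The same pattern, with a different system prompt, serves the other use cases mentioned above, such as revising text written by a non-native English speaker.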
In her 2019 work, Race After Technology, Princeton University sociologist Ruha Benjamin wrote about what she calls the “New Jim Code” and the problem of data and algorithmic bias.
Benjamin defined the “New Jim Code” as “the employment of new technologies that reflect and reproduce existing inequities but that are promoted and perceived as more objective or progressive than the discriminatory systems of a previous era” (5). Generative AI is trained on large existing data sets, mostly scraped from the internet. This means that models ingest biased information, as well as information that is likely to over-represent certain groups, such as white, economically well-off individuals. As the old adage goes, “garbage in, garbage out.”
The result is algorithmic bias, which “describes systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others” and “occurs when an algorithm produces results that are systemically prejudiced due to erroneous assumptions in the machine learning process.” In addition to challenges with training data, algorithmic bias is also shaped by the implicit bias of generative AI developers, a field in which white men are over-represented and women and some racial groups, such as Blacks and Latinos, are under-represented. As Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher wrote in The Age of AI: And Our Human Future, “The algorithms, training data, and objectives for machine learning are determined by the people developing and training the AI, thus they reflect those people’s values, motivations, goals, and judgment” (77).
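To make the “garbage in, garbage out” dynamic concrete, here is a small toy sketch with invented data. It is not how a large language model works internally, but it illustrates the mechanism: a model that simply predicts the most frequent association in its training data will not only reproduce a skew in that data but harden it into a deterministic output.

```python
# Toy illustration of bias amplification. The "corpus" is invented:
# "engineer" co-occurs with "he" 90 times and "she" 10 times, the kind
# of skew web-scraped training data has been shown to contain.
from collections import Counter

training_pairs = [("engineer", "he")] * 90 + [("engineer", "she")] * 10

def predict_pronoun(word: str) -> str:
    """Predict by majority vote over the training data (argmax)."""
    counts = Counter(p for w, p in training_pairs if w == word)
    return counts.most_common(1)[0][0]

# A 90/10 imbalance in the data becomes a 100/0 imbalance in the output:
# the model does not merely reflect the skew, it amplifies it.
print(predict_pronoun("engineer"))  # always "he"
```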
In addition to challenges around algorithmic bias, AI also presents challenges of access. According to the 2021 American Community Survey One-Year Estimates, “Types of Computers and Internet Subscriptions,” of the 127,544,730 households in the United States, 6,320,698 (4.9%) had no computing device, including a smartphone, and 12,374,652 (9.7%) had no internet subscription, including no cellular data plan. This digital divide is especially acute for low-income Americans: 15.3 percent of those with incomes below $75,000 lack internet access. A 2021 Pew Research Center study found that 15 percent of all American adults were smartphone-only internet users, a number that rose sharply to 28 percent among 18- to 29-year-olds. Even that figure is not evenly distributed across racial groups: 25 percent of Hispanic young adults and 17 percent of Black young adults were smartphone-only internet users, compared to 10 percent of White young adults.
Why does the digital divide matter when we explore equity and AI? Simply put, most generative AI is difficult to use without an internet connection. Although text-based generative AI like ChatGPT can be used on mobile devices, response times over a cellular connection may be slower than one would experience with a high-speed internet connection. Making more sophisticated queries with long outputs would be difficult at best.
In addition to challenges resulting from the digital divide, there are challenges associated with the cost of the generative AI tools themselves. ChatGPT, which started out free, is now partially behind a paywall, raising the question of how much longer these tools will remain freely available. In fact, the astronomical costs of running generative AI almost guarantee that such paywalls will become more common. CNBC reports that merely training a large language model can cost more than $4 million, and some estimates put the cost of running ChatGPT at $100,000 per day, or roughly $3 million per month.
What will be the result of fewer students having access to generative AI? We run the risk of the digital divide becoming an AI divide. Students who lack sufficient access will not gain the generative AI skills that will be increasingly necessary as we enter an age of hybrid human/AI work.
Generative AI has the potential to revolutionize society, including higher education, in ways we are still determining. But as higher education professionals, we need to be cognizant of how we leverage it. How can you build upon the promise of generative AI while mitigating some of its challenges?
As we continue to explore the ways in which generative artificial intelligence can impact higher education, it is critical for us to remember that no technology is neutral.
As Kate Crawford puts it in Atlas of AI, “Artificial intelligence is not an objective, universal, or neutral computational technique that makes determinations without human direction. Its systems are embedded in social, political, cultural, and economic worlds, shaped by humans, institutions, and imperatives that determine what they do and how they do it” (211). Does this negate the potential advantages of generative AI and the ways it can improve educational equity? No, but it does mean we should be cognizant of the potential for further educational inequity and work to counter it.