Last semester, I began a graduate program to study Technology, Cybersecurity, and Policy. Like many other people, I have also been digging deeper into anti-racism since the summer of 2020. As my learning about these two topics coincided, it didn’t take me long to start looking into the ways they intersect. I researched the topic at the end of last semester and have since narrowed that research to focus on higher education for this post.
There are many industries in the United States where white people are disproportionately over-represented in positions of authority. This is often an outcome of historical racial bias, which in turn perpetuates racial bias across society in leadership, policies, and access to future opportunities. Higher education is one such industry, and it has been working to address bias and inequality with renewed urgency since the summer of 2020, when the murder of George Floyd refocused national attention on racial inequality in the U.S. But the bias that institutions must confront is not limited to the institutions themselves: it is also embedded in the technology they use, and without proper oversight, those issues often go unaddressed. It is therefore essential that institutions understand the many ways that technology and cybersecurity can fail their students, especially students who are Black, Latinx, Native American, Asian, and other students of color. To strive toward anti-racism in higher education, we must recognize the many forms that racism can take within cyberspace and address those problems wherever they occur.
Unequal Vulnerabilities
To begin with, many technologies offer a platform for race-based bullying or cybercrime. Harassment in cyberspace is more scalable and less location-bound than harassment that takes place in person. A quarter of Black Americans say they’ve faced race-based harassment online, and women of color face disproportionately more harassment on Twitter than other groups. A wide range of technologies and social media platforms can be used for racial harassment, and technologies can be hacked to cause further harassment or even facilitate hate crimes. One example is “Zoom bombing,” which spiked in frequency in the spring of 2020 before Zoom tightened its security practices. These attacks often had a racial element and frequently used anti-Black and anti-Asian language (the latter stemming in part from racist ideas about the origins of the pandemic). Any communication technology can be hacked and then used in a race-based attack. Another example is the exploited vulnerabilities in Amazon’s Ring security cameras and microphones, which hackers have used as a platform for racist harassment.
The previous examples of racism in cyberspace are preventable because they are caused by people abusing technology rather than by biases embedded in the technology itself. Still, it is important to recognize that any tech students use could be susceptible to such race-based attacks or harassment. It is therefore essential that educators ensure, whenever possible, that students are not experiencing such harassment, especially within the tools an institution uses for learning and communication.
Bias in Technology
The following examples of systemic racism in cyberspace concern bias embedded within technology itself. These may be just as harmful as the previous examples, but they are potentially less obvious and harder to address because the technologies themselves, rather than individual people, perpetuate the issues.
Systemic racism can be subtle in its impacts on cyberspace. The use of algorithms, facial recognition, and surveillance all have the potential for very negative impacts, especially on students who are Black, Latinx, Native American, or other people of color. Each of these technologies is marketed as a positive:
- algorithms are supposed to help filter data quickly and make predictions to find solutions faster;
- facial recognition allows individuals to use their faces as biometric data to do things like unlock their phones or make payments; and
- surveillance, although the word often carries a negative connotation, is supposed to serve as a tool for promoting safety and protection.
However, just as in many aspects of society outside of cyberspace, even things that are promoted as helpful or positive can carry embedded bias that furthers inequality and does a disservice to people’s safety and their access to education. Algorithmic bias is often the unintended consequence of training an algorithm on biased historical data, or of code written from a biased point of view. Failing algorithms can widen racial divides in a number of circumstances, from determining who needs increased medical care to predicting the likelihood that someone who has previously been arrested will reoffend. Meanwhile, the bias in facial recognition, far from providing the convenience of unlocking a smartphone, can cause misidentification in arrests: the technology misrecognizes people of color more often than white people, women more often than men, and children and the elderly more often than all ages in between. Lastly, surveillance can be combined with facial recognition technology to monitor the movement of people, or used on the internet to monitor the search terms, activities, and communications of specific people online.
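To make the algorithmic-bias point concrete, here is a minimal, hypothetical sketch in Python. Every group, rate, and number in it is invented for illustration; it simply shows how a naive risk model trained on unevenly recorded historical data can “learn” a racial disparity that does not exist in the underlying behavior.

```python
# A minimal, hypothetical sketch of how biased historical data produces a
# biased model. All groups, rates, and numbers here are invented for
# illustration and do not come from any real dataset.
import random

random.seed(0)

TRUE_RATE = 0.30           # both groups reoffend at the SAME true rate...
RECORDING_RATE = {         # ...but over-policing means group B's offenses
    "A": 0.4,              # were recorded (arrested) twice as often as
    "B": 0.8,              # group A's
}

# Build a "historical" dataset of recorded offenses.
history = []
for group, record_rate in RECORDING_RATE.items():
    for _ in range(10_000):
        offended = random.random() < TRUE_RATE
        recorded = offended and random.random() < record_rate
        history.append((group, recorded))

# "Train" a naive risk model: score each group by its recorded arrest rate.
for group in RECORDING_RATE:
    records = [rec for g, rec in history if g == group]
    rate = sum(records) / len(records)
    print(f"Group {group} learned risk score: {rate:.2f}")

# Prints roughly 0.12 for group A and 0.24 for group B: the model concludes
# group B is twice as "risky" even though behavior is identical, because
# the bias lives in how the data was collected, not in the people.
```

The disparity the model reports comes entirely from how the historical data was collected, which is why auditing training data matters just as much as auditing the code itself.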
Unequal Access
Finally, another major issue from the physical world that has crept into cyberspace is access, which contributes to the digital divide. Formerly redlined neighborhoods, rural areas, and a significant portion of Tribal Lands lack high-quality internet access. While the digital divide is not strictly race-based (white people in rural and other underserved areas face their own connectivity issues), historical racism has a significant impact on where internet connectivity problems exist.
This divide, in turn, contributes to poor access to education, information, employment, telehealth, and more. The limited number of internet providers in the U.S. creates little competition between them, which can keep rates high because nothing drives them down. When rates are too high, poverty-impacted people must forgo digital access and all the privileges that come with it. Additionally, in some cities, internet providers have replicated the patterns of redlining within their own coverage maps. Because formerly redlined neighborhoods lack the accumulated wealth of neighborhoods that were not redlined, residents may be unable to pay as much for broadband access. As a result, providers do not connect high-speed fiber networks to those neighborhoods, citing apparent low demand (though a lack of available funds is the more likely reason), which carries historical inequalities into the present day.
While access to high-speed broadband remains unequal, the racial gap in smartphone ownership is insignificant. A 2019 study from Pew Research Center shows that while Black and Hispanic adults have less access to computers and home internet than white adults, they own smartphones at a similar rate. Because smartphones can use cellular data or connect to public Wi-Fi, they can serve as an alternative means of accessing the internet for people without home computers or high-speed home connections. However, smartphones cannot fulfill all the functions of a computer connected directly to the internet. While a large portion of the web is now ‘mobile first,’ many aspects of the web remain considerably less accessible on a phone than on a computer. For example, infographics, data tables, online forms, and many PDFs can be difficult to navigate from a smartphone. Unfortunately, such digital forms are often needed when applying for jobs or schools, filling in financial information, and accessing health services. Furthermore, public Wi-Fi can expose users to attack if the network is insecure, as is the case for many free and public Wi-Fi networks.
Current Events and Data Insecurity
Just this semester, the school I’m attending was spotlighted in the news for a major hack of a third-party system that compromised over 300,000 student records. I recently received one of multiple communications from the university stating which records may have been affected, including items like veteran status, visa status, disability status, medical information, and occasionally financial information. While I am not feeling too great about a breach that could have compromised some of my own data, I have also been considering the ways in which this hack might disproportionately impact students who already experience marginalization. Because of societal inequality, the recovery from this hack could be unequal as well, with international students, students with disabilities or medical issues, and impoverished students bearing the heaviest fallout from their data being exposed online.
Looking Forward – Consider the Impacts of the Tools We Use
Although the tech industry bears much of the responsibility for addressing racism in cyberspace, as does public policy, higher education should be cautious about how it uses technology because of the potential impacts on its students. Racial bias is not limited to what exists within institutions themselves; the tools they use carry their own bias. I recommend the following as we all try to address these issues:
- Use Technology Intentionally – Education practitioners should use technology intentionally and understand the potential issues that their students could face in using different tools.
- Continue Anti-racism Efforts – Institutions should continue making their own efforts towards anti-racism by modifying their hiring practices, educating their students and staff about racial inequality, and properly compensating those who work on anti-racism projects.
- Consider Potential Bias in Tech Tools – When adopting new technology (anything from an LMS with features that are tricky to use on mobile to a tool that relies on algorithms, and anything in between), administrators should consider potential bias issues the same way they consider potential accessibility issues. It is essential that schools understand the bias within the systems they use, even when those systems are provided by an outside vendor.
Further Learning
Although the topic of cyber racism is still relatively niche, it is expanding. I recently attended a virtual viewing of the film Coded Bias, which examines the impacts of algorithmic bias and facial recognition on society. I am also making my way through a growing list of books that tackle a range of topics within cyber racism.
Check out the following to learn more, and email us if you can think of others I didn’t include:
- Black Software: The Internet & Racial Justice, from the AfroNet to Black Lives Matter – Charlton D. McIlwain (2019)
- Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor – Virginia Eubanks (2019)
- Race After Technology: Abolitionist Tools for the New Jim Code – Ruha Benjamin (2019)
- Algorithms of Oppression: How Search Engines Reinforce Racism – Safiya Umoja Noble (2018)
- Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy – Cathy O’Neil (2017)
- Dark Matters: On the Surveillance of Blackness – Simone Browne (2015)