
Critical Race Approaches to Tech Design: Learning from and with Black Youth

Published on Jul 17, 2024

Introduction

In recent years, critical information scientists have reckoned with the material and discursive consequences of “race evasive” and “race neutral” approaches to tech design that further exacerbate the racial inequities currently plaguing the Black community (Noble 2018; Benjamin 2019; McMillan Cottom 2016; Daniels 2009a). The material and discursive ramifications of racist technologies are profound, and nearly every facet of US society has been substantially impacted. For instance, Benjamin’s (2019) work on carceral technologies reveals how anti-Blackness exists as the animating force beneath predictive policing software, surveillance technologies and e-carceration systems, which in turn expand the scope of mass incarceration for Black Americans via the “New Jim Code.” Relatedly, Raji et al. (2020) have uncovered racist and sexist infrastructures within image recognition systems, and found that these algorithmic biases overwhelmingly render women, trans and non-binary folx and People of Color as both illegible and criminal. Likewise, Noble’s (2018) work on algorithmic oppression underscores how racialized sexism informs the socio-technical underpinnings of Google technologies, including commercialized search and Google Glass. These sociotechnical matrices of domination sell racially offensive and sexually traumatizing images of Black and diasporic communities to the highest bidder, while simultaneously rendering Black users hyper-visible and hyper-vulnerable to racist attacks online (Noble 2014). Collectively, the groundbreaking contributions of these critical science and technology scholars have revealed the centrality of anti-Blackness as the “default setting” of information technology.

The permanence and pervasiveness of racism, further entrenched and substantiated by anti-Black digital technologies, warrants the development of new approaches to technological design and innovation that can protect, rather than harm, historically marginalized communities. We need new design frameworks that are intersectional, historically anchored, race-conscious and justice-oriented; that center, rather than obscure, the voices and experiences of Black youth and those most directly impacted by racist domination; and that seek to answer the Black community’s resounding call for sociotechnical abolition. This latter piece is crucial, and Benjamin (2019) reminds us that calls for abolition are “never simply about bringing harmful systems to an end, but also about envisioning new ones” (p. 162). What we need is a critical race approach to technology design.

Why Do We Need a CRT Approach to Design?

Technology is often constructed as inherently neutral, unbiased and post-racial because of its reliance upon logic, reason and quantitative data to inform decision making and analysis. The belief in the inherent objectivity of the numerical data, statistics and computational analysis that undergird technology is a byproduct of positivist thought, an onto-epistemological framework that has played a profound role in shaping the field of science and technology. According to positivism, true, objective and verifiable knowledge is derived from reason, logic and observable facts. Other ways of knowing, including intuition, introspection, experiential insight, and historical or cultural knowledge, are believed to be inherently biased. Because it is powered by numbers, strings of code and statistical analysis, technology is heralded as inherently objective - the “great equalizer” - impervious to the biases created by personal opinions or feelings. For these and other reasons, the analyses, deductions and outputs produced by technology are believed to be more trustworthy than those of human decision makers.

However, critical science and technology studies (STS) scholars are challenging these taken-for-granted assumptions about technology, calling attention instead to how techno-solutionist narratives are often ahistorical, uncritical and dangerously misleading. As scholars like Ruha Benjamin, Timnit Gebru, Safiya Noble, Joy Buolamwini and others have articulated, technology has emerged as the newest iteration of white supremacy, and techno-racial domination is the new frontier for racial justice activism. Indeed, the rhetorical de-racialization of technology is by design - a tactic of white supremacy that has a rich and well documented history in science and technology. Like other tools of modernity and globalization, technology operates under the veil of ‘colorblind’ racism, and the relative invisibility of anti-Black infrastructures helps to obscure how race and racism become encoded into technological hardware, software and infrastructures (Benjamin 2019). Because the role that anti-Blackness plays in data collection, curation and analysis is actively obscured via “Black Box” protocols, racist outcomes produced by seemingly ‘unbiased’ technologies can be rationalized as either empirically true, or as unpreventable “glitches” in an otherwise effective system (Broussard 2023).

Queer, feminist and critical race scholars have long interrogated the nomenclature surrounding computational and technological errors, or as they are more commonly called: bugs, glitches, and cracks in the code. Each of these terms is meant to convey images of small, innocuous oversights, and asks us to assume positive intent when interrogating how technologies can continually foster disproportionate harm towards hyper-vulnerable populations. Cumulatively, terms like ‘glitch’ or ‘bug’ suggest that the racialized violences, disparities and discrimination that result from ‘good tech gone wrong’ are not systemic in nature; rather, they are isolated, unforeseeable and, most importantly, unexplainable defects that do not reflect the larger system (Broussard 2023).

However, a historically situated, critical race analysis can prove otherwise.

Sankofa Historical Analysis

By submerging contemporary issues of race, racism and technological innovation into a rich historical context, I purposely engage in a praxis of Sankofa: a Ghanaian term meaning “to go back and get it.” Black feminists and CRT scholars alike use Sankofa as methodology, understanding ahistoricism as one of the primary ways that white supremacist power structures remain obscured from view and that socially and technologically engineered inequalities remain stubbornly intact. Sankofa requires contemporary social issues to undergo a “deep historical embedding,” and in doing so, illuminates how racism’s past, present and future are intricately and indivisibly connected. In order to move towards an anti-racist technological future, we must first delve into the anti-Black technological past. We must “go back and get it.”

White supremacy as the organizing logic for scientific advancement and technological innovation

Benjamin (2019) provides compelling historical insight into the interdependence of white supremacy, eugenics, and technological innovation, noting how the impetus behind major technological developments has consistently been tethered to notions of imperialism, racialized death, and white supremacy. It is no secret that the earliest version of the internet - called ARPANET - was commissioned by the US Department of Defense at the height of the Cold War, when military commanders were seeking a computer communication system “without a central core, with no headquarters or base of operations that could be attacked and destroyed by enemies thus blacking out the entire network in one fell swoop” (Featherly 2023). Because imperial logics and war tactics guided its infrastructure and development, ARPANET - and later, the contemporary internet - retained the tentacle-like structure military officials had envisioned. Similarly, the US-owned technology company IBM played a central role in the Nazi genocide by facilitating “the regime’s generation and tabulation of punch cards for national census data, military logistics, ghetto statistics, train traffic management, and concentration camp capacity” (Black 2012). Indeed, the technologies created by IBM automated the Holocaust, helping to scale the mass torture and extermination of Jewish people to levels that were previously inconceivable.

While the aforementioned examples highlight how global imperialism and white supremacy directly impacted the design and deployment of colonial technologies beyond the US, historical accounts of technologies designed during and after Black enslavement showcase how these entanglements between racialized death and technological innovation were actually homegrown. Racialized calculations and technological experiments conducted by antebellum scientists on Black bondspeople served to objectively and empirically ‘prove’ the inferiority, inhumanity and disposability of Black life. The technological advancements that emerged as a direct result of anti-Black approaches to technological design are not only harrowing and pervasive, but persist into contemporary times: the tools and treatments derived from forced experimentation on non-anesthetized enslaved Black women are still staples of the field of gynecology (Prunty et al. 2021); the medical knowledge about syphilis gleaned from the scientifically engineered deaths of Black sharecroppers in the Tuskegee experiments continues to inform policies and procedures related to research with vulnerable populations (Corbie-Smith 1999); and the lead abatement practices derived from the deliberate exposure of low-income Black residents to deadly toxins continue to be benchmarks for the housing and real estate industries (Benjamin 2022). These are just three examples of how racialized death and dying have not only been justified, but have existed as a central feature of modern science and technology. As we continue to pull on the thread of objectivity, neutrality and post-raciality that undergirds technological design and scientific advancement, we must consider whether racialized death and disparities are, indeed, just a glitch, or whether they are symptomatic of a larger, more noxious system that requires dismantling.

Racial constructions of humanity and their implications in tech

Though different in terms of time period, discipline, line of inquiry, and technological and scientific outcomes, the aforementioned atrocities share a salient similarity: each technological development arose out of a call for “saving,” “advancing” and “protecting” humanity and the human race. Indeed, these technologies were constructed as instruments of human progress, inching society ever closer to an idealized techno-utopia where “technologies make life easier and more enjoyable for all humans.” Yet the social constructions of ‘humanity’ and ‘social progress’ at the center of these and other technological developments include deeply ensconced beliefs about race, racism and anti-blackness that inevitably determine whose humanity is to be saved, and whose is to be destroyed. Indeed, mainstream articulations of ‘human/ity’ are deceivingly inclusive, and falsely promulgate a narrative of post-raciality that assumes every person, regardless of race, has full, unimpeded access to the rights and privileges afforded to human beings.

In her seminal piece “No Humans Involved: An Open Letter to My Colleagues,” Sylvia Wynter provides insight into the sociological underpinnings of racialized humanity, arguing that the social construction of un/human on the basis of race is a necessary feature of an anti-Black plantation politic. Once Black bodies are coded as non-human, the gratuitous violence, erasure and enslavement levied against them in order to maintain racial capitalism can be justified simply because there is no human involved. Benjamin echoes these assertions, noting that techno-solutionist claims of ‘saving’ or ‘advancing’ humanity “falsely assume we have all had a chance to be human” (Benjamin 2019, p. 32). History has thoroughly shown how human/ity is socially and racially constructed, and how the multiple, ongoing pandemics of white supremacy, settler colonialism and techno-racial capitalism continue to denigrate and dehumanize racialized communities in ways that bring rhetorical commitments to “all humans” into question. One of the most prominent examples is the use of eugenics science to determine objective, measurable and “ostensibly verifiable” determinants of biological purity, intelligence, and subsequently, full humanity (Atanasoski and Vora 2019, p. 15). Not surprisingly, these scientific studies used logic, reason and observable facts to determine that “the figure of the human… [that is] most iconically human… is white and male” and that “the Black is a despised thing-in-itself (but not a person unto him or herself) in opposition to all that is pure, human(e), and White” (Dumas and ross 2016). In this way, white supremacy and colonialism - bolstered by eugenics science - created “a global sliding scale of humanity” which used race as the tool to determine which bodies were marked for violence, servitude and erasure.

While Afro-pessimist scholars have thoroughly detailed how anti-blackness has been used to construct and define (non)humanness in ways that maintain white supremacy and racial capitalism, critical science and technology scholars have wrestled with how these racial codes have become enmeshed into digital technologies to concretize historical imaginations of the Black body as a lifeless object upon which violence, erasure and servitude can be applied without retribution. As Benjamin (2019) notes:

…race, to be sure, is one of our most powerful tools - developed over hundreds of years, varying across time and place, codified in law and refined through custom…and still considered to reflect immutable differences between groups… race itself is a kind of technology - one designed to separate, stratify and sanctify the many forms of injustice experienced by members of racialized groups, but one that people routinely reimagine and redeploy to their own ends (p. 36).

If race is a technology, then anti-blackness can be thought of as an algorithm - defined in the computational sciences as a set of instructions or logical steps designed to solve a problem. With innumerable examples of technology (re)producing life-extinguishing results for Black communities - exacerbating disparities in educational access, mass incarceration, health care and reproductive justice, housing and financial stability, and even life expectancy - new questions have emerged about what “problem” these technologies are attempting to solve. Through this framing, W. E. B. Du Bois’s timeless query - “How does it feel to be a problem?” - has taken on a new meaning in the age of technology.

Contemporary examples of racism with/in technology

By ascribing human value along arbitrary lines of race, the apparatus of anti-blackness - made manifest through eugenic science and race-producing technologies - can rationalize the ongoing denigration of Black bodies as logical, permissible and even necessary by the very institutions positioned to protect, uphold and advance public safety and human rights: in this case, the State and by extension, its technologies. Perhaps the most notable example of this phenomenon has emerged within the context of Black Lives Matter. The implementation of more “humane” and “civic-minded” technologies within the carceral system manifested as a direct response to public outrage against the state-sanctioned killings and mass incarceration of Black Americans. In this way, Black death and dehumanization were the impetus for technological innovations that promised increased public safety, social progress and - most importantly - democracy for “all human beings.” Yet, the very same body cameras, image recognition systems, and e-carceration anklets that emerged as a sociotechnical “fix” to systemic racism have substantially advanced the form, function and severity of state-sanctioned violence against the Black community.

Similarly, the rise in school shootings has sparked an influx of “school safety” technologies that are overwhelmingly implemented in low-income urban schools. These systems, which rely upon predictive analytics, machine learning and computer vision to proactively detect threats of danger, operate based on race-evasive and ahistorical definitions of crime and safety that actually make schools less safe for Black students. Studies have also shown burgeoning connections between the proliferation of AI technologies and the increased brutalization, dehumanization and mass incarceration of Black students, both within and beyond the classroom setting (Laird and Dwyer 2023). In a review of prominent school surveillance platforms, like Gaggle or Social Sentinel, researchers found that Black students were disproportionately flagged for ‘violating’ school safety guidelines during after-school hours (Herold 2019). Because these platforms are directly connected to law enforcement agencies, students who are identified as threats to school safety are contacted by law enforcement, who deploy their own host of algorithmically biased technologies that threaten the lives and wellbeing of Black youth. Whether it’s facial recognition systems that consistently misidentify Black youth as “fitting the description” of a wanted criminal (Thanawala 2023; Hill 2020), automated sentencing programs used to justify harsher sentences and higher bail amounts for Black defendants (Hao 2019; Benjamin 2019; Angwin et al. 2016), or the use of autonomous agents to track and remotely kill People of Color identified as threats to public safety, the anti-Black technologies employed by the criminal (in)justice system work to exacerbate - rather than remediate - racialized disparities in Black life, education and incarceration.

According to Dumas (2016), these and other socio-technical ironies are indicative of the “cultural disgust and disregard for Black life” that continues to render Black people as ‘inhuman’ within America’s social imagination, and subsequently, its technologies (p. 11). He notes: “Black people exist in a structurally antagonistic relationship with humanity. That is, the very technologies and imaginations that allow a social recognition of the humanness of others systematically exclude this possibility for the Black” (p. 13).

While technologies are often deployed in the service of protecting, supporting or advancing “humanity,” their design and development is regularly predicated upon the death, destruction and denigration of racialized others (Atanasoski and Vora 2019). Evidence of this global reality is extensive, and can be seen in the use of Congolese slave labor to mine the coltan used in smartphones and computer chips; the reliance upon Black prison labor for the data labeling used to support the ‘sentience’ of generative AI; the environmental injustices that undergird the placement of subsea fiber optic cables in colonized countries and the data processing centers used to house large language models like ChatGPT; and the African women and girls who are exposed to radiation as they disassemble and recycle the world’s e-waste in toxic electronic graveyards. Ultimately, these historic and racial connotations of un/humanness - and their dialectical relationship with technological design and innovation - are the first step in understanding the need for a race-conscious and historically anchored approach to tech design.

Theoretical Frameworks: Critical Race Approaches to Tech Design

My approach to technology design with Black youth is guided by critical race technology theory (CRTT) in education. While CRTT and critical race theory (CRT) are undoubtedly related and in loving conversation with one another, they are nevertheless distinct not only in their theorization of race, but also in their analysis of schools and other codified systems of white supremacy. While critical race theory has been described as a theory of race and a critique of white supremacy and multiculturalism within schools (Dumas 2016), critical race technology theory is a theorization of blackness and of technology, and a critique of anti-blackness as the organizing logic of schools and digital systems. This distinction is important, as the latter is designed to reckon with anti-blackness as a phenomenon that compounds, but is simultaneously distinct from, white supremacy and racism writ large. As Dumas and ross (2016) note, “only critical theorization of blackness confronts the specificity of anti-blackness, as a social construction, as an embodied lived experience of social suffering and resistance, and perhaps most importantly, as an antagonism, in which the Black is a despised thing-in-itself (but not person for herself or himself) in opposition to all that is pure, human(e), and White.”

Designed in response to calls for more intersectional, techno-structural examinations of digital technologies in educational settings (Vakil 2018; Vossoughi and Vakil 2018; Garcia and Nichols 2021; Tanksley 2016, 2019), CRTT works to expose the intercentricity of anti-blackness as the “default setting” and “organizing logic” of schools and school-based technologies (Tanksley 2022a). This framework disrupts majoritarian narratives that characterize information technologies as post-racial, apolitical and inherently democratic, and instead acknowledges the matrices of domination encoded within twenty-first century information systems (Noble 2012, 2014, 2018a, 2018b). In doing so, CRTT “shifts discourse away from simple arguments about the liberatory possibilities” of technology toward more critical engagements with how digital systems are “a site of power and control over Black life” (Noble 2016, p. 2). Ultimately, CRTT works to expose the racialized layers of subordination (Yosso 2013) embedded within digital technologies that have historically restricted Students of Color’s access to, representation in, and agency over the digital systems that influence their educational, socio-political and technological experiences. The following tenets form the anatomical underpinnings of CRTT in education, and inform my approach to technological research and design with Black youth:

1. The Intercentricity of Algorithmic Anti-Blackness

CRTT in education acknowledges that sociotechnical racism is permanent and deeply ingrained within the very fabric of American society, both on and offline, and should therefore be centralized in discussions of equity and access for Black youth (Bell 1995; Perez Huber and Solórzano 2015). Foundationally, CRTT acknowledges that the “institutional disease of white supremacy” (Solorzano 1997) has been digitally upgraded and algorithmically codified, and that its presence within technological hardware, software and infrastructures is often invisible.

2. The Challenge to Dominant Ideology

CRTT encourages scholars to interrogate dominant narratives of race, gender and technology, and to challenge oversimplified constructions of digital technologies as post-racial, ungendered and politically neutral. Importantly, this tenet showcases how the proliferation of digital and artificially intelligent technologies has introduced new forms of racial violence that make cyber spaces particularly unsafe for Black youth.

3. Commitment to Social Justice

In its struggle toward sociotechnical justice, CRTT aims to completely abolish algorithmic racism, as well as to eliminate all other forms of sociotechnical oppression along axes of class, gender, sexuality and more (Tanksley 2022). As such, a social justice agenda within and by way of technology should aim to expose and eradicate socio-technical and techno-structural racism that sustains anti-Blackness, racial capitalism and environmental injustice for Communities of Color around the globe.

4. The Centrality of Experiential Knowledge

CRTT recognizes that the lived experiences of Youth of Color are legitimate and critical to understanding the current condition of educational inequity and sociotechnical oppression (Solórzano and Yosso 2002; Perez Huber 2009). For far too long, Black youth have been spoken for and about, but rarely are they centered as the experts of their own experiences (Collins 1986). Thus, CRTT positions Black youth’s socio-technical experiences, insights and funds of knowledge as an indispensable source of knowledge that will bring about collective liberation and new technological futures.

5. The Interdisciplinary Perspective

CRTT actively integrates race and racism within an interdisciplinary context by drawing upon scholarship from ethnic studies, feminist theories, communication studies, digital humanities, computer science, and critical science and technology studies.

Though not exhaustive, these tenets form the theoretical underpinnings of a critical race approach to technology design. In the following section, I will demonstrate how I put these theoretical tenets into practice through my construction of a critical race technology course for Black youth from high schools across Southern California.

From Theory to Praxis: Constructing a Critical Race Technology Course

In summer 2020, during the dual pandemics of COVID-19 and anti-Black racism, I designed an undergraduate course titled “Race, Resistance and Technology” as a part of a culturally relevant/responsive/sustaining college access program at a large public university in Southern California. The overarching goal of the course was to foster students’ ability to critically examine the ubiquity of anti-Black racism within the socio-technical architectures (e.g. code, data, algorithms, interface design, etc.) of popular technologies, and to develop ways to resist, subvert and hack these systems in computationally sophisticated ways. In order to accomplish this larger goal, I employed the following strategies: 1) making meaningful, academically rigorous connections between course content and students’ everyday, lived experience with algorithmic racism; 2) engaging students in socio-technical and techno-structural critique of anti-blackness across a variety of technological hardware, software and infrastructures; and 3) providing scaffolded opportunities for students to design and dream up race-conscious and justice-oriented technologies that could protect - rather than harm - Communities of Color.

First and foremost, I began my course design by recognizing youth as holders and creators of knowledge (Delgado-Bernal 2002), particularly as it relates to digital media and internet technology (Tanksley 2016, 2019). Thus, the lessons, activities and learning materials were designed to showcase the experiential knowledge and socio-technical funds of knowledge that Black youth develop beyond the classroom setting. For instance, during the first class session, I asked students to complete a Zoom poll denoting their top three favorite (or most frequently used) social media platforms. I then asked them to list the top three social media apps where they encountered the most anti-Black racism. The students overwhelmingly identified globally popular social media applications, like Instagram and TikTok, as “some of the most racist” platform technologies that they had encountered. This was a catalyst for robust discussions about common forms of anti-Black racism they experience online, including race-based digital harassment, content flagging, shadow banning, viral videos of Black death and dying, and digital Blackface.
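For readers who want to see the mechanics, the ranking exercise reduces to a simple tally. A minimal Python sketch, assuming invented responses rather than the actual class poll data:

```python
from collections import Counter

# Hypothetical poll responses: each student names their top three
# platforms for two prompts (most used, and most anti-Black racism seen).
favorites = [
    ["Instagram", "TikTok", "Snapchat"],
    ["TikTok", "Instagram", "YouTube"],
    ["Instagram", "TikTok", "Twitter"],
]
most_racist = [
    ["Instagram", "TikTok", "Twitter"],
    ["TikTok", "Instagram", "YouTube"],
    ["TikTok", "Instagram", "Facebook"],
]

def top_three(responses):
    """Count every mention across students; return the three most common."""
    counts = Counter(platform for answer in responses for platform in answer)
    return [platform for platform, _ in counts.most_common(3)]

# Platforms that appear on both lists - most used AND most racist.
print(set(top_three(favorites)) & set(top_three(most_racist)))
# e.g. {'Instagram', 'TikTok'}
```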

Following these discussions, we began exploring literature that leveraged critical race theory and Black feminist thought to critically interrogate how and why anti-Black racism exists as the “default setting” (Benjamin 2019) of digital technologies. Students read, annotated and presented on cutting-edge scholarship in critical race technology studies, including Algorithms of Oppression (Noble 2018), Race After Technology (Benjamin 2019), Woke Gaming (Gray and Leonard 2018), and more. The course also incorporated multimedia texts as a crucial dimension of fostering socio-technical consciousness, including popular films and television series (e.g. Black Mirror, They Cloned Tyrone, and Power), hit songs and music videos (e.g. “Dirty Computer” by Janelle Monáe), video games (e.g. Hair Nah) and so much more. The students were so invigorated by these opportunities to leverage their burgeoning sense of sociotechnical consciousness in popular culture that we ultimately decided to host weekly “cyber socials” outside of class. These cyber socials were focused on fostering dialogue and communal knowledge construction (Collins 2018) about issues of race, resistance and technology, as well as learning to operationalize a techno-social “oppositional gaze” (hooks 2012). By learning to read and interrogate a variety of sociotechnical texts, students gained a critical awareness of how anti-Black digital racism indexes, reifies and extends racial domination within and beyond the screen.

In addition to discussing the material and discursive consequences of racist technologies, including predictive policing software, commercialized search engines, image recognition systems, and artificial intelligence agents, we also examined ways People of Color transformatively resist technological racism through the design and deployment of abolitionist platforms. For instance, students researched Appolition, a crowdfunding application that uses automated spare change donations to pay bail for low-income People of Color, and BlackBird, a web browser that uses race-conscious machine learning algorithms to produce more culturally responsive search engine results for Black internet users. These and other justice-oriented technologies were introduced as a way to catalyze students’ ability to “freedom dream” (Kelley 2002) in the digital world. Using critical race and Black feminist technology studies to nuance Robin D. Kelley’s (2002) seminal work, I imagine techno-social freedom dreaming as the process by which Black youth leverage counterhegemonic computational practices and sociopolitical funds of knowledge to not only dream up, but to simultaneously bring forth abolitionist technologies that can foster transformative change in their lives and schooling experiences.
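Appolition’s central mechanic - automatically rounding purchases up to the next dollar and pooling the spare change for bail funds - reduces to a small piece of arithmetic. A sketch with invented purchase amounts; the real app’s exact rules may differ:

```python
import math

def round_up_donation(amount: float) -> float:
    """Spare change on one purchase: the gap to the next whole dollar.
    A whole-dollar purchase yields 0.0."""
    return round(math.ceil(amount) - amount, 2)

# A hypothetical week of card purchases
purchases = [4.25, 12.60, 3.99]
weekly_donation = round(sum(round_up_donation(p) for p in purchases), 2)
print(weekly_donation)  # 0.75 + 0.40 + 0.01 = 1.16
```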

It was at this critical juncture that we invited programmers, software developers and platform designers into our class to help teach us how to critically “read” and decode the majoritarian narratives that are embedded in algorithmic functions, and to subsequently rewrite and rebuild these systems in race-conscious ways. Students were able to attend a virtual presentation by an Apple technician, who explained the foundations of machine learning, image recognition and algorithmic bias. Students got to examine, test and interrogate AI software in a variety of ways, including talking with Moxie - a social-emotional support robot - and ELIZA - one of the earliest conversational AI systems ever created. They also got to create a rudimentary AI agent using Google’s Teachable Machine, and researched and interacted with large language models, including ChatGPT and Bing. All of these activities worked to deepen students’ computational understandings of sociotechnical infrastructures, and prepare them to design and dream up algorithmically and racially just platform technologies.
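The intuition behind a Teachable Machine-style classifier can be approximated in a few lines: embed each training example, average the embeddings per class, and label new inputs by the nearest class centroid. The sketch below is an illustration of that idea, not Google’s actual implementation; the random vectors stand in for the image embeddings a pretrained network would produce. It also makes the course’s bias lesson concrete: the centroids are nothing more than averages of whatever examples the trainer supplied.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(examples_by_class):
    """Average each class's embedding vectors into a single centroid."""
    return {label: np.mean(vectors, axis=0)
            for label, vectors in examples_by_class.items()}

def classify(centroids, embedding):
    """Label an input by the class whose centroid is nearest to it."""
    return min(centroids, key=lambda c: np.linalg.norm(centroids[c] - embedding))

# Stand-in "embeddings": two clusters playing the role of two image classes.
examples = {
    "class_a": rng.normal(0.0, 1.0, size=(20, 8)),
    "class_b": rng.normal(3.0, 1.0, size=(20, 8)),
}
centroids = train(examples)
print(classify(centroids, rng.normal(3.0, 1.0, size=8)))  # almost surely "class_b"
```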

After examining the algorithmic inner workings and design interfaces of justice-oriented technologies, and comparing/contrasting them to mainstream technologies like TikTok and Instagram, the students engaged in sociotechnical freedom dreaming; that is, they began to dream up and design their own justice-oriented technologies that could bring forth new worlds and Black digital futures that were profoundly different from the ones they inherited. The course culminated in a critical race technology project where Black youth worked collaboratively in groups of 4-5 to design a technological innovation that could address a specific manifestation of anti-Black racism in either an analog or digital setting. Students identified a range of issues related to anti-Black racism, including socially engineered health care deserts, underfunded and under-resourced public schools, the consequences of race-based stress and so much more. A critical race design rubric was created as a way to guide students’ approach to their design, and was one of the formal mechanisms I used to articulate and encourage critical race approaches to tech design [See Table 5.1]. Each design group was given 10-15 minutes to present their abolitionist technologies to the class in what we called a “technology showcase.”

Table 5.1: Critical Race Computational Thinking Design Rubric

A Clear Critique of Anti-Black Racism

Projects must have a clear articulation of the specific manifestation of anti-Black racism being addressed (e.g. police brutality, race-based digital harassment, limited availability of trans-inclusive & race-conscious mental health services, etc.) and a detailed explanation for why this issue is important to remediate.

Students are encouraged to consider how multiple, interlocking systems of oppression create the targeted issue (e.g. redlining, property taxes and socially engineered poverty might collectively create educational inequities in urban schools)

A Techno-Social Solution to Anti-Black Racism

Projects should clearly articulate how and why the proposed technology can work to heal, protect and/or uplift the Black community.

A Clear Consideration for Intersectionality & Accessibility

How will you ensure that your platform interface is accessible and equity-oriented for multiply marginalized groups? How does your design consciously consider interlocking systems of oppression that affect a range of Black users, including racism, sexism, transphobia, ableism, etc.?

An Incorporation of Experiential Knowledge

What data (e.g. interviews, personal reflections, zoom polls, informal peer conversations, member checks, user testing, etc.) and/or literature (from class readings, lectures or otherwise) did you use to inform your design process?

A Justice-Oriented Infrastructure

How do your socio-technical infrastructures (e.g. code, algorithms, content moderation, data sharing protocols, etc.) center the needs, interests and experiences of Black users?

Projects should also consider how the proposed technology could be misused or manipulated to uphold, rather than remediate, anti-Black racism (e.g. if you make a profit by selling user data, what could go wrong? If your content moderation algorithms block hate speech, how can you ensure they don’t identify AAVE or critiques of white supremacy as hate speech? If you’re designing a Black-centric LLM or chatbot, how will you ensure your servers aren’t contributing to environmental racism in BIPOC communities?). The content moderation pitfall is illustrated in the sketch that follows this rubric.
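To make the moderation concern concrete, consider a purely string-matching filter; the blocklist and posts below are invented for illustration. Because it matches surface text with no context, it flags critiques of racism just as readily as racism itself - precisely the failure the rubric asks students to anticipate:

```python
# Naive keyword moderation: flag any post containing a blocklisted phrase.
# Both the blocklist and the example posts are illustrative only.
BLOCKLIST = {"white supremacy", "racist"}

def naive_flag(post: str) -> bool:
    """Return True if any blocklisted phrase appears, ignoring context."""
    text = post.lower()
    return any(phrase in text for phrase in BLOCKLIST)

posts = [
    "This policy is racist and we should say so.",        # a critique
    "Students organized a teach-in on white supremacy.",  # an announcement
]
print([naive_flag(p) for p in posts])  # [True, True] - both wrongly flagged
```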

Methods

Data for this study spans three separate cohorts of students, and includes semi-structured interviews with each participant; student work collected during and after formal class sessions; recorded lectures and teaching materials; and research memos and field notes.

For the final project, students were asked to operationalize all of the course learning to design a technology that could address a specific manifestation of anti-Black racism and create more life-sustaining realities for Black people. These presentations, which included PowerPoint slides, blueprints, strings of code, skits and videos, were all recorded, and the Zoom chats and class discussions were saved and transcribed.

Data analysis

A grounded theory approach (Glaser and Strauss 1967) informed my coding process, and I searched for codes and themes emerging from continuous and systematic review of the data corpus. For the purpose of this chapter, I focused on how my use of critical race technology theory to teach about technology and technological design influenced students’ approach to engaging, analyzing and designing technologies.

Specifically, I was interested in seeing what types of design tenets the students were employing, either explicitly or implicitly; how they were defining and solving the problem of anti-blackness; and how critical race technology theory and abolitionist commitments were being explicitly taken up in their project goals and their project infrastructures. Guided by CRTT’s commitments to prioritizing experiential knowledge and the voices of Black students, I began by constructing a coding tree based on themes and patterns that emerged from student interviews. I came up with initial codes related to students’ interests, such as “use tech to resist and subvert racism,” “connections to family, culture and everyday life,” and “fun, engaging and joy-centered learning” that emerged as important to students in the post-interviews. As I coded, I jotted down questions and/or clarifications that arose when using the existing codes, as well as potential additions or deletions from the coding scheme. After coding all of the student interviews, I used the coding scheme to code the student data from the course lectures, including audio transcripts of class discussions, Zoom chat conversations, and students’ final projects and presentations. An important part of this coding process included re-watching all of the recorded lectures, and paying close attention to the topics, activities and class discussions that students identified as “the most memorable” and “the most impactful” in their final interviews.
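As a data structure, a coding tree is simply a mapping from codes to the excerpts assigned to them. A minimal sketch, reusing the initial codes named above with invented sources and placeholder excerpts; the interpretive act of deciding which code an excerpt warrants remains the researcher’s, not the program’s:

```python
from collections import defaultdict

# Initial codes from the interviews, stored alongside the excerpts
# assigned to them so the analysis stays auditable.
codes = [
    "use tech to resist and subvert racism",
    "connections to family, culture and everyday life",
    "fun, engaging and joy-centered learning",
]

coding_tree = defaultdict(list)

def assign(code: str, source: str, excerpt: str) -> None:
    """Record one manually coded excerpt together with its source document."""
    coding_tree[code].append({"source": source, "excerpt": excerpt})

assign(codes[0], "interview_03", "placeholder excerpt")
assign(codes[0], "zoom_chat_week2", "placeholder excerpt")
print({code: len(items) for code, items in coding_tree.items()})
```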

While analyzing the students’ final projects and presentations, I engaged in a similar process of open-coding, re-coding, expanding, collapsing and deleting codes. I completed the data analysis in two waves. Wave one focused on analyzing student projects, presentations and classwork from Cohort 1 (Summer 2020). After coding for patterns and similarities across student work, I found five design features that emerged across the student projects. I formalized these five design tenets into a design rubric, and used the rubric to guide the student projects for Cohort 2 and Cohort 3. My Wave 2 analysis then focused on how the design rubric was taken up, tweaked and expanded by Cohort 2 and 3 students. I then reviewed all of the projects across all three cohorts simultaneously, and refined my design tenets.

Findings: Black Youth Employing Critical Race Approaches to Technology

Data analysis revealed five main features of critical race approaches to technology design: 1) an understanding of the centrality of algorithmic anti-blackness in technology design and innovation; 2) an explicit and unapologetic commitment to disrupting and dismantling algorithmic anti-blackness; 3) socio-technical and techno-structural solutions that consider historical context, leverage experiential knowledge and challenge post-raciality; 4) engagement with radical, interdisciplinary bodies of scholarship, including feminist, dis/abled, queer and trans, and Indigenous ways of knowing; and 5) a prioritization of Black joy, justice and collective healing in the physical, intellectual and spiritual approach to design.

For the purpose of this chapter, I have selected two student presentations that I believe powerfully exemplify the components of critical race approaches to technology design. In addition to deploying the five aforementioned design considerations, these groups proactively considered how anti-blackness enmeshed within socio-technical infrastructures could be misused and misappropriated, and designed with these caveats in mind. They also produced technological designs that demanded a radical re-envisioning of social, political and economic systems as they currently exist. In doing so, they designed technologies for speculative futures and alternate realities, actively refusing to limit their freedom dreaming to the constraints and contradictions of the current system. Ultimately, these projects exemplified critical race approaches to technology design because they pushed the boundaries of what is, what should be, and what can be possible in technological design and innovation.

Project 1: The “Karen Katchers” App

Group A, which proposed an app called Karen Katchers, began their presentation by acknowledging the links between experiencing racial discrimination and the development of race-related stress among Black youth. As a result, they designed an app that could detect and deter racist incidents in real time. As Ashleigh explains, the goal of Karen Katchers is to “save people from having uncomfortable situations and encounters with Karens because, as we know, encounters with Karens can have bad effects on our mental health.”

In discussing the primary features of their technology, the Karen Katchers group explains that they use GPS mapping technology to spotlight locations where racist incidents or attacks have recently taken place. They note, “The Karen GPS uses information that’s collected from its users, so it works similar to the Waze app where it tells you about recent police sightings… So if users are in an area and they see a Karen situation going down, they can report it to the app using the button in the triangle at the bottom. It’ll let everybody else with the app know and they’ll get a notification.” They go on to explain that this feature can keep Black youth safe by helping them avoid situations that could be stressful or even deadly.
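The Waze-like mechanic the group describes - report an incident at a location, then alert nearby users - can be sketched with a shared report log and a distance check. The function names, one-kilometer radius, and one-hour freshness window below are assumptions for illustration, not the students’ specification:

```python
import math
import time

reports = []  # shared log of incident reports: latitude, longitude, timestamp

def report_incident(lat: float, lon: float) -> None:
    """A user taps the report button; log the incident at their location."""
    reports.append({"lat": lat, "lon": lon, "time": time.time()})

def distance_km(lat1, lon1, lat2, lon2):
    """Haversine distance between two points on Earth, in kilometers."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 6371 * 2 * math.asin(math.sqrt(a))

def nearby_alerts(user_lat, user_lon, radius_km=1.0, max_age_s=3600):
    """Recent reports within the radius of the user's current position."""
    now = time.time()
    return [r for r in reports
            if now - r["time"] <= max_age_s
            and distance_km(user_lat, user_lon, r["lat"], r["lon"]) <= radius_km]

report_incident(34.0522, -118.2437)            # a report in downtown Los Angeles
print(len(nearby_alerts(34.0530, -118.2440)))  # 1: a user a few blocks away is alerted
```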

While explaining their approach to the algorithmic infrastructures, the presenters reflected upon the negative consequences of discriminatory design, such as predictive policing software and image recognition systems that over-identify Black people as criminals and threats to collective safety. They used this information to consider how justice-oriented data sets, such as those that define “crime” as “anti-Black racism” and “criminal” as anyone found to perpetuate anti-Black violence, could have life-saving implications if embedded in GPS technologies used by Black youth. When asked to explain how they would create a GPS system that uses image recognition systems, or “Karen cameras” that could recognize “past offenders” of racial violence, Group A offers a twofold strategy: first, they will use user-generated data to design a communally constructed database that includes pictures, demographic information and locations of Karen-related incidents. Then, they will use this data to “train” their AI agents to readily recognize and locate Karens who are “repeat offenders.” This is meant to take the onus of reporting racist attacks off of Black youth, and to avoid adding additional stress and emotional labor onto the victims who had to endure and subsequently process the racist harm. Once incidents are reported, the members plan to submit this data to the “real authorities in charge” to make sure that repeat offenders are brought to justice. They explain,

The Karen Katcher app uses facial scan technology to identify nearby Karens and so when you’re nearby one, you’ll get a notification… The Karen Camera has an AI face scan to identify past offenders… We get data from our team to keep track of the Karens and when we get more calls, we can start identifying the Karens [more accurately] and with past offenders, we can report them to the real authorities.
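The first half of the group’s twofold strategy - a communally constructed incident log with a rule for flagging repeat offenders - can be sketched as a data structure. The field names and the two-report threshold are illustrative assumptions, and the hard problem of matching identities across reports is deliberately left aside here:

```python
from collections import Counter

# Communally built log: each report carries an identity label (however
# the app's reviewers establish it), a location, and a description.
incident_log = [
    {"identity": "person_a", "location": "park", "description": "placeholder"},
    {"identity": "person_a", "location": "store", "description": "placeholder"},
    {"identity": "person_b", "location": "bus stop", "description": "placeholder"},
]

def repeat_offenders(log, threshold=2):
    """Identities reported at least `threshold` times across the log."""
    counts = Counter(entry["identity"] for entry in log)
    return [identity for identity, n in counts.items() if n >= threshold]

print(repeat_offenders(incident_log))  # ['person_a']
```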

Finally, the creators of the Karen Katchers app explicitly rejected the carceral logics that currently dominate white supremacist social systems, and instead envisioned alternative approaches to safety and justice for both victims and perpetrators of racist harm. Drawing clear lines of demarcation between the Karen Katchers app and the police, they state, “unlike the police, we take our reports very seriously. We don't care what race you are, what happened to you, we’re here to help you.” This counterhegemonic approach was further exemplified in their rehabilitative approach to perpetrators of racist violence. Specifically, the Karen Katchers app provides targeted mental health and critical education resources to help “rehabilitate Karens” and ensure that they “don't go out raging on innocent people.” In discussing the rehab resources, the group says:

We are aiming to provide rehabilitation services to all people who need them, including Karens. We understand that trauma comes in many forms, even for Karens. We have a yoga class for Karens so that they can destress so they don’t go out in the world raging on people. We also promote educational services to Karens so that they can be educated on their actions, and learn information that we’re learning here in [this class] and we provide spiritual and mental strengthening.

Ultimately, Group A showed how critical race computational thinking can inspire youth to re-envision technology as a tool of Black life and liberation, and also encourage innovators to re-imagine the carceral social systems within which these technologies would theoretically exist.

Project 2: “BIPOCethary”

Group B identified medical racism as a pressing issue of anti-blackness that had multiple causes and manifestations. First, they identified how socially engineered poverty and historic redlining created health care deserts, a type of medical racism that makes it exceedingly difficult for members of low-income communities to access consistent, high-quality care and preventive treatment. Savannah explains, “Our thought process was… we want to talk about redlining and healthcare. What combined those two was medical racism and how insurance or just healthcare, period, is not [accessible] to Black folks.” Additionally, the students leveraged personal, familial and historic knowledge to identify the prevalence of discrimination in the medical system, and the disproportionate deaths of Black people that occur because doctors ignored or misdiagnosed their illnesses. Here, the group called attention to how racist medical practices, such as the Tuskegee experiments, have fostered a cultural distrust of hospitals. They also discussed how expensive health care is, and how many low-income Black people can’t afford the high premiums on high-quality insurance plans.

In order to combat this multifaceted issue, Group B decided to design a mobile healthcare robot called BIPOCethary. As they explain in their presentation, this robot will “provide proper health care for Black and Indigenous People of Color to combat the white supremacist form of racism that takes place in health care and mental health across America using state of the art medical technology.” While explaining how the technology works, Group B says, “the robot will be launched at the beginning of a block and it will go house to house, in a procession basically, checking up on the person and seeing how they're doing… the robot is being piloted by humans and assisted by Black doctors. The robot has sensors in its hands to [conduct] MRI scans that sense diseases and give physicals to BIPOC people for free… If they can't afford to go get a physical, the robot will do a physical right there in the privacy of their home.”

Importantly, the group recognized that current MRI and imaging technologies are often ill-equipped to accurately “see” melanated skin. Thus, they felt it was important to have image scanning technology trained on images of Black people so that it could detect medical anomalies more quickly and accurately for patients with melanated skin. Notably, Group B did not feel that a robot in and of itself was a sufficient solution to medical inequity. Rather, they felt it was important to employ Black doctors so that they could lend their cultural and racial expertise to both the patient and the robot engineers so that “we can learn how to make [medical racism] better.”
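The group’s point about training data can be made concrete with a simple representation audit, run before any model is trained. The skin-tone labels and counts below are invented; the takeaway is that a model can only learn each group as well as the dataset represents it:

```python
from collections import Counter

# An invented, deliberately skewed training set of image labels.
training_labels = ["light"] * 800 + ["medium"] * 150 + ["dark"] * 50

def representation(labels):
    """Share of the training set contributed by each group."""
    counts = Counter(labels)
    total = len(labels)
    return {group: round(n / total, 2) for group, n in counts.items()}

print(representation(training_labels))
# {'light': 0.8, 'medium': 0.15, 'dark': 0.05} - rebalance before training
```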

Finally, when asked by their peers how such an expensive technology could be offered for free (and whether doing so would contribute to racial wealth gaps faced by Black doctors), Group B theorized an alternative economic structure designed to redistribute wealth and resources to hyper-marginalized groups. They explain, “The robot is a monthly service that includes the services, the pilot and the doctor. And there will be an app where you can schedule appointments, so it’s easily accessible, and the city will be the one that’s paying for it… it will be a part of Medicare or Medicaid, so that’s how people will be able to get it for free. The robot will also run off of solar, which is cheaper and also helps the environment.” Here again we see youth re-designing oppressive social systems in ways that allow for equity-oriented and life-sustaining technologies to emerge.

Discussion

Cumulatively, the students’ presentations exemplify how a critical race approach to technology fostered sociotechnical consciousness, awareness of algorithmic anti-Blackness, and a radical love of Blackness that could act as both a theory of justice and an organizing framework for technological design. Critical race approaches to tech design were exemplified when students applied the guiding principles of abolition, intersectional justice and collective uplift to disrupt anti-blackness within and across technological hardware, software and infrastructure.

As exemplified in these final presentations, critical race approaches to technology enable students to think differently not only about technology, but also about the social norms and systems of power within which these technologies would inevitably exist. They simultaneously push students to think collaboratively and creatively, to love and laugh with one another, and to be unapologetic in their pursuit of Black life and liberation. It is my hope that critical race approaches to tech design with youth will find their way into classrooms and after-school programs, and help bring about a generation of techno-social change agents capable of designing a world radically different from the one they inherited (Kelley 2002).


References

Atanasoski, Neda, and Kalindi Vora. 2019. Surrogate Humanity: Race, Robots, and the Politics of Technological Futures. Duke University Press.

Black, Edwin. 2012. IBM and the Holocaust: The Strategic Alliance Between Nazi Germany and America's Most Powerful Corporation. Expanded edition. Colorado Springs, CO: Dialog Press.

Bell, Derrick A. 1995. “Who’s Afraid of Critical Race Theory?” University of Illinois Law Review 4:893-910.

Broussard, Meredith. 2023. More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech. Cambridge, MA: MIT Press.

Benjamin, Ruha. 2019. Race After Technology: Abolitionist Tools for the New Jim Code. Cambridge, UK: Polity

Benjamin, Ruha. 2022. Viral Justice: How We Grow the World We Want. Princeton, NJ: Princeton University Press.

Collins, Patricia Hill. 1986. “Learning From the Outsider Within: The Sociological Significance of Black Feminist Thought.” Social Problems 33(6):S14-S32.

Collins, Patricia Hill. 2018. “The Social Construction of Black Feminist Thought.” Pp. 526-548 in Feminism and Philosophy. Routledge.

Corbie-Smith, Giselle. 1999. “The Continuing Legacy of the Tuskegee Syphilis Study: Considerations for Clinical Investigation.” The American Journal of the Medical Sciences 317(1):5-8.

Daniels, Jessie. 2009a. “Rethinking Cyberfeminism(s): Race, Gender, and Embodiment.” Women's Studies Quarterly 37(1/2):101-124.

Daniels, Jessie. 2009b. Cyber Racism: White Supremacy Online and the New Attack on Civil Rights. Plymouth, UK: Rowman & Littlefield Publishers.

Delgado Bernal, Dolores. 2002. “Critical Race Theory, Latino Critical Theory, and Critical Raced-Gendered Epistemologies: Recognizing Students of Color as Holders and Creators of Knowledge.” Qualitative Inquiry 8(1):105-126.

Dumas, Michael J. 2016. “Against the Dark: Antiblackness in Education Policy and Discourse.” Theory Into Practice 55(1):11-19.

Dumas, Michael J., and kihana miraya ross. 2016. “‘Be Real Black for Me’: Imagining BlackCrit in Education.” Urban Education 51(4):415-442.

Featherly, Kevin. 2023. “ARPANET.” Encyclopedia Britannica, September 15. https://www.britannica.com/topic/ARPANET

Garcia, Antero, and T. Philip Nichols. 2021. “Digital Platforms Aren’t Mere Tools—They’re Complex Environments.” Phi Delta Kappan 102(6):14-19.

Gebru, Timnit. 2020. “Race and Gender.” Pp. 251-269 in The Oxford Handbook of Ethics of AI, edited by M. D. Dubber, F. Pasquale, and S. Das. New York, NY: Oxford University Press.

Gray, Kishonna L., and David J. Leonard, eds. 2018. Woke Gaming: Digital Challenges to Oppression and Social Injustice. Seattle, WA: University of Washington Press.

Glaser, Barney G., and Anselm L. Strauss. 1967. The Discovery of Grounded Theory: Strategies for Qualitative Research. Chicago, IL: Aldine.

Hao, Karen. 2019. “AI is Sending People to Jail—And Getting It Wrong.” MIT Technology Review, January 19.

Herold, Benjamin. 2019. “Schools Are Deploying Massive Digital Surveillance Systems. The Results Are Alarming.” Education Week, May 30.

Hill, Kashmir. 2020. “Another Arrest, and Jail Time, Due to a Bad Facial Recognition Match.” The New York Times, December 29.

hooks, bell. 2012. “The Oppositional Gaze: Black Female Spectators.” Pp. 288-302 in Black American Cinema, edited by M. Diawara. New York, NY: Routledge.

Kelley, Robin D. G. 2002. Freedom Dreams: The Black Radical Imagination. Boston, MA: Beacon Press.

Laird, Elizabeth, and Maddy Dwyer. 2023. Off Task: EdTech Threats to Student Privacy and Equity in the Age of AI. Washington, D. C.: Center for Democracy and Technology.

McMillan Cottom, Tressie. 2016. “Black Cyberfeminism: Ways Forward for Intersectionality and Digital Sociology.” In Digital Sociologies, edited by J. Daniels, K. Gregory, and T. McMillan Cottom. Bristol, UK: Policy Press.

Noble, Safiya. 2012. “Searching for Black Girls: Old Traditions in New Media.” Doctoral Dissertation, Dept of Library and Information Science, University of Illinois at Urbana-Champaign.

Noble, Safiya Umoja. 2014. “Teaching Trayvon: Race, Media, and the Politics of Spectacle.” The Black Scholar 44(1):12-29.

Noble, Safiya Umoja. 2018a. “Critical Surveillance Literacy in Social Media: Interrogating Black Death and Dying Online.” Black Camera 9(2):147-160.

Noble, Safiya Umoja. 2018b. Algorithms of Oppression: How Search Engines Reinforce Racism. New York, NY: NYU Press.

Pérez Huber, Lindsay. 2009. “Disrupting Apartheid of Knowledge: Testimonio as Methodology in Latina/o Critical Race Research in Education.” International Journal of Qualitative Studies in Education 22(6):639-654. https://doi.org/10.1080/09518390903333863

Pérez Huber, Lindsay, and Daniel G. Solorzano. 2015. “Visualizing Everyday Racism: Critical Race Theory, Visual Microaggressions, and the Historical Image of Mexican Banditry.” Qualitative Inquiry 21(3):223‐238. https://doi.org/10.1177/1077800414562899

Perkowitz, Sidney. 2021. “The Bias in the Machine: Facial Recognition Technology and Racial Disparities.” MIT Case Studies in Social and Ethical Responsibilities of Computing Winter 2021(February).

Prunty, Megan, Laura Bukavina, and J. C. Hallman. 2021. “Anarcha, Lucy, and Betsey: The Mothers of Modern Gynecology.” Urology 157:1-4.

Raji, Inioluwa Deborah, Timnit Gebru, Margaret Mitchell, Joy Buolamwini, Joonseok Lee, and Emily Denton. 2020. “Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing.” Pp. 145-151 in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. New York, NY: ACM.

Tanksley, Tiera. 2020. “Texting, Tweeting, and Talking Back to Power.” P. 175 in Black Girl Civics: Expanding and Navigating the Boundaries of Civic Engagement, edited by G. Logan and J. Mackey. Charlotte, NC: Information Age Publishing.

Tanksley, Tiera. 2022a. “Tech, Trauma and Transformational Resistance: How Black Girls Use Technology to Combat and Cope with Black Death, Dying and Spirit Murder.” Presented at the 2022 American Educational Research Association (AERA) Conference, April 21-26, San Diego, CA.

Tanksley, Tiera. 2022b. “The Marathon Continues…in Computer Science: How Black Girls Use Technology to Grieve Black Death in STEM.” Presented at the 2022 American Educational Research Association (AERA) Conference, April 21-26, San Diego, CA.

Tanksley, Tiera. 2022c. “Race, Education and #BlackLivesMatter: How Online Transformational Resistance Shapes the Offline Experiences of Black College-Age Women.” Urban Education. https://doi.org/10.1177/0042085922109297

Tanksley, Tiera. In press. “Towards a Critical Race Algorithmic Literacy: How Black Youth “Talk Back” to Algorithmic Bias and Platformed Racism.” In Literacies in the Platform Society: Histories, Pedagogies, Possibilities, edited by A. Garcia and P. Nichols. New York, NY: Routledge.

Thanawala, Sudhin. 2023. “Facial Recognition Technology Jailed a Man for Days. His Lawsuit Joins Others From Black Plaintiffs.” AP News, September 24.

Solórzano, Daniel G. 1997. “Images and Words That Wound: Critical Race Theory, Racial Stereotyping, and Teacher Education.” Teacher Education Quarterly 24(3):5-19. http://www.jstor.org/stable/23478088

Solórzano, Daniel G., and Tara J. Yosso. 2002. “Critical Race Methodology: Counter-storytelling as an Analytical Framework for Education Research.” Qualitative Inquiry 8(1):23‐44. https://doi.org/10.1177/107780040200800103

Vakil, Sepehr. 2018. “Ethics, Identity, and Political Vision: Toward a Justice-Centered Approach to Equity in Computer Science Education.” Harvard Educational Review 88(1):26-52.

Vossoughi, Shirin, and Sepehr Vakil. 2018. “Toward What Ends? A Critical Analysis of Militarism, Equity, and STEM Education.” Pp. 117-140 in Education At War: The Fight For Students Of Color In America's Public Schools, edited by A. I. Ali and T. L. Buenavista. New York, NY: Fordham University Press.

Yosso, Tara J. 2013. Critical Race Counterstories Along the Chicana/Chicano Educational Pipeline. New York, NY: Routledge.
