Resnick and Silverman (2005), in their paper on designing construction kits for children, present their final design principle as “iterate, iterate—then iterate again,” a dictum they encourage others engaged in formulating design principles to follow as well. Taking this excellent advice as our starting point, we propose four principles aimed at designers working to support children in understanding the algorithmic systems that increasingly shape their worlds and in gaining the intellectual tools to question and resist them: (1) enabling connections to data, (2) creating sandboxes for “dangerous ideas,” (3) adopting community-centered approaches, and (4) supporting thick authenticity.
Key Findings
Suggests ways in which designers can support children in understanding, interrogating, and critiquing the algorithmic systems that shape their lives
Proposes four design principles for learning tools and experiences that allow children not only to understand how algorithms work, but also to critique and question them: (1) enabling connections to data, (2) creating sandboxes for “dangerous ideas,” (3) adopting community-centered approaches, and (4) supporting thick authenticity
Introduction
As pervasive data collection and powerful algorithms increasingly shape children’s experience, children’s ability to interrogate computational algorithms becomes ever more important. A growing body of work has sought to identify and equip children with the intellectual tools they might use to understand, interrogate, and critique powerful algorithmic systems. Although terminology is still diffuse, we call the intellectual tools that allow children to understand and critique the algorithmic systems that affect their lives critical algorithmic literacies. Unfortunately, because many powerful algorithms are invisible, the development of these literacies remains a major challenge. In this article, we describe how designers can build systems to support the development of critical algorithmic literacies in children.
Reflecting on extensive observation and design work in the Scratch online community over the last decade, we offer four principles for designers that describe ways to support children in developing critical algorithmic literacies:
Enable connections to data
Create sandboxes for dangerous ideas
Adopt community-centered approaches
Support thick authenticity
Our first principle encourages designers to enable connections to data by offering children opportunities to engage directly in data analysis, especially with data sets that relate to the world that the children live, learn, and play in. The rationale for this principle is that in an increasingly data-driven world, understanding algorithms is deeply connected to understanding data. As children engage in data analysis in order to ask and answer their own questions or pursue their own interests, they create their own algorithms. Through this process, they can start to interrogate both their data and their algorithms.
Our second principle suggests that the development of critical algorithmic literacies can be supported by creating sandboxes for dangerous ideas. Algorithms are both powerful and risky. Our design work suggests that children can develop a deep understanding of both qualities when they are allowed to create and experiment with algorithms using carefully designed toolkits. Because these toolkits entail giving learners the ability to “play with fire” in ways that might lead to negative outcomes, effective toolkit design needs to ensure that the possible dangers are managed and minimized. We use the metaphor of “sandboxes” to describe the goal of managing risk in this design process.
Our third principle suggests that designers should adopt community-centered approaches that allow designs to leverage community values that algorithms might challenge. Children belong to many overlapping communities and will typically share many of their communities’ values. Algorithms are seen as problematic, by children and by society in general, when they violate these socially constituted values. A community-centered approach intentionally situates algorithms within communities with particular sets of shared values. Doing so makes the problematic nature of algorithms visible to learners who are likely to be aligned with the community values that an algorithm violates or challenges.
Finally, we argue that supporting thick authenticity—a principle that applies to learning technology design in general—plays a crucial role in the development of critical algorithmic literacies. Authenticity in the context of fostering algorithmic or data literacies might mean engaging in activities that consider “real-world” data or scenarios.
Our paper is structured as follows. First, we describe the theoretical work that informs the way we conceptualize “critical algorithmic literacies” as well as the empirical and design work we have conducted that has informed our design principles. Next, we describe and situate the four design principles with detailed examples. Finally, we discuss our principles’ implications for future design work and conclude with a reflection on unanswered questions and future directions.
Background
Our work draws from the literature on constructionism, a framework for learning and teaching that emphasizes contexts of learning “where the learner is consciously engaged in constructing a public entity, whether it’s a sand castle on the beach or a theory of the universe” (Kafai, 2006; Papert & Harel, 1991, p. 1). We are particularly inspired by Resnick and Silverman (2005) who provide a series of design principles for designing constructionist learning environments and toolkits based on reflections on their practice as designers. This article attempts to follow in Resnick and Silverman’s footsteps by laying out design principles for critical algorithmic literacies.
We use the term “algorithmic literacies” to describe a subset of computational literacies as articulated by diSessa (2001) in his book, Changing Minds: Computers, Learning, and Literacy. diSessa suggests three broad pillars for literacy—material, mental or cognitive, and social. Material involves signs, representations, and so on. For language literacy, the material pillar might include alphabets, syntax, conventions of writing. For computational literacies, the material might involve user interface paradigms like spreadsheets or game genres, or modes of transmission like sharing on social media. The second pillar—mental or cognitive—represents the “coupling” (diSessa, 2001, p. 8) of the material and what goes on inside learners’ minds when interacting with the material. The final pillar—social—represents communities that form the basis of literacies. diSessa posits that the emergence of a given literacy is driven by “complex social forces of innovation, adoption, and interdependence” (p. 11).
More recently, Kafai et al. (2019) have proposed a framework with three frames for understanding computational thinking: the cognitive, the situated, and the critical. They call for approaches to computational thinking that integrate “cognitive understanding” in the form of comprehension of computational concepts, “situated use” meaning that learning happens in contexts the learner cares about, and “critical engagement” to emphasize the importance of supporting the questioning of larger structures and processes behind the phenomenon being analyzed. These three frames can also be used in the context of computational literacies. In fact, one of the case studies used by Kafai et al. to illustrate their framework is framed around the concept of “critical data literacies” drawn from our work (Hautea et al., 2017).
Our use of the term “critical” draws from Agre’s (2014) idea of “critical technical practice” which ties critique and questioning to the practice of building and creation. In that sense, our goal is not merely knowledge about algorithms (e.g., what algorithms are) but an ability to engage in critique of algorithmic systems reflexively. Agre posits critical technical practice as requiring a “split identity—one foot planted in the craft work of design and the other foot planted in the reflexive work of critique” (p. 155). We recognize that as children engage with our toolkits, their design work combined with their reflection allows them to not only understand technical concepts around algorithms (what Agre describes as “esoteric terms”), but also evaluate their implications on society (“exoteric terms”).
Finally, the notion of critical algorithmic literacies is rooted in Freire’s (1986) literacy methods. As we use it, the term was first proposed by Tygel and Kirsch (2016), who noted parallels between Freirean approaches to literacy education and potential models for developing data literacy. In suggesting approaches to big data literacy, D’Ignazio and Bhargava (2015) also build on Freire to posit that “[big data] literacy is not just about the acquisition of technical skills but the emancipation achieved through the literacy process” (p. 5). Relatedly, Lee and Soep (2016) have described their extensive body of work with child-driven multimedia production at the “intersection of engineering and computational thinking on the one hand, and narrative production and critical pedagogy on the other” (p. 481) in terms of critical computational literacy. This is a framework first developed by Lee and Garcia (2015) while studying children from South Los Angeles creating animations and interactive games about socio-political issues in their community such as racial profiling.
Our design principles are the result of design and empirical research around two systems we have developed and deployed over the last ten years: Scratch Cloud Variables and Scratch Community Blocks. Both tools were designed with constructionist framings of learning in mind. Both sought to support children in learning about computational concepts related to data collection, processing, and analysis. Both tools also built upon and extended the Scratch programming language—a widely used block-based programming language for children (Resnick et al., 2009)—and were deployed in the Scratch online community where Scratch users share, comment on, and remix their Scratch projects (Monroy-Hernández & Resnick, 2008).
The primary design goal of Scratch Cloud Variables was to give children the ability to collect, record, and analyze data within Scratch (Dasgupta, 2013b). The primary goal of Scratch Community Blocks was to give children the ability to engage in analysis of their own social data directly (Dasgupta & Hill, 2017). Scratch Community Blocks enabled this goal by allowing Scratch users the ability to access and analyze data from the Scratch online community website database. In deploying both systems, we found that granting children programmatic access to data led them to not only learn the techniques of data analysis but to question and critique data-driven algorithms.
The empirical data that we draw from in this article come from field deployment-based studies we conducted with members of the Scratch online community as well as from face-to-face workshops that we ran in the greater Boston area. For Scratch Cloud Variables, the deployment was part of a larger beta test of the Scratch 2.0 software. For Scratch Community Blocks, 2,500 beta-testers were randomly selected from a pool of active Scratch users. Our studies involved observing Scratch projects and comments on projects, as well as seeking feedback through forum posts, surveys, and interviews. Because it is helpful to situate our findings, it is worth noting that the median age of Scratch users is 12 and most are between 11 and 15. Although the distribution varies over time, around two-thirds of Scratch users describe their gender as male and a small number (~5%) do not report gender or self-report non-binary genders. Our sample of 2,500 participants in the Scratch Community Blocks study was roughly gender balanced but similar in age to the general population of Scratch users (Dasgupta & Hill, 2017, p. 3625).
Design principles
Over the last decade, much of our research has focused on the design, deployment, and study of systems that seek to support constructionist learning around data and data-intensive algorithms in Scratch. We distill lessons from this work into four principles that we believe will be useful for a range of designers interested in supporting children to learn about data-driven computational techniques, as well as to question and to resist them.
Principle 1: Enable connections to data
Our first principle suggests that algorithmic literacies can be supported by offering opportunities that enable children to write programs that interact with data that relate to their worlds. This principle stems from our experience with both Scratch Cloud Variables and Scratch Community Blocks. In both cases, we have found that even relatively simple connections to data from a programming toolkit enables scenarios where children ask questions about the algorithms that shape, store, and use information they create and care about.
We developed Scratch Cloud Variables as part of the second generation of the Scratch programming language (Scratch 2.0). The system allowed Scratch users to store values in variables in ways that persist beyond the run-time of their program and are global in that everyone using the project would see the same data (Dasgupta, 2013b). This support for persistent global data, combined with the fact that Scratch 2.0 projects were stored online and ran in a web browser, enabled functionality in Scratch projects such as global high-score lists, surveys, collaborative art projects, and more (Figure 1).
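To make these semantics concrete, the following sketch (written in Python rather than Scratch blocks, and reflecting our own simplifying assumptions rather than the actual Scratch implementation) illustrates what distinguishes a cloud variable from an ordinary variable: its value is stored on a server, persists across runs of the project, and is shared by everyone who uses the project.

# Hypothetical sketch of cloud-variable semantics in Python; not the Scratch implementation.
# An ordinary variable is local to one run of a program; a "cloud" variable is stored
# server-side, persists between runs, and is shared by everyone who runs the project.

class CloudVariableStore:
    """Stands in for the server-side storage behind a project's cloud variables."""

    def __init__(self):
        self._values = {}  # maps (project_id, variable_name) -> value

    def get(self, project_id, name, default=0):
        return self._values.get((project_id, name), default)

    def set(self, project_id, name, value):
        self._values[(project_id, name)] = value


store = CloudVariableStore()

def record_high_score(project_id, score):
    """Every viewer of the project reads and writes the same persistent value."""
    if score > store.get(project_id, "high score"):
        store.set(project_id, "high score", score)
    return store.get(project_id, "high score")

record_high_score("maze-game", 120)            # one viewer sets a score of 120
print(record_high_score("maze-game", 95))      # a later viewer sees 120, not 95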
During beta testing of the system, children raised concerns about potential threats to privacy made possible by the system. As a Scratch project could store data persistently, it was possible to create relatively simple Scratch code that would ask for someone’s Scratch username (e.g., for a Scratch “guestbook” project) and store it indefinitely. The only way to remove the data, once stored, was to ask the creator of the project to do so. A Scratch community member noted:
So if I typed in my first name [into the project] without thinking about it, after that everyone who views the project can see my name […]. Furthermore to remove it I have to contact the owner of the project and request they remove it from the cloud data list. (Dasgupta, 2013a, p. 96)
This example shows that even relatively simple connections with data open up possibilities that enable Scratch community members to think about questions of algorithmic surveillance and power.
We developed the second system, Scratch Community Blocks, by adding new programming blocks to Scratch that could access public metadata about projects and users in the Scratch online community database (Dasgupta & Hill, 2017). An example of the system is shown in Figure 2. For example, with Scratch Community Blocks, it was possible to create Scratch programs that would access how many times a Scratch project shared in the community had been viewed. Community-wide statistics, such as the total number of registered users in the community, were also accessible programmatically through Scratch Community Blocks. These two capabilities were combined by a young Scratch user to make a project that would calculate what proportion of the broader Scratch community had viewed a given Scratch project.
Soon after this project was shared, Scratch community members began discussing the fact that there was a difference between views and viewers (i.e., a single user may view a project multiple times in ways that cause the project’s view count to increase). These results were confusing because, at the time, the Scratch website counted views using an algorithm that both tried to count as many views as possible (e.g., from non-logged in users) so that creators of projects would see that their creations had an audience while also preventing community members from generating views synthetically (e.g., by repeatedly refreshing a project page).1 Community members noted that one of the most popular projects on the site had a view count that exceeded the number of user accounts on the site.
Commenter A: that’s so cool! almost 0.5% of all the users on scratch have viewed my projects and that’s a lot :B but crossstitch’s2 results are indeed slightly dubious… over 100% of people have viewed his projects which is awesome but impossible - love the project!! ˆoˆ
Commenter B: @CommenterA I think it’s because its based on views, not each specific player.
Commenter A: @CommenterB that’s awesome :D people who haven’t registered on scratch have viewed a significant amount of his projects yes
Project Creator: @CommenterA Yeah what @CommenterB said is correct.
(Hautea et al., 2017, p. 925)
Through these comments, users worked collectively to show how data is not objective but requires interpretation, and that the process of data generation is shaped by decisions taken by others.
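To recap the arithmetic behind this exchange, the sketch below uses Python rather than Scratch blocks and hypothetical numbers of our own; it shows how dividing a view count by the number of registered accounts can legitimately produce a figure over 100% once views are understood as visits rather than distinct viewers.

# Illustrative recreation (Python, hypothetical numbers) of the young Scratch user's
# calculation: what share of the community has viewed a given project?
project_views = 1_250_000       # total views recorded for a popular project
registered_users = 1_000_000    # total user accounts in the community

share = project_views / registered_users
print(f"{share:.0%} of the community has viewed this project")   # prints "125% ..."

# A value above 100% is not an arithmetic error: the view count includes repeat
# visits and visits from people who are not logged in, so views != distinct viewers.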
Bowker (2005) has argued that “raw data is […] an oxymoron” (p. 184). In an edited volume of the same name, Gitelman (2013) notes that “data are imagined and enunciated against the seamlessness of phenomena” (p. 3). Often, this imagination and enunciation materializes in an algorithm that collects data, such as the viewership statistics of projects in Scratch. Enabling children to access that data through algorithms that they implemented using Scratch Community Blocks led them to discover illuminating patterns, such as the fact that the view counts of popular projects exceeded the total number of community members. As in the extended example above, this in turn led children to attempt to reconstruct how the data might have been imagined in the first place.
Resnick and Silverman (2005) posit that “a little bit of programming goes a long way” in that children can combine relatively simple and limited programming constructs toward a broad range of creative outcomes. In our work, we see a similar phenomenon emerge where simple programming constructs, combined with data in straightforward ways, enable children to uncover structures and assumptions in algorithmic systems. This process allows them to raise questions and engage in conversations about algorithmic data collection.
Principle 2: Create sandboxes for dangerous ideas
Our second principle suggests that the development of critical algorithmic literacies can be supported through the creation of sandboxes for dangerous ideas. Like any sociotechnical tool, algorithms offer benefits and carry harms. We describe algorithms as “dangerous” to highlight the way they have powerful, unanticipated, and often negative consequences (Smith, 1985). For example, the algorithm behind a real-estate search tool may allow the user to filter houses for sale by school rating, but it is unlikely to take the history of underfunding of schools in African American and low-income neighborhoods into account. In this way, the search algorithm might unintentionally become a way for potential house-buyers to filter for affluent, predominantly white neighborhoods (Noble, 2018, p. 167).
With the deployment of Scratch Community Blocks, metadata about user accounts, such as the number of followers and the number of projects, was made programmatically accessible. These numbers can be used as proxies for experience, in that more projects or more followers indicate more experience with Scratch, but both are far from perfect measures. Although restricting interaction with a project to more experienced users might be an attractive feature to some Scratch users, using these measures as a gate-keeping mechanism can be discriminatory toward newcomers. This was exactly the concern raised by a 13-year-old member of the Scratch community, who noted that the algorithm to carry out such discrimination is trivial to write using Scratch Community Blocks and a single if statement:
I love these new Scratch Blocks! However I did notice that they could be used to exclude new scratchers or scratchers with not a lot of followers by using a code: like this:
when flag clicked
if then user’s followers < 300
stop all.
(Hautea et al., 2017, p. 925)
Thus, this young user noted that algorithmic systems can be dangerous in that they can enable discrimination. This type of observation is far from uncommon among Scratch users and reflects a key step toward critical algorithmic literacies. That said, it is only possible because of the danger introduced by the system.
The notion of engaging with and exploring dangerous ideas is not new in education: problematic theories are studied as a part of understanding history; discriminatory scenarios are analyzed as a part of engaging with the idea of justice; and potentially physically dangerous experiments are carried out in school and college chemistry laboratories. Although these activities all represent different types of danger, the pedagogical activities around them typically incorporate appropriate safety mechanisms. For the pedagogy of fields like chemistry, this is a topic of ongoing research and study (Alaimo et al., 2010).
An example from our own work that led us to consider the importance of making space for dangerous ideas is a feature introduced in the second generation of Scratch: a username block that “reports” the username of the viewer of the project if they are logged in (Figure 3). The username block allowed for new types of surveillance in that it made it much easier to know who had accessed a given project. As designers, we were also concerned that the block could be used for discriminatory purposes within a Scratch project (e.g., by disallowing certain usernames from playing a Scratch game), or to evade moderators (e.g., by having a Scratch project behave in a specific way only for known moderators in the community). On the other hand, we found that the block also made new conversations around surveillance, discrimination, data, and algorithms possible. Achieving a balance between enabling exploration of dangerous ideas and safety is not easy. This is especially the case among historically marginalized groups in STEM learning who may be more vulnerable to discrimination and surveillance.
We only considered the feature because usernames in Scratch are, by community policy, not directly tied to identities in the real world. As a result, the consequences of surveillance in Scratch would be less serious than surveillance of email accounts or other social media accounts. We also considered a range of approaches to make the username block safer. Many of these were technical. For example, an initial prototype of the block reported back an alpha-numeric value that would remain consistent for a given user accessing a given project over time, so that a user interacting with the project could not be identified by username.3 Because this idea was difficult to explain, we did not adopt this approach and reverted to the earlier, simpler-to-understand one of reporting the actual username. However, as an added measure, the Scratch project viewing interface was modified so that it warned users about the existence of the block in a given project before they ran it and encouraged users to log out of Scratch if they wished to avoid being tracked by a project.
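The prototype’s behavior can be approximated as follows. The sketch below is our own hypothetical reconstruction in Python, not the actual implementation: a keyed hash of the username and project identifier yields a value that stays the same for a given user on a given project but does not reveal the username.

import hashlib
import hmac

# Hypothetical reconstruction (not the actual prototype) of a privacy-preserving
# "username" block: report a value that is stable for a given user on a given
# project but that does not reveal the username itself.

SERVER_SECRET = b"kept-on-the-server"   # illustrative secret key held by the server

def pseudonymous_id(username: str, project_id: str) -> str:
    """Same user + same project -> same token; the username cannot be read back."""
    message = f"{project_id}:{username}".encode("utf-8")
    return hmac.new(SERVER_SECRET, message, hashlib.sha256).hexdigest()[:12]

print(pseudonymous_id("gobo", "maze-game"))   # stable token for this user on this project
print(pseudonymous_id("gobo", "pong-game"))   # a different token on a different project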
We also deployed the username block with considerable caution, carefully monitored its usage, and were ready to roll back the feature. The Scratch community has a complex and extensive moderation and governance infrastructure that has been described by Lombana Bermúdez (2017) as a combination of “proactive and reactive moderation […] with the cultivation of socially beneficial norms and a sense of community” (p. 35). We felt that these structures had the potential to prevent and mitigate misuse of the feature. In the design phase of the username block, we engaged in a number of conversations with Scratch’s moderation team and with children. These conversations continued after the feature’s launch so that we could gain an understanding of how the new block was being used by the broader community and adapt the system if needed.
A metaphor that describes our approach of trying to strike a balance between “dangerous ideas” and the risks associated with such ideas is that of “sandboxes.” Just as a sandbox allows children to play with earth in ways that may not be suitable for the rest of the playground, our design consists of creating clear boundaries and risk mitigation strategies. The metaphor of a sandbox is common in the field of computer security, where untrusted applications are said to run in a “sandbox” isolated from unneeded resources and other programs (Schreuders et al., 2013). For example, a mobile phone’s sandboxing system ensures that an app that does not need access to the camera does not have access to it. In the case of the username block, we spent several design iterations working with children and community moderators to ensure that there were enough safety “walls” (e.g., warning messages in projects that use the block) before we felt that we had achieved a balance between encouraging exploration and safety. In computer and information security pedagogy, the use of sandboxes to allow learners to experiment with software vulnerabilities is an established practice (Du & Wang, 2008). Computer security researchers and educators have asked students to construct speculative fiction to engage with dangerous ideas and to imagine these ideas’ impact on society (Kohno & Johnson, 2011). Our experience suggests that a similar approach may work for critical algorithmic literacies as well.
Principle 3: Adopt community-centered approaches
Our third principle suggests that designers should incorporate community-centered approaches that allow a design to leverage existing community values that an algorithm might change or challenge. This principle reflects increasing recognition of the importance of centering values in computing learning. For example, in a keynote presentation to the 2012 ACM SIGCSE Technical Symposium, Abelson (2012) called for a focus on “computational values,” which Abelson defined as commitments that “empower people in the digital world,” and which he argued are “central to the flowering of computing as an intellectual endeavor.” Justice, respect for privacy, and non-discrimination are examples of such values.
Prior work in Human-Computer Interaction literature has argued that values are “something to be discovered” in the context of a given community (Le Dantec et al., 2009, p. 1145). In turn, values can also influence the sense of identity of a learner within their communities. In recent work in the Learning Sciences, Vakil (2020) has proposed the phrase “disciplinary values interpretation” to describe how learners seek to understand what a discipline being studied “is ‘all about,’ and what it might mean for them to be a part of it as they begin to imagine their future academic, career, and life goals” (p. 7). Vakil (2020) has also called for more understanding of, and attention to, “adolescents’ political selves and identities, and how these identities become intertwined with learning processes” (p. 22).
In our work, we have seen the dynamic described by Vakil play out as children evaluate technological possibilities in terms of their values. For example, we saw that children using Scratch Community Blocks questioned algorithms by describing algorithmic outcomes as in conflict with the collective values of the Scratch community. Multiple community members expressed concern about Scratch Community Blocks enabling projects that rank community members based on the number of followers, and pointed out that this might shift the values of the Scratch community from celebrating creativity and expression to an emphasis on popularity. For example, a 12-year-old Scratch community member expressed concern that the new system opened up possibilities where community members with a smaller number of followers could be made fun of through Scratch projects:
[…] you can easily make fun of someone for example, “You only have 2 followers! Ha! Well I have 10!” (Hautea et al., 2017, p. 927)
Similarly, the 13-year-old quoted above, who pointed out that code using the new system could be used to block newcomers from projects, thought this was problematic because inclusivity is a core value of the Scratch community and the algorithmic discrimination that they correctly identified stood in contrast to this value.
One challenge with systems that enable possibilities that go against established community values is that unsocialized newcomers will frequently not share their new community’s values. Zittrain (2006) noted this as a challenge with “generative” systems and platforms, where the outcomes made possible by the system include both positive and negative ones. With Scratch Cloud Variables, we recognized this issue and implemented a system in which the Scratch Cloud Variables feature would only be made available to users who had been active in the community for some time (Dasgupta & Hill, 2018). By granting access to the dangerous feature only to users likely to have been socialized, we reasoned that newcomers would have the chance to learn Scratch’s community values before being given access to features that enabled them to flout those values.4
Our experience suggests that critical approaches to algorithms are driven by the values of the communities in which algorithms are enacted. Of course, communities vary in scope and character and can range from groups of friends, to families, classrooms, and entire nations. For example, a 2014 report produced by the Executive Office of the President of the United States invoked values enshrined in the country’s legal structure when it stated that “big data technologies” have the “potential to eclipse longstanding civil rights protections in how personal information is used in housing, credit, employment, health, education, and the marketplace” (Podesta, 2014, p. 3).
Of course, not all values are aligned with outcomes that educators seek to reinforce. Values frequently conflict with each other, and widely shared values can sometimes be problematic. Moreover, particular ways of imagining a specific value can overwhelm alternatives in ways that Benjamin (2019) describes as a “master narrative” (p. 134). For example, Philip et al. (2013) draw from their classroom experience in a U.S. public school setting to describe how the underlying assumption among students debating “big data” and its implications was “that students, particularly urban children of color, would academically, socially, occupationally, or politically benefit simply by virtue of exposure to big data and new technologies” (p. 117). Philip et al. explain that the design of their new curriculum did not take into account the relative lack of opportunities for students of color, leading to an overwhelming focus on one particular framing of big data technology as an equalizer. This example serves as a warning for designers to carefully interrogate a range of values before designing.
Principle 4: Support thick authenticity
Finally, we argue that thick authenticity—a principle that applies to learning technology design in general—plays a crucial role in the development of critical algorithmic literacies. Authenticity is a complicated concept in the context of learning. Although it is common to encounter terms related to the “real world” and “real world problems” in popular and scholarly discourse on education, degrees and dimensions of “realness” vary enormously. For example, a learning exercise might involve a fictional scenario in which the problem is real but the situation is not (Petraglia, 1998).
Ultimately, “realness” is determined by learners and “real” learning experiences, problems, and metaphors may be unfamiliar to learners for individual or cultural reasons. An example based on draws from a pack of playing cards may hinder the learning experience for students of probability who have never played cards. As corollaries, stronger forms of authenticity emerge when learners have more say in the design and direction of their learning activities and higher degrees of authenticity are associated with better learning outcomes. For example, while teaching Maori schoolchildren English, Ashton-Warner (1986) found that a compelling strategy to engage her students was to ask them to write about themselves, about their own stories, in their own words—a process she called “organic writing.”
Questions of authenticity are likely to be relevant to the type of engagement necessary to support the development of algorithmic literacies in children. For example, a baseball analytics algorithm may generate critical engagement when the learner is a young baseball fan who is eager to poke holes in the assumptions made by the algorithm. The same algorithm would likely elicit a lukewarm response, at best, from someone without an interest in baseball, and for most learners outside of the United States, learning activities that involve baseball are not suitable at all. In our design work, we have drawn inspiration from Ashton-Warner and asked what “organic writing” might look like for developing critical algorithmic literacies (Dasgupta, 2016).
We have also drawn from Shaffer & Resnick (1999) who describe “thick authenticity” as:
[…] activities that are personally meaningful, connected to important and interesting aspects of the world beyond the classroom, grounded in a systematic approach to thinking about problems and issues, and which provide for evaluation that is meaningfully related to the topics and methods being studied.
In our work with Scratch Community Blocks, children using the system engaged with complex ideas about power and algorithms because the data that they were accessing, and the algorithms that they were designing, reflected their experiences, friends, and community in Scratch. If the same interface within Scratch had provided access to nearly any other data source, it would have been less effective at promoting algorithmic literacy among the community of Scratch users to whom our system was deployed. Because most children do not use Scratch, the effectiveness of our systems among children in general is likely to be limited.
That said, other contexts might present similarly promising opportunities. For example, families’ interactions are increasingly shaped by the algorithms and data inherent in “smart home” technologies. Although it is still less common among children, a range of individuals are increasingly collecting data about aspects of their personal lives through quantified-self approaches (Lee, 2013). Because algorithms are increasingly prevalent in a range of contexts, there is an increasingly wide range of settings offering rich opportunities for the promotion of algorithmic literacies through thick authenticity.
Discussion
In our own design experiences spanning many iterations, we encountered numerous tensions and open questions in terms of how best to engage the broadest possible set of Scratch community members in critiquing algorithms. The evidence emerging from our work suggests that there may be certain design principles—presented in this article—for computational construction kits that support the development of a range of critical algorithmic literacies. Our four design principles reflect a broader perspective that young learners should go beyond simply observing algorithmic systems and be given opportunities to create algorithmic systems of their own. We argue that when children take advantage of these opportunities, some will question algorithms in meaningful ways. In empirical work we have conducted, we employed grounded theory (Charmaz, 2006) to analyze the discussions, comments, and activities of children engaging in creative design activities using Scratch Community Blocks (Hautea et al., 2017). Most of the examples we identified of children questioning algorithmic systems emerged from the process of active creation with toolkits.
Though our work is exclusively focused on the Scratch online community, we believe that the lessons that we distill here can be applied to other contexts. For example, the first author of this article uses the “enable connections to data” principle in an introductory college-level Python course to illustrate how gender is often encoded as binary in software. As a part of a relatively simple exercise that involves the use of conditional (if-else) statements, he asks students to use publicly available data on the recommended daily allowance of calcium to make an interactive tool that asks a few questions about an individual (age, gender), and then recommends a daily allowance of calcium for them.
Because the publicly available data table that is used for this purpose encodes gender as binary,5 students end up designing their programs with the built-in assumption of binary gender. This provides the first author with an opportunity to engage students—after they have written the program—in a discussion that starts with the prompt, “what’s wrong with this assignment?” Students point to the use of a binary variable to represent gender, and this leads to a broader discussion about the choices programmers make in modeling the world in their algorithms (Costanza-Chock, 2018; Smith, 1985).
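A minimal sketch of the kind of program students end up writing might look like the following. The age cutoffs and milligram values here are placeholders rather than the published table, and the function name is our own; the point is structural: the program inherits the data table’s binary branch on gender, which is exactly what the follow-up discussion interrogates.

# Sketch of the kind of program students produce in the exercise described above.
# The age cutoffs and milligram values are placeholders, not the published table;
# the point is the structure: the program inherits a binary branch on gender.

def recommended_calcium_mg(age: int, gender: str) -> int:
    if age < 19:
        return 1300
    if gender == "female":                       # the data table offers only
        return 1200 if age >= 51 else 1000       # "male" / "female", so the
    else:                                        # program inherits that binary
        return 1200 if age >= 71 else 1000

age = int(input("How old are you? "))
gender = input("What is your gender? ")          # what happens to answers outside the binary?
print(f"Recommended daily calcium: {recommended_calcium_mg(age, gender)} mg")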
Similarly, one might use the principle of “sandboxes for dangerous ideas” in a web-programming exercise for college students by scaffolding an exercise around the design of a survey system and conversations around the choices students make about tracking the identity of their survey participants. What is an acceptable technical solution to prevent repeat participation in an anonymous survey? Can a “technical solution” even exist? (One candidate, with its own trade-offs, is sketched after this paragraph.) There are likely many other ways to effectively engage children in understanding and questioning algorithmic systems. For example, approaches such as co-designed games have been found to yield promising results for engaging children in understanding notions of privacy (Kumar et al., 2018). Similar approaches may emerge for other aspects of critiquing data- and algorithm-driven systems as well.
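As an illustration of the trade-off students confront, and not a recommendation, one hypothetical answer (the function names and token scheme below are our own) is to store only a hash of a participant identifier rather than the identifier itself: the system can then refuse repeat submissions without keeping a readable list of who responded, though it still has to ask participants for some identifier, so the anonymity is only partial.

import hashlib

# Hypothetical illustration of the trade-off in the survey exercise above: block repeat
# participation in an "anonymous" survey by storing only a hash of a participant
# identifier. No readable identities are kept, but the survey still has to ask for
# *some* identifier, so the anonymity is only partial.

seen_tokens = set()   # in a real system this would live in a database

def save(answers: dict) -> None:
    print("stored:", answers)            # stand-in for persisting answers anonymously

def submit_response(participant_id: str, answers: dict) -> bool:
    token = hashlib.sha256(participant_id.encode("utf-8")).hexdigest()
    if token in seen_tokens:
        return False                     # repeat participation refused
    seen_tokens.add(token)
    save(answers)                        # answers stored with no link to participant_id
    return True

print(submit_response("student-42", {"q1": "yes"}))   # True: first response accepted
print(submit_response("student-42", {"q1": "no"}))    # False: already participated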
Limitations
Our work is limited in that it has focused on individual learners and case studies. We have yet to conduct any systematic study of outcomes around critical algorithmic literacies in learning environments where our principles have been put into practice. Our examples are also limited in that they show what is possible when our design implications are embraced, but not necessarily what is likely. For example, Scratch users—especially those who chose to engage with us by responding to our recruitment calls, coming to our workshops, and so on—are unlikely to be representative of children in general. We do not claim that the examples described here are representative of how a majority of children would respond to the same tools and contexts. Indeed, we believe that the uniqueness of children’s contexts, interests, and backgrounds means that no single tool or approach is likely to work for all children.
Moreover, although a number of our study participants engaged in questioning and critiquing algorithmic possibilities, many engaged with at least some of these problematic possibilities uncritically (e.g., by making Scratch Community Blocks projects that would only work for community members who have more than 5 followers). More structured support such as the “what’s wrong with this assignment?” prompt in the classroom example above will often be needed to engage a broader group in considering these questions.
In a sense, these facts serve as a reminder to designers and educators of the perils of a technocentric approach in learning—“the tendency to give […] centrality to a technical object” (Papert, 1987, p. 23). Brennan (2015) offered a model of how a designer can support the work of educators toward technology-use in the service of learning in a non-technocentric way. Beyond systems and curriculum designed based on the principles that we outline in this article, such models represent essential and complementary components that need to be in place in a learning environment for learners to engage in developing critical algorithmic literacies. With these limitations stated, and with the recognition that this is work-in-progress, we offer our principles in the hopes that other designers of educational technologies and experiences will build on our work and contribute to the larger project of fostering critical algorithmic literacies in children.
Conclusion
In their paper on designing construction kits for children, Resnick and Silverman (2005) present their final design principle as “iterate, iterate—then iterate again.” They conclude by stating that this applies to their principles as well. Our four principles are no exception to this excellent advice. Going forward, we intend to keep iterating on our principles, taking them apart, putting them back together, and changing them. We offer our principles with humility and a sincere desire to work toward the dual goals of supporting children in understanding the algorithmic systems that are increasingly shaping their worlds and doing what we can to give them the intellectual tools to question and resist those systems.
Acknowledgments
Many of the projects referred to in this article were financially supported by the US National Science Foundation. We would also like to acknowledge feedback and support from Mitchel Resnick, Natalie Rusk, Hal Abelson, Brian Silverman, Amos Blanton, and other members of the Scratch team. Finally, none of this work would have been possible without the children who generously tried out our new technologies, gave us feedback, and inspired us in multiple ways with their ingenuity and kindness.
Authors
Sayamindu Dasgupta is an Assistant Professor at the School of Information and Library Science, University of North Carolina at Chapel Hill. He designs and studies new ways in which young people can learn with and about data—especially in contexts of the communities that they live, learn, and play in.
Benjamin Mako Hill is an Assistant Professor of Communication at the University of Washington and a Faculty Associate at the Berkman Klein Center for Internet and Society at Harvard University. He studies digital public goods, collective action, and collaborative learning in online communities.
References
Abelson, H. (2012). From computational thinking to computational values. Proceedings of the 43rd ACM Technical Symposium on Computer Science Education, 239–240. https://doi.org/10.1145/2157136.2157206
Agre, P. E. (2014). Toward a critical technical practice: Lessons learned in trying to reform AI. In G. Bowker, S. L. Star, L. Gasser, & W. Turner (Eds.), Social science, technical systems, and cooperative work: Beyond the great divide. (pp. 131–157). Taylor & Francis Group. http://public.ebookcentral.proquest.com/choice/publicfullrecord.aspx?p=1689049
Alaimo, P. J., Langenhan, J. M., Tanner, M. J., & Ferrenberg, S. M. (2010). Safety teams: An approach to engage students in laboratory safety. Journal of Chemical Education, 87(8), 856–861. https://doi.org/10.1021/ed100207d
Ashton-Warner, S. (1986). Teacher. Simon & Schuster.
Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim code. Polity.
Bowker, G. C. (2005). Memory practices in the sciences. MIT Press.
Brennan, K. (2015). Beyond technocentrism: Supporting constructionism in the classroom. Constructivist Foundations, 10(3), 289–296. http://constructivist.info/10/3/289.brennan
Charmaz, K. (2006). Constructing grounded theory: A practical guide through qualitative analysis. Sage Publications.
Costanza-Chock, S. (2018). Design justice, A.I., and escape from the matrix of domination. Journal of Design and Science. https://doi.org/10.21428/96c8d426
Dasgupta, S. (2016). Children as data scientists: Explorations in creating, thinking, and learning with data [Thesis, Massachusetts Institute of Technology]. Massachusetts Institute of Technology. http://dspace.mit.edu/handle/1721.1/107580
Dasgupta, S. (2013a). Surveys, collaborative art and virtual currencies: Children programming with online data. International Journal of Child-Computer Interaction, 1(3–4), 88–98. https://doi.org/10.1016/j.ijcci.2014.02.003
Dasgupta, S. (2013b). From surveys to collaborative art: Enabling children to program with online data. Proceedings of the 12th International Conference on Interaction Design and Children (IDC ’13), 28–35. https://doi.org/10.1145/2485760.2485784
Dasgupta, S., & Hill, B. M. (2018). How “wide walls” can increase engagement: Evidence from a natural experiment in Scratch. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18), 361:1–361:11. https://doi.org/10.1145/3173574.3173935
Dasgupta, S., & Hill, B. M. (2017). Scratch community blocks: Supporting children as data scientists. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17), 3620–3631. https://doi.org/10.1145/3025453.3025847
diSessa, A. A. (2001). Changing minds: Computers, learning, and literacy. MIT Press.
Du, W., & Wang, R. (2008). SEED: A suite of instructional laboratories for computer security education. Journal on Educational Resources in Computing, 8(1), 1–24. https://doi.org/10.1145/1348713.1348716
Freire, P. (1986). Pedagogy of the oppressed. Continuum.
Gitelman, L. (Ed.). (2013). "Raw data" is an oxymoron. MIT Press.
Hautea, S., Dasgupta, S., & Hill, B. M. (2017). Youth perspectives on critical data literacies. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 919–930. https://doi.org/10.1145/3025453.3025823
Kafai, Y. B. (2006). Constructionism. In K. R. Sawyer (Ed.), The Cambridge handbook of the learning sciences (1st ed., pp. 35–46). Cambridge University Press.
Kafai, Y., Proctor, C., & Lui, D. (2019). From theory bias to theory dialogue: Embracing cognitive, situated, and critical framings of computational thinking in K-12 CS education. Proceedings of the 2019 ACM Conference on International Computing Education Research, 101–109. https://doi.org/10.1145/3291279.3339400
Kohno, T., & Johnson, B. D. (2011). Science fiction prototyping and security education: Cultivating contextual and societal thinking in computer security education and beyond. Proceedings of the 42nd ACM Technical Symposium on Computer Science Education - SIGCSE ’11, 9. https://doi.org/10.1145/1953163.1953173
Kumar, P., Vitak, J., Chetty, M., Clegg, T. L., Yang, J., McNally, B., & Bonsignore, E. (2018). Co-designing online privacy-related games and stories with children. Proceedings of the 17th ACM Conference on Interaction Design and Children - IDC ’18, 67–79. https://doi.org/10.1145/3202185.3202735
Le Dantec, C. A., Poole, E. S., & Wyche, S. P. (2009). Values as lived experience: Evolving value sensitive design in support of value discovery. Proceedings of the 27th International Conference on Human Factors in Computing Systems - CHI 09, 1141. https://doi.org/10.1145/1518701.1518875
Lee, C. H., & Garcia, A. D. (2015). “I want them to feel the fear…”: Critical computational literacy as the new multimodal composition. In Management Association, I. (Ed.), Gamification: Concepts, methodologies, tools, and applications (pp. 2196–2211). IGI Global. https://doi.org/10.4018/978-1-4666-8200-9.ch111
Lee, C. H., & Soep, E. (2016). None but ourselves can free our minds: Critical computational literacy as a pedagogy of resistance. Equity & Excellence in Education, 49(4), 480–492. https://doi.org/10.1080/10665684.2016.1227157
Lee, V. R. (2013). The quantified self (QS) movement and some emerging opportunities for the educational technology field. Educational Technology, 53(6), 39–42. http://www.jstor.org/stable/44430216
Lombana Bermúdez, A. (2017). Moderation and sense of community in a youth-oriented online platform: Scratch’s governance strategy for addressing harmful speech. In Perspectives on harmful speech online. Berkman Klein Center for Internet & Society Research Publication. https://dash.harvard.edu/handle/1/33746096
Monroy-Hernández, A., & Resnick, M. (2008). Empowering kids to create and share programmable media. Interactions, 15(2), 50–53. https://doi.org/10.1145/1340961.1340974
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.
Papert, S., & Harel, I. (1991). Situating constructionism. In Constructionism (Vol. 36, pp. 1–11). Ablex Publishing.
Petraglia, J. (1998). Reality by design: The rhetoric and technology of authenticity in education. Lawrence Erlbaum Associates. http://www.myilibrary.com?id=232329
Philip, T. M., Schuler-Brown, S., & Way, W. (2013). A framework for learning about big data with mobile technologies for democratic participation: Possibilities, limitations, and unanticipated obstacles. Technology, Knowledge and Learning, 18(3), 103–120. https://doi.org/10.1007/s10758-013-9202-4
Podesta, J. (2014). Big data: Seizing opportunities, preserving values. Executive Office of the President, United States of America.
Resnick, M., Maloney, J., Monroy-Hernández, A., Rusk, N., Eastmond, E., Brennan, K., Millner, A., Rosenbaum, E., Silver, J., Silverman, B., & Kafai, Y. (2009). Scratch: Programming for all. Communications of the ACM, 52(11), 60–67. https://doi.org/10.1145/1592761.1592779
Resnick, M., & Silverman, B. (2005). Some reflections on designing construction kits for kids. Proceedings of the 2005 Conference on Interaction Design and Children, 117–122. https://doi.org/10.1145/1109540.1109556
Schreuders, Z. C., McGill, T., & Payne, C. (2013). The state of the art of application restrictions and sandboxes: A survey of application-oriented access controls and their shortfalls. Computers & Security, 32, 219–241. https://doi.org/10.1016/j.cose.2012.09.007
Shaffer, D. W., & Resnick, M. (1999). "Thick" authenticity: New media and authentic learning. Journal of Interactive Learning Research, 10(2), 195–215.
Vakil, S. (2020). “I’ve always been scared that someday I’m going to sell out”: Exploring the relationship between political identity and learning in computer science education. Cognition and Instruction, 38(2), 87-115. https://doi.org/10.1080/07370008.2020.1730374