
The 4As: Ask, Adapt, Author, Analyze - AI Literacy Framework for Families

Published on Jun 29, 2021

Abstract

Families’ interactions with various forms of AI technology have recently attracted significant attention. Because these technologies often do not support developmentally appropriate, family-friendly interactions, we recognize an opportunity to create a framework that supports family AI literacy. Our novel framework is composed of four main dimensions (4As): ask, adapt, author, and analyze. We believe families can use this framework to develop a critical understanding of the smart technologies embedded in their lives, a prerequisite for ensuring algorithmic fairness. We define our AI literacy dimensions by building on prior work and through a series of co-design and AI learning sessions with families. Our current findings show how children perceive algorithmic bias differently from adults and how families engage in collaborative sense-making by probing, tricking, and authoring AI applications in playful ways. We discuss the implications of AI literacy from the broader perspective of technology development, public policy, and algorithmic justice. We argue that AI literacy is a fundamental right for families and propose a series of learning activities and guidelines to support and protect this right.

Key Findings

  • Children perceive AI bias differently than adults

  • Future algorithmic justice tools and curricula should leverage children’s innate curiosity and explore-versus-exploit drive in order to inform and support meaningful use of AI systems in the family

  • Parents are partners in AI literacy for families and should be included in future designs of AI education for children

  • AI literacy for families has implications at all levels of society (local community, policy, infrastructure) and should be considered a fundamental literacy.

1. Introduction

Children in the current digital information era are rapidly engaging with technologies powered by artificial intelligence (AI). AI refers to intelligence exhibited by machines and is thus sometimes known as machine intelligence. Unlike humans, machines acquire intelligence through algorithmic techniques inspired by domains such as statistics, mathematical optimization, and cognitive science, fueled by computer processing power and large amounts of data (Legg & Hutter, 2007). AI systems show great promise in helping children and families through improved online search quality, increased accessibility via advances in digital voice assistants, and AI-supported learning (Grossman et al., 2019; Ruan et al., 2020; Ruan et al., 2019). However, AI systems can also amplify bias, sexism, racism, and other forms of discrimination, particularly for those in marginalized communities (Angwin et al., 2016; Buolamwini & Gebru, 2018). Promoting a critical understanding of AI for children and families is essential in this context.

Without AI literacy, families, particularly those from historically marginalized groups, risk falling prey to misinformation and fear and missing out on future learning opportunities (Ferguson, 2012; Gebru, 2019; O’Neil, 2016). Families and children must work together to learn about AI systems and to think critically about how this technology impacts their lives (Druga et al., 2019). Prior research on family engagement with digital technologies stressed how important it is to consider variation between families and parenting styles (Coyne et al., 2017; Takeuchi, Stevens, et al., 2011). Therefore, to support algorithmic justice in families, we need to consider how a diversity of families can access these skills (DiSalvo et al., 2016; Yardi & Bruckman, 2012).

AI literacy does not occur in a vacuum but is influenced by social, cultural, institutional, and techno-infrastructural contexts. We need to consider the ecological and situational issues surrounding families and how macro- and micro-factors influence AI literacy in the modern family. It is therefore crucial to address the socio-ecological conditions that influence how families may adopt AI literacy and to create guidelines that integrate human-centered design into practice. An analysis of ecological systems (Bronfenbrenner, 1994) can explain how families could succeed with AI literacy and unveil the broader implications of such an intervention. There is a parallel need to develop design heuristics and frameworks that support the development of socio-technical systems integrating the systemic conditions that would facilitate non-specialized communities’ access to a critical understanding and use of AI (Gonzales, 2017).

Research on families’ interactions with technology is a growing area with implications for the design of new smart devices (Druga, 2018; McReynolds et al., 2017). Prior studies demonstrate that families can play a decisive role in guiding children on how to make meaningful use of technologies (Ito et al., 2009; Stevens & Penuel, 2010; Takeuchi & Stevens, 2011). However, the rapidly changing digital landscape is making it difficult for families to integrate advanced technology in meaningful and intentional ways.

To date, very little knowledge exists on how parents or guardians learn together with their children using tools for AI literacy. We wish to advance this body of research by posing the following research questions:

  • How do children and parents from different countries and diverse socioeconomic status (SES) perceive and interact with AI?

  • How can we best support parents to scaffold their children’s use of AI technologies in the home?

  • How can we design future technologies to best support families’ AI literacy?

Our goal in this paper is to understand how to better facilitate AI literacy in families. We investigate this from two perspectives: an ecological evaluation of current AI systems, and the design of new systems for AI literacy. Our research puts forth both a conceptual and an empirical understanding of how families engage with AI literacy activities. Such an understanding can inform the design of culturally tailored tools and resources. We contribute new insights on family AI practices as a means to address critical AI literacy needs in families. Finally, we develop a foundation that can encourage innovations that take advantage of family dynamics for AI literacy learning. We analyze and compare different prior data sets to propose a novel, research-based, family-facing framework for thinking with and about AI.

We begin with a brief review of ecological systems as they pertain to supporting AI literacy (Bronfenbrenner, 1994). Ecological systems theory refers to the multiple nested systems (i.e., macrosystems, exosystems, mesosystems, microsystems) that influence how people learn and develop.

  • Macrosystem factors: Social and cultural values

  • Exosystem factors: Technology infrastructure and policies

  • Mesosystem factors: Community centers, libraries, and schools

  • Microsystem factors: Families, peers, siblings, extended family, neighbors

Through a review of the literature, we consider how well current technological systems support the development of AI literacy. From our evaluation of ecological systems in AI literacy, we inductively develop a design framework for supporting a critical understanding and use of AI for families. Our framework considers four dimensions of AI literacy (Ask, Adapt, Author, and Analyze). We prototype and refine different learning activities, such as detecting bias, testing a voice assistant, coding a smart game, and drawing what is inside smart devices to explain how they work. These activities took place during four co-design sessions with an inter-generational group consisting of adult design researchers, child participants (n = 11, ages 7-11), and parents. They correspond to the different dimensions of our AI literacy framework, which we describe below.

Through a series of family co-design sessions, we found that children perceive bias in smart technologies differently than adults, and care less about technological shortcomings and failures as long as they are having fun interacting with the devices. Family members supported each other in various collaborative sense-making practices during the sessions by building on each other’s questions, suggesting repairs for communication breakdowns with the voice assistants, coming up with new and creative ways to trick the AI devices, and explaining or demonstrating newly discovered features.

We demonstrate how our novel framework supports the development of AI literacy through play, balanced partnership, and joint family engagement with AI learning activities, and we conclude with a series of guidelines for families.

Finally, we engage in a broader discussion that connects the ecological systems theory with our AI literacy framework to draw implications for the broader perspective of practice, program design, public policy, and algorithmic justice.

2. The Ecology of Family AI Literacy

Based on our evaluation of ecological systems (Bronfenbrenner, 1994), we discuss the impact of multiple nested systems (i.e., macrosystems, exosystems, mesosystems, microsystems) on family AI literacy.

2.1 Macrosystem factors: Socio-Cultural Values

Foster an environment where different identities can flourish. Macrosystems shape learning and technology practices through values, policies, and infrastructure (Bronfenbrenner, 1994). One macrosystem factor in AI literacy is the importance of an inclusive AI education for multicultural and multilingual families from different socio-economic backgrounds. This approach requires us to consider diverse families beyond WEIRD populations (Henrich et al., 2010). In order to include multiculturalism as a macrosystem factor for AI education, we need to be reflexive and consider how researchers approach such issues (Schön, 1987). We also recognize that, as Medin and Bang (2014) describe, the answers to our research questions will be shaped by the socio-cultural values of the person “who is asking.” We build on prior work on multicultural families’ technology literacy and joint-media engagement (Banerjee et al., 2018; Pina et al., 2018). As we conceptualize AI literacy, we define the term “literacy” as practice, rather than the development of an individual’s skills (Cole et al., 1997; Kulick & Stroud, 1993; Scribner & Cole, 1981). We situate the AI literacy practice in the constellation of socio-cultural practices that families engage in (Rogoff et al., 2014). In our effort to discover, encourage, and promote best practices of families using AI technologies in meaningful ways, we acknowledge the need to recognize multiple literacies and the relationships of power that they entail (Street, 2003). Therefore, we seek to foster an environment where heterogeneity, different identities, goals, and forms of learning and growth can flourish (Rosebery et al., 2010).

2.2 Exosystem factors: Technology infrastructure and policies

The brave new world of connected homes. The necessary technological infrastructure also determines access to AI literacy. For instance, a 2019 Pew study shows that in the USA, access to broadband is limited by data caps and speeds (Anderson, 2019). As AI systems increasingly take advantage of large-scale technological infrastructures, more families may be left disengaged if they are unable to connect to broadband (Riddlesden & Singleton, 2014). Moreover, we think it is essential for minority groups to be able not only to “read” AI, but also to “write” AI. Smart technologies do much of their computing in the cloud, and without access to high-speed broadband, marginalized families will have difficulty understanding and accessing AI systems (Barocas & Selbst, 2016). Families must be able to engage with AI systems in their homes so that they can develop a deeper understanding of AI. When designing AI education tools and resources, designers need to consider how the lack of access to stable broadband might lead to an AI literacy divide (Van Dijk, 2006).

Figure 1: Info-graphic showing the age of consent for youth in different EU member states, from Mikaite and Lievens (2018, 2020).

Policies and privacy. Risks to privacy are standard on the internet. Prior studies show that privacy concerns are among the main worries of children in Europe (Livingstone, 2018; Livingstone et al., 2011; Livingstone et al., 2019), and adults widely support the introduction of particular data protection measures for youth, such as Article 8 of the GDPR (Lievens, 2017; Regulation (EU) 2016/679 of The European Parliament and Council, 2016). According to a recent survey, 95% of European citizens believed that ‘under-age children should be specially protected from the collection and disclosure of personal data,’ and 96% thought that ‘minors should be warned of the consequences of collecting and disclosing personal data’ (European Parliament Eurobarometer Survey, 2011).

Furthermore, many companies do not provide clear information about the data privacy of voice assistants. In this context, policymakers and technology designers must take into consideration the unique needs and challenges of vulnerable populations. Normative and privileged lenses can impair conceptualizations of families’ privacy needs, while reinforcing or exacerbating power structures. In this context, it is crucial to have updated policies that look at how the AI technologies embedded in homes not only respect children’s and family privacy, but also anticipate and account for future potential challenges.

For example, in the United States, the Children’s Online Privacy Protection Act (COPPA) was passed in 1998 and seeks to protect kids under the age of 13. Despite the proliferation of voice computing, the Federal Trade Commission did not update its COPPA guidance for businesses until June 2017 to account for internet-connected devices and toys. COPPA guidelines now state that online services include “voice-over-internet protocol services” and that businesses have to get permission to store a child’s voice (Commission U.F.T. et al., 2017). However, recent investigations have found that in the case of the most widely used voice assistant, Amazon’s Alexa, only about 15% of “kid skills” provide a link to a privacy policy. Particularly concerning is the lack of parental understanding of AI-related policies and their relation to privacy (McReynolds et al., 2017). While companies like Amazon claim they do not knowingly collect personal information from children under the age of 13 without the consent of the child’s parent or guardian, recent investigations prove that this is not always the case (Lau et al., 2018; Zeng et al., 2017).

Not-for-profit organizations such as Mozilla, Consumers International, and the Internet Society have since decided to take a more proactive approach to these gaps and created a series of guidelines that are particularly useful for families learning how to better protect their privacy (Rogers, 2019). These efforts could be used to increase AI literacy by supporting families in understanding what data their devices are collecting, how this data is being used or potentially commercialized, and how they can control the various privacy settings, or demand such controls when they do not exist.

2.3 Mesosystem factors: Community

Mesosystem factors refer to the interactions that take place in one setting, which can influence interactions in another setting. For instance, what happens in a library, school, or community center for children and families can influence learning at home (and vice versa). Studies show that parental involvement in learning at home significantly influences school performance (Barron, 2004; Berthelsen, Walker, et al., 2008) and can be critical to children’s future success. For instance, the AI Family Challenge (AIFC) was a 15-week program implemented with 3rd-8th grade students (n = 7,500) and their families in under-resourced communities across 13 countries. Families learned to develop AI-based prototypes that solved problems in their communities. The goal of this program was to determine whether AI was of interest to such communities and to measure the impact of such an intervention on participants’ AI literacy. Pre- and post-surveys were conducted, as well as interviews with participants in the US, Bolivia, and Cameroon (Chklovski et al., 2019).

After the AIFC, 92% of parents believed their child was more able to explain AI to others, and 89% believed their child was capable of creating an AI application. The findings indicate the need to improve parent training materials, connect technical mentors to local sites, and make the curriculum more hands-on, engaging, and illustrative of machine learning concepts.

2.4 Microsystem factors: Families, peers, siblings, extended family, neighbors

Microsystem factors refer to specific interactions within the local environment that influence family learning. For this review, we look closely at family interactions in the home around AI literacy.

Figure 2: Example of curriculum modules created by Technovation for the international Curiosity Machine Competition for families

Figure 3: Example of curriculum modules created by Code.org for teaching children more about supervised learning.

Figure 4: Example of family workshop and learning activity from Curiosity Machine sessions.

A survey of 1,500 parents of elementary and middle school students, commissioned by Iridescent Technovation (2019), found that 80% of parents in the United States believe AI will replace the majority of jobs (not just low-skilled jobs), less than 20% understand where and how AI technologies are currently used, 60% of low-income parents have no interest in learning about AI, and less than 25% of children from low-income families have access to technology programs (Chklovski et al., 2019). Research on families’ interactions with technology is a growing area with implications for the design of new agents (McReynolds et al., 2017). As devices become more human-like in form or function, humans tend to attribute more social and moral characteristics to them (Druga, 2018; Druga et al., 2018; Kahn et al., 2011; Kahn Jr et al., 2012). These findings raise the question of parental engagement and intervention in children’s interactions with connected toys and intelligent agents. Prior studies showed that parents scaffold their children’s behavior when the family interacts with robots or interactive devices together (Lee et al., 2006). We observed the same behavior when families interact with Voice User Interfaces (VUIs), where parents help children repair various communication breakdowns with the conversational agents (Beneteau et al., 2019; Druga et al., 2017; Lovato & Piper, 2015). For instance, Beneteau and her colleagues (2019) noted that family interactions around Amazon Alexa devices facilitated joint-media engagement conversations with parents. At the same time, however, the devices could not “code switch” between adult and child requests; as a result, many frustrations occurred, and communication between the families and the voice assistant ultimately broke down. In a longitudinal study analyzing families’ uses of VUIs in the home, Porcheron et al. (2018) also showed that collaborative information retrieval is prevalent. Both children and parents use classical conversation techniques, such as changing prosody or the strategic use of silences, even when they engage in dialogue with a more transactional agent like Amazon’s Alexa (Beneteau et al., n.d.).

3. Methodology

Through our analysis of the ecological perspective on the current state of AI understanding for families, and building on theories of parental mediation and joint-media engagement (Takeuchi, Stevens, et al., 2011), we propose a new framework for defining family AI literacy (see Table 1). To examine our framework in action, we adhere to the standards and practices of Participatory Design (PD), specifically the method of Cooperative Inquiry (Druin, 2000; Guha et al., 2004). Under Cooperative Inquiry, adults and children work closely together as design partners, emphasizing relationship building, co-facilitation, design-by-doing together, and idea generation (Yip et al., 2017). Cooperative Inquiry works well for understanding AI systems and literacy because children already work closely with adults and are more likely to express their perceptions around childhood (Woodward et al., 2018). In design partnerships, the strong emphasis on relationship building allows children to be more open to experimentation and open dialogue.

Our co-design sessions focused on designing and eliciting responses from children and families around their perceptions of different aspects of AI systems. We conducted three 90-minute sessions between October and November 2019, each with 8-11 children. We also worked with families in a co-design session in December 2019 to understand children’s engagement with AI together with their parents.

3.1 Participants

An inter-generational co-design group, consisting of adult design researchers (undergraduate, masters, and doctoral students) and child participants (n = 11, ages 7-11), took part in the four design sessions. The team is called KidsTeam UW (all children’s names are given as initials). At the time of the study, children had typically been participating for 1-4 years (2016-2019). In the fourth session, three KidsTeam UW children and their families (e.g., parents, siblings) came to a weekend co-design session to engage together and discuss their perceptions of AI technologies.

3.2 Design Sessions

Each design session (both child and family sessions) at KidsTeam UW began with snack time (15 minutes), where the children gathered to eat, share, and develop relationships through play. In Circle Time (15 minutes), we gave children a “question of the day” to prime them to think about the design session, and provided instructions for engagement (verbal facilitation and activity printouts). The majority of the time was spent designing together (45 minutes), in which children engaged in design techniques (Walsh et al., 2010; Walsh et al., 2013; Walsh & Wronsky, 2019) with adult partners. Children broke up into smaller teams or remained together in a single design activity. Finally, the group came back together in discussion time (15 minutes) to reflect on the design experience.

We organized the sessions in the following way to investigate how the family AI Literacy framework could be utilized as a series of design activities:

Design Session 1 (October 2019): We showed the children different video clips of “algorithmic bias.” Video clips included AI not being able to recognize darker skin tones, voice assistants stuck in an infinite loop, and a very young child unable to get an Alexa Echo device to start. We used Big Paper (Walsh et al., 2013), a technique that allows children to draw on large sheets of paper, to reflect and consider what “bias” means.

Design Session 2 (October 2019): We provided children with technology activities involving three kinds of AI devices: Anki Cozmo (an AI toy robot), the Alexa Echo voice assistant, and Google QuickDraw (an AI that recognizes sketches). Each inter-generational team went through the stations and documented what was “surprising” about the technology and whether they were able to “trick” the AI system into doing something unexpected.

Design Session 3 (November 2019): Using Big Paper, we asked children and adults to draw out how they thought a voice assistant (Amazon Alexa) worked.

Design Session 4 (December 2019): Finally, five KidsTeam UW families came together for a weekend morning workshop to engage with multiple AI technology stations. Stations included Amazon Alexa, Google QuickDraw, and the Teachable Machine. One station in particular used Cognimates (Druga, 2018) and BlockStudio (Banerjee et al., 2018) to show models of how computers make decisions. Families spent, on average, 15 minutes per activity trying out the different technologies and wrote down their ideas and reflections on the technologies.

4. Data Analysis

We used an inductive process to analyze the audio captures for family-AI interaction themes (Charmaz, 2006). We began with memoing and open coding during the initial transcriptions of the video files. Through memoing and open coding, we noticed emerging themes related to family AI literacy practices and family joint engagement. We then began coding literacy practices and joint engagement from the transcripts of each of the five families, developing and revising codes as we found additional examples of AI-joint engagement, reviewing a total of 17 hours of video capture. We continued this process until the codes were stable (no new codes were identified) and applicable to multiple families. Once the codes were stable, we again reviewed the transcripts from each of the five families for AI literacy practices and family joint engagement. We included AI literacy practices from each participant in our corpus of 350 family-AI interactions, systematically going through each individual family’s transcript and pulling out examples for each code (when present). For our final analysis of the families’ AI interactions, a total of 180 interactions falling under the broad themes of AI literacy practices were deeply analyzed by two researchers. AI literacy practices were defined as interactions between family members and the various AI technologies, as defined in Table 1. We drew on the human-computer interaction conversational analysis approach to analyze family interactions set in an informal learning environment, with a focus on the participants’ experiences.

Figure 5: Scene from co-design session at KidsTeam University of Washington

5. AI Literacy Dimensions - The 4As

Based on our analysis of the ecological perspective (Bronfenbrenner, 1994) on the current state of AI, and building on our prior work (Beneteau et al., 2019; Druga, 2018; Druga et al., 2019; Druga et al., 2017; Druga et al., 2018), we consider ways to connect design dimensions for family AI literacy. Building on parental mediation and joint-media engagement frameworks (Takeuchi, Stevens, et al., 2011), we aim to analyze and support the scaffolding parents might provide to shape their children’s mental models of intelligent systems. In this section, we present our novel framework for family AI literacy (see Table 1), based on a thorough examination of the literature and our inductive co-design study. Our framework is composed of four dimensions (4As): Ask, Adapt, Author, and Analyze, and it describes family activities, literacy questions, and design guidelines for each dimension. Touretzky et al. propose five big ideas that children should learn about AI technologies (Touretzky et al., 2019). In contrast to this approach, our framework focuses on children as active learners and agents of change, who can decide how AI should work and not only discover its current functionalities. Another contribution of our framework is that it also addresses parents and tries to engage and support them in making more informed and meaningful use of the smart devices they might integrate into their homes.

5.1 Kids and Parents ASK AI

In prior studies, we investigated the challenges and opportunities of children growing up with digital technologies and their impact on the digital divide. In this context, access to AI literacy for families could prevent the emergence of an AI divide for the generations of children growing up with smart technologies. With intelligent agents in the home, children do not need to read and write to access the internet; they can ask an agent any question or request, and the device will return the first result with a human-like voice and friendly prosody. What seems at first to be a playful interaction between a child and a voice assistant can easily trigger events with real consequences (stories of children buying dollhouses and candy with Amazon’s Alexa without parental approval have already made national news). Our prior work (Druga et al., 2017) shows that, overall, children found the AI agents to be friendly and trustworthy, but that age strongly affected how they attributed intelligence to these devices. Younger participants (4-6 years old) were more skeptical of the devices’ intelligence, while the majority of older children (7-10 years old) declared the devices to be more intelligent than themselves. In a preliminary study, we found that older children mirror their parents’ choices for the smarter agent and also use very similar explanations and attributions, even if they participated in the study independently (Beneteau et al., 2020; Druga et al., 2018). These findings build on prior work in developmental and early cognitive psychology (Gopnik, 2020) to underline the importance of leveraging children’s natural tendency to “think like a scientist” when interacting with smart technologies.

5.2 Families ADAPT AI

In order to compare how children use VUIs in different countries, we ran a study with 102 children (7-12 years old) from four different countries (U.S.A., Germany, Denmark, and Sweden). Children outside of the U.S.A. were overall more critical of these technologies and less exposed to them. The ways children collaborated and communicated while describing their AI perceptions and expectations were influenced both by their socioeconomic and cultural backgrounds. Children in low- and medium-S.E.S. schools and community centers were better at collaborating than high-S.E.S. children. However, children in low- and medium-S.E.S. centers had a harder time advancing because they had less experience with coding and interacting with these technologies. Our findings show that children in Europe were overall more skeptical of the agents’ intelligence and truthfulness (Anders, 2019; Druga et al., 2019).

5.3 AUTHOR AI: From coding to teaching machines

Today, children cannot easily design their own AI devices, program their connected toys, or teach them proper behavior. However, some initiatives have started to design tools and platforms that enable youth to author AI (Code.org; Druga, 2018; “A Guide to AI Extensions to Snap!,” n.d.; “Machine Learning for Kids,” n.d.).

S.T.E.A.M. education has become a priority for schools and families around the world, and initiatives like “Hour of Code” and “Scratch Days” are currently reaching tens of millions of students in 180+ countries (Statistics — Code.org, n.d.). Learning how to program is also integrated into high school curricula across the U.K. and U.S.A. Meanwhile, parents are investing more resources to get their children involved in local technology and science clubs, camps, and coding events. Most educators, parents, and policy-makers are starting to recognize programming as a new literacy that enables our youth to acquire and apply computational thinking skills. The technology used at home and in the classroom is changing fast. These advancements raise the opportunity not only to teach children how to code, but also to teach computers and embodied agents by training their own AI models or using existing cognitive services (Druga, 2018).

Figure 6: Examples of AI coding platforms (BlockStudio & Cognimates) piloted with families during our study

In a series of longitudinal studies, we previously found that programming and training smart devices changes the way children attribute intelligence and trust to them. Participants from various S.E.S. backgrounds and different learning settings (public schools, private schools, community centers) became significantly more skeptical of AI’s smarts once they understood how it works (Druga, 2018; Druga et al., 2019). In traditional coding, children are used to sending a series of instructions to a machine and seeing how the code is compiled and executed. In AI learning, students have to understand the role of data and how it might influence the way machines execute algorithms (Cassell et al., 2000). Mioduser and Levy (2010) explored how children could understand robots’ emergent behavior by gradually modifying their environment. They discovered that young people are capable of developing a new schema when they can physically test and debug their assumptions. They also showed that the number of rules and new behaviors should be introduced gradually in the coding activity (Mioduser & Levy, 2010).
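To make this contrast concrete, consider the minimal sketch below. The rule-based function behaves exactly as written, while the classifier’s behavior is a product of its training examples: change the data and the same code produces a different machine. This is an illustrative toy (the sentences and labels are invented, and scikit-learn stands in for child-facing tools like Cognimates), not the actual software used in our sessions.

```python
# A toy contrast between the two paradigms children encounter.
# Hypothetical data; assumes scikit-learn is installed.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Traditional coding: the machine follows an explicit instruction.
def rule_based_mood(sentence: str) -> str:
    return "happy" if "yay" in sentence.lower() else "sad"

# AI learning: the machine's behavior comes from the training data.
train_sentences = ["yay we won", "this is fun", "I lost my toy", "I feel sick"]
train_labels = ["happy", "happy", "sad", "sad"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(train_sentences)
model = MultinomialNB().fit(X, train_labels)

test = "we won the game"
print(rule_based_mood(test))                        # "sad": the rule only knows "yay"
print(model.predict(vectorizer.transform([test])))  # "happy": "we won" appeared in a
                                                    # happy training example
```

Debugging the first program means re-reading the rule; debugging the second means re-examining the data, which is exactly the shift in reasoning that AI learning activities ask of students.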

5.4 Programmability helps to ANALYZE AI

Prior HCI studies analyzing adults’ mental models of AI technologies found that even a short tutorial with an experimenter (i.e., 15 min) can significantly increase the soundness of participants’ mental models. This phenomenon was consistent in Kulesza et al.’s study on intelligent music recommender systems and Bansal et al.’s study on the effect of different kinds of AI errors (Bansal et al., 2019; Kulesza et al., 2012). More so than users’ explicit mental models, research on AI systems in HCI has focused on explainability and trust. Rutjes et al. argue for capturing a user’s mental model and using it while generating explanations (Rutjes et al., 2019). At the same time, Miller invoked the concept of mental models through ideas of reconciling contradictions and our desire to create shared meaning in his comprehensive review of social science related to explainable AI (Miller, 2019).

Table 1: The 4As: Proposed Framework for Family AI Literacy Dimensions

| AI Literacy Layer | Family Activity | AI Literacy Question | AI Design Guideline |
| --- | --- | --- | --- |
| Ask | Interact fluently with an existing AI application or technology | How do you make it do...? Do you? Are you? | Transparency, Explainability |
| Adapt | Modify or customize an AI application to serve their needs | How do I modify it? | Personalization, Transparency |
| Author | Create a new AI application | How do I make a new one? | Progressive Disclosure |
| Analyze | Analyze the data and the architecture of their AI application and modify it to test different hypotheses | How does it work? What if? | Systemic Reframing |

When trying to understand how children and families analyze AI, we notice that programmability can play a significant role in influencing children’s perception of smart agents’ intelligence (Duuren, 1998; Scaife & Duuren, 1995; Scaife & Rogers, 1999). Parental mental models and attitudes can influence children’s attributions of intelligence to smart devices (Druga et al., 2018). Within this frame, we define sense-making as the process by which people encounter unfamiliar situations or contexts that they need to process and understand in order to move forward (Klein et al., 2006). By creating activities and technologies that support families in generating and testing various hypotheses about how smart technologies work, we allow family members not only to test and understand how AI works, but also to engage in systematic reframing and imagine how AI should work in order to support meaningful family activities (Dellermann et al., 2019).

5.5 The 4A Framework In Action

5.5.1 Ask Dimension - Identify AI Bias

When we initially asked children to describe what bias means and to give examples of bias, we found ourselves at a crossroads as we realized that none of our participants understood the term. We quickly noticed, however, that children understood the notions of discrimination and preferential treatment, and knew how to identify situations where technology was treating specific groups of people unfairly.

Figure 7: Examples of Families engaging with the Smart Toys activity during our co-design sessions

“Bias? It means bias” (L., a 7-year-old boy). During the initial discussion in the first study session, we tried to identify examples of bias that children could relate to, such as cookie or pet preferences. When talking about cat people versus dog people, D., a 9-year-old girl, said, “Everything they own is a cat! cat’s food, cat’s wall, and cat (...)” We then asked kids to describe dog people. A., an 8-year-old boy, answered: “Everything is a dog! The house is shaped like a dog, bed shapes like a dog.” After children shared these two perspectives, we discussed the concept of bias again, referring to the assumptions they had made about cat and dog people.

Race and Ethnicity Bias. In the final discussion of the first session, children were able to connect their examples from daily life with the algorithmic justice videos they had just watched. “It is about a camera lens which cannot detect people in dark skin,” said A., referring to other examples of bias. We asked A. why he thinks the camera fails in this way, and he answered: “It could see this face, but it could not see that face (...) until she puts on the mask.” B., an 11-year-old girl, added, “it can only recognize white people.” These initial observations from the video discussions were later reflected in the children’s drawings. When drawing how the devices work (see fig. 8), some children depicted how smart assistants separate people based on race. “Bias is making voice assistants horrible; they only see white people,” said A. in a later session while interacting with smart devices.

Age Bias. When children watched the video of a little girl having trouble communicating with a voice assistant because she could not pronounce the wake word correctly, they were quick to notice the age bias. “Alexa cannot understand the baby’s command because she said Lexa,” said M., a 7-year-old girl. She then added, “When I was young, I did not know how to pronounce Google,” empathizing with the little girl in the video. Another boy, A., jumped in, saying, “Maybe it could only hear different kinds of voices,” and shared that he does not know Alexa well because “it only talks to his dad.” Other kids agreed that adults use voice assistants more.

Figure 8: Examples of bias instances identified by children in sessions 1 and 2.

Gender Bias. After watching the video of the gender-neutral assistant and interacting with the voice assistants we had in the space, M. asked: “Why do AI all sound like girls?” She then concluded that “mini Alexa has a girl inside and home Alexa has a boy inside” and said that the mini Alexa is a copy of her: “I think she is just a copy of me!” While many of the girls were not happy with the fact that all voice assistants have female voices, they recognized that “the voice of a neutral gender voice assistant does not sound right” (B., 11 years old). These findings are consistent with the UNESCO report on the implications of gendering voice assistants, which shows that having female voices for voice assistants by default is a way to reflect, reinforce, and spread gender bias (UNESCO, EQUALS Skills Coalition, 2019).

5.5.2 Adapt Dimension - Trick the AI

In the second design session, we invited participants to engage directly with the smart technologies and see if they could trick them. We wanted to give the children concrete ways to test the devices’ limitations and biases, as we learned from our prior studies that children enjoy finding glitches and ways to make a program or a device fail (Druga, 2018). Such prompts not only give them a sense of agency but also provide valuable opportunities for debugging and for testing their hypotheses about how the technology works. During our workshop, children imagined and tested various scenarios for tricking the different smart devices and algorithmic prediction systems. When playing with Anki’s social robot Cozmo, they decided to disguise themselves with makeup, masks, glasses, or other props so the robot could not recognize them anymore. They also decided to disguise other robots to make them look like humans and see if that would trick the robot’s computer vision algorithm (see fig. 9). Children also used this strategy in our prior AI literacy workshops for families in Germany, and it is a fun activity that could easily be replicated at home.
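Why do disguises work? A vision model bases its decision on input features, and covering the most informative ones starves it of signal. The sketch below illustrates this occlusion effect with a toy logistic-regression classifier on made-up feature vectors; it is a hypothetical stand-in for, not a reconstruction of, Cozmo’s actual face recognition pipeline.

```python
# Toy illustration of why a disguise can fool a vision model: occluding
# (zeroing out) the informative features changes the classifier's decision.
# Synthetic vectors and logistic regression stand in for a real recognizer.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Pretend image features: the first 5 dimensions carry the "face" signal,
# the remaining 15 are uninformative background.
def make_person(face_signal: float, n: int = 100) -> np.ndarray:
    X = rng.normal(size=(n, 20))
    X[:, :5] += face_signal  # shift only the informative dimensions
    return X

X = np.vstack([make_person(+2.0), make_person(-2.0)])
y = ["child"] * 100 + ["adult"] * 100
clf = LogisticRegression().fit(X, y)

face = make_person(+2.0, n=1)   # a new "child" image
masked = face.copy()
masked[:, :5] = 0.0             # the "mask": occlude the face features

print(clf.predict(face))          # recognized as "child"
print(clf.predict_proba(masked))  # occluded: probabilities collapse toward 0.5,
                                  # so the model can no longer tell who it sees
```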

Figure 9: Examples of ways in which children are trying to trick Cozmo robot

When playing with the QuickDraw app, children were at first amazed at how quickly and efficiently the program guessed their drawings, so they deployed many different strategies to confuse it. They first tried to draw nonsensical drawings to see if they would still get object predictions; they then decided that multiple children should draw on the same device at the same time so that the program would have a hard time keeping up with their drawing speed. When interacting with Amazon’s voice assistant, Alexa, participants found various ways to probe whether it was biased. For instance, they tried speaking Spanish to see if the device would recognize a new language, called the device by different names (“Lexa”) to see if it could deal with more informal language, asked “silly” questions to see if the device could engage in child play (e.g., “Call me princess”), and tried to see if it could sing songs from different locations, such as the North Pole or the Indian Ocean. Very often, children built on each other’s questions during the interaction and helped each other reformulate a question when needed. This finding is consistent with prior work in this field, which shows how much peers or family members can help repair communication breakdowns when interacting with voice assistants (Beneteau et al., 2019; Druga et al., 2017). While trying to probe and trick the voice assistant, children voiced several privacy concerns: “Amazon can hear everything users have said to their Alexas,” said A. He then added, “Alexa buys data, takes data, and gives it to people who build Alexa.” D. was worried that “the tiny dots on Alexa are tiny eyes where people can see users,” so she decided to cover the device with post-its. From these examples, we see how children’s privacy concerns can vary widely based on their naive theories (Inagaki, 1993), prior experiences with these technologies, and conversations they had with or heard from their parents.

Figure 10: Examples of ways in which children were trying to trick the AI

Figure 11: Examples of children coding a game with BlockStudio and a family training a custom model with Teachable Machine

5.5.3 Author Dimension - Design, Code, Teach the AI

The democratization of current AI technologies allows children to communicate with machines not only via code but also via natural language and computer vision. These new interfaces make it easier for a child to control and even “program” an agent via voice, but they make it harder for a child to debug when the machine does not behave as expected. During our design sessions, children had the opportunity to discover a series of AI programming applications individually, and then to use them together with their parents. Sometimes families would start by playing with example games that recognized their gestures or objects; we would then ask them to make the games more or less intelligent. Other times, families would come up with their own project ideas and start a program from scratch. We would ask the children to explain specific concepts from their project. “What does the loop mean?” asked one of the researchers. M. answered by drawing a circle in the air. We also asked both children and parents to reflect on how they could make the technology suitable and meaningful for their families. D.’s older sister said they could program the Sphero ball robot for “maybe dog chasing.”

In all the authoring activities, families tested their programs in various ways, moving their bodies together, standing up and sitting down, while one family member went back and forth to modify the code blocks or the parameters of the smart games to see what would happen. Children and parents engaged in a balanced partnership, especially when using applications where it was straightforward for multiple people to take turns interacting with the program (e.g., QuickDraw, Cognimates motion games, Teachable Machine vision training). Similar to prior studies, parents helped scaffold their children’s behavior when interacting with robots or interactive devices together (Chang & Breazeal, 2011; Freed, 2012).

When M. and her dad were playing together with the Teachable Machine platform (see fig. 11), the dad would probe his daughter with guiding questions. “So I put in 150 pictures, and you put in 25, so the model knows me better because I put more pictures in it. The more pictures I put in, the more the model will learn. How would you fix it?” asked M.’s dad. M. replied, “add more,” and proceeded to add more pictures of herself. When she realized she could not add more pictures after a model was trained, she said, “No, we have to re-do it. Daddy goes first this time.” After training their model a second time, M. and her dad tried to trick it, both facing the camera at the same time to see which one would be recognized. M. noted that, to the machine, they looked very similar, but she had a pink bow, and she thought that was why the machine could recognize her. She came up with another way of tricking the machine: giving her pink bow to her dad.
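M.’s dad is describing class imbalance: a class with far more training examples tends to win ambiguous predictions, and “add more” rebalances the data. The sketch below reproduces that effect with a toy k-nearest-neighbors classifier on invented “embedding” vectors; Teachable Machine’s real in-browser transfer learning is more sophisticated, so this is only an analogy.

```python
# Toy reproduction of the class-imbalance effect M.'s dad describes:
# "dad" has 150 training examples, "M." only 25, so ambiguous inputs
# tend to be labeled "dad". Invented 2-D features stand in for image
# embeddings; kNN stands in for Teachable Machine's actual model.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
dad = rng.normal(loc=0.0, size=(150, 2))  # 150 "dad" pictures
m = rng.normal(loc=1.0, size=(25, 2))     # 25 "M." pictures, overlapping classes

X, y = np.vstack([dad, m]), ["dad"] * 150 + ["M."] * 25
model = KNeighborsClassifier(n_neighbors=7).fit(X, y)

# A face halfway between the two: its neighbors are mostly "dad"
# simply because "dad" dominates the training set.
print(model.predict([[0.5, 0.5]]))

# M.'s fix ("add more"): with balanced data, the ambiguous input is
# no longer pulled toward the former majority class.
m_extra = rng.normal(loc=1.0, size=(125, 2))
X2, y2 = np.vstack([X, m_extra]), y + ["M."] * 125
model2 = KNeighborsClassifier(n_neighbors=7).fit(X2, y2)
print(model2.predict([[0.5, 0.5]]))
```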

We observed the same behavior when families interacted with voice assistants: all family members helped each other repair various communication breakdowns, similar to prior studies (Beneteau et al., 2019). For example, R.’s dad was trying to get the voice assistant to act like a cat and said “meow” to the device. “Oh, you have to say something,” replied R., his 11-year-old son, who then added, “if you wanna wake her up, you should say something like Alexa.” The device turned blue, and R. said, “meow.” Afterwards, the voice assistant started to meow.

From these examples, we see how children built on the experiences and skills developed in prior study sessions to probe the technology as they were designing it: asking it questions, trying to trick their games, debugging collaboratively, or teaching and supporting each other. In this way, our Ask, Adapt, and Author framework dimensions become intertwined in practice and help families gain a more in-depth understanding and control of AI technologies.

5.5.4 Analyze Dimension - How does it work? How do we make it better?

The last step in our design sessions with families was the critical analysis of the technologies that were discussed, used, or created in the other study sessions. This critical analysis was done as part of a group discussion at the end of the study, in which children, parents, and researchers participated in a circle. Analysis also happened throughout the other sessions, every time we asked participants to draw and explain how the devices work and what they have inside. With these prompts, we aimed to discover the families’ mental models of AI technologies and observe how these explanations drew on or influenced their direct interactions with smart devices. The purpose of the Analyze discussion was also to elicit systematic reframing: families reflected on how they might make better use of AI systems in the future and on when, and whether, they should use such technologies.

What is inside? To help uncover how children conceptualize smart devices, we asked them to draw what is inside a device and explain how it works. Children resorted to various representations and explanations, saying there was a computer inside, a series of apps, a robot, a phone, or a search engine. “There is a search engine inside the Alexa, but I do not know what it looks like,” said L., a 10-year-old boy.

Y. and S., two 9-year-old girls, said that there is an army of people who sit at their computers inside the “Company of Alexa” and reply to all the questions after researching the answers online. “There is a bunch of cords and a speaker inside the Alexa. It would connect to a computer and link it to Amazon people. If the question is what is the weather, it [the person] would search the weather and type it up and let Alexa say it,” said Y.

The most common analogy children made was to the mobile apps they are well familiar with. Children imagined that the voice assistant would use different mobile apps depending on the question the user asks (see fig. 12). D., another 9-year-old girl, also imagined how the different devices are linked to each other: “if Alexa does not know an answer, it asks other Alexa first before asking Amazon, once one Alexa gets the answers (...) every single Alexa in the world will get that answers.” The younger children (6-7 years old) provided more vitalistic explanations, consistent with prior studies (Inagaki, 1993). “There is a brain inside Alexa, and there is a part that connects to a computer with a speaker. The speaker will shout out the answer,” said M., a 7-year-old girl. The older children (8-11 years old) gave very different explanations, primarily related to other technologies or applications they currently use: “Alexa looks at every place it can search for an answer: Amazon, YouTube, Internet, Weather, Map, any place,” said A., an 8-year-old boy. “The database is a box with stuffs in it. The stuffs are statements you tell Alexa,” added R., an 11-year-old boy.

It is as simple as 2+2. During the design sessions, children tried to validate their mental models by probing the different devices with questions. Children also tried to find out the age of the devices in order to determine how much they could trust them. They were disappointed by the answer Alexa gave when they asked how old it is: “it is as simple as 2+2.” They described this answer as “questionable,” as they found it hard to believe a voice assistant could possess so much information at the age of 4. B. said the assistant must be at least 20 years old (see fig. 13).

Figure 12: Examples of drawings from children explaining what is inside the voice assistant

Figure 13: The poster made by B., an 11-year-old girl, describing her process of determining how old the voice assistant is

When children found bugs or limitations in the device’s answers, they thought the errors happened because the device “relies too much on the internet.” Children asked to know who programmed the voice assistant in order to understand why the device was lying about its age. From this example, we see how our participants were able to draw on prior workshop experiences, not only understanding how the device’s behavior is linked to the way it was programmed, but also figuring out what questions to ask in order to test the device.

6. Discussion

Today’s world is increasingly governed by decisions made through AI and algorithms. While these tools show incredible promise in healthcare, education, and other fields, there is also a need to support ways in which people (particularly those from vulnerable and marginalized populations) can carefully critique the ways AI could amplify racism, sexism, and other forms of discrimination. For people to start considering algorithmic justice early, we must find ways for them to develop forms of literacy around AI. We argue that AI justice and AI literacy begin in early interactions, inquiries, and investigations in the family unit.

AI literacy, however, is not a form of knowledge that can simply be taught in a didactic, lecture-based form (Druga, 2018). Instead, designers need to consider how to promote sense-making, collaboration, questioning, and critical thinking. How might we design future AI systems for families that tap into the idea of “children as scientists” and leverage both their curiosity and their explore/exploit drives? Prior work shows that children are developmentally primed for this type of exploration (Gopnik, 2020), and we believe it is a missed opportunity not to provide AI literacy opportunities both through the design of future smart technologies and through parenting.

Based on our prior research and the findings of this study, we propose a novel AI literacy framework for designers and educators to consider in order to support critical understanding and use of AI systems for families. We believe it is important to consider this design framework in the context of our current analysis of nested ecological systems (Bronfenbrenner, 1994).

In Asking sessions, children and families can inquire and interact with AI agents through various means, such as calling out with voice interactions, drawing, and playing. Embedded in these Asking interactions, however, is the need for privacy policies that are transparent to families (exosystem). Families have several questions about privacy, technology, policies, and their children (Zeng et al., 2017). Therefore, how do we support families in asking and interacting with AI agents in ways that keep their information safe and confidential? Designers also need to consider how at-home interactions happen between children and families (microsystems). In this context, are families able to collaborate and ask AI agents together? How do prior relationships in families mediate how comfortable family members are engaging with AI at home?

With Adaptation sessions, families shift and mitigate their perceptions and engagements around AI to fit their contexts. However, in adapting to AI, there remain questions of negotiation and power (Barocas & Selbst, 2016). AI systems are unable to code-switch and recognize children and adults (Beneteau et al., 2019). How are the larger cultural capital and social contexts (macrosystems) of families accounted for in AI? For instance, bilingual families can switch and merge languages (e.g., Spanglish). For AI voice assistants, this means having to adopt a single language. Similarly, AI systems have difficulty recognizing different languages and accents (macrosystems). In this case, families who may have grown up together within specific social and cultural norms now face systems that are unable to adapt to these larger macrosystems.

For the Author dimension, families need a chance to build and create in order to develop AI literacy. We ask, though, who has the chance and opportunity to build? Even if designers create authoring systems for AI engagement, access can depend entirely on technology infrastructure at home (exosystems) (Riddlesden & Singleton, 2014). Authoring also means learning how to build, which may privilege families embedded in communities, libraries, schools, and networks that can teach and build knowledge capacity.

Finally, under Analyze, the design of AI learning tools can be situated towards collaboration and sense-making (Ash, 2004; Paul & Reddy, 2010). This approach assumes that different family units work together (microsystem). Therefore, how can careful reflection on AI be designed to deal with real family constraints, like working families, families with limited time, and families that move often (e.g., children living between households)? How might designers create activities and technologies that support diverse families in generating and testing various hypotheses about how smart technologies work, and in engaging in systematic reframing of how AI should work in order to support meaningful and inclusive family activities (Dellermann et al., 2019)?

Overall, while complex ecological systems need to be considered within design frameworks, there are still takeaways for family AI literacy and justice. Our study shows that across the Ask, Adapt, Author, and Analyze dimensions, parental roles and relationships still matter when families are learning about AI together. Aarsand (2007) describes “asymmetrical relations” between parents and children concerning assumptions about expertise with computers and video games as both a challenge and an opportunity for joint engagement with these media. The so-called “digital divide,” through which children are considered to be experts with digital media while adults are positioned as novices, becomes a “resource for both children and adults to enter and sustain participation in activities” (Aarsand, 2007). Children can teach parents about AI technologies, but it is also parents’ responsibility to teach children about the values that matter in their community and how AI tools and systems align with these values (Friedman et al., 2008).

6.1 Design Features that Encourage AI Literacy for Families

Using our findings, we can examine the conditions and processes that our family AI literacy framework could support. We use our findings to show how the Ask, Adapt, Author, and Analyze dimensions can lead to a critical understanding of AI for families (Druga, 2018; Druga et al., 2019) through a balanced engagement with these new technologies (Sobel et al., 2004; Takeuchi, Stevens, et al., 2011; Yip et al., 2017).

Mutual engagement (i.e., multiple family members are equally motivated to participate): Families in this study were able to participate in different ways, whether asking voice assistants questions, playing with and authoring new AI systems together, or analyzing how bias is introduced into smart technologies.

Dialogic inquiry (i.e., inspiring collaboration and meaning-making): Families can analyze an AI system together to figure out how it works. They can also determine how the AI system needs to adapt to their family’s culture, rules, and background.

Co-creation (i.e., through co-usage, people create shared understanding): Parents and children can come together to ask, adapt, author, and analyze AI systems in order to find out what each of them currently knows and what they would like to know more about.

Boundary crossing (i.e., spans time and space): Families can consider how AI systems pervade multiple technologies. Whether in Internet searches, YouTube recommendations, or voice assistants of many forms, recognizing how pervasive AI has become across platforms helps families see how AI itself crosses boundaries.

Intention to develop (i.e., gaining experience and developing over time): Families can consider how they are adapting to AI systems. For instance, are the questions they ask voice assistants changing? Are families noticing when AI systems may be present? Interestingly, families can also develop by understanding how AI systems themselves adapt to different people and contexts.

Focus on content, not control (i.e., the interface does not distract from the interaction): With some AI systems, families can engage through multiple straightforward mechanics. By asking voice assistants questions, seeing whether AI systems can recognize drawings and sketches, and experimenting with computer vision models, families now have many simple ways to question and critique AI systems, as the sketch below illustrates.
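One such simple mechanic, sketched here under our own assumptions (the image filenames are hypothetical and not from the study), is to feed both a child’s drawing and a photograph of the same object to a pretrained image classifier from the transformers library and compare the answers. Models trained mostly on photographs often falter on drawings, which makes for an easy family experiment:

```python
# pip install transformers torch pillow
from transformers import pipeline

# Loads a default pretrained image-classification model.
classifier = pipeline("image-classification")

# Hypothetical files: a child's drawing and a photo of the same animal.
for image_path in ["drawing_of_a_cat.png", "photo_of_a_cat.jpg"]:
    top = classifier(image_path)[0]
    # Comparing the two outputs invites the question: what kinds of
    # images was this model trained on, and whose images were left out?
    print(f"{image_path}: {top['label']} ({top['score']:.3f})")
```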

7. Conclusion

Our aim in designing is to support families in raising a generation of children who are not merely passive consumers of AI technologies but active creators and shapers of its future. With our AI literacy framework, we aim to encourage and enable families to develop a critical understanding of AI. We propose this framework from an ecological systems theory perspective and present implications for supporting family AI literacy across the nested layers of our society. As designers of technologies, we aim to support a diverse population of children and adults and to provide inspiration and guidance for future designs of more inclusive human-machine interactions. We hope that by democratizing access to AI literacy through tinkering and play, we will enable families to step in and decide when and how they wish to invite AI into their homes and lives.

Authors

Stefania Druga (Code and Cognition Lab at the University of Washington) is the creator of Cognimates, a platform for AI education for families, and a Ph.D. student at the University of Washington. Her research on AI education began during her master’s in the Personal Robots Group at the MIT Media Lab. She is also a former Weizenbaum research fellow in the Critical AI Lab and an assistant professor at NYU ITP and RISD, teaching graduate students how to hack smart toys for AI education.

Jason Yip (Digital Youth Lab at the University of Washington) is a professor at the Information School and an adjunct assistant professor in the Department of Human-Centered Design and Engineering at the University of Washington. His research examines how technologies can support parents and children learning together. His research is supported by the National Science Foundation, National Institutes of Health, Mozilla, and the Institute of Museum and Library Services.

Michael Preston (Joan Ganz Cooney Center) is the Executive Director of the Joan Ganz Cooney Center at Sesame Workshop, a research and innovation lab that works to advance children's learning and health in the digital age. His work has focused on using technology to improve teaching and learning, drive student agency and interest, and create models for systemic change in K-12 and university contexts.

Devin Dillon (Technovation) is a Senior Director who leads the AI Family Challenge at Iridescent. In the program’s first year, Devin scaled the initiative to 13 countries with more than 7,500 participants.

References

Aarsand, P. A. (2007). Computer and video games in family life: The digital divide as a resource in intergenerational interactions. Childhood, 14(2), 235–256.

Anders, L. (2019). AI for kids — it is our responsibility to enable children worldwide to engage with artificial intelligence. Medium. (Accessed on 04/11/2020). https://medium.com/@_tlabs/ai-for-kids-it-is-our-responsibility-to-enable-children-worldwide-to-engage-with-artificial-ec0d5c627945

Anderson, M. (2019). Mobile technology and home broadband 2019. Pew Research Center, 2.

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias. ProPublica, May 23, 2016.

Ash, D. (2004). Reflective scientific sense-making dialogue in two languages: The science in the dialogue and the dialogue in the science. Science Education, 88(6), 855–884.

Banerjee, R., Liu, L., Sobel, K., Pitt, C., Lee, K. J., Wang, M., Chen, S., Davison, L., Yip, J. C., Ko, A. J., et al. (2018). Empowering families facing english literacy challenges to jointly engage in computer programming. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 622.

Bansal, G., Nushi, B., Kamar, E., Weld, D. S., Lasecki, W. S., & Horvitz, E. (2019). Updates in human-ai teams: Understanding and addressing the performance/compatibility tradeoff. Proceedings of the AAAI Conference on Artificial Intelligence, 33, 2429–2437.

Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. Calif. L. Rev., 104, 671.

Barron, B. (2004). Learning ecologies for technological fluency: Gender and experience differences. Journal of Educational Computing Research, 31(1), 1–36.

Beneteau, E., Guan, Y., Richards, O. K., Zhang, M. R., Kientz, J. A., Yip, J., & Hiniker, A. (2020). Assumptions checked: How families learn about and use the Echo Dot.

Beneteau, E., Richards, O. K., Zhang, M., Kientz, J. A., Yip, J., & Hiniker, A. (2019). Communication breakdowns between families and Alexa. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 243.

Berthelsen, D., Walker, S. et al. (2008). Parents’ involvement in their children’s education. Family matters, (79), 34.

Bronfenbrenner, U. (1994). Ecological models of human development. Readings on the development of children, 2(1), 37–43.

Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Conference on fairness, accountability and transparency, 77–91.

Cassell, J., Sullivan, J., Churchill, E., & Prevost, S. (2000). Embodied conversational agents. MIT press.

Chang, A., & Breazeal, C. (2011). Tinkrbook: Shared reading interfaces for storytelling. Proceedings of the 10th International Conference on Interaction Design and Children, 145–148.

Charmaz, K. (2006). Constructing grounded theory: A practical guide through qualitative analysis. Sage.

Chklovski, T., Jung, R., Fofang, J. B., Gonzales, P., Hub, B. T., & La Paz, B. (2019). Implementing a 15-week AI-education program with underresourced families across 13 global communities. International Joint Conference on Artificial Intelligence.

Code.org. (n.d.). (Accessed on 04/11/2020).

Cole, M., Amsel, E., & Renninger, K. (1997). Cultural mechanisms of cognitive development. Change and development: Issues of theory, method, and application, 245–263.

U.S. Federal Trade Commission. (2017). Children’s online privacy protection rule: A six-step compliance plan for your business.

Coyne, S. M., Radesky, J., Collier, K. M., Gentile, D. A., Linder, J. R., Nathanson, A. I., Rasmussen, E. E., Reich, S. M., & Rogers, J. (2017). Parenting and digital media. Pediatrics, 140(Supplement 2), S112–S116.

Dellermann, D., Calma, A., Lipusch, N., Weber, T., Weigel, S., & Ebel, P. (2019). The future of human-ai collaboration: A taxonomy of design knowledge for hybrid intelligence systems. Proceedings of the 52nd Hawaii International Conference on System Sciences.

DiSalvo, B., Khanipour Roshan, P., & Morrison, B. (2016). Information seeking practices of parents: Exploring skills, face threats and social networks. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 623–634.

Druga, S. (2018). Growing up with ai: Cognimates: From coding to teaching machines (Doctoral dissertation). Massachusetts Institute of Technology.

Druga, S., Vu, S. T., Likhith, E., & Qiu, T. (2019). Inclusive ai literacy for kids around the world. Proceedings of FabLearn 2019, 104–111.

Druga, S., Williams, R., Breazeal, C., & Resnick, M. (2017). “Hey Google is it OK if I eat you?”: Initial explorations in child-agent interaction. Proceedings of the 2017 Conference on Interaction Design and Children, 595–600.

Druga, S., Williams, R., Park, H. W., & Breazeal, C. (2018). How smart are the smart toys?: Children and parents’ agent interaction and intelligence attribution. Proceedings of the 17th ACM Conference on Interaction Design and Children, 231–240. https://doi.org/10.1145/3202185.3202741

Druin, A. (2000). The role of children in the design of new technology. Submitted to ACM Transactions on Human-Computer Interaction.

Duuren, M. V. (1998). Gauging children’s understanding of artificially intelligent objects: A presentation of “counterfactuals”. International Journal of Behavioral Development, 22(4), 871–889.

Eurobarometer. (2011). SPECIAL EUROBAROMETER 359: Attitudes on data protection and electronic identity in the European Union. Conducted by TNS Opinion & Social at the request of Directorate-General Justice, Information Society & Media and Joint Research Centre. European Commission.

Ferguson, A. G. (2012). Predictive policing and reasonable suspicion. Emory LJ, 62, 259.

Freed, N. A. (2012). “This is the fluffy robot that only speaks French”: Language use between preschoolers, their families, and a social robot while sharing virtual toys (Doctoral dissertation). Massachusetts Institute of Technology.

Friedman, B., Kahn, P. H., & Borning, A. (2008). Value sensitive design and information systems. The handbook of information and computer ethics, 69–101.

Gebru, T. (2019). Oxford handbook on ai ethics book chapter on race and gender. arXiv preprint arXiv:1908.06165.

Gonzales, A. (2017). Technology maintenance: A new frame for studying poverty and marginalization. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 289–294.

Gopnik, A. (2020). Childhood as a solution to explore–exploit tensions. Philosophical Transactions of the Royal Society B, 375(1803), 20190502.

Grossman, J., Lin, Z., Sheng, H., Wei, J. T.-Z., Williams, J. J., & Goel, S. (2019). Mathbot: Transforming online resources for learning math into conversational interactions.

Guha, M. L., Druin, A., Chipman, G., Fails, J. A., Simms, S., & Farber, A. (2004). Mixing ideas: A new technique for working with young children as design partners. Proceedings of the 2004 Conference on Interaction Design and Children: Building a Community, 35–42.

A guide to AI extensions to Snap! (n.d.). (Accessed on 04/11/2020). https://ecraft2learn.github.io/ai/

Henrich, J., Heine, S. J., & Norenzayan, A. (2010). Most people are not WEIRD. Nature, 466(7302), 29.

Inagaki, K. (1993). Young children’s differentiation of plants from nonliving things in terms of growth. Biennial Meeting of the Society for Research in Child Development, New Orleans.

Ito, M., Baumer, S., Bittanti, M., Cody, R., Stephenson, B. H., Horst, H. A., Lange, P. G., Mahendran, D., Martínez, K. Z., Pascoe, C., et al. (2009). Hanging out, messing around, and geeking out: Kids living and learning with new media. MIT press.

Kahn, P. H., Reichert, A. L., Gary, H. E., Kanda, T., Ishiguro, H., Shen, S., Ruckert, J. H., & Gill, B. (2011). The new ontological category hypothesis in human-robot interaction. Human-Robot Interaction (HRI), 2011 6th ACM/IEEE International Conference on, 159–160.

Kahn Jr, P. H., Kanda, T., Ishiguro, H., Freier, N. G., Severson, R. L., Gill, B. T., Ruckert, J. H., & Shen, S. (2012). “Robovie, you’ll have to go into the closet now”: Children’s social and moral relationships with a humanoid robot. Developmental psychology, 48(2), 303.

Klein, G., Moon, B., & Hoffman, R. R. (2006). Making sense of sensemaking 2: A macrocognitive model. IEEE Intelligent systems, 21(5), 88–92.

Kulesza, T., Stumpf, S., Burnett, M., & Kwan, I. (2012). Tell me more? The effects of mental model soundness on personalizing an intelligent agent. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1–10.

Kulick, D., & Stroud, C. (1993). Conceptions and uses of literacy in a papua new guinean village. Cross-cultural approaches to literacy, 30–61.

Lau, J., Zimmerman, B., & Schaub, F. (2018). Alexa, are you listening? Privacy perceptions, concerns and privacy-seeking behaviors with smart speakers. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), 1–31.

Lee, M. K., Davidoff, S., Zimmerman, J., & Dey, A. (2006). Smart homes, families, and control.

Legg, S., & Hutter, M. (2007). Universal intelligence: A definition of machine intelligence. Minds and machines, 17(4), 391–444.

Lievens, E. (2017). Children and the gdpr: A quest for clarification and the integration of child rights considerations: Panel: Generation zero-data & digital marketing protections for children and teens under the gdpr, coppa and the new fcc privacy rules. Computers, Privacy & Data Protection: The Age of Intelligent Machines.

Livingstone, S. (2018). Children: A special case for privacy? Intermedia, 46(2), 18–23.

Livingstone, S., Haddon, L., Görzig, A., & Ólafsson, K. (2011). Risks and safety on the internet: The perspective of European children: Full findings and policy implications from the EU Kids Online survey of 9-16 year olds and their parents in 25 countries.

Livingstone, S., Stoilova, M., & Nandagiri, R. (2019). Children’s data and privacy online: Growing up in a digital age: An evidence review.

Lovato, S., & Piper, A. M. (2015). Siri, is this you?: Understanding young children’s interactions with voice input systems. Proceedings of the 14th International Conference on Interaction Design and Children, 335–338.

Machine learning for kids. (n.d.). (Accessed on 04/11/2020).

McReynolds, E., Hubbard, S., Lau, T., Saraf, A., Cakmak, M., & Roesner, F. (2017). Toys that listen: A study of parents, children, and internet-connected toys. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 5197–5207.

Medin, D. L., & Bang, M. (2014). Who’s asking?: Native science, western science, and science education. MIT Press.

Milkaite, I., & Lievens, E. (2018). Children’s right to privacy and data protection around the world: Challenges in the digital realm. European Journal of Law and Technology, 10(1).

Milkaite, I., & Lievens, E. (2020). Child-friendly transparency of data processing in the eu: From legal requirements to platform policies. Journal of Children and Media, 14(1), 5–21.

Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38.

Mioduser, D., & Levy, S. T. (2010). Making sense by building sense: Kindergarten children’s construction and understanding of adaptive robot behaviors. International Journal of Computers for Mathematical Learning, 15(2), 99–127.

Regulation (EU) 2016/679 of the European Parliament, Art. 8 GDPR: Conditions applicable to child’s consent in relation to information society services. https://gdpr-info.eu/art-8-gdpr/

O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books.

Paul, S. A., & Reddy, M. C. (2010). Understanding together: Sensemaking in collaborative information seeking. Proceedings of the 2010 ACM conference on Computer supported cooperative work, 321–330.

Pina, L. R., Gonzalez, C., Nieto, C., Roldan, W., Onofre, E., & Yip, J. C. (2018). How Latino children in the U.S. engage in collaborative online information problem solving with their families. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), 140.

Porcheron, M., Fischer, J. E., Reeves, S., & Sharples, S. (2018). Voice interfaces in everyday life. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 640.

Riddlesden, D., & Singleton, A. D. (2014). Broadband speed equity: A new digital divide? Applied Geography, 52, 25–33.

Rogers, J. (2019). Privacy included: Rethinking the smart home. Internet Health Report.

Rogoff, B., Najafi, B., & Mejía-Arauz, R. (2014). Constellations of cultural practices across generations: Indigenous American heritage and learning by observing and pitching in. Human Development, 57(2-3), 82–95.

Rosebery, A. S., Ogonowski, M., DiSchino, M., & Warren, B. (2010). “The coat traps all your body heat”: Heterogeneity as fundamental to learning. The Journal of the Learning Sciences, 19(3), 322–357.

Ruan, S., He, J., Ying, R., Burkle, J., Hakim, D., Wang, A., Yin, Y., Zhou, L., Xu, Q., AbuHashem, A., et al. (2020). Supporting children’s math learning with feedback-augmented narrative technology. Proceedings of the Interaction Design and Children Conference, 567–580.

Ruan, S., Jiang, L., Xu, J., Tham, B. J.-K., Qiu, Z., Zhu, Y., Murnane, E. L., Brunskill, E., & Landay, J. A. (2019). Quizbot: A dialogue-based adaptive learning system for factual knowledge. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–13.

Rutjes, H., Willemsen, M., & IJsselsteijn, W. (2019). Considerations on explainable ai and users’ mental models. In Where is the human? Bridging the gap between AI and HCI. Workshop at CHI’19, May 4–9, Glasgow, Scotland UK. ACM.

Scaife, M., & Duuren, M. (1995). Do computers have brains? What children believe about intelligent artifacts. British Journal of Developmental Psychology, 13(4), 367–377.

Scaife, M., & Rogers, Y. (1999). Kids as informants: Telling us what we didn’t know or confirming what we knew already. The design of children’s technology, 27–50.

Schön, D. A. (1987). Educating the reflective practitioner. Jossey-Bass.

Scribner, S., & Cole, M. (1981). Unpackaging literacy. Writing: The nature, development, and teaching of written communication, 1, 71–87.

Sobel, D. M., Tenenbaum, J. B., & Gopnik, A. (2004). Children’s causal inferences from indirect evidence: Backwards blocking and bayesian reasoning in preschoolers. Cognitive science, 28(3), 303–333.

Statistics. Code.org. (n.d.). (Accessed on 04/11/2020).

Stevens, R., & Penuel, W. R. (2010). Studying and fostering learning through joint media engagement. Principal Investigators Meeting of the National Science Foundation’s Science of Learning Centers, 1–75.

Street, B. (2003). What’s “new” in New Literacy Studies? Critical approaches to literacy in theory and practice. Current Issues in Comparative Education, 5(2), 77–91.

Takeuchi, L., Stevens, R. et al. (2011). The new coviewing: Designing for learning through joint media engagement. New York, NY: The Joan Ganz Cooney Center at Sesame Workshop.

Touretzky, D., Gardner-McCune, C., Martin, F., & Seehorn, D. (2019). Envisioning ai for k-12: What should every child know about ai? Proceedings of the AAAI Conference on Artificial Intelligence, 33, 9795–9799.

Technovation. (2019). 2019 Impact Report. (Accessed on 04/11/2020). https://www.technovation.org/wp-content/uploads/2020/04/Technovation-2019-General-Impact-1.pdf

UNESCO, EQUALS Skills Coalition. (2019). I’d blush if I could: Closing gender divides in digital skills through education. UNESCO Digital Library. Retrieved 03/09/21 from https://unesdoc.unesco.org/ark:/48223/pf0000367416.page=85

Van Dijk, J. A. (2006). Digital divide research, achievements and shortcomings. Poetics, 34(4-5), 221–235.

Walsh, G., Druin, A., Guha, M. L., Foss, E., Golub, E., Hatley, L., Bonsignore, E., & Franckel, S. (2010). Layered elaboration: A new technique for codesign with children. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1237–1240.

Walsh, G., Foss, E., Yip, J., & Druin, A. (2013). FACIT PD: A framework for analysis and creation of intergenerational techniques for participatory design. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2893–2902.

Walsh, G., & Wronsky, E. (2019). Ai+ co-design: Developing a novel computer-supported approach to inclusive design. Conference Companion Publication of the 2019 on Computer Supported Cooperative Work and Social Computing, 408–412.

Woodward, J., McFadden, Z., Shiver, N., Ben-hayon, A., Yip, J. C., & Anthony, L. (2018). Using co-design to examine how children conceptualize intelligent interfaces. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 1–14.

Yardi, S., & Bruckman, A. (2012). Income, race, and class: Exploring socioeconomic differences in family technology use. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 3041–3050.

Yip, J. C., Sobel, K., Pitt, C., Lee, K. J., Chen, S., Nasu, K., & Pina, L. R. (2017). Examining adult-child interactions in intergenerational participatory design. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 5742–5754.

Zeng, E., Mare, S., & Roesner, F. (2017). End user security and privacy concerns with smart homes. Thirteenth Symposium on Usable Privacy and Security ({SOUPS} 2017), 65–80.
