This essay explores what artificial intelligence (AI)-based cyberbullying interventions in social media that consider youth as humans, rather than merely users, might look like in practice. Through young people’s own accounts, gathered in interviews and focus groups, we seek to provide fresh evidence about their relationship with social media; how this relationship impacts their wellbeing; and, consequently, how cyberbullying interventions that resonate with their needs and positively impact their wellbeing might be designed on popular social media platforms. As such, this essay also gives evidence-based insight into a technology development effort relying on co-design with youth as a methodology for discovering insights and designing effective interventions.
More specifically, we examine how youth social norms can discourage help-seeking in cyberbullying incidents and the implications these norms have for designing technology for effective cyberbullying intervention. Our work constitutes an example of how technology companies might choose to approach product design by nurturing empathy for young users,1 and we seek to demonstrate how such youth-centered design approaches are aligned with the United Nations Convention on the Rights of the Child (UNCRC). As per General Comment No. 25,2 the UNCRC applies in digital environments and, following Article 12, children have the right to be consulted on matters that concern them. For those reasons, and because AI-based cyberbullying interventions are of direct relevance to children and youth, it is essential to try to capture their views on the development of technologies designed to address cyberbullying, which is what we attempt to demonstrate here.
Along with embedding children’s rights to participation into technology design, we further explore how dignity theory (Hicks 2011; Fuller and Gerloff 2008; Hartling and Lindner 2016), which is a framework that has not been extensively applied in studying youth and digital technology, could be used to inform design approaches. Dignity theory allows us to examine how the value of equal worth of all human beings might be promoted through technology design, specifically in AI-based cyberbullying interventions; and how this design could be executed in a way that resonates with youth rather than being imposed upon them. After discussing the findings of our research in this context, we conclude by making explicit connections with how youth-centered, digital-focused design interventions work in conjunction with changing business models and emerging technology regulation (such as, but not limited to, the Digital Services Act3 in the European Union, the Online Safety and Media Regulation Bill4 in Ireland, the Online Safety Bill5 in the UK and the Australian Online Safety Act6).
Artificial intelligence, here meaning algorithmic techniques intended to assist with automating content moderation, such as natural language processing (NLP) and machine and deep learning, is increasingly relied upon by various online platforms to address cyberbullying (Milosevic et al. 2022a; Vidgen and Derczynski 2020). Cyberbullying can be defined as intentional and repetitive hurtful behavior that can take the form of posts, abusive messaging and comments, among other examples (Patchin and Hinduja 2015; O’Higgins Norman 2020). With the development of virtual reality technology, AI will play an even more important role in cyberbullying moderation, a task that will become more complex and challenging.7 At the same time, and despite the most recent efforts to involve children and families in the design of various safety tools,8 little is known about whether and how technology companies engage children in AI design for safety purposes (such as cyberbullying moderation); and, perhaps more importantly, about what it means to consider – and even center – children’s perspectives during such design processes (Reith 2021; James and Weinstein 2021).
The authors of this essay have previously argued that cyberbullying is primarily a relational issue frequently rooted in offline (school) sociality and that its classification as strictly an online safety issue is not always conducive to good prevention education or platform-based interventions that youth view to be effective (Milosevic et al. 2022b). Such prevention is often prescriptive (e.g., “Be kind,” “Don’t bully!”), and sometimes even patronizing, talking down to children, failing either to address the motivations for engaging in bullying behaviors or to resonate with youth (Søndergaard and Rabøl Hansen 2018), or ill-conceived and potentially harmful (e.g., shutting down accounts where bullied youth express self-harming or suicidal tendencies; see Shipp et al. 2022). Safety-inspired interventions tend to take insufficient account of children’s perspectives as to the relational aspects of the problem, resulting in actions such as content take-downs and blocking (Jones et al. 2014; Jones and Mitchell 2016; Milosevic 2018). Such actions can sometimes help, to an extent (e.g., removing the very visible online part of the relational problem or limiting audience size), but they are often insufficient as they do not address the offline context and life circumstances of the youth involved (Finkelhor et al. 2021), youth culture or the values and norms around bullying behaviors.
In this essay, we draw from the lead author’s qualitative research,9 which sought to co-design AI-based cyberbullying interventions on social media platforms popular with children. We conducted 15 in-depth interviews and 6 focus groups (N=59) with pre-teen and teen children aged 12-17 from across Ireland, soliciting their input on a set of imaginary AI-based cyberbullying interventions (presented to them in the design software Figma10), conceptualized on the basis of previous research into peer support and bystander involvement (Bastiaensens et al. 2014; DiFranzo et al. 2018; Macaulay et al. 2022).
The fieldwork was executed during Covid-19-related restrictions, and all interviews and two focus groups had to be carried out online. The research team faced difficulties recruiting children of diverse ethnic, racial, gender, sexual and socio-economic backgrounds and eventually contracted a private research agency in Ireland, Amarach Research11, which was able to help us engage with children but could recruit them only by age and sex, and by no other socio-demographic criterion. We were, therefore, not able to ensure equity as reflected in having children of various backgrounds in our sample. Consequently, we did not record any information on children’s socio-economic status, ethnic/racial or immigration-related background, sexual orientation, different ability or gender unless children proffered such information themselves in relation to the interview/focus group questions. The sample predominantly consisted of white children. This is a significant shortcoming of the study, but we hope that these findings and the dignity-centered orientation of our approach will pave the way for future research that incorporates greater diversity.
Informed written assent/consent was sought from participants and their parents/guardians following ethics approval from the host institution’s ethics committee. The project received competitive funding from a Facebook/Meta Content Policy Award in response to the company’s call for proposals. The researchers had complete independence in terms of research design and execution, use of data, analyses and conclusions. Facebook/Meta did not advise or influence the project in any way.12
The co-design element in this research involved children providing their views as to the impact on their rights to privacy and freedom of expression, as well as to the effectiveness of safety interventions currently employed by social media platforms (such as AI-based proactive removal of cyberbullying content). The goal was to relay this feedback to companies so that they could reflect on introducing changes into existing technology design. Furthermore, the research team designed hypothetical AI-based interventions (not available on social media platforms, but based on previous research into designing technology for support in cyberbullying incidents) and asked children for feedback on the perceived effectiveness of these interventions, as well as for alternative suggestions as to how they should be designed. We explained the goals of the process to children and their role in this project.
We met all participants only once and did not work with children over a period of time. Children did not participate in the design of the interview and focus group questions, nor did they design the interventions themselves. Meeting children over time and allowing them to design interventions themselves would have constituted a higher level of children’s participation on the child-participation ladder (Druin 2002; Hart 2008; Kumar et al. 2018). Nonetheless, since the key goal of the project was to inquire into children’s views on the AI-based interventions that are already being used on social media platforms, we relied upon interviews and focus groups as our primary method rather than a series of workshops. Given the Covid-related constraints, our decision was also influenced by what was feasible at the time.
Following transcription and anonymization, three coders examined the data in an iterative way, looking for research-question-driven themes (deductive coding), as well as other themes, following an open-ended inductive round of coding (Boyatzis 1998). We coded semantic or explicit themes, approaching the data from an essentialist or realist perspective (Braun and Clarke 2006).13 We then searched for any age- and sex-related patterns and co-occurrence of themes, interpreting their relevance and meaning in the context of the literature on online safety, cyberbullying and children’s rights.
Throughout the essay, we will be referring to dignity theory (Hicks 2011, 2018; Hartling and Lindner 2016). Following contemporary scholars working in the paradigm, dignity is defined as the inherent worth of every human being, which – unlike respect – need not be earned and cannot be stripped away (Hicks 2018; Fuller and Gerloff 2008). Dignity is the foundation of human rights as stated in many human rights documents, most notably the Universal Declaration of Human Rights (UN General Assembly 1948) and the UN Convention on the Rights of the Child (adopted 1989). Following dignity theory, dignity violations are at the heart of human conflict, and human actions are motivated by the need to affirm one’s dignity. Seen through the dignity theory lens, bullying behaviors are dignity violations that involve humiliation or the diminishing of someone physically or psychologically, and they contribute to lower self-esteem or self-worth (Shultziner and Rabinovici 2012; Yamada 2007). Dignity theory uncovers implicit non-dignitarian social values and posits them as key enablers of bullying behaviors. These implicit non-dignitarian values rest on the underlying assumption that dignity comes from external sources – from whatever people and society deem worthy, such as success, status, income, education or better grades, good looks, and so on. These can also constitute sources of power for those who engage in bullying behaviors as early as preadolescence (Nelson et al. 2019).
Following this assumption, those who hold these socially coveted characteristics have more dignity, and these advantages can also act as protectors against bullying behaviors and social exclusion. One reason why a bystander might hesitate to become involved in helping the target, for instance, is the fear that their dignity, too, can be questioned by a more socially powerful perpetrator; or, should they defend the target, they fear being excluded from the group by the perpetrator in retaliation. Dignity theory allows us to map child bullying behaviors to broader social and cultural patterns by positing that what enables bullying behaviors among children is the same set of values that enables workplace bullying and so-called micro-aggressions later on in life, including the assumption that dignity is not inherent and can be taken away. Bullying behaviors are often expressions of social positioning, the process through which people act on that assumption in the belief that they must establish their status and worth in the social group. This struggle for status or power is often a key driver of bullying behaviors (De Vries et al. 2021; Faris et al. 2020; Thornberg 2015).
Following dignity theory, when one thinks that dignity and worth need to be earned or established, one is struggling for what dignity scholar Dr. Donna Hicks terms “false dignity,” “the belief that our worthiness comes from external sources” (Hicks 2011, p. 116). In such cases, we might, for example, be acting from the belief that we need praise or approval from others to feel good about ourselves, or high-status positions to show ourselves and others that we are in fact successful or worthy; or from the belief that we are better than others due to our class, race or ethnicity, income or good looks (Hicks 2011, 2018). Therefore, false dignity is seen as a key psychosocial and cultural driver of bullying and cyberbullying.
The process of social positioning is driven by the desire to establish oneself as worthy in a group, and bullying is a behavior aimed at achieving that, a “tool” of social positioning. For example, one scenario that we showed to participants in interviews and focus groups (FGs) was a case of three girls excluding a fourth one (whom we called “Solveig”) from an offline event and then tagging her in photos of the event on Instagram in which she, of course, did not appear. The girls did this in order to perform exclusion visually, to illustrate for Solveig and a broader audience that Solveig had been excluded. The scenario indicates that the three girls did this on purpose: in an exchange of direct messages, they discussed the fact that Solveig had done something that one of the girls did not like, and so exclusion was the punishment. According to Instagram, this type of bullying was common on the platform a few years ago.14
Excluding Solveig and attempting to hurt and humiliate her by performing the act of exclusion publicly, for a social media audience, socially positions her as a person unworthy of being in the group. She is made the abject, the discarded one (Søndergaard 2012; cf. Butler 1999). The attack itself is aimed at stripping her dignity away, a demonstration of false dignity. The event is inherently relational; the motivations behind the attack are various and could include the fear, on the part of the initiator of the exclusion, that Solveig had too much power within the group; a perceived offence and therefore insult to the dignity of the perpetrator; or Solveig’s “failure” to show respect to the perpetrator, which may have triggered some in-group dynamics and the need of the initiator to put Solveig in “her place.” Whatever it may be, in as much as there is a relational aspect to the incident, the incident itself exceeds the scope of online safety. And yet, cyberbullying is still typically operationalized in the literature as an online safety issue, which has profound and, as we argue, limiting implications for AI design. In order to design interventions that are meaningful from a youth perspective, we need to see cyberbullying through a broader, relational lens that factors in both online and offline aspects of youth sociality.
Designing AI that could detect and prevent the act of exclusion automatically in order to aid the target might be helpful, but it does not necessarily solve the offline relational aspect of the problem nor the offence to the target’s dignity. The same applies to content take-down: We could continue working on optimizing AI-based cyberbullying detection in order to make content take-down more efficient, and the very act of taking threats or insults down could certainly be helpful to the target at times, but content deletion does nothing to address any relational aspect of the problem. So, in the interest of getting at the latter, and following a review of the literature (Papatraianou et al. 2014; Bauman and Yoon 2014; Bastiaensens et al. 2014; DiFranzo et al. 2018; Macaulay et al. 2022), one of the interventions we tested involved support contacts – designated helpers chosen by young people when signing up for platforms – who could provide assistance in such situations.
Scholars have recently stressed that, when we think of bullying and cyberbullying as subsets of aggressive behaviors, we imply that “bullies are aggressors” (Kofoed and Staksrud 2019), which confines the issue to problems and behaviors within an individual, downplaying group dynamics and social and cultural contexts. Dignity theory posits that what motivates bullying behaviors and social positioning is inseparable from the social. The motivation is embedded in normative values that create the conditions of social relations, including social aggression and public humiliation, and these seep into child and youth social relations from the adult world (Milosevic et al. 2022b).
Applying this lens, we seek to understand what a dignity-based AI technology intended for cyberbullying intervention and prevention and designed with young people’s needs in mind might look like in practice (Wiseman 2009). Can young people adopt a dignitarian outlook that posits the inherent worth of every human being (i.e., that greater popularity does not mean more dignity or self-worth)? To what extent and how might young people already be adopting stances and taking actions that reflect dignity, but which perhaps remain hidden from adults’ line of sight?
Could this core dignity principle be taught in a way that resonates with young people, rather than imposed upon them in the manner that online safety education messages often are? How can dignity inform effective cyberbullying interventions on platforms?
In our study, we created a set of cyberbullying scenarios and subsequent AI-based interventions in Figma, which we then showed to young people in FGs and interviews, soliciting their feedback as to whether they perceived the interventions to be desirable and effective – and if they had suggestions as to how these might best be designed. We also inquired about youth perceptions of how these AI-based interventions affected their rights to privacy and freedom of expression, which are also protected under the UNCRC.
Rather than waiting for a young person to report bullying content to the social media platform for the content to be examined and taken down, applying AI allows platforms to crawl the content proactively and detect cyberbullying before it is reported (proactive moderation). We tested with youth a series of cyberbullying scenarios and subsequent interventions that involved AI-based proactive moderation.
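By way of illustration, the following is a minimal sketch, in Python, of what such proactive scanning could look like. The toxicity_score heuristic, the Post fields and the flagging threshold are illustrative assumptions standing in for a trained NLP classifier, not a description of any platform’s actual pipeline.

```python
from dataclasses import dataclass

# Toy lexicon standing in for a trained NLP model; a production system
# would use a classifier trained on labeled cyberbullying data.
HURTFUL_TERMS = {"loser", "ugly", "nobody likes you"}

def toxicity_score(text: str) -> float:
    """Illustrative stand-in: fraction of known hurtful phrases found in the text."""
    lowered = text.lower()
    hits = sum(term in lowered for term in HURTFUL_TERMS)
    return hits / len(HURTFUL_TERMS)

@dataclass
class Post:
    post_id: str
    author_id: str
    text: str

def proactive_scan(stream, threshold: float = 0.3):
    """Scan the content stream before any user report is filed and yield
    posts whose score suggests possible cyberbullying (proactive moderation)."""
    for post in stream:
        if toxicity_score(post.text) >= threshold:
            yield post  # routed to review and to the intervention flow

posts = [
    Post("1", "ana", "Great game today!"),
    Post("2", "cara", "You're such a loser, nobody likes you"),
]
for flagged in proactive_scan(posts):
    print("flagged before any report:", flagged.post_id)
```

In a real system, flagged items would feed into human review and into the intervention flows described below.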
The interventions were informed by social learning and social norm theories, which propose that aggressive behaviors such as bullying and cyberbullying can be inhibited or reinforced by social norms (Espelage et al. 2012; Hinduja and Patchin 2013). For example, if role models in the peer group behave in ways that support aggression, power struggles, exclusion and so on, it will be easier for perpetrators to engage in such behaviors. Furthermore, if such behaviors are seen as a legitimate or normative means to attain group power and status, such attitudes can facilitate perpetration. On the other hand, if aggression is not normative, it will be more difficult for perpetrators to engage in it. Following the concept of peer mentoring (Papatraianou et al. 2014; Bauman and Yoon 2014), we introduced the option to add a so-called “support contact” or “helper” upon sign-up to Instagram,15 TikTok16 and Trill17, which were the platforms featured in the study, the former two being particularly popular among teens. The support contact could be anyone they chose – a friend their age, a sibling or an adult – and could, but did not have to, have a presence on the platform.
As the intervention is designed, once AI detects perceived cyberbullying, the target is notified without the cyberbullying content itself being shown to them, in order to avoid re-traumatization. They are offered the option to contact their designated support contact. If they choose to do so, the support contact is prompted to provide help: directly, by messaging the target who designated them; indirectly, by reporting the content to the platform; or by messaging the perpetrator and asking them to stop. We also asked the youth about alternative approaches, such as 1) having the platform alert the support contact immediately when an incident is discovered, or 2) involving a bystander – anyone who may have witnessed the incident and was detected by AI, for example because they commented on or viewed a post/story that received negative comments/reactions. The bystander would be notified that cyberbullying had potentially taken place and would be prompted to provide help.
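The logic of this flow can be summarized in a short sketch; all names here (handle_incident, HelperAction and so on) are hypothetical, and the sketch assumes a simple opt-in escalation step that mirrors the three forms of help described above.

```python
from enum import Enum, auto

class HelperAction(Enum):
    MESSAGE_TARGET = auto()          # direct support to the target
    REPORT_TO_PLATFORM = auto()      # indirect support
    ASK_PERPETRATOR_TO_STOP = auto()

def notify_target(target_id: str, incident_id: str) -> None:
    # The notification deliberately omits the hurtful content itself,
    # to avoid re-traumatizing the target.
    print(f"[to {target_id}] We detected a possible bullying incident "
          f"({incident_id}). Contact your support contact?")

def handle_incident(incident_id: str, target_id: str, helper_id: str,
                    target_opts_in: bool, helper_choice: HelperAction) -> None:
    """AI detection -> notify target -> optional escalation to the
    designated support contact, who picks one of three forms of help."""
    notify_target(target_id, incident_id)
    if not target_opts_in:
        return  # the target stays in control; nothing is escalated
    print(f"[to {helper_id}] {target_id} asked for your support on incident "
          f"{incident_id}. Chosen form of help: {helper_choice.name}")

handle_incident("inc-42", "solveig", "helper-1",
                target_opts_in=True,
                helper_choice=HelperAction.REPORT_TO_PLATFORM)
```

The key design choice, reflecting the intervention as tested, is that nothing is escalated unless the target opts in.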
Existing research has already explored the technological conditions that facilitate bystander involvement to support the target of cyberbullying (Bastiaensens et al. 2016; DeSmet et al. 2014). Research on offline bullying established that “diffused responsibility,” or the belief that there is someone else who will help the target, reduces the likelihood of supportive engagement (Latane and Darley 1970). Empathy, for example, could facilitate positive bystander behavior (Barlińska et al. 2013; Macháčková and Pfetsch 2016). Technological design that triggers a sense of personal responsibility (van Bommel et al. 2012) or the belief that one is being watched in public (Pfattheicher and Keller 2015), for example by showing a “seen” notice to a bystander who witnesses a cyberbullying message/post, can increase the likelihood of supportive behavior toward the target (DiFranzo et al. 2018).
Another scenario proposed was giving the target the option to contact the designated professional at their school via the social media platform. Under this arrangement, every secondary school in Ireland would have an official account on Instagram (and other relevant social media platforms) which would be monitored by professional school staff, such as counselors, to whom children could report incidents directly. Reports detected by AI could also be routed to the school. This approach is intended to facilitate school involvement. In Ireland, schools have the responsibility to intervene in online incidents, even when they take place outside school premises, in as much as they affect children’s right to education. Under this approach, the perpetrator would additionally be penalized by having less engagement/visibility/priority assigned by the algorithm to all their posts, regardless of their nature (and thus even positive posts that did not humiliate anyone), for one month – a decision that the perpetrator could appeal. We asked our respondents about the perceived effectiveness and freedom-of-expression implications of such a sanction. It is worth mentioning that this sanction is similar to “shadow banning,”18 which already exists on some platforms, except that shadow-banned users are not notified of the ban and do not have the right to appeal it.
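To make the difference from shadow banning concrete, here is a rough sketch of such a time-limited, appealable down-ranking; the penalty factor, the thirty-day duration mapped from “one month” and the Sanction structure are our assumptions rather than any platform’s ranking implementation.

```python
from datetime import datetime, timedelta
from typing import Optional

PENALTY_FACTOR = 0.5                   # assumed: halve the ranking weight
PENALTY_DURATION = timedelta(days=30)  # "for one month"

class Sanction:
    def __init__(self, user_id: str, start: datetime):
        self.user_id = user_id
        self.expires = start + PENALTY_DURATION
        self.lifted_on_appeal = False  # unlike shadow banning, appealable

    def active(self, now: datetime) -> bool:
        return not self.lifted_on_appeal and now < self.expires

def ranked_score(base_score: float, sanction: Optional[Sanction],
                 now: datetime) -> float:
    """Down-rank every post by a sanctioned user, regardless of the post's
    own content, until the sanction expires or an appeal succeeds."""
    if sanction is not None and sanction.active(now):
        return base_score * PENALTY_FACTOR
    return base_score

now = datetime.now()
sanction = Sanction("perp-7", start=now)
print(ranked_score(1.0, sanction, now))  # 0.5 while the sanction is active
sanction.lifted_on_appeal = True
print(ranked_score(1.0, sanction, now))  # 1.0 after a successful appeal
```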
In the above exclusion scenario, where Solveig was targeted, we devised an AI-based intervention that would detect such exclusion by recognizing that more people are tagged in the photo than actually appear in it, and through an analysis of associated direct messages in which the three girls who excluded Solveig discuss the intentionality behind the exclusion. Solveig would then be prompted to seek support from her support contact.
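A minimal sketch of this detection logic might look as follows. The phrase list standing in for NLP analysis of direct-message intent is a crude placeholder, and the assumption that a platform can determine who actually appears in a photo implies facial recognition, which, as discussed below, our respondents found invasive.

```python
# Crude phrase list standing in for NLP-based intent analysis of the DMs.
INTENT_MARKERS = ("on purpose", "leave her out", "don't invite", "she deserves")

def tagged_but_absent(tagged_ids: set, visible_ids: set) -> set:
    """Users tagged in the photo who do not actually appear in it."""
    return tagged_ids - visible_ids

def dm_suggests_intent(messages: list) -> bool:
    return any(marker in msg.lower() for msg in messages
               for marker in INTENT_MARKERS)

def detect_performed_exclusion(tagged_ids, visible_ids, messages):
    """Flag likely 'performed exclusion': someone is tagged but absent,
    and the group's messages suggest the exclusion was deliberate."""
    absent = tagged_but_absent(tagged_ids, visible_ids)
    return absent if (absent and dm_suggests_intent(messages)) else set()

# The Solveig scenario: three girls appear in the photo, four are tagged,
# and the direct messages show the exclusion was intentional.
flagged = detect_performed_exclusion(
    tagged_ids={"ana", "bea", "cara", "solveig"},
    visible_ids={"ana", "bea", "cara"},
    messages=["We left her out on purpose lol"],
)
print(flagged)  # {'solveig'} -> Solveig would be prompted to seek support
```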
Finally, we also designed an intervention involving an anti-bullying video. Upon sign-up on TikTok, children would be given the option to create an anti-bullying video with a message against bullying that they could record and edit however they pleased, either alone or with a friend. This video would then be sent to the perpetrator or to a peer group exhibiting bullying behavior, either automatically, if AI detected bullying of the child, or manually, triggered by the child. We discussed the desirability and effectiveness of such an option with young people.
In interviews, the majority of children thought that having a support contact or helper as an option could be a good idea. Nonetheless, they were uncertain whether they would actually use it, and youth who took part in focus groups were particularly vocal about their concerns in relation to setting up and relying on a support contact for help. On the surface, youth welcomed the option of being able to rely on various AI-based interventions. However, their own accounts of social norms around requesting help, and of a need to “grow a bit of a thick skin to be online,” did not appear to dovetail with a system that encourages and reinforces admitting that one needs help. We discovered a normative demand for self-reliance in solving one’s own problems, such as bullying victimization; as such, youth culture as expressed by our respondents stood in sharp contrast to a supportive, kindness-promoting environment with a support contact embedded in it. Such a youth culture can stigmatize the need for help, leaving those who experience cyberbullying victimization on their own, while celebrating proactive coping strategies for building resilience. This perspective is perhaps best expressed in a comment from a 16-year-old girl during a focus group discussion about the scenario that involved excluding a girl (Solveig) from an event and subsequently tagging her on Instagram to perform exclusion:
“Stick up for yourself ... delete the comment and block her” [as an explanation that one should not need AI-based involvement and assistance from a support contact but rather that one can handle such a case on their own].
Participants identified a sense of stigma around admitting that one has opted in for AI-based assistance or the involvement of support contacts and helpers. For example, youth sometimes implied that those who might need such support were “sensitive,” a term that appeared to connote weakness. There was almost an expectation that, if you cannot handle mean comments and some level of meanness, then maybe you should not be online, and especially not on social media:
“You’re gonna have to ... some of the comments, at least… you’re gonna have to be able to put up with it, like … and just delete a comment. And not let everything get to you. You know what I mean?” (FG, Girls 13-14)
While, for the most part, youth were aware that cyberbullying can have serious psychological consequences, the prevailing sense was nonetheless that people say all sorts of things online and that one cannot be expected to take everything seriously. Reminiscent of the “ethics gap” (James 2014), online meanness needed to be brushed off as “just a joke” rather than being reckoned with.
But it is individual, situational and a matter of degree, our respondents indicated. For example, some young people felt that one might be expected not to take insults directed at physical appearance so seriously and to learn to deal with them on one’s own, whereas in cases of insults targeting race or sexual orientation, which were perceived as more severe, asking a third party for help might be warranted:
Even like setting up the whole thing [support contact system], whatever it is, yeah. I feel like it would be a lot to go through when, like, no matter what, people are gonna post. Even if it’s in their heads. So it doesn't really matter in the long term. One comment on some posts. It's a good idea in theory, but I feel like even having to set up the whole thing I think a lot of people wouldn’t bother to set it up…. I think it also takes case by case because if it was someone actually like being racist or homophobic or something like that, that's a different thing… It’s not with someone calling you ugly. You know what I mean? Like it would be upon that person.
(FG Girls, 15-16).
Some children were also concerned that their support contacts might consider AI-triggered requests for help to be too much or annoying, and that becoming involved to assist others could take a significant toll on their time. If one were to ask someone repeatedly for support, the process of asking could damage the relationship between friends, they thought. Furthermore, in female focus groups, there was even a sense that one should not ask someone else to deal with one’s own problems – everyone should look out for themselves. This was especially the case when the support contact was supposed to reach out to the perpetrator and ask them politely to take abusive content down or to apologize.
Why is she asking her friend? I just I don't think it’s any of her [target friend’s] business. Why is she, like, asking her friend that? Like, if you have a problem with someone, just go say it to them. Why are you bringing another person into it? (FG, Girls 16-17)
It is for these reasons that a number of young people said they would be hesitant to involve bystanders. They pointed out that those who witnessed bullying could be random strangers who may well be annoyed to be asked to help people they do not know, especially if they were to receive a large volume of such requests. The implied question was: what could bystanders do anyway, when what happens online has roots in offline peer relations? Some surmised, and others even emphasized, that bystander interventions could backfire and that those asked to help the target could in fact turn against them and join the perpetrator, if nothing else out of sheer annoyance or for fun.
Another issue raised was that young people tend to avoid drawing attention to victimization out of fear of making a big deal out of it, a sense of shame and a preference for dealing with it on their own. Some did think that, if the bystander was indeed someone they knew and was not hostile, it could be a good idea to have the option to involve them. Nonetheless, what warrants further discussion is the fact that anti-bullying campaigns often advocate for kindness online and for the urgency of being an upstander rather than a passive bystander. Previous research has identified the conditions under which a bystander is more likely to support the target (such as non-diffused responsibility, transparency and accountability, and a limited audience, as discussed above). The question, however, is to what extent such requests can be effective if asking for help implies sensitivity, weakness – othering. What does this mean for the design of AI-based interventions that take into account youths’ experiences and perspectives?
Similar concerns about involving others were voiced in the context of AI involvement in reporting cyberbullying to the official social media account of the target’s school. While many thought it might be a good idea in theory, they were not always sure if they would use it. Teachers and other school staff were characterized as not being best positioned to do something constructive to solve the problem, and some voiced the fear that school involvement would make things worse; others thought that, if the perpetrator did not attend the school, there was little the school could do anyway. In other words, young people tended to think of school involvement more in terms of sanctions for the perpetrator than in terms of helping the target. These findings appear to align with the most recent survey data for Ireland on a nationally representative sample of 9-17-year-old Internet-using children, according to which no boys who were targeted by cyberbullying and only 13% of targeted girls spoke to a teacher when an incident happened to them (NACOS 2021).
As for the intervention involving an anti-bullying video on TikTok, some young people thought the video could be a good idea even though they were unsure whether they would use it. Nonetheless, the option to create an anti-bullying video was largely perceived as something that might be suitable for younger children (under 12), but definitely not something that they or their peers might use. Some young people even thought that it could backfire, in as much as it would be seen as so uncool that the perpetrators might laugh it off and be provoked to intensify the bullying. Videos promoting kindness were not seen by our respondents as a particularly effective way to address cyberbullying; and yet promoting kindness is a common feature of online safety campaigns designed by various advocacy and other organizations that target cyberbullying.
Finally, there was the question of privacy rights – and of monitoring as a safety intervention. Our respondents seemed to be aware of the balance that needs to be struck between privacy and freedom of expression on the one hand and protection from cyberbullying on the other, and they understood that such protection might include monitoring private messaging as well as publicly shared content. Many thought this would be permissible if there was an explicit opt-in they could choose, and they tended to justify the use of monitoring with a “greater good” argument: if monitoring is used for the purpose of catching cyberbullying, it tended to be seen as justified. Yet, when asked to consider realistic situations, such as the scenario of social exclusion involving “Solveig,” many had second thoughts. For example, they were afraid of platforms using facial recognition to identify perpetrators and targets, describing it as “creepy.”
Regarding freedom of expression, young people tended to be concerned about AI not being able to differentiate between cyberbullying and the language used in friendly jokes with good intentions, which is common among young people. They thought too many false positives might be flagged and that content could be moderated in unhelpful ways, limiting their ability to express themselves on these platforms. The right to appeal content take-downs or punishments involving less engagement (such as shadow banning) was seen as particularly important and as needing to be taken seriously by platforms that use AI proactively for cyberbullying prevention.
Recent legislative efforts across the world – from the Online Safety Act in Australia to the Online Safety Bill in the UK to the Online Safety and Media Regulation Bill (OSMR) in Ireland and now the Digital Services Act (DSA) in the European Union – have signaled an end to platform self-regulation and a demand for greater scrutiny of the effectiveness of online platforms’ moderation mechanisms, which include AI-based efforts to detect, prevent and intervene in cyberbullying content. The scrutiny will be helpful. Problematically, however, despite the shift to systemic regulation (which scrutinizes content circulation), the remedy policymakers worldwide are still focused on is content take-down. Their aim to codify the process is unlikely to help substantially (Douek 2022), and our youth-informed research further reinforces the evidence that content removal addresses only a limited aspect of the complex relational and social norms-based problem of cyberbullying and, among adults, online hate speech.
Given intensifying regulatory activity, demands on AI, interest in youth-centered, co-designed approaches, and much talk of the metaverse as the newest space for teens to hang out, we see this as the right time to:
Ensure that industry as well as policymakers and risk-prevention educators uphold children’s right to be heard on matters that concern them, as stipulated in the UNCRC, soliciting youth perspectives via research that goes beyond market research, supports product safety as well as legal compliance, and informs business models.
Incorporate adolescent experience and perspectives into technology design, considering what that means for AI-based cyberbullying interventions.
As companies invest in optimizing AI-based technologies for intervention, consider measures that have not received attention and resources when content take-down is prioritized.
For social media and other technology platforms – but also for school administration and staff – ensure that youth have the tools to act on (or not act on, owing to) their social and developmental norms and needs for safety interventions in digital environments (e.g., not having to ask for help if they see that as humiliating, or the ability to untag themselves from a photo that is meant to humiliate them) and to act in accord with their own and each other’s dignity as they perceive it.
Develop legislation that mandates the implementation of research via online safety codes that are based on the understanding that cyberbullying is not purely an online content issue.
Align online safety education with youth social norms, such as their reluctance to ask for help or to be identified as someone who needs help, fear of burdening others with one’s problems and the preference for dealing with cyberbullying on their own, per our respondents. Current kindness-promoting campaigns around support contacts, helpers and bystander interventions, apparently based on dignity principles, are not aligned with youth social norms we encountered in the study.
At the same time, it is important that technology companies assume responsibility for effective management of bullying on their platforms; rather than relegating that responsibility entirely to young people via the tools that allow children to self-manage bullying without platform involvement (such as untagging, blocking, muting, restricting).
It is also essential to understand how youth who do wish to ask for help might be adversely affected by the stigma of vulnerability and sensitivity, and to ensure that they receive the help that they need in a manner that preserves their sense of dignity.
Increase cross-disciplinary and cross-sector (young users, government, industry, academia) communication to find solutions and design from the point of view of youth where and when contradictions arise (such as child safety “versus” privacy and freedom of expression).
Consider and honor the interdependence of children’s dignity, rights and experience in designing education, legislation and technology.
Our research has demonstrated that youth participation in the design stage of interventions is instrumental not only to efficiency but also to technological capacity-building and to keeping abreast of the evolution of youth cultures (and sub-cultures), understanding emerging needs and identifying risks for youth and their online+offline ecosystems. Further to the empirical evidence provided by this research, which makes the case for participatory design, dignity theory offers an anthropocentric view of technology and of the ethics that should govern both design and moderation practices. The dignity-based framing of technology is premised on the ethics of care and stipulates the active involvement of people (victim/perpetrator) and technology (AI) in ways that prioritize a fruitful and mutually beneficial relationship that improves both human wellbeing and technological efficiency19.
In this essay, we have proposed that effective designs for AI-based cyberbullying interventions be informed by dignity theory and children’s rights under the UNCRC, and we have discussed the need to design intervention technology that honors children’s dignity, rights and online experiences from their own perspective.
Barlińska, Julia, Anna Szuster, and Mikolaj Winiewski. 2013. “Cyberbullying Among Adolescent Bystanders: Role of the Communication Medium, Form of Violence, and Empathy.” Journal of Community & Applied Social Psychology 23(1):37-51.
Bastiaensens, Sara, Heidi Vandebosch, Karolien Poels, Katrien Van Cleemput, Ann DeSmet, and Ilse De Bourdeaudhuij. 2014. “Cyberbullying on Social Network Sites. An Experimental Study Into Bystanders’ Behavioural Intentions to Help the Victim or Reinforce the Bully.” Computers in Human Behavior 31:259-271.
Bastiaensens, Sara, Sara Pabian, Heidi Vandebosch, Karolien Poels, Katrien Van Cleemput, Ann DeSmet, and Ilse Bourdeaudhuij. 2016. “From Normative Influence to Social Pressure: How Relevant Others Affect Whether Bystanders Join in Cyberbullying.” Social Development 25(1):193-211.
Bauman, Sheri, and Jina Yoon. 2014. “This Issue: Theories of Bullying and Cyberbullying.” Theory Into Practice 53(4):253-256.
Boyatzis, Richard E. 1998. Transforming Qualitative Information: Thematic Analysis and Code Development. Thousand Oaks, CA: Sage Publications.
Braun, Virginia, and Victoria Clarke. 2006. “Using Thematic Analysis in Psychology.” Qualitative Research in Psychology 3(2):77–101.
Butler, Judith. 1999. Gender Trouble. New York, NY: Routledge.
de Vries, Elsje, Tessa M. Kaufman, René Veenstra, Lydia Laninga-Wijnen, and Gijs Huitsing. 2021. “Bullying and Victimization Trajectories in the First Years of Secondary Education: Implications for Status and Affection.” Journal of Youth and Adolescence 50(10):1995-2006.
DeSmet, Ann, Charlene Veldeman, Karolien Poels, Sara Bastiaensens, Katrien Van Cleemput, Heidi Vandebosch, and Ilse De Bourdeaudhuij. 2014. “Determinants of Self-Reported Bystander Behavior in Cyberbullying Incidents Amongst Adolescents.” Cyberpsychology, Behavior, and Social Networking 17(4):207-215.
DiFranzo, Dominic J., Samuel Hardman Taylor, Franccesca Kazerooni, Olivia D. Wherry, and Natalya N. Bazarova. 2018. “Upstanding by Design: Bystander Intervention in Cyberbullying.” Pp. 1-12 in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. New York, NY: ACM.
Druin, Allison. 2002. “The Role of Children in the Design of New Technology.” Behaviour and Information Technology 21(1):1-25.
Douek, Evelyn. 2022. “Second Wave Content Moderation Institutional Design: From Rights To Regulatory Thinking.” Harvard Law Review 136. http://dx.doi.org/10.2139/ssrn.4005326
Espelage, Dorothy L., Mrinalini A. Rao, and Rhonda G. Craven. 2012. “Theories of Cyberbullying.” Pp. 49-67 in Principles of Cyberbullying Research: Definitions, Measures, and Methodology, edited by S. Bauman, D. Cross, and J. Walker. New York, NY: Routledge.
Faris, Robert, Diane Felmlee, and Cassie McMillan. 2020. “With Friends Like These: Aggression from Amity and Equivalence.” American Journal of Sociology 126(3):673-713.
Finkelhor, David, Kerryann Walsh, Lisa Jones, Kimberly Mitchell, and Anne Collier. 2021. “Youth Internet Safety Education: Aligning Programs with the Evidence Base.” Trauma, Violence, & Abuse 22(5):1233-1247.
Floridi, Luciano. 2016. “On Human Dignity as a Foundation for the Right to Privacy.” Philosophy & Technology 29:307–312. https://doi.org/10.1007/s13347-016-0220-8
Fuller, Robert W., and Pamela A. Gerloff. 2008. Dignity for All: How to Create a World without Rankism. Oakland, CA: Berrett-Koehler Publishers.
Garandeau, Claire F., Inho A. Lee, and Christina Salmivalli. 2014. “Inequality Matters: Classroom Status Hierarchy and Adolescents’ Bullying.” Journal of Youth and Adolescence 43(7):1123-1133.
Hart, Roger A. 2008. “Stepping Back From ‘The Ladder’: Reflections on a Model of Participatory Work with Children.” Pp. 19-31 in Participation and Learning: Developing Perspectives on Education and the Environment, Health and Sustainability, edited by R. Jensen. Dordrecht, NL: Springer Netherlands.
Hartling, Linda M., and Evelin G. Lindner. 2016. “Healing Humiliation: From Reaction to Creative Action.” Journal of Counseling & Development 94(4):383-390.
Hicks, Donna. 2011. Dignity. New Haven, CT: Yale University Press.
Hicks, Donna. 2018. Leading with Dignity: How to Create a Culture That Brings Out the Best in People. New Haven, CT: Yale University Press.
Hinduja, Sameer, and Justin W. Patchin. 2013. “Social Influences on Cyberbullying Behaviors among Middle and High School Students.” Journal of Youth and Adolescence 42(5):711-722.
James, Carrie. 2014. Disconnected: Youth, New Media, and the Ethics Gap. Cambridge, MA: MIT Press.
James, Carrie, and Emily Weinstein. 2021. “We’re All Worried About Teens and Tech. HX Might Be the Answer.” TechCrunch, November 12. Retrieved from: https://techcrunch.com/2021/11/12/were-all-worried-about-teens-and-tech-hx-might-be-the-answer/
Jones, Lisa M., Kimberly J. Mitchell, and Wendy A. Walsh. 2014. A Content Analysis of Youth Internet Safety Programs: Are Effective Prevention Strategies Being Used? Durham, NH: Crimes Against Children Research Center (CCRC), University of New Hampshire. Retrieved from: https://scholars.unh.edu/ccrc/41/
Jones, Lisa M., and Kimberly J. Mitchell. 2016. “Defining and Measuring Youth Digital Citizenship.” New Media & Society 18(9):2063-2079.
Kofoed, Jette, and Elisabeth Staksrud. 2019. “‘We Always Torment Different People, So By Definition, We Are No Bullies’: The Problem of Definitions in Cyberbullying Research.” New Media & Society 21(4):1006-1020.
Kumar, Priya, Jessica Vitak, Marshini Chetty, Tamara L. Clegg, Jonathan Yang, Brenna McNally, and Elizabeth Bonsignore. 2018. “Co-designing Online Privacy-related Games and Stories with Children.” Pp. 67-79 in Proceedings of the 17th ACM Conference on Interaction Design and Children. New York, NY: ACM.
Latané, Bibb, and John M. Darley. 1970. The Unresponsive Bystander: Why Doesn’t He Help? Upper Saddle River, NJ: Prentice Hall.
Macaulay, Peter J., Lucy R. Betts, James Stiller, and Blerina Kellezi. 2022. “Bystander Responses to Cyberbullying: The Role of Perceived Severity, Publicity, Anonymity, Type of Cyberbullying, and Victim Response.” Computers in Human Behavior 131.
Macháčková, Hana, and Jan Pfetsch. 2016. “Bystanders’ Responses to Offline Bullying and Cyberbullying: The Role of Empathy and Normative Beliefs About Aggression.” Scandinavian Journal of Psychology 57(2):169-176.
Milosevic, Tijana. 2018. Protecting Children Online?: Cyberbullying Policies of Social Media Companies. Cambridge, MA: The MIT Press.
Milosevic, Tijana, Kathleen Van Royen, and Brian Davis. 2022a. “Artificial Intelligence to Address Cyberbullying, Harassment and Abuse: New Directions in the Midst of Complexity.” International Journal of Bullying Prevention 4:1-5.
Milosevic, Tijana, Anne Collier, and James O’Higgins Norman. 2022b. “Leveraging Dignity Theory to Understand Bullying, Cyberbullying, and Children’s Rights.” International Journal of Bullying Prevention 5(2):108-120.
National Advisory Council for Online Safety (NACOS). 2021. Report of a National Survey of Children, their Parents and Adults Regarding Online Safety. Retrieved from: https://www.gov.ie/en/publication/ebe58-national-advisory-council-for-online-safety-nacos/
Nelson, Helen J., Sharyn K. Burns, Garth E. Kendall, and Kimberly A. Schonert-Reichl. 2019. “Preadolescent Children’s Perception of Power Imbalance in Bullying: A Thematic Analysis.” PLoS One 14(3):e0211124.
O’Higgins Norman, James. 2020. “Tackling Bullying From the Inside Out: Shifting Paradigms in Bullying Research and Interventions.” International Journal of Bullying Prevention 2(3):161-169.
Patchin, Justin W., and Sameer Hinduja. 2015. “Measuring Cyberbullying: Implications for Research.” Aggression and Violent Behavior 23:69-74.
Papatraianou, Lisa H., Diane Levine, and Dean West. 2014. “Resilience in the Face of Cyberbullying: An Ecological Perspective on Young People’s Experiences of Online Adversity.” Pastoral Care in Education 32(4):264-283.
Pfattheicher, Stefan, and Johannes Keller. 2015. “The Watching Eyes Phenomenon: The Role of a Sense of Being Seen and Public Self‐awareness.” European Journal of Social Psychology 45(5):560-566.
Reith, James. 2021. “The User is a Problematic Idea. But So is the Human.” Medium, August 16. Retrieved from: https://uxdesign.cc/the-user-is-a-problematic-idea-but-so-is-the-human-8e2479085073
Shipp, Jonny, Anno Mitchell, Ioanna Noula, and Patrick Grady. 2022. Accountability Report 2.0: An Independent Evaluation of Online Trust and Safety Practice. The Internet Commission. Retrieved from: https://inetco.org/report
Shultziner, Doron, and Itai Rabinovici. 2012. “Human Dignity, Self-Worth, and Humiliation: A Comparative Legal–Psychological Approach.” Psychology, Public Policy, and Law 18(1):105.
Søndergaard, Dorte Marie. 2012. “Bullying and Social Exclusion Anxiety in Schools.” British Journal of Sociology of Education 33(3):355-372.
Søndergaard, Dorte Marie, and Helle Rabøl Hansen. 2018. “Bullying, Social Exclusion Anxiety and Longing for Belonging.” Nordic Studies in Education 38(4):319-336.
Thornberg, Robert. 2015. “Distressed Bullies, Social Positioning and Odd Victims: Young People’s Explanations of Bullying.” Children & Society 29(1):15-25.
United Nations General Assembly. 1948. “Universal Declaration of Human Rights.” Retrieved from: https://www.un.org/en/about-us/universal-declaration-of-human-rights
Van Bommel, Marco, Jan-Willem van Prooijen, Henk Elffers, and Paul A. M. Van Lange. 2012. “Be Aware To Care: Public Self-Awareness Leads to a Reversal of the Bystander Effect.” Journal of Experimental Social Psychology 48(4):926-930.
Vidgen, Bertie, and Leon Derczynski. 2020. “Directions in Abusive Language Training Data: A Systematic Review: Garbage In, Garbage Out.” PLoS One 15(12):e0243300.
Wiseman, Rosalind. 2009. Owning Up Curriculum: Empowering Adolescents to Confront Social Cruelty, Bullying, and Injustice. Champaign, IL: Research Press.
Yamada, David C. 2007. “Dignity, Rankism, and Hierarchy in the Workplace: Creating a Dignitarian Agenda for American Employment Law.” Berkeley Journal of Employment & Labor Law 28(1).