To address the challenges posed by social media, school districts, law enforcement, and criminal justice organizations are leveraging artificial intelligence (AI) to identify and predict harmful content online. Because these systems lack understanding of language, cultural nuances, and social context, there are serious consequences when they wrongly interpret youths’ posts as violent. As such, AI systems may create and reinforce new systems of marginalization and oppression, and even put youth at risk by creating digital pathways to incarceration (Patton et al., 2017). We suggest that social work ethical principles be integrated broadly as a framing guide for the development of AI technologies to prevent violence against and among youth and marginalized groups.
We suggest adopting a social work ethics approach to reduce harm and bias in AI systems.
Co-creation and collaboration among social workers, computer scientists, youth, and community stakeholders will make for safer and more inclusive AI systems.
Promoting community participation is one way to bring diverse voices into emerging technology.
Artificial intelligence (AI) systems are being created to monitor and predict youth violence that occurs globally on social media. Some youth use social media as a psychosocial tool for help-seeking, grief processing, and general support. Conversely, other youth may use social media to exclude others or engage in cyberbullying, violence, and other isolating behaviors. To combat these challenges, school districts, law enforcement, and criminal justice organizations are leveraging AI (e.g., machine learning, computer vision) to identify and predict harmful content online (Patel et al., 2020).
While the prospect of predicting and preventing harmful content online seems hopeful, there are deep concerns about the extent to which an AI system can correctly decipher context, which is critical to any interpretation of language or action. Because AI technologies lack understanding of language, cultural nuances, and social context, there are serious consequences when they wrongly interpret youths’ posts as violent. As such, AI systems may create and reinforce new systems of marginalization and oppression and even put youth at risk by creating digital pathways to incarceration (Patton et al., 2017). As social work researchers serving youth and studying AI in the U.S., India, and Israel, we have a front-row seat to the transnational interactions among youth, technology, government, and the private sector. Our social work and lived experiences in these countries brought us together to discuss social work approaches in data science to prevent online violence against young people in our respective countries. In this chapter, we suggest that the social work ethical principles of respect for diversity, human rights, anti-oppression, privacy, and safety be integrated broadly as a framing guide for the development of AI technologies to prevent violence against and among youth and marginalized groups.
India has 600 million youth under 25 years old (Jack, 2018). There are around 645 distinct tribes in India, and more than 19,500 languages and dialects are spoken as mother tongues. While 22 languages are officially scheduled, Google recognizes and supports only nine of them (Office of the Registrar General & Census Commissioner, India, 2011; Shukla, 2019). India is home to 560 million internet subscribers, with 351 million monthly active social media users, a figure predicted to double by 2023 (PTI, 2019; Pragati, 2019). Growing access to new-age technology and free access to social media have opened new spaces for young people to express their feelings and thoughts online. AI has been applied in various ways to Indian youths’ daily lives, through social media monitoring, linking biometric IDs with services, and digital marketing, irrespective of caste, gender, sexuality, and religion (Chawla, 2020; Jalan, 2020; Singh & IANS, 2020). Nevertheless, Indian scholars worry that new AI systems might reinforce caste and religious discrimination through modern tech bias in employment, imprisonment, and access to finance, similar to the consequences of racial bias in the U.S. (Kalyanakrishnan et al., 2017).
In India, young people covered their faces during recent protests against the citizenship law for fear that police were using facial recognition systems to identify and arrest them. Activists remain concerned that regulation of these emerging technologies is insufficient (Ulmer & Siddiqui, 2020). The Aadhaar ID, a digital biometric ID system, collects personal details such as photographs, fingerprints, and demographic profiles and links them with an individual’s welfare and banking services (Jain, 2019; Pandya & Cognitive World, 2019). This biometric system is now being extended with AI programs that scan and flag certain citizens as suspicious or dangerous. State governments, such as Punjab and Delhi, already use AI-enabled facial recognition algorithms to screen crowds at political rallies and public protests, having installed the Automated Facial Recognition System (AFRS) software in airports, offices, and cafes to identify ‘criminals’ (Chandran, 2019; Jain, 2019; Ulmer & Siddiqui, 2020). The Indian National Strategy for Artificial Intelligence discussion paper acknowledges that data-driven decision-making and AI algorithms may be biased and that it is important to critically assess their impact on society and find ways to reduce bias (NITI Aayog, 2018). There is a significant chance that AI may mispredict youth expressions in local and regional languages on social media. Moreover, India has no policy to ensure safe, inclusive, and participative AI technologies for young people, so the rise of AI there might fuel offline violence and increase polarization in society. The spread of rumors on social media has already aggravated communal violence, including the “Dadri mob lynching” in 2015 and the “Kathua rape case” in 2018 (Teitelman, 2019). AI technologies in India therefore demand sustained socio-ethical study and practice.
During the past decade, social media has changed how Israeli youth socialize. Although youth have more opportunities to strengthen existing friendships and form new relationships, they still experience exclusion, cyberbullying, and other violent behaviors on social media. The unique makeup of Israeli society, comprising different beliefs, cultures, and norms, may further exacerbate these on- and offline tensions (Aizenkot & Kashy-Rosenbaum, 2019; Landau et al., 2019; Mesch, 2017). Yet the deployment of AI technologies to create safe online environments for youth is in its infancy. New Israeli high-tech companies, such as L1ght, are developing AI systems for monitoring social media and the Internet in the hope of protecting youth from cyberbullying, shaming, and sexual predation (Chaimovich, 2020). At the same time, the central law enforcement agency plans to develop an AI system to monitor negative social media posts, such as threats, incitements, and online shaming, directed toward police officers. This plan is concerning, as it could give the police unlimited access to Israelis’ social media without restraint or consent. Israel currently lacks ethical AI policies and guidelines, igniting growing concern that companies and police have access to unrestricted surveillance, thus violating fundamental human rights of privacy and consent (Kabir, 2019). Without training AI technology to understand the different cultures, norms, and beliefs of youth within the country, the development of these technologies may reinforce biased assumptions that lead to further exclusion and violence.
In the United States, the integration and deployment of AI technologies have fundamentally changed our lives and, in particular, the lives of young people. There has been tremendous discussion about the use and misuse of facial recognition systems, particularly for communities of color and transgender individuals. Much of this work has come into focus because of the research and advocacy of Black women, such as Joy Buolamwini, Timnit Gebru, and Mutale Nkonde, who document the substantial racial bias in algorithmic systems that extends and amplifies racial inequity (Buolamwini et al., 2020; Raji et al., 2020; Nkonde, 2019). The results of faulty facial recognition systems were underscored in a 2020 New York Times article describing the experience of a Black man in Detroit who was falsely arrested for a crime he did not commit (Hill, 2020). In addition to facial recognition, research from the Brennan Center for Justice at NYU indicates that over the last five years, surveillance companies have developed and are selling AI-powered software that can allegedly detect signs of violence or other concerning behavior among youth on social media (Patel et al., 2020). One example is Chicago Public Schools. Armed with a U.S. Department of Justice grant, the large urban district hired intelligence analysts and purchased a social media monitoring service to analyze online conversations among students. The analysts used keyword searches to find threats at the program’s target schools (Patel et al., 2020). The program is particularly concerning because students were not made aware of the initiative, and it remains unclear what words or phrases connote a “threat.” This is precarious, given recent research from Patton and colleagues (2019), who found that Chicago youth from a neighborhood with high rates of gun violence disagreed with peers from the same neighborhood about how to interpret Twitter posts labeled “aggressive.” Nevertheless, research from the SAFElab at Columbia University documents back-and-forth arguments on social media between youth that in some cases lead to online aggression, school fights, increased bullying, and gun violence. Prevention is critical, and any AI violence prevention system needs local youth participation to capture meaning, language, context, and communication styles, as much of the language and imagery used on social media is susceptible to misinterpretation. While there is excitement about leveraging AI systems to identify potential harms, there is little to no evidence that these systems meet the goals for which they are deployed.
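To make this concern concrete, the minimal sketch below shows how keyword-based flagging of the kind described above might work, and why it misfires without context. It is a hypothetical illustration, not any vendor’s or district’s actual system; the keyword list and posts are invented.

```python
# A minimal, hypothetical sketch of keyword-based flagging; not any
# vendor's or district's actual system. It illustrates why matching
# words without context misreads youth posts.

KEYWORDS = {"shoot", "beef", "smoke"}  # hypothetical watchlist terms

def flag_post(text: str) -> bool:
    """Flag a post if any watchlist keyword appears in it."""
    tokens = {token.strip(".,!?").lower() for token in text.split()}
    return not tokens.isdisjoint(KEYWORDS)

posts = [
    "gonna shoot hoops after school",    # basketball, not violence
    "that beef with my cousin is over",  # reconciliation, not conflict
    "rip lil t we miss you every day",   # grief post, never surfaced
]

for post in posts:
    print(flag_post(post), "->", post)
# The first two posts are false positives; the third, a grief post that
# may warrant support, carries no keyword and is missed entirely.
```

Even this toy example shows that the meaning of “shoot” or “beef” depends on context that a keyword list cannot carry, which is precisely where local youth expertise matters.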
Around the world, social workers practice similar codes of ethics, adopted through the International Federation of Social Workers (IFSW) and national associations, despite the world’s socio-cultural complexities (IFSW, n.d.). In India, recognizing and incorporating indigenous knowledge is essential to understanding youth and the complex local language expressions on social media and the Internet. It is critical that AI systems be optimized to identify the pragmatic ways in which youth use social media and to contextually understand languages, particularly those of marginalized communities with myriad languages and hyper-local contexts. Although there are no standard national social work ethical principles in India, the IFSW Code of Ethics is widely adopted and practiced across the country to ensure professional ethics in human services. The development, integration, and application of AI systems in India should prioritize principles that underscore human rights, social justice, community participation, equity, and the ethical use of technology, as highlighted in the IFSW Code of Ethics. These ethical considerations outline the role of technology in social work practice and offer scope for social work’s role in the development and deployment of emerging technologies in real-world practice to foster inclusion and prevent violence.
Israel’s Social Work Code of Ethics guides social workers in supporting the participation and quality of life of their clients, families, and communities (Israeli Association of Social Work [IASW], 2018). Youth in Israel come from different beliefs, languages, races, and ethnicities, including Jewish, Christian, and Muslim communities, Israeli Arabs, and immigrants, and it is essential to obtain their insights about violence to reduce biased assumptions. As AI technology for violence prevention carries ethical implications, the development and implementation of AI technologies for youth violence prevention in Israel should consider adopting a social work ethical approach that involves youth participation from different sectors of the country to increase objectivity and to develop more effective AI systems.
In the U.S., social workers follow the National Association of Social Workers (NASW) Code of Ethics (NASW, 2017), which frames everyday professional conduct and practice for social workers. At its core, the framework espouses service, social justice, the dignity and worth of the person, the importance of human relationships, integrity, and competence. In 2018, the NASW, along with the Association of Social Work Boards, the Council on Social Work Education, and the Clinical Social Work Association, developed standards addressing the role of technology in social work practice and education. The standards cover four main areas: providing information to the public, designing and delivering services, gathering and managing information about clients, and educating and supervising students. The relatively new standards are grounded in ethics, the text proclaiming that “when social workers use technology to provide information to the public, they shall take reasonable steps to ensure that the information is accurate, respectful, and consistent with the NASW Code of Ethics” (NASW et al., 2018, p. 16). Consider, for example, the use of AI for predictive risk assessment in child welfare. Social work researchers from the Children’s Data Network, a research initiative at the University of Southern California’s Suzanne Dworak-Peck School of Social Work, have used AI to link records across all aspects of a child’s life, including health, education, and Department of Child and Family Services (DCFS) data, with the goals of improving children’s well-being, better identifying children at risk before serious injury or death, and influencing policy decisions (Cuccaro-Alamin et al., 2017). While there are many ethical considerations to contend with, a social work approach might involve working with directly impacted groups as domain experts to co-design those AI systems. This means seeking qualitative insights and expertise to consider and anticipate the potential challenges, harms, or benefits of an AI deployment. It is also critical to consider issues of privilege, oppression, race, and power: how they play out in creating data sets, in how and what labels or codes are created, and in how and where the AI system is deployed.
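One practical way to operationalize the co-design step described above is to measure, before any model is trained, how often domain experts and outside annotators actually agree on labels. The sketch below, using entirely hypothetical labels, computes Cohen’s kappa, a standard chance-corrected agreement statistic, for two sets of annotations of the kind compared in Patton et al. (2019); it is an illustration, not their published method.

```python
# A hedged sketch: quantifying label agreement between domain experts
# and outside annotators before training a model. Labels are hypothetical.
from collections import Counter

def cohen_kappa(a, b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(a) == len(b)
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    counts_a, counts_b = Counter(a), Counter(b)
    expected = sum(counts_a[k] * counts_b.get(k, 0) for k in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical labels for ten posts: domain experts vs. graduate annotators.
experts  = ["aggressive", "grief", "other", "grief", "other",
            "aggressive", "other", "grief", "other", "other"]
students = ["aggressive", "aggressive", "other", "grief", "aggressive",
            "aggressive", "other", "aggressive", "other", "other"]

print(f"kappa = {cohen_kappa(experts, students):.2f}")
# kappa = 0.55: only moderate agreement, despite 7/10 raw matches.
```

A low or moderate kappa signals that the codebook, not the annotators, needs revisiting with the communities whose language is being labeled.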
AI systems are used in all three of the countries above with limited or no socio-cultural context. The addition of social work ethical principles and an anti-oppressive approach can add socio-cultural contextual value to the process of AI development and integration. These ethical considerations can outline the role of technology in social work practice and offer scope for social workers’ role in the development and deployment of emerging technologies beyond borders to promote inclusion and prevent violence.
Transnational Approach to Social Work Ethics in Emerging Technologies

| Transnational Approach | India | U.S. | Israel |
|---|---|---|---|
| Tech challenges in society | No policy to regulate AI technologies, which leads to potential surveillance, digital victimization, e-incarceration, bias and discrimination, and the digital divide. | No federal policy to regulate AI technologies, which leads to potential surveillance, digital victimization, e-incarceration, bias and discrimination, and the digital divide. | Israel lacks ethical AI policies and guidelines, potentially providing tech companies and police access to unrestricted surveillance. |
| Key social differences and social workers’ engagement in welfare systems | Caste is an invisible systemic social problem; religious polarization and diverse language and cultural groups. Social workers are actively engaged in the welfare systems. | Race is the most visible structural systemic social problem; multi-national cultural groups and indigenous populations. Social workers are actively engaged in the welfare systems. | Multi-religious and multi-ethnic groups. Social workers are actively engaged in the social welfare systems. |
| Social work ethics | India adopts ethical principles from the International Federation of Social Workers (IFSW), highlighting human rights, social justice, anti-oppression, people’s participation, self-determination, diversity, and indigenous knowledge (IFSW, n.d.). | The NASW Code of Ethics highlights the role of technology in social work practice and education, emphasizing human rights, social justice, anti-oppression, people’s participation, self-determination, diversity, and indigenous knowledge (NASW, 2017). | The IASW guidelines frame social work practice, emphasizing human rights, social justice, anti-oppression, people’s participation, self-determination, and diversity (IASW, 2018). |
| Potential transnational benefits of tech social work collaboration | | | |
AI tools are used in India, Israel, and the United States under the guise of youth violence prevention. While there is evidence that problematic content does occur on social media across populations and platforms, there is a dearth of evidence that AI can actually reduce harmful and hateful content online. The field of social work offers prevention and intervention models that provide a framework and context for working with diverse populations, leveraging domain expertise and tech social work, and promoting social cohesion. We argue that these social work principles, shared globally, may offer a more ethical and humane approach to developing AI technologies and tools for violence prevention, and may also serve as a check against using AI when the tool does not fit the social problem, the research question, or the social context. This chapter underscores the need for co-creation and collaboration among social workers, computer scientists, local youth, and other stakeholders in youth development, guided by appropriate ethical standards, to prevent harm and bias in AI systems.
Desmond Upton Patton, Ph.D., MSW, is the founding director of the SAFElab; Associate Professor of Social Work, Sociology, and Data Science and Associate Dean for Curriculum Innovation and Academic Affairs at the Columbia School of Social Work; and Associate Director for Diversity, Equity, and Inclusion at the Data Science Institute at Columbia University. His research interests include emerging technologies, ethics in AI, Black youth wellbeing, social work, and youth violence prevention.
Siva Mathiyazhagan, Ph.D., MSW, serves as Associate Director of Strategies and Impact at the SAFElab, Columbia University. He is the founding director of Trust for Youth and Child Leadership (TYCL) International and its representative to the United Nations. His research focuses on tech social work practice, ethics in AI, and youth suicidal expressions on social media.
Aviv Y. Landau, Ph.D., MSW, worked for several years as a social worker in Israel and is a postdoctoral research scientist at the Data Science Institute and Associate Director of the SAFElab at Columbia University. His research focuses on health informatics, child abuse and neglect, youth, and emerging technologies.
Aizenkot, D., & Kashy-Rosenbaum, G. (2019). Cyberbullying victimization in WhatsApp classmate groups among Israeli elementary, middle, and high school students. Journal of Interpersonal Violence. https://doi.org/10.1177/0886260519842860
Buolamwini, J., Ordóñez, V., Morgenstern, J., & Learned-Miller, E. (2020). Facial recognition technologies: A primer. Algorithmic Justice League. https://global-uploads.webflow.com/5e027ca188c99e3515b404b7/5ed1002058516c11edc66a14_FRTsPrimerMay2020.pdf
Chaimovich, H. (2020, February 25). The “shark” that will protect children: Zohar Lebovitz’s L1ght raised 15 million dollars. Geek Time. https://www.geektime.co.il/l1ght-raised-15m/
Chandran, R. (2019). Use of facial recognition in Delhi rally sparks privacy fears. Reuters. https://www.reuters.com/article/us-india-protests-facialrecognition-trfn/use-of-facial-recognition-in-delhi-rally-sparks-privacy-fears-idUSKBN1YY0PA
Chawla, V. (2020). Is social media analytics redundant today? Analytics India Magazine. https://analyticsindiamag.com/is-social-media-analytics-redundant-today/
Cuccaro-Alamin, S., Foust, R., Vaithianathan, R., & Putnam-Hornstein, E. (2017). Risk assessment and decision making in child protective services: Predictive risk modeling in context. Children and Youth Services Review, 79(January), 291–298. https://doi.org/10.1016/j.childyouth.2017.06.027
Hill, K. (2020, June 24). Wrongfully accused by an algorithm. The New York Times. https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html
Israeli Association of Social Work [IASW]. (2018). The Israeli social work code of ethics. https://socialwork.org.il/prdFiles/%D7%A7%D7%95%D7%93%20%D7%94%D7%90%D7%AA%D7%99%D7%A7%D7%94%20%D7%94%D7%97%D7%93%D7%A9%202018.pdf
International Federation of Social Workers [IFSW]. (n.d.). Statement of ethical principles and professional integrity. IFSW. https://www.ifsw.org/wp-content/uploads/2018/06/13-Ethics-Commission-Consultation-Document-1.pdf
Jack, I. (2018). India has 600 million young people – and they’re set to change our world. The Guardian. https://www.theguardian.com/commentisfree/2018/jan/13/india-600-million-young-people-world-cities-internet
Jain, M. (2019). The Aadhaar card: Cybersecurity issues with India’s biometric experiment. The Henry M. Jackson School of International Studies, University of Washington. https://jsis.washington.edu/news/the-aadhaar-card-cybersecurity-issues-with-indias-biometric-experiment/
Jalan, T. (2020). Indian government again proposes social media surveillance, this time in the name of fake news. MediaNama. https://www.medianama.com/2020/06/223-india-social-media-surveillance-proposal-fake-news/
Kalyanakrishnan, S., Panicker, R. A., Natarajan, S., & Rao, S. (2017). Opportunities and challenges for artificial intelligence in India. AIES Conference 2018. https://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_52.pdf
Kabir, O. (2019, October 7). Police shaming procedure: Monitoring and removing citizens’ statements against police. Calcalist. https://www.calcalist.co.il/internet/articles/0,7340,L-3771613,00.html
Landau, A. Y., Eisikovits, Z., & Rafaeli, S. (2019). Coping strategies for youth suffering from online interpersonal rejection. Hawaii International Conference on System Sciences, 52, 2176-2185. https://scholarspace.manoa.hawaii.edu/bitstream/10125/59656/0216.pdf
Mesch, G. S. (2017). Race, ethnicity and the strength of Facebook ties. Journal of Youth Studies, 21(5), 575-589. https://doi.org/10.1080/13676261.2017.1396303
National Association of Social Workers [NASW]. (2017). The NASW Code of Ethics. https://www.socialworkers.org/About/Ethics/Code-of-Ethics/Code-of-Ethics-English
NITI Aayog. (2018). National strategy for artificial intelligence #AIforall. https://niti.gov.in/sites/default/files/2019-01/NationalStrategy-for-AI-Discussion-Paper.pdf
Nkonde, M. (2019). Automated anti-Blackness: Facial recognition in Brooklyn, New York. Harvard Kennedy School Journal of African American Policy, 20, 30-36. https://search.proquest.com/docview/2404400349
Office of the Registrar General & Census Commissioner, India (2011). 2011 census data [data set]. Ministry of Home Affairs, Government of India. https://censusindia.gov.in/2011-common/censusdata2011.html
Pandya, J. & Cognitive World. (2019). Nuances of Aadhaar: India’s digital identity, identification system and ID. Forbes. https://www.forbes.com/sites/cognitiveworld/2019/07/16/nuances-of-aadhaar-indias-digital-identity-identification-system-and-id/?sh=85a2ed1209da
Patel, F., Levinson-Waldman, R., Koreh, R. & DenUyl, S. (2020). Social media monitoring: How the Department of Homeland Security uses digital data in the name of national security. Brennan Center for Justice. https://www.brennancenter.org/our-work/research-reports/social-media-monitoring
Patton, D. U., Blandfort, P., Frey, W. R., Gaskell, M. B., & Karaman, S. (2019). Annotating Twitter data from vulnerable populations: Evaluating disagreement between domain experts and graduate student annotators. Hawaii International Conference on System Sciences, 52, 2142-2151. https://www.researchgate.net/publication/330261697_Annotating_Twitter_Data_from_Vulnerable_Populations_Evaluating_Disagreement_Between_Domain_Experts_and_Graduate_Student_Annotators
Patton, D. U., Brunton, D., Dixon, A., Miller, R. J., Leonard, P., & Hackman, R. (2017). Stop and frisk online: Theorizing everyday racism in digital policing in the use of social media for identification of criminal conduct and associations. Social Media + Society. https://doi.org/10.1177%2F2056305117733344
Pragati. (2019). Social media statistics in India. Talkwalker. https://www.talkwalker.com/blog/social-media-statistics-in-india
PTI. (2019). Internet users in India to rise by 40%, smartphones to double by 2023: McKinsey. The Economic Times. https://economictimes.indiatimes.com/tech/internet/internet-users-in-india-to-rise-by-40-smartphones-to-double-by-2023-mckinsey/articleshow/69040395.cms
Raji, I. D., Gebru, T., Mitchell, M., Buolamwini, J., Lee, J., & Denton, E. (2020). Saving face: Investigating the ethical concerns of facial recognition auditing. AAAI/ACM AI Ethics and Society Conference 2020. https://doi.org/10.1145/3375627.3375820
Shukla, G. (2019). Google search to add support for 3 new Indian languages by end of this year, updated mobile search UI also coming. NDTV: Gadgets 360. https://gadgets.ndtv.com/internet/news/google-search-oriya-urdu-support-3-new-languages-station-wi-fi-gram-panchayat-2104303
Singh, S. K. & IANS (2020). Govt exploring use of AI to tackle social media misuse. Outlook. https://www.outlookindia.com/newsscroll/govt-exploring-use-of-ai-to-tackle-social-media-misuse/1711741
Teitelman, C. (2019). Communal violence, social media, and elections in India. Journal of International Affairs, Columbia SIPA. https://jia.sipa.columbia.edu/online-articles/communal-violence-social-media-and-elections-india
Ulmer, A. & Siddiqui, Z. (2020). India’s use of facial recognition tech during protests causes stir. Reuters. https://www.reuters.com/article/us-india-citizenship-protests-technology/indias-use-of-facial-recognition-tech-during-protests-causes-stir-idUSKBN20B0ZQ