
Do Educational Technologies Have Politics? A Semiotic Analysis of the Discourse of Educational Technologies and Artificial Intelligence in Education

Published on Jun 29, 2021

Abstract

Proponents of automated instructionist technologies often follow a standard pattern: entrepreneurs propose a new automated educational technology by establishing a strong opposition to an overly stereotyped version of traditional education, build on intertextuality to generate discourse that draws on existing narratives, and then use media and marketing strategies to establish themselves as vital to the future of education. By following this blueprint, the creators of these technologies shield themselves from failure while making hyperbolic, evidence-free appeals that play on common desires for improved outcomes. As this pattern re-emerges in new technologies such as AI, consumers, educators, and parents need to be more critical of the claims and better informed about the kinds of outcomes that are realistic for these sorts of technologies.

Key Findings

  • Enthusiasm for certain types of automated instructionist technologies persists despite scant evidence of their efficacy, and often in the face of demonstrated failure.

  • Advocates of these sorts of technologies seek to shape and control narratives about learning in ways that are divorced from evidence-based claims.

  • Educators should be skeptical of claims about increasing learning speed or allowing for self-directed learning, as these approaches may not serve all types of learners well.

  • Many of these technologies are more invested in growing their market share and edging out competitors than they are in provable or testable educational outcomes.

Introduction

With every technological generation, we seem to forget its politics. In his controversial “Do Artifacts Have Politics?”, Langdon Winner recounts cases of industrial technologies intentionally implemented not to improve products but to displace workers’ unions, roads designed to privilege private cars, and city planning created to diffuse public protests. Winner’s claims and some of the historical facts he cited have been contested. However, his examples undoubtedly sparked a decades-long debate about the complex relationships between designers’ intentions, their resulting technical or architectural artifacts, and the social systems in which they operate. Today, this conversation has moved from simply stating that designed artifacts might serve a political purpose to understanding the more nuanced ways in which, for buildings or inventions to “implement” their politics, they need to be enacted within enabling systems (see, for example, Joerges, 1999; Woolgar and Cooper, 1999; Latour, 2004).

Understanding this is more important than ever. While today’s technology corporations operate under the guise of benign mottos such as “connecting people” or “organizing information,” they do far more than connect and organize: they are implementing a vision for reshaping human interaction in line with their technology and business models, displacing competing visions regardless of their quality. Moreover, this is only possible because these companies are enacting their artifacts within systems that are particularly well-suited to augment their impact. Social media apps would undoubtedly be less potent under a regulatory framework that limited their ability to collect data—what came first, the lax regulation or the app, or both?

This awakening to the era of digital surveillance and manipulation (Zuboff, 2015) led to a re-examination of the systemic impact of modern digital technologies on all areas of human activity, including education. What was believed to be unequivocally beneficial—such as universal, free access to educational materials—started to be observed in light of many previous critiques of technology and society. Examining with more attention the politics of educational technologies and their enabling systems became imperative. Audrey Watters (Watters, 2015a), Neil Selwyn (Selwyn, 2010, 2013; Selwyn & Facer, 2013), Sepehr Vakil (Vakil, 2018), and Ben Williamson (Williamson, 2018a, 2018b) have been producing foundational work in the field, indeed revealing that the issue is not just the creation of such learning technologies, but their (often malign) affinities with the larger socio-technical systems that generated them. Going beyond the simplified critique about “artifacts having politics,” today we have to examine how technological artifacts augment, enable, and facilitate specific visions of education that were there all along. What comes first, an education system that privileges testing and ranking, or the app that facilitates those operations—or both?

This article builds on those critiques, investigating the process by which the world became enamored with a specific type of educational technology, resulting in an unprecedented flow of billions of dollars and attention to it in just a few years. Nevertheless, it is essential to be precise about the “certain type” of educational technology we will discuss. Winner himself (2009) and others (e.g., Tyack & Cuban, 1995) often fail to understand that not all educational technologies are created equal and that ascribing unequivocal intentions to designers fails to capture the entire picture (Joerges, 1999): the politics of technological artifacts can go both ways. Yes, a computer can be used to mimic the traditional, oppressive classroom, but it can also offer students novel, subversive tools for knowledge creation to escape schoolified oppression (Freire, 2014; DiSalvo, 2014; Latour, 2004; Papert, 1980; Buechley, 2009). Failing to understand the subversive “Papertian” or “Buechleyan” uses of computing, and denying students access to them, could well be another subtle form of oppression.

This caveat is crucial because if there are explicit or unintended political intentions in the design of educational technologies, we cannot simply state that all of them are adverse and that digital artifacts are intrinsically oppressive. Even though Winner, in later years, ventured into this type of wholesale critique, putting the Logo language and an automated tutoring system in the same category, his earlier writings made more nuanced distinctions in other areas of human activity.

In this critique, we are referring to a particular type of educational technology, because obscuring the differing philosophical commitments and design principles of these tools results in less-than-useful generalizations. In this article, we will discuss the techniques and artifacts designed to teach pre-determined content to students through electronic media (such as video classes) accompanied by automated assessment. We will term these “automated instructionist technologies.”

We want to investigate how the “enamoring” of the educational world with these technologies happened despite historical accounts of earlier failures, hyperbolic promises only ever partially fulfilled, and decades of accumulating negative evidence (e.g., Watters, 2020; Cole, Kemple, Segeritz, 2012; Ready, Coon, Bretas & Daruwala, 2019). Still, automated instructionist technologies simply refuse to go away. We hypothesize that this unparalleled feat of persuasion was, counterintuitively, due not solely to sophisticated products or efficient marketing campaigns but also to the skillful construction of resilient discourse (Bakhtin, 1984).

We want to dive into how automated instruction, rather than improving schooling, is changing the nature of education itself by asserting a self-referential claim about the inevitability of a future driven by automation. In doing so, these reforms follow a typical playbook: the introduction of educational technologies has less to do with their effectiveness and more to do with taking control of a narrative of what education is about, silencing dissenting voices (e.g., experienced teachers), and defining what counts as innovation.

Nevertheless, these discursive moves are subtle and barely visible to the unaided eye. Today, no reformer in her right mind would explicitly state that educational automation or the replacement of teachers is positive. The analytical tools of discourse analysis and semiotics, which we will examine in the next section, come in handy to break the narrative into pieces and see what lies behind it.

Methods and Discourse Analysis

Semiotic analysis tries to examine the discourse for “barely perceptible” traces that might reveal the rationales and mental models that drove the creation of the message. Thus, our first methodological move to get to the crucial presuppositions of discourse is to start by deciphering not the visible but the intelligible, in the same way that art critics distinguish an authentic painting from its forgery by inspecting insignificant details, or how Freudian psychotherapists dwell on minor lapses of memory or language (Ginzburg, 1991).

Our analytical lens will also draw on Bakhtin’s theories of discourse analysis. Our first tool is the idea of dialogical discourse, a type of construction in which a product or idea is qualified based on the creation of an antagonist. As a result, one must inevitably consider other discourses that will be in dialogical opposition with one’s own (Bakhtin, 1984; Todorov, 1989). We will also explore the idea of polyphony: discourse is not monophonic or autonomous—conversely, it is “spoken” by many voices intertwined in time and space. Finally, the analytic lens of intertextuality will enable us to examine how authors—intentionally or not—use texts from the past and present to legitimize arguments in favor of their product or idea, which might require a semiotic archeological “excavation” to recover their significance.

Using these three analytic lenses—intertextuality, polyphony, dialogism—we hope to disentangle and reveal the constructs and ideas about education buried in the discourse of automated instructionist innovation. We do not mean that discourses are generated with the explicit goal of disguising the actual message, but that they are long, dialogical, and polyphonic social constructions that might even escape the comprehension of their direct beneficiaries (Blikstein, 2019).

We also have to remember Benveniste’s (1974) and Jakobson’s (1969) note about discourse being more than the mere transmission of information. It also has a conative function, by which authors focus on generating a positive response and a favorable effect instead of only concentrating on the actual content of the message (a lesson well-learned by modern politicians and marketers).

Thus, in our analysis, we should first forgo the idea that a handful of technology entrepreneurs are “disrupting” education or creating those discourses in isolation. Conversely, intertextuality shows that these discourses are built collectively and in multilayered structures of validation, ideology, economic interests, and individual ambitions (Bakhtin, Holquist, & Liapunov, 1993). Second, we consider that these automated educational technologies are dissimilar from most other products that are simply marketed to students and parents (such as pens and notebooks), teachers (such as textbooks), or school districts (such as furniture). Today’s “edtech” products require engagement from multiple stakeholders and passionate, almost “transcendent” buy-in.

Imagine if a school chair were marketed as a device that could “double the rate of learning”—the rules of engagement of traditional advertising would no longer suffice. Educational technology products must be more than just “better” than other solutions: they also need to be disruptive, innovative, revolutionary, “game-changing.” Thus, they have to be associated with powerful brands and personalities or connected to a grand, utopian educational progress theory; otherwise, the transcendent buy-in is never achieved.

This is not entirely foreign to traditional commercial advertising. Often it is not enough to say that a brand of chairs is better—you might have to recruit orthopedists to record testimonials and encourage happy customers to post pictures on social media. Ultimately, this collection of texts will speak not only about the furniture but also about the ethos of the company, its sustainable practices, and the charitable acts of the CEO. Automated educational technologies employ an even more powerful and complex set of texts to eventually normalize and naturalize the message (Moles, 1958), using this polyphony and intertextuality to make the idea not only desirable but inevitable.

Our analysis also follows the methodologies put forth in the pioneering work of Audrey Watters (Watters, 2015b, 2020), which combines historiography, technical analysis, and critical theory.

Data Collection

Our methodology included three data collection moments. First, we collected the self-reported, publicly available “company mission” statements from the websites of about 15 major edtech companies working on automated instructionist technologies and AI in education. These companies were selected based on systematic online searches within specialized databases such as TechCrunch (https://techcrunch.com/tag/edtech/) and CrunchBase (https://www.crunchbase.com).

We then collected publicly available interviews with some of the prominent leaders in the field and news pieces in which they are quoted. These interviews and news pieces were then filtered for the most relevant content (i.e., deleting redundant descriptions of the products and redundant recountings of the history of the companies or the biographies of the CEOs), resulting in about 20,000 words of data spanning about eight years. The interview data were analyzed for themes and topics, and representative excerpts were selected.

Finally, we used web-scraping techniques to extract the most recent public news pieces with the keywords “AI in Education” and “MOOCs.” These keywords do not represent the entire gamut of educational products available today, but they capture two main types of technologies from the past decade. We restricted the search to those terms to keep the number of results manageable while capturing critical services within the world of automated instructionist technologies. This resulted in 623 articles, from which we extracted the titles and first 20 words.
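For concreteness, a minimal sketch of this kind of collection step is shown below. It is an illustration, not our actual pipeline: the URL list and record fields are hypothetical placeholders, the use of requests and BeautifulSoup is an assumption about tooling, and a real crawl would also need rate limiting, deduplication, and error handling.

```python
# A minimal sketch of the collection step described above (illustrative only).
import requests
from bs4 import BeautifulSoup

KEYWORDS = ["AI in Education", "MOOCs"]

def extract_record(url: str) -> dict:
    """Fetch one news piece, keeping only its title and first 20 words."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    title = soup.title.get_text(strip=True) if soup.title else ""
    words = soup.get_text(separator=" ").split()
    return {"url": url, "title": title, "lead": " ".join(words[:20])}

# `article_urls` stands in for the 623 search hits returned for the keywords.
article_urls = ["https://example.com/news/ai-tutor-launch"]  # placeholder
corpus = [extract_record(u) for u in article_urls]
```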

In this article, however, we will not conduct a quantitative analysis of this dataset. Our goal with the automated data collection was to capture a broad enough set of texts to use the lens of discourse analysis on a sample that would represent the industry’s leading voices. We will, thus, present our qualitative analysis of the first three data sources (mission statements, interviews, and news reports), using the web-scraped data only for triangulation.
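As a sketch of what such triangulation can look like, one can count how often the recurring discursive themes discussed later in this article surface in the scraped titles and leads. The theme list and sample record below are illustrative assumptions, not our actual coding scheme or data.

```python
# Counting theme occurrences across scraped records (illustrative sketch).
from collections import Counter

THEMES = ["personalized", "own pace", "mastery", "gaps", "efficiency"]

def theme_counts(corpus):
    """Count how many records mention each theme at least once."""
    counts = Counter()
    for record in corpus:
        text = (record["title"] + " " + record["lead"]).lower()
        for theme in THEMES:
            if theme in text:
                counts[theme] += 1
    return counts

sample = [{"title": "New AI tutor promises mastery at your own pace",
           "lead": "The startup says its personalized system closes gaps for every learner"}]
print(theme_counts(sample))
# -> Counter({'personalized': 1, 'own pace': 1, 'mastery': 1, 'gaps': 1})
```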

“Edtech”: History and Discourses

The early days and Pressey’s teaching machine

Since the mid-19th century, many inventions have been expected to revolutionize schooling, such as primitive slide projectors and erasable writing devices (see, for example, Cuban, 1986). Watters’s account of educational technologies describes a machine for teaching spelling from 1866, an idea from Thorndike for a “personalized textbook” from 1912, Aikins’ testing contraption from 1913 (an “educational appliance to teach any subject”), Pressey’s testing machine from 1924, and an IBM test-scoring machine from 1937 (Watters, 2020). Considering our analytical framework, we want to interpret the more recent waves of innovation not as mere copies of what previous generations did, but as an ongoing intertextual dialogue between multiple generations of educational technologists, by which ideas, rationales, and justifications flow through time, alternating between hibernation and intense public interest.

The Pressey machine is the most famous of the first wave of automated devices and one of the first cases of the phenomenon that we will repeatedly explore in this article: the creation of a dialogical antagonist (Bakhtin et al., 1993) that serves to justify the existence of a product or idea, diminishing the need to go into the specifics of how the new product or invention will fulfill its goals. The Pressey contraption auto-graded multiple-choice questions and, despite its simplicity, was touted as revolutionary:

“There must be an industrial revolution in education, in which educational science and the ingenuity of educational technology combine to modernize the grossly inefficient and clumsy procedures of conventional education. […] developing in her pupils fine enthusiasms, clear thinking, and high ideals.” (Pressey, 1933, apud Watters, 2020)

Pressey worked hard at creating the discursive antagonist: the “grossly inefficient and clumsy procedures” of education. However, his solution, rather than some advanced device to bring “clear thinking, and high ideals,” was a simple mechanical contraption for revealing the correct answer to a multiple-choice question. In reading his quote, however, it is hard to oppose his diagnosis of education—who would advocate for the “clumsy procedures” of education? Here, Pressey seems to follow what Benveniste (1966) would formalize a few decades later: while his message is a gross overgeneralization and the content is imprecise and hyperbolic, it creates a positive effect of persuasion, which causes us to forget to question the connection between the stated problem and the solution. It is not clear how a standardized testing machine would develop “high ideals”—but a positive reaction to the discourse is created, despite the content of the message. The competent creation of an indefensible antagonist makes us want to jump on board with Pressey’s invention—even if it is not clearly defined, and there are hazy connections between what the technology does and the inventor’s critique of traditional education. This strategy, perfected in later years, would enter the playbook of automated educational technologists and stay there for decades.

Skinner’s teaching machine

Skinner continued Pressey’s agenda by creating his teaching machine but brought along a more comprehensive theory (behaviorism) and a powerful brand (Harvard University), amplifying the impact and reach of this type of device. His discourse got more refined, too. In one of the videos recorded to present the invention in 1954, we can see intertextuality at work in his own words (Skinner admittedly had many conversations with Pressey):

“I should like to discuss some of the reasons why studying with the help of a teaching machine is often dramatically effective […] as soon as the student has written his response, he operates the machine and learns immediately whether he is right or wrong. This is a great improvement over the system in which papers are corrected by a teacher where the student must wait perhaps till another day to learn whether or not what he has written is right. Such immediate knowledge has two principal effects: it leads most rapidly to the formation of correct behavior; the student quickly learns to be right. But there is also a motivating effect: the student is free of uncertainty or anxiety about his success or failure, his work is pleasurable, he does not have to force himself to study. A classroom in which machines are being used is usually the scene of intense concentration.” (emphasis added, Skinner, 1954)

Skinner makes similar discursive moves: first, creating the antagonist with overgeneralizations and hyperboles: schools are ineffective because “the student must wait perhaps till another day” to know if they are correct, children are “uncertain” and “anxious,” and the work is not “pleasurable.” His answer to those intractable problems is a simple mechanical machine that asks questions and shows correct answers. Again, there is no apparent connection between the stated educational problem (uncertainty, lack of motivation, anxiety, etc.) and the proposed technological solution (a question-and-answer machine). However, note that Skinner’s carefully crafted discourse is not one of the mechanization of education—even though he designed a mechanical device. It is of further humanization: less effort, less toil, more learning.

“Most students feel that machine study has compensating advantages. They work for an hour with little effort, and they report that they learn more in less time and with less effort than in conventional ways.” (Skinner apud Watters, 2015a)

Using our semiotic toolbox to examine the imperceptible, the “inverse of the discourse” shows that even Skinner surreptitiously admits that “machine study” is problematic: it has “compensating advantages.” Skinner’s claim of humanization would also become part of the standard discourse of automated educational technologies. Its focus on “humanization” is a preemptive reaction to the fact that replacing teachers with technology sounds indeed dehumanizing. Nevertheless, Skinner is quick to assign a new meaning to humanization: it is about less effort, less repetitive work, and, above all, “moving at your own speed.” But it is never about deviating from the pre-set curriculum offered by his machine. This crucial and consequential discursive move—redefining what “humanizing education” is—would transform our perception of automated educational technologies. “Moving at one’s own pace,” personalization, and individualization would become the cornerstones of automated educational technologies for the next seven decades:

“Another important advantage is that the student is free to move at his own pace. When a whole class is forced to move forward together, the bright student wastes time waiting for others to catch up, and the slow student (who may not be inferior in any other respect) is forced to go too fast. […] he gets farther and farther behind and often gives up altogether.” (emphasis added, Skinner, 1954)

Even today, we see little interrogation about the fact that Skinner considers “moving at your own pace” as the cornerstone of humanization—even if the moving is happening along a strictly defined road, defined without any participation of students. A rudimentary mechanical machine is associated with “freedom,” while the classroom, full of humans, is associated with “forced” study. Skinner inverts the normal state of affairs: a machine that forces children to learn standardized content is humanized and becomes an instrument of liberty, while the teacher, portrayed as more mechanical than a machine, becomes an instrument of oppression. However, this reversal goes beyond that—the testing machine was no longer about testing:

“[The student] is not in any sense being tested, instead, helpful hints, suggestions, and prompts maximize the chances that he will be right. […] Programs have been constructed in which without any prior study, the average student is right 95% of the time. This result is partly due to the fact that the student only moves on when he has completely mastered all the preceding material. A conservative estimate [is that a] high school student can cover about twice as much material with the same amount of time and effort as with traditional classroom techniques.” (Skinner, 1954, emphasis added)

Skinner’s machine was not a passing fad—it persisted for decades, had multiple versions, and commercial products were sold to the public for years. How is it possible that, again, a mere mechanical contraption could embody so many ambitious goals and do so much for education—to the point of doubling the rate of learning and making students get it right 95% of the time?

In 1954, Skinner’s words were connected to three other intertextual discourses. First, Pressey’s ideas of technical mechanization to overcome the “clumsy procedures” of education: technology and industry were transforming the world, and the freedom derived from automation (machines freeing people from repetitive tasks) was on everyone’s mind—so why not apply it to education? Second, an updated conception of schooling was slowly emerging, one that moved away from Pressey’s focus on cognition (“clear thinking”) and was about motivation, enjoyment, and student-centeredness. A “twisted” version of equity even enters the stage: the “slow student,” who would often be deemed genetically inferior and undeserving of any attention during Cubberley’s (1909) profoundly racist era, now, during Skinnerian times, “may not be inferior in any other respect.” Third, by the 1950s, there was also the notion that all aspects of human life could be measured and optimized: an increased need for education to be scientifically studied and precisely engineered. The teaching machine, in conversation with these texts, found its place in the imagination of many educational reformers: it offered the opportunity to not only mechanize but measure, rank, and quantify in minute detail.

Skinner promised that “a high school student can cover about twice as much material with the same amount of time.” Although this has never been observed or measured in any study, his machine stayed alive, even after the partial dismantling of the behaviorist edifice in the 1960s. In Skinner’s defense, at times, he was transparent about what the machine was about—a glorified textbook. Towards the end of the film presentation, he comments that:

“There is no magic about this teaching machine… it is simply a convenient way of bringing the student into contact with the man who writes the program.” (Skinner, 1954)

That passing, humble acknowledgment would slowly disappear from future texts about the mechanization of teaching—especially in Silicon Valley, many decades later.

Silicon Valley and the invention of the rewind button

The examination of the discourses of Pressey and Skinner is crucial because they laid the foundations of the entire industry of automated instruction (see Watters, 2021) by formulating justifications and ideas that had, at the same time, the appeal of humanization and automation. This was a remarkable combination because it allowed for the apparent resolution of one of the uncomfortable facts of public education: humanization and automation are often at odds with each other due to issues such as cost, teacher/student ratio, and the automaticity of assessment. When you automate education, you most likely dehumanize it. But Skinner and Pressey managed to insert into that equation a magic device that did the exact opposite. This is partly why their ideas showed remarkable resiliency and why the “texts” they left behind kept getting revived by future generations.

However, it was the Silicon Valley educational awakening of the early 2010s that effectively brought the teaching machine back in full swing. One of the main characters was Khan Academy, the online video lecture website famously introduced to the world in a 2011 TED Talk. Following the playbook of earlier educational technologists, Salman Khan begins by creating the usual antagonist: “traditional education.”

A teacher, no matter how good, has to give this one-size-fits-all lecture to 30 students -- blank faces. […] Good students start failing algebra all of a sudden, and start failing calculus all of a sudden, despite being smart, despite having good teachers, and it’s usually because they have these Swiss cheese gaps that kept building throughout their foundation. […] When those teachers are doing that [using the videos] there’s the obvious benefit -- the benefit that now their students can enjoy the videos in the way that my cousins did, they can pause, repeat at their own pace, at their own time. But the more interesting thing -- and this is the unintuitive thing when you talk about technology in the classroom -- by removing the one-size-fits-all lecture from the classroom, and letting students have a self-paced lecture at home, then when you go to the classroom, letting them do work, having the teacher walk around, having the peers actually be able to interact with each other, these teachers have used technology to humanize the classroom. (emphasis added, Khan, 2011)

By 2011, Skinner had long fallen out of favor—so it is not a coincidence that Khan does not mention him. In the 21st century, Skinner’s behaviorism came to represent old, traditional, oppressive schooling: not the type of ideas with which venture capitalists and philanthropists would like to associate. However, the magic of intertextuality was doing its job: Khan follows Skinner’s ideas with astonishing fidelity—almost word-for-word. Khan’s central claim is that students can rewind video lectures and play them again—not exactly a revolution, since the same can be done with a textbook and a plethora of other educational materials. In addition, a student in a teacher-led classroom (face-to-face or remote) can also ask the instructor to repeat a piece of information, and the teacher can always check for understanding. Good teachers know that when students express difficulties understanding the content, they can rephrase the explanation, mention examples, or employ new pedagogical moves. Rarely would a teacher “press the rewind button” and repeat the same sentences, and seldom would a simple word-for-word repetition lead to deeper understanding (except for basic content). However, Khan’s portrait of the classroom downplays all the possibilities available to teachers—or assumes that most teachers do not use them. In his efficient creation of an overgeneralized, stereotyped antagonist, he finds a way to justify his technology.

Nevertheless, the discursive moves here require a more nuanced analysis. We want to avoid oversimplifying Khan as an ill-intentioned designer who is merely copying Skinner’s ideas while hiding his inspirations. Like many of the innumerable digital educational repositories in existence, Khan’s library of videos can actually be helpful for students. Some might benefit from these repositories at home when reviewing the day’s materials or doing homework—in the same way as they would use a textbook. But this is hardly how the technology of the “rewind button” is portrayed: the narrative needs to adhere to the hyperbolism of education disruption. Khan was introduced at the TED conference in 2011 by none other than Bill Gates—the stakes were high, and the humble invention of a mere “library of supplemental videos” would not cut it. The way to compensate for the simplicity of the invention itself is to employ polyphony. The TED stage, Bill Gates, an MIT-educated “inventor”... even before Khan uttered a word, these non-verbal discourses were already doing their job (Benveniste, 1974): the stage is already set for success. Then, as he speaks, Khan picks up texts left “in the air” for decades by Pressey and Skinner. First, the caricature and simplification of what schools are and what teachers do. Then, the actual classroom, in which (despite all possible and fair criticism) there are humans with multiple possibilities of interaction, is supplanted by a video lecture library with a rewind button that always plays the same video. Persuasion here is carried out by portraying reality “upside-down”: the human becomes the oppressor, and the machine becomes the humanizer.

One of Khan’s hallmark ideas is “mastery learning”—again, an idea imported directly from Skinner. Khan says,

Learn math the way you’d learn anything, like riding a bicycle. Stay on that bicycle. Fall off that bicycle. Do it as long as necessary, until you have mastery. The traditional model, it penalizes you for experimentation and failure, but it does not expect mastery. We encourage you to experiment. We encourage you to fail. But we do expect mastery. (Khan, 2012)

The “detective” work of semiotics enables us to unpack his metaphor: learning to ride a bicycle is a vivid example of a skill that is learned in a purely automatic way—in learning it, through trial and error, we develop no mechanistic understanding of the science behind it. Learning how to ride a bicycle bears no resemblance to learning how to think mathematically or make sense of mathematical relationships. Here, Khan is betrayed by his mental model of learning, clearly inspired by the behaviorists. “Experimentation”—a word associated with progressive pedagogies—finds itself here walled in by his system: it becomes the mere playing back of videos and trying out different answers in quizzes. It removes the fundamental agency of experimentation, with which students can create hypotheses, design experiments to test them, or generate inventions: it gets reduced to being right or wrong until you are “always right” (remember Skinner (1954): “Programs have been constructed in which […] the average student is right 95% of the time. […] the student only moves on when he has completely mastered all the preceding material.”).

Knewton, another Silicon Valley titan of edtech, and its former CEO Jose Ferreira made claims similar to Khan’s. Since Knewton came to market a few years later, its almost-identical discourse already used some of the vocabulary of artificial intelligence in education that came into vogue in the late 2010s:

If a student struggles to complete an assignment, our adaptive technology diagnoses and remediates that student’s knowledge gaps with personalized content and assessment that will help them achieve proficiency. (Knewton, 2020)

Intertextuality, again, proves to be a pivotal lens to understand the discourses of Khan, Skinner, Jose Ferreira, and others. We see the recurrence of terms like “gaps in knowledge,” “mastery/proficiency,” “at your own pace,” and “personalization”—even almost 70 years later.

Justin Reich comments on this phenomenon with an illuminating example. While Jose Ferreira of Knewton was claiming that his system was “like a robot tutor in the sky that can semi-read your mind and figure out what your strengths and weaknesses are, down to the percentile,”

…Knewton engineers were simultaneously publishing blog posts with titles like “Understanding Student Performance with Item Response Theory.” Lift up the hood of the magical robot tutor, and underneath was a 40-year-old technology powering the whole operation. (Reich, 2020)

In other words, Reich reveals that the existing technology used by Knewton (and many other systems) was not advanced machine learning, but good old statistics—which, similarly to Khan’s video library, was not enough in an era where hyperbolic educational disruption is a necessity for entrepreneurial survival.
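For readers unfamiliar with Item Response Theory, a minimal sketch shows just how un-magical the machinery under the hood is. Below is the standard two-parameter logistic (2PL) formulation from the IRT literature; the parameter values are purely illustrative and make no claim to reproduce Knewton’s actual model.

```python
# Two-parameter logistic (2PL) Item Response Theory model: the probability
# that a student of ability `theta` answers an item correctly, given the
# item's discrimination `a` and difficulty `b`. Values are illustrative.
import math

def p_correct(theta: float, a: float, b: float) -> float:
    """2PL IRT: P(correct) = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# An "adaptive" system estimates theta from past answers and then serves
# the next item whose difficulty b is near theta: decades-old
# logistic-curve statistics, not a mind-reading robot tutor.
print(p_correct(theta=0.5, a=1.2, b=0.0))  # ~0.65
```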

The year of the MOOC?

The rise of the Massive Open Online Course was one of the most spectacular edtech stories of the 2010s, with 2012 named by the New York Times as “the year of the MOOC.” We see the same phenomenon yet again: an old technology “powering the whole operation,” while the creation of a stereotyped antagonist compensates for the simplicity of the product. Daphne Koller, one of the proponents of MOOCs, explained how they were different from “old-fashioned” online learning:

What made these courses so different? After all, online course content has been available for a while. This [a MOOC] was a real course experience. It started on a given day and then students would watch videos on a weekly basis and do homework assignments. And these would be real homework assignments for a real grade. With a real deadline. (emphasis added, Koller, 2012)

In touting pre-MOOC online learning as the antagonist for MOOCs, she notes that these earlier courses had no start and end dates, homework, or grades, which is not true: much of pre-MOOC online learning was exceedingly formalized. Koller goes on to note that, freed from the constraints of the one-hour classroom lecture, online materials could be broken up into discrete chunks of less than 15 minutes, leading to customizable options, like extra enrichment material. And she continues:

This format allows us to break away from the one-size-fits-all model of education and allow students to follow a much more personalized curriculum. (Koller, 2012)

After touting MOOCs as a “real” course with “real deadlines,” she turns to characterize them as the exact opposite: something that “breaks away from the one-size-fits-all model of education.” If in the first excerpt the antagonist was online courses that had no structure, in the second, the antagonist changes: now, it is courses with structure, following the “one-size-fits-all model” (which, supposedly, has start and end dates, assignments, and deadlines). Against those, MOOCs are touted as “a much more personalized curriculum.” Paradoxically, the product is better at first because it has “real deadlines,” then—just a few seconds later—it is good because it is not “one-size-fits-all.”

These examples show how in the “folds of the discourse” of automated educational technologies, we detect a pattern. With most products being based on long-existing technologies (Watters, 2020; Reich, 2020), and with little conceptual or technological innovation, the discourse around them becomes a patchwork of comparisons with an imaginary, stereotyped antagonist which looks conspicuously like the very innovations that the new products are trying to replace.

AI: the new frontier of mechanized education

In most AI systems, there is the problematic assumption that the abundance of data will eventually provide insight and effective solutions to all kinds of problems—even without a theory of human cognition or learning. This applies to AI in education as well. First of all, this mindset incentivizes researchers to seek data sources that are easier to manage but frequently poor in information about learning, often with all kinds of biases. Clickstreams and data from online educational environments are abundant but impoverished portraits of complex learning. As Reich says,

“Our assessment technologies are particularly good at assessing the kinds of human performance that we no longer need humans to perform.” (Reich, 2020).

Second, in complex social environments such as schools, the number of dimensions of the problem space is so large, and the combinatorics are so daunting that the sheer availability of data, without an initial theory, will likely never be sufficient to render usable findings. Third, AI in education has a scaling problem; technosolutionists might imagine that computers will eventually get fast enough to solve all types of problems—even the most complex mysteries of human learning—but solving an algebra problem is orders of magnitude simpler than designing and executing a scientific experiment, or writing a sophisticated essay on a historical topic. Early successes with algebra tutors or with apps for arithmetic cannot be taken as evidence that complex topics can be addressed with the same type of student modeling (Berger, 2018).
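To make the second point concrete, here is a back-of-the-envelope illustration of that combinatorial explosion (our own arithmetic, with arbitrary but deliberately modest assumptions):

```python
# Back-of-the-envelope illustration: describe each student's state with just
# 30 binary features (e.g., "answered item 7 correctly"), and a 25-student
# classroom moment as the joint state of all students. The resulting state
# space dwarfs any conceivable clickstream dataset.
states_per_student = 2 ** 30                 # ~1.1e9 possible student states
classroom_states = states_per_student ** 25  # (2**30)**25 == 2**750
print(f"{states_per_student:.3e} states per student")
print(f"{float(classroom_states):.3e} states per classroom")
# -> 1.074e+09 states per student, ~5.9e+225 states per classroom
```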

Many of those assumptions and limitations were behind failed educational startups that promoted AI uses for automated instructionist technologies, such as the School of One, AltSchool, and Knewton (Reich, 2020). Still, those systems create an entirely new model of teaching and learning driven by the advancement of technology rather than by best practices from educational research. For those companies, many of the components of good classroom practice, such as social interaction, culturally relevant pedagogy, and pedagogical flexibility, are obstacles to the technology working, so they are systematically excluded from the systems they produce. One recent landmark study of MOOCs has put to rest the claims that large-scale, low-cost behavioral manipulations (using AI or not) could be reproducible or effective in large-scale online classes (Kizilcec et al., 2020).

Despite these limitations, the discourse of AI in education marches on. Squirrel AI, one of the largest AIEd companies globally, claims to have a system with a…

“simulated human teacher giving the student a personalized learning plan and one-on-one tutoring, with 5 to 10 times higher efficiency. There is a growing ability to customize the teaching, then students can learn the same amount of material much faster… [it] keeps the students more engaged in a lesson to learn more material in a smaller amount of time. […] Recognizing from facial expression when the student is happy or bored or frustrated.” (emphasis added, Squirrel AI, 2020)

The intertextuality in the discourse is, again, clear. The same themes from Skinner and Pressey emerge: acceleration, automaticity, efficiency, personalization, as we will discuss in the next section.

Discussion

This article described the various discursive threads of automated instructionist technologies and found common themes in the data. In what follows, we will comment on each theme and point out possible contributions to the discussion of how those technologies are impacting school systems.

Neutral, apolitical reform: antagonizing the stereotyped “lecture” while keeping everything the same

A clear discursive move seen in the data was to pinpoint a stereotyped, caricatured version of the “lecture” as the source of all evil in education—and to separate it from the rest of the system. The creation of this convenient antagonist was much easier than battling the entire educational system with its obsession with tests, outdated and overcrowded curricula, or oppressive, racist incentive systems. Fighting such a system would entail critiques of power, privilege, control, and inequality—it is much more convenient to limit the problem to an uncomplicated and less controversial target: lecturing. In doing so, these technologists selectively appropriate progressive educators’ critique of traditional schooling (Blikstein & Zuffo, 2003), minus the overarching analysis of the historical, economic, and political reasons that generated the “lecture.” The task at hand thus became to replace it while keeping the rest of the superstructure intact. No venture capitalist wants to finance the disruption of the social order or complicated discussions about educational and social justice, nor discussions about the social function of schooling or costly rewritings of national standards. The solution is to replace the 45-minute face-to-face lecture with the 10-minute online one. However, since this born-again lecture has to be different, technologists re-signify frivolous design elements as revolutions, adding “novelties” such as the ability to rewind videos or badges for getting all of the answers right on a quiz. Then, to elevate such trivialities to the level of genuine educational innovations, polyphony is recruited: famous personalities record testimonials, emotional stories of children in developing countries are recounted, TED talks are delivered, and a moving book is written.

Acceleration of learning and the lost Einsteins

Another recurrent topic in the data concerns the acceleration of learning. Skinner, Khan, Squirrel AI, Jose Ferreira’s Knewton, and Koller all mention “learning more in less time” as a core goal. Talking about speed avoids conversations about giving students more agency, making curricular topics more flexible, or attending to culturally meaningful standards. These new systems take for granted that the current school content is the appropriate one and that the focus should be on productivity and speed. “Learning at your own pace” is never associated with “choosing to learn fewer topics” or “learning about topics of personal interest”—for all of Khan’s concern with equity, the freedom given to the “slow student” is simply to catch up with the class by devoting more hours at home to studying the same topic. The “personalization” refers only to the video playback speed, or the ability to pause and rewind—not the agentic type of personalization of John Dewey or Paulo Freire that generates depth of learning and engagement.

The “acceleration” of learning is, in fact, a cyberspace version of school tracking, a practice in which students are sorted into different classrooms (and life trajectories) by performance. The rhetoric is revealing: Khan talks about slow students being allowed to “repeat videos at their own pace, at their own time” (Khan, 2012), and Skinner about the “bright student wasting time waiting for others to catch up” (Skinner, 1954). What is expected from the “slow ones” is mere curricular compliance, done “in their own time” (emphases added), not “our” time. In other words, the “slow” students should remediate their speed of learning on their own—it ceases to be a problem that the school system has to address. Indeed, many school systems worldwide—especially in low-resourced areas—are increasingly (and unfortunately) relying on automated electronic resources for remedial education.

The other side of this focus on tracking is revealed by how often Khan and others talk about how we are “losing Einsteins” by offering low-quality education around the world:

“Imagine if we could increase by a factor of ten the number of Albert Einsteins in the world” (Khan, 2015)

Notwithstanding the poor choice of Einstein as an example (he famously begrudged schooling), here the stance comes full circle: while we send the “slow” students home to catch up on the day’s lesson on their own time, we look forward to the “Einstein-level” students who might bring real value to society through their genius.

The categorization of “slow” and “fast” students shows, again, echoes of Pressey’s obsession with intelligence tests and social sorting, and an educational mindset in opposition to the idea of equity and inclusion. The “lost genius” discourse, the obsession with sorting students into speed-of-learning categories, and the faith in technology’s ability to accelerate learning are another set of well-aligned discourses that connect, through intertextuality, Pressey and Skinner with their modern instantiations.

Changing the nature of learning: Educational Soylent

The significant dissonance between the publicized goals and the actual solutions of automated instructionist “edtech” has been apparent since Pressey’s claim that his mechanical contraption for testing could generate “clear thinking, and high ideals.” Even Larry Berger, the CEO of Amplify, one of the largest companies in automated instruction, recognized in 2018 that:

I was a great believer in “personalized learning” […] Here’s the problem: The [learning] map doesn’t exist, the measurement is impossible, and we have, collectively, built only 5% of the library. […] [it] doesn’t exist for reading comprehension, or writing, or for the more complex areas of mathematical reasoning, or for any area of science or social studies. The existing measures are not high enough resolution to detect the thing that a kid should learn tomorrow. (Berger, 2018)

Berger’s recognition reinforces the hypothesis that edtech’s only salvation is not to improve education as we know it but to change its very nature, transforming it into something easily automated. However, education is anything but automatable—at least in its current form. As an analogy, take a family’s rituals for preparing and/or eating meals. They serve multiple educational, social, and psychological purposes. It would be challenging to create products to replace all of them—unless you re-signify eating and make it about eliminating the feeling of hunger. That is precisely what a well-known startup company—Soylent—did in the mid-2010s. It created an “all-in-one” shake containing 33% of a human’s daily nutrient needs and advertised that three bottles were all we needed to survive. To be successful, Soylent had to make customers believe that cooking and eating were a waste of time and that food was just about getting the essential nutrients into the body efficiently. Its then-CEO, Rob Rhinehart, famously said that supermarkets were “endless confusing aisles [with] the smell of rotting flesh,” and that kitchens were akin to torture chambers with “red hot heating elements and razor-sharp knives.” In other words, the company had to transform cooking and eating into an experience of inefficiency, inconvenience, and danger. The company eventually changed its mission and ousted the CEO, but not before raising 75 million dollars and occupying headlines for two years (McAlone, 2015).

Consequently, the entire project of edtech and AI in education can only succeed if it changes the nature and purpose of education itself, erasing the socialization of children in schools, non-curricular learning, unquantifiable knowledge, complex facilitation, play, inquiry, curiosity, public control, and other aspects of education that we cannot measure, package, and automate (for examples of how overmeasurement in medicine and business can decrease the quality of services, see Muller, 2018).

When we focus on straightforward content and multiple-choice tests—even disguised as “personalized”—automation becomes a much easier task. Thus, the project of changing the nature of education is the most dangerous of all since it depends on undervaluing and ultimately rendering invisible the work that teachers do in classrooms beyond content recitation—because that work is precisely what automated instructionist technologies cannot do.

Furthermore, there is another problem in making this work invisible: AI-based or automated educational systems, with their dependence on vast amounts of easy-to-discretize data, will be much more cost-effective for specific topics within STEM disciplines. Teaching for complex problem solving, exploring multidimensional phenomena, or learning outside of STEM will increasingly be outside of the realm of these technologies.

Public education could adapt to what low-cost automated instruction and AI can do, feeding its students glorified educational Soylent, while affluent schools continue to offer rich, complex, hard-to-automate learning experiences for their students. Therefore, automated teaching, due to its intrinsic technological limitations, could be a harmful tool that would not only communicate to children their place in society and fossilize it, but might also deny them access to symbolic capital that is already not taught as content topics in lectures (Anyon, 1980)—and much less so when lectures are automated.

Intertextuality in action: “moving at your own pace” and “mastery learning”

The several excerpts selected for this article show consistency among discourses spanning an entire century. Two patterns are particularly remarkable. The first, in Figure 1, shows intertextuality in action for the idea of the inefficiency and massification of existing educational systems and the vilification of lectures. It also shows an astonishing, almost literal permanence of the idea of “moving at one’s own pace,” from Skinner to Khan.

Figure 1. Intertextuality spanning one century: from Pressey’s technology to Khan and Summit/Facebook Schools. Note, in the text with the black background, the theme of “learning at your own pace” repeated over seven decades, and, in bold, the creation of the antagonist of edtech: traditional lectures.

The second set of discourses shows the permanence of two core ideas in behaviorist systems: mastery learning and immediate feedback. The former suggests that students should not progress until they have “mastered” a given topic, as measured by their answers in tests. The latter holds that by repeatedly being tested and receiving rapid feedback, students will learn more and in a “pleasurable” way—equating enjoyable learning with curricular compliance and claiming that answering simple questions is the only valid demonstration of learning.

Figure 2. Intertextuality in Skinner, Koller, Khan, and AI companies. Note, in the bold text, the Skinnerian idea of “not tests but hints” and immediate feedback and, in the text with the black background, the mastery learning concept: students are only allowed to move on when one topic is “mastered.”

Both examples illustrate how Pressey’s and Skinner’s old discourses “traveled in time” almost intact and are still at the foundation of the entire industry of automated instruction (for extensive documentation, see Watters, 2021). Despite their inconsistencies and lack of evidence-based support, these concepts not only remain effective at occupying headlines, raising money, and exciting policymakers but, in fact, show extraordinary resilience.

Conclusion: Devaluing Educators by Overvaluing Automated Teaching Technologies

Our analysis revealed a familiar pattern. Namely, entrepreneurs propose a new automated educational technology by establishing an opposition to a stereotyped version of traditional education (dialogism). Then, they build on intertextuality to generate discourse that makes use of old and new “texts” (e.g., “learning at your own pace”). Finally, through polyphony (social media, marketing, high-profile events, celebrity endorsements, branding), they disseminate and legitimize the inevitability of the seemingly benign product—which is then assimilated into everyday discourse (e.g., “personalized learning” is now incorporated into the lexicon of schools and policymakers).

In that process, the goal of proponents of automated instructionist technologies is not to expand the usage of their products through traditional means, i.e., pilot projects followed by evidence-based research, leading to increasingly larger implementations. The goal is to reclaim the role of innovators in education, and in that capacity, to displace other education stakeholders from setting the agenda, silence dissenting voices, and reshape education in the image of the technologies they produce.

The benefits are significant: first, you attain the privilege of not being challenged when your educational formulations go wrong. Edtech entrepreneurs that fail catastrophically are allowed to “pivot” in a different direction with no consequences. Take, for example, Coursera’s several “reinventions,” the failure of Udacity’s MOOCs, the ruin of the School of One, Summit Learning System, AltSchool, Edmodo, InBloom, and Knewton, and the underwhelming record of Khan’s Lab School. And despite being behind many of those failed initiatives, and despite protests from teachers’ union leaders, in May 2020 the Gates Foundation was announced as the state of New York’s leading partner in “reimagining” education during the pandemic (Blad, 2020).

The semiotic dimension here is crucial: when in control of the narrative, failures can be rebranded as humble “pivots,” while foundations and entrepreneurs keep their status as successful innovators, a pattern apparent in the interviews of departing (or bankrupt) edtech CEOs (see, for example, Wan, 2016). In Benveniste’s terms, the goal is not to inform about and describe the functionalities of products but to produce a positive reaction in the mind of their “customers.” Moreover, these customers, in the voice of a longtime industry analyst, are not even students: “Companies like Knewton and others went straight into black-box algorithms. Their customers were really venture capitalists, not academic programs with real teachers and students.” (Hill apud Ubell, 2019).

In bringing the tools of discourse analysis to bear on what is happening in the world of automated educational technologies, we gain new insight into how these leaders and companies managed to dominate the discourse of educational technologies for decades. We understand how philanthropic foundations and governments agree to invest in products that do not even exist, trust entrepreneurs with no educational experience, and commit national education systems to fragile projects without consistent plans.

Pressey, Skinner, Khan, and Ferreira all participate in a 100-year-old project to mold education in the image of their technologies. Some, such as Pressey and Skinner, were quite explicit about the theories they espoused and the educational future they had in mind. Nevertheless, the modern edtech version of behaviorism understood that the actual battle in education centers around a narrative of innovation, disruption, and revolution. This art has been perfected, and with each new technology (e.g., AI), as the discourse gets increasingly hyperbolic, it also further hides its theoretical inspirations. No edtech firm will use the word “behaviorism” on its website.

But the entrepreneurs and companies are doing their job, and they will not stop doing so. It would be a mistake to simply ascribe ill-intention to them and comfortably sit in our academic offices dispensing criticism. We have another job to do. It is up to us to build defenses in our educational systems that will protect them from the seductive discourses of automated instructionist technologies. Part of this work lies in ensuring that our educational systems take advantage of technology in other ways instead—such as engaging children in building inventions, programming computers, composing music, or creating art. Those uses of technology are directly opposed to automation—they need experienced teachers, time, and effort. They do not accelerate learning: they make the learning of new, unthinkable things possible. And they are the types of activities that genuinely embody “personalization,” enabling children to express their ideas and intellectual passions. This is the true personalization that proponents of automated instruction stole, and that must be taken back.

In a time of increasing social inequality and escalating tensions over multiculturalism and immigration, automated and AI-based educational systems, in their current incarnation, could become the ultimate tool for educational stratification and inequity. Such systems could become the tool of choice for low-income and underprivileged school districts, given constant budget pressures and the allure of a Silicon Valley-esque revolution. Students in those districts would not only be exposed to less face-to-face, innovative instruction but would also be far more vulnerable to bias and to having their data exploited or monetized by service providers. These populations would grow up with dehumanizing, impersonal educational technologies that would greatly diminish their prospects in the complex and interconnected world of the 21st century. But not to worry: the prophets of automated education promise that, in return, we will find ten times more Einsteins.

Authors

Paulo Blikstein is an Associate Professor at Teachers College and (by affiliation) at the Computer Science Department at Columbia University, USA. His research and design work focus on emancipatory technologies for education. (tltlab.org)

Izidoro Blikstein was a Professor in the Department of Romance Linguistics at the University of São Paulo, Brazil. He is the author of “Kaspar Hauser or the Fabrication of Reality,” and his research focuses on the semiotics of authoritarian regimes.

Acknowledgments

The authors would like to thank Yipu Zheng for her invaluable data collection work, as well as Alicja Żenczykowska, Fabio Campos, and Chelsea Villareal for their insightful comments and reviews of earlier drafts.

References

Anyon, J. (1980). Social class and the hidden curriculum of work. Journal of Education, 162(1), 67-92. https://doi.org/10.1177/002205748016200106

Bakhtin, M. (1984). Problems of Dostoevsky’s poetics. Minneapolis: University of Minnesota Press.

Bakhtin, M., Holquist, M., & Liapunov, V. (1993). Toward a philosophy of the act. Austin: University of Texas Press.

Benveniste, E. (1966). Problèmes de linguistique générale. Paris: Gallimard.

Blad, E. (2020, May). New York State teams with Gates Foundation to “reimagine education” amid pandemic. Education Week. https://www.edweek.org/education/new-york-state-teams-with-gates-foundation-to-reimagine-education-amid-pandemic/2020/05

Cole, R., Kemple, J. J., & Segeritz, M. D. (2012). Assessing the early impact of School of One: Evidence from three school-wide pilots. New York: Research Alliance for New York City Schools, New York University. https://steinhardt.nyu.edu/research-alliance/research/publications/assessing-early-impact-school-one

DiSalvo, C. (2014). Critical making as materializing the politics of design. The Information Society, 30(2), 96-105. https://doi.org/10.1080/01972243.2014.875770

Cuban, L. (1986). Teachers and machines: The classroom use of technology since 1920. Teachers College Press.

Cubberley, E. P. (1909). Changing conceptions of education. Houghton Mifflin.

Freire, P. (2014). Educação como prática da liberdade. Editora Paz e Terra.

Ginzburg, C. (1991). Chaves do mistério: Morelli, Freud e Sherlock Holmes. In U. Eco & T. A. Sebeok (Eds.), O signo de três (pp. 89-129). São Paulo: Perspectiva.

Khan, S. (2011). Let’s use video to reinvent education. Presentation at the TED Conference.

Kizilcec, R. F., Reich, J., Yeomans, M., Dann, C., Brunskill, E., Lopez, G., Turkay, S., Williams, J. J., & Tingley, D. (2020). Scaling up behavioral science interventions in online education. Proceedings of the National Academy of Sciences, 117(26), 14900-14905. https://doi.org/10.1073/pnas.1921417117

Koller, D. (2012, August 1). What we’re learning from online education. TED Talks. https://www.ted.com/talks/daphne_koller_what_we_re_learning_from_online_education

McAlone, N. (2015, October 15). 24 controversial quotes from Soylent’s CEO that will either terrify or inspire you about the future of food. Business Insider Australia. https://www.businessinsider.com.au/soylent-ceo-rob-rhinehart-quotes-about-the-future-of-food-2015-10

Moles, A. A. (1958). Théorie de l'information et perception esthétique. Paris: Flammarion.

Muller, J. Z. (2018). The tyranny of metrics. Princeton University Press.

Ready, D., Conn, K., Park, E., & Nitkin, D. (2019). Final impact results from the i3 implementation of Teach to One: Math. New York: Consortium for Policy Research in Education, Teachers College, Columbia University.

Reich, J. (2020). Two stances, three genres, and four intractable dilemmas for the future of learning at scale. In Proceedings of the Seventh ACM Conference on Learning @ Scale (pp. 3-13). New York, NY, USA: ACM. https://doi.org/10.1145/3386527.3405929

Selwyn, N. (2010). Looking beyond learning: Notes towards the critical study of educational technology. Journal of Computer Assisted Learning, 26(1), 65-73. https://doi.org/10.1111/j.1365-2729.2009.00338.x

Selwyn, N. (2013). Discourses of digital ‘disruption’ in education: A critical analysis. Fifth International Roundtable on Discourse Analysis, City University, Hong Kong. https://www.academia.edu/4147878/Discourses_of_digital_disruption_in_education_a_critical_analysis

Selwyn, N., & Facer, K. (2013). Recognizing the politics of ‘learning’ and technology. The politics of education and technology: Conflicts, controversies, and connections (1st ed.). New York: Palgrave Macmillan.

Skinner, B. F. (1954). Teaching machine and programmed learning [Video]. https://www.youtube.com/watch?v=jTH3ob1IRFo

Todorov, T. (1989). Nous et les autres. La réflexion française sur la diversité humaine. Paris: Éditions du Seuil.

Tyack, D. B., & Cuban, L. (1995). Tinkering toward utopia: A century of public school reform. Cambridge, MA: Harvard University Press.

Ubell, R. (2019, June 12). Explaining the shakeout in the adaptive learning market (opinion). Inside Higher Ed. https://www.insidehighered.com/digital-learning/views/2019/06/12/explaining-shakeout-adaptive-learning-market-opinion

Wan, T. (2016, December 21). Jose Ferreira steps down as Knewton CEO, eyes next education startup. EdSurge. https://www.edsurge.com/news/2016-12-21-jose-ferreira-steps-down-as-knewton-ceo-eyes-next-education-startup

Watters, A. (2015a, February 10). Education technology and Skinner’s box. http://hackeducation.com/2015/02/10/skinners-box

Watters, A. (2015b, February 19). The history of the future of education. http://hackeducation.com/2015/02/19/the-history-of-the-future-of-education

Watters, A. (2021). Teaching machines: The history of personalized learning. MIT Press.

Wiener, N. (1988). The human use of human beings: Cybernetics and society. Da Capo Press.

Williamson, B. (2018a). Silicon startup schools: technocracy, algorithmic imaginaries and venture philanthropy in corporate education reform. Critical Studies in Education, 59(2), 218-236. https://doi.org/10.1080/17508487.2016.1186710

Williamson, B. (2018b, March 29). Why education is embracing Facebook-style personality profiling for schoolchildren. The Conversation. https://theconversation.com/why-education-is-embracing-facebook-style-personality-profiling-for-schoolchildren-94125

Zuboff, S. (2015). Big other: Surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30(1), 75-89. https://doi.org/10.1057/jit.2015.5
