Can humans co-create with non-human systems? Increasingly, artists, technologists, and journalists are working with living systems, such as cells, mold, and planetary-scale systems, as well as with artificial intelligence (AI) and technological infrastructures. Is it co-creation?
Jun 03, 2019


Can humans co-create with non-human systems? Over the millennia, humans have situated themselves on a continuum of plant, animal, and spiritual realms, as with ancient gods depicted bearing human bodies with animal heads. While the narrative of western, euro-centric civilization has often been one of human dominance over nature, practices such as beekeeping and fruit-tree grafting suggest a level of human respect for, and creative interaction with, non-human agency.

In the more recent history of the development of artificial intelligence (AI), analogies to biological, evolutionary, and animal systems are pervasive. British cognitive scientist Margaret Boden has mapped the historic intersections of AI innovations with neurocognitive science, psychology, biology, and the study of social insects. She argues that while the notion was first floated by Alan Turing in his 1950 paper “Computing Machinery and Intelligence” as a deliberate provocation, AI has since influenced and been influenced by our understandings of the brain, the mind, and other biological and non-human systems.1 Waves of artists and computer scientists have produced media inspired by, and in collaboration with, biological processes as well as machine systems, often at the same time. Journalists and documentarians have now also begun interrogating the limits, the potential, and the risks of AI technologies.


Alan Turing's passport photo at age 16, in 1927. Turing was one of the earliest theoretical computer scientists, and he first floated the idea of artificial intelligence in his 1950 paper “Computing Machinery and Intelligence,” which introduced what became known as the Turing Test.

The prevailing public discourse on the subject dwells on dichotomies, such as whether humans will control these systems or AI will “take over.” Are they merely tools, or something more? Are we headed for Frankenstein, or Shangri-La? Are humans on the road to immortality, or extinction? Other questions to pose might include: How much of this is science fiction crossing over from speculation into dangerous deception? What is behind the curtain of the marketing of AI, and who stands to profit?

For this study, we interviewed over 30 artists, journalists, curators, and coders specifically to ask about their relationships with the systems they work with. Many trace their lineage to Donna Haraway’s A Cyborg Manifesto (1985), to techno-feminism, to post-humanism and studies of the Anthropocene. Others were inspired by the environmentalist Land Art movements of the 1960s and 1970s. But not all. Some are computer scientists co-writing baroque music with algorithms; others create sculptures with bees. Some are exploring the boundaries of “deep fake” and synthetic-media technologies using images of 1980s celebrities. Others are working to expose the underbelly of the technological surveillance systems we are entangled in. Does the notion of co-creation contribute to the meaning of pursuing this work, perhaps by modelling alternative and justice-based approaches? And how might artists help expose the gaps in the dominant narratives about these systems?

The first section of this report investigates the philosophical and historical foundations of the very idea of co-creating with non-human systems, both living and machine. We then spotlight artists who work with living systems. Next, we turn to artists working with AI systems, sometimes entangled with, or inspired by, living systems. We also explore how artists are examining surveillance and bias in the context of a growing body of critical theory about the technosphere. We then examine AI and journalism, and draw speculative conclusions about the growing body of art, media, and theory around co-creation between human and non-human systems.


The possibility of co-creation between humans and non-human systems, whether other living beings or the non-living entities that we today call artificially intelligent, provokes difficult questions about the conditions for co-creation, as well as the nature of creativity and consciousness. One such question is whether co-creation requires equivalent agency. If nothing else, the very question helps us interrogate our human condition, whether or not one accepts the possibility of other agencies.

One might argue that the long co-evolution of humans and other living organisms, whether in the form of hunting dogs and falcons, or intestinal microflora, demonstrates a robust history of interdependencies. But do those interdependencies rise to the level of co-creation? Drawing on the creativity of dogs and falcons to help with hunting seems a different order of cooperation than the essential work of microflora in the human gut. But until relatively recently, the issues of animal consciousness—if it exists, if it has gradations, and its scope—have remained largely outside the Western philosophical debate.

René Descartes, a foundational figure of 17th-century rationalism, compared animals to automata. He asserted that only humans were capable of rational thought and operated as conscious agents, while animals simply followed the instructions hardwired into their organs. Only at the end of the 19th century did a more nuanced view slowly begin to take root, as Charles Darwin’s notion of mental continuity across species found adherents. Today, although there is no consensus, according to the Stanford Encyclopedia of Philosophy: “Many scientists and philosophers believe that the groundwork has been laid for addressing at least some of the questions about animal consciousness in a philosophically sophisticated yet empirically tractable way.”2

If at least some thinkers entertain the idea of consciousness in non-human living beings, what about machines? If humans are to co-create with machines, and not use them simply as tools (the mechanical equivalent of draft animals), then the same questions of consciousness and agency may apply. Here, too, the jury is out, but the language used to describe some computational systems has framed the issue in the affirmative. The terms “electronic brain,” a “mathematical Frankenstein,” and “wizard” were used to describe the 1946 debut of ENIAC (the Electronic Numerical Integrator and Computer).3 John McCarthy coined “artificial intelligence” in 1955. And in 1958 it was claimed that the “electronic brain” of the “Perceptron” would one day “be able to walk, talk, see, write, reproduce itself and be conscious of its own existence.”4 These visions all harkened back to Ada Lovelace’s 1840s proposition that machines “might compose elaborate and scientific pieces of music of any degree of complexity or extent.”5 The case for machine consciousness, intelligence, and creativity was asserted rather than argued.


Ada Lovelace, English mathematician and writer, in a daguerreotype from 1843 or 1850. Lovelace is considered the first to express the potential of the "computing machine" and the world's first computer programmer.

The implications of these assertions have been mixed. Unrealistic expectations and fears alike have triggered a cycle of decade-long AI winters, in which research funding dropped precipitously and programs closed, only to be revived by the next over-hyped innovation. This boom-or-bust pattern remains with us: in the fall of 2018, MIT announced plans for a new college focused on AI, backed by $1 billion in funding.6

On the other hand, disciplines such as psychology (Frank Rosenblatt’s perceptron algorithm), neurology (biological self-regulation, or the self-organizing systems of cybernetics) and research areas such as the bio-tracking of insect behavior (distributed intelligence) all found relevance in AI research. Descartes’ connection between animals and machines, even if intended dismissively, has come full circle. This is an especially salient point at a moment when our dependence on statistically based deep learning in current AI systems seems to be reaching its limits. The reasoning and knowledge representation side of AI, inspired by biological systems, may well lead to the next step in machine intelligence.

Musician David Cope rightly asks what implications different AI systems have for agency, and with it, the possibility of creative partnership. Is machine learning akin to training a pet, or might AI systems based on reasoning and knowledge representation yield more robust forms of creative interaction? How we might come to know the answers provokes further questions. Co-creative engagements might offer ways to test and interrogate the various configurations of intelligence and agency that AI systems, animals, plants, and even bacteria might display. The parameters of co-creation — of ethically reframing who creates, how, and why — have newfound relevance to the extent that we grant, or even entertain, intelligence in some of these non-human entities.


Jason Lewis’ Skins 1.0 Collective, “Otsì:! Rise of the Kanien’kehá:ka Legends,” still from video game produced in the Skins 1.0 Video Game Workshop at Kahnawake Survival School, 2009. Image courtesy Jason Lewis.

Donna Haraway's legendary A Cyborg Manifesto proposes that cyborgs might transcend gender, disciplines, and species. Post-humanists more broadly decenter humans within a wider spectrum of species and ecological systems. Indigenous scholar and new-media artist Jason Lewis and his collaborators write in an MIT-published essay that Indigenous epistemologies are "much better at respectfully accommodating the ‘non-human.’”7 They cite Blackfoot philosopher Leroy Little Bear to suggest that AI may be considered on this spectrum, too:

[T]he human brain is a station on the radio dial; parked in one spot, it is deaf to all the other stations … the animals, rocks, trees, simultaneously broadcasting across the whole spectrum of sentience.8

As humans manufacture more machines with increasing levels of sentient-like behaviors, we must consider how such entities fit within the kin-network, and in doing so, address the stubborn Enlightenment conceit at the heart of Joi Ito’s Resisting Reduction manifesto, that we should prioritize human flourishing. Co-creation offers a hands-on heuristic to explore the expressive capacities and possible forms of agency in systems that have already been marked as candidates for some form of consciousness. Only by probing those possibilities will we be able to move beyond blanket assertions or denials of agency, and interrogate ourselves, critically, in the context of possibly intelligent systems.


The artists we interviewed who work with living systems may not always use the term co-creation to describe their work, but for most, it is a workable term. These artists acknowledge the fragility of the living systems—often much of their artistic practice involves the struggle to keep the systems alive—and they depend upon, and respond to, the surprises that these systems might afford. For most artists, these living systems are not mere tools, media, or instruments, but complex systems that often require humility and patience, and that often bring unexpected results. Like other types of co-creation, this work is heavily process driven.

Working with bees, the Canadian artist Agnetha Dyck builds artistic creations from found objects, such as broken porcelain dolls. "The bees have the skills of an architect," she told CBC Artspots in 2006. "It's their ability to construct up, down, [and] in three dimensions that interested me. They create the most beautiful environment that I've ever seen. I mean, it's just absolutely gorgeous. You have to be an artist to be able to do that. We're so meshed in what we do, the bees and I. They work by instinct and I work intuitively. And I'm trying to figure out whether there's a difference."

For Dyck, the artistic work with the bees is also a rallying call for environmental awareness:

I'm really concerned for them — 95 per cent of wild honey bees have disappeared. When you're so close to a creature that's so important to the world and you know how quickly they could disappear, and what that would do to humanity, that's a relationship that's pretty precious.

Collective intelligence runs through the oeuvre of Agnieszka Kurant, an artist based in New York and in residence at MIT. She has co-created with microbes, slime mold, insects, machines, and people, including thousands of workers on Amazon Mechanical Turk, Amazon's crowdsourcing marketplace through which businesses hire freelancers for small piecework tasks in place of computers. She is fascinated by emerging scientific discoveries about the behavior of cellular structures. She told us:

More recent research indicates that slime mold also has a capability of learning, which is totally incredible. How can something devoid of a nervous system and brain learn? But it does. For example, it’s learning if it's in an adverse environment — when there are some negative factors, it will not repeat the same action, because it remembers. It has some form of memory, collective memory.


Agnieszka Kurant’s A.A.I 2, 4 & 5 (2014). Termite mounds made from colored sand, gold, glitter, and crystals.

Kurant extends these ideas of dispersed cognition, a form of intelligence studied by entomologists and scholars of collective memory, to humans and beyond. She stated in our interview:

There are new questions of how different kinds of traumas, both collective and individual, influence the microbiome of a person, the bacterial composition of a single human. And … we already know for certain that the microbiome composition of a human is responsible to a high degree for our mental processes, for our mental state. There's a very strong correlation between the microbiome and depression, for example, or other mental instability that was for centuries assigned to other factors.

Not all cell-wrangling artists perceive the relationship as wholly co-creative. Bio-artist Gina Czarnecki would not use the term to describe her interactions with her two daughters’ skin cells that she uses on 3D glass-mask replicas of their teenage faces. Her primary co-creative relationship is with the scientists she works with, she said, and she equates the cells themselves to goldfish that require tending. She does describe the relationship with technologies as a feedback loop: “I think they are mutually entwined, because obviously the technology gives us ideas, and ideas develop the technology.” Her core artistic and political concerns lie with the bio-ethics of technologies that have medical, cosmetic, and life-extending potential.


Gina Czarnecki is a multimedia artist who regularly works with unconventional materials, including human biological materials like fat cells and donated baby teeth. In her work “Heirloom”, she grew living portraits of her daughters through cultivating and nurturing a single sample of their cells taken in 2014. Installation view of “Heirloom”, Gina Czarnecki and John Hunt (2016).

Canadian bio-artist WhiteFeather Hunter, by contrast, uses the co-creative model to describe how she works with micro-organisms to create (sometimes colorful) living biotextiles. She prefers the term co-creation over collaboration, which she feels implies informed consent. “I would call it co-creation because that acknowledges that [the micro-organisms] do have some agency in the process, but it's not taken for granted that they're there of their own free will.” The artist works with pigment-producing bacteria, such as Serratia marcescens, which commonly appears as pink slime in showers and hospitals, where it feeds on soap scum. She also grows mammalian cells into sculptural tissue on handwoven textile forms. Hunter spends much of her time nurturing, feeding, and caring for bacteria and mammalian cells, along with the elaborate environments she creates to grow and support them. She commented in our interview:

I like to insist upon using empathy as a laboratory technique. It's the same way that people who talk to their plants get really beautiful, luscious plants. So I talk to my microorganisms. I anthropomorphize them.

Like Dyck with her bees, Hunter sees these creations as microcosms. “We're so steeped in philosophies of humans being at the top of the system that we are alienated from all these other systems,” she said. “That's what is leading to fear and huge, huge knowledge gaps. I think it's what's caused all of the trouble that we're in on a planetary scale right now as well.”

Other living-systems artists work at a planetary scale. Marina Zurkow, artist and professor at the Tisch School of the Arts at New York University, described the systems she works with as large-scale natural, atmospheric, and geological. She focuses on “wicked problems” like invasive species, Superfund sites, and petroleum dependence. She has used life science, biomaterials, animation, social art, and software technologies to foster intimate connections between people and non-human agents.


Marina Zurkow, “Mesocosm” (Wink Texas), 2012. Software driven animation. Production image.

In our interview, Zurkow reacted to “co-creation” as a word that is overused in grant proposals and can be tyrannical: “it puts undue pressure on artists to create situations that may not make great art.” She stressed the need for rigor and tight frameworks in the making of strong art. Yet over the course of our discussion, she outlined countless features of her process that correlate with co-creative models: her time frames are very long; she runs multiple projects at the same time, iterating in a process-focused approach; within her themed collections she has many collaborators, including horticulturalists, playwrights, chefs, and scientists; and she produces multiple outputs. In one project strand, Zurkow has been exploring the ocean, which has in turn led her to the logistics of infrastructure and commodity flow, such as the shipping routes and internet cables that cross the seas. More recently, Zurkow conducted a prototype edible-art installation/happening in California with haute-cuisine chefs, aiming to incorporate an invasive species into their menus, stating in our interview:

We've been looking at regional food systems and … the poster child for the project is the jellyfish. That's not regional, particularly, but it's also problematic there. Jellyfish signal anthropogenic changes to oceans.

“These are really things that are processes of discovery and conversation and research and experimentation with materials and through all of that,” she told us. Her work with fungi, for example, has proven to be an “interesting kind of analogy for bigger systems problems around predictability, reliability, stability, expectability. I'm really interested in these other spaces of creating and, I guess, co-creating.”

Trans-species ethnography is an emerging field that elevates anthropomorphizing to an entire methodology. Helen Pritchard, speculative artist and scholar, has used ethnographic methods to trace animal behavior and other kinds of entanglement. In her project, Animal Hacker, she has examined how sheep manage to escape the gaze of surveillance cameras at a river crossing. She has written algorithms that generate poetry from sensing the behavior of algae, and she has studied the life of a Hong Kong internet-star cat called Brother Cream, who attracts visits from hundreds of tourists each day.

“There's a lot of speculation,” Pritchard reported to us:

that [if humans disappear], the cat will continue as a successful species, and that cats have always exploited human infrastructures in order to become a successful resident of the city. […] If we change the way we think about cats online, rather than just a kind of cute aesthetic … we can perhaps start to think about what other potentials or possibilities there might be for non-human animals in the network.

We believe that artists working with living systems can easily relate to the framework of co-creation. This is, perhaps, because they are concerned with the Anthropocene, and because they describe their work as process-driven, without a predetermined script conceived by a single author.


For this field study, we asked artists, journalists, and documentarians who work with AI to describe their relationships with artificial, non-human systems. Their answers often reflected a broad spectrum of co-creation, though most also wanted to widen the social conversation and complicate issues of agency and non-agency, technology and power, for the sake of human and non-human futures alike. Below is a synthesis of key themes.

Visual Art, Music, Text, and Robots

In October 2018, Christie’s became the world’s first auction house to sell an AI-generated artwork, Edmond de Belamy, developed by the French art collective Obvious. The work is a portrait produced by a generative adversarial network, or GAN, in which two AI systems compete: one generates images while the other judges how closely they resemble the training data.9 As described in Futurism, their GAN trained on 15,000 portraits painted between the 14th and 20th centuries as a prelude to creating a new work mimicking the characteristics of the human-made portraits. The portrait sold for $432,500, more than 45 times its pre-sale estimate.

Artists, musicians, and provocateurs have been working with systems broadly described as artificial intelligence for over 50 years. Many makers today are self-taught coders, while earlier waves of artists tended to be university-trained computer scientists who wrote bespoke code drawing on many types of AI systems. Through the open-source movement, makers can now access new, easy-to-use, off-the-shelf AI packages that require only basic coding.

“Will AI eventually replace the artist?” is at the core of a question Ernest Edmonds asked in a paper published in 1973.10 He concluded: no. Over his long career as an artist and professor of computational art, he has revisited the question many times, always arriving at the same answer. We asked him whether his relationship with these systems might be better described as co-creative, and he rejected this model, too. He stated:

In terms of the computer, I think there's a philosophical objection that I had to the notion of the computer taking over because, of course, that question was posed in relation to art. Art is a human activity for human purpose, for human consumption, consideration, and enrichment. And the making of the art is as much a human process as the consumption of it. And so, I would say that if machines could make art for machines, that would be fine. But it would not necessarily have any relevance whatsoever to human beings.

But Edmonds did allow that others in his field might fit the co-creation model, including the late Harold Cohen, a pioneer of computer-generated art. “In the last few years of his life, when he took the ideas as far as he could,” Edmonds said:

[Cohen] talked about the computer as his partner. He let the computer program that he had written — critical point to always remember — generate an artwork, which he then modified. So he used it as almost the work, but not quite the work. […] I think he's a very interesting example of co-creation.

Algorithmically derived art is a long-standing genre. Visual artists have designed art-generating algorithms since the early 1960s, a practice that Roman Verostko and the Algorists carried forward and that by the 1980s extended to fractal art. More recently, the Google Deep Dream project reignited public curiosity about AI-generated art with its psychedelic reproductions of patterns within patterns.

Like Edmonds, Joseph Ayerle, an artist and photographer of a newer generation, rejects co-creation as a description of his work with AI. He is the first artist to have used “deep fake” technology in a short film, using off-the-shelf AI to map a face from 2D photographs onto 3D-modelled representations of other people’s bodies. “The software is so incredibly buggy,” he told us via email:

… and I had so many painful setbacks, that I don’t have a good relationship with this tool. The difference to other tools is that I was at the beginning a little suspicious. I worked on a separate PC, not connected with the internet. But bottom line: It is just a tool.

He wrote to us in our email interview that AI may “create, but it is not creative.”

Many argue that AI experiments in visual art and, to a lesser degree, music have produced more satisfying outcomes than AI texts in the genres of poetry and narrative, which rely on semantic systems. On the other hand, AI-generated poetry need not have a semantic level; in the hands of scholar and poet Nick Montfort at MIT, for example, AI texts are word art that is not always conventionally meaningful, part of a long pre-digital poetic tradition. But Simon Colton, a British AI artist and scholar, told us that despite Barthes’ “death of the author,” most audiences rely for grounding on at least putative backstories of authorship; without that, there is what he calls a “humanity gap.”

Kyle McDonald, an AI artist based in Los Angeles, agreed with this idea:

I think the big question is less about whether human artists and musicians are obsolete in the face of AI, and more about what work we will accept as art, or as music. Maybe your favorite singer-songwriter can’t be replaced, because you need to know there is a human behind those chords and lyrics for it to ‘work.’ But when you’re dancing to a club hit you don’t need a human behind it — you just need to know that everyone else is dancing too.

Samim Winiger, a Swiss-Iranian designer, artist, and entrepreneur based in Berlin, works with computer-generated text, which he describes as computational comedy. In our interview, he called comedy:

a defining human characteristic. [Comedy is] very complex, much more complex than, let's say, painting or visual art, and so forth. […] There's temporality to it, you need to have historical embedding, sometimes, to understand a joke, you need to have cultural understanding, and so forth. It's a beautiful thing.

Winiger won’t publish his artistic work if it doesn’t make him laugh, adding:

I've [computer] generated ‘TED Talks,’ because that is inherently already kind of a weird comedic ... at least for me, it’s a very comedic format. You know, [the lecturers] are like machines talking, even though they're humans. I've generated lots of ‘TED Talks’ … like millions of them. They sound reasonable when you read them, but they're completely nonsensical, Dadaistic, machine-generated things. Internally coherent and yet completely incoherent. It's a beautiful thing.

In the field of music, David Cope was one of the first to explore the potential of AI, which he turned to after suffering writer’s block in 1980. He built Experiments in Musical Intelligence (which he nicknamed Emmy), an AI that mimicked and replicated classical music styles, mostly of dead composers. In our interview, he told us that he would not describe his relationship with the now-retired Emmy as particularly co-creative, but he does describe his work with another AI, Emmy’s “daughter” Emily Howell, that way. He characterizes their interaction as conversation: he first input 1,000 pieces created by “her mother” Emmy, and now interacts with Emily, writing music line by line. He told us:

That is, I work with her and she suggests things … I tell her what kind of things I want to compose or write or do or whatever, and she has a database just like EMI does, and the material in that, for her, musically, is all of her mother's music.


Fei Liu’s project “Build the Love You Deserve” is a performance series centered on the semi-fictional relationship between the artist and her do-it-yourself robotic boyfriend, Gabriel2052 (2018).

Anthropomorphizing AI systems is a playful co-creative model adopted by other artists as well, especially those interacting with robots. New York-based artist Fei Liu built herself a robot boyfriend she calls Gabriel and has programmed him with a large collection of text messages from an ex-boyfriend. She performs with the robot, which is driven by a Markov chain, a statistical model that generates new sentences from existing text. She says that for now Gabriel is dominated by the ex’s features, but that will eventually expand, as she told us in our interview:

In terms of his internal systems, it's like he has preferences. He has likes, and then dislikes. And then [his mood] is affected by, this is something I'm playing around with … a variable in the code, which is called the self-love variable. […] The lower his sense of self-love, the more prone he is to reacting in this sort of needy way. And the more self-love he has, the more he's fine on his own basically.
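The Markov-chain technique Liu mentions can be sketched in a few lines of Python: the model records which words follow each word in a source text, then generates new sentences by repeatedly sampling a plausible successor. This is a generic illustration under our own assumptions (the tiny corpus and function names are invented), not Liu's actual code for Gabriel:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=8, seed=None):
    """Walk the chain from a start word, sampling a successor at each step."""
    rng = random.Random(seed)
    word = start
    output = [word]
    for _ in range(length - 1):
        successors = chain.get(word)
        if not successors:  # dead end: this word was never followed by anything
            break
        word = rng.choice(successors)
        output.append(word)
    return " ".join(output)

# A toy "text-message" corpus; a real chain would train on thousands of messages.
corpus = "i miss you . i miss the way you laugh . you know i care"
chain = build_chain(corpus)
print(generate(chain, "i", seed=1))
```

Because successors are sampled in proportion to how often they were observed, the output echoes the source's phrasing while recombining it into new, sometimes uncanny sentences.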


An unlikely friendship has developed for Stephanie Dinkins, a New York State-based artist who has engaged in a long series of conversations with Bina48, a humanoid robot embodied in the form of a plastic bust with the features of an African-American woman. Dinkins is exploring this relationship to highlight the constructs and blind spots of race in AI development. She told us:

I have to give in to [Bina48] … and decide that we can collaborate, and we have the possibility to be friends and make something between us like any two people might. And so in that sense, we're definitely setting up a space for collaboration and bringing something into being by being in the same space and time and trying to communicate.

Dinkins says that co-creation is what she advocates for through her art and community workshops, to ensure that vulnerable communities understand, “How do you use the technologies to your advantage versus just being subject to the technologies?”

<p>“Marrow” is an interactive installation and web project by Shirin Anlen that explores normative notions of family, using AI. Photographs by Annegien van Doorn © 2018 NFB</p>

And if AI has mental capacity, can it also present with mental illness? Shirin Anlen, a narrative technologist and fellow at the MIT Open Documentary Lab, is exploring the possibility in a current project, Marrow, a dynamic story series whose generative characters are given set parameters rather than specific traits and storylines. Anlen states: “Even if you give rules, you give rules to your son, but he evolves and creates new rules. […] I think this project is trying to look at that idea of how we deal with our kids.” She writes on Immerse:

One of the outcomes of mental illness is that reality is being experienced through specific lenses, not necessarily related to the input received. Isn’t that exactly what we are receiving from our machines? Could this be a new way for us to understand and reflect on human distortions and mental states?

Meanwhile, artist Sougwen Chung collaborates, draws, and performs with a swarm of robots that respond, for instance, to livestreamed data from urban surveillance feeds.

<p>Sougwen Chung's Drawing Operations Unit: Generation (D.O.U.G.) series focuses on the co-creation of studio art with machines. Her work with machines involves live performance and studio arts.</p>

(For more, see Spotlight on Sougwen Chung and DOUG in this part.)

Some artists use their work to critique our relationship to non-human systems. Artists and theorists Jennifer Gradecki and Derek Curry created the Crowd Sourced Intelligence Agency (CSIA), an interactive artwork in which users play the role of intelligence analysts and sift through reams of public data. The project exposes the problematic assumptions and oversights inherent in dataveillance, and the dangers of using non-human systems for intelligence work when agents don’t fully understand the systems they work with. When intelligence agents are “offloading these processes to a computer,” Gradecki said in our interview:

it is a Black box for them. So they'll look at, like an IBM white paper that says … there's a smart algorithm that's going to process the data, and there's never any further explanation of how that's working. It's kinda behind the scenes.

<p>Crowd Sourced Intelligence Agency (CSIA) shows the dangers of using non-human systems for intelligence work when the agents don’t fully understand the systems they work with.</p>

A growing group of artists is exploring the relationship between non-human systems and the human body. Heidi Boisvert, an artist, scholar, and pioneer in interactive immersive performance, uses biometric data from performers, and sometimes participants, to generate musical and visual experiences in real time; for instance, from performers’ muscle contractions, heart rates, and blood flow. Boisvert commented to us via email:

In addition, the dancers learned a database of 26 phrases, and a set of trajectories, which they executed (or hacked) based on responsive choreography; they were looking for physical and environmental cues and selected phrases based on pre-established game rules. The piece was highly cognitive for the performers.

AI, Journalism, and Customized Content

Text-based AI-generated work is currently fairly rudimentary and easy for humans to identify, although a few firsts have arrived on the art scene in poetry, prose, screenwriting, and theatre.11 Recently, a non-profit research company backed by Elon Musk and others said it would not release the code for one of its new AI text-generation systems, claiming the system is good enough to be dangerous in the wrong hands. Natural-language generation has also found success in basic email and business-letter writing, and AI systems that compile and sort simple journalistic data have fared well: newsrooms including The Washington Post and The Weather Channel are increasingly turning to text-producing bots for real-time data reporting on sports, weather, and stock markets. There is more on the horizon, as newsrooms begin to use data to automate workflow and augment fact-checking, as described by Techemergence. The New York Times recently launched an experimental comment-moderation project with Google’s Jigsaw Lab, which detects semantic patterns in the flood of reader comments that articles receive online.
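Real-time data reporting of this kind is typically template-driven rather than freely generative: structured feed data is slotted into editor-written sentence patterns. Here is a minimal sketch, with invented field names and phrasing; no actual newsroom system is being quoted.

```python
def weather_brief(data):
    """Fill a fixed, editor-written template from structured feed data,
    the way simple newsroom bots turn data into publishable sentences."""
    trend = "warmer" if data["high_f"] > data["yesterday_high_f"] else "cooler"
    return (
        f"{data['city']} will see a high of {data['high_f']}°F "
        f"and a low of {data['low_f']}°F on {data['day']}, "
        f"{trend} than yesterday's high of {data['yesterday_high_f']}°F."
    )

# Toy stand-in for a structured weather feed:
feed = {
    "city": "Boston", "day": "Tuesday",
    "high_f": 48, "low_f": 35, "yesterday_high_f": 41,
}
print(weather_brief(feed))
```

The journalistic judgment lives in the templates and the choice of comparisons; the bot only fills in the blanks, which is why such systems excel at sports, weather, and markets but go no further.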

“I think that human scale for journalism is hard at the moment, particularly at the local level,” said Emily Bell of Columbia University’s Tow Center for Digital Journalism, in our interview. She observed:

In the UK at the moment there's an experiment … with Google, on automating transcripts that come out of courthouses … [which] actually is enormously advantageous. It will help us create public records where, otherwise, we wouldn't have enough reporters to have them. But it can only go so far. There are real limitations on it.

The video game industry is focused on procedural content generation: AI that builds environments and even characters customized to users’ experiences. The most advanced games employ early versions of these systems, such as AI tools that generate conversational text (in response to comments made by the characters) as users move through storyworlds; characters are built with parameters rather than fixed traits, and with behavioral patterns rather than specific storylines. This is a form of dynamic storytelling, and according to AI artist Samim Winiger, journalism could be approaching it, in the form of algorithmically customized servings of content, as is already common on social-media platforms like Twitter and Facebook. This could represent a huge shift: from the creation of artifacts to the creation of systems that themselves create millions, even an unlimited number, of artifacts. Winiger stated in our interview:

There's a role where generative creation and machine learning has brought down the costs of just interventions dramatically, and to a certain extent democratized massively parallel, hyper-personalized storytelling … [But journalism hasn’t] fully internalized that. Somebody like Reuters or CNN will still have a canonical single story, even though we could in theory adopt that story dynamically to be of higher relevance to an individual's context. Let's say context might be [defined by] educational background. That is, a text [could] actually assess, are you able to understand the text, or do you need more elaboration on point XYZ? And then that would be generated, so to speak, on the fly.
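The “parameters rather than traits” approach described above can be sketched concretely: a character carries numeric parameters, and behavior emerges from rules applied to those parameters rather than from a script. The names and thresholds here are illustrative assumptions, not drawn from any particular game engine.

```python
from dataclasses import dataclass
import random

@dataclass
class Character:
    """A generative character defined by parameters, not a storyline."""
    curiosity: float  # 0..1, how readily it wanders
    caution: float    # 0..1, how readily it hides

    def act(self, saw_player, rng):
        # Behavioral rules over parameters stand in for a fixed script:
        if saw_player and rng.random() < self.caution:
            return "hide"
        if rng.random() < self.curiosity:
            return "explore"
        return "idle"

rng = random.Random(7)
npc = Character(curiosity=0.8, caution=0.3)
# Each playthrough yields a different, parameter-shaped behavior stream:
actions = [npc.act(saw_player=(t % 4 == 0), rng=rng) for t in range(8)]
print(actions)
```

Two characters with different parameter values produce recognizably different behavior from the same rules, which is the sense in which such systems generate “millions of artifacts” from one design.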

Tow’s director Emily Bell insists in our interview that the role of the journalist, in the context of AI, is in the decision-making:

You have to think about how the journalist fits into a world where you have ... a large amount of human activity which can be surveyed. [...] Surveillance technology is going to be the key area, I think, of both development [and] also ethical dilemmas.

Dependency, Surveillance, Bias, and the Technosphere

The term technosphere, introduced by Peter Haff in 2014, helps frame critical conversations about the relationships between systems.12 It describes the functioning of vast systems of transport, communications, manufacturing, bureaucracy, and other artificial processes, which Haff suggests operate in parallel with Earth systems such as the biosphere and atmosphere. Much of the most provocative work in AI seeks to reveal and subvert these often-invisible human and non-human entanglements, exposing surveillance and the dangers of bias, especially in the datasets that feed these systems.

In their multi-artist installation The Glass Room, the Berlin-based Tactical Technology Collective seeks to reveal the limitations of algorithms. According to collective member Cade (who goes by first name only), the most popular section of the installation was by Adam Harvey, who exposed the reach and surprises of facial-recognition systems. Visitors were invited to have their photo taken in a photobooth, which then printed the original image alongside a match that an algorithm retrieved from the online photo-storage service Flickr. Visitors were often surprised by the results: doppelganger images, or photographs of themselves they had never seen before. The project illuminated facial-recognition systems’ reach, and their capacity for error.
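The photobooth’s retrieval step can be understood as nearest-neighbor search over face embeddings: a model turns each face into a numeric vector, and the gallery image with the closest vector (here, by cosine similarity) is returned, errors and doppelgangers included. This is a generic sketch of the technique, not Harvey’s actual pipeline; the toy vectors stand in for real model output.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def best_match(query, gallery):
    """Return the gallery id whose embedding is most similar to the query."""
    return max(gallery, key=lambda pid: cosine(query, gallery[pid]))

# Toy embeddings standing in for the output of a face-recognition model;
# the Flickr-style ids are invented for illustration:
gallery = {
    "flickr_001": [0.9, 0.1, 0.2],
    "flickr_002": [0.1, 0.8, 0.3],
    "flickr_003": [0.4, 0.4, 0.4],
}
print(best_match([0.85, 0.15, 0.25], gallery))
```

The system always returns *some* nearest neighbor, however distant, which is exactly why visitors received confident-looking matches that were sometimes strangers.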

Do-it-yourself DNA kits, too, can err, or, disturbingly, prove randomly accurate. The artist Heather Dewey-Hagborg collects scraps of human DNA (hair, chewing gum) from the sidewalks of New York and tests them in DIY biolabs, examining the social, racial, gender, and random biases of these commercial technologies.

Trevor Paglen, an artist based in Berlin, seeks to visually document places not found on public maps: clandestine air bases, offshore prisons, and systems of data collection and surveillance, including the cables, satellites, and artificial intelligences of the digital world, as described in The Guardian. He is launching his own satellite into space to further the project’s reach. By using the systems of surveillance to surveil the surveillors, Paglen attempts to expose the technosphere.

In her own playful, sometimes haunting performative surveillance of domestic life, Brooklyn artist Lauren McCarthy mimics consumer AI systems to expose their shortfalls. In her project LAUREN, she installs surveillance systems in a participant’s home and takes on the role of a smart digital personal assistant (like Siri, Alexa, Cortana, or Google Assistant). “Unlike some artists that are using [AI] to generate images or generate art pieces collaboratively,” McCarthy said in our interview, “the way I think of it [is more] that these systems are almost like my muse or my foil, almost like an antagonistic collaborator.”

<p>Lauren McCarthy created LAUREN (2017) as “a meditation on the smart home, the tensions between intimacy vs privacy, convenience vs agency they present, and the role of human labor in the future of automation.”</p>

At a more intense pitch of systems subversion, the Syrian Archive project harnesses machine learning to scan thousands of frames of video taken in Syria, detecting human-rights violations and the types of weapons in use in the current conflict. Similarly, Witness, in New York, has used machine learning to compile videos of hate speech against LGBTQ communities, revealing patterns and aggregating evidence of the phenomenon.

Confronting other human-rights abuses, highly cross-disciplinary teams such as Forensic Architecture in London combine AI, legal systems, architecture, 3D mapping, and video sources to build complex pictures of war-crime scenes and produce new forms of evidence. Elsewhere, the New York-based group Situ reconstructed the events and landscape of the 2014 massacre in Ukraine.

For her new project in development, Inverse Surveillance, documentarian Assia Boundaoui, a fellow at the Co-Creation Studio at MIT Open Documentary Lab, is building an inverse-surveillance system. Using machine learning, she hopes the project will interpret 30,000 pages of FBI documents, acquired through a lengthy Freedom of Information Act process, that record surveillance of predominantly Muslim neighborhoods in the United States. According to Boundaoui’s proposal:

We intend to use AI to imagine what government accountability might look like and utilize this program as a truth-seeking mechanism to understand the root causes, patterns of suffering and social impact of U.S. government surveillance of communities of color.

For many artists and provocateurs, co-creation methodologies can help critically reveal the distribution of power, control, impact, and meanings of these systems.

F is for Fake

Orson Welles’ cinematic masterpiece about art fraud, F for Fake, met with controversy and disdain when it was released in 1973. It is now viewed as prescient of rapid-fire mash-up editing, but most importantly, the film introduced the documentary world to the unreliable narrator. At the end of the film, which nests several confusing docudramas within one another, the narrator, Welles, reveals that he has lied to the audience throughout. This was too much for many 1970s audiences to bear; they felt duped.13 Trickery of this kind is as old as the medium in question, from 19th-century photographic “evidence” of ectoplasm and fairies to Welles’ 1938 radio hoax of H. G. Wells’ War of the Worlds. Fast-forward to today, and fake news, deep fakes, fake intimacy, and even fake AI make the unreliable narrator an everyday banality. These terms are discussed below. The cinematic playlist here might be a mash-up of Frankenstein, Her, and The Wizard of Oz: monsters of our own creation turn against us, while we misguidedly fall in love with algorithms, only to discover that the machine is not a machine at all; rather, it is operated by a very human wizard behind the curtain.

<p>Orson Welles explains to reporters that no one connected with the “War of the Worlds” radio broadcast had any idea the show would cause panic. Photograph in the public domain.</p>

Fake News

The term fake news was popularized by Canadian journalist and media critic Craig Silverman to describe the hoax and propaganda stories spread by misleading websites and social media, notably with the help of Russian bots during the 2016 U.S. presidential election and elsewhere. Since its appropriation by American President Donald Trump to describe any reporting unfavorable to him, however, the term has lost its meaning and has become, according to Samim Winiger in our interview:

… a completely misleading and absolutely garbage term. Ultimately, what we should be talking about is the creation of narratives, which has reached a maturity where … with minimal human input, we can produce hyper-customized narratives for millions of people that will, over time, attune themselves dynamically to be of high relevance to the individuals that we're targeting.

Agnieszka Kurant, an artist-in-residence at MIT, mentioned earlier in this chapter for her work with organisms such as slime molds, also deals with these extractive and highly addictive undertows of social media. Kurant stated in our interview:

I was thinking about how our energies are being mined, our joy or our anger, our being — for example, how we're being fed by click farms and troll farms all these fake images, fake news. How much of the things we're looking at is not real, and generating fake energy or real energy, generating anger or enthusiasm or fake popularity? It's all about mining energies that in many ways can be compared to the mining of fossil fuels, mining oil or gas or other forms of energy.

As a commentary on this scenario, in collaboration with Boris Katz of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), Kurant asked thousands of online workers on the Amazon Mechanical Turk platform to submit selfies to her project. The purpose of this was:

so that we could aggregate them into collective self-portraits of this new working class. These workers of the Amazon Mechanical Turk are working on platforms that can be used in the future for the common good, for other purposes, but currently are often exploitative.

Notably, Kurant shares the revenues from exhibiting the work with all of the participating Mechanical Turk workers.

Deep Fake

Image- and video-synthesis technologies are rapidly surpassing applications such as Photoshop in sophistication, so much so that it is becoming impossible for ordinary, non-professional viewers to distinguish real images from visual evidence of events that never happened. These seamlessly doctored visuals have come to be known as deepfakes, as described by The New Yorker.

Australian artist Jon McCormack, whose work explores the interaction of software and mimicked biological systems, warns of a not-too-distant future in which the collision and collusion of these systems — deep fake, generative narrative, and highly customized delivery systems — may create a new level of cultural production. McCormack said in our interview:

If you put all that stuff together, it's certainly within the realm of possibility that maybe in five- or ten-years’ time we'll have the ability to completely synthesize very realistic environments that have structured narratives and plots and things that we find interesting. […] Rather than a person communicating a vision through other people, it’s a machine specifically or parasitically deriving patterns that it assumes a person individually would like.

These new forms of media could alter our cultural, political, and knowledge-production environments in profound ways. The AI artist Simon Colton told us he is calling for generative literacy, akin to Stephanie Dinkins’s call to action on algorithmic empowerment: wider educational campaigns to inform the public about how these systems work.

Fake Intimacy

Many studies have raised alarms that high-volume, low-impact human contact, as provided by online social media, actually isolates rather than connects us. MIT’s Sherry Turkle, in a recent New York Times op-ed drawing on her decades of research, expressed concern that the more time children and adults spend relating to machines on an emotional level, the less empathy they exhibit toward other people. Meanwhile, famous YouTube stars are revealing high levels of loneliness, depression, and anxiety, according to The Guardian’s Simon Parkin, who stated:

Professional YouTubers speak in tones at once reverential and resentful of the power of “the Algorithm” (it’s seen as a near-sentient entity, not only by creators, but also by YouTube’s own engineers). Created by the high priests of Silicon Valley, who continually tweak its characteristics, this is the programming code on which the fate of every YouTuber depends. It decides which videos to pluck from the Niagara of content that splashes on to YouTube every hour (400 hours’ worth every 60 seconds, according to Google) to deliver as “recommended viewing” to the service’s billions of users.

<p>The Paro robot seal, a therapeutic robot, was invented to elicit emotional responses from elders and children. Sherry Turkle has written extensively about her concerns over the consequences of "fake intimacy." Photo by Aaron Biggs, under Creative Commons.</p>

The potentially unhealthy psychological and social impacts of these systems are underreported and misunderstood. Meanwhile, the profits they generate are closely monitored and heavily manipulated by increasingly consolidated large corporations.

Fake AI

The term artificial intelligence has become a highly lucrative marketing instrument, often used by companies making outlandish claims about what is possible. The number of companies claiming to use AI has skyrocketed, according to Fast Company. Recent investigations reveal that many start-ups label their products AI but in fact hire thousands of human Mechanical Turk workers to perform the tasks, because writing the code that would let systems perform them is simply too expensive. There are wizards behind the curtain (much like the Wizard in Oz), sometimes in the thousands. Astra Taylor calls this cloak of automation “fauxtomation,” as it “reinforces the perception that work has no value if it is unpaid and acclimates us to the idea that one day we won’t be needed.” She interrogates whose interests it serves to devalue the human labor toiling in the shadows of the AI industry.

Investigations reveal that while police, military, security systems, and governments have been at the forefront of developing and relying on AI systems to track and prosecute citizens, the AI systems employed in these ventures are highly problematic. Researchers such as Joy Buolamwini at MIT are working to expose the racial, economic, and social biases embedded in AI tools such as facial recognition. A report recently exposed clandestine research and development of AI tools at IBM that used New York police surveillance data. As Buolamwini wrote on Twitter: “Police should not be using unregulated, unproven, unwarranted, and ungoverned facial analysis technology.”

Internationally, Emily Bell of the Tow Center points, in our interview, to:

the horrible example that came out of China called Media Brain, which is a collaboration between the Xinhua news agency, the Chinese government, and the Alibaba search engine. It's a demonstration of what happens when there are no limits, when there are no boundaries between private and public data. [...] The idea of Media Brain is that it will use all types of visual material and other data, synthesize it into automated stories, and become, they said, the first AI-centric newsroom anywhere. […] You can already see this happening. Somebody spoke [about] automation use in China, and he said before he had finished the talk, there was an automated story about his talk. There [was] also a list of attendees of everybody in the room.

Sasha Costanza-Chock argues for a more intersectional interrogation of AI systems, in a recent essay titled “Design Justice, A.I., and Escape from the Matrix of Domination.” They write:

As a nonbinary trans feminine person, I walk through a world that has in many ways been designed to deny the possibility of my existence. From my standpoint, I worry that the current path of A.I. development will produce systems that erase those of us on the margins, whether intentionally or not … through the mundane and relentless repetition of reduction in a thousand daily interactions with A.I. systems that, increasingly, will touch every domain of our lives.

Costanza-Chock lists a growing movement of organizations that pursue design justice in AI.14

Ernest Edmonds, one of the pioneers of generative art, stated in our interview that he believes that “art isn't made by magic.” He continued:

So you go to the theater and you have a magical experience, and it seems dreamlike and magical. But behind that, people are working very hard on very practical things to actually let the smoke come out at the right moment and change the lights and etc., etc. […] So one has to distinguish between these feelings and effects that the artwork leads to … [and] the kind of work that the artist does in order to get there.

In considering co-creation, the human hand behind the magic of the technology remains a burning question: how much co-creation is actually human to human, simply mediated through machines? At the other end of the spectrum, several artists we interviewed raised the prospect of taking the human out of the picture completely, so that non-human systems co-create with each other.

<p>In “The Wonderful Wizard of Oz,” a popular children's book first published in 1900 (later adapted for the stage and for a major 1939 Hollywood film), Dorothy and her friends seek the help of the ruler of the Emerald City, said to be a powerful magician. When they meet him, though, they discover that the wizard is merely a large mechanical puppet, operated by a regular person hiding behind a curtain.</p>

For journalists, and indeed for anyone who reads, transparency, governance, and literacy are key, in part, to understanding structures of power and inequity. As Emily Bell told us:

This idea of algorithmic accountability, reverse engineering, and parsing these huge data sets, the deluge leak, all of that kind of thing, requires a level of technical literacy that journalists didn't have to have before. Now I really think, particularly in the investigative realm ... in fact, in every realm, journalists will really need to be able to come to grips with and understand [that].

Many of the artists, activists, journalists, and thinkers interviewed for this section of the report also argued that AI risks becoming a distraction from more urgent global problems such as climate change. “We went through 20, 30 years of building mobile dating apps and stuff like that, which is fine, the world needs some of it,” said Samim Winiger in our interview:

but I think we've had enough of it, especially considering the urgency and the existential threat that humanity is now facing with climate change […] Designers and technologists are by far not involved enough in this discussion, in any substantive way, not just cosmetics — “Oh, I've done a little charity project.” […] Why is it that AI is considered the breakthrough technology, and not the green technology revolution? These are your hard questions, but I think they're worth asking and then living, in a sense.

The wide-ranging collection of artists and practices presented in this chapter pulls back the curtain on Oz, tames Frankenstein, and offers therapy for those seeking to break up with the operating system portrayed in Hollywood’s Her. It is in these artistic and provocative gestures, some playful, some far more urgent, that co-creative methods can reveal, subvert, and even begin to heal our broken relationships with each other and the planet.


Co-creation between human beings and non-human systems is a sticky matter, often moving into the speculative realm. Artists, provocateurs, and journalists have complicated and nuanced relationships with the systems they work with. Not all of these artists subscribe fully to a co-creative model, but most of those we spoke with acknowledge the idea of co-creation, and there are striking similarities across the bodies of work described in previous chapters. Their visions, methods, and raisons d’être align with many of the methodologies apparent in other forms of co-creation, which we define as working within communities and across disciplines.

The key similarities these artists share include a keen interest in projects that do not originate from a singular author’s vision with a set script or agenda. These human/non-human media makers explain that the work would not be the same if made by only one of the partners; it is the interactivity and collaboration, they believe, that create a unique result. These projects also often seek to flatten hierarchies and decenter the former stars at the core of conventional projects: authors, researchers, and now even the human species. Further, human/non-human co-creative works are often long-term, process-driven, and highly iterative, with many outcomes and artifacts, sometimes millions of them.

The projects we examined are also concerned with systems and how they work in real time. Much of the most important work in AI reveals the limitations and dangers of relying too heavily on these systems, exposing the new cultural and political landscapes they could create if they remain unchecked, unregulated, and publicly misunderstood. Most of the co-creators we engaged with aim to turn these systems toward alleviating inequity and injustice at digital and planetary scales.

SPOTLIGHT: Sougwen Chung and DOUG

by Sarah Wolozin

<p>Sougwen Chung performs with a robot named DOUG (2018). Photograph courtesy Sougwen Chung.</p>

Co-creation with robots is at the heart of the practice of Sougwen Chung, a Chinese-born, Canadian-raised performance artist. Chung draws alongside drawing robots named DOUG (Drawing Operations Unit: Generation X), and refers to them as her collaborators.

“I have a background as a musician,” said Chung, now based in New York. She stated in our interview:

Many musicians have described this really personal engagement with their instrument. I think I became really engaged in trying to find that with my robotic collaborators after I switched to visual art.

Chung is intrigued with the artistic potential of human-robot collaborations, as well as the ways she can use these collaborations to interrogate the notion of single authorship, our relationships to technology, and the technologies themselves. “I co-create to learn alongside the machine — to co-evolve,” she told us.

In her Drawing Operations project, the robot learns how to draw through synthetic sensing and machine-learning algorithms, using Chung’s drawings as training data. The robot produces drawings that bear some resemblance to Chung’s own, but with their own machine-like qualities. Drawing alongside the robots in real time, the artist is “trying to adapt to its gestures,” and the robots are trying to adapt to hers. Not only does Chung evolve her art practice this way; she can also reflect on the idea of authorship. She observed in our interview:

<p>Sougwen Chung's work with machines involves live performance and studio arts.</p>

[Collaboration] can be fraught with a lot of different tensions … especially between human collaborators, because there is this sense of authorship and control. There's that power dynamic in all collaborations. It's really inspiring and revelatory when it feels like your collaborator is finishing your own sentences. […] But it also can feel like a lack of individuality. I think that's present in human[/machine] collaborations as well, especially as I have been developing the memory portion of the collaboration. [In time], I wonder how much I'll be able to recognize myself in the trained output and whether that will feel existential.

Chung became interested in working with robots because she wanted to challenge the image of robots as icons for automation and the master-slave narrative. Through her work she creates new imagery that shows her hand collaborating with robotic arms. She believes: “Co-creation requires a feedback loop — a spontaneity in decision making. […] When agencies intertwine and authorship becomes ambiguous, that’s the creative process made porous.”

Chung’s interrogation of authorship relates to questions about humanity’s relationship to technology, and asks: What kind of agency do we humans have with these emerging AI technologies; who is in control, and who do we want to be in control? “Models of co-creation foreground parity and balance,” she reported, “It necessitates questions around control and human agency, which we’ve lost. […] Agency as a concept is a precondition for justice. They’re inextricably intertwined.”

In her most recent project, Omnia per Omnia (2018), Chung paints alongside a multi-robotic system supplied with motion data extracted from publicly available surveillance footage in New York City. “I think I started thinking about consent a bit more when I brought in public cameras as source material, as I continued training,” she said, adding:

[It’s] a reality of living in an urban center like New York. After working with the project, you start to notice the prevalence of surveillance cameras everywhere. It makes visible, that which has been made invisible through ubiquity — this idea that there is a synthetic sense embedded into our public spaces.
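Chung has not published the technical pipeline behind Omnia per Omnia, but as a rough illustration of what “motion data extracted from footage” can mean, one common technique is simple frame differencing: compare successive video frames and locate where pixels changed. The sketch below is a minimal, hypothetical example using NumPy; none of the names or numbers come from Chung’s project.

```python
import numpy as np

def motion_mask(prev_frame, frame, threshold=10):
    """Binary mask of pixels whose brightness changed between two frames."""
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    return diff > threshold

def motion_centroid(mask):
    """Centroid (row, col) of the moving pixels, or None if nothing moved."""
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None
    return ys.mean(), xs.mean()

# Two synthetic 8x8 grayscale "frames": a bright block shifts one pixel right.
prev_frame = np.zeros((8, 8), dtype=np.uint8)
prev_frame[2:4, 2:4] = 200
frame = np.zeros((8, 8), dtype=np.uint8)
frame[2:4, 3:5] = 200

mask = motion_mask(prev_frame, frame)
print(motion_centroid(mask))  # prints the approximate centre of the movement
```

A real system would run this (or a richer method such as optical flow) over a live camera stream and map the resulting motion trajectories onto robot movement.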

<p>Photographs courtesy of Sougwen Chung.</p>


Chung worries about the lack of human consent and agency in how data is used to train machines, and draws a parallel with issues of cultural appropriation. She stated:

I have a few new projects coming up, one where I'm trying to train on artists of the past, different women of color, different artists from different cultural backgrounds. Obviously my intentions for that are good, but would it be different if I weren't a woman of color? Is that a different type of mechanical cultural appropriation?

The artist asks whether everyone should have knowledge, voice, and agency in how their data is used. She is concerned, for example, about interactive art in which sensing the people in a room is part of the process; that practice, she said, should not be normalized.

For Chung, exploring robotic systems as co-creators is a way to probe questions of parity in the art world and in society at large, and to ponder difficult questions: When a cultural artifact is created with data that feeds AI and algorithms that produce the unexpected, who is the author? That fundamental question drives Chung, and her answers are many.


  1. Boden, Margaret A. AI: Its Nature and Future. Oxford: Oxford University Press, 2016.

  2. Allen, Colin, and Michael Trestman. "Animal Consciousness." The Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta, Winter 2017 Edition.

  3. Martin, C. Dianne. "ENIAC: The Press Conference That Shook the World." IEEE Technology and Society Magazine (December 1995).

  4. "New Navy Device Learns By Doing; Psychologist Shows Embryo of Computer Designed to Read and Grow Wiser." The New York Times (July 8, 1958): 25.

  5. Lovelace’s comment referred to Charles Babbage’s Analytical Engine, a device conceptualized in the same decade as photography and telegraphy, making the 1830s foundational to our current media order.

  6. For an article criticizing the "hysteria about the future of artificial intelligence," see: Brooks, Rodney. "The Seven Deadly Sins of AI Predictions." MIT Technology Review, October 6, 2017.

  7. Lewis, Jason Edward, et al. "Making Kin with the Machines." Journal of Design and Science, July 16, 2018.

  8.

  9. A GAN (Generative Adversarial Network) is a system of two competing neural networks in which one generates content and the other evaluates it. As more data is input, the systems improve in their pattern recognition.

  10. Cornock, S., and Edmonds, E. A. "The creative process where the artist is amplified or superseded by the computer." Leonardo, Vol. 6, No. 1 (Winter 1973), pp. 11-16.

  11. For poetry, see Tel Aviv-based artist Eran Hadas and "Beyond Narrative Description: Generating Poetry from Images by Multi-Adversarial Training"; for prose, see an AI-generated novel in Japan that almost won a literary prize; for movies, see Sunspring, an AI-generated sci-fi short film; and for theatre, see Simon Colton’s musical theatre piece Beyond the Fence.

  12. Haff, Peter. "Humans and Technology in the Anthropocene: Six Rules." The Anthropocene Review, Vol. 1, No. 2 (August 2014), pp. 126-136.

  13. Ayres, Jackson. "Orson Welles's 'Complicitous Critique': Postmodern Paradox in F for Fake." Literature/Film Quarterly, Vol. 40, No. 1 (2012), pp. 6-19.

  14. "Happily, research centers, think tanks, and initiatives that focus on questions of justice, fairness, bias, discrimination, and even decolonization of data, algorithmic decision support systems, and computing systems are now popping up like mushrooms all around the world. These include Data & Society, the A.I. Now Institute, and the Digital Equity Lab in New York City; the new Data Justice Lab in Cardiff, and the Public Data Lab. Coding Rights, led by hacker, lawyer, and feminist Joana Varon, works across Latin America to make complex issues around data and human rights much more accessible for broader publics, engage in policy debates, and help produce consent culture for the digital environment. They do this through projects like Chupadatos ('the data sucker'). Other groups include Fair Algorithms, the Data Active group, the Center for Civic Media at MIT; the Digital Justice Lab, recently launched by Nasma Ahmed in Toronto; Building Consentful Tech, by the design studio And Also Too in Toronto; the Our Data Bodies project, by Seeta Ganghadaran and Virginia Eubanks, and the FemTechNet network." From "Design Justice, A.I., and Escape from the Matrix of Domination."

