Lead image designed by Andrew Brumagen courtesy of Freethink


Writer, theater director, and performance maker Annie Dorsen creates original works exploring artificial intelligence, performances that she calls “algorithmic theater.” These pieces aim to discover the unexpected in the interplay of humans and machines. Her current project applies the text generation model GPT-3 to Aeschylus’s unfinished Prometheus trilogy from Greek antiquity—a three-part tragedy in which the unknown fate of humanity hinges on questions of power and technology. Only fragments survive of the final play, Prometheus Fire-Bringer, and Dorsen intends to use GPT-3 to propose multiple possible completions of it. Dorsen spoke with dramaturg Tom Sellar in January 2022 following residencies at MAX and at Brooklyn’s Mercury Store. The piece is scheduled to premiere in Philadelphia in 2023.

ANNIE DORSEN: After the residency, I read a nice piece by Rob Horning called “Plausible Disavowal.” It’s an essay about AI-generated art and why he finds it charming, and why he feels sort of bad about finding it charming. It’s a good discussion of the connection between how machine learning algorithms are set up and the kinds of creativity they seem to demonstrate. Here’s one part of his argument:

“AI-generated art depends on massive troves of collectively produced data and evokes an idea of creativity without the individualist spark of insight. Rather than make anything genuinely new, generative adversarial networks converge on a stereotype, as a ‘discriminator’ network using a set of images or phrases already determined to belong to some genre refines the attempts a ‘generator’ network makes to approximate that genre.”

By way of background, the way that adversarial networks work is you train the network on a bunch of examples of the kind of thing you want. Images of cats, let’s say. And then there’s a little competition between two parts of the network. That’s why it’s called adversarial. The first part generates some new material, a new picture of a fake cat. The second one is trained to compare this new image to the training images and gives it a score based on how close it is. So the first one generates a cat. The second one says, “no that doesn’t look very much like these pictures of real cats,” and then the first one makes some modifications and says “how about now?”  Over time the generator improves. It learns to create more plausible cat pics.  There’s a bias, therefore, in favor of existing images, in favor of the familiar, or the recognizable. A bias that will reward the network for making things that look like what it has seen in its training. So in other words, success is measured in terms of how well the new thing matches old things. The whole process converges on the typical. 
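The loop Dorsen describes can be sketched in a few lines of code. This is a toy, not a real GAN: it uses a single number as a stand-in for an image, and a hand-written scoring function as a stand-in for a trained discriminator network. But it shows the dynamic she points to — the generator is rewarded only for resembling the training data, so the process converges on the typical.

```python
import random

# Toy sketch of the adversarial loop: a generator proposes, a
# "discriminator" scores against the training data, and the generator
# keeps whichever modification scores best. Not a real neural GAN.
random.seed(0)

# "Training data": numbers clustered around 5.0, a stand-in for cat images.
real_data = [random.gauss(5.0, 0.5) for _ in range(1000)]
real_mean = sum(real_data) / len(real_data)

def discriminator_score(sample: float) -> float:
    """Higher score = more plausible, i.e. closer to the training data."""
    return 1.0 / (1.0 + abs(sample - real_mean))

fake = 0.0   # the generator's initial, implausible "cat"
step = 0.1
for _ in range(200):
    # The generator tries small modifications and keeps the best-scoring one.
    fake = max(fake, fake + step, fake - step, key=discriminator_score)

# The generator converges on the most typical sample: success is measured
# by resemblance to what already exists, so novelty is never rewarded.
print(round(fake, 1))  # lands near the mean of the training data
```

In a real GAN both the generator and the discriminator are neural networks trained jointly, but the incentive structure is the same: the generator's only objective is to be indistinguishable from the examples it was shown.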

TOM SELLAR: As opposed to an original thing, or something groundbreaking, or new insight? 
AD: Right. Of course, human artists also copy things. Or they try to write within a genre. Obviously we are influenced by what we’ve seen before. But still we tend to try for something new, to create something that feels true in a way we haven’t seen before. That doesn’t necessarily matter so much if you’re just trying to make a realistic image of a cat. But generally speaking, if you’re just going to create another example of something that already exists, why would you bother? The motivation is to add something to the world that wasn’t there before. 
But algorithmic creativity is about being plausible in relation to pre-existing data. That inevitably means that the algorithm’s intention – to the extent that the programming gives it intentionality at all – is to make something the same. That’s why Horning calls it a kind of creativity without insight. Life experiences, perceptions, or observations aren’t the source material. The source material is other representations. No inner measure of truth, or value. Only a mechanical comparison process. 
TS: In that case, why use AI at all? Or for this project in particular: What do you need it for?
AD: That’s a good question! For the moment I’m still asking it myself. I suppose I’m trying to figure out if GPT-3 has something like an inherent aesthetics. What interests me is how these mathematical processes manifest themselves. What structures of feeling can they create? How do they organize and present information and make meaning? It’s what I think of as the dramaturgy of algorithms. 
I started the project almost just as a way to spend some time with GPT-3. I was looking for a project to do with it, to see what’s interesting about it. And then I’ve always wanted to do something with Prometheus, and the lost third play of the trilogy seemed like a good fit. 
TS:  So maybe we can talk for a bit about the play. We’ve talked before about Hannah Arendt, who wrote about loneliness as a precondition for totalitarianism, and the difference between isolation and loneliness. That’s such a big part of Prometheus Bound, Aeschylus’s first play in the trilogy: Prometheus has a freakout about the isolation of his punishment. It’s sort of a play about the state of mind produced by the prospect of isolation. 
AD: Oh, I didn’t think of it that way before. That’s true. It’s a weird thing, though, in Prometheus Bound, you know, he’s supposed to be left up there forever, on the rock. But he keeps getting visitors. The whole play is him having friends over.  
TS: Right. People keep dropping by. 
AD: Ha, they won’t leave him alone for a minute! But if we’re talking about dichotomies in the play, for me the basic conflict is between Zeus and Prometheus. Well, in the first play, Zeus doesn’t appear, he’s represented by emissaries. The second play, Prometheus Unbound, we don’t know. And the third play, Prometheus Fire-Bringer, we don’t know. But even so, the conflict is between two different kinds of power. Prometheus representing the power of the mind – critical thought – and Zeus representing the power of brute force. We know Prometheus as the bringer of fire, the friend of man, the one who teaches us that our brains are valuable. By giving us the power to think.
TS: And invent tools. 
AD: Right, yes. I keep wondering if we’d had the second two plays from the start, and knew that in the end Prometheus surrenders to Zeus, would he still be such a romantic figure? Would Marx have been fascinated by him, and Romantic poets like Shelley? Or would we have an entirely different mythology around the relationship between force as power and thinking as power? That’s one of the questions I have. In the end power is always the power to control human bodies, to inflict death, or cage people, or deprive them of their basic necessities. The power of the mind is in a way the power of the weak. In Nietzsche, that’s the power of the resentful. In folk tales it’s always the powerless who are clever or ingenious. We live in a world in which technical mastery is a very strong form of power, but I also wonder how much the reality of physical power is maybe a bit hidden from many of us. 
TS: Although, don’t we also live in a world in which the state has the power to control technology?
AD: Yes! I was just reading an old piece by Lewis Mumford, from the 60s, called “Authoritarian and Democratic Technics.” His thesis is that centralized control of mass technologies is fundamentally antithetical to democracy. We accept these authoritarian technics because we’ve been offered a bargain, he calls it the magnificent bribe. In exchange for surrender, we get material advantages, instantaneous communication, entertainment, convenience. Needless to say, he doesn’t think it’s a good deal. He says: “Once our authoritarian technics consolidates its powers, with the aid of its new forms of mass control, its panoply of tranquillizers and sedatives and aphrodisiacs, could democracy in any form survive? That question is absurd: life itself will not survive, except what is funneled through the mechanical collective.” So there’s that. 
TS: One of the things that’s interesting in the trilogy is that in the first play Prometheus is himself a victim of that first kind of power: His body has been chained to a rock. He’s been victimized by Zeusian state power, and yet scholars tell us that by the end of the trilogy he comes around and capitulates, and accepts the argument in favor of that power. But because only fragments of the drama exist today, we don’t exactly know how he got there. 
AD: Right. Is this a 1984 thing? Where he’s broken at the end? 
TS: Right, or like Stockholm syndrome where he starts to identify with his captor.
AD: Or is – is that what the situation is in 1984, with what’s his name?
TS: Winston. Yes, because the State was able to informationally pinpoint his innermost fear and weaponize it. He then, willingly or consciously or not, embraced the point of view that would protect him. So in a sense, one thing we’re doing is asking to see the math: how does Prometheus get from here to there?
AD: In a way we’re asking that. Or maybe not. Because we’re not writing a play. A production of Prometheus Bound, or a Shelley-type exercise of sitting down to write a third part to the trilogy, based on our understanding of psychology and so on – that’s one kind of project. I’m not really sure yet what this kind of project is. I don’t think it’s about me fantasizing and thinking, “I wonder what Prometheus is all about?” Maybe it’s more like putting the question of technology, or there’s something…I want to say meta, but now that’s Facebook, so that word has been stolen from us. But I hope that the way I’m making the piece will change the meaning of it. I can see that the conversation we’re having about Prometheus’ psychological state – that would be the dramaturgical conversation we would have to have if I was going to direct Prometheus Bound. But I’m not sure if we have to have it to make this thing. 
TS: Sure. On the other hand if we disregard dramaturgy, maybe we’re playing with fire! As you just pointed out, revolutionaries have often embraced the Prometheus myth, at least rhetorically or ideologically, because of the symbolism of his radical power-giving tool, the Promethean fire he gifted to humanity over Zeus’s objection. Meanwhile this project is inseparable from the technology creating it. Is it your intention to question AI as a tool, even as it generates the performance event?
AD: Absolutely. I’m really just trying to learn something about it. The deference we already pay to these tools is pretty alarming. It’s dangerous to believe that they are going to be able to tell us something we don’t know. Or it’s ok to believe it I guess, but it’s a bad idea to restructure our entire society around that belief. In the old-school McLuhan understanding of technology as prosthesis, computers are prostheses for cognitive work. We delegate math problems to calculators, spelling to spell-check, driving directions to Siri, and so on, the technology extends our cognitive reach. But what are the implications of that? I’ve also been reading Bernard Stiegler, particularly what he wrote about the proletarianization of the mind. For Marx proletarianization is about the transformation of craftspeople into piecework laborers, unskilled laborers – 
TS: By technology—or what Stiegler calls “technics.”
AD: And also by economic and structural reorganization. The consolidation of expertise into the hands of a few, as opposed to a more general distribution of knowledge. The carpenter who designs and builds a chair from beginning to end gets turned into an assembly line worker who does one small piece of the production chain over and over again, a thousand chairs a day, according to a design made by a guy upstairs in the corner office. There’s a manager who designs the chair, and there’s another manager who takes that design and divides it into manufacturing steps which are then programmed into machines. And our worker on the shop floor uses those machines to do her small piece, and her expertise is no longer needed or wanted. Her relationship to what she makes is different too. Instead of selling a chair, she sells her time. So that’s the traditional version of proletarianization. Stiegler is wondering if that’s a useful frame to think about another transformation, in which our cognitive labor is being proletarianized. We are presented with small cognitive modules to perform, within a structure designed and implemented by experts. We do our most valuable work by clicking on things. No matter what we do for a living, maybe the most productive work we do for the economy is providing tech platforms with our behavioral data. 
TS: Self-commodifying because we’re providing them with the materials for – 
AD: – for predicting what we’re going to buy next. Right.
TS: Which you’ve spoken about to me as a kind of strip-mining of our inner lives. 
AD: Which leaves us depleted. I think Stiegler is really good on this. He’s not just rearticulating the idea that one can be alienated from one’s creative labor. He says that when your cognitive work gets proletarianized, you don’t only lose your imagination, you also lose your ability to live well. Knowledge of how to live gets extracted and aggregated, subjected to statistical analysis, and sold back to us in parts. Anyway. Stiegler connects this to a loss of savoir vivre. What he calls “symbolic misery.” I don’t really know what symbolic misery is, but it sounds bad. 
TS: I’m wondering about a tension between technology as a disruptor and technology as a repetition of the already-known. The existing cognitive data. It seems like we sometimes see it as one and sometimes as the other.
AD: I think it is both. The disruption side is about how these technologies interfere with and mediate our social relationships. And the data that is used to do that is all past information, based on our past behavior. You enter into a weird relationship with some kind of algorithmic all-seeing eye, which maybe you come to trust more than you trust the people you know. Are Amazon’s book recommendations for you more helpful than the ones your friend makes? Remember that thing, when you would ask someone for advice and they would say “let me Google that for you”? Like, “how dare you ask me, you should just Google.” That’s what makes the tech disruptive and a repetition of past preferences.
TS: Because we’re having to find ourselves in relationship to it.
AD: By engaging with these tools, the platforms begin to define us, and by and large we accept that definition. That’s the loop. Knowledge that used to be built up between people in relationship with each other is now all stored outside. It’s like our epistemological universe has changed its shape. It’s not horizontal anymore, spread out in a community, it’s a one-on-one relationship with an algorithm that is personalized to show us things we want. 
TS: We don’t try to hold knowledge within ourselves anymore. 
AD: The digital repository of information becomes a support structure for memory, it’s the inscription of memory. But it’s a very partial memory and it’s external to our brains. It’s owned by someone else, and it’s monetized.
TS: So what does AI produce? It’s produced a version of ourselves which in a way already existed, but on the other hand it’s a kind of simulation or projection.
AD: I don’t know what the relationship is between a self and the version of a self that the AI claims to know. I don’t know if we should think of pre-established desires as real, or authentic. I mean, the tech hype is that these AI systems can see what we really want, based on what we actually do, not on what we say or think. It knows what we want better than we do. I find those claims more than dubious, in part because the data we create is a product of us reacting to what is shown to us. The AI is training us just as much as we are training it.  
TS: Another insight from Rob Horning, to your point: he writes, “Something or someone is authentic because it invites vicarious participation, and it omits any details specific enough to inhibit identification. A moment feels authentic if it feels auto-completed, or like emergent behavior.”
AD: Emergent behavior, meaning some kind of pattern emerges via the operation of the mathematics. Something that seems to appear spontaneously, rather than being chosen by a person. So he’s saying the tech seems to offer a way for artists to escape authorial responsibility. They can claim and disclaim the artwork at the same time. 
And he says maybe one of the things that feels more authentic to us about AI-generated art is that it’s not trying so hard. There’s a sense, a false sense by the way, that it’s not the product of somebody’s effort. It’s as if it just happened. And that, he says, gives it a sense of authenticity. Honestly I kind of agree that plays written by playwrights, for example, tend to feel pretty artificial to me these days. When I hear the dialogue I am so aware of the writer’s voice and their structural choices and so on. There’s a kind of falseness to it. And somehow a text generated by an AI avoids this problem, because you think it’s just the product of math, and chance, and there’s something kind of miraculous about it just appearing.
TS: Which gives it authenticity?
AD: Yes.
TS: Although a play is incomplete. It’s supposed to be made authentic by whoever is going to realize it. The director, the actors. 
AD: True. When you are dealing with actors trying to make written language come alive, that’s craft and skill. You’re dealing with artifice. Art. There’s nothing wrong with that, obviously, but somehow I tend to give AI-generated language a pass on a lot of things that I don’t give human-generated text. I’m more forgiving of it somehow. 
TS: One of the things I like about your work is that it always exposes that very gap. 
AD: But I don’t know how to expose that with GPT-3! That’s in a way my big problem right now. You can’t mess with it the way you can with other kinds of coding. With GPT-3 it’s like you’re playing a slot machine. The only thing you can do is put a quarter in and see what happens. You can design a prompt, the task you’re asking it to perform. You give it a piece of text you want the AI to imitate, or give it a direction, a specific job to do. “Write a song for a chorus of mermaids” or whatever. Write a dialogue in which this character talks about X. Or a monologue about how much someone loves sandwiches. And GPT-3 will give you some text that supposedly accomplishes that. There can be a big variety in what it gives you. But your only control is in the question you’re asking. That’s the opportunity for artistry, in a way. And then the model just spits something out. That’s why I think of it like playing the slots. With other work I’ve done, my team and I wrote the code from the ground up. With GPT-3 I’m in a much more passive position. “Luck be a lady,” “Baby needs a new pair of shoes,” or whatever. You can’t get in there and tinker with it. You can only access it with a certain prompt at a certain moment in time. 
TS: But in the workshop we became smarter about what kinds of prompts were more likely to give interesting results. So that’s some skill developing. We’re learning, and the machine is learning too. 
AD: No, not really. The machine isn’t learning based on our questions or inputs. The machine already learned everything it’s going to learn. The training of the model already happened, somewhere else, under someone else’s supervision.  And the amount of data it’s working with is so enormous there really wouldn’t be any way – or any point – in us trying to build something of our own. 
TS: So the machine finished its education, it got the degree.
AD: Ha, exactly. Now it’s out there, living its life. And each time we access it with a given prompt, it will do something different. Actually it’s a great example of the humans being trained by the machine. We adjust ourselves to accommodate its preferences and particularities. So one way to think of it is, when you write the code yourself you have to decide what you want to make, and that’s a creative process. Sometimes trial and error, but also some imagination of what you’re going for. With this, you’re not asking “what do I want?” but “do I want what it’s giving me?” 
I’ve been trying to understand why we’re all so delighted by the results. I guess it is just a kind of magic trick feeling? “How does it do that?” etc. 
TS: A Magic 8-Ball. 
AD: Yeah, there’s something a little oracular about it, which goes well with the Greeks. You get what you get. And that’s interesting. It exposes an asymmetry of power. And then, some of the very early roots of computer science are all entangled with fortune-telling. A lot of fortune-telling techniques are based on a chance situation, random numbers, that sort of thing. And that’s not far from GPT-3, which gives you something, seemingly out of nowhere. And if you had pressed “enter” a few seconds later or earlier, you might have gotten something else: Another fortune. Another future. 
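The oracular, "another fortune" quality has a concrete mechanism: models like GPT-3 don't return the single most likely continuation, they sample from a probability distribution over possible next tokens, so the same prompt can yield a different text on every run. A minimal sketch of that sampling step — the token list and probabilities here are invented for illustration, not taken from any real model:

```python
import random

# Hypothetical next-token probabilities a model might assign after some
# prompt. In a real model this distribution comes from the trained
# network; here it is made up for illustration.
next_token_probs = {"fire": 0.4, "fate": 0.3, "freedom": 0.2, "sandwiches": 0.1}

def sample_token(probs: dict, temperature: float, rng: random.Random) -> str:
    # Temperature reshapes the distribution: low values favor the
    # likeliest token (more typical output), high values flatten it
    # (more surprising output).
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return rng.choices(list(probs), weights=weights, k=1)[0]

rng = random.Random()  # a different seed -> another fortune, another future
print([sample_token(next_token_probs, 0.8, rng) for _ in range(5)])
```

Temperature is one of the few dials the GPT-3 interface actually exposes to the user; the distribution itself — the model's "opinion" about what comes next — is fixed by training and can't be inspected or tinkered with, which is the slot-machine asymmetry Dorsen describes.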

The discussion continues with MAXforum: The Neuroverse


