Lead image designed by Andrew Brumagen courtesy of Freethink
Writer, theater director, and performance maker Annie Dorsen creates original works exploring artificial intelligence, performances that she calls “algorithmic theater.” These pieces aim to discover the unexpected in the interplay of humans and machines. Her current project applies the text generation model GPT-3 to Aeschylus’s unfinished Prometheus trilogy from Greek antiquity—a three-part tragedy in which the unknown fate of humanity hinges on questions of power and technology. Only fragments survive of the final play, Prometheus Fire-Bringer, and Dorsen intends to use GPT-3 to propose multiple possible completions of it. Dorsen spoke with dramaturg Tom Sellar in January 2022 following residencies at MAX and at Brooklyn’s Mercury Store. The piece is scheduled to premiere in Philadelphia in 2023.
ANNIE DORSEN: After the residency, I read a nice piece by Rob Horning called “Plausible Disavowal.” It’s an essay about AI-generated art and why he finds it charming, and why he feels sort of bad about finding it charming. It’s a good discussion of the connection between how machine learning algorithms are set up and the kinds of creativity they seem to demonstrate. Here’s one part of his argument:
“AI-generated art depends on massive troves of collectively produced data and evokes an idea of creativity without the individualist spark of insight. Rather than make anything genuinely new, generative adversarial networks converge on a stereotype, as a ‘discriminator’ network using a set of images or phrases already determined to belong to some genre refines the attempts a ‘generator’ network makes to approximate that genre.”
By way of background, the way generative adversarial networks work is that you train the network on a bunch of examples of the kind of thing you want. Images of cats, let’s say. And then there’s a little competition between two parts of the network. That’s why it’s called adversarial. The first part generates some new material, a new picture of a fake cat. The second one is trained to compare this new image to the training images and give it a score based on how close it is. So the first one generates a cat. The second one says, “no, that doesn’t look very much like these pictures of real cats,” and then the first one makes some modifications and says, “how about now?” Over time the generator improves. It learns to create more plausible cat pics. There’s a bias, therefore, in favor of existing images, in favor of the familiar, the recognizable: a bias that rewards the network for making things that look like what it has seen in its training. In other words, success is measured by how well the new thing matches old things. The whole process converges on the typical.
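The back-and-forth Dorsen describes can be sketched in miniature. The toy Python below is a schematic illustration, not a real GAN: in an actual system both networks are trained together by gradient descent, whereas here the “discriminator” is a fixed closeness score and the “generator” simply keeps whichever proposals score better. All names and numbers are invented for the example.

```python
import random

# Toy stand-ins: the "real cat pictures" are just numbers clustered near 5.0.
training_examples = [4.8, 5.1, 5.0, 4.9, 5.2]

def discriminator(sample):
    """Score a proposal by how close it is to the training data (higher = more
    plausible). A real discriminator is itself a trained network; this fixed
    rule just mimics its role as the critic."""
    mean = sum(training_examples) / len(training_examples)
    return -abs(sample - mean)  # 0 is a perfect match; farther away is worse

def train_generator(start=0.0, rounds=200, step=0.5, seed=1):
    """The generator's side of the loop: propose a modification ("how about
    now?") and keep it only if the critic prefers it."""
    rng = random.Random(seed)
    current = start
    for _ in range(rounds):
        proposal = current + rng.uniform(-step, step)
        if discriminator(proposal) > discriminator(current):
            current = proposal  # keep modifications the critic rewards
    return current

fake = train_generator()
```

After enough rounds, `fake` ends up near the mean of the training examples. That is the bias in miniature: success is defined as resemblance to what was already seen, so the process converges on the typical.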