As an artist and educator working with artificial intelligence and machine learning, I have noticed that much of the art and critical dialogue deals with the data used to drive these systems. This is for good reason: the entirety of the information used to train neural networks, for example, comes from human beings, and as such includes our biases and misconceptions, leading to the development of prejudiced machines. In machine learning circles this is described as "garbage in, garbage out." Artists often describe things through the dichotomy of form and content, and with neural networks one can employ the same analysis.
If the data is the content, then we must also investigate the form: the network itself. These structures are largely obfuscated by the "black box" effect, meaning it is practically impossible to inspect how the algorithms produce their results. A black box can be the result of closed, proprietary source code, as in the case of Grand Theft Auto V in my film why don't the cops fight each other?. It can also be the result of the evolutionary and chaotic development of a neural network over thousands of training iterations, in which the network's neurons take on complex behaviors, which I investigate in Usefulness of a Useless Neural Network. In this work, I demonstrate that given the optimization techniques employed in neural networks, "machine learning" could just as easily be described as "machine forgetting."
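The "machine forgetting" idea can be illustrated with a minimal sketch (my own toy example, not the network from the artwork): a one-weight model is trained by gradient descent on one task, then retrained on a second. Optimizing for the second task overwrites what was learned for the first, a simple instance of what the literature calls catastrophic forgetting.

```python
# A one-weight linear model y = w * x, trained by gradient descent.
# Toy illustration of "machine forgetting": learning task B erases task A.

def train(w, data, lr=0.05, epochs=100):
    """Minimize squared error (w*x - y)^2 over data by gradient descent."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

def error(w, data):
    return sum((w * x - y) ** 2 for x, y in data)

task_a = [(x, 2 * x) for x in (1.0, 2.0, 3.0)]   # best fit: w = 2
task_b = [(x, -1 * x) for x in (1.0, 2.0, 3.0)]  # best fit: w = -1

w = 0.0
w = train(w, task_a)
err_a_before = error(w, task_a)  # near zero: task A learned
w = train(w, task_b)
err_a_after = error(w, task_a)   # large: task A forgotten
```

Nothing in the optimization asks the model to remember task A; gradient descent simply moves the weight wherever the current loss points.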
In these works, I hope to offer a glimpse into this black box, to give some sense of what is inside. In Inference Engine, the player is positioned inside the latent space of a Generative Adversarial Network, a common network architecture used to generate photorealistic images. The player must traverse this space by controlling the levers of a hyperdimensional spaceship, manipulating the neurons of the network by hand in order to navigate this complex system.
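The navigation the player performs can be sketched in a few lines. This is a toy stand-in, not the piece's actual code: a fixed linear map plays the role of a trained GAN generator, and "moving the levers" becomes walking a straight line between two latent vectors, generating an image at each step.

```python
import math

# Stand-in "generator": a fixed linear map from a 2-D latent vector
# to a 4-"pixel" image, squashed into (-1, 1) with tanh.
# (A real GAN generator is a trained deep network; this is illustrative.)
W = [[0.9, -0.4],
     [0.2, 0.7],
     [-0.6, 0.3],
     [0.5, 0.8]]

def generator(z):
    return [math.tanh(sum(w_i * z_i for w_i, z_i in zip(row, z)))
            for row in W]

def traverse(z0, z1, steps=5):
    """Walk a straight line through latent space, rendering each step."""
    frames = []
    for i in range(steps):
        t = i / (steps - 1)
        z = [(1 - t) * a + t * b for a, b in zip(z0, z1)]  # interpolate
        frames.append(generator(z))
    return frames

frames = traverse([-1.0, 0.5], [1.0, -0.5])
```

Each frame is a small "image"; as the latent vector moves, the output morphs continuously, which is exactly what makes latent space feel like a navigable place.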