COMPRESSED CINEMA
Compressed Cinema is the series title for five new audiovisual works completed in 2020. The images were created by Casey Reas, and each work has a stereo audio track composed by Jan St. Werner.

The Compressed Cinema digital videos were created in the tradition of experimental films that use existing films as raw material. The Compressed Cinema videos are an inversion of Ken Jacobs’s 1969 film Tom, Tom, the Piper’s Son, which expanded the short 1905 film of the same name from 8 to 115 minutes through meticulous rephotography, repetition, and editing. In contrast, each Compressed Cinema video distills a feature-length film into a work of less than ten minutes. A Compressed Cinema video is a complete reimagining of a film through a process of transformation and editing. The collection of five videos created to date is the result of over three years of experimenting with and developing new techniques for creating cinematic media with generative adversarial networks (GANs).

Each of the five videos was generated from a carefully trained GAN model. Some of the videos use conventional editing techniques, and others are created entirely from a continuous flow of images possible only through working with GANs. Unlike editing a film or video, where an artist has a set of fixed frames to organize in time, a Compressed Cinema model contains an indeterminate number of unique images within the GAN’s “latent space.” It’s a microworld of potential images to be explored, and each finished video is a narrow path through near-infinite options.
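
The “continuous flow of images” can be illustrated with a toy sketch: a sequence of video frames is obtained by walking smoothly between points in latent space, with each point then rendered by the trained generator. The latent dimensionality and the stand-in vectors below are illustrative assumptions, not details of the artists’ actual models.

```python
import random

def lerp(z0, z1, t):
    """Linear interpolation between two latent vectors at position t in [0, 1]."""
    return [(1.0 - t) * a + t * b for a, b in zip(z0, z1)]

def latent_path(z0, z1, steps):
    """A smooth walk through latent space: one latent vector per video frame."""
    return [lerp(z0, z1, i / (steps - 1)) for i in range(steps)]

random.seed(1)
dim = 512  # a common GAN latent size; an assumption, not the artists' setting
z_start = [random.gauss(0.0, 1.0) for _ in range(dim)]
z_end = [random.gauss(0.0, 1.0) for _ in range(dim)]

frames = latent_path(z_start, z_end, steps=24)
# Each entry in `frames` would be fed to a trained generator to render one image.
```

Because every intermediate vector is itself a valid position in the latent space, the generator produces a coherent image at each step, which is what makes a continuous, un-cut flow of images possible.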

The process of making a Compressed Cinema video starts with training the model. This is done by extracting each frame from an existing film. These frames are cropped differently to create a more heterogeneous set of images as the training data. The model is trained over a period of two to four weeks on a computer in the artist’s studio, with sample “contact sheets” generated every few minutes to monitor the training process. After the model is trained, images are generated to evaluate the training. If the training fails according to the artist’s goals, the model is trained again with modified training data. Some films never trained successfully and were abandoned.
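
The cropping step can be sketched minimally as follows. The frame dimensions, crop size, and the synthetic frame are illustrative assumptions; a real pipeline would read frames from the film itself.

```python
import random

def random_crops(frame, crop_size, n):
    """Cut n differently positioned square crops from one frame (a 2-D grid
    of pixel values) to make the training set more heterogeneous."""
    h, w = len(frame), len(frame[0])
    crops = []
    for _ in range(n):
        top = random.randrange(h - crop_size + 1)
        left = random.randrange(w - crop_size + 1)
        crops.append([row[left:left + crop_size] for row in frame[top:top + crop_size]])
    return crops

random.seed(7)
# A stand-in 270x480 grayscale frame; real frames would be extracted from the film.
frame = [[(x + y) % 256 for x in range(480)] for y in range(270)]
training_images = random_crops(frame, crop_size=128, n=4)
```

Varying the crop position (and, in practice, the scale) multiplies one frame into many distinct training images, which helps the GAN learn textures and details rather than memorizing whole compositions.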

Once the model is finished training, thousands of images are generated from randomized positions within the latent space of the model. Individual images are selected as seeds for discovering more images within the same area. Through this process of refinement, thousands of images are curated down to hundreds and then distilled to about one hundred. To make a comparison to photography, it's similar to moving through a stack of contact sheets to home in on the desired results. These one hundred or so images are further curated into a few dozen images that become the key frames for the final video.
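
The two stages above — broad random sampling, then a tighter search around a chosen seed — can be sketched like this. The latent size, counts, and perturbation radius are illustrative assumptions.

```python
import random

def sample_latents(n, dim):
    """Generate n random positions in the model's latent space."""
    return [[random.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(n)]

def neighbors(seed, n, radius):
    """Explore the area around a selected seed: small random perturbations
    yield related images nearby in latent space."""
    return [[v + random.gauss(0.0, radius) for v in seed] for _ in range(n)]

random.seed(3)
dim = 512  # illustrative latent size
candidates = sample_latents(2000, dim)        # the initial "thousands of images"
seed = candidates[42]                         # one image the artist selects
refined = neighbors(seed, 100, radius=0.05)   # a narrower search around it
```

Because nearby latent vectors decode to visually similar images, shrinking the perturbation radius is how the search narrows from contact sheets to key frames.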

The machine-learning algorithms used for Compressed Cinema don’t “see” in the way we do. A GAN can coalesce a diverse range of patterns and textures during training and statistically aggregate them into coherent images, but the GAN doesn’t recognize something as a “face” or “table” the way our vision systems continually filter the world. Without these human filters, GANs create images that appear uncanny, weird, and dreamlike to us. Some of the images within each trained Compressed Cinema model are representational and vary little from the source material, some are completely abstract and noisy, and others are hybrids. Each Compressed Cinema video has a different atmosphere coaxed from the unique forms and textures within each Compressed Cinema model, and therefore indirectly from each film source.

Each Compressed Cinema video is a collaboration between Casey Reas and Jan St. Werner: Reas created the video and St. Werner created the audio. The images and sound are equally important for experiencing the videos. St. Werner’s compositions augment the transmutation of imagery in and out of recognition by adapting computer-generated sounds with granular synthesis, a technique that breaks acoustic events into microscopic grains to be arranged and modulated freely. The culmination of visuals and sound mimics the discernible lexicon of film while establishing a new, multi-sensory expression of cinema.
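
Granular synthesis, as described, can be sketched in a few lines: a source signal is cut into tiny enveloped grains that are scattered and summed at new positions. This is a generic textbook sketch with assumed parameters (sample rate, grain length, grain count), not St. Werner’s actual process.

```python
import math
import random

def hann(n):
    """A smooth envelope so each grain fades in and out without clicks."""
    return [0.5 - 0.5 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]

def granulate(signal, grain_len, n_grains, out_len):
    """Cut grains from `signal`, envelope them, and overlap-add them at
    random positions in a fresh output buffer."""
    out = [0.0] * out_len
    env = hann(grain_len)
    for _ in range(n_grains):
        src = random.randrange(len(signal) - grain_len)  # where the grain is cut
        dst = random.randrange(out_len - grain_len)      # where it lands
        for i in range(grain_len):
            out[dst + i] += signal[src + i] * env[i]
    return out

random.seed(5)
sr = 8000  # assumed sample rate for this toy example
tone = [math.sin(2 * math.pi * 220 * t / sr) for t in range(sr)]  # 1 s of 220 Hz
texture = granulate(tone, grain_len=400, n_grains=300, out_len=2 * sr)
```

Even from a plain sine tone, the overlapping 50-millisecond grains produce a shimmering cloud of sound rather than a recognizable pitch sweep, which is how the technique can carry sound, like the imagery, in and out of recognition.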

The technical work of wrangling and writing custom machine-learning software was led by Hye Min Cho. Videos 1 and 2 were created with the Deep Convolutional Generative Adversarial Network (DCGAN) architecture by Radford et al., and Videos 3–5 were created with the Progressive Growing of GANs algorithm by Karras et al.

//

Casey Reas' software, prints, and installations have been featured in numerous solo and group exhibitions at museums and galleries in the United States, Europe, and Asia. His work ranges from small works on paper to urban-scale installations, and he balances solo work in the studio with collaborations. Reas' work is in a range of private and public collections, including the Centre Georges Pompidou and the San Francisco Museum of Modern Art. Reas is a professor at the University of California, Los Angeles. With Ben Fry, Reas initiated Processing in 2001; Processing is an open-source programming language and environment for the visual arts.

More at https://reas.com

//

Jan St. Werner is an electronic music composer and artist based in Berlin. He's best known as one half of the electronic duo Mouse on Mars, and he has also pursued a solo career creating music under his own name as well as under the aliases Lithops, Noisemashinetapes, and Neuter River. Starting in the mid-1990s as part of Cologne’s A-Musik collective, St. Werner released a steady stream of influential records both as a solo artist and with Mouse on Mars. During the 2000s, he was the artistic director of Amsterdam’s Studio for Electro-Instrumental Music (STEIM). In 2013, St. Werner launched a series of experimental recordings called the Fiepblatter Catalogue on Thrill Jockey Records. He has been a visiting lecturer in the Art, Culture and Technology program at the Massachusetts Institute of Technology (MIT), and he holds a position as professor for Interactive Media and Dynamic Acoustic Research at the Academy of Fine Arts Nuremberg, Germany.

More at http://fiepblatter.com/

//