studies on algorithmic video

This project consists of a series of studies on algorithmic video. They investigate the opportunities for dynamic spatial montage offered by digital video. Found footage excerpts are fed to different algorithms that manipulate the pixels of the videos in real time.

The algorithmic videos SLIT SCAN and TIME SCULPTURE experiment with the temporal dimension of video: they simultaneously visualise areas of past and future frames of the same clip. The projects REAL-TIME VIDEO MIX and TU/TWO investigate interactive ways to compose and mix multiple videos in real time.


REAL-TIME VIDEO MIX

In REAL-TIME VIDEO MIX, areas of one video are replaced by the corresponding areas of another video in real time, according to the selections of the viewer.
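The original runs in openFrameworks; the core pixel operation can be sketched as follows, treating frames as numpy arrays and the viewer's selection as a boolean mask (all names and shapes here are invented for illustration):

```python
import numpy as np

def mix_regions(frame_a, frame_b, selection_mask):
    """Replace the selected areas of frame_a with the
    corresponding pixels of frame_b (illustrative sketch)."""
    out = frame_a.copy()
    out[selection_mask] = frame_b[selection_mask]
    return out

# Two tiny solid-colour "frames" and a rectangular selection
a = np.zeros((4, 4), dtype=np.uint8)        # dark video
b = np.full((4, 4), 255, dtype=np.uint8)    # bright video
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                       # the viewer's selection

mixed = mix_regions(a, b, mask)
```

In the installation this runs per frame, so the mask can follow the viewer's input continuously.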

Software: C++ (OpenFrameworks).
Found footage: ‘Dreams’ by Akira Kurosawa, ‘Outer Space’ by Peter Tscherkassky.
Watch video :: http://www.youtube.com/watch?v=F01_fc4UplI


SLIT SCAN

In SLIT SCAN, fragments of past and future frames of the same video are visualised simultaneously. Every horizontal line of pixels shows the corresponding line from a different future frame. This transformation adds fluidity to the forms, freeing them from spatiotemporal constraints.
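The piece was made in Processing; a minimal numpy sketch of the row-per-frame mapping might look like this (frame stack and names assumed for illustration):

```python
import numpy as np

def slit_scan(frames, t):
    """Output row y is taken from frame t + y, so every
    horizontal line shows a different moment of the clip."""
    h = frames.shape[1]
    out = np.empty_like(frames[t])
    for y in range(h):
        # clamp at the last frame when t + y runs past the end
        out[y] = frames[min(t + y, len(frames) - 1)][y]
    return out

# 8 tiny frames, each filled with its own frame index
frames = np.stack([np.full((4, 4), i, dtype=np.uint8) for i in range(8)])
scanned = slit_scan(frames, 0)
```

Sliding `t` forward each frame animates the effect over the whole clip.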

Software: Processing.
Found footage: ‘Pas de Deux’ by Norman McLaren.
Watch video :: http://www.youtube.com/watch?v=TIXR2_i5FnI


TIME SCULPTURE

In TIME SCULPTURE, pixels of past and future frames of the same video are visualised simultaneously, in concentric circular areas around the point selected by the viewer.

Every future frame of the video is placed at a different depth, creating a virtual 3D volume out of 2D video frames. In this way, the time dimension is transformed into a third spatial dimension.
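The original is a Processing sketch; the ring-to-frame mapping can be sketched in numpy like this, where each concentric ring around the selected point samples a different temporal offset (function and parameter names are invented for illustration):

```python
import numpy as np

def time_sculpture(frames, t, cx, cy, ring_width=1):
    """Pixels at radius r from (cx, cy) come from frame
    t + r // ring_width: concentric rings show later moments."""
    h, w = frames.shape[1:3]
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.sqrt((xx - cx) ** 2 + (yy - cy) ** 2).astype(int)
    offset = np.clip(t + radius // ring_width, 0, len(frames) - 1)
    return frames[offset, yy, xx]   # per-pixel frame lookup

# 10 tiny frames, each filled with its own frame index
frames = np.stack([np.full((5, 5), i, dtype=np.uint8) for i in range(10)])
sculpted = time_sculpture(frames, 0, 2, 2)   # centre selected
```

The per-pixel lookup is what turns the time axis into a depth axis: rings further from the centre reach further into the clip.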

Software: Processing.
Found footage: A fragment of the movie ‘Mirror Mask’ by Dave McKean.
Watch photos :: http://www.flickr.com/photos/peqpez/sets/72157625851973277/


TU/TWO


In TU/TWO, the movement of the performer is used to create a dynamic mask that makes areas of different videos simultaneously visible.
A computer vision algorithm detects the movement of the performer and uses this information to process the image of multiple videos that are played back simultaneously on multiple layers. TU/TWO can be presented both as a performance and as an interactive installation, allowing the audience to experiment with the influence of their movement on an audiovisual composition.
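The piece was built in Processing; one common way to build such a movement mask is simple frame differencing on the camera image, sketched here in numpy (threshold value and all names are assumptions, not the project's actual code):

```python
import numpy as np

def motion_mask(prev_cam, cam, threshold=10):
    """Frame-differencing motion detector: pixels that changed
    by more than `threshold` count as movement."""
    diff = np.abs(cam.astype(int) - prev_cam.astype(int))
    return diff > threshold

def masked_mix(layer_a, layer_b, mask):
    """Movement reveals layer_b through layer_a."""
    out = layer_a.copy()
    out[mask] = layer_b[mask]
    return out

# Tiny camera frames: the "performer" moves in one corner
prev_cam = np.zeros((4, 4), dtype=np.uint8)
cam = prev_cam.copy()
cam[0, 0] = 200

mask = motion_mask(prev_cam, cam)
layer_a = np.zeros((4, 4), dtype=np.uint8)      # top video layer
layer_b = np.full((4, 4), 255, dtype=np.uint8)  # hidden video layer
composite = masked_mix(layer_a, layer_b, mask)
```

With more than two layers, the same mask logic can be chained so that movement peels through the stack of videos.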

Software: Processing.
Found footage: ‘La Antena’ (Esteban Sapir), ‘Baron Prasil’ (Karel Zeman).
Photos: http://www.flickr.com/photos/peqpez/sets/72157625977412098/
