Scrapple - Golan Levin (US)

Scrapple is an installation that offers a playful way to create music simply by shifting around objects and wind-up toys. Visitors can arrange the objects set up on a three-meter-long table any way they like, take some away, or let the wind-up toys find their own resting place. The installation interprets the resulting arrangement as the notes of a musical score, whereby the positions of the individual elements along the horizontal and vertical axes determine rhythms and pitches. The score is scanned at regular intervals and the notation created by the user is “played back.”
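
The position-to-note mapping described above can be sketched in a few lines (a minimal sketch: the loop length, pitch range and function names are assumptions, and the actual installation derives object positions from computer vision on a camera image):

```python
# Sketch: map object positions on a table to notes in a looping score.
# Hypothetical simplification of Scrapple's scanning principle:
# x position -> time slot within the loop, y position -> pitch.

TABLE_LENGTH_M = 3.0          # three-meter table (from the description)
LOOP_SECONDS = 4.0            # length of one scan cycle (assumed)
LOW_MIDI, HIGH_MIDI = 36, 84  # pitch range (assumed)

def objects_to_notes(objects, table_width_m=1.0):
    """objects: list of (x_m, y_m) positions detected on the table.
    Returns (onset_seconds, midi_pitch) pairs, earliest first."""
    notes = []
    for x, y in objects:
        onset = (x / TABLE_LENGTH_M) * LOOP_SECONDS
        pitch = round(LOW_MIDI + (y / table_width_m) * (HIGH_MIDI - LOW_MIDI))
        notes.append((round(onset, 3), pitch))
    return sorted(notes)

print(objects_to_notes([(0.0, 0.0), (1.5, 0.5), (3.0, 1.0)]))
```

Re-running the scan on every cycle makes any rearrangement of the objects audible on the next pass.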

Scrapple realizes one of the screen-based audiovisual environments that Golan Levin conceived in 2000 for his performance Scribble, in which visitors painted animated graphic symbols onto a diagram that served as a musical score.

Created Time: 2005

Last Clock - Jussi Ängeslevä (FIN) & Ross Cooper (UK)

Last Clock displays the history and rhythm of a space. The application connects a digital camera to a digital display. Time is represented as an analogue clock whose three hands leave trails that paint the clock face with the camera feed, creating an easy-to-read mandala of archived time. Last Clock shows what has been happening in the space over the last minute, the last hour and the last 12 hours. The video feed can be any live source: a camera mounted on the clock itself looking at what is happening in front of it, a remote camera streamed over the internet, or a television signal. The clock can thus display local space, remote space or media space respectively. As an installation, the system serves as a living aesthetic element that reacts to the usage and activity of the space.
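
The painting principle can be sketched as follows (hypothetical names; the actual application renders wedges of live video, but the timing logic is just clock arithmetic):

```python
# Sketch of Last Clock's core idea (assumed details): three concentric
# rings act as second / minute / 12-hour hands; each frame, the current
# camera image is painted into a thin wedge at each hand's current angle,
# so the rings hold the last 60 s, 60 min and 12 h of video respectively.
import math

def hand_angles(t_seconds):
    """Return (seconds, minutes, hours12) hand angles in radians,
    measured clockwise from 12 o'clock, for a time t in seconds."""
    sec = (t_seconds % 60) / 60
    minute = (t_seconds % 3600) / 3600
    hour12 = (t_seconds % (12 * 3600)) / (12 * 3600)
    return tuple(2 * math.pi * f for f in (sec, minute, hour12))

# At each angle, a renderer would copy a narrow wedge of the live frame
# into the corresponding ring of the clock-face image, overwriting the
# wedge written one full revolution earlier.
a_sec, a_min, a_hr = hand_angles(15 * 60)  # 15 minutes past the hour
print(a_sec, a_min, a_hr)
```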

Created Time: 2002

Touchscreen - Anna Anders (DE) & Klaus Gasteier (DE)

The interactive installation Touchscreen addresses contact and the desire to touch, and plays with the expectations of the viewers.

Visitors are asked to touch the screen integrated into the wall. Each area of the monitor has been assigned different video and/or audio sequences. Yet even after playing for a while, the user still cannot determine a fixed course and finds himself on a constantly changing emotional roller-coaster. Sometimes he becomes an object of desire and is asked to touch the screen; this spot is then kissed whimsically by a young beauty. At other times the user is bombarded with insults for being an aggressive grabber and is sent away.

Aside from the video sequences there are purely audio parts. One navigates through sounds such as hissing, ringing glass, chortles, a clearing of the throat, crackling, pings and pongs. Or a virtual finger will follow the user’s real finger as it runs over the screen.

Created Time: 1998

Perfect Time - h.o (JP)

Perfect Time calls for active involvement on the part of the installation visitor, without whom the interface between virtuality and reality fails to take shape and the virtual world remains inaccessible.
When the visitor fills the installation with quartz sand, a fine wall of tiny trickling particles forms, upon which the projected images appear. But as soon as someone touches this projection surface of trickling sand, the images disappear.

The central consideration of this installation is time. We exist amidst the constant passage of time but we are unable to make this tangibly comprehensible. In Perfect Time, time is depicted as images of illusion on trickling sand. What is imparted thereby is the impression that one might be able to physically touch them. But if one does so, all that remains is the sand.

Created Time: 2004

Telematic Dreaming - Paul Sermon (UK)

In Telematic Dreaming two double beds are connected by video link-up: one in a blacked-out space, the other in an illuminated space. The bed in the brightly lit location has a camera mounted directly above it, sending a live video image of the person lying on it to a video projector above the other bed in the blacked-out location. There, the live image is projected down onto the bed, on which another person lies. A second camera sends an image back to the first bed, so that the projected person can watch - and interact with - visitors approaching the bed. The telepresent image functions like a mirror that reflects one person within another person's reality.
Telematic Dreaming deliberately plays with the ambiguous connotations of a bed as a telepresent projection surface. The ability to exist outside the user's own space and time is created by an alarmingly real sense of touch, enhanced by the context of the bed and caused by an acute shift of the senses within the participant. In Telematic Dreaming the user exchanges the tactile sense of touch by replacing their hands with their eyes.

Created Time: 1992

s.h.e. - Nataša Teofilović (RS)

s.h.e. is an interactive ambience composed of an installation and a 3D character animation.

The work is ambient because the audience and the architecture of the space are key ingredients of the setup. The screens are set apart, so that the virtual actresses, who pass from screen to screen, also pass through the real space. The two spaces overlap – the virtual with the real.
Conceptually, the work is based on an analysis of virtual space and the virtual actor.

In the void-space, the virtuals are left to explore their own virtual identities. That’s why they knock on the insides of the screens, touch the boundary (the edge) of the picture, they enter their own virtual bodies or meet themselves. They are “skins”, virtual membranes without any organs, with animated shaders.

The personal, the private and the self-portrait are built into the work. The virtual actresses' portraits were derived from the artist's face; the digital lighting was captured from the artist's environment; the sound is that of knocking on the screens on which they were created.

Author: Nataša Teofilović: concept, character design, 3D animation, editing, sound
Collaborator: Raka Jovanović: communication and video synchronization software

Created Time: 2006

Key Grip - Justin Manor (US)

The Key Grip project is an attempt to combine the entertainment and expressive possibilities of television, video gaming and audiovisual performance in a single platform. Using an arcade gamepad, the user can manipulate live camera streams in a fully three-dimensional environment; video can be scratched, looped and extruded into an expansive virtual space.

Credits: Justin Manor (US) in collaboration with Ars Electronica Futurelab (AT)

Created Time: 2003

Sur la table - Osman Khan (US)

Sur la table revisits the domestic situation of the table. Events that normally occur on/over a table (the placing of objects, the eating of food, hand gestures, etc.) are amplified through projection and become the basis for interactivity, ultimately changing the visitor's relation to the table.
The installation setup is as follows: a camera placed above the table captures events occurring on/over it and sends them to a computer, where customized software processes the image so that non-white objects appear to stream their color downward. This processed image is then projected back onto the table. Thus a running history of events over the table is visualized as a continuous flow of images down its surface.
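
A minimal sketch of the downward color-streaming step (the whiteness threshold and names are assumptions; the actual software processes live camera frames, applying this step continuously so trails accumulate):

```python
# Sketch of the color-streaming effect: for each frame, any sufficiently
# non-white pixel bleeds its color into the pixel directly below it, so
# objects leave trails flowing "down" the table. Plain nested lists are
# used here to keep the sketch dependency-free.

WHITE_THRESHOLD = 230  # per-channel value above which a pixel counts as white

def is_white(px):
    return all(c >= WHITE_THRESHOLD for c in px)

def stream_down(frame):
    """frame: rows x cols of (r, g, b) tuples. Returns a new frame where
    every non-white pixel also paints the pixel below it."""
    out = [row[:] for row in frame]
    for r in range(len(frame) - 1):
        for c in range(len(frame[r])):
            if not is_white(frame[r][c]):
                out[r + 1][c] = frame[r][c]  # bleed color downward
    return out

frame = [[(255, 255, 255), (200, 30, 30)],
         [(255, 255, 255), (255, 255, 255)]]
print(stream_down(frame)[1][1])  # the red pixel has bled down one row
```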
The installation also explores the notion of consumption. Brightly colored food and drink (which bleed their colors down the table) are placed on otherwise austere white tables. Visitors are encouraged to eat and drink, and in doing so consume the visually enticing elements (the colored foods) on the table, returning the tables to their blank state and thus ending the installation's visually dynamic performance.

Created Time: 2003

Diorama Table - Keiko Takahashi (JP)

Diorama Table offers novel possibilities to experiment with virtual images and objects used in everyday life. Visitors place everyday objects like cups, short lengths of rope or pieces of candy on to the table, and they’re immediately integrated into the virtual environment. The lengths of rope, for instance, are transformed into railroad tracks on which virtual trains travel. Houses, trees or a pond emerge around a fork or a saucer. In this way, real objects and images from the realm of fantasy merge with one another.

Credits: Keiko Takahashi (Concept), Project Team: Taku Oizumi, Takahide Mikami, Shinji Sasada (program), Saburo Ubukata (sound) / Japan Electronics College.

Created Time: 2007

Freqtric Drums - Tetsuaki Baba (JP)

Freqtric Drums is a new corporal electronic musical instrument that not only helps us recover face-to-face communication but also makes body-to-body communication possible, so that a self-image based on the sense of being a separate body can be significantly altered through an openness to, and even a sense of becoming part of, another body. Freqtric Drums is a device that turns the audience surrounding a performer into drums, so that the performer, as drummer, can communicate with audience members as if they were a drum set.

Created Time: 2007

Spacequatica - The Sancho Plan (UK)

Through the careful combination of animation, sound, music and technology, The Sancho Plan create fantastical worlds in which animated musical characters are triggered by a variety of electronic drum pads.
Their new Spacequatica installation invites you to explore an audiovisual aquarium populated by a varied cast of musical sea creatures. Visually and sonically, the user plays along a descending journey - from the surface, where schools of small exotic creatures can be performed like phasing xylophones, through the deeper waters populated by dangerous robotic sharks, and on to the pitch black depths, where all that can be seen and heard are rare self-illuminating species occasionally blinking out of the darkness.

The Sancho Plan are an audiovisual band who explore the real-time interaction between music and video and its potential for narrative and storytelling. A mix of musicians, animators, designers and computer programmers, The Sancho Plan aim to present audiences with unforgettable, interactive entertainment experiences.
The Sancho Plan are:
Ben Bryant: Performer
Ed Cookson: Director, Audio, Visuals, Performer
Edward Dawson-Taylor: Character Visuals
Joel Farland: Performer
Adam Hoyle: System Architect
Lewis Sykes: Audio, Performer
Olly Venning: Character Visuals

Created Time: 2007

Moony - Akio Kamisato (JP), Takehisa Mashimo (JP), Satoshi Shibata (JP)

Images of virtual butterflies are projected into the steam.
When a visitor touches a butterfly perched on the column, it flies up. Additionally, if the visitor tries to catch the flying butterflies, the butterflies will fly away and disappear from view.

If the visitor holds a hand up to the steam for a while, butterflies flock around the hand and begin to play. The steam emerging from the upper part of the column serves as both interface and screen; its warmth evokes the warmth of a living thing.
The “butterfly” is thus treated as an organic existence born from an inorganic one.

Created Time: 2004

drawn - Zachary Lieberman (US)

This project presents a whimsical scenario in which painted ink forms appear to come to life, rising off the page and interacting with the very hands that drew them. Inspired by early filmic “lightning sketches,” in which stop-motion animation techniques were used to create the illusion of drawings escaping the page, drawn presents a modern update: custom-developed software alters a video signal in real time, creating a seamless, organic and even magical world of spontaneous and improvised performance of hand and ink.

By turning simple brushstrokes of ink into complex and energetic life forms, drawn delights with simple truths: the musicality and immediacy of drawing, and the playful joy of magic.

Created Time: 2005-2006

Tool’s Life - minim++ (JP)

When useful physical objects are gently touched, their shadow-like silhouettes magically begin to move and can assume a wide variety of forms. The objects themselves do not change their shape, whereas their shadows reveal their true character or their secret wishes. Tool's Life not only illustrates the function of these objects; it also brings out background factors that usually go unnoticed and the various significances of the objects' use.

In Japanese "kage" is the shadow that appears on the ground behind something that blocks the light; it's the shade on a thing where light does not reach; it's the silhouette that is projected onto a wall; it's the "shadow" that symbolizes a thing's very existence.
At first glance, a "kage" may seem to be a mere imitation of a thing—that which projects only outline and external shape. But, at times, it can highlight the important aspects of a thing and reveal its intrinsic quality. In this respect, it is very much like a fragment of a memory that has already started to fade.

Credits: minim++ is Kyoko Kunoh (JP), Motoshi Chikamori (JP)

Created Time: 2002

musicBottles - Hiroshi Ishii and Tangible Media Group (MIT)

Three sets of bottles filled with jazz, techno and classical music explore the transparency of the interface. The glass bottles serve as containers and controllers of digital information, accessed by opening and closing their lids. The project seeks to create emotional value of a kind different from conventional function-centric interfaces.
Through the seamless extension of physical affordances and the metaphor of bottles, this project explores interface transparency: just as we naturally open and close lids to access a bottle's physical contents, here users open and close lids to access digital information.

For this project, a custom wireless sensing technology was developed. An antenna coil attached to the underside of the table creates a magnetic field above the table. A custom electronic circuit detects disturbances in this magnetic field that are caused by the placement and opening of tagged bottles. The system then executes musical programs for each bottle (e.g. opening one bottle plays a piano) and controls the patterns of colored LED light projected onto the table. This project uses a combination of artistic and technological techniques to support emotional interactions that are fundamentally different from conventional, function-centric interfaces.
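
The bottle-to-music mapping can be sketched as a simple event handler (the instrument assignment and class names are assumptions; in the installation the lid events come from the magnetic-field sensing described above):

```python
# Sketch: each tagged bottle acts as an on/off controller; opening its
# lid unmutes the corresponding instrument track, closing it mutes the
# track again, so the ensemble stays in sync regardless of lid order.

class BottleMixer:
    def __init__(self, assignment):
        self.assignment = assignment  # bottle id -> instrument name
        self.open = set()             # ids of currently open bottles

    def on_lid(self, bottle_id, opened):
        """Called by the sensing layer on every lid event."""
        if opened:
            self.open.add(bottle_id)
        else:
            self.open.discard(bottle_id)

    def audible(self):
        """Instruments that should currently be heard."""
        return sorted(self.assignment[b] for b in self.open)

mixer = BottleMixer({1: "piano", 2: "bass", 3: "drums"})
mixer.on_lid(1, True)
mixer.on_lid(3, True)
print(mixer.audible())
```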

Credits: Developed at MIT (Hiroshi Ishii (US) and Rich Fletcher (US)) in collaboration with Dr. Joe Paradiso (sensor technology) (US)

Created Time: 1998

Life Spacies II - Christa Sommerer (AT) and Laurent Mignonneau (FR)

Life Spacies II is an interactive artificial-life environment in which users create artificial creatures by typing text messages. The text characters function as genetic code for the creature's design: each different text creates a different creature. Depending on their construction, creatures move quickly or slowly. They roam about and try to eat text characters, feeding on the same characters their genetic code is made of. Once a creature has eaten enough characters, it can look for a partner and mate. Offspring resemble their parents, as their genetic code is the product of artificial evolution.
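
The text-as-genome idea can be illustrated with a toy mapping (purely hypothetical parameters; Sommerer and Mignonneau's actual genotype-to-phenotype translation is far richer):

```python
# Toy illustration of the text-as-genetic-code idea: character codes
# deterministically set a few body parameters, so the same text always
# produces the same creature and similar texts produce similar ones.
# The "diet" field mirrors the rule that creatures eat the characters
# their own genetic code is made of.

def creature_from_text(text):
    codes = [ord(ch) for ch in text]
    return {
        "segments": 3 + sum(codes) % 8,                   # body length
        "limbs": 2 * (1 + codes[0] % 4) if codes else 0,  # from first letter
        "speed": round(0.5 + (sum(codes) % 100) / 100, 2),
        "diet": sorted(set(text)),  # characters this creature can eat
    }

print(creature_from_text("hello"))
```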

Credits: In the collections of the NTT InterCommunication Center (ICC), Japan, and MoMA, New York

Created Time: 1997-99

Augmented Sculpture v 1.3 - Pablo Valbuena (ES)

This project focuses on the temporal quality of space, investigating space-time not only as a three-dimensional environment but as space in transformation.
For this purpose, two layers are produced that explore different aspects of space-time reality.

On the one hand there is the physical layer, which controls the real space and shapes the volumetric base that serves as support for the next level.
The second level is a virtual projected layer that allows the transformation and sequentiality of space-time to be controlled.

The blending of both levels gives the impression of physical geometry capable of being transformed.
The overlapping produces a Euclidean three-dimensional space augmented by a transformable, controllable layer, giving the installation the capacity to alter multiple dimensions of space-time.

These ideas come to life in an abstract, geometric envelope, enhanced with synesthetic audio elements and establishing a dialogue with the observer.

Credits: The first stage of this project was developed during Interactivos 2007 workshop at Medialab-Prado (Madrid).

Created Time: 2007

La Pâte à Son - LeCielEstBleu (FR)

La Pâte à Son – which can be translated as ‘sound dough’ – is a sound toy and compositional tool conceived to encourage musical experimentation.

In the Pâte à Son factory interface, two reservoirs of dough generate a continuous flow of musical loops, from scales to simple tunes. The goal is to create music by making a mess of the established order.

Users can divert and direct the musical flow by placing pipes from the conveyor belt on to the central checkerboard above. In addition to neutral transporter pipes, there are eleven instrumental pipes that give voice – a flute, a guitar, a human singing… – to the silent notes.

Finally, switches allow users to create closed circuits with multiple layers of notes that create musically complex compositions.

Credits: LeCielEstBleu are Frédéric Durieu, Kristine Malden, Jean-Jacques Birgé, Thierry Laval

Created Time: 2004

metaField Maze - Bill Keays (CAN)

The metaField Maze is a virtual, room-sized recreation of the traditional marble maze game. Instead of using knobs to control the play, the player walks over a projected 3D model of the game. When the player steps left or right, the model tips and the ball moves accordingly. The player must attempt to run the full course of the maze while avoiding the holes.
The metaField Maze is a highly compelling interactive installation. Because players use their whole bodies to interact and the objective of the game is familiar if not obvious, they immediately immerse themselves in play, leaving behind all notions of interface and technology.
The metaField Maze has been exhibited at SIGGRAPH, Ars Electronica, the Boston Museum of Science, the MIT Museum, the London Millennium Dome, the Siemens Forum in Berlin and the Interaction Biennale in Japan.

Credits: Created by Bill Keays at the MIT Media Lab as a research project in Ron MacNeil's Intelligent Graphics Group. Assistance by Tim McNerney and John Underkoffler.

Created Time: 1998

Noise and Voice - Tmema (US) and Ars Electronica Futurelab (AT)

Noise and Voice is an interactive audiovisual installation focused on the magical relationship of speech to the ethereal medium that conveys it. Participants are able to "see" each other's voices, made visible as animated graphic figurations that appear to emerge from the participants' mouths as they speak. When a user speaks or sings, lively and colorful forms appear, assuming a wide variety of shapes and behaviors tightly coupled to the unique qualities of the vocalist's volume, pitch and timbre. Birthed by vocal actions, these artificial lifeforms gradually obtain independent lives of their own.

Credits: Tmema is Golan Levin (US) and Zachary Lieberman (US)

Created Time: 2002-2003

Gulliver’s World - Ars Electronica Futurelab (AT)

Gulliver's World is a multilevel, interactive edutainment platform. Here, anyone can explore his/her own world without the need for a mouse or a keyboard.
Just as in the world of huge Brobdingnagians and tiny Lilliputians in Jonathan Swift's “Gulliver’s Travels”, the play of scale and relation shatters accustomed modes of seeing.
The work consists of several modules: in the Recording Area, visitors can make recordings of themselves that are played back in the Stage Area. Users can also select one of three different environments for a particular scene. With a camera object, the scene is captured from the perspective of the figures and displayed on a screen.
The possibility of observing and manipulating the scene from a position overlooking it—or from any other desired angle—seems to be unique in a media art context. The result of this project is an infrastructure that provides artists with new possibilities to transport audiovisual information, and ought to encourage creative artists in every discipline to work with these new approaches.

Created Time: 2003

Innovision Wall - Ars Electronica Futurelab (AT)

Inscribe a message into Ars Electronica's digital guest book. Innovision Wall is the Information Age's answer to the blackboard.
A tablet PC and a digital camera make it possible for any visitor to the Ars Electronica exhibition to create a personal entry on Ars Electronica's blackboard. The tablet PC's intelligent screen lets visitors compose notes and pictograms and customize photos with scribbles.
These messages then appear in the form of Post-its and Polaroids on digital bulletin boards. The result is a colorful info-collage that reflects, in highly personalized fashion, visitors' impressions of the exhibition “Ars Electronica – Get in Touch”. The communiqués can evoke spontaneous reactions from other visitors and thus bring forth new, individualized communicative phenomena.

Created Time: 2004

archive wall - Ars Electronica Futurelab (AT)

28 years of Ars Electronica means 28 years of innovative projects, state-of-the-art technology, creative works of cyberart and astounding future-oriented concepts. Now, visitors to the exhibition can experience these nearly three decades of development in new media and media art.

This dynamic large-format wall diagram serves as a portal both to Ars Electronica's digital project archive and to documentation of the most important events that took place in the cultural environment of Ars Electronica. The Datapool, displayed in the form of a semantic network, provides windows onto an extensive video archive featuring a collage of festival documentation and the best works of computer animation from the history of the Prix Ars Electronica.

Semantic networks have become a general paradigm for knowledge representation beyond their traditional application in science and technology. Under the rubric of the “iconic turn”, a shift from linear textual narration to diagrammatic reasoning has been taking place; disciplines such as art history, image science and philosophy increasingly use semantic networks and diagrammatic methods in their work.

Created Time: 2007

LibroVision - Ars Electronica Futurelab (AT)

LibroVision enables the user to page through a virtual book and to zoom in on or shift particular portions of a page via simple hand motions. Users’ gestures are registered by a video camera, interpreted in real time, and the resulting data is forwarded to the application’s control module. This creates the linkage between the virtual book’s contents and the user situated in real space. Additional videos or hyperlinks can also be launched solely by means of gestures. An invisible human-computer interface makes for an intuitive process of information exchange with the medium.

Created Time: 2004

Animations on demand - Ars Electronica Futurelab (AT)

Animations on demand will open up the archive of the annual Prix Ars Electronica competition and offer a selection of the best animations previously submitted.
Instead of the viewer having to follow a relentlessly rigid programme schedule, this time the video programme follows the wishes of the viewer: by placing his hands on the projected tabletop of a table furnished with sensors, the visitor can decide for himself which films he would like to see.

Created Time: 2004