
AUDIOGRAPHY

Harvard Graduate School of Design | In collaboration with Scott Penman, Izgi Uygur | 2016

In “Audiography”, we explored alternative ways for humans to interact with a computer-controlled machine in real time in order to augment its end product. Our scenario pairs a primary input, the machine’s own “perception”, with a secondary input given by a human that modifies the machine’s output.

We implemented this scenario with a CNC drawing machine at the end point of the system, drawing an “image” augmented by “sound”. The process consists of three stages, seeing, hearing, and drawing: capturing the visual data, altering it, and recreating the visual narrative on the drawing machine. While the machine draws the contours of an image captured through the camera, the sounds in the environment, with frequency and volume as their numerical parameters, affect the drawing in an experiential way. We concluded the work with a series of user experiments to observe and assess human interaction with our multi-sensory machine.
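The sketch below is not the project’s original code; it is a minimal illustration of the described pipeline, assuming Python with OpenCV for the camera and PyAudio for the microphone. It extracts contours from a camera frame and displaces each contour point by the ambient sound’s volume and dominant frequency, the kind of path that could then be translated into machine moves. All function names and the specific displacement rule are hypothetical.

```python
# Illustrative sketch only -- assumes an OpenCV camera feed and a PyAudio
# microphone stream; the exact mapping from sound to displacement is invented.
import cv2
import numpy as np
import pyaudio

RATE, CHUNK = 44100, 1024

def sound_features(stream):
    """Return (volume in 0..1, dominant frequency in Hz) for one audio chunk."""
    data = np.frombuffer(stream.read(CHUNK, exception_on_overflow=False), dtype=np.int16)
    volume = np.abs(data).mean() / 32768.0
    spectrum = np.abs(np.fft.rfft(data))
    freq = np.fft.rfftfreq(len(data), 1.0 / RATE)[spectrum.argmax()]
    return volume, freq

def image_contours(frame):
    """Extract drawable contours from a camera frame via edge detection."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    return contours

def augment(contour, volume, freq):
    """Displace contour points: volume scales the wobble, frequency sets its period."""
    pts = contour.reshape(-1, 2).astype(float)
    t = np.arange(len(pts))
    wobble = volume * 20.0 * np.sin(2 * np.pi * (freq / 1000.0) * t)
    pts[:, 0] += wobble
    pts[:, 1] += wobble
    return pts

if __name__ == "__main__":
    pa = pyaudio.PyAudio()
    mic = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                  input=True, frames_per_buffer=CHUNK)
    cam = cv2.VideoCapture(0)
    ok, frame = cam.read()
    if ok:
        vol, freq = sound_features(mic)
        for c in image_contours(frame):
            path = augment(c, vol, freq)
            # each point in `path` would be converted to a machine move,
            # e.g. a G-code line such as "G1 X{x:.2f} Y{y:.2f}"
    cam.release()
    mic.stop_stream(); mic.close(); pa.terminate()
```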
