Posts Tagged ‘video’

Nature of Bokeh – Proposal

// March 30th, 2010 // Nature of Code

[Image: noc_sample_small — a sample frame with blurred background lights]

For my Nature of Code final I'm thinking of creating a video in which a particle system interacts with the native particles from the source footage. Using the image above as an example, the blurred-out lights in the background would act as the primary set of particles. A second set of particles would then be introduced in Processing (using textures stripped out from the lights). Through a variety of physics simulations, the new set of particles would interact with the lights as well as any objects that are in focus on screen. The chair above, for instance, could be turned into a mask that the particles would treat as a boundary and react to accordingly.
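To make the mask idea concrete, here is a minimal pure-Processing sketch of particles treating a mask as a boundary. The mask file name, particle count, and brightness threshold are all hypothetical placeholders, not part of the actual project:

```
// Minimal sketch of the mask-as-boundary idea in Processing.
// "mask.png" is a hypothetical black-and-white image where white
// marks foreground objects (e.g. the chair) that particles bounce off.
PImage mask;
int n = 200;
float[] x = new float[n], y = new float[n];
float[] vx = new float[n], vy = new float[n];

void setup() {
  size(640, 360);
  mask = loadImage("mask.png");
  mask.resize(width, height);
  for (int i = 0; i < n; i++) {
    x[i] = random(width);
    y[i] = random(height);
    vx[i] = random(-2, 2);
    vy[i] = random(-2, 2);
  }
}

void draw() {
  background(0);
  for (int i = 0; i < n; i++) {
    float nx = x[i] + vx[i];
    float ny = y[i] + vy[i];
    // Treat bright mask pixels (and the frame edges) as walls.
    if (nx < 0 || nx >= width || bright(nx, y[i])) vx[i] *= -1;
    if (ny < 0 || ny >= height || bright(x[i], ny)) vy[i] *= -1;
    x[i] += vx[i];
    y[i] += vy[i];
    noStroke();
    fill(255, 200, 120, 180);
    ellipse(x[i], y[i], 8, 8);
  }
}

boolean bright(float px, float py) {
  color c = mask.get(int(constrain(px, 0, width - 1)),
                     int(constrain(py, 0, height - 1)));
  return brightness(c) > 127;  // hypothetical threshold
}
```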

Starting with a single shot, I will implement as much of the code as I need to get everything up and running. Once I reach that stage, I will start testing the code with new footage and introducing new features. One feature I'm considering is an audio-reactive element in the particles' behavior. I still have lots to do before I get there, though. I have started using OpenCV to see how I can track the particles with blob detection, and I am looking into the best way to handle the textures. I also need to decide how to handle the video workflow (GSVideo, or an image sequence?). While I do all of this, I will also be thinking about what type of footage will work best with the concept and what kind of foreground objects could help build a stronger conceptual framework.
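If I go the image-sequence route, the playback side could be as simple as the sketch below. The frame naming, count, and frame rate are placeholders for whatever the exported footage actually uses:

```
// Minimal image-sequence playback sketch in Processing.
// Assumes frames were exported as frame-0001.png, frame-0002.png, ...
// into the sketch's data folder; names and count are hypothetical.
int numFrames = 120;
PImage[] frames = new PImage[numFrames];
int current = 0;

void setup() {
  size(640, 360);
  frameRate(24);  // match the source footage's frame rate
  for (int i = 0; i < numFrames; i++) {
    frames[i] = loadImage("frame-" + nf(i + 1, 4) + ".png");
  }
}

void draw() {
  image(frames[current], 0, 0, width, height);
  // Particle drawing/compositing would happen here, on top of the frame.
  current = (current + 1) % numFrames;
}
```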

Daft Bodies (via SMS)

// February 9th, 2010 // Interactive Television

Last week in Live Experimental Interactive Television we were asked to create a site where users could interact with video via SMS. Teaming up with Adam Harvey, Edward Gordon, and Mustafa Bagdatli, we created the above video. Drawing on the ridiculousness of the Daft Bodies meme, we thought it would be funny to let users control the characters with their phones. One thing I thought worked well was the ability to send in a word via text and see that word acted out and sung with the typical Daft Punk vocoder sound. One drawback is scale: it worked well with a classroom full of people, but it might get out of control with thousands of viewers. A few ways to approach this problem might be to change the nature of the content, divide up the screen, and provide many more options. You could then construct some sort of game-based goal so that people could see the results of their actions more clearly.

The Social Spotlight

// November 4th, 2009 // Physical Computing

[Image: the Social Spotlight bench with its three spotlights]

The Social Spotlight was created in collaboration with Boris Klompus and Miriam Simun for our physical computing midterm. The project, consisting of a bench and three spotlights, sets the stage for social experimentation. Imagine how the setting would influence these experiments if it were installed somewhere like a subway platform.

As a user approaches the bench, the spotlight above the left seat lights up, enticing the user to sit there. This is stage one of the experiment. Would people want to sit in the spotlight, or feel they are supposed to? Would they be too shy, sitting in one of the other seats or avoiding the bench altogether? Once someone sits in the spotlight, that light turns off and the middle light turns on. The etiquette of this type of situation would normally suggest leaving an empty seat in between before sitting down; with the middle spotlight on, the bench suggests otherwise. With the middle seat taken, the last spotlight lights up. Once the bench is fully occupied, the light above the first person to sit down turns back on again. This leads to new observations. Does the person under the spotlight feel awkward? Do they feel as though they are meant to get up? How do the others on the bench respond? Our interest in how the answers to these questions change from user to user and from location to location was one of the biggest factors leading us to develop the project.
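The light sequence boils down to a few rules. As a rough illustration (this is not the actual microcontroller code from the bench, which used physical sensors; here seat input is faked with the 1-3 keys), a Processing simulation might look like this:

```
// Rough simulation of the bench's light sequence in Processing.
// Keys 1-3 toggle "someone sitting" on each seat. The rules follow
// the write-up: an empty bench lights seat 1; sitting there moves
// the light to seat 2; seat 2 taken lights seat 3; a full bench
// relights seat 1.
boolean[] occupied = new boolean[3];

void setup() {
  size(480, 200);
}

void draw() {
  background(30);
  boolean[] lit = new boolean[3];
  if (!occupied[0]) {
    lit[0] = true;                      // entice the first sitter
  } else if (!occupied[1]) {
    lit[1] = true;                      // suggest the middle seat
  } else if (!occupied[2]) {
    lit[2] = true;                      // then the last seat
  } else {
    lit[0] = true;                      // full bench: spotlight returns
  }
  for (int i = 0; i < 3; i++) {
    fill(lit[i] ? color(255, 240, 160) : color(60));
    ellipse(90 + i * 150, 50, 50, 50);  // spotlight
    fill(occupied[i] ? color(200, 80, 80) : color(90));
    rect(65 + i * 150, 110, 50, 50);    // seat
  }
}

void keyPressed() {
  if (key >= '1' && key <= '3') {
    int i = key - '1';
    occupied[i] = !occupied[i];
  }
}
```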

Below are a few more photos and a video demoing how it functions.

Frankenstein Mirror

// October 11th, 2009 // Computational Media

In ICM this week we learned how to parse text using strings. I've also been playing around with camera input, so I decided to make a text mirror, drawing from class examples and using the text of Mary Shelley's Frankenstein for the mirror. Video of it in action:

Code can be found here if you want to test it out in Processing.
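For a sense of the general approach, here is a rough sketch of the idea (not the linked code itself): sample the camera on a grid and draw successive characters from the novel, shaded and sized by each cell's brightness. It assumes the Processing video library is installed and a hypothetical "frankenstein.txt" (the novel's plain text) sits in the sketch's data folder:

```
// Rough approximation of the text-mirror idea (not the linked code).
// Requires the Processing video library; assumes "frankenstein.txt"
// is in the sketch's data folder.
import processing.video.*;

Capture cam;
String book;
int pos = 0;    // current position in the novel's text
int cell = 10;  // grid cell size in pixels

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
  book = join(loadStrings("frankenstein.txt"), " ");
  textAlign(CENTER, CENTER);
}

void draw() {
  if (cam.available()) cam.read();
  background(0);
  cam.loadPixels();
  for (int y = 0; y < height; y += cell) {
    for (int x = 0; x < width; x += cell) {
      // Flip horizontally so it behaves like a real mirror.
      color c = cam.pixels[y * width + (width - 1 - x)];
      float b = brightness(c);
      fill(b);
      textSize(map(b, 0, 255, 4, cell + 4));
      text(book.charAt(pos), x + cell / 2, y + cell / 2);
      pos = (pos + 1) % book.length();
    }
  }
}
```

Advancing pos across frames makes the novel flow continuously through the image rather than repeating the same passage in place.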