Archive for Computational Media

Foreground Subtraction (a visual mockup)

// November 18th, 2009 // No Comments » // Computational Media

I have revised the steps I broke down for this project. While I work through programming them, I have put together a visual mockup of how I plan it to look. First is a timelapse of a space to illustrate the motion involved.

Next is a video illustrating the stages of how the foreground will be subtracted and the background reconstructed.

The steps I outlined in my previous post had one point that needed revising. In determining a threshold to compare pixel values against, it is easier to establish the threshold from a reference frame than to compare each image to all of the others. So now I plan on breaking the process into two stages. The first will average each pixel across all of the images and write the averaged value to the screen. The second will then determine the mean of each pixel using only the values that fall below the threshold (with the threshold measured relative to the averaged pixels from the first stage).
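
The two stages could be sketched roughly like this. This is a hypothetical Python illustration using plain lists of grayscale values in place of Processing's `pixels[]` array; the function names and the toy frame data are my own, not from the actual sketch.

```python
def average_pixels(frames):
    """Stage 1: average each pixel position across all frames."""
    n = len(frames)
    return [sum(frame[i] for frame in frames) / n
            for i in range(len(frames[0]))]

def background_estimate(frames, reference, threshold):
    """Stage 2: for each pixel, average only the frame values whose
    difference from the reference (averaged) value is below the threshold."""
    result = []
    for i, ref in enumerate(reference):
        close = [frame[i] for frame in frames if abs(frame[i] - ref) <= threshold]
        # Fall back to the reference value if no frame value is close enough.
        result.append(sum(close) / len(close) if close else ref)
    return result

# Three 4-pixel "frames": a bright foreground blob moves across a dark background.
frames = [
    [10, 200, 12, 11],
    [10, 11, 210, 11],
    [205, 11, 12, 11],
]
reference = average_pixels(frames)
background = background_estimate(frames, reference, threshold=70)
```

The outliers (the moving blob) sit far from the per-pixel average, so they are excluded from the second-stage mean and only the static background values survive.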

Steps towards a larger project

// October 28th, 2009 // No Comments » // Computational Media

I have an idea for a project I don’t yet know how to make. To help me get there, I figured I would break down each of the tasks I need to figure out and start working through them individually. I’ll get to that below, but first, a bit of explanation of what the larger idea is all about.

Before getting to the technical side of things, I’ll describe some of the conceptual ideas. I would like to construct a system where I am able to edit the motion out of a scene (i.e., people moving) and create a static image devoid of any of its inhabitants. Think of an image of Grand Central Station in New York completely empty on a Saturday afternoon. Or, as another example, this clip from Vanilla Sky depicts Times Square completely empty.

I have envisioned the applications of this in a number of ways. The first is that, through a series of time-lapse photographs, a new photo could be constructed of a place that may never otherwise be empty. The system could also be integrated with live video so that users can see themselves alone in these same spaces. Lastly, I would like to feed the system footage from movie scenes that have locked-off shots. The static image created from such a scene could then be used for further background subtraction, removing the characters and allowing users to insert themselves into the scene.

As for the technical implementation, here is a brief explanation of how I imagine it working. The Processing sketch will read and store all of the pixel data for each image. Once the sketch has cycled through the array of images, it will compare values across frames, keep the pixels that showed minimal change, and check whether there is enough information to construct a new image from them. If there are any gaps, it will continue reading images until it can construct an image without any; once there are none, the new image is displayed/saved.
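
The gap-filling loop could look something like this. Again a hypothetical Python sketch on 1-D grayscale "frames" rather than real Processing pixel arrays; "stable" here means two consecutive frames agree within a tolerance I made up for illustration.

```python
def build_static_image(frames, tolerance=5):
    """Fill each pixel once it stops changing between consecutive frames,
    and stop reading frames as soon as no gaps remain."""
    width = len(frames[0])
    result = [None] * width
    prev = frames[0]
    for frame in frames[1:]:
        for i in range(width):
            if result[i] is None and abs(frame[i] - prev[i]) <= tolerance:
                result[i] = frame[i]
        if all(v is not None for v in result):
            break  # no gaps left; no need to read further frames
        prev = frame
    return result

# The middle pixel keeps changing until the last two frames, so the loop
# has to read the whole sequence before it can finish.
frames = [
    [10, 200, 12],
    [10, 90, 12],
    [10, 20, 12],
    [10, 20, 12],
]
result = build_static_image(frames)
```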

Breaking this down further into smaller steps I can work through for now, I have listed a few key elements to familiarize myself with:

  1. Learn how to create an array of images.
  2. Learn how to store an image.
  3. Learn how to implement background subtraction.

Working my way through this list, I first created an array of images. I used just two to start, alternating between a frame with an empty background and a frame with a subject in the foreground.


You can see the sketch in action here.

The next step was to figure out background subtraction. I learned a few things through this process. One was how to store an image off screen and in memory. The other, which took a lot of trial and error to figure out, is that I may need to use an external library to handle the background subtraction, as I couldn’t quite get the results I was hoping for. I also experimented with the process a bit to do some foreground subtraction and some frame differencing. Here is a video of background subtraction (code), foreground subtraction (code) and frame differencing (code).
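
For anyone unfamiliar with the three operations, here is a hypothetical grayscale version of each in Python (the real sketches work on Processing pixel arrays; thresholds and sample values here are invented for illustration):

```python
def background_subtraction(frame, background, threshold):
    """Keep pixels that differ from a stored background: the foreground."""
    return [p if abs(p - b) > threshold else 0
            for p, b in zip(frame, background)]

def foreground_subtraction(frame, background, threshold):
    """The inverse: keep pixels that match the background, blank the rest."""
    return [p if abs(p - b) <= threshold else 0
            for p, b in zip(frame, background)]

def frame_difference(frame, previous, threshold):
    """Mark pixels that changed between two consecutive frames."""
    return [255 if abs(p - q) > threshold else 0
            for p, q in zip(frame, previous)]

background = [10, 10, 10, 10]
frame      = [10, 200, 10, 180]

foreground = background_subtraction(frame, background, 30)
static     = foreground_subtraction(frame, background, 30)
changed    = frame_difference(frame, background, 30)
```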

Working my way through this, I put together a new list, this one covering steps for the actual programming:

  1. construct image array of 500 images
  2. load first 5 images of array into buffer
  3. analyze pixels across all 5 images above and below threshold
  4. write each pixel consistently below the threshold in all 5 images to a new image on the screen
  5. load next 5 images in array and write any pixels below threshold that haven’t already been written to the new image
  6. repeat step 5 until either the new image has all pixels in the image filled, or the end of the image array has been reached
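
The six steps above could be sketched like this in Python. This is a hypothetical walk-through, not the actual sketch: the images are 1-D lists of grayscale values, and the frame counts are scaled down from 500/5 so the buffering logic is easy to follow.

```python
def reconstruct(images, batch=5, threshold=10):
    """Steps 1-6: fill a new image from batches of frames, writing a pixel
    only once it stays within the threshold band across a whole batch."""
    result = [None] * len(images[0])
    # Steps 2 and 5: load the array one batch at a time.
    for start in range(0, len(images), batch):
        buffer = images[start:start + batch]
        if len(buffer) < batch:
            break  # end of the image array reached
        for i in range(len(result)):
            values = [img[i] for img in buffer]
            # Steps 3-4: write pixels that are consistent across the batch
            # and haven't already been written.
            if result[i] is None and max(values) - min(values) <= threshold:
                result[i] = sum(values) / len(values)
        # Step 6: stop early once every pixel has been filled.
        if all(v is not None for v in result):
            break
    return result

# Pixel 0 is stable in the first batch; pixel 1 only settles in the second.
images = [[10, v] for v in (50, 100, 150, 200, 250)] + [[10, 30]] * 5
result = reconstruct(images)
```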

Frankenstein Mirror

// October 11th, 2009 // No Comments » // Computational Media

In ICM this week we learned how to parse text using strings. I’ve also been playing around with camera input, so I decided to make a text mirror, drawing from class examples and using the text of Mary Shelley’s Frankenstein for the mirror. Video of it in action:

Code can be found here if you want to test it out in Processing.

Experimenting with spirals v1

// October 4th, 2009 // No Comments » // Computational Media

I’ve started playing with spirals lately, having a few ideas of what to do with them down the line.  Homework this week was to create a sketch using arrays and loops to create multiple instances of an object. I integrated some of my first iterations of these experiments to put together this:

Pixel Distortion

// September 27th, 2009 // No Comments » // Computational Media

This week in ICM we were asked to create a sketch that made use of functions and objects. I decided to go a bit more abstract on this one. The sketch is below. The lines are drawn from the center out, with a random displacement of -1 to 1 between each. A second instance of the object is drawn one pixel up and to the right when any key is pressed. It produces a crazy glitching effect because it is drawn on keypress instead of in the draw loop. As for the strange patterns within the lines, I can’t fully explain what’s going on there. From what I do know, they are caused by the lines being too dense for the resolution of the pixels. Also, because they are thin lines drawn at an angle across the grid, the lines have jagged parts due to their adherence to the grid. The proximity of these jagged parts to each other is what creates the patterns.


You can try out the sketch and see the code.

Particle Emitter

// September 21st, 2009 // No Comments » // Computational Media

This week in class I was working with Corrie on a particle emitter, and I learned some fun new things from her in the process. The sketch emits particles at the mouse coordinates and toggles gravity on and off with a mouse click. You can see it in action below, or check out the sketch to play around with it.
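
The core update logic of an emitter like this could be sketched as follows. This is a hypothetical, graphics-free Python version (the real sketch is in Processing); the class name, velocity ranges, and gravity constant are all my own invention for illustration.

```python
import random

class Emitter:
    def __init__(self):
        self.particles = []    # each particle: [x, y, vx, vy]
        self.gravity_on = False

    def toggle_gravity(self):
        """Mirrors the mouse-click gravity toggle in the sketch."""
        self.gravity_on = not self.gravity_on

    def emit(self, x, y):
        """Spawn a particle at the given (mouse) position with a small
        random velocity."""
        self.particles.append([x, y, random.uniform(-1, 1), random.uniform(-1, 1)])

    def update(self, gravity=0.1):
        """One frame of motion: optionally accelerate downward, then move."""
        for p in self.particles:
            if self.gravity_on:
                p[3] += gravity
            p[0] += p[2]
            p[1] += p[3]

# Per frame, the sketch would emit at the mouse position and update:
e = Emitter()
e.emit(50, 50)
e.update()
```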
