For my Nature of Code final I’m thinking of creating a video in which a particle system interacts with the native particles in the source footage. Using the image above as an example, the blurred-out lights in the background would act as the primary set of particles. A second set of particles would then be introduced in Processing (using textures stripped from the lights). Through a variety of physics simulations, the new particles would interact with the lights as well as any objects in focus on screen. The chair above, for instance, could be turned into a mask that the particles treat as a boundary and react to accordingly.
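The mask-as-boundary idea can be sketched in a few lines. This is a minimal, hypothetical version in plain Java rather than a full Processing sketch: it assumes the mask is a binary grid (here a hand-built wall standing in for the chair, rather than one extracted from footage), and it simply reflects a particle’s velocity when the next step would land inside a masked cell.

```java
public class MaskBounce {
    static final int W = 8, H = 8;
    // true = masked cell; in the real piece this would come from the footage
    static boolean[][] mask = new boolean[H][W];
    static { for (int y = 0; y < H; y++) mask[y][5] = true; } // stand-in "chair" wall at x = 5

    // a single particle: position and velocity
    static double px = 1, py = 4, vx = 1, vy = 0;

    static boolean masked(double x, double y) {
        int cx = (int) x, cy = (int) y;
        if (cx < 0 || cx >= W || cy < 0 || cy >= H) return true; // treat frame edges as solid
        return mask[cy][cx];
    }

    static void step() {
        if (masked(px + vx, py)) vx = -vx; // horizontal hit: flip x velocity
        if (masked(px, py + vy)) vy = -vy; // vertical hit: flip y velocity
        px += vx;
        py += vy;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 6; i++) {
            step();
            System.out.println("x=" + px + " y=" + py);
        }
    }
}
```

A fancier response (sliding along the surface, or steering away before contact) would replace the velocity flip, but the core loop stays the same: sample the mask ahead of the particle, react before crossing.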
Starting with a single shot, I will implement as much of the code as I need to get everything up and running. Once I reach that stage, I will start testing the code with new footage and introducing new features. One feature I am considering is an audio-reactive element driving the particles’ behaviors. I still have lots to do before I get there, though. I have started using OpenCV to see how I can track the particles with blob detection. I am looking into the best way to handle the textures, and I need to decide how to handle the video workflow (GSVideo, or an image sequence?). While I do all of this, I will also be thinking about what type of footage will work best with the concept and what kind of foreground objects could help build a stronger conceptual framework.
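For anyone curious what the blob detection is doing under the hood: at heart it is thresholding plus connected-component labeling. This is a rough sketch of that idea in plain Java, not the OpenCV API — threshold a grayscale frame so bright pixels (the lights) count as foreground, then flood-fill each unvisited bright pixel so every connected bright region is counted once.

```java
import java.util.ArrayDeque;

public class Blobs {
    // Count connected bright regions ("blobs") in a grayscale frame.
    // threshold: brightness above which a pixel counts as part of a light.
    static int countBlobs(int[][] gray, int threshold) {
        int h = gray.length, w = gray[0].length;
        boolean[][] seen = new boolean[h][w];
        int blobs = 0;
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (seen[y][x] || gray[y][x] < threshold) continue;
                blobs++; // new blob found: flood-fill its 4-connected pixels
                ArrayDeque<int[]> stack = new ArrayDeque<>();
                stack.push(new int[]{x, y});
                seen[y][x] = true;
                while (!stack.isEmpty()) {
                    int[] p = stack.pop();
                    int[][] nbrs = {{p[0]+1,p[1]}, {p[0]-1,p[1]}, {p[0],p[1]+1}, {p[0],p[1]-1}};
                    for (int[] n : nbrs) {
                        if (n[0] < 0 || n[0] >= w || n[1] < 0 || n[1] >= h) continue;
                        if (seen[n[1]][n[0]] || gray[n[1]][n[0]] < threshold) continue;
                        seen[n[1]][n[0]] = true;
                        stack.push(n);
                    }
                }
            }
        }
        return blobs;
    }
}
```

The OpenCV library adds a lot on top of this (contours, centroids, tracking blobs across frames), but counting and locating the bright regions per frame is the part the particles would actually react to.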