I’m currently building a playful Processing application for the pre-opening of the new Media & Design Academy building at C-Mine. The pre-opening is scheduled for the weekend of April 25-26, 2009, so I have quite some time left to fine-tune the draft version I have now.

Concept = The mirrored video image of people passing in a specific frame is projected life-size. When you stand still in the frame, you just see the projection of your mirror image. Movement reveals an extra layer: using motion detection, ribbon-like particles that follow the movement in the frame are superimposed on the video source.
Curious how people will interact with this.
For the time being it’s based on the motion tracking provided by the JMyron Processing library.
I took out the visualisation of the blobs (well, not all of it yet, as keeping some visual reference in an unfinished product is a real time saver while still in production; I just have to comment out a few lines of code when it’s completely finished).
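For reference, the skeleton of the sketch looks roughly like this. It’s a minimal version based on how I understand the usual JMyron calls (start(), update(), image(), findGlobs(), globBoxes()); the red bounding boxes at the end are exactly the kind of visual reference I mean, easy to comment out later:

```java
import JMyron.*;

JMyron m;

void setup() {
  size(640, 480);
  m = new JMyron();
  m.start(width, height);  // open the camera at sketch size
  m.findGlobs(1);          // enable blob (glob) detection
}

void draw() {
  m.update();

  // Draw the raw camera image.
  int[] img = m.image();
  loadPixels();
  arrayCopy(img, pixels);
  updatePixels();

  // Debug overlay: bounding boxes of the tracked blobs.
  // These are the lines to comment out in the finished piece.
  noFill();
  stroke(255, 0, 0);
  int[][] boxes = m.globBoxes();
  for (int i = 0; i < boxes.length; i++) {
    rect(boxes[i][0], boxes[i][1], boxes[i][2], boxes[i][3]);
  }
}

public void stop() {
  m.stop();
  super.stop();
}
```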
And I added:
- Mirroring of the video input image
- The 2D ribbon code graciously provided by James Alliban, which I linked to the motion tracking output (a rough sketch of both additions follows below).
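Roughly, the two additions combined come down to this. To be clear: the Ribbon class below is just a bare-bones stand-in (a fixed-length trail of points), not Alliban’s actual code, and the glob centre is still fed to the ribbon unmirrored, which is exactly the first item on the to-do list below:

```java
import JMyron.*;

JMyron m;
Ribbon ribbon;
PImage cam;

void setup() {
  size(640, 480);
  m = new JMyron();
  m.start(width, height);
  m.findGlobs(1);
  cam = createImage(width, height, RGB);
  ribbon = new Ribbon(32);  // trail length in points
}

void draw() {
  m.update();

  // Mirror the video input: copy the camera pixels into a PImage,
  // then draw it flipped around the vertical axis.
  cam.loadPixels();
  arrayCopy(m.image(), cam.pixels);
  cam.updatePixels();
  pushMatrix();
  scale(-1, 1);
  image(cam, -width, 0);
  popMatrix();

  // Link the ribbon to the tracking output: follow the first glob centre.
  // These coordinates are not mirrored yet; that fix is still to do.
  int[][] centers = m.globCenters();
  if (centers.length > 0) {
    ribbon.addPoint(centers[0][0], centers[0][1]);
  }
  ribbon.display();
}

public void stop() {
  m.stop();
  super.stop();
}

// Bare-bones stand-in for a 2D ribbon: a fixed-length trail of recent
// points rendered as a curve. Alliban's class is more elaborate than this.
class Ribbon {
  float[] xs, ys;
  int count = 0;

  Ribbon(int len) {
    xs = new float[len];
    ys = new float[len];
  }

  void addPoint(float x, float y) {
    // Shift older points towards the front, append the newest at the end.
    for (int i = 0; i < xs.length - 1; i++) {
      xs[i] = xs[i + 1];
      ys[i] = ys[i + 1];
    }
    xs[xs.length - 1] = x;
    ys[ys.length - 1] = y;
    if (count < xs.length) count++;
  }

  void display() {
    noFill();
    stroke(255, 180);
    strokeWeight(2);
    beginShape();
    for (int i = xs.length - count; i < xs.length; i++) {
      curveVertex(xs[i], ys[i]);
    }
    endShape();
  }
}
```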
Still to implement:
- Mirroring of the motion tracking output.
- Limiting the number of blobs, based on luminance levels (well, in fact on colour); a sketch of both items follows below.
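A rough sketch of how I expect both to work: mirroring a tracked point is just flipping its x coordinate (width - 1 - x), and for limiting the blobs I’m counting on JMyron’s trackColor() and minDensity(); the tolerance and density values below are placeholders I’ll have to tune on site:

```java
import JMyron.*;

JMyron m;

void setup() {
  size(640, 480);
  m = new JMyron();
  m.start(width, height);
  m.findGlobs(1);
  // Limit what counts as a blob: track only near-white (bright) pixels
  // and drop globs that are too small. Placeholder values, to be tuned.
  m.trackColor(255, 255, 255, 200);
  m.minDensity(100);
}

void draw() {
  background(0);
  m.update();

  // Mirror the tracking output: flip x around the vertical axis so the
  // tracked points line up with the mirrored video; y stays the same.
  int[][] centers = m.globCenters();
  fill(255);
  noStroke();
  for (int i = 0; i < centers.length; i++) {
    int mx = width - 1 - centers[i][0];
    int my = centers[i][1];
    ellipse(mx, my, 10, 10);
  }
}

public void stop() {
  m.stop();
  super.stop();
}
```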
What I have now works best in a dark setting: writing with particles using a light source (my mobile phone).
If you browse James Alliban’s blog, you’ll notice that I got my inspiration from his virtual ribbons experiment. I’m only using his 2D ribbon code, but combining it with other chunks of code into a real application forces me to delve deeper into Processing, and will probably prove to be quite a learning experience.
To be continued…