★ Janus


As part of the Smart Light Sydney Festival, May 2009, Tom Barker (Professor of Design, Architecture and Innovation at UTS) and Hank Haeusler (Post-Doctoral Researcher at UTS) were commissioned to design and produce an interactive light sculpture to be exhibited on the light walk in The Rocks.  The piece, conceived by Tom, was called Janus and was pitched to the SLSF body as:

a giant floating human face in The Rocks… Inspired by Janus, the Roman god with two faces, Barker and Haeusler’s installation is part of their ongoing research into complex and non-standard media facades.  Janus uses social media and new technologies to engage the public and influence its art.  Photovoltaic cells are used to power the installation.


The concept for the project was for the face sculpture to act as a mirror to the emotions of the city, as measured via the social media channels of MMS, email and blog updates.  Tom’s earlier research had led him to explore the nature of facial expressions and our ability to read and emote via the expressive capabilities of our faces.  With this in mind, it was an interesting experiment: is it possible to measure, collect and respond to accumulated faces?  Can you determine how happy a city is by watching its inhabitants’ facial expressions?

I was invited to join the project to handle the software design, as Tom had seen some snippets of my interaction design work, as well as the work of my students in the computational environments class.  Naturally my first thought was to ask Frank Maguire if he was interested in joining me on the project – having worked with Frank on the Filtration Fields installation, his industrial design skills and generally snappy logical mind made him the perfect partner in crime.
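Picking up the “how happy is the city?” question: one simple way to turn a stream of individual readings into a city-wide mood is to tally classified expressions over a sliding window and let the most common one win.  A minimal sketch of that idea follows – the window size is an arbitrary choice and none of this is the project’s actual code:

```python
# Aggregate a stream of per-face emotion labels into a running "city mood"
# by keeping the most recent readings and reporting the most common one.
from collections import Counter, deque

WINDOW = 200  # number of recent submissions considered (arbitrary choice)
recent = deque(maxlen=WINDOW)

def record(emotion: str) -> str:
    """Add one classified face; return the city's current dominant mood."""
    recent.append(emotion)
    mood, _count = Counter(recent).most_common(1)[0]
    return mood

print(record("happy"))  # stays "happy" until other readings outnumber it
```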

The main crux of the project production from our end was in coding the algorithms which would translate images of faces into emotional readings (happy, sad, surprised, angry, fearful, disgusted and neutral), using these readings to trigger pre-recorded videos, and controlling the video output to a non-rectilinear array of 192 pixels.

Having worked frequently with camera images and facial emotions, I was confident in that component of the programming, as with the data munging and video triggers.  However, having never used more than 4 LEDs to output recorded/live video, I couldn’t be so sure I could guarantee the display robustness – but with such a challenge, how could I say no to the project!

After a few initial tests using a standard Arduino board in a non-standard manner, I had managed to get ~20 LEDs lighting up with varying PWM values and we were off and running.  It turned out that the technique I had tested was naughtily using the Arduino’s onboard resources and was not a sustainable way of outputting video – so we had to look elsewhere.  Options included using a daisy-chain of chips to multiply the output of an Arduino Duemilanove board, an Arduino Mega, and the PhidgetLED-64.
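For a sense of the face-to-emotion-to-video step, here is a hedged modern sketch in Python with OpenCV.  The original work would have lived in a different toolchain; the Haar cascade face detection is standard OpenCV, but `classify_emotion()`, the video filenames, and `incoming_mms.jpg` are all placeholders standing in for whatever classifier and media assets the project used:

```python
# Sketch: detect a face in a submitted image, classify its expression,
# and pick a pre-recorded video clip to trigger in response.
import cv2

EMOTIONS = ["happy", "sad", "surprised", "angry", "fearful", "disgusted", "neutral"]
VIDEO_FOR = {e: f"clips/{e}.mov" for e in EMOTIONS}  # placeholder clip names

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_emotion(face_img):
    """Placeholder: a real implementation would run a trained classifier
    over the cropped face and return one of the seven labels."""
    return "neutral"

def emotion_from_frame(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        return classify_emotion(gray[y:y+h, x:x+w])
    return None  # no face found in this submission

frame = cv2.imread("incoming_mms.jpg")  # stand-in for an MMS/email attachment
if frame is not None:
    emotion = emotion_from_frame(frame)
    if emotion:
        print("trigger", VIDEO_FOR[emotion])
```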

With project timelines fairly short, we opted for the output mode we felt would be simplest/most trusted/idiot proof, which our experience told us would be the PhidgetLED-64.  The Phidget range of interface kits are bread and butter for the Interactivation Studio and my computational environments students, and each board claims a dedicated output of 64 PWM LEDs – which meant that we could order 3 and end up with spare LED output pins.

The face itself could then be split up into separate sections to be addressed individually by each Phidget board – the forehead, centre and chin regions containing around 60 pixels each.  This allowed us to divide the Phidget output coding into regions and simplify a bit of our output matrixing.  I’d spent some time earlier working with maxduino to get greyscale LED output from pixelated video (a matrix of 6 x 1 pixels!), and luckily I was able to put that patch to work with a little bit of scaling, upgrading to the required resolution.

The first issue we came to was the Phidget method of sending single-line matrices to the PhidgetLED-64 from top-left pixel to bottom-right pixel.  Since we were not working with a rectangular screen, each row of pixel data had to be offset from the starting 0 point, yet still line up with the neighbouring rows.
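The bookkeeping this implies looks roughly like the sketch below.  The original lived in a Max/maxduino patch, and the row widths and offsets here are invented for illustration – the point is just the lookup table that maps each (row, column) pixel of a non-rectangular region onto the linear channel order one board expects:

```python
# Sketch: map a non-rectangular face region onto the sequential 0..63
# channel numbering of a single 64-output PWM LED board.
# Each row starts at a different column, so a lookup table records where
# every pixel lands in the flattened single-line matrix.
FOREHEAD = [(4, 8), (3, 10), (2, 12)]   # (start column, pixel count) per row
CENTER   = [(1, 14), (0, 16), (0, 16), (1, 14)]
CHIN     = [(2, 12), (3, 10), (5, 6)]   # ~60 pixels per region, <= 64 channels

def channel_map(region_rows):
    """Return {(row, col): channel} for one board's region, numbering
    channels top-left to bottom-right as the single-line matrix requires."""
    table, channel = {}, 0
    for row, (offset, count) in enumerate(region_rows):
        for col in range(offset, offset + count):
            table[(row, col)] = channel
            channel += 1
    return table

def frame_to_channels(pixels, table, n_channels=64):
    """Flatten one region of a greyscale frame ({(row, col): 0-255 value})
    into the brightness list sent to one board; unused channels stay dark."""
    out = [0] * n_channels
    for (row, col), ch in table.items():
        out[ch] = pixels.get((row, col), 0)
    return out

forehead_table = channel_map(FOREHEAD)  # one table per board/region
```

With one table per board, the forehead, centre and chin regions can each be flattened independently, which is what let us treat the three boards as three small screens rather than one awkward big one.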

See also:
http://vividsydney.com/
http://www.smartlightsydney.com/artists/barker-and-haeusler
http://www.timeoutsydney.com.au/aroundtown/smart-light-sydney–vivid-sydney.aspx
