All Posts Filed in ‘Projects’


★ fluid updated

I’ve spent a bit more time cleaning up the fluid blobs examples I made last week, this time limiting the Region of Interest and fiddling with the fluid interaction.  Also newly included is a smarter way to interact with the blobs (in the code, I mean), pulling out more precise locational data.  I’ll be looking to mine this one a bit more extensively than I did with the filtration fields installation – and since I seem to be getting better at things I was attempting before, this should be a lot more fun.

Still in the mix is some video-over-network action, as well as potentially a database record of the motion over time.  I’d like to develop this as an interactive (from the visualisation point of view) interface where you could select a day, week or month and view the fluid ripples as they occurred, like a fluid time-lapse of the actual motion from the courtyard.  We’ll see.

Fluid Blobs v2 from Jason McDermott on Vimeo.
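
Sketching the time-lapse idea a little further: one way to build that record would be to log each blob centroid with a timestamp every frame, then replay the rows in order and push them back through the fluid.  Below is a minimal sketch of the logging half, assuming a plain CSV file as a stand-in for the eventual database – the file name, columns and logCentroid() helper are mine, not code from the installation.

    // one row per blob per frame: timestamp, blob id and centroid position
    PrintWriter motionLog;

    void setup() {
      size(320, 240);
      motionLog = createWriter("courtyard-motion.csv");
      motionLog.println("millis,blob_id,x,y");            // header row
    }

    // call this once per tracked blob per frame
    void logCentroid(int blobId, float x, float y) {
      motionLog.println(millis() + "," + blobId + "," + x + "," + y);
    }

    void draw() {
      // ... blob tracking goes here; call logCentroid() for each centroid found ...
    }

    void keyPressed() {
      if (key == 's') {        // press 's' to flush and close the log cleanly
        motionLog.flush();
        motionLog.close();
      }
    }

Replaying a day or a week would then just mean reading the rows back in timestamp order and feeding each one into the fluid in place of the live centroids.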


★ fluid blobs

Linked below are some early results from a new series of sketches I’ve been working on using Processing.  These sketches continue a long line of recent projects using simple camera tracking algorithms to infer interesting patterns of movement in urban spaces.

The first example is a calibrated blob tracking experiment, using the excellent and very well documented OpenCV library for Processing.  A few simple modifications to the setup parameters make for a very customisable tool, able to withstand many of the constraints live webcam installs can throw up.  I’ve tested this in a number of places (my bedroom wall, lit by a single lamp, tends to give the best contrast) and will have more to say on the nature of live webcam video in the future.

OpenCV blob tracking – calibrated from Jason McDermott on Vimeo.
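
For anyone wanting to poke at this themselves, here is roughly what the calibrated tracking boils down to – a minimal sketch assuming the ubaa.net OpenCV library for Processing (import hypermedia.video.*) that was current at the time.  The threshold and blob-size values are the calibration knobs; tune them per install, and note that exact method signatures may differ between library versions.

    import hypermedia.video.*;

    OpenCV opencv;
    int thresh  = 80;      // contrast threshold – tune for the lighting on site
    int minArea = 100;     // ignore blobs smaller than this many pixels

    void setup() {
      size(640, 480);
      opencv = new OpenCV(this);
      opencv.capture(width, height);            // live webcam input
    }

    void draw() {
      opencv.read();                            // grab the current frame
      opencv.threshold(thresh);                 // binarise for blob detection
      image(opencv.image(), 0, 0);

      Blob[] blobs = opencv.blobs(minArea, width * height / 2, 20, false,
                                  OpenCV.MAX_VERTICES * 4);
      noFill();
      stroke(255, 0, 0);
      for (int i = 0; i < blobs.length; i++) {
        Blob b = blobs[i];
        rect(b.rectangle.x, b.rectangle.y, b.rectangle.width, b.rectangle.height);
        ellipse(b.centroid.x, b.centroid.y, 6, 6);   // the centroid is what feeds the later sketches
      }
    }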

The second example is a first attempt at combining the live blob tracking with the wonderfully funky and playful MSA Fluid library, also for Processing.  This lib is geared towards touch screen interfaces and screen-based mouse interactivity – but I immediately thought it would be the perfect partner for my webcam-based projects (or even accelerometer/phidget/slider/MIDI sensor data).  It wasn’t very difficult to swap out the mouseX/pmouseX variables for centroid x/y data, so the first test has been deemed a success.  I showed this yesterday to Frank/Ale/Amy/George/anyone who would stop for more than 2 minutes in the interactivation studio and it was a big hit 🙂

OpenCV + MSA Fluid (Processing) from Jason McDermott on Vimeo.
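
The swap itself is tiny.  In the MSA Fluid mouse example the force comes from mouseX/pmouseX; here the same call is fed by a blob centroid and its position from the previous frame.  This is a stripped-back sketch assuming the MSAFluid library’s addForceAtPos() call (which takes coordinates normalised to 0..1, as in the library’s own mouse example), with the blob detection and fluid rendering left out – they are the same as in the sketches above and the library example.

    import msafluid.*;

    MSAFluidSolver2D fluid;
    PVector prevCentroid = new PVector(-1, -1);   // centroid from the last frame

    void setup() {
      size(640, 480, P2D);
      fluid = new MSAFluidSolver2D((int)(width * 0.2), (int)(height * 0.2));
      fluid.enableRGB(true).setFadeSpeed(0.003).setDeltaT(0.5).setVisc(0.0001);
    }

    // mouse version, for comparison:
    //   fluid.addForceAtPos(mouseX/(float)width, mouseY/(float)height,
    //                       (mouseX - pmouseX) * 0.001, (mouseY - pmouseY) * 0.001);
    void addBlobForce(float cx, float cy) {
      if (prevCentroid.x >= 0) {
        float dx = (cx - prevCentroid.x) / width;
        float dy = (cy - prevCentroid.y) / height;
        fluid.addForceAtPos(cx / width, cy / height, dx, dy);
      }
      prevCentroid.set(cx, cy);
    }

    void draw() {
      fluid.update();
      // ... run the blob tracking, then: addBlobForce(blob.centroid.x, blob.centroid.y);
      // ... draw the fluid as in the MSAFluid example sketch ...
    }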

The third example is significant for a couple of reasons – it is another combination, this time using recorded video of an actual installation space (filtration fields / DAB courtyard), thus requiring another round of calibration – but it also contains my first experiments in putting together an arrayed interface between the blobs and the fluid.  To explain further: firstly, it’s easy to switch out the mouse for ‘something else’, and inferring movement velocity for a single object/blob is simple.  Secondly, I wasn’t so sure how to apply this singular mouseX/pmouseX-esque technique to many objects at once.  Thirdly, I wasn’t sure if it would all explode in one big fluorescent particle mess!

OpenCV + MSA Fluid (Processing) test 3 from Jason McDermott on Vimeo.
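
One way to answer the ‘many objects at once’ question – and roughly what this third test works towards – is to keep the previous frame’s centroids around, match each new centroid to its nearest old one, and treat the difference as that blob’s velocity.  This is a naive nearest-neighbour sketch of mine rather than the installation code; each returned vector is ready to be pushed into the fluid exactly as the single-blob version was.

    ArrayList<PVector> prev = new ArrayList<PVector>();   // last frame's centroids

    PVector[] blobVelocities(PVector[] current) {
      PVector[] vel = new PVector[current.length];
      for (int i = 0; i < current.length; i++) {
        PVector nearest = null;
        float best = 50 * 50;                 // ignore matches further than 50 px away
        for (PVector p : prev) {
          float d = sq(current[i].x - p.x) + sq(current[i].y - p.y);
          if (d < best) { best = d; nearest = p; }
        }
        vel[i] = (nearest == null) ? new PVector(0, 0)
                                   : PVector.sub(current[i], nearest);
      }
      prev.clear();                           // remember this frame for next time
      for (PVector c : current) prev.add(c.copy());
      return vel;
    }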

[Update: v3 will be embedded once Vimeo gets through its upload queue – since when does a new video have to wait in a queue for 30 minutes??]

In the end I’d say it’s mission accomplished, though there are certainly calibration tweaks to make before I’m happy to unleash this on an unsuspecting public.  I’d be interested to see how this could influence people’s behaviour in the space – whether or not we would see people dancing/swimming/painting the space of the courtyard.  I’m curious also to see how this kind of new interaction with the space of the DAB could filter into a new perception of the building: not merely a space to move through, but one which is open to new forms of physical conversation.


★ Smart Light Fields

In addition to my involvement with the Janus project for the Smart Light Sydney Festival, Joanne Jakovich and I were invited to collaborate with the NSW Department of Planning on an ambitious short-term project during the festival. The Department of Planning, along with Metropolis and D-City, took the initiative to set up a small pool of resources for live event data tracking to be visualised for the duration of the festival.  Our first taste of this was an email inviting us to join, with the specific aim of generating realtime visual information using passive Bluetooth tracking technology. I’d worked with Bluetooth tracking before (in the 2008 UTS MDA masterclass Street as Platform), and I’d recently mastered the small monster of embedded MySQL insert queries, so it felt quite appropriate to combine these two techniques in producing the visualisation. In essence, the project asked for the following (the storage side is sketched after the list):

  1. Networked and located sensor nodes, tracking any visible devices nearby
  2. Central storage and collation
  3. Regular output of recent activity (the last 3 hours)
  4. Visualisation of current activity and any paths of movement picked up by the sensors
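
To give a flavour of points 2 and 3, here is a rough sketch of the storage side as it could look from a Processing/Java sketch talking to MySQL over plain JDBC (with the Connector/J jar dropped into the sketch’s code folder).  The database, table and column names are hypothetical stand-ins, and the real system also anonymised device IDs before they went anywhere near a table.

    import java.sql.*;

    Connection db;

    void setup() {
      try {
        db = DriverManager.getConnection(
            "jdbc:mysql://localhost:3306/slsf", "user", "password");
      } catch (SQLException e) {
        println("could not connect: " + e.getMessage());
        exit();
      }
    }

    // point 2: called whenever a sensor node reports a sighting
    void storeSighting(String nodeId, String deviceHash) throws SQLException {
      PreparedStatement ps = db.prepareStatement(
          "INSERT INTO sightings (node_id, device_hash, seen_at) VALUES (?, ?, NOW())");
      ps.setString(1, nodeId);
      ps.setString(2, deviceHash);
      ps.executeUpdate();
      ps.close();
    }

    // point 3: pull the last three hours of activity for the visualisation
    ResultSet recentActivity() throws SQLException {
      return db.createStatement().executeQuery(
          "SELECT node_id, device_hash, seen_at FROM sightings " +
          "WHERE seen_at > NOW() - INTERVAL 3 HOUR ORDER BY seen_at");
    }

    void draw() {
      // ... poll recentActivity() every few minutes and hand the rows to the visualisation ...
    }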

The project had been allocated resources for sensor nodes, internet connections, software programming and some kind of visual output – in this case a projector.  Joanne managed to secure space in Customs House for the project to live, we arranged for the hardware and software combination to be installed, and we were off the ground. Ben Coorey (who had been a stellar student in the Street as Platform masterclass) came on board to help us produce the visualisation in what ended up being a solid fortnight of work.  We went from concept through design and installation in just over two and a half weeks – not an insignificant feat!

This project marked a first in many regards.  It was the first time I’d worked in this capacity as an artist/designer with an external client, providing data surveillance and visualisation with both aesthetics and information.  It was the first time I’d been given access to such a large data set, with potentially hundreds of thousands of visitors making their way to the SLSF precinct during the three weeks of festival activity.  It also happened to produce the first meaningful coalescence of a body of researchers Joanne and I had been working to pull together for the last six months – into the newly founded and launched anarchi.org.  We were now an organisation, able to pull in assistants and coders within the framework of a budgeted project, and able to pay them for their time.

This mightn’t seem like much of an achievement, but having worked with friends and colleagues for some time now (relying on their generosity and willingness to help), it gave me a huge sense of pride to be able to offer a small sum of money to repay the hours of work put in.


★ Janus

As part of the Smart Light Sydney Festival, May 2009, Tom Barker (Professor of Design, Architecture and Innovation at UTS) and Hank Haeusler (Post-Doctoral Researcher at UTS) were commissioned to design and produce an interactive light sculpture to be exhibited on the light walk in The Rocks.  The piece conceived by Tom was called Janus and was pitched to the SLSF body as:

a giant floating human face in The Rocks… Inspired by Janus, the Roman god with two faces, Barker and Haeusler’s installation is part of their ongoing research into complex and non-standard media facades.  Janus uses social media and new technologies to engage the public and influence its art. Photovoltaic cells are used to power the installation.


The concept for the project was for the face sculpture to act as a mirror to the emotions of the city, as measured through the social media of MMS, email and blog updates.  Tom’s earlier research had led him to explore the nature of facial expressions and our ability to read and emote via the expressive capabilities of our faces.  With this in mind, it was an interesting experiment – is it possible to measure, collect and respond to accumulated faces?  Can you determine how happy a city is by watching its inhabitants’ facial expressions?

I was invited to join the project to handle the software design component, as Tom had seen some snippets of my interaction design work, as well as the work of my students in the computational environments class.  Naturally my first thought was to ask Frank Maguire if he was interested in joining me on the project – having worked with Frank on the Filtration Fields installation, his industrial design skills and generally snappy logical mind made him the perfect partner in crime.

The main crux of the project production from our end was in coding the algorithms which would translate images of faces into emotional readings (happy, sad, surprised, angry, fearful, disgusted and neutral), using these readings to trigger pre-recorded videos, and controlling the video output to a non-rectilinear array of 192 pixels.

Having worked frequently with camera images and facial emotions, I was confident in that component of the programming, as with the data munging and video triggers.  However, having never used more than 4 LEDs to output recorded/live video, I couldn’t be so sure I could guarantee the robustness of the display – but with such a challenge, how could I say no to the project!

After a few initial tests using a standard Arduino board in a non-standard manner, I had managed to get ~20 LEDs lighting up with varying PWM values and we were off and running.  It turned out that the technique I had tested was naughtily using the Arduino’s onboard resources and was not a sustainable way of outputting video – so we had to look elsewhere.  Options included using a daisy-chain of chips to multiply the output of an Arduino Duemilanove board, an Arduino Mega, and the PhidgetLED-64.
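
Stepping back to the video side for a moment: the video-to-pixels step is conceptually simple – sample the pre-recorded face clip at the physical position of each of the 192 LEDs and keep one brightness value per pixel.  Here is a simplified Processing sketch of that idea; the clip name is hypothetical and the LED positions are random placeholders, whereas the real coordinates came from the sculpture’s fabrication drawings.

    import processing.video.*;

    Movie faceClip;
    PVector[] leds   = new PVector[192];   // normalised (0..1) position of each LED
    float[]   levels = new float[192];     // one brightness value per LED, 0..1

    void setup() {
      size(320, 240);
      faceClip = new Movie(this, "happy.mov");   // hypothetical clip name
      faceClip.loop();
      for (int i = 0; i < leds.length; i++) {
        leds[i] = new PVector(random(1), random(1));   // placeholder layout
      }
    }

    void movieEvent(Movie m) {
      m.read();
    }

    void draw() {
      image(faceClip, 0, 0, width, height);
      loadPixels();
      for (int i = 0; i < leds.length; i++) {
        int x = int(leds[i].x * (width - 1));
        int y = int(leds[i].y * (height - 1));
        levels[i] = brightness(pixels[y * width + x]) / 255.0;   // greyscale level
      }
      // levels[] is what gets pushed out to the LED hardware each frame
    }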

With project timelines fairly short, we opted for the output mode we felt would be the simplest/most trusted/idiot-proof, which our experience told us was the PhidgetLED-64.  The Phidget range of interface kits is bread and butter for the interactivation studio and my computational environments students, and each board claims a dedicated output of 64 PWM LEDs – which meant that we could order three and end up with spare LED output pins.

The face itself could then be split up into separate sections to be addressed individually by each Phidget board – the forehead, centre and chin regions containing around 60 pixels each.  This allowed us to divide the Phidget output coding into regions and simplify a bit of our output matrixing.  I’d spent some time earlier working with maxduino to get greyscale LED output from pixelated video (a matrix of 6 x 1 pixels!), and luckily I was able to put that patch to work with a little bit of scaling, upgrading it to the required resolution.

The first issue we came to was the Phidget method of sending single-line matrices to the PhidgetLED-64 from top-left pixel to bottom-right pixel.  Since we were not working with a rectangular screen, each row of pixel data had to be offset from the starting 0 point, yet still line up with the neighbouring rows.
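
To illustrate the offset problem (the real patch lived in Max, so this is an illustration rather than the production code): each row of the face has its own width and left offset, so building the single-line matrix the boards expect means walking the rows in order and recording where each output channel actually sits in the rectangular video frame.  The rowLength/rowStart numbers below are invented placeholders that happen to sum to 192 – the real values came from the face geometry.

    int[] rowLength = {  6, 10, 14, 18, 22, 26, 26, 22, 18, 14, 10,  6 };  // pixels per row
    int[] rowStart  = { 10,  8,  6,  4,  2,  0,  0,  2,  4,  6,  8, 10 };  // left offset per row

    PVector[] physical = new PVector[192];   // channel index -> (column, row) in the grid

    void setup() {
      int channel = 0;
      for (int r = 0; r < rowLength.length; r++) {
        for (int c = 0; c < rowLength[r]; c++) {
          // channels run top-left to bottom-right, but each row starts at its own offset
          physical[channel] = new PVector(rowStart[r] + c, r);
          channel++;
        }
      }
      // splitting across the three boards: board = channel / 64, pin = channel % 64
      int ch = 100;
      println("channel " + ch + " -> board " + (ch / 64) + ", pin " + (ch % 64) +
              ", grid position " + physical[ch]);
    }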

See also:
http://vividsydney.com/
http://www.smartlightsydney.com/artists/barker-and-haeusler
http://www.timeoutsydney.com.au/aroundtown/smart-light-sydney–vivid-sydney.aspx


★ Filtration Fields

Recently Joanne and I were given the opportunity to exhibit in the DAB Lab Research Gallery at UTS, in the Design, Architecture and Building faculty building – a chance to refine and showcase our collective research into realtime responsive architectural environments.

The filtration fields exhibition in the DAB Lab gallery was a realtime interactive installation using simple camera tracking to measure daily activity within the DAB courtyard.  The exhibition was a prototype test for ideas on the overlap of surveillance information and participation in architecture by its inhabitants.  Our premise for the installation was that the architecture of the DAB Lab gallery and surrounding courtyard space would be given eyes and ears, a brain to consider and a mouth to speak its mind.  The exhibition space of filtration fields was, unlike all pieces held in the DAB Lab, not the space of the gallery itself but the outside world onto which it had a threshold.  The silent box would become an active element in the architecture of the courtyard, no longer only passively inviting people inside but actively seeking to make its opinions known.  The void space of the courtyard would act as a performance stage for the activities and life of the DAB, and the natural bookend to the void was an appropriately matching wall of glass facing the space of the gallery.

The DAB gallery sits nestled under the canopy of one side of the DAB courtyard, standing as a window into another world, a place of existence in the imagined mind of another.  All of our experiences in the DAB Lab gallery were of surprise and delight – the little gallery had observed us and prepared something appropriate to show.

My initial thoughts for the piece revolved around an image of the DAB Lab gallery space existing as a small part of a sensory system extending through the fabric of the whole building – the glass wall fronting onto the courtyard was in fact the glass lens of a large and ever-curious eye.  The rear wall of the gallery would be the retina upon which the useful information would be refracted and transferred for processing elsewhere.  Other senses of the building were to be placed in the surrounding architecture outside, remote senses (microphones as ears, light/temp/humidity/vibration as skin) of a much larger organism.  Each of the senses would be dislocated but connected, each informing the other about the goings-on of people in the courtyard.

As the project took shape, it became clear that the focus of the exhibition should not only be the ‘eye’ of the DAB, but rather the effort to interpret the overlay of many eyes, ears and other senses into information, all representing the happenings in the courtyard.  The focus of the exhibition was not the DAB Lab itself, but the effect it could have on the lives of people moving through the space in-between.  Each of the glass wall panels would form opposing viewpoints on the courtyard, illustrating different relationships between the viewer/participant and the data they created.  The concept of the DAB as a semi-conscious entity gave us the notion of eyes (an overload of information, all visual and uninterpreted for meaning) and brains (filtered information, abstracted for patterns of activity).

More to come…


★ Computational Environments ’09

Gearing up for the first week of class for Computational Environments, the Master of Architecture design studio Joanne Jakovich, Bert Bongers and I will be teaching at UTS.  Last time round, the studio culminated in the Skinform project.

From tomorrow onwards we will be launching into a new semester, complete with a new brief, renewed vigour and even greater expectations.  We will be setting up a platform for the students to share and explain their work, so keep an eye out for that – I will post more details when they are at hand.  Looking forward to an exciting, thought-provoking and intensely productive semester!


Processing Sketches

I’ve gone crazy overnight, writing out as many of the sketch examples from Dan Shiffman’s Learning Processing book as I can.  So far I’ve worked my way through to chapter 5 (of 23) and it’s still making sense!

Not exactly life-changing so far, but it’s good to get my hands dirty experimenting with various types of mouse/key interaction in the sketches. I’ve got until Sunday to put some more work into getting the pixelTag project written up as a Max/MSP + Processing monster – we’ll see how that goes :)…
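
For a taste of what those early chapters involve, here is a tiny sketch in the same spirit (my own variation rather than one of Shiffman’s examples) – the mouse paints, the keyboard changes behaviour.

    color brush = color(255);

    void setup() {
      size(400, 400);
      background(0);
    }

    void draw() {
      if (mousePressed) {
        stroke(brush);
        strokeWeight(4);
        line(pmouseX, pmouseY, mouseX, mouseY);   // drag to draw
      }
    }

    void keyPressed() {
      if (key == 'c') background(0);                                        // clear the canvas
      if (key == 'r') brush = color(random(255), random(255), random(255)); // pick a new colour
    }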


★ The Street as Platform

The Street as Platform – a street rendered in data.

November has been a busy month! Along with Anthony Burke, Dan Hill and Mitchell Whitelaw, I’ve been running an intensive masterclass studio in the Master of Digital Architecture program at UTS.  The masterclass is based on one of Dan’s earlier posts called The Street as Platform, in which the notion of the static street in contemporary urban planning and architecture is discussed as an anachronistic idea, one in dire need of reform.  The Street as Platform talks about the dynamically linked nature of the modern street, where mobile communication, ubiquitous computing and traditional number crunching merge into a new kind of informational street ecology that exists just outside our normal consciousness.

As students and teachers of architecture, it could well be said that the dynamism of the street in its inhabitation and occupation is implicitly known and explored, but never clearly articulated as a driver – in its own right – of architectural decision-making regarding form and content.

With this in mind, we set out to investigate the lived inhabitation of the street in an attempt to visualise and understand the hidden seams of activity – an attempt to make the invisible visible.  Along with Dan, Anthony and Mitchell, we had a selection of super keen students and a handful of sensor equipment with which we set about taming the data beast of Harris St.  Our aim was to produce some meaningful information, based on correlated data sets gleaned and generated from our surrounds.  The students searched for data relating to Harris St from a number of sources (Google, Flickr, YouTube, newsrolls, blogs) and then used Processing to scrape, munge and visualise the data.  Also included in the mix were a number of sensors we wired up to collect site-specific data: light/temperature/humidity/rainfall levels over the last week, Bluetooth devices in the vicinity, webcam images from the street, as well as audio readings and a magnetic sensor.
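
The scrape–munge–visualise loop the students were working through can be boiled down to something like this: load a feed, pull out the numbers, draw them.  Below is a minimal sketch assuming a CSV of hourly temperature readings – the file name and columns are hypothetical, and the actual feeds ranged from Flickr scrapes to live sensor logs.

    float[] temps;

    void setup() {
      size(600, 300);
      String[] rows = loadStrings("harris-st-temperature.csv");   // "hour,degC" per line
      temps = new float[rows.length - 1];
      for (int i = 1; i < rows.length; i++) {                     // skip the header row
        String[] cols = split(rows[i], ',');
        temps[i - 1] = float(cols[1]);
      }
    }

    void draw() {
      background(255);
      stroke(0);
      noFill();
      beginShape();
      for (int i = 0; i < temps.length; i++) {
        float x = map(i, 0, temps.length - 1, 20, width - 20);
        float y = map(temps[i], 0, 45, height - 20, 20);   // 0–45 °C mapped onto the canvas
        vertex(x, y);
      }
      endShape();
    }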

All up, the live data feeds were a bit of a mixed bag with plenty of teething problems, but over the next fortnight these issues should get sorted.  The students presented their work on Friday to an invited panel including Marcus Trimble, Andrew Vande Moere and Kirsty Beilharz, one of our new professors in Design at UTS.  The presentations went very well, showcasing some very good work and sparking much discussion amongst the invited guests.

The students have diligently been updating a blog with images of the process work and sketch ideas throughout the last two weeks, which can be found at http://streetasplatform.wordpress.com.  The studio will be exhibiting some of the work at the upcoming UTS Architecture exhibition on the 4th December, so come see some of the live feeds being visualised on the night.

See also:
http://offshorestudio.net/
http://cityofsound.com/
http://theteemingvoid.com/


★ pixeltag

pixelTag is an experimental working prototype for creating digital art using hand-held devices and radio signals.  The current pixelTag prototype uses the Nintendo Wii remote controller in conjunction with OSCulator and Max/MSP.  The prototype generates pixel graphics in real time, based on x/y/z motion information sent to Max by the Wii remote, and can incorporate up to 4 artists at the same time.  pixelTag has been in existence for just over a week now and so far it’s generated a small amount of buzz.
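
The prototype itself runs in Max/MSP with OSCulator translating the Wii remote’s data, but the same flow is easy to sketch in Processing with the oscP5 library: motion values arrive as OSC messages and get mapped onto a coarse pixel grid.  The port number and the "/wii/1/accel" address pattern below are assumptions – they depend entirely on how OSCulator is configured.

    import oscP5.*;
    import netP5.*;

    OscP5 osc;
    int cols = 32, rows = 24;          // the coarse "pixel" grid
    float px = 0.5, py = 0.5, pz = 0;  // latest normalised x/y/z motion values

    void setup() {
      size(640, 480);
      osc = new OscP5(this, 8000);     // listen on the port OSCulator sends to
      background(0);
      noStroke();
    }

    void oscEvent(OscMessage msg) {
      if (msg.checkAddrPattern("/wii/1/accel")) {   // placeholder address pattern
        px = msg.get(0).floatValue();
        py = msg.get(1).floatValue();
        pz = msg.get(2).floatValue();
      }
    }

    void draw() {
      // no background() call here, so the marks accumulate like spray paint
      int cx = int(constrain(px, 0, 1) * (cols - 1));
      int cy = int(constrain(py, 0, 1) * (rows - 1));
      fill(255 * pz, 255 * (1 - pz), 180, 80);
      rect(cx * width / cols, cy * height / rows, width / cols, height / rows);
    }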

Thanks to the DAB Lab opening night schedule (which happily coincides with the weekly experimentation playtime in the interactivation studio), I’ve had the opportunity to demo the project to a widely varying audience.  Last week at the Convergence exhibition opening I was able to test the project with none other than Charles Rice, Desley Luscombe, Adrian Lahoud and Sam Spurr as my hapless guinea pigs.  Many others were also subjected to my user testing and the feedback was generally positive.  I’m excited to see that a project taking such baby steps can take on a life like this, opening up fantastic possibilities for collaboration and further refinement.

I’ve been posting videos of the project in action to Vimeo, but in case you’re in lock-down mode, I’ll be looking to embed content directly into my posts rather than linking out to third-party sites.  We’ll see how things go – watch this space.

pixelTag 081106 ft. Tony Curran from Jason McDermott on Vimeo.