All Posts Tagged ‘Design’

★ ozchi 2009

I’m heading down to Melbourne tomorrow for the 2009 OZCHI conference with Frank Maguire, Bert Bongers and Dan Hill, to attend and present (with Frank) some research work we’ve done recently.  I’m looking forward to it; one of the add-ons for this conference is a workshop on Street Computing organised by the ubiquitous Marcus Foth.  Dan and Andrew will be presenting some of their research, as will Bert – so it’s shaping up to be a really great day tomorrow.

To anyone who’s going to be at the workshop, I look forward to meeting you; the same goes for anyone else floating around the conference.  Catch you later in the week, and hope this bizarre Sydney weather doesn’t get the better of you!

★ analytical graphics

I’m really enjoying the work in Michæl Paukner’s Flickr photostream, which includes some of the most fantastical analytical graphics, illustrating scientific and theoretical concepts. Most of his work is just gorgeous, bringing clarity and simplicity to what could otherwise be convoluted diagrams. Some of my favourites are the solar eclipse and the circular periodic table of elements:

Circular Periodic Table of Elements

He’s not shy of dealing with less-rigorous concepts either, such as the hollow world theory or the ancient Hebrew concept of cosmology. Definitely worth a look-see:

Hollow Earth
The Hundredth Monkey Effect

One of the reasons I’m enjoying this work is the dedication to taking complex ideas and presenting them in ways that capture an audience and convey an idea or message in a really simple, elegant way. It’s something I’ve been dealing with a lot in my work – somewhat as a side effect of observing people through the ‘eyes’ of buildings – which is the representation of data or information to the people who have a part in creating it. It’s a complex challenge and I often look to graphic designers for ideas on this very subject. Nice to see someone putting it all together in such a clean manner – not driven purely by data or image, rather a balance between the two. Message and medium: not an easy gap to bridge.

via kitsunenoir

★ fluid updated

I’ve spent a bit more time cleaning up the fluid blobs examples I made last week, this time limiting the region of interest and fiddling with the fluid interaction.  Also newly included is a smarter way to interact with the blobs (in the code, I mean), pulling out more precise locational data.  I’ll be looking to mine this one a bit more extensively than I did with the filtration fields installation – and since I seem to be getting better at things I was attempting before, this should be a lot more fun.

In the mix still is some video-over-network action, as well as potentially a database record of the motion over time.  I’d like to develop this as an interactive (from the visualisation point of view) interface where you could select a day, week or month and view the fluid ripples as they occurred, like a fluid time-lapse of the actual motion from the courtyard.  We’ll see.
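The original patch isn’t included in this post, but as a rough illustration of the mechanics, here’s a minimal sketch in Python/OpenCV of blob tracking limited to a region of interest, pulling out per-blob centroid coordinates. The ROI bounds, camera index and blob-area threshold are placeholder assumptions, not values from the actual piece.

```python
# Minimal sketch (not the original patch): blob tracking limited to a
# region of interest, reporting each blob's centroid in full-frame coords.
import cv2

ROI = (100, 50, 400, 300)  # x, y, width, height - hypothetical bounds
cap = cv2.VideoCapture(0)  # assumed camera index
backsub = cv2.createBackgroundSubtractorMOG2()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    x, y, w, h = ROI
    mask = backsub.apply(frame[y:y + h, x:x + w])  # motion mask, ROI only
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 200:  # skip noise specks
            continue
        m = cv2.moments(c)
        print("blob at", x + m["m10"] / m["m00"], y + m["m01"] / m["m00"])
    cv2.imshow("motion", mask)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

Appending those centroids with timestamps to a database table is about all the capture side of the day/week/month time-lapse idea would need.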

Fluid Blobs v2 from Jason McDermott on Vimeo.

★ Janus

As part of the Smart Light Sydney Festival, May 2009, Tom Barker (Professor of Design, Architecture and Innovation at UTS) and Hank Haeusler (Post-Doctoral Researcher at UTS) were commissioned to design and produce an interactive light sculpture to be exhibited on the light walk in The Rocks.  The piece conceived by Tom was called Janus and was pitched to the SLSF body as:

a giant floating human face in The Rocks… Inspired by Janus, the Roman god with two faces, Barker and Haeusler’s installation is part of their ongoing research into complex and non-standard media facades.  Janus uses social media and new technologies to engage the public and influence its art. Photovoltaic cells are used to power the installation.

The concept for the project was for the face sculpture to act as a mirror to the emotions of the city, as measured through the social media of MMS, email and blog updates.  Tom’s earlier research had led him to explore the nature of facial expressions, and our abilities to read and emote via the expressive capabilities of our faces.  With this in mind, it was an interesting experiment – is it possible to measure, collect and respond to accumulated faces?  Can you determine how happy a city is by watching its inhabitants’ facial expressions?

I was invited to join the project to handle the software design component, as Tom had seen some snippets of my interaction design work, as well as the work of my students in the computational environments class.  Naturally my first thought was to ask Frank Maguire if he was interested in joining me on the project – having worked with Frank on the Filtration Fields installation, his industrial design skills and generally snappy logical mind made him the perfect partner in crime.

The main crux of the project production from our end was in coding the algorithms which would translate images of faces into emotional readings (happy, sad, surprised, angry, fearful, disgusted and neutral), using these readings to trigger pre-recorded videos, and controlling the video output to a non-rectilinear array of 192 pixels.

Having worked frequently with camera images and facial emotions, I was confident in that component of the programming, as with the data munging and video triggers.  However, having never used more than 4 LEDs to output recorded/live video, I couldn’t guarantee the robustness of the display – but with such a challenge, how could I say no to the project!

After a few initial tests using a standard Arduino board in a non-standard manner, I had managed to get ~20 LEDs lighting up with varying PWM values and we were off and running.  It turned out that the technique I had tested was naughtily using the Arduino’s onboard resources and was not a sustainable way of outputting video – so we had to look elsewhere.  Options included using a daisy-chain of chips to multiply the output of an Arduino Duemilanove board, an Arduino Mega, and the PhidgetLED-64.
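The emotion-reading algorithms themselves aren’t reproduced here, but the overall shape of that first stage looked something like the following Python/OpenCV sketch. The classify_emotion stub and the video file paths are hypothetical stand-ins for illustration, not the project’s actual code.

```python
# Sketch of the face-to-video pipeline: detect a face, read its emotion,
# return the matching pre-recorded video. classify_emotion is a stub.
import cv2

EMOTIONS = ["happy", "sad", "surprised", "angry",
            "fearful", "disgusted", "neutral"]
VIDEOS = {e: f"videos/{e}.mov" for e in EMOTIONS}  # hypothetical paths

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_emotion(face):
    """Placeholder for the real emotion-reading step."""
    return "neutral"

def video_for_frame(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        return VIDEOS[classify_emotion(gray[y:y + h, x:x + w])]
    return None  # no face found this frame
```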

With project timelines fairly short, we opted for the output mode we felt would be simplest, most trusted and idiot-proof, which our experience told us would be the PhidgetLED-64.  The Phidget range of interface kits are bread and butter for the Interactivation Studio and for my computational environments students, and each board claims a dedicated output of 64 PWM LEDs – which meant that we could order three and end up with spare LED output pins.

The face itself could then be split up into separate sections to be addressed individually by each Phidget board – the forehead, centre and chin regions containing around 60 pixels each.  This allowed us to divide up the Phidget output coding into regions and simplify a bit of our output matrixing.  I’d spent some time earlier working with maxduino to get greyscale LED output from pixelated video (a matrix of 6 x 1 pixels!), and luckily I was able to put that patch to work with a little bit of scaling, upgrading to the required resolution.

The first issue we came to was the Phidget method of sending single-line matrices to the PhidgetLED-64 from top-left pixel to bottom-right pixel.  Since we were not working with a rectangular screen, each row of pixel data had to be offset from the starting 0 point, yet still line up with the neighbouring rows.
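In code terms the fix is small but fiddly: each row gets its own horizontal offset before the rows are concatenated into the flat channel list a board expects. A Python sketch of the idea, with invented row widths and offsets rather than the real Janus layout:

```python
# Flatten a non-rectangular pixel layout into the single top-left to
# bottom-right channel list a board expects. Row geometry is invented.
ROWS = [(3, 6), (1, 10), (0, 12), (0, 12), (1, 10), (3, 6)]  # (offset, width)

def flatten(frame):
    """frame[row][col] holds 0-255 grey values on the full bounding grid;
    keep only each row's lit span so rows still line up with neighbours."""
    channels = []
    for r, (offset, width) in enumerate(ROWS):
        channels.extend(frame[r][offset:offset + width])
    return channels  # one brightness value per physical LED channel
```

Each region (forehead, centre, chin) would then carry its own row table, feeding its own board.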

See also:
http://vividsydney.com/
http://www.smartlightsydney.com/artists/barker-and-haeusler
http://www.timeoutsydney.com.au/aroundtown/smart-light-sydney–vivid-sydney.aspx

★ Filtration Fields

Recently Joanne and I were given the opportunity to exhibit in the DAB Lab Research Gallery at UTS, in the Design, Architecture and Building faculty building – a chance to refine and showcase our collective research into realtime responsive architectural environments.

The filtration fields exhibition in the DAB Lab gallery was a realtime interactive installation using simple camera tracking to measure daily activity within the DAB courtyard.  The exhibition was a prototype test for ideas on the overlap of surveillance information and participation in architecture by its inhabitants.  Our premise for the installation was that the architecture of the DAB Lab gallery and surrounding courtyard space would be given eyes and ears, a brain to consider and a mouth to speak its mind.  The exhibition space of filtration fields was, unlike other pieces held in the DAB Lab, not the space of the gallery itself but the outside world upon which it had a threshold.  The silent box would become an active element in the architecture of the courtyard, no longer only passively inviting people inside but actively seeking to make its opinions known.  The void space of the courtyard would act as a performance stage for the activities and life of the DAB, and the natural bookend to the void was an appropriately matching wall of glass facing the space of the gallery.
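For a sense of what ‘simple camera tracking’ can mean here, a minimal Python/OpenCV sketch in the same spirit – not the installation’s actual code; the camera index and the bare mean-difference measure are assumptions – accumulates frame-to-frame change into a single activity level:

```python
# Crude activity measure: accumulate the mean pixel change between
# successive frames from a camera watching the courtyard.
import cv2

cap = cv2.VideoCapture(0)  # assumed camera index
ok, prev = cap.read()
if not ok:
    raise SystemExit("no camera frame")
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
activity = 0.0  # running total of observed motion

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    activity += cv2.absdiff(gray, prev).mean()  # change since last frame
    prev = gray
```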

The DAB gallery sits nestled under the canopy on one side of the DAB courtyard, standing as a window into another world, a place of existence in the imagined mind of another.  All of our experiences in the DAB Lab gallery were of surprise and delight; the little gallery had observed us and prepared something appropriate to show.

My initial thoughts for the piece revolved around an image I had imagined of the DAB Lab gallery space existing as a small part of a sensory system extending the fabric of the whole building – the glass wall fronting onto the courtyard was in fact the glass lens of a large and ever-curious eye.  The rear wall of the gallery would be the retina upon which the useful information would be refracted and transferred for processing elsewhere.  Other senses of the building were to be placed in the surrounding architecture outside, remote senses (microphones as ears; light, temperature, humidity and vibration sensors as skin) of a much larger organism.  Each of the senses would be dislocated but connected, each informing the other regarding the goings-on of people in the courtyard.

As the project took shape, it became clear that the focus of the exhibition should not only be the ‘eye’ of the DAB, but rather the effort to interpret the overlay of many eyes, ears and other senses into information, all representing the happenings in the courtyard.  The focus of the exhibition was not the DAB Lab itself, but the effect it could have on the lives of people moving through the space in-between.  Each of the glass wall panels would form opposing viewpoints on the courtyard, illustrating different relationships between the viewer/participant and the data they created.  The concept of the DAB as a semi-conscious entity gave us the notion of eyes (an overload of information, all visual and uninterpreted for meaning) and brains (filtered information, abstracted for patterns of activity).

More to come…

★ The Street as Platform

The Street as Platform – a street rendered in data.

November has been a busy month! Along with Anthony Burke, Dan Hill and Mitchell Whitelaw, I’ve been running an intensive masterclass studio in the Master of Digital Architecture program at UTS.  The masterclass is based on one of Dan’s earlier posts called The Street as Platform, in which the notion of the static street in contemporary urban planning and architecture is discussed as an anachronistic idea and one in dire need of reform.  The Street as Platform talks about the dynamically linked nature of the modern street, where mobile communication, ubiquitous computing and traditional number crunching merge as a new kind of informational street ecology that exists just outside of our normal consciousness.

As students and teachers of architecture, it could well be said that the dynamism of the street in its inhabitation and occupation is implicitly known and explored, but never clearly articulated as a driver – in its own right – of architectural decision making regarding form/content.

With this in mind, we set out to investigate the lived inhabitation of the street in an attempt to visualise and understand the hidden seams of activity – an attempt to make the invisible visible.  Along with Dan, Anthony and Mitchell, we had a selection of super-keen students and a handful of sensor equipment with which we set about taming the data beast of Harris St.  Our aim was to produce some meaningful information, based on correlated data sets gleaned and generated from our surrounds.  The students searched a number of sources for data relating to Harris St (Google, Flickr, YouTube, newsrolls, blogs) and then used Processing to scrape, munge and visualise the data.  Also included in the mix were a number of sensors we wired up to collect site-specific data: light/temperature/humidity/rainfall levels over the last week, Bluetooth devices in the vicinity, webcam images from the street, as well as audio readings and a magnetic sensor.
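On the sensor side, the wiring-to-data step is essentially timestamped logging. A hedged Python sketch of that step, assuming pyserial and a board printing comma-separated readings – the port name, baud rate and line format are all assumptions, not the studio’s actual setup:

```python
# Log timestamped sensor readings from a serial board to CSV for later
# visualisation. Port, baud rate and the reading format are assumptions.
import csv
import datetime
import serial  # pyserial

port = serial.Serial("/dev/ttyUSB0", 9600)  # hypothetical port/baud

with open("harris_st_log.csv", "a", newline="") as f:
    writer = csv.writer(f)
    while True:
        line = port.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue
        # e.g. "512,23.4,61" -> light, temperature, humidity
        writer.writerow([datetime.datetime.now().isoformat()] + line.split(","))
        f.flush()  # keep the log current on disk
```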

All up, the live data feeds were a bit of a mixed bag with plenty of teething problems, but over the next fortnight these issues should be sorted.  The students presented their work on Friday to an invited panel including Marcus Trimble, Andrew Vande Moere and Kirsty Beilharz, one of our new professors in Design at UTS.  The presentations went very well, showcasing some very good work and sparking much discussion amongst the invited guests.

The students have diligently been updating a blog with images of the process work and sketch ideas throughout the last two weeks, which can be found at http://streetasplatform.wordpress.com.  The studio will be exhibiting some of the work at the upcoming UTS Architecture exhibition on the 4th of December, so come see some of the live feeds being visualised on the night.

See also:
http://offshorestudio.net/
http://cityofsound.com/
http://theteemingvoid.com/

★ motion graphs

The google-verse has recently been expanded to include Gapminder software as part of the ever-impressive Google Docs platform. Below is an interactive spreadsheet example of the sophisticated motion graphs now available in Google Docs.

The motion graph is based on the Gapminder software first demonstrated to the world by Hans Rosling in his exciting and highly memorable TED Talk presentations (2006 and 2007). I first noticed Gapminder in March this year, and I’m very pleased to see how far it has come in the last 6-12 months.

Gapminder graph:

According to Gapminder.org, the motion graphs have been ported to become a new inclusion in the Google Docs spreadsheet tool, and as you can see they can be leveraged very easily for all kinds of shared data storage and visualisation. The data included in the graph is the boilerplate standard detail that comes with the gadget as a means of demonstrating its capabilities, and it seems to be a very compelling example of where collaborative work involving data sharing/massaging/viewing is heading. Hans makes the claim in his 2007 TED talk that the UN databases for statistical data have been opened up to the software and will be searchable in some form. As the presentations illustrate, it’s not only the capability of the software, but also the depth and accessibility of the source data that is moving at a remarkable rate.

Google Docs example: for the record, the steps involved in going from zero to Google motion-graphed are as follows:

  • sign up for a free Google account,
  • click the Docs link on the homepage,
  • click new spreadsheet, or upload an existing file,
  • select some cells, then click insert new motion graph gadget,
  • publish as a webpage, share with friends/colleagues etc.,
  • voila!

Last year, when I was completing my architecture dissertation project, a few other students and I were researching living conditions and economic data in countries outside Australia. Gapminder was a discovery made far too late to be of use to that project, but I’m certain that this kind of data empowerment is only going to facilitate knowledge and information distribution on local and global scales simultaneously.

Closing thoughts:

  • Hans makes the comment that the $100 computer will be of integral value to impoverished and developing families. One only has to imagine the kind of super-users that could emerge using nothing more than cheap hardware, fluid access to the internet and freely distributed open source software.
  • I’ll be using the motion charts to interact with the household budget – sharing bills between 5 people can be tricky. I’ll post more on this topic when the tools have been tested.

★ Skinform

The Computational Environments students created Skinform as the major project for the studio.  It has been exhibited in full here at UTS, and as a small teaser exhibit at Customs House.

Skinform @ Customs House (6th October 2008): the Skinform project was exhibited as part of the Sydney Architecture Festival.

The project was initially planned to be installed as a fully working prototype in the courtyard in front of Customs House; however, the weather forecast forced us inside.  The Skinform base module was installed as a teaser, along with a video projection of the project.

See also:

http://skinform.net/
http://skinform.blogspot.com/
http://datasearch.uts.edu.au/dab/news-events/architecture/news-detail.cfm?ItemId=12637&ItemDate=2008-10-02/

http://www.engadget.com/2008/06/06/skinform-project-sees-shape-shifting-structure-get-its-wiggle-on/