All Posts Tagged ‘interaction’

Post

★ Engage

Leave a reply


Yesterday I jotted down a few thoughts about the apparent and impending future of YouTube. One thing I neglected to mention specifically was the one thing I was actually writing about in the first place – brand engagement.

It’s the special sauce that makes things like YouTube run. Forget your utopian dreams of sharing videos or things between friends; this place is where people engage with the brands they love. Or something.

It seems to be everywhere, lurking just beneath the surface. Awareness, market saturation, x is better than y – and we seem so happy to play along. It’s appalling. Take a moment to read Maciej’s post about the uncanny valley and do what you can to avoid stumbling through the godawful plethora of online ‘experiences’ available.

I hope that we don’t look back on this time as the moment when we realized we’d been duped into playing someone else’s pointless game.  I really do.

Post

★ Non 'design' disciplines

Leave a reply

I was digging through a few old drafts and this one emerged from 2009, having never before seen the light of day:

Lately I’ve been thinking more on the different pieces of knowledge required to pull together an interactive project / visualisation / installation / design, especially one which needs to be in some way demonstrable in front of a live audience. Yesterday I was speaking with Stephen Viller at the OzCHI conference about the difference between IT/ICT and Design education methodology. It seems (and maybe this shouldn’t have been as much of a surprise as it was) that those in the computer sciences or IT streams of education don’t have much call to give a verbal account or critique of their work.

As a designer/practitioner/academic/venn diagram inhabitant, I’m painfully aware of the constant requirement of the profession to be able to present, demo and explain your work – even more so to potentially negative or critical audiences. This verbal technique is a big part of the profession; you really can’t escape it. It became clear to me during the week that many of those in the disparate fields claiming some form of involvement in the HCI community perhaps do not come from a design paradigm background – and as such do not verbally or even visually present their work to audiences. Textual accounts were common, as were scientific breakdowns and analyses; however, it really came as a surprise to me that the participants of a conference in “interaction design” (call it what you will) were not very adept at communicating in either verbal or visual form.

There were some notable exceptions, of course – Bill Moggridge’s keynote was engaging if a little dated, and Patrick Hofmann’s User Experience keynote was definitely a highlight.  One other note I’d like to add: I very recently forced myself to sit through the length of the Ben Stiller/Vince Vaughn ‘comedy’ Dodgeball. I would highly recommend you avoid this movie at all costs; however, one of its funnier moments involves the absurdist acronym ADAA, the American Dodgeball Association of America – which I must say prepared me very well for the assault of acronyms during the OzCHI week! HISER, HFESA, CHI, CHISIG and many more..

I thought it was worth sharing – my old criticism of non-design-discipline education was reinforced just yesterday by another ICT-related encounter, which I might see fit to write about in another two years’ time.

Post

★ particle interaction

Leave a reply

Here we’ll see some chaotic interaction happening between our particles – we can tweak the interaction parameters (e.g. interact(particles_,1,false,-1,40) is where all the magic happens) to allow for different attraction and repulsion forces between each of the particles and their environment.

Source code:

int Resolution;
ParticleController pC;
PVec mousey;
boolean gridded = false;
boolean back = true;
int W;
int H;
boolean online;
float timed;

void setup() {
  // this is one of the more useful pieces of code i've used to date
  // firstly we check if we're offline
  if (online == false) {
    // if so, we know we're in a processing window
    // so we set our height/width variables accordingly
    W = 800;
    H = 300;
  }
  // if online is true, we know we're in a browser window
  // so we make the sketch smaller
  // this way I can sketch out the same sketch but with dynamic size properties
  else {
    W = 600;
    H = 200;
  }
  size(W,H);
  smooth();
  frameRate(30);
  // how closely packed our particles might be
  Resolution = 20;
  // our particle controller class object
  pC = new ParticleController();
  background(0);
}

void draw() {
  if (back == true) {
    tint(0,1);
    fill(0,5);
    noStroke();
    rect(0,0,W,H);
  } else {
    background(0);
  }
  pC.update();
  timed++;
  if (timed > 2000) {
    println(frameRate + " " + pC.pSize());
    noLoop();
  }
}

class ParticleController {
  // an arraylist to hold all of our particles
  ArrayList particles = new ArrayList();
  ArrayList boundaries = new ArrayList();
  // these variables were originally used to dictate the spacing of a grid
  // we're not really using them anymore, but we'll keep them for posterity
  // besides, we might later on want to reintroduce the grid
  int mXRes, mYRes;
  // this we'll use to give each particle a unique identifier
  int counting = 0;

  ParticleController() {
    // we'll assign a new random point to push away from.
    mousey = new PVec(random(0,W),random(0,H),0);
    // when we were putting the particles into a grid, this part made more sense
    // step through the grid resolution, assign a new particle for each x/y position on the grid
    // although this time and most times in future we'll ignore the grid
    // and insert particles randomly
    if (gridded) {
      mXRes = W/Resolution;
      mYRes = H/Resolution;
      for (int y=mYRes; y>0; y--) {
        for (int x=mXRes; x>0; x--) {
          addParticle(x, y-1, Resolution, counting);
          counting++;
        }
      }
    }
    else {
      for (int i=Resolution; i>0; i--) {
        addParticle(i, i-1, Resolution, counting);
        counting++;
      }
    }
    // for our pressure exerting walls we'll create 'boundary' particle objects
    // how closely do we want to space these boundary objects?
    float gridBounds = 5;
    // then we'll step through each of the boundary conditions
    // if (x == 1 && y % gridBounds == 0) simply means every 2 vertical pixels on the left side of the window
    // likewise
    // if (x % gridBounds == 0 && y == 1) simply means every 2 horizontal pixels on the top of the window
    for (int i=W; i>0; i--) {
      for (int j=H; j>0; j--) {
        if (i == W && j % gridBounds == 0) {
          addBoundary(i,j,Resolution);
        }
        else if (i == 1 && j % gridBounds == 0) {
          addBoundary(i,j,Resolution);
        }
        else if (i % gridBounds == 0 && j == 1) {
          addBoundary(i,j,Resolution);
        }
        else if (i % gridBounds == 0 && j == H) {
          addBoundary(i,j,Resolution);
        }
      }
    }
  }

  void update() {
    // stock standard routine for looping through each particle and updating them.
    for (int i=0;i 1) {
      sepp = interact(particles_,1,false,-1,40);
      attracts = interact(particles_,.5,true,1,80);
      bounds = interact(boundaries_,2,false,-1,40);
      attracts.mult(drag);
      sepp.mult(drag);
      newLoc.add(attracts);
      newLoc.add(sepp);
      newLoc.add(bounds);
    }
    // newLoc.mult(drag);
    if (newLoc.x > W || newLoc.x < 0 || newLoc.y > H || newLoc.y < 0) {
      dead = true;
    }
  }

  void render(int mode) {
    noStroke();
    if (mode == 1) {
      fill(255,0,0);
      ellipse(newLoc.x,newLoc.y,Resolution*0.3,Resolution*0.3);
    } else {
      fill(255);
      ellipse(newLoc.x,newLoc.y,3,3);
    }
  }

  // here we have a neat little function which can tell us the
  // distance between each particle object
  // and then act on it
  PVec interact(ArrayList particles_, float multFact_, boolean drawMe_, int pushPull_, int separation_) {
    // we first set our upper limit
    // bio drawMe = drawMe_;
    float pushPull = pushPull_;
    float multFact = multFact_;
    float desiredseparation = separation_;
    // then for each particle
    for (int i = 0 ; i 0) && (p < desiredseparation)) {
      // draw a line between this object and that one
      // but only between our particles, not between the particles and the boundaries
      if (drawMe_) {
        strokeWeight(desiredseparation/(p*5));
        stroke(255-p);
        fill(255-p*4);
        line(newLoc.x,newLoc.y,ot.x,ot.y);
      }
      PVec diff = Sub(newLoc,other.newLoc, null);
      float diffMag = (float)sqrt(diff.x * diff.x + diff.y * diff.y + diff.z * diff.z);
      if ((diffMag != 0.0) && (diffMag != 1.0)) {
        diff.div(diffMag);
      }
      if (pushPull < 0) {
        diff.div(p);
        diff.mult(multFact);
        steer.add(diff);
      } else {
        diff.mult(0.1);
        diff.mult(multFact);
        steer.sub(diff);
      }
    }
    return steer;
  }

  // PVec converge(ArrayList particles_, float multFact_, int drawMe_, int pushPull_, int separation) {
  //   // we first set our upper limit
  //   float drawMe = drawMe_;
  //   float multFact = multFact_;
  //   float influence = separation;
  //   // then for each particle
  //   for (int i = 0 ; i 0) && (p < influence)) {
  //     // draw a line between this object and that one
  //     // but only between our particles, not between the particles and the boundaries
  //     if (drawMe == 1) {
  //       strokeWeight(influence/(p*5));
  //       stroke(255-p);
  //       fill(255-p*4);
  //       line(newLoc.x,newLoc.y,ot.x,ot.y);
  //     }
  //     PVec diff = Sub(newLoc,other.newLoc, null);
  //     float diffMag = (float)sqrt(diff.x * diff.x + diff.y * diff.y + diff.z * diff.z);
  //     if ((diffMag != 0.0) && (diffMag != 1.0)) {
  //       diff.div(diffMag);
  //     }
  //     diff.mult(0.1);
  //     diff.mult(multFact);
  //     steer.sub(diff);
  //   }
  //   return steer;
  // }

  // this function returns the current location of our particle
  PVec getLocation() {
    PVec tempLoc = newLoc;
    return tempLoc;
  }

  // this function tells us the distance between the location and a pvec
  float Dist(PVec v) {
    float dx = mLoc.x - v.x;
    float dy = mLoc.y - v.y;
    float dz = mLoc.z - v.z;
    return (float)Math.sqrt(dx * dx + dy * dy + dz * dz);
  }

  // this is lifted straight out of the pvector class.
  // it returns a new pvec object which represents the
  // distance (direction and length) between two PVec objects
  PVec Sub(PVec v1, PVec v2, PVec target) {
    if (target == null) {
      target = new PVec(v1.x - v2.x, v1.y - v2.y, v1.z - v2.z);
    }
    else {
      target.set(v1.x - v2.x, v1.y - v2.y, v1.z - v2.z);
    }
    return target;
  }
}

// all of this PVec code is taken from the PVector class written by Daniel Shiffman
// I've merely pulled out the bits we need to have a working PVec class which will run in a web browser window
// It's a bit of a hack, but it's worth demonstrating the way we can interact with java classes
class PVec {
  float x;
  float y;
  float z;

  PVec(float x_, float y_, float z_) {
    x = x_;
    y = y_;
    z = z_;
  }
  void add(PVec v) {
    x += v.x;
    y += v.y;
    z += v.z;
  }
  void div(float n) {
    x /= n;
    y /= n;
    z /= n;
  }
  void mult(float n) {
    x *= n;
    y *= n;
    z *= n;
  }
  void set(float x_, float y_, float z_) {
    x = x_;
    y = y_;
    z = z_;
  }
  void sub(PVec v) {
    x -= v.x;
    y -= v.y;
    z -= v.z;
  }
}

Post

★ pachube coffee and gps

5 comments

Below is a small experiment I’m currently running; very informally, I’m updating a Pachube feed (‘cups of coffee today’) with my current location data. It’s a bit of an odd experiment – I’m not really always drinking coffee, but once I set it up (using the Pachube iPhone app, which is pretty simple to use) I discovered that it logged the location of each coffee cup consumed. Not so exciting, I know, but I’m interested in the notion that my daily activities and movements can be logged, tracked in some sense and collected to form an image – be that automatically or manually logged data. It’s by no means new territory, but I’m keen to see how it all works out. There’s a chance that the ‘update feed by twitter’ tool will come in handy, but who knows at this point.

So as I said, it’s a bit of a mish-mash; however, there are a few things of interest. 1. I can log 0 cups of coffee, which will update the location but keep my caffeine intake low. 2. When I do have a cup of coffee the daily tally will increase and I’ll be able to see where and when it all happens (at work, that’s a no-brainer..). 3. Along comes this openstreetmap-based project from the Pachube apps page – Trails.

Have a look for yourself at just how well or unwell my coffee consumption is going over the last 24 hours. Hopefully, if you’re reading this, you’ll see a map below with a GPS overlay of location (x,y) and consumption (z) data. If not, then perhaps I’ve not updated in a while. Nonetheless, it’s got me thinking about these new consumer (no pun intended) tools which are now readily available – and free. Not sure where this will go, not exactly sure where it belongs, but it’s worth mentioning at least. // Drink up!
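
For anyone curious what an update like this involves under the hood, here’s a rough sketch of the kind of request the iPhone app is presumably making on my behalf. It assumes the Pachube v2 REST API (a PUT with an X-PachubeApiKey header) as I recall it; the feed ID, API key and the ‘coffee’ datastream name are placeholders, not my actual feed.

import java.net.HttpURLConnection;
import java.net.URL;
import java.io.OutputStream;

// push one 'cups of coffee' reading (plus a lat/lon) to a Pachube feed.
// FEED_ID (12345), YOUR_API_KEY and the datastream id "coffee" are hypothetical,
// and the JSON layout follows the v2 docs as best I remember them.
void updateCoffeeFeed(int cups, float lat, float lon) {
  try {
    URL url = new URL("http://api.pachube.com/v2/feeds/12345.json");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("PUT");
    conn.setRequestProperty("X-PachubeApiKey", "YOUR_API_KEY");
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);

    // feed body: current location plus the day's coffee tally
    String body = "{\"version\":\"1.0.0\","
      + "\"location\":{\"lat\":" + lat + ",\"lon\":" + lon + "},"
      + "\"datastreams\":[{\"id\":\"coffee\",\"current_value\":\"" + cups + "\"}]}";

    OutputStream out = conn.getOutputStream();
    out.write(body.getBytes("UTF-8"));
    out.close();

    println("Pachube responded: " + conn.getResponseCode());
    conn.disconnect();
  }
  catch (Exception e) {
    println("update failed: " + e.getMessage());
  }
}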

Post

★ fluid updated

4 comments

I’ve spent a bit more time cleaning up the fluid blobs examples I made last week, this time limiting the Region of Interest and fiddling with the fluid interaction.  Also newly included is a smarter way to interact with the blobs (in the code, I mean), pulling out more precise locational data.  I’ll be looking to mine this one a bit more extensively than I did with the filtration fields installation – and since I seem to be getting better now at things I was attempting before, this should be a lot more fun.

In the mix still is some video-over-network action, as well as potentially a database record of the motion over time.  I’d like to develop this as an interactive (from the visualisation point of view) interface where you could select a day, week or month and view the fluid ripples as they occur, like a fluid time-lapse of the actual motion from the courtyard.  We’ll see.

Fluid Blobs v2 from Jason McDermott on Vimeo.
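
To illustrate the Region of Interest idea in the simplest possible terms (this is not the actual patch, and the blob-tracking library itself is left out), the sketch below crops a sub-rectangle out of each camera frame before anything else happens, so the tracker only ever sees the part of the courtyard I care about. The ROI coordinates are invented for the example; it assumes the current Processing video library.

import processing.video.*;

Capture cam;
// hypothetical Region of Interest, in camera pixel coordinates
int roiX = 160, roiY = 120, roiW = 320, roiH = 240;

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  cam.start();
}

void draw() {
  if (cam.available()) {
    cam.read();
    // crop the frame to the ROI before any blob detection happens
    PImage roi = cam.get(roiX, roiY, roiW, roiH);
    // ...hand 'roi' to the blob tracker here...
    image(roi, 0, 0);
    noFill();
    stroke(255, 0, 0);
    rect(0, 0, roiW, roiH);  // show where the ROI sits
  }
}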

Post

★ Janus

1 comment

As part of the Smart Light Sydney Festival, May 2009, Tom Barker (Professor of Design, Architecture and Innovation at UTS) and Hank Haeusler (Post-Doctoral Researcher at UTS) were commissioned to design and produce an interactive light sculpture to be exhibited on the light walk in The Rocks.  The piece conceived by Tom was called Janus and was pitched to the SLSF body as:

a giant floating human face in The Rocks..inspired by Janus, the Roman god with two faces, Barker and Haeusler’s installation is part of their ongoing research into complex and non-standard media facades.  Janus uses social media and new technologies to engage the public and influence its art. Photovoltaic cells are used to power the installation.


The concept for the project was for the face sculpture to act as a mirror to the emotions of the city, as measured using the social media of MMS, email and blog updates.  Tom’s earlier research had led him to explore the nature of facial expressions and our abilities to read and emote via the expressive capabilities of our faces.  With this in mind, it was an interesting experiment – is it possible to measure, collect and respond to accumulated faces? Can you determine how happy a city is by watching its inhabitants’ facial expressions?

I was invited to join the project to handle the software design component, as Tom had seen some snippets of my interaction design work, as well as the work of my students in the computational environments class.  Naturally my first thought was to ask Frank Maguire if he was interested in joining me on the project – having worked with Frank on the Filtration Fields installation, his industrial design skills and generally snappy logical mind made him the perfect partner in crime.

The main crux of the project production from our end was in coding the algorithms which would translate images of faces into emotional readings (happy, sad, surprised, angry, fearful, disgusted and neutral), using these readings to trigger pre-recorded videos, and controlling the video output to a non-rectilinear array of 192 pixels.

Having worked frequently with camera images and facial emotions, I was confident in that component of the programming, as with the data munging and video triggers.  However, having never used more than 4 LEDs to output recorded/live video, I couldn’t guarantee the robustness of the display – but with such a challenge, how could I say no to the project!  After a few initial tests using a standard Arduino board in a non-standard manner, I had managed to get ~20 LEDs lighting up with varying PWM values and we were off and running.  It turned out that the technique I had tested was naughtily using the Arduino’s onboard resources and was not a sustainable way of outputting video – so we had to look elsewhere.  Options included using a daisy-chain of chips to multiply the output of an Arduino Duemilanove board, an Arduino Mega, and the PhidgetLED-64.
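
To give a sense of what ‘outputting video to LEDs’ means in practice, here is a minimal sketch of the general approach rather than the Janus code itself: downsample each video or camera frame into a small grid of brightness values, one per LED, which can then be pushed out as PWM levels. The 6 x 1 grid here is just a stand-in, echoing the tiny test matrix mentioned below.

import processing.video.*;

Capture cam;
int cols = 6, rows = 1;   // stand-in LED grid size, not the Janus layout

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  cam.start();
}

void draw() {
  if (cam.available()) {
    cam.read();
    int[] levels = frameToLevels(cam, cols, rows);
    // levels[] is what would be handed to the LED driver (Arduino, Phidget, etc.)
    println(levels);
  }
}

// average the brightness of each grid cell, returning one 0-255 value per LED
int[] frameToLevels(PImage frame, int cols_, int rows_) {
  frame.loadPixels();
  int[] levels = new int[cols_ * rows_];
  int cellW = frame.width / cols_;
  int cellH = frame.height / rows_;
  for (int cy = 0; cy < rows_; cy++) {
    for (int cx = 0; cx < cols_; cx++) {
      float sum = 0;
      for (int y = cy * cellH; y < (cy + 1) * cellH; y++) {
        for (int x = cx * cellW; x < (cx + 1) * cellW; x++) {
          sum += brightness(frame.pixels[y * frame.width + x]);
        }
      }
      levels[cy * cols_ + cx] = int(sum / (cellW * cellH));
    }
  }
  return levels;
}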

With project timelines fairly short, we opted for the output mode we felt would be simplest/most trusted/idiot-proof, which our experience told us would be the PhidgetLED-64.  The Phidget range of interface kits are bread and butter for the Interactivation Studio and for my computational environments students, and the PhidgetLED-64 claims a dedicated output of 64 PWM LEDs per board – which meant that we could order three and end up with spare LED output pins.

The face itself could then be split up into separate sections to be addressed individually by each Phidget board – the forehead, center and chin regions containing around 60 pixels each.  This allowed us to divide up the Phidget output coding into regions and simplify a bit of our output matrixing.  I’d spent some time earlier working with maxduino to get greyscale LED output from pixelated video (a matrix of 6 x 1 pixels!), and luckily I was able to put that patch to work with a little bit of scaling, upgrading to the required resolution.

The first issue we came to was the Phidget method of sending single-line matrices to the PhidgetLED-64 from top-left pixel to bottom-right pixel.  Since we were not working with a rectangular screen, each row of pixel data had to be offset from the starting 0 point, yet still line up with the neighbouring rows.
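
The fix is conceptually simple, if fiddly to get right. As a rough sketch (with made-up row offsets and lengths rather than the real Janus geometry), you keep a per-row start offset describing where the face outline begins, then walk the rows and pack only the pixels that actually exist into the flat top-left-to-bottom-right array each board expects:

// a made-up, non-rectangular pixel layout: for each row, where the face
// starts (offset from the left edge of the bounding grid) and how many
// pixels it contains - the real Janus geometry had ~192 pixels overall
int[] rowOffset = { 6, 4, 2, 1, 1, 2, 4, 6 };
int[] rowLength = { 4, 8, 12, 14, 14, 12, 8, 4 };

// pack a full rectangular brightness grid (indexed [row][column]) into the
// flat array a board expects, skipping every cell outside the face outline
int[] packForBoard(int[][] brightness) {
  int total = 0;
  for (int r = 0; r < rowLength.length; r++) total += rowLength[r];

  int[] flat = new int[total];
  int idx = 0;
  for (int r = 0; r < rowLength.length; r++) {
    for (int c = 0; c < rowLength[r]; c++) {
      flat[idx++] = brightness[r][rowOffset[r] + c];
    }
  }
  return flat;
}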

See also; http://vividsydney.com/ http://www.smartlightsydney.com/artists/barker-and-haeusler http://www.timeoutsydney.com.au/aroundtown/smart-light-sydney–vivid-sydney.aspx

Post

★ Filtration Fields

2 comments

Recently Joanne and I were given the opportunity to exhibit in the DAB Lab Research Gallery at UTS, in the Design, Architecture and Building faculty building – a chance to refine and showcase our collective research into realtime responsive architectural environments.

The filtration fields exhibition in the DAB Lab gallery was a realtime interactive installation using simple camera tracking to measure daily activity within the DAB courtyard.  The exhibition was a prototype test for ideas on the overlap of surveillance information and participation in architecture by its inhabitants.  Our premise for the installation was that the architecture of the DAB Lab gallery and surrounding courtyard space would be given eyes and ears, a brain to consider and a mouth to speak its mind.  The exhibition space of filtration fields was, unlike all pieces held in the DAB Lab, not the space of the gallery itself but the outside world upon which it had a threshold.  The silent box would become an active element in the architecture of the courtyard, no longer only passively inviting people inside but actively seeking to make its opinions known.  The void space of the courtyard would act as a performance stage for the activities and life of the DAB, and the natural bookend to the void was an appropriately matching wall of glass facing the space of the gallery.

The DAB gallery sits nestled under the canopy of one side of the DAB courtyard, standing as a window into another world, a place of existence in the imagined mind of another.  All of our experiences in the DAB Lab gallery were of surprise and delight; the little gallery had observed us and prepared something appropriate to show.

My initial thoughts for the piece revolved around an image I had imagined of the DAB Lab gallery space existing as a small part of a sensory system extending the fabric of the whole building – the glass wall fronting onto the courtyard was in fact the glass lens of a large and ever curious eye.  The rear wall of the gallery would be the retina upon which the useful information would be refracted and transferred for processing elsewhere.  Other senses of the building were to be placed in the surrounding architecture outside, remote senses (microphones as ears, light/temp/hum/vibration as skin) of a much larger organism.  Each of the senses would be dislocated but connected, each informing the other regarding the goings-on of people in the courtyard.

As the project took shape, it became clear that the focus of the exhibition should not only be the ‘eye’ of the DAB, but rather the effort to interpret the overlay of many eyes, ears and other senses into information, all representing the happenings in the courtyard.  The focus of the exhibition was not the DAB Lab itself, but the effect it could have on the lives of people moving through the space in-between.  Each of the glass wall panels would form opposing viewpoints on the courtyard, illustrating different relationships between the viewer/participant and the data they created.  The concept of the DAB as a semi-conscious entity gave us the notion of eyes (an overload of information, all visual and uninterpreted for meaning) and brains (filtered information, abstracted for patterns of activity).
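
The ‘simple camera tracking’ behind the piece can be approximated with a few lines of frame differencing – this isn’t the installation code, just a minimal sketch of the principle: compare each camera frame with the previous one and treat the proportion of changed pixels as a crude measure of activity in the courtyard. Camera size and the change threshold are arbitrary choices for the example.

import processing.video.*;

Capture cam;
PImage prev;

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  cam.start();
  prev = createImage(640, 480, RGB);
}

void draw() {
  if (cam.available()) {
    cam.read();
    cam.loadPixels();
    prev.loadPixels();

    // count pixels that changed noticeably since the last frame
    int changed = 0;
    for (int i = 0; i < cam.pixels.length; i++) {
      float diff = abs(brightness(cam.pixels[i]) - brightness(prev.pixels[i]));
      if (diff > 30) changed++;
    }
    float activity = changed / float(cam.pixels.length);

    image(cam, 0, 0);
    fill(255);
    text("activity: " + nf(activity, 1, 3), 10, 20);

    // keep this frame for comparison on the next pass
    prev.copy(cam, 0, 0, cam.width, cam.height, 0, 0, prev.width, prev.height);
  }
}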

More to come..

Post

★ Computational Environments ’09

Leave a reply

Gearing up for the first week of class for Computational Environments, the Master of Architecture design studio Joanne Jakovich, Bert Bongers and I will be teaching at UTS.  Last time round the studio culminated in the Skinform project, see below.

From tomorrow onwards we will be launching into a new semester, complete with a new brief, renewed vigour and even greater expectations.   We will be setting up a platform for the students to share and explain their work, so keep an eye out for that – I will post more details when they are at hand.  Looking forward to an exciting, thought-provoking and intensely productive semester!

Post

★ The Street as Platform

1 comment

The Street as platform – a street rendered in data.

November has been a busy month! Along with Anthony Burke, Dan Hill and Mitchell Whitelaw, I’ve been running an intensive masterclass studio in the Master of Digital Architecture program at UTS.  The masterclass is based on one of Dan’s earlier posts called The Street as Platform, in which the notion of the static street in contemporary urban planning and architecture is discussed as an anachronistic idea and one in dire need of reform.  The Street as Platform talks about the dynamically linked nature of the modern street, where mobile communication, ubiquitous computing and traditional number crunching merge as a new kind of informational street ecology that exists just outside of our normal consciousness.

As students and teachers of architecture, it could well be said that the dynamism of the street in its inhabitation and occupation is implicitly known and explored, but never clearly articulated as a driver – in its own right – of architectural decision making regarding form/content.

With this in mind, we set out to investigate the lived inhabitation of the street in an attempt to visualise and understand the hidden seams of activity – an attempt to make the invisible visible.

Along with Dan, Anthony and Mitchell, we had a selection of super keen students and a handful of sensor equipment with which we set about taming the data beast of Harris St.  Our aim was to produce some meaningful information, based on correlated data sets gleaned and generated from our surrounds.  The students searched for data relating to Harris St from a number of sources (Google, Flickr, YouTube, newsrolls, blogs) and then used Processing to scrape, munge and visualise the data.  Also included in the mix were a number of sensors we wired up to collect site-specific data such as light/temperature/humidity/rainfall levels over the last week, Bluetooth devices in the vicinity, webcam images from the street, as well as audio readings and a magnetic sensor.
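
As a flavour of the kind of Processing work involved, a minimal sketch along these lines (the filename, column layout and 0–50 value range are invented for the example, not the students’ actual feeds) reads a log of sensor values and draws them as a simple time-series:

float[] readings;

void setup() {
  size(800, 200);
  // hypothetical log: one value per line, e.g. hourly temperature readings
  String[] lines = loadStrings("harris-st-temperature.csv");
  readings = new float[lines.length];
  for (int i = 0; i < lines.length; i++) {
    readings[i] = float(trim(lines[i]));
  }
}

void draw() {
  background(0);
  stroke(255);
  noFill();
  beginShape();
  for (int i = 0; i < readings.length; i++) {
    float x = map(i, 0, readings.length - 1, 0, width);
    float y = map(readings[i], 0, 50, height, 0);  // assumed 0-50 value range
    vertex(x, y);
  }
  endShape();
}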

All up, the live data feeds were a bit of a mixed bag with plenty of teething problems, but over the next fortnight these issues should be sorted.

The students presented their work on Friday to an invited panel including Marcus Trimble, Andrew Vande Moere and Kirsty Beilharz, one of our new professors in Design at UTS.  The presentations went very well, showcasing some very good work and sparking much discussion amongst the invited guests.  The students have diligently been updating a blog with images of the process work and sketch ideas throughout the last two weeks, which can be found at http://streetasplatform.wordpress.com.  The studio will be exhibiting some of the work at the upcoming UTS Architecture exhibition on the 4th of December, so come and see some of the live feeds being visualised on the night.

See also; http://offshorestudio.net/ http://cityofsound.com/ http://theteemingvoid.com/