I’ve been playing with Arduino and I have burns from my soldering iron to prove it. The end result is that I now have a talking house.
I built a light-sensor “sentry”. He has an LCD that he uses to communicate with the people around him, and a network connection that effectively extends his ability to talk to anyone or anything that can connect to the Internet.
He’s not a very good listener - but that’s another project.
I gave him a sleep/wake up sequence so he says “good night” and goes to sleep if the light drops below a certain level or says “hello” and wakes up when the light comes back on again. I gave him a pain threshold so that if the light gets too bright it hurts him and he complains.
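The sleep/wake and pain behaviour boils down to a small piece of threshold logic. Here's a host-side Python simulation of it - the actual thresholds and messages are hypothetical stand-ins for whatever constants live in the real Arduino sketch, and I've given the sleep and wake levels a gap so he doesn't flap between states at dusk:

```python
# Simulation of the sentry's state logic. All thresholds are
# hypothetical; in the real sketch they'd be tuned to the sensor.
SLEEP_BELOW = 100   # light level below which he says "good night"
WAKE_ABOVE = 150    # level above which he wakes (gap avoids flicker)
PAIN_ABOVE = 900    # level at which the light "hurts"

class Sentry:
    def __init__(self):
        self.awake = True

    def update(self, light):
        """Return what he says for this reading, or None if nothing."""
        if self.awake and light < SLEEP_BELOW:
            self.awake = False
            return "good night"
        if not self.awake and light > WAKE_ABOVE:
            self.awake = True
            return "hello"
        if self.awake and light > PAIN_ABOVE:
            return "ow! too bright!"
        return None

s = Sentry()
print(s.update(50))    # light drops -> "good night"
print(s.update(500))   # light returns -> "hello"
print(s.update(950))   # painfully bright -> "ow! too bright!"
```

The gap between `SLEEP_BELOW` and `WAKE_ABOVE` is simple hysteresis: without it, a light level hovering around a single threshold would have him saying good night and hello over and over.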
He sends his data out to Pachube so that I (or anyone else for that matter) can use the data he creates for other things.
I’ve got him connected to the private Twitter feed coming from my house. This feed is the aggregated sensor data (light, temperature and power consumption) from the sentry I made and a Current Cost Envi I have in my living room. There’s also the carbon footprint calculation coming from the Pachube Carbon Footprint calculator, and my MSN status.
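Before anything is posted, the separate readings get folded into one status line in the house's single voice. A sketch of that composition step might look like this - the field names and wording are hypothetical, and the actual posting would go through Twitter's API separately:

```python
# Compose one aggregated status line from whatever readings arrived.
# Field names, units and phrasing are all hypothetical.
def compose_status(readings):
    parts = []
    if "light" in readings:
        parts.append("light is {}%".format(readings["light"]))
    if "temperature" in readings:
        parts.append("it's {}C in the living room".format(readings["temperature"]))
    if "power" in readings:
        parts.append("we're drawing {}W".format(readings["power"]))
    return "Home Unit 1: " + ", ".join(parts)

print(compose_status({"light": 62, "temperature": 21.5, "power": 430}))
```

Keeping the composition in one place is also what makes it easy to experiment later with how human the messages feel.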
Whilst doing this project it’s become abundantly clear that Twitter is a great channel for smart objects to use to communicate with people. I know there are a number of Twitter-connected objects already, but something really interesting happens psychologically when it’s your own house talking to you.
At the moment all the sensor data is aggregated into this single “voice” that I’ve called Home Unit 1 for the time being. It’s interesting that even though I know it’s the output of multiple distributed sensors, it feels like a single entity - a feeling that became much more pronounced when I made the messages sound more human.
So my house talks, and what it says appears alongside all the things my friends are saying. I can listen to my house from anywhere with a network connection.
This is great, but it raises some interesting questions.
Mainly, I’m not sure whether the single voice approach is a good thing. A house that appears to be run by an omnipresent robot like HAL might feel a bit threatening, so I’d like to try out individual feeds for each sensor to see if that feels a little less dominating - or just too noisy. I’m also going to experiment with the frequency of the messages and the language they use to see how that affects things.
I’m sure there’s a sweet spot where it all feels quite natural, and I have a feeling it lies somewhere close to the metaphor of a butler and his servants: the servants tell the butler everything, but the butler only tells you when you really need to know something. You wouldn’t actively listen to the servants, but if there was a problem you could bypass the butler and hear what they were saying directly.
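The butler-and-servants idea translates quite directly into a filtering layer. Here's a minimal sketch of it - every sensor ("servant") logs everything it sees, but the butler only passes on what crosses an importance threshold, and you can always bypass him and read a servant's raw log. Names, messages and the threshold are all hypothetical:

```python
class Servant:
    """A sensor that reports everything and keeps its own raw log."""
    def __init__(self, name):
        self.name = name
        self.log = []

    def report(self, message, importance):
        self.log.append(message)
        return (self.name, message, importance)

class Butler:
    """Relays only reports that cross the importance threshold."""
    def __init__(self, threshold):
        self.threshold = threshold

    def relay(self, report):
        name, message, importance = report
        if importance >= self.threshold:
            return "{}: {}".format(name, message)
        return None  # routine chatter stays below stairs

light = Servant("light sensor")
butler = Butler(threshold=5)

print(butler.relay(light.report("level steady at 60%", importance=1)))
print(butler.relay(light.report("suddenly dark at midday", importance=8)))
print(light.log)  # bypassing the butler: the full raw record
```

The nice property is that tuning the sweet spot becomes a single number: raise the threshold and the house gets quieter, lower it and you hear the servants' chatter.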
The concept of the talking house is fascinating. I wonder how this would scale. Talking houses or buildings being aggregated into talking streets, talking streets into talking boroughs and talking boroughs into talking cities, countries, continents etc.
So anyway this is “sensing and talking”. The next step is “listening and doing” and I have no idea what I’m going to do yet - but I know it’s going to be great fun.