LAMP, a collaborative project by Shanshan Zhou, Adam Ben-Dror, Joss Doggett.
Created for MDDN 251: Physical Computing, Project 3: Animatronics.
Created with the open-source frameworks Arduino, Processing, and OpenCV. A big thank you to the open-source community and other open-source robotics projects.
Special thanks to: Douglas Easterly, Hadley Boks-Wilson
Victoria University of Wellington, 2012
Lampy WIP2 from Shanshan on Vimeo.
A work-in-progress video showing how the lamp behaves based on what it sees through computer vision.
Logic of the lamp
The lamp checks a series of boolean flags to work out what situation it is in and what it should do about it. It goes through the following steps, checking the following flags:
First, the lamp checks whether it has ever been switched on. We want to give the person a surprise, so the lamp should not move at all until it has been switched on for the first time. I named this flag “PowerOn”.
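A minimal sketch of this flag in plain C++ (the name PowerOn comes from the text above; the struct and method names are my own illustration, not the project's actual code):

```cpp
// "PowerOn" latches true the first time the switch is flipped on;
// until then the lamp suppresses all movement so the surprise is kept.
struct PowerOnFlag {
    bool value = false;

    // Call once per update with the current switch reading.
    // Returns true once the lamp is allowed to move.
    bool update(bool switchIsOn) {
        if (switchIsOn) value = true;  // latch on the first switch-on
        return value;                  // stays true even if switched off again
    }
};
```

The important property is the latch: once PowerOn is true it never goes back to false, so the lamp keeps moving even while someone holds its switch off.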
Then the lamp checks whether someone has just flicked its switch. If it has just been switched off, it performs a sequence of motions to flick its switch back on. If nobody has flicked its switch off, it goes into its dynamic mode: following a human face, or searching for one.
If the computer vision finds a human face, the lamp enters follow-face mode. If it doesn’t find any face, the lamp looks around until it finds one.
If the lamp is in a conservative mood, it folds up; if it is in a more extroverted mood, it stretches out. We were attempting to use the book to trigger this mood change. That part isn’t really functioning yet: the one big servo that would let the lamp stretch is broken, and the methods we tried for tagging the book weren’t reliable enough.
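The steps above can be sketched as a single decision cascade. This is plain C++ rather than the Arduino/Processing split the real lamp uses, and every flag, function, and action name here is my own illustration; I have also treated the mood as shaping the search posture, which is one possible reading of the text:

```cpp
#include <string>

// Illustrative flags; in the real lamp the vision flags come from
// Processing + OpenCV and the motion runs on Arduino.
struct LampFlags {
    bool powerOn = false;           // has the lamp ever been switched on?
    bool switchFlickedOff = false;  // did someone just flick the switch off?
    bool faceVisible = false;       // does the computer vision see a face?
    bool extrovertMood = false;     // conservative (fold) vs extrovert (stretch)
};

// One update of the lamp's top-level logic; returns the action to perform.
std::string decideAction(LampFlags& f, bool switchIsOn) {
    // Step 1: stay frozen until the lamp is switched on for the first time.
    if (switchIsOn) f.powerOn = true;
    if (!f.powerOn) return "stay still";

    // Step 2: if someone flicked the switch off, flick it back on.
    if (f.switchFlickedOff) return "flick switch back on";

    // Step 3: follow a face if one is visible, otherwise search for one.
    if (f.faceVisible) return "follow face";

    // Step 4: the mood decides the posture while searching
    // (assumed placement; the mood trigger isn't working yet).
    return f.extrovertMood ? "search, stretched out" : "search, folded";
}
```

Ordering matters: the switch check comes before the face check, so the lamp always restores its own power before going back to watching people.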