From MIT’s Media Lab, the IO Brush (check out the video at the link)
Also from the Media Lab, Topobo
Topobo is a 3D constructive assembly system with kinetic memory: the ability to record and play back physical motion. Unique among modeling systems are Topobo’s coincident physical input and output behaviors. By snapping together a combination of Passive (static) and Active (motorized) components, people can quickly assemble dynamic biomorphic forms like animals and skeletons with Topobo, animate those forms by pushing, pulling, and twisting them, and observe the system repeatedly play back those motions. For example, a dog can be constructed and then taught to gesture and walk by twisting its body and legs. The dog will then repeat those movements and walk on its own. In the same way that people can learn about static structures by playing with building blocks, they can learn about dynamic structures by playing with Topobo.
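The record-and-playback idea is simple enough to sketch. Here’s a toy Python version of "kinetic memory" under my own assumptions (class names, sampling, and the degree values are all hypothetical, not Topobo’s actual firmware): an Active component samples joint angles as the user twists it, then loops them back.

```python
# Hypothetical sketch of Topobo-style "kinetic memory": an Active
# (motorized) component records the joint angles a user poses it
# through, then replays them in a loop.

class ActiveComponent:
    def __init__(self):
        self.recorded = []   # sampled joint angles, in degrees

    def record(self, angle):
        """Sample the current joint angle while the user poses the toy."""
        self.recorded.append(angle)

    def playback(self, cycles=1):
        """Return the motion sequence the motor would drive through,
        repeated for the given number of cycles."""
        frames = []
        for _ in range(cycles):
            frames.extend(self.recorded)
        return frames

leg = ActiveComponent()
for angle in [0, 15, 30, 15, 0, -15, -30, -15]:  # user twists the leg
    leg.record(angle)
walk = leg.playback(cycles=2)   # the dog repeats the gait twice
```

In the real toy the "playback" step drives physical motors, which is what makes the input and output coincident: the same limb you twisted is the one that moves.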
Also from the Media Lab, Audiopad. The most common theme I saw was tabletops of one type or another, which did interesting things as people manipulated the surface of the table, or objects on it. Audiopad was one of these: a techno music composition system based on manipulating objects on the surface of the table, with a projector providing cues and feedback directly on the surface.
From the University of Tokyo, a Forehead Retina System
A small camera and 512 forehead-mounted electrodes capture the frontal view, extract outlines, and convert the data to tactile electrical stimulation. The system is primarily designed for the visually impaired, but it can be a third eye for users with normal sight.
That really doesn’t do justice to the importance of this. A headband with an array of electrodes rests on the forehead. A small camera provides a real-time video feed, which the system processes to extract shapes and outlines. Those shapes are used to activate the electrode array, translating the video signal into a tactile sensation the wearer feels on the forehead. Yes, it allows blind people to perceive objects in front of them. How’s this for a worthwhile project?
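To make the pipeline concrete, here’s a hedged sketch of the camera-to-electrodes idea in Python. The 512-electrode count comes from the project description above; everything else (the 16×32 grid shape, the brightness threshold, the edge detector) is my assumption, not the actual system’s design.

```python
# Sketch of a video-to-tactile pipeline: extract outlines from a
# grayscale camera frame, then downsample them onto a small electrode
# grid (16 x 32 = 512 electrodes, per the report's electrode count).
# Grid shape and thresholds are assumptions, not the real parameters.

def extract_edges(frame):
    """Crude outline extraction: mark pixels whose right or lower
    neighbor differs sharply in brightness."""
    h, w = len(frame), len(frame[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            if (abs(frame[y][x] - frame[y][x + 1]) > 40 or
                    abs(frame[y][x] - frame[y + 1][x]) > 40):
                edges[y][x] = 1
    return edges

def to_electrodes(edges, rows=16, cols=32):
    """Downsample the edge map to the electrode grid: an electrode
    fires if any edge pixel falls in its cell."""
    h, w = len(edges), len(edges[0])
    grid = [[0] * cols for _ in range(rows)]
    for y in range(h):
        for x in range(w):
            if edges[y][x]:
                grid[y * rows // h][x * cols // w] = 1
    return grid
```

Each 1 in the resulting grid would trigger a brief electrical pulse at that forehead electrode, so the wearer feels the outline of whatever is in front of the camera.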
According to a 2003 World Health Organization report, up to 45 million people are totally blind, while 135 million live with low vision. However, there is no standard visual substitution system that can be conveniently used in daily life. The goal of this project is to provide a cheap, lightweight, yet fully functional system that provides rich, dynamic 2D information to the blind.
From Mitsubishi Electric Research Lab, Submerging Technologies
These made people smile more than anything else there, I think. They’re so new that it’s difficult to find information or photos of them online. There were three displays in the set. First, a tabletop filled with water, with a video display below it. Little triangles like paper airplanes would move and flow on the display surface – but their movements were a realistic result of manipulating the water in the tank. Make a wave and the shapes are pushed away. I’m not doing it justice! Second was a water harp built of musical “strings,” which were actually streams of water – interrupt one of the stream/strings and a musical note sounds. Run your hand down the device and it’s like running your hand across a piano’s keys. Lots of smiles from this one, too. Last was a “tantalizing fountain.” By default it sprayed water in a kind of half-dome shape down into the pool. But as you moved your hand forward to touch the water, the shape of the water spray would adjust, to keep the stream of water away from your approaching hand! Paul Dietz is the senior researcher on these projects. He was there joking about how wet they all got as they were “debugging” the displays. First time I’ve heard of those two concepts going together, lol. I suspect you really like your job, Paul? 🙂
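The water harp’s logic is charmingly simple, so here’s a playful sketch of it. The stream count and the scale of notes are invented for illustration; the real installation’s mapping isn’t documented here.

```python
# Toy sketch of the water-harp idea: each stream of water acts as a
# "string," and breaking its beam sounds the corresponding pitch,
# like keys laid out across a piano. Notes and stream count assumed.

NOTES = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5"]  # assumed scale

def notes_for(streams_broken):
    """Given which streams a hand is currently interrupting,
    return the notes that should sound."""
    return [NOTES[i] for i, broken in enumerate(streams_broken) if broken]

# Running a hand across the device breaks adjacent streams:
sweep = notes_for([False, True, True, False, False, False, False, False])
```

Sweeping a hand down the whole device would break each stream in turn, producing the glissando effect described above.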
Last is from the University of Tokyo and NTT, Tablescape Plus
A new type of display system for digital kiosks, multiple-aspect viewers, and tabletop theater that uses placed objects as projection screens and input devices.
Once again, this is such new stuff there’s nothing about it online yet, and that description doesn’t really do it justice. Let me try to describe a couple of the interesting examples I saw. Picture a fully rendered 3D car model displayed on a flat screen TV. In front of it on the flat tabletop is a physical model of the same car. Want to move or turn the car on the screen? OK, move or turn the car on the tabletop. Presto, the car moves to match, on the screen. The more complex example had geometric shapes, a balloon model, and a couple of other items – again on the screen, and on the flat tabletop. The screen display was enough like the table display that when someone reached a hand in to move an object on the table, the natural reaction was to look for the hand on the screen display – but of course it was not there. And that, of course, is the point.
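The core trick reads like a tracking-and-mirroring loop, which can be sketched in a few lines. This is a minimal sketch under my own assumptions (the class and function names are invented; the real system’s tracking and rendering aren’t documented here): a tracker reports each tabletop object’s position and rotation, and every frame the renderer copies that pose onto the matching virtual model.

```python
# Minimal sketch of the pose-mirroring idea: each render frame, copy
# every tracked tabletop object's pose onto its virtual counterpart,
# so moving or turning the physical car moves the on-screen car.
# All names here are hypothetical, not Tablescape Plus's actual API.

import math

class VirtualModel:
    def __init__(self, name):
        self.name = name
        self.x = self.y = self.heading = 0.0

    def set_pose(self, x, y, heading):
        self.x, self.y, self.heading = x, y, heading

def mirror_frame(tracked_poses, scene):
    """tracked_poses maps object name -> (x, y, heading) as reported
    by the tabletop tracker; scene maps name -> VirtualModel."""
    for name, (x, y, heading) in tracked_poses.items():
        scene[name].set_pose(x, y, heading)

scene = {"car": VirtualModel("car")}
# User turns the physical car 90 degrees and slides it on the table:
mirror_frame({"car": (0.30, 0.12, math.radians(90))}, scene)
```

Run at display refresh rate, this is all it takes for the on-screen car to shadow the physical one, which is why the rendering felt completely live.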
The rendering on the screen was completely real-time. The applications of this for gaming are maybe the most obvious. How about remote gaming with friends, with a virtual tabletop in the middle where we see our combined game pieces?
Another really obvious application of this? Machinima! Think of using this to do a kind of digital claymation. After all, the things rendered on the digital display don’t have to match the physical objects on the tabletop. Take it further so that the rendered movements include natural motion (like steps forward when an avatar moves forward, or the swing of a sword as my knight takes your cowering bishop). Innnnteresting!