Conference 5–9 August 2012
Exhibition 7–9 August 2012
Los Angeles Convention Center

Emerging Technologies

Sunday, 5 August 12:00 AM - 12:00 AM | Los Angeles Convention Center, Concourse Foyer

3D Capturing Using Multi-Camera Rigs, Real-Time Depth Estimation, and Depth-Based Content Creation for Multi-View and Light-Field Auto-Stereoscopic Displays

The wide variety of commercially available and emerging 3D displays -- such as stereoscopic, multi-view, and light-field -- makes content creation for these displays challenging. This project presents a generic method for capturing and rendering live 3D footage for 3D displays. The system features several innovative components: a professional-grade multi-camera assistance and calibration system, a real-time depth estimator that produces convincing depth maps, a real-time and generic depth-image-based rendering (DIBR) engine that is suitable for generating imagery for a range of 3D displays, and the world's first front-projected 140-inch light-field, glasses-free 3D cinema display system.

The contribution of this system is two-fold:

• It demonstrates 3D image generation and display based on sparse multi-camera input.

• The system's generic multi-view-plus-depth (MVD) representation can serve as the future 3DTV format, in line with MPEG’s efforts in 3DTV.

The system is based on work performed in the MUSCADE European FP7 project, the objective of which is to define, develop, validate, and evaluate technological innovations in 3DTV capturing, data representation, compression, transmission, and rendering and display adaptations required for a technically efficient and commercially successful 3DTV broadcast system.
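
Because the MVD representation and the DIBR engine are the system's central rendering ideas, a compact illustration may help. Below is a minimal sketch of depth-image-based rendering for a rectified camera setup, assuming per-pixel depth in meters and a horizontal-parallax-only display; function and parameter names are illustrative, not taken from the MUSCADE system.

    import numpy as np

    def render_virtual_view(image, depth, focal_px, baseline_m, view_offset):
        """Forward-warp `image` into a virtual view shifted by `view_offset`
        (in units of the camera baseline, e.g. 0.5 = halfway)."""
        h, w = depth.shape
        out = np.zeros_like(image)
        zbuf = np.full((h, w), np.inf)        # keep the nearest surface per pixel
        disparity = focal_px * baseline_m / np.maximum(depth, 1e-6)
        for y in range(h):
            for x in range(w):
                xv = int(round(x - view_offset * disparity[y, x]))
                if 0 <= xv < w and depth[y, x] < zbuf[y, xv]:
                    zbuf[y, xv] = depth[y, x]  # z-test resolves occlusions
                    out[y, xv] = image[y, x]
        return out  # disocclusion holes remain black; real DIBR engines inpaint them

Sweeping view_offset across a range of values from one image-plus-depth pair is how such an engine can feed stereoscopic, multi-view, and light-field displays from the same MVD input.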

Peter Tamas Kovacs
Holografika Kft.

Frederik Zilly
Fraunhofer Heinrich Hertz Institute

Combiform: Beyond Co-Attentive Play, a Combinable Social-Gaming Platform

Communal casual games (CCG) are a game genre that draws players' attention more to other players than to a virtual world. The genre combines digital-game approaches with social physical play in the play community. Games in this category rely on coLiberation (group flow) rather than individual flow to maximize players' intrinsic motivation. Players care less about game objectives or game experiences than about creating a fun-oriented community. Most such games involve physical touch, but this is not a requirement. In fact, cheating, performing, and physical contact are all common practice in CCG.

Combiform is the first digital-game platform that is especially designed for communal casual game practice. The hardware forces players to focus on other players, embracing their physical bodies as part of the game to enhance the big WE in coLiberation. The LEDs make the game’s feedback apparent not just to individual players, but also to the play community. Motion sensing is a classic mimetic control choice that emphasizes the events in player space but not screen space. Having only one big button and one big knob encourages well-timed cheats and makes individual actions very visible to the rest of the group, allowing easy interruption of those actions during play.

This is a unique project because it is not merely using novel technologies to promote new forms of interaction during play. It also endorses a movement, an emergent change in the game-design community. Combiform takes advantage of its unique hardware design to further a new perspective on digital games that uses old rules to create new types of play.

Edmond Yee
University of Southern California

Tai An
University of Southern California

Andrew Dang
University of Southern California

Josh Joiner
University of Southern California

Andy Uehara
University of Southern California

Shader Printer

This novel stateful projector display uses bi-stable color-changing inks. It augments non-planar complex painted surfaces by projecting high-resolution, rewritable imagery that persists on target surfaces without requiring additional power.

The concept is useful for various applications that are not supported by traditional displays or fabrication technologies. For example, fashion items such as shoes or bags, and architectural elements such as wallpaper and floors, can be updated frequently. Design prototypes can be tested outside the laboratory environment, even as outdoor displays. Another application could be very large, high-resolution information displays that are constantly updated.

Daniel Saakes
Japan Science and Technology Agency

Masahiko Inami
Japan Science and Technology Agency and Keio University

Takeo Igarashi
Japan Science and Technology Agency and The University of Tokyo

Naoya Koizumi
Keio University

Ramesh Raskar
Massachusetts Institute of Technology

TECHTILE Toolkit

TECHTILE is a fundamental concept that combines “TECHnology” with “tacTILE” perception and expression. Its aim is to establish haptic technologies as the third medium in the fields of art, design, and education, and extend the conventional definition of multimedia. Many haptic devices have been proposed, but most of them are still in the emerging stages of development. To attract the interest of potential users of haptics such as designers, educators, and students, it is necessary to provide easy-to-make and easy-to-use haptic devices. The TECHTILE Toolkit is an introductory haptic rapid-prototyping device that fulfills this requirement.

The current prototype is composed of a haptic recorder (microphone), several haptic reactors (small vibrators), and a signal amplifier optimized to present not only the zone of audibility (20-20,000 Hz), but also low-frequency (1-20 Hz) vibration. Although the toolkit is intuitive to use and inexpensive, it can deliver highly realistic haptic sensations. For example, to deliver the haptic sensation of glass balls in a cup, you simply attach the haptic recorder to the bottom of a paper cup with Scotch tape and the haptic reactor to the bottom of another cup. When you drop balls into the cup with the haptic recorder, the haptic sense of the balls' collision and rotation is copied to the other cup in real time. It is also possible to record the haptic signal in the audio track of an MPEG-4 movie through the toolkit's USB port and play it back with video and sound, so users can upload original haptic content to YouTube, Skype, or Ustream to share haptic sensations worldwide.
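
Since the toolkit treats haptics as an audio-band signal chain, the record-and-replay loop can be illustrated in a few lines. This is a minimal sketch assuming the haptic recorder and reactor appear as ordinary audio input/output devices; the third-party sounddevice library is an assumption, not part of the toolkit.

    import sounddevice as sd

    RATE = 44100      # one PCM stream carries both the audible band and,
    DURATION = 5      # with a suitable amplifier, the 1-20 Hz components

    # Record the vibration picked up at the bottom of the source cup.
    signal = sd.rec(int(RATE * DURATION), samplerate=RATE, channels=1)
    sd.wait()

    # Replay it through the vibrator attached to the other cup.
    sd.play(signal, samplerate=RATE)
    sd.wait()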

Kouta Minamizawa
Keio University

Yasuaki Kakehi
Keio University

Masashi Nakatani
Keio University

Soichiro Mihara
Yamaguchi Center for Arts and Media

Susumu Tachi
Keio University

Hand-Rewriting: Automatic Rewriting Like Natural Handwriting

Approaches that combine the act of handwriting with computer simulation, such as the “pen tablet” and the “digital pen”, are gaining acceptance. In contrast to those two approaches, this system automatically performs “rewrite” processing on paper in correspondence with pen-and-paper handwriting.

The Hand-Rewriting system combines two technical innovations:

1. A function that automatically erases specific areas of the paper, without the need for an eraser, when the user writes letters or draws pictures on the paper. Letters and pictures written in thermochromic ink are erased locally in this manner by laser-generated thermal conversion.

2. A function that can repeatedly display additional related information on the paper, in color, when the user writes letters or sketches pictures on the paper. The local colors on the paper are created by projecting an ultraviolet (UV) pattern from a UV projector onto paper coated with photochromic material (PM).

With those control technologies, three types of interactive applications of the Hand-Rewriting system have been developed:

1. When the user writes something by hand, the system automatically erases parts of the characters, and the effect is to transform the letters into ornamental writing.

2. When the user sketches something by hand, the system automatically colors the interior of the outline sketch, and the effect is to replicate the sketch itself in the manner of a stamp.

3. When the user makes a mistake while writing, the system automatically erases the incorrect part and displays a guide giving the correct entry in color.

With these applications, users can enjoy performing various creative activities by freely using both kinds of control.

Tomoko Hashida
The University of Tokyo

Takeshi Naemura
The University of Tokyo

Kohei Nishimura
The University of Tokyo

REVEL: A Tactile Feedback Technology for Augmented Reality

REVEL is an augmented reality (AR) tactile technology that allows users wearing an interactive device to change the tactile feeling of real objects by augmenting them with virtual tactile texture. Unlike previous attempts to enhance AR environments with haptics, REVEL neither physically actuates objects nor uses any force- or tactile-feedback devices, and it does not require users to wear tactile gloves or other equipment on their hands. Instead, it employs the principle of reverse electrovibration to inject a weak electrical signal anywhere on the user's body, creating an oscillating electrical field around the user’s fingers. As a finger slides on the surface of an object, the user perceives highly distinctive tactile textures that augment the physical object. By tracking the objects and the location of the touch, the system associates dynamic tactile sensations with the interaction context.

REVEL is built upon previous work on designing electrovibration-based tactile feedback for touch surfaces [Bau et al. 2010]. This project expands tactile interfaces based on electrovibration beyond touch surfaces and brings them into the real world.
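
The published electrovibration work indicates that the perceived texture depends on the temporal frequency of the drive signal, which is the product of a virtual texture's spatial frequency and the finger's sliding speed. A minimal sketch of that mapping follows; the names and constants are illustrative, not Disney's implementation.

    import numpy as np

    def drive_buffer(finger_speed_mm_s, spatial_period_mm, amplitude,
                     rate_hz=10000, dur_s=0.02):
        """Synthesize one 20 ms buffer of the modulation waveform for a
        virtual grating, given the tracked finger speed this frame."""
        f_temporal = finger_speed_mm_s / spatial_period_mm  # cycles per second
        t = np.arange(int(rate_hz * dur_s)) / rate_hz
        return amplitude * np.sin(2 * np.pi * f_temporal * t)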

Olivier Bau
Disney Research, Pittsburgh

Ivan Poupyrev
Disney Research, Pittsburgh

Mathieu Le Goc
Disney Research, Pittsburgh

Laureline Galliot
Disney Research, Pittsburgh

Matthew Glisson
Disney Research, Pittsburgh

Tensor Displays: Compressive Light-Field Synthesis Using Multilayer Displays With Directional Backlighting

This project introduces a family of compressive light-field displays that employ a stack of time-multiplexed, light-attenuating layers illuminated by uniform or directional backlighting (any low-resolution light-field emitter). The light field emitted by an N-layer, M-frame tensor display can be represented by an Nth-order, rank-M tensor. With this representation, the project introduces a unified optimization framework, based on nonnegative tensor factorization (NTF), encompassing all tensor display architectures. This framework is the first to allow joint multilayer, multiframe light-field decompositions, which significantly reduce the artifacts observed with prior multilayer-only and multiframe-only decompositions. It is also the first optimization method for designs combining multiple layers with directional backlighting.

The prototype presented at SIGGRAPH 2012 demonstrates the benefits and limitations of tensor displays using modified LCD panels and a custom integral-imaging backlight. An efficient, GPU-based NTF implementation enables interactive applications. Simulations and experiments show that tensor displays enable practical architectures with greater depth of field, wider fields of view, and thinner form factors than prior automultiscopic displays.
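
For intuition, consider the simplest member of the family: two attenuating layers and M time-multiplexed frames. Each light ray is indexed by the rear-layer pixel a and front-layer pixel b it crosses, so the target light field becomes a nonnegative matrix L[a, b], and an M-frame decomposition is a rank-M nonnegative factorization L ≈ AB. The sketch below uses standard multiplicative updates and is a simplification of the paper's general N-layer tensor formulation, not the authors' GPU code.

    import numpy as np

    def factor_two_layer(L, M, iters=200, eps=1e-9):
        a_pix, b_pix = L.shape
        A = np.random.rand(a_pix, M)    # rear-layer patterns, one column per frame
        B = np.random.rand(M, b_pix)    # front-layer patterns, one row per frame
        for _ in range(iters):
            # Multiplicative updates keep every entry nonnegative, as
            # physical light-attenuating layers require.
            A *= (L @ B.T) / (A @ B @ B.T + eps)
            B *= (A.T @ L) / (A.T @ A @ B + eps)
        return np.clip(A, 0, 1), np.clip(B, 0, 1)

Displaying the M frame pairs fast enough for the eye to integrate them reproduces the target light field in the time average.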

Matthew Hirsch
MIT Media Lab

Douglas Lanman
MIT Media Lab

Gordon Wetzstein
MIT Media Lab

Ramesh Raskar
MIT Media Lab

Ungrounded Haptic Rendering Device for Torque Simulation in Virtual Tennis

This haptic device, which looks and handles like a real tennis racket, is capable of rendering a variety of the strong forces and twisting torques experienced in real tennis.

Impact forces are common in real life, and our daily experience shows that our sensory system can discern the variations in these vibrations. Common vibration-motor-based haptics cannot create the force magnitude, the variations in frequency and waveform, or the immediacy of such impacts. This prototype device overcomes those deficiencies to enhance the user experience.

The primary innovation is the A-shaped vibration element. One lightweight high-power push-pull solenoid actuator is mounted to each slanted side of the A. Part of the handle of the racket houses that vibration element, while the rim is mounted to the horizontal beam of the A. In this way, the racket is divided into two flexibly coupled moving masses. The low-mass rim has a large moment of inertia and acts as an effective counterweight to generate torque. Peak torque is generated within 10 ms while the prototype is freely swung in the air, without being attached to the ground or large masses. The actuator on each slanted side of the A can render large accelerations of varying frequencies and waveforms. Actuating both sides in phase simulates a direct hit on the racket's center, while out-of-phase actuation simulates the torques experienced in off-center impacts. The device generates up to 0.4 Nm of torque for simulating ball impacts, and 0.2 Nm about the long axis of the handle to simulate twisting in the player's grip. Tracking of the user's head allows for projective mapping during graphics rendering, so that the 3D trajectory of the ball can be perceived, and adds visual cues to improve the haptic effect.
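
The in-phase versus out-of-phase actuation scheme is easy to picture as two drive waveforms. The sketch below is illustrative (the waveform shape and decay constant are assumptions, not the authors' firmware): flipping the sign of one channel converts a center-hit vibration into a net torque pulse on the rim.

    import numpy as np

    def impact_waveform(freq_hz, dur_s=0.05, rate_hz=10000):
        t = np.arange(int(rate_hz * dur_s)) / rate_hz
        return np.exp(-t * 60.0) * np.sin(2 * np.pi * freq_hz * t)  # decaying ping

    def solenoid_channels(freq_hz, off_center_hit):
        w = impact_waveform(freq_hz)
        left = w
        right = -w if off_center_hit else w  # out of phase -> twisting torque
        return left, right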

Wee Teck Fong
Institute for Infocomm Research

Ching Ling Chin
Institute for Infocomm Research

Farzam Farbiz
Institute for Infocomm Research

Zhiyong Huang
Institute for Infocomm Research

Gocen: A Handwritten Notation Interface for Musical Performance and Learning Music

Optical music recognition (OMR) has been studied since the 1960s and has become a mature technology, but there have been only a few studies of handwritten notation and interactive OMR systems. This project combines notation with performance to make music more intuitive and easier to learn.

With Gocen's unique, intuitive interaction, users can enjoy writing musical notation and playing instruments at the same time. They can make sounds by passing the green bar on the computer display through simplified notes while pressing the “manual play” button. The system detects the size of the musical notation and interprets it to control note velocity. While playing a note, users can change its pitch by moving the device vertically to produce a vibrato. They can change instruments by selecting text that designates instrument names: pf (piano), bs (bass), gt (guitar), dr (drums), etc. Users can also record audio events in a timeline by pressing the “recording” button. Each recorded note is set in the quantized timeline.
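
The size-to-velocity and position-to-pitch mapping can be summarized in a short sketch. The detected note head below is a hypothetical stand-in for Gocen's recognition stage, and the third-party mido MIDI library is an assumption.

    import mido

    STAFF_PITCHES = [64, 65, 67, 69, 71, 72, 74]   # E4..D5 by staff position

    def play_note(port, head):
        """`head` is a hypothetical OMR result: {radius_px, staff_position}."""
        velocity = max(1, min(127, int(head["radius_px"] * 8)))  # bigger = louder
        note = STAFF_PITCHES[head["staff_position"]]
        port.send(mido.Message("note_on", note=note, velocity=velocity))

    port = mido.open_output()                       # default MIDI output
    play_note(port, {"radius_px": 10, "staff_position": 2})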

Tetsuaki Baba
Tokyo Metropolitan University

Yuya Kikukawa
Tokyo Metropolitan University

Toshiki Yoshiike
Tokyo Metropolitan University

Tatsuhiko Suzuki
Tokyo Metropolitan University

Rika Shoji
Tokyo Metropolitan University

Kumiko Kushiyama
Tokyo Metropolitan University

TELESAR V: TELExistence Surrogate Anthropomorphic Robot

TELESAR V is built on telexistence, a fundamental concept: the general technology that enables human beings to experience the real-time sensation of being in, and interacting with, a remote location.

Conventional teleoperated robots often provide extra degrees of freedom to manipulate specialized tools with precision, but those movements must be mediated through the human operator's natural movements, which sometimes generates confusing feedback. Such robots also require special training to understand the body boundaries when performing tasks. TELESAR V is a dexterous anthropomorphic slave robot that matches the size and movements of an ordinary human and maps the user's spinal, neck, head, and arm movements. With this system, users can perform tasks dexterously and feel the robot's body boundaries through wide-angle high-definition vision, binaural stereo audio, and fingertip haptic sensations.

Charith Lasantha Fernando
Keio University

Masahiro Furukawa
Keio University

Tadatoshi Kurogi
Keio University

Kyo Hirota
Keio University

Sho Kamuro
The University of Tokyo

Katsunari Sato
Keio University

Kouta Minamizawa
Keio University

Susumu Tachi
Keio University

Augmented Reflection of Reality

Unlike existing augmented-reality techniques, which typically augment the real world surrounding a user with virtual objects and visualize those effects using various see-through displays, this system focuses on augmenting the user's full body. A half-silvered mirror combines the user's reflection with synthetic data to provide a mixed world. With a live and direct view of the user and the surrounding environment, the system allows the user to intuitively control virtual objects (for example, virtual drums) via the augmented reflection.

Wing Ho Andy Li
City University of Hong Kong

Hongbo Fu
City University of Hong Kong

ClaytricSurface: An Interactive Surface With Dynamic Softness Control Capability

In the field of human-computer interaction, specifically interface-surface research, the "softness" of a surface medium is one significant factor in determining a suitable means of interaction. With direct touch input, for example, the degree of surface softness allows for generation of various touch sensations and tactile feedback. Softness also affects the shape of the surface: a soft surface allows users to deform it at will, while a hard surface maintains its shape. So far, the softness of flexible surfaces has been considered static and unchangeable.

ClaytricSurface considers the softness of a surface to be dynamic and thus further explores interaction possibilities with this type of surface. The surface can be used as both a traditional rigid, planar display and a flexible soft display. And users can dynamically change the surface tension at any time.

Yasushi Matoba
The University of Electro-Communications

Toshiki Sato
The University of Electro-Communications

Nobuhiro Takahashi
The University of Electro-Communications

Hideki Koike
The University of Electro-Communications

Interactive Light-Field Painting

Since Sutherland's seminal Sketchpad work in 1963, direct interaction with computers has been compelling: we can directly touch, move, and change what we see. Direct interaction is a major contributor to the success of smartphones and tablets, but the world is not flat. While existing technologies can display realistic multi-view stereoscopic 3D content reasonably well, interaction within the same 3D space often requires extensive additional hardware. This project presents a cheap and easy system that uses the same lenslet array for both multi-view autostereoscopic display and 3D light-pen position sensing.

The display provides multi-user, glasses-free autostereoscopic viewing with motion parallax. A single near-infrared camera located behind the lenslet array tracks a light pen held by the user. Full 3D position tracking is accomplished by analyzing the pattern produced when light from the pen shines through the lenslet array. The light pen can be used to draw directly into a displayed light field, or as input for manipulating objects or defining parametric lines.
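
The pen-tracking principle lends itself to a compact sketch: the pen's light covers a patch of lenslets on the infrared camera image, the patch centroid gives the lateral position, and the patch size grows with the pen's distance from the array. The constants below are illustrative placeholders for a per-device calibration.

    import numpy as np

    def track_pen(ir_frame, threshold=200, k_depth=2.5):
        ys, xs = np.nonzero(ir_frame > threshold)
        if xs.size == 0:
            return None                    # pen not visible this frame
        cx, cy = xs.mean(), ys.mean()      # lateral position from the centroid
        spread = np.hypot(xs - cx, ys - cy).mean()
        return cx, cy, k_depth * spread    # depth grows with the spot size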

The system has a number of advantages. First, it inexpensively provides both multi-view autostereoscopic display and 3D sensing with 1:1 mapping. A review of the literature indicates that this has not been offered in previous interactive content-creation systems. Second, because the same lenslet array provides both 3D display and 3D sensing, the system design is extremely simple, inexpensive, and easy to build and calibrate. The demo at SIGGRAPH 2012 shows a variety of interesting interaction styles with a prototype implementation: freehand drawing, polygonal and parametric line drawing, model manipulation, and model editing.

James Tompkin
Disney Research, Boston

Samuel Muff
Disney Research, Boston

Stanislav Jakuschevskij
Disney Research, Boston

Jim McCann
Adobe Systems Incorporated

Jan Kautz
University College London

Marc Alexa
Technische Universität Berlin

Wojciech Matusik
Massachusetts Institute of Technology

Drum On

Drum On is a prototype system for overcoming the boredom and limitations of personal instrument practice. It is derived from an attempt to extend interaction between computers and physical objects.

To set up the system, a projector is positioned above the drum kit. The projector uses an object-mapping method to project imagery onto all of the kit's components. Animated images on the drums cue the players' hit timing, and MIDI signals are used to detect and correct each hit's timing. Players receive direct visual feedback on their performance.
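
A minimal sketch of the timing-feedback loop appears below, assuming the drum kit is visible as a standard MIDI input and using the third-party mido library; the fixed beat grid is an illustrative stand-in for the system's actual exercise patterns.

    import time
    import mido

    BPM = 100
    BEAT_S = 60.0 / BPM
    start = time.monotonic()

    with mido.open_input() as port:        # default MIDI input
        for msg in port:
            if msg.type == "note_on" and msg.velocity > 0:
                t = time.monotonic() - start
                err = t % BEAT_S           # offset from the nearest beat
                err = min(err, BEAT_S - err)
                print(f"pad {msg.note}: {err * 1000:.0f} ms off the beat")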

In actual drum training with Drum On, people who were not familiar with drumming easily learned the basic concepts, and the sometimes-tedious training process turned into enjoyable practice time with visual fun and game factors. Drum On solves some of the problems associated with instrument training (for example, discomfort from the visual mismatch between note and instrument) while integrating media technologies.

Jaehyuck Bae
Seoul National University

Byungjoo Lee
Seoul National University

Sungmin Cho
Seoul National University

Yunsil Heo
Seoul National University

Hyunwoo Bang
Seoul National University

A Colloidal Display: Membrane Screen That Combines Transparency, BRDF, and 3D Volume

This project proposes an innovative solution that transforms a soap film into the world’s thinnest screen. It has several significant points in comparison to other displays or screens:

• The screen’s transparency can be controlled dynamically by using ultrasonic sound waves. Because of this transparency, multiple membranes and a single projector can form a plane-based 3D screen.

• The screen’s shape, surface texture, and reflectance can be controlled dynamically with ultrasonic sound waves. Because of this dynamic character, the screen can display realistic materials.

• The screen’s unique material, which allows objects to pass through it, promotes new ways of human interaction with flexible displays.

These features open a new path for flexible displays.

Yoichi Ochiai
The University of Tokyo

Alexis Oyama
Carnegie Mellon University

Keisuke Toyoshima
University of Tsukuba

PossessedHand

PossessedHand controls human hands for human-computer interaction (HCI). By applying electrical stimuli to the forearm muscles, it directs the motions of 16 finger joints and cues the appropriate timing of finger motions. It consists of a micro-controller and two forearm belts carrying 28 electrode pads, each of which stimulates a specific muscle set. Controlling the muscles requires per-user calibration, because stimulation location, intensity, and timing differ for each person. This demonstration introduces a musical application in which users wear a data glove to interact with a graphical interface.
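
The per-user calibration can be sketched as a simple search: each pad is ramped up until the data glove reports which joint moved, yielding a pad-to-joint map with a working intensity. The stimulate and read_joint_angles callables below are hypothetical interfaces to the micro-controller and glove, not the authors' API.

    def calibrate(stimulate, read_joint_angles, n_pads=28, max_level=10):
        pad_map = {}
        for pad in range(n_pads):
            baseline = read_joint_angles()          # joint angles in degrees
            for level in range(1, max_level + 1):
                stimulate(pad, level)
                angles = read_joint_angles()
                moved = [j for j, (a, b) in enumerate(zip(baseline, angles))
                         if abs(b - a) > 5.0]       # illustrative threshold
                if moved:
                    pad_map[pad] = (moved[0], level)  # joint index, intensity
                    break
        return pad_map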

The system is intuitive. No special knowledge or training is required to use PossessedHand.

Emi Tamaki
The University of Tokyo

Jun Rekimoto
The University of Tokyo

SplashDisplay: Volumetric Projecting Using Projectile Beads

The prime feature of SplashDisplay is its use of projectile beads launched from a table as a display medium. The system can launch the beads from millimeters to meters into the air, presenting image "depth" much like the Z-axis in 3D display technologies. When the beads are still, SplashDisplay presents a stationary screen. When they are airborne and light is projected onto them, they illuminate as they fall, creating a fireworks-like effect in real time.

Yasushi Matoba
The University of Electro-Communications

Taro Tokui
The University of Electro-Communications

Ryo Sato
The University of Electro-Communications

Toshiki Sato
The University of Electro-Communications

Hideki Koike
The University of Electro-Communications

Turn: A Virtual Pottery by Real Spinning Wheel

There are many desktop modeling tools and techniques for 3D design, but their complex control procedures make them difficult for non-experts to use. Novices must learn 2D mouse functions and unfamiliar keyboard controls to create organic shapes. Turn is a natural user interface that fluently connects users' real-world sculpting experience with virtual pottery making. It suggests new 3D modeling tools and a new approach to 3D digital art and prototyping.

Sungmin Cho
Seoul National University

Yunsil Heo
Seoul National University

Hyunwoo Bang
Seoul National University

Magic Pot: Interactive Metamorphosis of the Perceived Shape

Magic Pot is an interactive system that uses haptic illusion to change the perceived shape of a physical object. While research on haptic presentation often concerns active haptics, which aims to reproduce physical force feedback, recent work is focusing more on alternative approaches such as passive haptics, a category that includes pseudo-haptics. This approach combines visual and haptic senses to create a cross-modal illusional perception. The pseudo-haptic approach is a potential solution for exploiting the boundaries and capabilities of the human sensory system to simulate haptic information without physical force feedback.

This project uses pseudo-haptics to display a variety of shapes in a video see-through system that controls visual stimuli independently of haptic inputs. A rendering algorithm detects users' fingertips and displaces and deforms the image of users' hands according to the difference between the physical shape behind the display and the arbitrary virtual shape on the display, composing visual feedback that suggests users are touching the virtual shape. This visual feedback evokes a pseudo-haptic illusion that changes users' perception of the static object's shape and size.
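
The core rendering rule reduces to a one-line displacement, sketched below with illustrative symbols: the fingertip drawn on the see-through display is offset by the gap between the virtual and physical surfaces at the touch point, so the displayed hand appears to rest on the virtual shape while the real hand rests on the real one.

    import numpy as np

    def displaced_fingertip(real_tip, physical_height, virtual_height):
        """real_tip: (x, y, z); *_height: callables giving surface height at (x, y)."""
        x, y, z = real_tip
        offset = virtual_height(x, y) - physical_height(x, y)
        return np.array([x, y, z + offset])   # seen position differs from felt position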

Yuki Ban
The University of Tokyo

Takuji Narumi
The University of Tokyo

Tomohiro Tanikawa
The University of Tokyo

Michitaka Hirose
The University of Tokyo

Tavola: Holographic User Experience

Tavola is a new platform for holographic, interactive 3D experiences. It combines a holographic 3D visual and 3D audio experience with natural, free-space 3D interaction, and it can augment the interfaces of smaller devices such as smartphones. The head-tracking component is compact, accurate, and non-intrusive to the user’s appearance. The system supports in-the-air 3D interaction and several hand gestures via a set of natural and immersive free-hand interaction methods. Possible applications include kiosks, virtual tourism, shopping, education, training, environment simulation, and data visualization.

Yue Fei
Panasonic Silicon Valley Laboratory

Andrea Melle
Panasonic Silicon Valley Laboratory

David Kryze
Panasonic Silicon Valley Laboratory

Jean-Claude Junqua
Panasonic Silicon Valley Laboratory

Mood Meter: Large-Scale and Long-Term Smile Monitoring System

Have you ever wondered whether it’s possible to quantitatively measure how friendly or welcoming a community is? Or imagined which parts of the community are happier than others?

Mood Meter is a computer-vision-based system that automatically encourages, recognizes, and counts smiles across a large environment or community. The system was installed on a college campus during a 10-week festival to count smiles at four key locations. It collected and aggregated anonymous information, then displayed it in real time in various intuitive and interactive formats on a public website, depicting the emotional footprint of the community as a function of smiles at any point in time.
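
A minimal sketch of such a counting pipeline is shown below, built from OpenCV's stock Haar cascades; the actual Mood Meter detector and its parameters may differ.

    import cv2

    face_cc = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    smile_cc = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_smile.xml")

    def count_smiles(frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        smiles = 0
        for (x, y, w, h) in face_cc.detectMultiScale(gray, 1.3, 5):
            roi = gray[y:y + h, x:x + w]              # search inside each face
            if len(smile_cc.detectMultiScale(roi, 1.7, 20)) > 0:
                smiles += 1                           # one anonymous count per face
        return smiles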

Javier Hernandez
MIT Media Lab

Mohammed Hoque
MIT Media Lab

Rosalind Picard
MIT Media Lab

JUKE Cylinder: A Device to Metamorphose Hands to a Musical Instrument

JUKE Cylinder is a cylindrical interactive device that metamorphoses hands into a musical instrument by localizing the sound image on the hands, enabling users to control the pitch of the sound. Users create and control the sounds of real musical instruments (guitar, piano, flute, etc.) with their hands. They perceive that the sounds originate from interactions or objects that would not normally produce audio output.

Masamichi Ueta
The University of Tokyo

Osamu Hoshuyama
NEC Corporation

Takuji Narumi
The University of Tokyo

Tomohiro Tanikawa
The University of Tokyo

Michitaka Hirose
The University of Tokyo

Chilly Chair: Facilitating an Emotional Feeling With Artificial Piloerection

There have been many attempts to add haptic stimulation to music, games, and movies. Chilly Chair is a novel approach to enriching the quality of these experiences by enhancing the emotions evoked by the content.

The project focuses on piloerection, a kind of involuntary emotional reaction. According to the James-Lange theory, emotional feeling is experienced as a result of physiological changes induced by the autonomic nervous system. Some neuroscientists also propose that emotional feeling is evoked by the insular cortex, which represents particular bodily reactions such as the sensations of butterflies in the stomach and goose bumps. Chilly Chair goes beyond emotional “reactions” to function as an emotional “input” that enhances the emotion itself.

The prototype device uses an electrostatic force to raise back and forearm hair. The chair measures the skin conductance response, which is known to vary with activation of the sympathetic nervous system, and controls the piloerection accordingly. Psychophysical experiments confirm that this piloerection system enhances feelings of surprise. Chilly Chair can be applied not only to audio-visual entertainment, but also to non-computational entertainment such as reading books and dreaming while asleep.

Shogo Fukushima
The University of Electro-Communications

Hiroyuki Kajimoto
The University of Electro-Communications

HDRchitecture: Real-Time 3D HDR Imaging for Extreme Dynamic Range

HDRchitecture applies high-dynamic-range (HDR) imaging to electric arc welding, a technique that also shows promise as a general-purpose seeing aid. The system can be used by welding schools and professionals to inspect welding in real time. A fixed camera system (on a tripod) or a stereo EyeTap cybernetic welding helmet with heads-up display records and streams live video from a welding booth to students or observers in nearby or remote locations. It captures a dynamic range of more than a million to one, revealing details that cannot be seen by the human eye or any commercially available camera. Unlike most other work in HDR, the system's custom algorithm runs in real time at an interactive frame rate, produces high-quality images, and embodies other features designed specifically for the extremes of arc welding. The system also enables stereoscopic vision on the heads-up display as well as through external 3D TV displays using NVIDIA 3D Vision shutter glasses.
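
The merge step at the heart of any such pipeline can be sketched simply: differently exposed frames are combined in the log domain, with each pixel weighted by how far it sits from the sensor's clipping points. This is a generic simplification for illustration, not the authors' real-time algorithm.

    import numpy as np

    def merge_hdr(frames, exposures):
        """frames: float images in [0, 1]; exposures: shutter times in seconds."""
        num = np.zeros_like(frames[0])
        den = np.zeros_like(frames[0])
        for img, t in zip(frames, exposures):
            w = 1.0 - np.abs(2.0 * img - 1.0)         # trust mid-tones most
            num += w * (np.log(np.clip(img, 1e-4, 1.0)) - np.log(t))
            den += w
        return np.exp(num / np.maximum(den, 1e-6))    # relative radiance map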

Raymond Lo
University of Toronto

Steve Mann
University of Toronto

Jason Huang
University of Toronto

Valmiki Rampersad
University of Toronto

Stuffed Toys Alive! Cuddly Robots From a Fantasy World

Humans of all ages love stuffed toys. We love their cuteness and softness, and many of us play with them, talk with them, and listen to their complaints. We like to imagine that they are interactive creatures, and many stories and movies feature stuffed toys as living characters. But, of course, they are really just dolls: they cannot move, react, or share our emotions.

This project proposes a mechanism for stuffed-toy robots (“cuddly robots”) that provide the soft feel of stuffed toys. The soft feel is realized by three innovations:

• A driving mechanism that retains the essence of stuffed toys.

• Driving strings that keep the mechanism soft.

• A force sensor.

These cuddly robots suggest a new medium that can entertain us and give us a sense of merging real life with fantasy worlds. In the future, stuffed toys from our fantasy worlds will live with us and enchant us.

Youhei Yamashita
Tokyo Institute of Technology

Tatsuya Ishikawa
University of Electro-Communications

Hironori Mitake
Tokyo Institute of Technology

Ikumi Susa
Tokyo Institute of Technology

Fumihiro Kato
Tokyo Institute of Technology

Yutaka Takase
Tokyo Institute of Technology

Wataru Seshimo
Tokyo Institute of Technology

Yukinobu Takehana
Tokyo Institute of Technology

Satoru Onohara
Tokyo Institute of Technology

Takahiro Harano
Tokyo Institute of Technology

Shoichi Hasegawa
Tokyo Institute of Technology

Makoto Sato
Tokyo Institute of Technology

Botanicus Interacticus: Interactive Plants Technology

Botanicus Interacticus is a technology for designing highly expressive interactive plants, both living and artificial. The project is motivated by the rapid fusion of our computing and dwelling spaces, as well as the increasingly tactile and gestural nature of our interactions with digital devices. It is an interaction platform that expands interaction beyond computing devices and appliances to place it anywhere in the physical environment.

Botanicus Interacticus has a number of unique properties that set it apart from previous work on interactive plants:

• This instrumentation of plants is simple, non-invasive, and does not damage the plants. It requires only a single wire placed anywhere in the soil.

• The interaction goes beyond simple touch detection to allow rich gestural interaction with the plant (for example, sliding fingers on the stem of the orchid, detecting touch location, tracking proximity, and estimating the amount of touch contact).

• The gesture recognition is accurate, applying machine-learning techniques for precise and unambiguous recognition of gestures; a sketch of this recognition stage follows this list.

• It deconstructs the electrical properties of plants and replicates them using electrical components. This allows a broad variety of biologically inspired artificial plants that behave nearly exactly the same as their biological counterparts. The same sensing technology is used with both living and artificial plants, making them interchangeable.
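
The recognition stage can be sketched as a standard supervised classifier over swept-frequency capacitive profiles: the plant's electrical response to a frequency sweep changes with how and where it is touched, and each sweep becomes a feature vector. The training files and gesture labels below are illustrative placeholders, and scikit-learn is an assumed dependency.

    import numpy as np
    from sklearn.svm import SVC

    # Each sample: measured return amplitude at each swept frequency.
    X_train = np.load("sweep_profiles.npy")   # shape: (n_samples, n_freqs)
    y_train = np.load("gesture_labels.npy")   # e.g. 0=none, 1=touch, 2=grasp, 3=slide

    clf = SVC(kernel="rbf").fit(X_train, y_train)

    def classify_gesture(sweep_profile):
        return clf.predict(sweep_profile.reshape(1, -1))[0]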

A broad range of applications is possible with this technology: designing interactive, responsive environments; developing new forms of living interaction devices; and developing ambient and pervasive interfaces. At SIGGRAPH 2012, the technology's versatility is demonstrated in an entertainment application where visitors communicate with living and artificial plants by gesturing on them and observing the plants’ “response” in the form of rich computer-generated imagery and sound.

Ivan Poupyrev
Disney Research, Pittsburgh

Philipp Schoessler
Disney Research, Pittsburgh and Universität der Künste Berlin

Jonas Loh
Studio NAND

Gunnar Green
TheGreenEyl

Eric Brockmeyer
Disney Research, Pittsburgh

Willy Sengewald
TheGreenEyl

Munehiko Sato
Disney Research, Pittsburgh and The University of Tokyo