Conference 5–9 August 2012
Exhibition 7–9 August 2012
Los Angeles Convention Center

Hand, Eye, and Face

Technical Papers

Monday, 6 August, 3:45–5:35 PM | Los Angeles Convention Center, Room 502AB
Session Chair: Vladlen Koltun, Stanford University

Synthesis of Detailed Hand Manipulations Using Contact Sampling

This work synthesizes detailed and physically plausible hand-object manipulations from motions of the full body and the object. By sampling contact positions between the hand and the object, a variety of complex finger gaits with contact rolling, sliding, and relocation are discovered efficiently and automatically.

Yuting Ye
Georgia Institute of Technology

C. Karen Liu
Georgia Institute of Technology

Eyecatch: Simulating Visuomotor Coordination for Object Interception

This paper presents a novel framework for animating human characters performing visually guided tasks. Its central idea is to model the coordinated dynamics of sensing and movement. Based on experimental evidence, it proposes a generative model that constructs interception behavior from discrete submovements directed by uncertain visual estimates of target motion.

Sang Hoon Yeo
The University of British Columbia

Martin Lesmana
The University of British Columbia

Debanga R. Neog
The University of British Columbia

Dinesh K. Pai
The University of British Columbia

Discovery of Complex Behaviors through Contact-Invariant Optimization

A fully automatic motion-synthesis framework capable of producing a wide variety of human behaviors, including getting up from the ground, crawling, climbing, moving heavy objects, acrobatics, and various cooperative actions involving two characters, all without requiring behavior-specific domain knowledge or example motion-capture data.

Igor Mordatch
University of Washington

Emanuel Todorov
University of Washington

Zoran Popović
University of Washington

Spacetime Expression Cloning for Blendshapes

This novel spacetime facial-animation retargeting method for blendshape face models is based on velocity-domain transfer combined with a model-specific prior, and it produces successful retargeting results. The paper shows that this method has clear advantages over conventional per-frame transfer methods.
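
The full formulation is in the paper; as a rough, hypothetical sketch of what transferring in the velocity domain can mean for blendshape weights (the function name, the clipping step, and the [0, 1] weight range are illustrative assumptions, not the authors' method):

```python
import numpy as np

def velocity_domain_transfer(src_weights, target_init):
    """Illustrative sketch only: copy frame-to-frame weight velocities
    from the source animation rather than absolute values, then
    integrate them starting from the target model's own initial pose."""
    vel = np.diff(src_weights, axis=0)            # per-frame velocities (F-1 x B)
    tgt = np.empty_like(src_weights)
    tgt[0] = target_init                          # target keeps its own start pose
    tgt[1:] = target_init + np.cumsum(vel, axis=0)
    return np.clip(tgt, 0.0, 1.0)                 # assumed valid weight range
```

The intuition, in line with the abstract, is that transferring differences rather than per-frame values preserves the dynamics of the source motion while accommodating the target model's own configuration.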

Yeongho Seol
Korea Advanced Institute of Science and Technology and Weta Digital

J.P. Lewis
Weta Digital

Jaewoo Seo
Korea Advanced Institute of Science and Technology

Byungkuk Choi
Korea Advanced Institute of Science and Technology

Ken Anjyo
OLM Digital, Inc. and JST CREST

Junyong Noh
Korea Advanced Institute of Science and Technology

Bilinear Spatiotemporal Basis Models

A bilinear spatiotemporal basis model that compactly describes time-varying data. It generalizes to new sequences, can accurately predict missing data, and enables data-consistent spatiotemporal editing. This paper applies the model to a number of graphics tasks, including motion capture labeling, gap-filling, de-noising, and motion touch-up.
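
As a loose illustration of the idea (a sketch under assumed choices, not the authors' algorithm): time-varying data can be arranged as an F x P matrix and approximated as the product of a temporal basis, a small coefficient matrix, and a shape basis. A fixed DCT temporal basis and an SVD-derived shape basis are the assumptions made here:

```python
import numpy as np

def dct_basis(f, k):
    """First k orthonormal DCT-II basis vectors over f frames (columns)."""
    n = np.arange(f)
    T = np.cos(np.pi * (n[:, None] + 0.5) * np.arange(k)[None, :] / f)
    T[:, 0] *= 1.0 / np.sqrt(2.0)
    return T * np.sqrt(2.0 / f)

def fit_bilinear(M, kt, ks):
    """Illustrative sketch: approximate an F x P spatiotemporal matrix M
    as T @ C @ B.T, with a fixed DCT temporal basis T (F x kt), a shape
    basis B (P x ks) from the SVD of the temporally projected data, and
    bilinear coefficients C (kt x ks)."""
    F, P = M.shape
    T = dct_basis(F, kt)                          # temporal basis
    coeff = T.T @ M                               # project data onto temporal basis
    U, s, Vt = np.linalg.svd(coeff, full_matrices=False)
    B = Vt[:ks].T                                 # dominant shape directions
    C = coeff @ B                                 # compact bilinear coefficients
    return T, C, B
```

The compact representation T @ C @ B.T is what makes tasks like gap-filling and de-noising natural: missing or noisy entries are re-estimated from the low-dimensional coefficients rather than stored per frame and per point.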

Ijaz Akhter
Lahore University of Management Sciences

Tomas Simon
Carnegie Mellon University

Sohaib Khan
Lahore University of Management Sciences

Iain Matthews
Disney Research Pittsburgh

Yaser Sheikh
Carnegie Mellon University