Alla SHEFFER - University of British Columbia (UBC), Canada
Tuesday, March 7, 2017, 10:00 - 12:00
UT3 Paul Sabatier, IRIT, Auditorium J. Herbrand
Abstract
Artists use cartoon and gesture drawings to quickly communicate character shape and pose in storyboarding and other digital media. Currently such drawings are largely used as a reference for 3D modeling and posing, with the actual 3D manipulation performed using 3D interfaces, which require time and expert 3D knowledge to operate.
In my talk I will describe methods for automatically modeling and posing 3D characters directly using gesture drawings as an input.
I will first introduce a novel technique for the construction of a 3D character proxy, or canvas, directly from a 2D cartoon drawing and a user-provided, correspondingly posed 3D skeleton. This choice of input is motivated by the observation that traditional cartoon characters are well approximated by a union of generalized surface-of-revolution body parts, anchored by a skeletal structure. While typical 2D character contour drawings allow ambiguities in 3D interpretation, our use of a 3D skeleton eliminates such ambiguities and enables the construction of believable character canvases from complex drawings. Our canvases conform to the 2D contours of the input drawings, and are consistent with the perceptual principles of Gestalt continuity, simplicity, and contour persistence.
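To make the generalized-surface-of-revolution idea concrete, here is a minimal sketch (not the talk's actual algorithm) of how a single body part can be modeled as a tube of varying radius swept along a skeletal bone; the function name and parameterization are illustrative assumptions.

```python
import numpy as np

def generalized_surface_of_revolution(p0, p1, radii, n_theta=16):
    """Sample a tube-like surface of revolution around the bone p0 -> p1.

    radii: cross-section radius at evenly spaced stations along the bone;
    varying it lets the part bulge and taper like a cartoon limb.
    Returns a (len(radii), n_theta, 3) array of surface sample points.
    """
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    axis = p1 - p0
    length = np.linalg.norm(axis)
    axis /= length
    # Build an orthonormal frame (u, v) perpendicular to the bone axis.
    helper = np.array([1.0, 0.0, 0.0])
    if abs(axis @ helper) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    u = np.cross(axis, helper)
    u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    pts = np.empty((len(radii), n_theta, 3))
    for i, r in enumerate(radii):
        # Station i sits at fraction i/(n-1) along the bone.
        center = p0 + axis * (length * i / (len(radii) - 1))
        pts[i] = center + r * (np.outer(np.cos(theta), u)
                               + np.outer(np.sin(theta), v))
    return pts
```

A character canvas in this spirit would be a union of such parts, one per skeletal bone, with radii chosen so the projected silhouette matches the drawn contour.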
The combined method generates believable canvases for characters drawn in complex poses with numerous inter-part occlusions, variable contour depth, and significant foreshortening. Our canvases serve as 3D geometric proxies for cartoon characters, enabling unconstrained 3D viewing, articulation, and non-photorealistic rendering. We validate our algorithm via a range of user studies and via comparisons to ground-truth 3D models and artist-drawn results. We further demonstrate a compelling gallery of 3D character canvases created from a diverse set of cartoon drawings with matching 3D skeletons.
In the second half of my talk I will address the complementary problem of posing existing 3D characters using gesture drawings as an input.
Our method is based on the observation that artists are skilled at quickly and effectively conveying poses using such drawings, and design them to facilitate a single perceptually consistent pose interpretation by viewers. Our algorithm leverages perceptual cues to parse the drawings and recover the artist-intended poses. It takes as input a vector-format rough gesture drawing and a rigged 3D character model, and plausibly poses the character to conform to the depicted pose. No other input is required.
Our contribution is two-fold: we first analyze and formulate the pose cues encoded in gesture drawings; we then employ these cues to compute a plausible image space projection of the conveyed pose and to imbue it with depth. We validate our approach via result comparisons to artist-posed models generated from the same reference drawings, via studies that confirm that our results agree with viewer perception, and via comparison to algorithmic alternatives.
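As a toy illustration of what "imbuing a 2D projection with depth" can mean, here is one classic lifting step under an assumed orthographic camera (a simplified sketch, not the talk's method): given a bone's known 3D length and its foreshortened 2D length in the drawing, the out-of-plane depth offset follows from the Pythagorean relation, up to a per-bone sign ambiguity that a full algorithm must resolve with additional perceptual cues.

```python
import math

def lift_bone_depth(joint2d_a, joint2d_b, bone_length, sign=1):
    """Recover the depth offset between two joints under orthographic
    projection, from the known 3D bone length and its projected 2D length.

    The square root leaves a toward/away-from-viewer sign ambiguity,
    exposed here as the `sign` argument.
    """
    dx = joint2d_b[0] - joint2d_a[0]
    dy = joint2d_b[1] - joint2d_a[1]
    proj_len_sq = dx * dx + dy * dy
    # Clamp for numerical safety: a projection cannot exceed the bone length.
    dz_sq = max(bone_length * bone_length - proj_len_sq, 0.0)
    return sign * math.sqrt(dz_sq)
```

Chaining this relation along a rigged skeleton's kinematic tree yields one candidate 3D pose per choice of signs, which is precisely why disambiguating cues from the drawing matter.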