Creating Personalized Avatars

Abstract

Digital heritage applications make extensive use of virtual characters to populate reconstructions of heritage sites in virtual and augmented reality. Creating believable characters requires considerable effort: they must be modelled, textured, rigged and animated. In this chapter, we present a framework that captures a point cloud of a real user using multiple depth cameras, and subsequently deforms a template mesh to match the captured geometry while preserving the topology of the template. To validate our system, we compare limb lengths and body-part ratios measured on the deformed mesh against the corresponding anthropometric measurements of the real user. Furthermore, we use a single depth camera to capture the motion of a real performer, which we then use to animate the mesh. This semi-automatic process requires only commodity depth cameras (Microsoft Kinect) and no other specialized hardware. We also present extensions to Blender, an open-source animation authoring environment, that allow us to synthesize character animation from pre-recorded motion data. Finally, we briefly discuss the challenges involved in enhancing the appearance of the characters through physically-based animation of virtual garments.
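
For illustration, the sketch below shows one way an authoring extension of this kind might be scripted with Blender's Python API (bpy): it imports a pre-recorded motion clip in BVH format and drives a rigged character from it via per-bone copy-rotation constraints. This is a minimal sketch, not the published system; the file path, object name "CharacterRig" and the bone-name mapping are hypothetical placeholders.

    # Minimal sketch (hypothetical names): drive a character rig in Blender
    # from a pre-recorded BVH motion clip using copy-rotation constraints.
    import bpy

    # Import the recorded motion; this creates a new armature object.
    # The file path is a placeholder.
    bpy.ops.import_anim.bvh(filepath="/path/to/capture.bvh")
    mocap_arm = bpy.context.active_object  # armature built from the BVH

    # The pre-rigged character template, assumed to exist in the scene.
    char_arm = bpy.data.objects["CharacterRig"]

    # Hypothetical mapping from character bone names to BVH joint names.
    bone_map = {
        "upper_arm.L": "LeftArm",
        "forearm.L": "LeftForeArm",
        "thigh.L": "LeftUpLeg",
        # ... remaining bones
    }

    # Constrain each mapped character bone to follow the rotation of the
    # corresponding joint in the captured performance.
    for char_bone, mocap_bone in bone_map.items():
        pbone = char_arm.pose.bones[char_bone]
        con = pbone.constraints.new(type='COPY_ROTATION')
        con.target = mocap_arm
        con.subtarget = mocap_bone

In practice a retargeting step would also account for differing rest poses and bone proportions between the captured skeleton and the character rig; the constraint-based approach above is only the simplest starting point.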

Publication
Digital Hampi: Preserving Indian Cultural Heritage, edited by A. Mallik, S. Chaudhury, V. Chandru and S. Srinivasan. Springer, ISBN 978-981-10-5738-0, 2018.

Related