ICVGIP 2010 Plenary Talks
Feature-Based Locomotion Controllers for Physically-Simulated Characters
10:00 - 11:00
Speaker: Aaron Hertzmann
Abstract

Understanding the control forces that drive humans and animals is fundamental to describing their movement. Although physics-based methods hold promise for creating animation, they have long been considered too difficult to design and control. Likewise, recent results in computer vision suggest that physical models, once developed, could be important to human pose tracking. I will describe the main problems of human motion modeling. I will then present a new approach to the control of physics-based characters based on high-level features of human movement. These controllers provide unprecedented flexibility and generality in real-time character control: they capture many natural properties of human movement; they can be easily modified and applied to new characters; and they can handle a variety of different terrains and tasks, all within a single control strategy. Until very recently, even making a controller walk without falling down was extraordinarily difficult. This is no longer the case. Our work, together with other recent results in this area, suggests that we are now ready to make great strides in locomotion.

About the speaker Aaron Hertzmann

Aaron Hertzmann is an Associate Professor of Computer Science at
University of Toronto. He received a BA in Computer Science and Art &
Art History from Rice University in 1996, and an MS and PhD in
Computer Science from New York University in 1998 and 2001,
respectively. In the past, he has worked at Pixar Animation Studios,
University of Washington, Microsoft Research, Mitsubishi Electric
Research Lab, Interval Research Corporation and NEC Research
Institute. His awards include the MIT TR100 (2004), an Ontario Early
Researcher Award (2005), a Sloan Foundation Fellowship (2006), a
Microsoft New Faculty Fellowship (2006), a UofT CS teaching award
(2008), and the CACS/AIC Outstanding Young CS Researcher Prize (2010).
Rendering Primitives with Vision and Graphics
14:00 - 15:00
Speaker: Sharat Chandran
Abstract

The triangle (and the quad) have been the ubiquitous rendering primitives, leading to an astonishing impact on rendering performance. Indeed, specialized hardware commodities such as the Graphics Processing Unit (GPU) have survived and performed handsomely.

About the speaker Sharat Chandran

Sharat Chandran holds a doctorate (1989) in Computer Science from the University of Maryland and an undergraduate degree (1984) in Electrical & Electronics Engineering from the Indian Institute of Technology Bombay. His research interests are in computer graphics and vision, and in high-performance computation. He is a Professor in the Department of Computer Science and Engineering at the Indian Institute of Technology Bombay.
Modeling Brain Circuitry using Scales Ranging from Micrometer to Nanometer
09:00 - 10:00
Speaker: Pascal Fua
Abstract

Electron microscopes (EM) can now provide the nanometer resolution needed to image synapses, and therefore connections, while light microscopes (LM) see at the micrometer resolution required to model the 3D structure of the dendritic network. Since both the arborescence and the connections are integral parts of the brain's wiring diagram, combining these two modalities is critically important. In this talk, I will therefore present our approach to building the dendritic arborescence and to tracking migrating neurons from LM images, as well as to segmenting intra-neuronal structures from EM images.

About the speaker Pascal Fua

Pascal Fua joined the faculty of EPFL in 1996. He is now a Professor in the School of Computer and Communication Science and heads the Computer Vision Lab. Before that, he worked at SRI International and at INRIA Sophia-Antipolis as a computer scientist. His research interests include shape modeling and motion recovery from images, human body modeling, and optimization-based techniques for image analysis and synthesis.
What does it mean to "understand" an image?
14:00 - 15:00
Speaker: Alexei Efros

Abstract
Abstract Reasoning about a scene from a photograph is an inherently ambiguous
task. This is because a single image in itself does not carry enough
information to disambiguate the world that it is depicting. Of course
humans have no problems understanding photographs because of all the
prior visual experience they can bring to bear on the task. But if
our goal is to create computer programs that can do the same, we must
first answer a fundamental question: what does it mean to "understand"
an image? Is it simply naming the depicted objects? Or is there a
need for some deeper understanding of the properties of these objects
and the underlying 3D scene? How would we even know if the goal of
image understanding has been achieved? While at present we don't have
good answers to these important questions, in this talk I will present
some results suggesting that it might be beneficial to look at image
understanding beyond mere object naming.

About the speaker Alexei Efros

Alexei "Alyosha" Efros is an associate professor at the Robotics
Institute and the Computer Science Department at Carnegie Mellon
University. His research is in the area of computer vision and
computer graphics, especially at the intersection of the two. He is
particularly interested in using data-driven techniques to tackle
problems which are very hard to model parametrically but where large
quantities of data are readily available. Alyosha received his PhD in
2003 from UC Berkeley under Jitendra Malik and spent the following
year as a postdoctoral fellow in Andrew Zisserman's group in Oxford, England.
Alyosha is a recipient of the CVPR Best Paper Award (2006), NSF CAREER
award (2006), Sloan Fellowship (2008), Guggenheim Fellowship (2008),
Okawa Grant (2008), Finmeccanica Career Development Chair (2010), and
SIGGRAPH Significant New Researcher Award (2010).
Cell and Tissue Imaging: Opportunities & Challenges (GE Global Research)
09:00 - 10:00
Speaker: Jens Rittscher
Abstract

While the chemical structure of DNA is well understood, determining how genome-encoded components function in an integrated manner to perform cellular and organismal functions is still an open challenge. This talk will argue that imaging, and more specifically the extraction of quantitative information from images, plays a critical role in this process. Such measurements will enable the automatic monitoring of cellular and intracellular events and provide information about specific molecular mechanisms in individual cells. Through specific examples, I will illustrate how computer vision algorithms enable the analysis of data sets and complex biological specimens that cannot be analyzed through manual inspection. The first of these examples will address the analysis of time-lapse microscopy data, where visual tracking algorithms need to be extended to capture relevant biological events. Other examples will illustrate that the tight packing of cells poses significant challenges for image segmentation, both in histology images and in three-dimensional image data. By making effective use of model assumptions, it is possible to segment such complex structures. In addition, I will discuss how statistical shape analysis methods can be applied to assess cellular morphology as well as the structure of entire organisms.

While imaging data potentially has much to add to models for systems biology, the usefulness of imaging information depends on the quantitative nature of the data and other aspects of its quality. Developing an awareness of the important long-term factors and challenges will help ensure acceptance of image analysis methods. Today, image analysis methods are already used to study complex biological processes. Ultimately, the future will see solutions that reduce such analysis to a simple molecular diagnostic test. In particular, emerging markets like India and China could see such low-cost point-of-care solutions become an integral part of diagnostics for the organisms that cause infections such as TB and malaria.

About the speaker Jens Rittscher
Jens Rittscher joined the Visualization and Computer Vision Laboratory at GE Global Research in Niskayuna in 2001. He received a Diploma in Mathematics and Computer Science from the University of Bonn, Germany in 1997, and completed his DPhil under the supervision of Andrew Blake at the University of Oxford in 2001. His research interests include the analysis of visual motion, automatic video annotation, and model-based image segmentation techniques. More recently he has focused his research efforts in the area of biomedical imaging. In 2008 he published a volume titled Microscopic Image Analysis for Life Science Applications together with Raghu Machiraju and Stephen Wong. He also currently holds an adjunct professorship at the Rensselaer Polytechnic Institute, Troy, NY.