How to Improve Video Search by Parsing Video & Text
This is a Google Tech Talk from March 26, 2008, presented by Timothee Cour, Research Scientist.

Movies and TV are a rich source of highly diverse and complex video of people, objects, actions, and locales "in the wild". Harvesting automatically labeled sequences of actions from video would enable the creation of large-scale, highly varied datasets. To enable such collection, we focus on the task of recovering scene structure in movies and TV series for object/person tracking and action retrieval.

We present a weakly supervised algorithm that uses the screenplay and closed captions to parse a movie into a hierarchy of shots and scenes. Scene boundaries in the movie are aligned with screenplay scene labels, and shots are reordered into a sequence of long continuous tracks or threads, allowing more accurate tracking of people and actions across shot boundaries. Scene segmentation, alignment, and shot threading are formulated as inference in a unified generative model, and we introduce a novel hierarchical dynamic programming algorithm that handles alignment and jump-limited reorderings in linear time. We present quantitative and qualitative results on movie alignment and parsing, and use the recovered structure for tracking and naming of characters as well as retrieval of common actions in several episodes of popular TV series.
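To give a flavor of the alignment step, the sketch below shows a standard dynamic-programming sequence alignment between screenplay tokens and closed-caption tokens. This is only a toy illustration of the general technique, not the talk's hierarchical linear-time algorithm; the function name and scoring constants are hypothetical:

```python
def align(screenplay, captions, match=2, mismatch=-1, gap=-1):
    """Toy global alignment (Needleman-Wunsch style) between two token
    sequences, e.g. screenplay dialogue words vs. closed-caption words.
    Returns the alignment score and the list of matched index pairs.
    Scoring constants here are illustrative, not from the talk."""
    n, m = len(screenplay), len(captions)

    # dp[i][j] = best score aligning the first i screenplay tokens
    # with the first j caption tokens.
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + gap
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + gap

    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if screenplay[i - 1] == captions[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + s,   # align the two tokens
                           dp[i - 1][j] + gap,     # skip a screenplay token
                           dp[i][j - 1] + gap)     # skip a caption token

    # Backtrack to recover which tokens were aligned to which.
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        s = match if screenplay[i - 1] == captions[j - 1] else mismatch
        if dp[i][j] == dp[i - 1][j - 1] + s:
            pairs.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif dp[i][j] == dp[i - 1][j] + gap:
            i -= 1
        else:
            j -= 1
    pairs.reverse()
    return dp[n][m], pairs
```

For example, `align(["a", "b"], ["a", "x", "b"])` matches `"a"` and `"b"` while skipping the extra caption token. The quadratic cost of this classic formulation is exactly what the hierarchical dynamic program presented in the talk avoids.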
If time permits we will also present our recent results on approximate inference with eigenvalue optimization.
Speaker: Timothee Cour - Research Scientist
Timothee Cour is a fifth-year PhD student in Computer Science at the University of Pennsylvania, Philadelphia. He completed his undergraduate education at the Ecole Polytechnique in France, majoring in Computer Science and Applied Mathematics. His research advisor is Prof. Ben Taskar, and he also works closely with Prof. Jianbo Shi.