Abstract
This thesis deals with monocular object tracking from video sequences. The goal is to improve tracking of previously unseen non-rigid objects under severe articulations, without relying on prior information such as detailed 3D models and without expensive offline training on manual annotations. The proposed framework tracks highly articulated objects by decomposing the target object into small parts and applying online tracking. Drift, a fundamental problem of online trackers, is reduced by incorporating image segmentation cues and by using a novel global consistency prior. Joint tracking and segmentation is formulated as a high-order probabilistic graphical model over continuous state variables. A novel inference method called S-PBP is proposed, combining slice sampling and particle belief propagation. It is shown that slice sampling converges quickly and does not rely on hyper-parameter tuning, as opposed to competing approaches based on Metropolis-Hastings or heuristi...
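To illustrate the sampling component mentioned above, the following is a minimal sketch of univariate slice sampling with stepping-out and shrinkage. It is not the thesis's S-PBP method (which couples slice sampling with particle belief propagation over a graphical model); the function name and the bracket-width parameter `w` are illustrative. Note that `w` is only a scale guess, not a tuned acceptance parameter as in Metropolis-Hastings.

```python
import math
import random

def slice_sample(log_p, x0, n_samples, w=1.0, max_steps=50):
    """Univariate slice sampler with stepping-out and shrinkage.

    log_p: log of an (unnormalized) target density.
    x0: starting point; w: initial bracket width (a scale guess only).
    """
    samples, x = [], x0
    for _ in range(n_samples):
        # Draw an auxiliary "height" defining the slice {x : p(x) > y}.
        log_y = log_p(x) + math.log(random.random())
        # Step out: grow [lo, hi] until both endpoints leave the slice.
        lo = x - w * random.random()
        hi = lo + w
        steps = max_steps
        while steps > 0 and log_p(lo) > log_y:
            lo -= w
            steps -= 1
        steps = max_steps
        while steps > 0 and log_p(hi) > log_y:
            hi += w
            steps -= 1
        # Shrinkage: propose uniformly in [lo, hi], shrinking on rejection,
        # which guarantees termination without any tuning.
        while True:
            x1 = lo + (hi - lo) * random.random()
            if log_p(x1) > log_y:
                x = x1
                break
            if x1 < x:
                lo = x1
            else:
                hi = x1
        samples.append(x)
    return samples

# Example: sampling from a standard normal via its unnormalized log-density.
random.seed(0)
draws = slice_sample(lambda x: -0.5 * x * x, 0.0, 5000)
```

Because every proposal inside the shrinking bracket is eventually accepted, the sampler has no acceptance rate to tune, which is the property the abstract contrasts with Metropolis-Hastings.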
Keywords
monocular object tracking, 3D models, computer vision, probabilistic graphical models, MAP inference, Markov chain Monte Carlo, slice sampling, product slice sampling, articulated object tracking, visual object tracking, pose estimation
- 1 Introduction 1–14
- 2 Related Work 15–24
- 3 Fundamentals 25–55
- 5 Tracking 78–106
- 6 Conclusions 107–110
- A Appendix 111–127
- Bibliography 128–146