    Multigranularity Representations for Human Inter-Actions: Pose, Motion and Intention

    Tracking people and their body pose in videos is a central problem in computer vision. Standard tracking representations reason about the temporal coherence of detected people and body parts. They have difficulty tracking targets under partial occlusions or rare body poses, where detectors often fail, since the number of training examples is typically too small to cover the exponential variability of such configurations. We propose tracking representations that track and segment people and their body pose in videos by exploiting information at multiple detection and segmentation granularities when available: whole body, parts, or point trajectories. Detections and motion estimates provide contradictory information in the case of false-alarm detections or leaking motion affinities. We consolidate contradictory information via graph steering, an algorithm for simultaneous detection and co-clustering in a two-granularity graph of motion trajectories and detections, which corrects motion leakage between correctly detected objects while remaining robust to false alarms and spatially inaccurate detections. We first present a motion segmentation framework that exploits the long-range motion of point trajectories and the large spatial support of image regions. We show that the resulting video segments adapt to targets under partial occlusions and deformations. Second, we augment motion-based representations with object detection to deal with motion leakage. We demonstrate how to combine dense optical-flow trajectory affinities with repulsions from confident detections to reach a global consensus of detection and tracking in crowded scenes. Third, we study human motion and pose estimation. We segment hard-to-detect, fast-moving body limbs from their surrounding clutter and match them against pose exemplars to detect body pose under fast motion. We employ on-the-fly human body kinematics to improve tracking of body joints under wide deformations. We use the motion segmentability of body parts to re-rank a set of body-joint candidate trajectories and jointly infer multi-frame body pose and video segmentation. We show empirically that such a multi-granularity tracking representation is worthwhile, obtaining significantly more accurate multi-object tracking and detailed body pose estimation on popular datasets.
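
    The interplay of trajectory attractions and detection repulsions can be made concrete with a small sketch. The following is a hypothetical, simplified illustration rather than the thesis's actual graph-steering algorithm: point trajectories form a signed affinity matrix in which positive weights encode motion similarity and negative weights encode repulsions from conflicting confident detections, and a spectral step on the signed Laplacian partitions the trajectories. All function names and parameters are illustrative.

```python
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.linalg import eigh

def signed_spectral_cut(motion_affinity, detection_repulsion, n_clusters=2):
    """Partition trajectories using attractions minus repulsions.

    motion_affinity: (N, N) nonnegative motion similarities between trajectories.
    detection_repulsion: (N, N) nonnegative penalties for trajectory pairs
        claimed by different confident detections.
    """
    W = motion_affinity - detection_repulsion   # signed affinity
    d = np.abs(W).sum(axis=1)                   # signed-graph degree
    L = np.diag(d) - W                          # signed Laplacian
    # Embed trajectories using the eigenvectors of the smallest eigenvalues.
    _, vecs = eigh(L, subset_by_index=[0, n_clusters - 1])
    # A simple k-means step assigns each trajectory to a cluster.
    _, labels = kmeans2(vecs, n_clusters, minit='++', seed=0)
    return labels

# Toy usage: two trajectory groups tied together by leaking motion affinity
# (0.6) are pulled apart by repulsions from two separate detections.
A = np.array([[0, 1, .6, .6],
              [1, 0, .6, .6],
              [.6, .6, 0, 1],
              [.6, .6, 1, 0]], dtype=float)
R = np.zeros((4, 4))
R[:2, 2:] = R[2:, :2] = 1.0                     # detections disagree across groups
print(signed_spectral_cut(A, R))                # e.g. [0 0 1 1]
```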

    Downstream Task Self-Supervised Learning for Object Recognition and Tracking

    This dissertation addresses three limitations of deep learning methods in machine vision applications based on image and video understanding. Firstly, although deep convolutional neural networks (CNNs) are effective for image recognition tasks such as object detection and segmentation, they perform poorly under perspective distortions. In real-world applications, the camera perspective is a common problem that can typically only be addressed by annotating large amounts of data, which limits the applicability of deep learning models. Secondly, the typical approach to single-camera tracking problems is to use separate motion and appearance models, which are expensive in terms of computation and training-data requirements. Finally, conventional multi-camera video understanding techniques use supervised learning algorithms to determine temporal relationships among objects. In large-scale applications, these methods are likewise limited by the requirement of extensive manually annotated data and computational resources. To address these limitations, we develop an uncertainty-aware self-supervised learning (SSL) technique that captures a model's instance- or semantic-segmentation uncertainty from overhead images and guides the model to learn the impact of the new perspective on object appearance. A test-time data-augmentation-based pseudo-label refinement technique continuously trains the model until convergence on new-perspective images. The proposed method can be applied for both self-supervision and semi-supervision, thus increasing the effectiveness of a deep pre-trained model in new domains. Extensive experiments demonstrate the effectiveness of the SSL technique on both object detection and semantic segmentation problems. In video understanding applications, we introduce simultaneous segmentation and tracking as an unsupervised spatio-temporal latent-feature clustering problem. The jointly learned multi-task features leverage task-dependent uncertainty to generate discriminative features in multi-object videos. Experiments show that the proposed tracker outperforms several state-of-the-art supervised methods. Finally, we propose an unsupervised multi-camera tracklet association (MCTA) algorithm to track multiple objects in real time. MCTA leverages the self-supervised detector model for single-camera tracking and solves the multi-camera tracking problem using multiple pair-wise camera associations modeled as a connected graph. The graph optimization method generates a global solution for partially or fully overlapping camera networks.
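
    A test-time-augmentation pseudo-label loop of the kind described can be sketched as follows. This is a minimal, hypothetical illustration (the dissertation's actual refinement procedure is more involved): predictions on augmented views of an unlabeled overhead image are averaged, high-confidence pixels become pseudo-labels, and the model is fine-tuned on them. The `model`, `loader`, confidence threshold, and choice of augmentation are all assumptions for this sketch.

```python
import torch
import torch.nn.functional as F

def tta_pseudo_labels(model, image, conf_thresh=0.9):
    """Average segmentation predictions over flips; keep confident pixels.

    image: (1, 3, H, W) unlabeled overhead image.
    Returns pseudo-labels (1, H, W) with -1 marking ignored pixels.
    """
    model.eval()
    with torch.no_grad():
        logits = model(image)                           # (1, C, H, W)
        flipped = model(torch.flip(image, dims=[3]))    # horizontal-flip view
        probs = (logits.softmax(1)
                 + torch.flip(flipped, dims=[3]).softmax(1)) / 2
    conf, labels = probs.max(dim=1)
    labels[conf < conf_thresh] = -1                     # drop uncertain pixels
    return labels

def refine_on_new_perspective(model, loader, steps=100, lr=1e-4):
    """Self-supervised fine-tuning loop on new-perspective images."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _, image in zip(range(steps), loader):
        target = tta_pseudo_labels(model, image)        # refresh pseudo-labels
        model.train()
        loss = F.cross_entropy(model(image), target, ignore_index=-1)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```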

    Dynamic Switching State Systems for Visual Tracking

    This work addresses the problem of how to capture the dynamics of maneuvering objects for visual tracking. Towards this end, the perspective of recursive Bayesian filters and the perspective of deep learning approaches for state estimation are considered, and their functional viewpoints are brought together.
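
    As a rough illustration of the recursive-Bayesian side of such a combination, the following sketch runs two Kalman filters under competing motion hypotheses and updates their model probabilities from the measurement likelihoods. This is our simplified example, not the system developed in the thesis: a full interacting-multiple-model filter would additionally mix states across models via switching transition probabilities, and all dynamics and noise parameters here are assumptions.

```python
import numpy as np

def kf_step(x, P, F, Q, H, R, z):
    """One Kalman predict/update step; returns state, cov, and likelihood of z."""
    x, P = F @ x, F @ P @ F.T + Q                  # predict
    y = z - H @ x                                  # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    lik = (np.exp(-0.5 * y @ np.linalg.solve(S, y))
           / np.sqrt(np.linalg.det(2 * np.pi * S)))
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P, lik

# Two hypothesized dynamics for a 1-D target: constant velocity vs. stationary.
dt = 1.0
F_cv = np.array([[1.0, dt], [0.0, 1.0]])           # constant-velocity model
F_st = np.array([[1.0, 0.0], [0.0, 0.0]])          # stationary model
Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[0.25]])

def switching_filter(measurements):
    states = [np.zeros(2), np.zeros(2)]
    covs = [np.eye(2), np.eye(2)]
    w = np.array([0.5, 0.5])                       # model probabilities
    for z in measurements:
        liks = np.empty(2)
        for i, F in enumerate((F_cv, F_st)):
            states[i], covs[i], liks[i] = kf_step(
                states[i], covs[i], F, Q, H, R, np.atleast_1d(z))
        w = w * liks                               # Bayesian model update
        w /= w.sum()
        yield w[0] * states[0] + w[1] * states[1], w

# A steadily moving target drives the weight toward the velocity model.
for est, w in switching_filter([0.1, 1.1, 2.0, 2.9, 4.2]):
    print(np.round(est, 2), np.round(w, 2))
```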

    Probabilistic Models and Inference for Multi-View People Detection in Overlapping Depth Images

    Cross-sensor people detection in a network of 3D sensors is the foundation of many applications, such as people counting, digital customer flow analysis, and public safety. In contrast to classical video surveillance approaches, 3D sensors generally have a vertical top-down view of the scene in order to reduce occlusions, such as those occurring in dense crowds. Due to this top-down perspective, the appearance of people varies strongly depending on their position in the scene. Furthermore, because of occlusions, sensor noise, and the limited field of view of the top-down sensors, people are often only partially visible in a single view. To address these challenges, this thesis investigates how the spatio-temporal multi-view observations of multiple 3D sensors with overlapping fields of view can be exploited effectively. The focus lies in particular on improving detection performance by jointly considering both the redundant and the complementary multi-sensor observations, including temporal context. The thesis formulates people detection in a sequence of overlapping depth images as an inverse problem. In this context, a probabilistic model for people detection in multiple depth images is introduced. The model contains a generative scene model to detect people from arbitrary viewpoints. Based on the proposed probabilistic modeling, several inference methods are investigated, including gradient-based continuous optimization, variational inference, and convolutional neural networks. The emphasis of the thesis lies on the use of variational methods such as mean-field variational inference. In contrast to classical approaches in the literature, no point estimate is computed; instead, the posterior probability distribution of the people present in the scene is approximated. Through the use of the generative forward model, which incorporates the characteristics of the underlying sensor modality, the proposed method is largely independent of the specific sensor modality. The methods presented in the thesis are evaluated on a newly introduced dataset for wide-area people detection in multiple overlapping depth images. The dataset comprises imagery from three passive stereo sensors with a top-down view of an office scene. The evaluation demonstrates that the proposed mean-field variational inference approximation achieves state-of-the-art results. While deep learning methods require large amounts of annotated training data, the method proposed in this thesis is based on an explicit probabilistic model and requires no training data. A further advantage over classical approaches, which often compute only a MAP point estimate, is the approximation of the full joint probability distribution of the people present in the scene.
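
    A mean-field approximation of this kind of occupancy posterior can be illustrated with a small sketch. The following is a hypothetical, heavily simplified example and not the thesis's actual model: the scene is a grid of candidate person positions with Bernoulli occupancy variables, per-position depth evidence rewards occupied cells, a pairwise penalty discourages two positions from explaining the same image evidence, and coordinate ascent updates each cell's occupancy probability given the others. All quantities and parameters are assumptions.

```python
import numpy as np

def mean_field_occupancy(evidence, overlap, prior=0.1, iters=20):
    """Coordinate-ascent mean-field updates for Bernoulli occupancy variables.

    evidence: (K,) log-likelihood gain from explaining the depth observations
        by a person at each of K candidate positions.
    overlap: (K, K) penalty for pairs of positions that explain the same
        image evidence (e.g. nearby cells seen by the same sensor).
    Returns q: (K,) approximate posterior occupancy probabilities.
    """
    K = len(evidence)
    q = np.full(K, prior)
    logit_prior = np.log(prior / (1 - prior))
    for _ in range(iters):
        for k in range(K):
            # Expected double-counting penalty from the other cells.
            penalty = overlap[k] @ q - overlap[k, k] * q[k]
            logit = logit_prior + evidence[k] - penalty
            q[k] = 1.0 / (1.0 + np.exp(-logit))
    return q

# Toy scene: positions 0 and 1 compete for the same depth evidence,
# position 2 is independently well supported.
evidence = np.array([4.0, 3.5, 5.0])
overlap = np.array([[0, 6, 0], [6, 0, 0], [0, 0, 0]], dtype=float)
# Position 1 is largely explained away by position 0.
print(np.round(mean_field_occupancy(evidence, overlap), 2))
```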

    Joint Probabilistic People Detection in Overlapping Depth Images

    Privacy-preserving high-quality people detection is a vital computer vision task for various indoor scenarios, e.g. people counting, customer behavior analysis, ambient assisted living, or smart homes. In this work a novel approach for people detection in multiple overlapping depth images is proposed. We present a probabilistic framework utilizing a generative scene model to jointly exploit the multi-view image evidence, allowing us to detect people from arbitrary viewpoints. Our approach makes use of mean-field variational inference to not only estimate the maximum a posteriori (MAP) state but also to approximate the posterior probability distribution of the people present in the scene. Evaluation shows state-of-the-art results on a novel data set for indoor people detection and tracking in depth images from the top view with high perspective distortions. Furthermore, we demonstrate that our approach (compared to the mono-view setup) successfully exploits the multi-view image evidence and converges robustly in only a few iterations.
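
    The role of the generative scene model can be made concrete with a deliberately simplified sketch. The following toy forward model is our illustration, not the paper's actual model, and all grid sizes and parameters are assumptions: it renders the expected top-down depth image for a hypothesized set of person positions and scores the multi-view depth observations jointly under a Gaussian noise model. Inference then amounts to finding, or averaging over, the hypotheses that best explain all views at once.

```python
import numpy as np

GRID = (32, 32)                 # ground-plane discretization (cells)
FLOOR_Z, PERSON_H = 3.0, 1.7    # sensor-to-floor and person height (meters)

def render_depth(positions, radius=2):
    """Hypothetical forward model: expected top-down depth image of the scene.

    positions: list of (row, col) ground-plane cells hypothesized to contain
    a person. Cells near a person return the depth to the head, the rest the
    floor. A real model would project through each sensor's calibration.
    """
    depth = np.full(GRID, FLOOR_Z)
    rr, cc = np.mgrid[:GRID[0], :GRID[1]]
    for pr, pc in positions:
        mask = (rr - pr) ** 2 + (cc - pc) ** 2 <= radius ** 2
        depth[mask] = FLOOR_Z - PERSON_H
    return depth

def joint_log_likelihood(positions, observations, sigma=0.05):
    """Gaussian log-likelihood of all observed depth images under a hypothesis."""
    pred = render_depth(positions)
    return sum(-0.5 * np.sum((obs - pred) ** 2) / sigma ** 2
               for obs in observations)

# Toy usage: two noisy views of a scene with one person at cell (10, 12);
# the correct hypothesis scores higher than the empty-scene hypothesis.
rng = np.random.default_rng(0)
obs = [render_depth([(10, 12)]) + rng.normal(0, 0.05, GRID) for _ in range(2)]
print(joint_log_likelihood([(10, 12)], obs) > joint_log_likelihood([], obs))
```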