    Multigranularity Representations for Human Inter-Actions: Pose, Motion and Intention

    Tracking people and their body pose in videos is a central problem in computer vision. Standard tracking representations reason about the temporal coherence of detected people and body parts. They have difficulty tracking targets under partial occlusions or rare body poses, where detectors often fail because the number of training examples is too small to cover the exponential variability of such configurations. We propose tracking representations that track and segment people and their body pose in videos by exploiting information at multiple detection and segmentation granularities when available: whole body, parts, or point trajectories. Detections and motion estimates provide contradictory information in the case of false-alarm detections or leaking motion affinities. We consolidate contradictory information via graph steering, an algorithm for simultaneous detection and co-clustering in a two-granularity graph of motion trajectories and detections, which corrects motion leakage between correctly detected objects while remaining robust to false alarms and spatially inaccurate detections. We first present a motion segmentation framework that exploits the long-range motion of point trajectories and the large spatial support of image regions. We show that the resulting video segments adapt to targets under partial occlusions and deformations. Second, we augment motion-based representations with object detection to deal with motion leakage. We demonstrate how to combine dense optical-flow trajectory affinities with repulsions from confident detections to reach a global consensus of detection and tracking in crowded scenes. Third, we study human motion and pose estimation. We segment hard-to-detect, fast-moving body limbs from their surrounding clutter and match them against pose exemplars to detect body pose under fast motion. We employ on-the-fly human body kinematics to improve tracking of body joints under wide deformations. We use the motion segmentability of body parts to re-rank a set of body-joint candidate trajectories and jointly infer multi-frame body pose and video segmentation. We show empirically that such a multigranularity tracking representation is worthwhile, obtaining significantly more accurate multi-object tracking and detailed body pose estimation on popular datasets.
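
    The combination of trajectory attractions and detection-induced repulsions described above can be pictured as signed spectral clustering on a two-granularity graph. The sketch below (NumPy/SciPy/scikit-learn) is only a minimal illustration of that idea under assumed inputs: the function name, the weighting scheme, and the 0.5 overlap threshold are hypothetical, and this is not the thesis implementation.

    # Minimal sketch of two-granularity co-clustering with attraction/repulsion
    # edges: trajectory nodes attract each other according to motion similarity,
    # detection nodes attract the trajectories they overlap, and pairs of
    # confident, conflicting detections induce repulsion between "their"
    # trajectories. Illustrative only; weights and thresholds are assumptions.
    import numpy as np
    from scipy.linalg import eigh
    from sklearn.cluster import KMeans

    def steer(traj_affinity, det_traj_overlap, det_conf, n_clusters, repulsion=1.0):
        """Cluster trajectory and detection nodes on a signed affinity graph.

        traj_affinity    : (T, T) motion-based attraction between trajectories
        det_traj_overlap : (D, T) spatial overlap of each detection with each trajectory
        det_conf         : (D,)   detector confidence per detection
        """
        T = traj_affinity.shape[0]
        D = det_traj_overlap.shape[0]
        N = T + D

        W = np.zeros((N, N))
        W[:T, :T] = traj_affinity                          # trajectory-trajectory attraction
        W[T:, :T] = det_conf[:, None] * det_traj_overlap   # detection-trajectory attraction
        W[:T, T:] = W[T:, :T].T

        # Repulsion: trajectories owned by different confident detections
        # should not end up in the same cluster.
        for a in range(D):
            for b in range(a + 1, D):
                owned_a = det_traj_overlap[a] > 0.5
                owned_b = det_traj_overlap[b] > 0.5
                R = np.outer(owned_a, owned_b) * repulsion * min(det_conf[a], det_conf[b])
                W[:T, :T] -= R + R.T

        # Signed spectral embedding: smallest eigenvectors of L = Dbar - W,
        # where Dbar is the degree matrix of |W|, followed by k-means.
        Dbar = np.diag(np.abs(W).sum(axis=1))
        _, vecs = eigh(Dbar - W, subset_by_index=[0, n_clusters - 1])
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(vecs)
        return labels[:T], labels[T:]  # trajectory labels, detection labels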

    Multiresolution models in image restoration and reconstruction with medical and other applications


    Spatially Regularized Spherical Reconstruction: A Cross-Domain Filtering Approach for HARDI Signals

    Despite the immense advances of science and medicine in recent years, several aspects regarding the physiology and the anatomy of the human brain are yet to be discovered and understood. A particularly challenging area in the study of human brain anatomy is that of brain connectivity, which describes the intricate means by which different regions of the brain interact with each other. The study of brain connectivity is deeply dependent on understanding the organization of white matter. The latter consists predominantly of bundles of myelinated axons, which serve as connecting pathways between approximately 10¹¹ neurons in the brain. Consequently, the delineation of fine anatomical details of white matter represents a highly challenging objective, and it is still an active area of research in the fields of neuroimaging and neuroscience in general. Recent advances in medical imaging have resulted in a quantum leap in our understanding of brain anatomy and functionality. In particular, the advent of diffusion magnetic resonance imaging (dMRI) has provided researchers with a non-invasive means to infer information about the connectivity of the human brain. In a nutshell, dMRI is a set of imaging tools which aim at quantifying the process of water diffusion within the human brain to delineate the complex structural configurations of the white matter. Among the existing tools of dMRI, high angular resolution diffusion imaging (HARDI) offers a desirable trade-off between reconstruction accuracy and practical feasibility. In particular, HARDI excels in its ability to delineate complex directional patterns of the neural pathways throughout the brain, while remaining feasible for many clinical applications. Unfortunately, HARDI presents a fundamental trade-off between its ability to discriminate crossings of neural fiber tracts (i.e., its angular resolution) and the signal-to-noise ratio (SNR) of its associated images. Consequently, given that the angular resolution is of fundamental importance in the context of dMRI reconstruction, there is a need for effective algorithms for de-noising HARDI data. In this regard, the most effective de-noising approaches have been observed to be those which exploit both the angular and the spatial-domain regularity of HARDI signals. Accordingly, in this thesis, we propose a formulation of the problem of reconstruction of HARDI signals which incorporates regularization assumptions on both their angular and their spatial domains, while leading to a particularly simple numerical implementation. Experimental evidence suggests that the resulting cross-domain regularization procedure outperforms many other state-of-the-art HARDI de-noising methods. Moreover, the proposed implementation replaces the original reconstruction problem with a sequence of efficient filters which can be executed in parallel, suggesting its computational advantages over alternative implementations.
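
    As a rough illustration of what cross-domain (angular plus spatial) regularization can look like, the sketch below fits Laplace-Beltrami-penalized spherical-harmonic coefficients per voxel and then smooths each coefficient map across neighbouring voxels, mirroring the "sequence of filters" structure mentioned above. It is an assumption-laden example rather than the algorithm proposed in the thesis: the basis matrix B, the penalty vector lb_diag, and both regularization weights are taken as given.

    # Illustrative only (not the thesis algorithm): angular regularization via a
    # Laplace-Beltrami-penalized least-squares fit of spherical-harmonic (SH)
    # coefficients, followed by spatial smoothing of each SH coefficient map.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def cross_domain_denoise(signal, B, lb_diag, lam_angular=1e-3, sigma_spatial=1.0):
        """signal  : (X, Y, Z, G) HARDI measurements on G gradient directions
        B          : (G, K) real SH basis evaluated at the gradient directions
        lb_diag    : (K,)   Laplace-Beltrami penalties l^2 (l+1)^2 per SH term
        Returns the denoised signal re-projected onto the gradient directions."""
        X, Y, Z, G = signal.shape
        K = B.shape[1]

        # Angular domain: the regularized fit is identical for every voxel, so the
        # pseudo-inverse is computed once and applied with a single matrix product.
        P = np.linalg.solve(B.T @ B + lam_angular * np.diag(lb_diag), B.T)  # (K, G)
        coeffs = (signal.reshape(-1, G) @ P.T).reshape(X, Y, Z, K)

        # Spatial domain: filter each coefficient volume separately; these K
        # filters are independent of each other and can run in parallel.
        for k in range(K):
            coeffs[..., k] = gaussian_filter(coeffs[..., k], sigma=sigma_spatial)

        # Back to the measurement domain.
        return (coeffs.reshape(-1, K) @ B.T).reshape(X, Y, Z, G)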

    Joint methods in imaging based on diffuse image representations

    This thesis deals with the application and the analysis of different variants of the Mumford-Shah model in the context of image processing. In models of this kind, a given function is approximated in a piecewise smooth or piecewise constant manner. In particular, the numerical treatment of the discontinuities requires additional models, which are also outlined in this work. The main part of this thesis is concerned with four different topics. Simultaneous edge detection and registration of two images: The image edges are detected with the Ambrosio-Tortorelli model, an approximation of the Mumford-Shah model that approximates the discontinuity set with a phase field, and the registration is based on these edges. The registration obtained by this model is fully symmetric in the sense that the same matching is obtained if the roles of the two input images are swapped. Detection of grain boundaries from atomic-scale images of metals or metal alloys: This is an image processing problem from materials science where atomic-scale images are obtained either experimentally, for instance by transmission electron microscopy, or by numerical simulation tools. Grains are homogeneous material regions whose atomic lattice orientation differs from their surroundings. Based on a Mumford-Shah-type functional, the grain boundaries are modeled as the discontinuity set of the lattice orientation. In addition to the grain boundaries, the model incorporates the extraction of a global elastic deformation of the atomic lattice. Numerically, the discontinuity set is modeled by a level set function following the approach by Chan and Vese. Joint motion estimation and restoration of motion-blurred video: A variational model for joint object detection, motion estimation and deblurring of consecutive video frames is proposed. For this purpose, a new motion blur model is developed that accurately describes the blur even close to the boundary of a moving object. Here, the video is assumed to consist of an object moving in front of a static background. The segmentation into object and background is handled by a Mumford-Shah-type aspect of the proposed model. Convexification of the binary Mumford-Shah segmentation model: After considering the application of Mumford-Shah-type models to tackle specific image processing problems in the previous topics, the Mumford-Shah model itself is studied more closely. Inspired by the work of Nikolova, Esedoglu and Chan, a method is developed that allows the global minimization of the binary Mumford-Shah segmentation model by solving a convex, unconstrained optimization problem. As an outlook, the segmentation of flow fields into piecewise affine regions using this convexification method is briefly discussed.
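
    For orientation, the two functionals the abstract refers to can be written in their standard textbook form (not copied from the thesis). The Mumford-Shah functional approximates the image g by a function u that is smooth away from an edge set K, balancing data fidelity, smoothness, and edge length:

    \[
      E_{\mathrm{MS}}[u, K] \;=\; \int_{\Omega} (u - g)^2 \, dx
      \;+\; \mu \int_{\Omega \setminus K} |\nabla u|^2 \, dx
      \;+\; \nu \, \mathcal{H}^{1}(K)
    \]

    The Ambrosio-Tortorelli approximation replaces the edge set K by a phase field v that is close to 0 on edges and close to 1 elsewhere, and Gamma-converges to the Mumford-Shah functional as \varepsilon tends to 0:

    \[
      E_{\varepsilon}[u, v] \;=\; \int_{\Omega} (u - g)^2 \, dx
      \;+\; \mu \int_{\Omega} v^2 |\nabla u|^2 \, dx
      \;+\; \nu \int_{\Omega} \Big( \varepsilon |\nabla v|^2 + \tfrac{(1 - v)^2}{4\varepsilon} \Big) \, dx
    \]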

    Combining Features and Semantics for Low-level Computer Vision

    Visual perception of depth and motion plays a significant role in understanding and navigating the environment. Reconstructing outdoor scenes in 3D and estimating motion from video cameras are of utmost importance for applications like autonomous driving. The corresponding problems in computer vision have witnessed tremendous progress over the last decades, yet some aspects remain challenging today. Striking examples are reflective and textureless surfaces or large motions, which cannot easily be recovered using traditional local methods. Further challenges include low frame rates, occlusions, large distortions, and difficult lighting conditions. In this thesis, we propose to overcome these challenges by modeling non-local interactions that leverage semantics and contextual information. First, for binocular stereo estimation, we propose to regularize over larger areas of the image using object-category-specific disparity proposals, which we sample using inverse-graphics techniques based on a sparse disparity estimate and a semantic segmentation of the image. The disparity proposals encode the fact that objects of certain categories are not arbitrarily shaped but typically exhibit regular structures. We integrate them as a non-local regularizer for the challenging object class 'car' into a superpixel-based graphical model and demonstrate its benefits, especially in reflective regions. Second, for 3D reconstruction, we leverage the fact that the larger the reconstructed area, the more likely objects of similar type and shape will occur in the scene. This is particularly true for outdoor scenes, where buildings and vehicles often suffer from missing texture or reflections but share similarity in 3D shape. We take advantage of this shape similarity by localizing objects using detectors and jointly reconstructing them while learning a volumetric model of their shape. This allows us to reduce noise while completing missing surfaces, as objects of similar shape benefit from all observations for the respective category. Evaluations with respect to LIDAR ground truth on a novel, challenging suburban dataset show the advantages of modeling structural dependencies between objects. Finally, motivated by the success of deep learning techniques in matching problems, we present a method for learning context-aware features for solving optical flow using discrete optimization. Towards this goal, we present an efficient way of training a context network with a large receptive field on top of a local network using dilated convolutions on patches. We perform feature matching by comparing each pixel in the reference image to every pixel in the target image, utilizing fast GPU matrix multiplication. The matching cost volume from the network's output forms the data term for discrete MAP inference in a pairwise Markov random field. Extensive evaluations reveal the importance of context for feature matching.
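
    A compact way to picture the final contribution is the sketch below (PyTorch): a small local feature network, a context network built from dilated convolutions to enlarge the receptive field, and a cost volume obtained by comparing every reference pixel with every target pixel through one batched matrix multiplication. Layer sizes, dilation factors, and names are assumptions made for the example; this is not the architecture used in the thesis.

    # Hedged sketch, not the authors' architecture: local features, a dilated
    # context network on top, and an all-pairs cost volume via batched matmul.
    import torch
    import torch.nn as nn

    class LocalNet(nn.Module):
        def __init__(self, feat_dim=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, feat_dim, 3, padding=1), nn.ReLU(inplace=True),
            )

        def forward(self, x):
            return self.net(x)

    class ContextNet(nn.Module):
        """Dilated convolutions on top of local features: the dilation factor
        grows geometrically, so the receptive field grows without pooling."""
        def __init__(self, feat_dim=64, dilations=(1, 2, 4, 8, 16)):
            super().__init__()
            layers = []
            for d in dilations:
                layers += [nn.Conv2d(feat_dim, feat_dim, 3, padding=d, dilation=d),
                           nn.ReLU(inplace=True)]
            self.net = nn.Sequential(*layers)

        def forward(self, f):
            return self.net(f)

    def cost_volume(feat_ref, feat_tgt):
        """feat_*: (B, C, H, W) feature maps (assumed L2-normalized along C).
        Returns (B, H*W, H*W) matching scores: every reference pixel against
        every target pixel, computed as one batched matrix product."""
        B, C, H, W = feat_ref.shape
        r = feat_ref.flatten(2).transpose(1, 2)  # (B, H*W, C)
        t = feat_tgt.flatten(2)                  # (B, C, H*W)
        return torch.bmm(r, t)                   # (B, H*W, H*W)

    In practice such an all-pairs volume is only tractable on downsampled feature maps; its scores would then form the data term of the pairwise Markov random field whose MAP labeling gives the discrete flow field.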

    2022 Review of Data-Driven Plasma Science

    Data-driven science and technology offer transformative tools and methods to science. This review article highlights the latest developments and progress in the interdisciplinary field of data-driven plasma science (DDPS), i.e., plasma science whose progress is driven strongly by data and data analyses. Plasma is considered to be the most ubiquitous form of observable matter in the universe. Data associated with plasmas can, therefore, cover extremely large spatial and temporal scales, and often provide essential information for other scientific disciplines. Thanks to the latest technological developments, plasma experiments, observations, and computation now produce a large amount of data that can no longer be analyzed or interpreted manually. This trend necessitates a highly sophisticated use of high-performance computers for data analyses, making artificial intelligence and machine learning vital components of DDPS. This article contains seven primary sections in addition to the introduction and summary. Following an overview of fundamental data-driven science, five further sections cover widely studied topics of plasma science and technology: basic plasma physics and laboratory experiments, magnetic confinement fusion, inertial confinement fusion and high-energy-density physics, space and astronomical plasmas, and plasma technologies for industrial and other applications. The final section before the summary discusses plasma-related databases that could significantly contribute to DDPS. Each primary section starts with a brief introduction to the topic, discusses the state-of-the-art developments in the use of data and/or data-scientific approaches, and presents a summary and outlook. Despite recent impressive progress, DDPS is still in its infancy. This article attempts to offer a broad perspective on the development of the field and to identify where further innovations are required.

    Sensor Signal and Information Processing II

    In the current age of the information explosion, newly developed sensor technologies and software are tightly integrated with our everyday lives. Many sensor-processing algorithms incorporate some form of computational intelligence as part of their core framework for problem solving. These algorithms have the capacity to generalize, discover knowledge for themselves, and learn new information whenever unseen data are captured. The primary aim of sensor processing is to develop techniques to interpret, understand, and act on the information contained in the data. The interest of this book is in developing intelligent signal processing in order to pave the way for smart sensors. This involves the mathematical advancement of nonlinear signal-processing theory and its applications, which extend far beyond traditional techniques. It bridges the boundary between theory and application, developing novel, theoretically inspired methodologies targeting both longstanding and emergent signal-processing applications. The topics range from phishing detection to the integration of terrestrial laser scanning, and from fault diagnosis to bio-inspired filtering. The book will appeal to established practitioners, along with researchers and students, in the emerging field of smart sensor processing.