
    Active query process for digital video surveillance forensic applications

    Multimedia forensics is an emerging discipline concerned with the analysis and exploitation of digital data to support investigations by extracting probative elements. Among such data, visual information about people and their activities, extracted efficiently from video, is becoming increasingly valuable for forensics due to the growing availability of large video-surveillance footage. Many research studies and prototypes therefore investigate the analysis of soft-biometric data, such as people's appearance and trajectories. In this work, we propose new solutions for querying and retrieving visual data in an interactive and active fashion for soft biometrics in forensics. The proposal combines the capability of transductive learning for semi-supervised search by similarity with a typical multimedia methodology based on user-guided relevance feedback, allowing active interaction with the visual data of people, their appearance, and their trajectories over large surveillance areas. The proposed approaches are very general and can be exploited independently of the surveillance setting and the type of video analytics tools.
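
    A minimal sketch, assuming simple cosine-similarity ranking over appearance descriptors and a Rocchio-style update from user feedback, of how interactive query refinement of this kind can look; the function names and parameters are hypothetical and do not reproduce the paper's transductive learning method.

```python
import numpy as np

def refine_query(query, relevant, non_relevant, alpha=1.0, beta=0.75, gamma=0.25):
    """Rocchio-style update of a query descriptor from user relevance feedback."""
    q = alpha * query.astype(float)
    if len(relevant):
        q += beta * np.mean(relevant, axis=0)
    if len(non_relevant):
        q -= gamma * np.mean(non_relevant, axis=0)
    return q

def rank_by_similarity(query, gallery):
    """Rank gallery descriptors by cosine similarity to the query (best first)."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = g @ q
    return np.argsort(-scores), scores

# Hypothetical 128-D appearance descriptors for people seen in surveillance footage.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(1000, 128))
query = rng.normal(size=128)
order, _ = rank_by_similarity(query, gallery)
# After the investigator marks some results as relevant / non-relevant, re-rank:
refined = refine_query(query, gallery[order[:5]], gallery[order[-5:]])
order2, _ = rank_by_similarity(refined, gallery)
```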

    Decentralized Riemannian Particle Filtering with Applications to Multi-Agent Localization

    The primary focus of this research is to develop consistent nonlinear decentralized particle filtering approaches to the problem of multiple-agent localization. A key aspect of our development is the use of Riemannian geometry to exploit the inherently non-Euclidean characteristics that are typical of multiple-agent localization scenarios. A decentralized formulation is considered due to the practical advantages it provides over centralized fusion architectures. Inspiration is taken from the relatively new field of information geometry and the more established research field of computer vision. Differential geometric tools such as manifolds, geodesics, tangent spaces, and exponential and logarithmic mappings are used extensively to describe probabilistic quantities. Numerous probabilistic parameterizations were considered, and the efficient square-root probability density function parameterization was selected. The square-root parameterization has the benefit of allowing filter calculations to be carried out on the well-studied Riemannian unit hypersphere. A key advantage of selecting the unit hypersphere is that it permits closed-form calculations, a characteristic not shared by current solution approaches. Through the use of the Riemannian geometry of the unit hypersphere, we demonstrate the ability to produce estimates that are not overly optimistic. Results are presented that clearly show the ability of the proposed approaches to outperform current state-of-the-art decentralized particle filtering methods. In particular, results are presented that emphasize the achievable improvement in estimation error, estimator consistency, and required computational burden.
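
    As a toy illustration of the square-root parameterization mentioned above: the elementwise square root of a discrete density lies on the unit hypersphere, where geodesic distances and the exponential and logarithmic maps have closed forms. The sketch below only shows that geometry on assumed discrete densities; it does not implement the decentralized filter itself, and the function names are illustrative.

```python
import numpy as np

def sqrt_embed(pdf):
    """Map a discrete pdf (nonnegative weights summing to 1) onto the unit hypersphere."""
    return np.sqrt(pdf)

def log_map(x, y):
    """Logarithmic map at x: tangent vector at x pointing toward y along the geodesic."""
    cos_t = np.clip(np.dot(x, y), -1.0, 1.0)
    theta = np.arccos(cos_t)
    if theta < 1e-12:
        return np.zeros_like(x)
    v = y - cos_t * x
    return theta * v / np.linalg.norm(v)

def exp_map(x, v):
    """Exponential map at x: follow the tangent vector v along the geodesic."""
    theta = np.linalg.norm(v)
    if theta < 1e-12:
        return x.copy()
    return np.cos(theta) * x + np.sin(theta) * v / theta

# Two discrete densities, their geodesic distance on the sphere, and the geodesic midpoint.
p = np.array([0.2, 0.3, 0.5])
q = np.array([0.1, 0.6, 0.3])
x, y = sqrt_embed(p), sqrt_embed(q)
distance = np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))
midpoint = exp_map(x, 0.5 * log_map(x, y))
```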

    Satellite Cluster Tracking via Extent Estimation

    Clusters of closely spaced objects in orbit present unique tracking and prediction challenges. Association of observations to individual objects is often not possible until the objects have drifted sufficiently far apart from one another. This dissertation proposes a new paradigm for the initial tracking of these clusters of objects: instead of tracking the objects independently, the cluster is tracked as a single entity, parameterized by its centroid and extent, or shape. The feasibility of this method is explored using a decoupled centroid and extent estimation scheme. The dynamics of the centroid of a cluster of satellites are studied, and a set of modified equinoctial elements is shown to minimize the discrepancy between the motion of the centroid and the observation-space centroid. The extent estimator is formulated as a matrix-variate particle filter. Several matrix similarity measures are tested as the filter weighting function, and the Bhattacharyya distance is shown to outperform the others in test cases. Finally, the combined centroid and extent filter is tested on a set of three on-orbit breakup events, generated using the NASA standard breakup model and simulated using realistic force models. The filter is shown to perform well across low-Earth, geosynchronous, and highly elliptical orbits, with centroid error generally below five kilometers and well-fitting extent estimates. These results demonstrate that a decoupled centroid and extent filter can effectively track clusters of closely spaced satellites, which could improve spaceflight safety by providing quantitative tracking information for the entire cluster much earlier than would otherwise be available through typical means.
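
    A rough sketch of how a Bhattacharyya-distance weighting function over extent (covariance) matrices might look; the zero-mean Gaussian form, the exponential conversion to weights, and all names below are assumptions for illustration and do not reproduce the dissertation's matrix-variate particle filter.

```python
import numpy as np

def bhattacharyya_extent_distance(S1, S2):
    """Bhattacharyya distance between two zero-mean Gaussians with covariances S1, S2.
    With equal means, only the covariance (extent) term of the distance remains."""
    S = 0.5 * (S1 + S2)
    return 0.5 * np.log(np.linalg.det(S) / np.sqrt(np.linalg.det(S1) * np.linalg.det(S2)))

def particle_weights(extent_particles, measured_extent):
    """Convert matrix distances into normalized particle weights."""
    d = np.array([bhattacharyya_extent_distance(P, measured_extent) for P in extent_particles])
    w = np.exp(-d)          # Bhattacharyya coefficient used as an unnormalized weight
    return w / w.sum()

# Hypothetical 3x3 extent matrices describing the spread of a debris cluster.
rng = np.random.default_rng(1)
measured = np.diag([4.0, 1.0, 0.5])
particles = [np.diag(rng.uniform(0.3, 5.0, size=3)) for _ in range(100)]
weights = particle_weights(particles, measured)
```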

    Invariances for Gaussian models

    At the heart of a statistical analysis, we are interested in drawing conclusions about random variables and the laws they follow. For this we require a sample; our approach is therefore best described as learning from data. In many instances we already have an intuition about the generating process, meaning the space of all possible models reduces to a specific class that is defined up to a set of unknown parameters. Consequently, learning becomes the task of inferring these parameters from observations. Within this scope, the thesis answers the following two questions. Why are invariances needed? Among all parameters of a model, we often distinguish between those of interest and the so-called nuisance parameters. The latter do not carry any meaning for our purposes, but may still play a crucial role in how the model supports the parameters of interest. This is a fundamental problem in statistics, solved by finding suitable transformations such that the model becomes invariant to unidentifiable properties. Often, the application at hand already dictates the requirements: a Euclidean distance matrix, for example, does not carry translational information about the underlying coordinate system. Why Gaussian models? The normal distribution constitutes an important class in statistics due to its frequent occurrence in nature, and it is highly relevant for many research disciplines, including physics, astronomy, and engineering, but also psychology and the social sciences. Besides fundamental results like the central limit theorem, a significant part of its appeal is rooted in convenient mathematical properties which permit closed-form solutions to numerous problems. In this work, we develop and discuss generalizations of three established models: a Gaussian mixture model, a Gaussian graphical model, and the Gaussian information bottleneck. All of these are analytically convenient, but they suffer from strict normality requirements which severely limit their range of application. Our focus is therefore to explore solutions that relax these restrictions. We show that, with the addition of invariances, the aforementioned models become substantially more widely applicable while retaining the core concepts of their Gaussian foundation.
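
    The Euclidean distance matrix example above can be made concrete with a short sketch: double-centering a squared distance matrix recovers a Gram matrix that is invariant to translations of the underlying coordinates (the classical multidimensional scaling construction). The function names and toy data below are illustrative assumptions, not the thesis's models.

```python
import numpy as np

def gram_from_edm(D2):
    """Recover a centered Gram matrix from a squared Euclidean distance matrix.
    The result is invariant to translations of the original coordinates."""
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering projector
    return -0.5 * J @ D2 @ J

def squared_distances(Z):
    """Pairwise squared Euclidean distances between the rows of Z."""
    return ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)

# A point set and a translated copy yield the same distance matrix, hence the same Gram matrix.
rng = np.random.default_rng(2)
X = rng.normal(size=(5, 3))
Y = X + np.array([10.0, -3.0, 7.0])   # pure translation (a nuisance transformation)
assert np.allclose(gram_from_edm(squared_distances(X)), gram_from_edm(squared_distances(Y)))
```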

    Generalized Hidden Filter Markov Models Applied to Speaker Recognition

    Classification of time series has wide Air Force, DoD, and commercial interest, from automatic target recognition systems on munitions to recognition of speakers in diverse environments. The ability to effectively model the temporal information contained in a sequence is of paramount importance. Toward this goal, this research develops theoretical extensions to a class of stochastic models and demonstrates their effectiveness on the problem of text-independent (language-constrained) speaker recognition. Specifically, within the hidden Markov model architecture, additional constraints are implemented which better incorporate observation correlations and context, where standard approaches fail. Two methods of modeling correlations are developed, and their mathematical properties of convergence and reestimation are analyzed. They differ in modeling the correlations present in the time samples and those present in the processed features, such as Mel-frequency cepstral coefficients. The system models speaker-dependent phonemes, making use of word dictionary grammars, and recognition is based on normalized log-likelihood Viterbi decoding. Both closed-set identification and speaker verification using cohorts are performed on the YOHO database. YOHO is the only large-scale, multiple-session, high-quality speech database for speaker authentication and contains over one hundred speakers stating combination locks. Equal error rates of 0.21% for males and 0.31% for females are demonstrated. A critical error analysis using a hypothesis-test formulation provides the maximum number of errors observable while still meeting the goal error rates of 1% false reject and 0.1% false accept; our system achieves this goal.
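
    A minimal, hypothetical sketch of cohort-based score normalization for speaker verification: the claimed speaker's log-likelihood (e.g. accumulated along a Viterbi path) is compared against the mean log-likelihood of a cohort of similar impostor models. The names and threshold are assumptions; the thesis's exact normalized log-likelihood scoring is not reproduced here.

```python
import numpy as np

def cohort_normalized_score(claimed_loglik, cohort_logliks):
    """Normalize the claimed speaker's log-likelihood by the mean log-likelihood
    of a cohort of similar (impostor) speaker models."""
    return claimed_loglik - np.mean(cohort_logliks)

def verify(claimed_loglik, cohort_logliks, threshold=0.0):
    """Accept the identity claim if the cohort-normalized score exceeds the threshold."""
    return cohort_normalized_score(claimed_loglik, cohort_logliks) > threshold

# Hypothetical utterance log-likelihoods already accumulated over a Viterbi path.
accepted = verify(-4210.7, np.array([-4380.2, -4402.9, -4355.1]))
```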