
    Anomaly Detection, Rule Adaptation and Rule Induction Methodologies in the Context of Automated Sports Video Annotation.

    Automated video annotation is a topic of considerable interest in computer vision due to its applications in video search, object-based video encoding and enhanced broadcast content. The domain of sports broadcasting is, in particular, the subject of current research attention due to its fixed, rule-governed content. This research aims to develop, analyse and demonstrate novel methodologies that are useful in the context of adaptive and automated video annotation systems. In this thesis, we present methodologies for addressing the problems of anomaly detection, rule adaptation and rule induction for court-based sports such as tennis and badminton. We first introduce an HMM induction strategy for a court-model-based method that uses the court structure, in the form of a lattice, for the two related modalities of singles and doubles tennis, to tackle the problems of anomaly detection and rectification. We also introduce another anomaly detection methodology based on the disparity between low-level vision-based classifiers and a high-level contextual classifier. To address the problem of rule adaptation, we propose an approach that employs convex hulling of the anomalous states. We further investigate a number of novel hierarchical HMM generating methods for stochastic induction of game rules. These methodologies include Cartesian-product Label-based Hierarchical Bottom-up Clustering (CLHBC), which employs prior information within the label structures, and a new constrained variant of the classical Chinese Restaurant Process (CRP) that is relevant to sports games. We also propose two hybrid methodologies in this context and compare them against a flat Markov model. Finally, we show that these methods generalize to other rule-based environments.
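The anomaly detection idea above can be illustrated in miniature: train (or, here, hand-specify) an HMM over game events and flag sequences whose likelihood under the model is low. The three-state model, its parameters and the event encodings below are illustrative inventions, not the thesis's lattice-based court model.

```python
import numpy as np

# Toy HMM whose hidden states loosely stand for rally phases
# (serve, rally, point-end). All numbers are illustrative.
A = np.array([[0.7, 0.3, 0.0],   # state transition probabilities
              [0.1, 0.6, 0.3],
              [0.5, 0.0, 0.5]])
B = np.array([[0.8, 0.1, 0.1],   # emission probabilities per state
              [0.1, 0.8, 0.1],
              [0.2, 0.2, 0.6]])
pi = np.array([1.0, 0.0, 0.0])   # sequences start with a serve

def log_likelihood(obs):
    """Scaled forward algorithm; returns log P(obs | model)."""
    alpha = pi * B[:, obs[0]]
    ll = 0.0
    for t in range(1, len(obs)):
        c = alpha.sum()
        ll += np.log(c)
        alpha = (alpha / c) @ A * B[:, obs[t]]
    return ll + np.log(alpha.sum())

normal = [0, 1, 1, 2]      # serve -> rally -> rally -> point end
anomalous = [2, 0, 2, 0]   # an ordering the transition model rarely produces
print(log_likelihood(normal) > log_likelihood(anomalous))  # True
```

A real system would learn A, B and pi from annotated footage and set an anomaly threshold on the per-frame average log-likelihood.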

    Multifrequency methods for Electrical Impedance Tomography

    Multifrequency Electrical Impedance Tomography (MFEIT) is an emerging imaging modality which exploits the dependence of tissue impedance on frequency to recover images of conductivity. Given the low cost and portability of EIT scanners, MFEIT could provide emergency diagnosis of pathologies such as acute stroke, brain injury and breast cancer. Whereas time-difference, or dynamic, EIT is an established technique for monitoring lung ventilation, MFEIT has received less attention in the literature, and the imaging methodology is at an early stage of development. MFEIT holds the unique potential to form images from static data, but its high sensitivity to noise and modelling errors must be overcome. The subject of this doctoral thesis is the investigation of novel techniques for including spectral information in the image reconstruction process. The aim is to mitigate the ill-posedness of the inverse problem and deliver the first imaging methodology with sufficient robustness for clinical application. First, a simple linear model for the conductivity is defined and a simultaneous multifrequency method is developed. Second, the method is applied to a realistic numerical model of a human head, and the robustness to modelling errors is investigated. Third, a combined image reconstruction and classification method is developed, which allows for the simultaneous recovery of the conductivity and the spectral information by introducing a Gaussian-mixture model for the conductivity. Finally, a graph-cut image segmentation technique is integrated into the imaging method. In conclusion, this work identifies spectral information as a key resource for producing MFEIT images and points to a new direction for the development of MFEIT algorithms.
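As a hedged illustration of the linearised reconstruction step, the sketch below solves a Tikhonov-regularised least-squares problem independently at each frequency on synthetic data. The thesis's simultaneous multifrequency coupling and Gaussian-mixture spectral model are not reproduced, and all dimensions, matrices and noise levels are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_meas, n_freq = 16, 24, 3   # illustrative problem sizes

# Hypothetical linearised forward model per frequency f: v_f = J_f @ sigma_f + noise,
# where J_f is the sensitivity (Jacobian) matrix and sigma_f the conductivity image.
J = [rng.standard_normal((n_meas, n_pix)) for _ in range(n_freq)]
sigma_true = rng.standard_normal((n_freq, n_pix))
v = [J[f] @ sigma_true[f] + 0.01 * rng.standard_normal(n_meas) for f in range(n_freq)]

def reconstruct(Jf, vf, lam=1e-2):
    """Tikhonov-regularised least squares: argmin ||J s - v||^2 + lam ||s||^2."""
    n = Jf.shape[1]
    return np.linalg.solve(Jf.T @ Jf + lam * np.eye(n), Jf.T @ vf)

sigma_hat = np.array([reconstruct(J[f], v[f]) for f in range(n_freq)])
rel_err = np.linalg.norm(sigma_hat - sigma_true) / np.linalg.norm(sigma_true)
print(rel_err < 0.5)  # True: low noise, mild regularisation
```

Coupling the frequencies, as the thesis does, amounts to stacking these per-frequency systems and adding a constraint that ties the sigma_f together through a shared spectral model.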

    Situation Assessment for Mobile Robots


    Fault-tolerant feature-based estimation of space debris motion and inertial properties

    The exponential growth of societal needs and the parallel development of space technologies have led to significant use of low Earth orbits for placing artificial satellites. The current overpopulation of these orbits has also increased the interest of the major space agencies in technologies for the removal of at least the largest spacecraft that have reached their end of life or have failed their mission. One of the key functionalities required in a mission for removing a non-cooperative spacecraft is the assessment of its kinematics and inertial properties. In a few cases, this information can be approximated by ground observations. However, a re-assessment after the rendezvous phase is of critical importance for refining the capture strategy and preventing accidents. The CADET program (CApture and DE-orbiting Technologies), funded by Regione Piemonte and led by Aviospace s.r.l., involved Politecnico di Torino in the search for solutions to the above issue. This dissertation proposes methods and algorithms for estimating the location of the center of mass, the angular rate, and the moments of inertia of a passive object. These methods require that the chaser spacecraft be capable of tracking several features of the target through passive vision sensors. Because of the harsh lighting conditions of the space environment, feature-based methods should tolerate temporary failures in detecting features. The principal works on this topic do not consider this important aspect, which makes it a characteristic trait of the proposed methods. Compared to typical treatments of the estimation problem, the proposed techniques do not depend solely on state observers; methods for recovering missing information, such as compressive sampling techniques, are used to preprocess the input data and support the efficient usage of state observers.
    Simulation results showed accuracy comparable to that of the best-known methods in the literature. The developed algorithms were tested in the CADETLab laboratory set up by Aviospace s.r.l. The results of the experimental tests suggest the practical applicability of these algorithms for supporting a real active removal mission.
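The dissertation's fault-tolerant pipeline is not reproduced here, but the core idea of recovering rigid-body motion from tracked features can be sketched with the standard Kabsch algorithm: fit the best rotation between two frames of feature points and read the angular rate off the rotation angle. The geometry, frame interval and spin axis below are invented for illustration, and feature dropouts are not handled.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.1                                  # s between frames (assumed)
omega_true = 0.3                          # rad/s spin about the z axis (assumed)
theta = omega_true * dt
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])

feats0 = rng.standard_normal((8, 3))      # tracked feature points on the target
feats1 = feats0 @ Rz.T                    # the same points one frame later

def kabsch(P, Q):
    """Best-fit rotation taking point set P onto Q (centroids removed)."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

R_est = kabsch(feats0, feats1)
angle = np.arccos((np.trace(R_est) - 1.0) / 2.0)   # rotation angle from trace
omega_est = angle / dt
print(abs(omega_est - omega_true) < 1e-6)  # True on noiseless data
```

With noisy or intermittently missing features, this per-frame fit would feed a state observer, which is where the recovery techniques described above come in.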

    Resource Constrained Adaptive Sensing.

    By Raghuram Rangarajan. Chair: Alfred O. Hero III.
    Many signal processing methods in applications such as radar imaging, communication systems, and wireless sensor networks can be presented in an adaptive sensing context. The goal in adaptive sensing is to control the acquisition of data measurements through adaptive design of the input parameters, e.g., waveforms, energies, projections, and sensors, for optimizing performance. This dissertation develops new methods for resource constrained adaptive sensing in the context of parameter estimation and detection, sensor management, and target tracking. We begin by investigating the advantages of adaptive waveform amplitude design for estimating parameters of an unknown channel/medium under average energy constraints. We present a statistical framework for sequential design of experiments (e.g., design of waveforms in adaptive sensing) that improves parameter estimation performance (e.g., scatter coefficients for radar imaging, channel coefficients for channel estimation) in terms of reduction in mean-squared error (MSE). We derive optimal adaptive energy allocation strategies that achieve an MSE improvement of more than 5 dB over non-adaptive methods. As a natural extension of the estimation problem, we derive optimal energy allocation strategies for binary hypothesis testing under the frequentist and Bayesian frameworks, which yield at least a 2 dB improvement in performance. We then shift our focus towards spatial design of waveforms by considering the problem of optimal waveform selection from a large waveform library for a state estimation problem. Since the optimal solution to this subset selection problem is combinatorially complex, we propose a convex relaxation and provide a low-complexity suboptimal solution that achieves near-optimal performance. Finally, we address the problem of sensor and target localization in wireless sensor networks. We develop a novel sparsity-penalized multidimensional scaling algorithm for blind target tracking, i.e., a sensor network which can simultaneously track targets and obtain sensor location estimates.
    Ph.D. Electrical Engineering: Systems. University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/57621/2/rangaraj_1.pd
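The subset selection problem mentioned above can be made concrete, though the convex relaxation itself needs a dedicated solver. The sketch below substitutes a simple greedy D-optimal heuristic: pick measurement rows one at a time to maximise the log-determinant of the accumulated Fisher information. This is a stand-in for illustration, not the dissertation's relaxation, and all shapes and data are made up.

```python
import numpy as np

rng = np.random.default_rng(2)
H = rng.standard_normal((30, 4))   # 30 candidate linear measurements of a 4-dim state

def greedy_select(H, k):
    """Greedily choose k rows of H maximising log det(H_S^T H_S) (D-optimality).
    A heuristic stand-in; the dissertation instead relaxes the Boolean
    selection variables to a convex program."""
    chosen = []
    M = 1e-9 * np.eye(H.shape[1])          # jitter keeps the log-det finite
    for _ in range(k):
        best_i, best_val = -1, -np.inf
        for i in range(H.shape[0]):
            if i in chosen:
                continue
            h = H[i:i + 1]
            val = np.linalg.slogdet(M + h.T @ h)[1]   # gain from adding row i
            if val > best_val:
                best_i, best_val = i, val
        chosen.append(best_i)
        M = M + H[best_i:best_i + 1].T @ H[best_i:best_i + 1]
    return chosen

sel = greedy_select(H, 6)
print(len(set(sel)) == 6)  # True: six distinct measurements selected
```

Greedy selection of this objective comes with well-known submodularity guarantees, which is why it is a common baseline against which relaxation-based selectors are compared.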

    Selective review of offline change point detection methods

    This article presents a selective survey of algorithms for the offline detection of multiple change points in multivariate time series. A general yet structured methodological strategy is adopted to organize this vast body of work. More precisely, the detection algorithms considered in this review are characterized by three elements: a cost function, a search method and a constraint on the number of changes. Each of these elements is described, reviewed and discussed separately. Implementations of the main algorithms described in this article are provided within a Python package called ruptures.
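The cost-function / search-method / constraint decomposition can be made concrete with a short sketch: an L2 cost, an exact dynamic-programming search, and a fixed number of change points. This mirrors, but does not reuse, the implementations in ruptures.

```python
import numpy as np

def l2_cost(signal, a, b):
    """Cost function: sum of squared deviations from the mean on signal[a:b]."""
    seg = signal[a:b]
    return float(((seg - seg.mean(axis=0)) ** 2).sum())

def dp_changepoints(signal, n_bkps):
    """Search method: exact dynamic programming.
    Constraint: a fixed number n_bkps of change points."""
    n = len(signal)
    C = np.full((n_bkps + 1, n + 1), np.inf)   # C[k][t]: best cost of k+1 segments on [:t]
    back = np.zeros((n_bkps + 1, n + 1), dtype=int)
    for t in range(1, n + 1):
        C[0][t] = l2_cost(signal, 0, t)
    for k in range(1, n_bkps + 1):
        for t in range(k + 1, n + 1):
            for s in range(k, t):
                c = C[k - 1][s] + l2_cost(signal, s, t)
                if c < C[k][t]:
                    C[k][t], back[k][t] = c, s
    bkps, t = [], n                            # backtrack the optimal split points
    for k in range(n_bkps, 0, -1):
        t = back[k][t]
        bkps.append(t)
    return sorted(int(b) for b in bkps)

sig = np.concatenate([np.zeros(20), 5 * np.ones(20), -3 * np.ones(20)])
print(dp_changepoints(sig, 2))  # [20, 40]
```

With the package itself installed, the equivalent call would be along the lines of `rpt.Dynp(model="l2").fit(sig).predict(n_bkps=2)`.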

    Robust density modelling using the Student's t-distribution for human action recognition

    The extraction of human features from videos is often inaccurate and prone to outliers. Such outliers can severely affect density modelling when the Gaussian distribution is used as the model, since it is highly sensitive to outliers. The Gaussian distribution is also often used as a base component of graphical models for recognising human actions in videos (hidden Markov models and others), and the presence of outliers can significantly affect the recognition accuracy. In contrast, the Student's t-distribution is more robust to outliers and can be exploited to improve the recognition rate in the presence of abnormal data. In this paper, we present an HMM which uses mixtures of t-distributions as observation probabilities and show, through experiments on two well-known datasets (Weizmann, MuHAVi), a remarkable improvement in classification accuracy. © 2011 IEEE
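The paper's HMM with t-mixture observation densities is not reproduced here, but the robustness argument can be illustrated in isolation: an iteratively reweighted (EM-style) estimate of a Student's t location down-weights outliers that badly bias the Gaussian maximum-likelihood mean. The data, degrees of freedom and fixed unit scale below are illustrative choices.

```python
import numpy as np

# 50 well-behaved samples near 0, plus two gross outliers.
data = np.concatenate([np.random.default_rng(3).normal(0.0, 1.0, 50),
                       [25.0, 30.0]])

mu_gauss = data.mean()   # Gaussian ML location: dragged towards the outliers

def t_location(x, nu=3.0, n_iter=50):
    """EM-style iteratively reweighted estimate of the location of a
    Student's t with fixed unit scale and nu degrees of freedom.
    Outlying points receive small weights, so the estimate is robust."""
    mu = np.median(x)
    for _ in range(n_iter):
        w = (nu + 1.0) / (nu + (x - mu) ** 2)   # small w for large residuals
        mu = (w * x).sum() / w.sum()
    return mu

mu_t = t_location(data)
print(abs(mu_t) < abs(mu_gauss))  # True: the t-based estimate stays near 0
```

The same down-weighting happens inside each mixture component of a t-mixture HMM, which is why corrupted feature frames perturb the learned densities far less than they would under Gaussian observations.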