
    Adaptive processing with signal contaminated training samples

    We consider the adaptive beamforming or adaptive detection problem in the case of signal-contaminated training samples, i.e., when the latter may contain a signal-like component. Since this results in a significant degradation of the signal-to-interference-plus-noise ratio at the output of the adaptive filter, we investigate a scheme to jointly detect the contaminated samples and subsequently take this information into account in the estimation of the disturbance covariance matrix. Towards this end, a Bayesian model is proposed, parameterized by binary variables indicating the presence or absence of signal-like components in the training samples. These variables, together with the signal amplitudes and the disturbance covariance matrix, are jointly estimated using a minimum mean-square error (MMSE) approach. Two strategies are proposed to implement the MMSE estimator. First, a stochastic Markov chain Monte Carlo method based on Gibbs sampling is presented. Then a computationally more efficient scheme based on variational Bayesian analysis is proposed. Numerical simulations attest to the improvement achieved by this method compared to conventional methods such as diagonal loading. A successful application to real radar data is also presented.
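    The abstract cites diagonal loading as the conventional baseline. For readers unfamiliar with it, a minimal sketch of a diagonally loaded MVDR beamformer follows; the array shapes, the loading rule (scaled by the average eigenvalue of the sample covariance), and all names are illustrative assumptions, not code from the paper.

```python
import numpy as np

def diagonal_loading_mvdr(snapshots, steering, loading=10.0):
    """MVDR beamformer with a diagonally loaded sample covariance.

    snapshots : (N, K) complex array, K training snapshots from N sensors
    steering  : (N,) complex steering vector of the desired signal
    loading   : loading level relative to the average eigenvalue of R
    """
    N, K = snapshots.shape
    # Sample covariance matrix of the training data
    R = snapshots @ snapshots.conj().T / K
    # Diagonal loading: regularise R before inversion (the loading level
    # is a tuning parameter; scaling by tr(R)/N is one common choice)
    R_dl = R + loading * (np.trace(R).real / N) * np.eye(N)
    # MVDR weights: w = R^{-1} s / (s^H R^{-1} s)
    Rinv_s = np.linalg.solve(R_dl, steering)
    return Rinv_s / (steering.conj() @ Rinv_s)
```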

    Ultrasound Signal Processing: From Models to Deep Learning

    Medical ultrasound imaging relies heavily on high-quality signal processing algorithms to provide reliable and interpretable image reconstructions. Hand-crafted reconstruction methods, often based on approximations of the underlying measurement model, are useful in practice but notoriously fall behind in terms of image quality. More sophisticated solutions, based on statistical modelling, careful parameter tuning, or increased model complexity, can be sensitive to different environments. Recently, deep learning-based methods, which are optimized in a data-driven fashion, have gained popularity. These model-agnostic methods often rely on generic model structures and require vast training data to converge to a robust solution. A relatively new paradigm combines the power of the two: leveraging data-driven deep learning while also exploiting domain knowledge. These model-based solutions yield high robustness and require fewer trainable parameters and less training data than conventional neural networks. In this work we provide an overview of these methods from the recent literature and discuss a wide variety of ultrasound applications. We aim to inspire the reader to pursue further research in this area and to address the opportunities within the field of ultrasound signal processing. We conclude with a future perspective on these model-based deep learning techniques for medical ultrasound applications.
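    As a concrete illustration of the model-based paradigm described above, the sketch below unrolls ISTA iterations for a generic sparse linear measurement model y = A x + noise; in a model-based network the per-layer step size and threshold would become trainable parameters. The model, parameter choices, and names are assumptions for illustration, not a method taken from the review.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the l1 norm (element-wise shrinkage)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def unrolled_ista(y, A, n_layers=10, theta=0.1):
    """n_layers unrolled ISTA iterations for y = A @ x + noise.

    In a model-based network the step size and threshold would be
    trainable per layer; here they are fixed scalars for illustration.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = spectral norm of A
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):
        # Gradient step on the data-fidelity term, then shrinkage
        x = soft_threshold(x + step * A.T @ (y - A @ x), step * theta)
    return x
```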

    Augmenting Deep Learning Performance in an Evidential Multiple Classifier System

    The main objective of this work is to study the applicability of ensemble methods in the context of deep learning with limited amounts of labeled data. We exploit an ensemble of neural networks derived using Monte Carlo dropout, along with an ensemble of SVM classifiers which owes its effectiveness to the hand-crafted features used as inputs and to an active learning procedure. In order to leverage each classifier's respective strengths, we combine them in an evidential framework, which explicitly models their imprecision and uncertainty. The application we consider to illustrate the value of our Multiple Classifier System is pedestrian detection in high-density crowds, which is ideally suited because of its difficulty, its labeling cost, and the intrinsic imprecision of the annotation data. We show that the fusion resulting from this effective modeling of uncertainty allows for performance improvement and, at the same time, for a deeper interpretation of the result in terms of the commitment of the decision.
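    A toy version of the Monte Carlo dropout ensemble mentioned above: dropout is kept active at test time and many stochastic forward passes are averaged, with their spread read as an uncertainty cue. The two-layer network, its weights, and all names here are hypothetical, purely to show the mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, W1, W2, p_drop=0.5):
    """One stochastic forward pass of a tiny MLP, dropout kept ON."""
    h = np.maximum(W1 @ x, 0.0)              # ReLU hidden layer
    mask = rng.random(h.shape) >= p_drop     # fresh dropout mask per pass
    h = h * mask / (1.0 - p_drop)            # inverted dropout scaling
    z = W2 @ h
    e = np.exp(z - z.max())
    return e / e.sum()                       # softmax class probabilities

def mc_dropout_predict(x, W1, W2, n_samples=30):
    """Average many stochastic passes; the spread is an uncertainty cue."""
    preds = np.stack([mlp_forward(x, W1, W2) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)

# Toy usage with random weights (illustration only)
W1 = rng.standard_normal((16, 8))
W2 = rng.standard_normal((3, 16))
mean_prob, std_prob = mc_dropout_predict(rng.standard_normal(8), W1, W2)
```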

    From small to large baseline multiview stereo: dealing with blur, clutter and occlusions

    This thesis addresses the problem of reconstructing the three-dimensional (3D) digital model of a scene from a collection of two-dimensional (2D) images taken of it. To address this fundamental computer vision problem, we propose three algorithms; they are the main contributions of this thesis. First, we solve multiview stereo with the off-axis aperture camera. This system has a very small baseline, as images are captured from viewpoints close to each other. The key idea is to change the size or the 3D location of the aperture of the camera so as to extract selected portions of the scene. Our imaging model takes both defocus and stereo information into account and allows us to solve shape reconstruction and image restoration in one go. The off-axis aperture camera can be used in small-scale spaces where the camera motion is constrained by the surrounding environment, such as in 3D endoscopy. Second, to solve multiview stereo with a large baseline, we present a framework that poses the problem of recovering a 3D surface in the scene as a regularized minimal partition problem of a visibility function. The formulation is convex and hence guarantees convergence to the global minimum. Our formulation is robust to view-varying extensive occlusions, clutter and image noise. At no stage during the estimation does the method rely on the visual hull, 2D silhouettes, approximate depth maps, or knowing which views are dependent (i.e., overlapping) and which are independent (i.e., non-overlapping). Furthermore, the degenerate solution, the null surface, is not included as a global solution in this formulation. One limitation of this algorithm is that its computational complexity grows with the number of views combined simultaneously. To address this limitation, we propose a third formulation, in which the visibility functions are integrated within a narrow band around the estimated surface by assigning weights to each point along the optical rays. This thesis presents technical descriptions of each algorithm and detailed analyses showing how these algorithms improve on existing reconstruction techniques.
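    For orientation, a generic convex minimal-partition energy of the kind the second contribution alludes to can be written as below; the exact functional in the thesis differs in its data term, so treat this as an assumed canonical form rather than the thesis's formulation.

```latex
\min_{u:\,\Omega \to [0,1]} \int_\Omega |\nabla u|\,\mathrm{d}x
  \;+\; \lambda \int_\Omega \rho(x)\, u(x)\,\mathrm{d}x
```

    Here u is a relaxed indicator of the volume enclosed by the surface and \rho is a data cost derived from the visibility function; for energies of this total-variation-plus-linear form, thresholding the minimizer at any level in (0,1) recovers a globally optimal binary partition by the coarea argument.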

    Decision making with reciprocal chains and binary neural network models

    Automated decision making systems are relied on in increasingly diverse and critical settings. Human users expect such systems to improve or augment their own decision making in complex scenarios, in real time, often across distributed networks of devices. This thesis studies binary decision making systems of two forms. The first system is built from a reciprocal chain, a statistical model able to capture the intentional behaviour of targets moving through a state space, such as moving towards a destination state. The first part of the thesis questions the utility of this higher level information in a tracking problem where the system must decide whether a target exists or not. The contributions of this study characterise the benefits to be expected from reciprocal chains for tracking, using statistical tools and a novel simulation environment that provides relevant numerical experiments. Real world decision making systems often combine statistical models, such as the reciprocal chain, with the second type of system studied in this thesis, a neural network. In the tracking context, a neural network typically forms the object detection system. However, the power consumption and memory usage of state-of-the-art neural networks make their use on small devices infeasible. This motivates the study of binary neural networks in the second part of the thesis. Such networks use less memory and are efficient to run, compared to standard full precision networks. However, their optimisation is difficult, due to the non-differentiable functions involved. Several algorithms elect to optimise surrogate networks that are differentiable and correspond in some way to the original binary network. Unfortunately, the many choices involved in the algorithm design are poorly understood. The second part of the thesis questions the role of parameter initialisation in the optimisation of binary neural networks. Borrowing analytic tools from statistical physics, it is possible to characterise precisely the typical behaviour of a range of algorithms at initialisation, by studying how input signals propagate through these networks on average. This theoretical development also yields practical outcomes, providing scales that limit network depth and suggesting new initialisation methods for binary neural networks.
    Thesis (Ph.D.) -- University of Adelaide, School of Electrical & Electronic Engineering, 202
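    To make the signal-propagation idea concrete, here is a toy numerical experiment in the same spirit: two correlated inputs are pushed through a random network with sign activations and binary weights, and their cosine similarity is recorded layer by layer. The width, depth, and weight distribution are assumed choices for illustration, not the settings analysed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def correlation_propagation(width=1024, depth=20, c0=0.9):
    """Push two correlated inputs through a random network with sign
    activations and +/-1 weights; record their cosine similarity per
    layer. How this correlation evolves with depth is the kind of
    quantity a mean-field analysis of initialisation tracks."""
    x = rng.standard_normal(width)
    y = c0 * x + np.sqrt(1.0 - c0**2) * rng.standard_normal(width)
    corrs = []
    for _ in range(depth):
        # Binary (+/-1) weights, scaled so pre-activations stay O(1)
        W = np.where(rng.random((width, width)) < 0.5, -1.0, 1.0) / np.sqrt(width)
        x, y = np.sign(W @ x), np.sign(W @ y)
        corrs.append(float(x @ y) / width)
    return corrs

print(correlation_propagation())
```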

    Intelligent video surveillance

    The focus of this thesis is new and modified algorithms for object detection, recognition and tracking within the context of video analytics. Manual video surveillance has proven to be both ineffective and expensive, because it requires the manual labour of operators, who are moreover prone to erroneous decisions. As the number of surveillance cameras grows, there is a strong need to automate video analytics. The benefits of this approach can be found in both military and civilian applications. For military applications, it can help in the localisation and tracking of objects of interest. For civilian applications, similar object localisation procedures can make criminal investigations more effective by extracting meaningful data from massive video footage. Recently, the wide availability of consumer unmanned aerial vehicles has become a new threat, as even the simplest and cheapest airborne vehicles can carry cargo, which means they can be upgraded into a serious weapon. They can also be used for spying, which threatens private life. Autonomous car driving systems are now impossible without machine vision methods. Industrial applications require automatic quality control, including non-destructive methods and in particular methods based on video analysis. All these applications provide strong evidence of a practical need for machine vision algorithms for object detection, tracking and classification, and they motivated this thesis. The contributions to knowledge of the thesis consist of two main parts, video tracking and object detection and recognition, unified by the common idea of their applicability to video analytics problems. The novel algorithms for object detection and tracking described in this thesis are unsupervised and have only a small number of parameters. The approach is based on rigid motion segmentation by Bayesian filtering. The Bayesian filter, which was proposed specifically for this method and contributes to its novelty, is formulated as a generic approach and then applied to video analytics problems. The method is augmented with optional object coordinate estimation using a flat two-dimensional terrain assumption, which provides a basis for using the algorithm inside larger sensor data fusion models. The proposed approach for object detection and classification is based on the evolving systems concept and the new Typicality-Eccentricity Data Analytics (TEDA) framework. The methods are capable of solving classical data mining problems: clustering, classification, and regression. The methods are proposed in a domain-independent way and are capable of addressing shift and drift of data streams. Examples are given for the clustering and classification of imagery data. For all the developed algorithms, the experiments have shown consistent results on the test data. The practical applications of the proposed algorithms are carefully examined and tested.
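    The TEDA framework mentioned above is built on recursively updated eccentricity and typicality scores for each incoming sample. A minimal streaming sketch follows; the recursions implement the commonly published form of TEDA and should be read as an illustrative assumption, not the thesis's implementation.

```python
import numpy as np

class TedaStream:
    """Recursive eccentricity/typicality scores for a data stream."""

    def __init__(self):
        self.k = 0               # number of samples seen so far
        self.mean = None         # running mean vector
        self.msq = 0.0           # running mean of ||x||^2

    def update(self, x):
        """Consume one sample; return (eccentricity, typicality)."""
        x = np.asarray(x, dtype=float)
        self.k += 1
        if self.mean is None:
            self.mean = x.copy()
        else:
            self.mean = self.mean + (x - self.mean) / self.k
        self.msq += (x @ x - self.msq) / self.k
        var = self.msq - self.mean @ self.mean   # total scalar variance
        if self.k < 2 or var <= 0.0:
            return 1.0, 0.0                      # degenerate start-up case
        ecc = 1.0 / self.k + (self.mean - x) @ (self.mean - x) / (self.k * var)
        return ecc, 1.0 - ecc                    # typicality = 1 - eccentricity
```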

    Sensor Signal and Information Processing II

    In the current age of information explosion, newly invented technological sensors and software are tightly integrated into our everyday lives. Many sensor processing algorithms have incorporated some form of computational intelligence as part of their core framework for problem solving. These algorithms have the capacity to generalize, discover knowledge for themselves, and learn new information whenever unseen data are captured. The primary aim of sensor processing is to develop techniques to interpret, understand, and act on the information contained in the data. The interest of this book is in developing intelligent signal processing in order to pave the way for smart sensors. This involves the mathematical advancement of nonlinear signal processing theory and its applications, extending far beyond traditional techniques. It bridges the boundary between theory and application, developing novel theoretically inspired methodologies targeting both longstanding and emergent signal processing applications. The topics range from phishing detection to the integration of terrestrial laser scanning, and from fault diagnosis to bio-inspired filtering. The book will appeal to established practitioners, along with researchers and students in the emerging field of smart sensor processing.