181 research outputs found

    Acoustic localization of people in reverberant environments using deep learning techniques

    Get PDF
    Localizing people from acoustic information is increasingly important in real-world applications such as security, surveillance, and human-robot interaction. In many cases it is necessary to accurately localize people or objects from the sound they generate, especially in noisy and reverberant environments where traditional localization methods may fail, or in scenarios where video-based methods are not feasible because such sensors are unavailable or because of significant occlusions. For example, in security and surveillance, the ability to accurately localize a sound source can help identify potential threats or intruders. In healthcare settings, acoustic localization can be used to monitor the movements and activities of patients, especially those with mobility problems. In human-robot interaction, robots equipped with acoustic localization capabilities can better perceive and respond to their environment, enabling more natural and intuitive interactions with humans. The development of accurate and robust acoustic localization systems using advanced techniques such as deep learning is therefore of great practical importance. This doctoral thesis addresses the problem along three main lines of research: (i) the design of an end-to-end system based on neural networks capable of improving the localization rates of existing state-of-the-art systems; (ii) the design of a system capable of localizing one or several simultaneous speakers in environments with different characteristics and different sensor-array geometries, without retraining; (iii) the design of systems capable of refining the acoustic power maps used to localize the acoustic sources, so as to achieve better subsequent localization. To evaluate these objectives, several realistic databases with different characteristics have been used, in which the people involved in the scenes can act without any restrictions. All the proposed systems have been evaluated under the same conditions, outperforming current state-of-the-art systems in terms of localization error.
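The acoustic power maps mentioned in (iii) are commonly built from pairwise generalized cross-correlations between microphone channels, such as GCC-PHAT. As a rough, self-contained illustration (not the thesis's neural systems; all names and parameters below are hypothetical), the following sketch estimates the time delay of arrival between two channels:

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the time delay between two microphone signals using the
    Generalized Cross-Correlation with Phase Transform (GCC-PHAT)."""
    n = sig.shape[0] + ref.shape[0]
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    cross = SIG * np.conj(REF)
    cross /= np.abs(cross) + 1e-12           # PHAT weighting: keep phase only
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs                        # delay in seconds

# Synthetic check: the second channel is the first delayed by 5 samples
fs = 16000
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
y = np.concatenate((np.zeros(5), x[:-5]))
tau = gcc_phat(y, x, fs)
```

Evaluating such delays (or the underlying correlation values) over a grid of candidate source positions yields the steered-response power map whose refinement the thesis targets.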

    Regularized Numerical Algorithms For Stable Parameter Estimation In Epidemiology And Implications For Forecasting

    Get PDF
    When an emerging outbreak occurs, stable parameter estimation and reliable projections of future incidence cases using limited (early) data can play an important role in the optimal allocation of resources and in the development of effective public health intervention programs. However, the inverse parameter identification problem is ill-posed and cannot be solved with classical tools of computational mathematics. In this dissertation, various regularization methods are employed to incorporate stability in parameter estimation algorithms. The recovered parameters are then used to generate future incidence curves as well as the carrying capacity of the epidemic and the turning point of the outbreak. For the nonlinear generalized Richards model of disease progression, we develop a novel iteratively regularized Gauss-Newton-type algorithm to reconstruct major characteristics of an emerging infection. This problem-oriented numerical scheme takes full advantage of a priori information available for our specific application in order to stabilize the iterative process. Another important aspect of our research is the reliable estimation of the time-dependent transmission rate in a compartmental SEIR disease model. To that end, the ODE-constrained minimization problem is reduced to a linear Volterra integral equation of the first kind, and a combination of regularizing filters is employed to approximate the unknown transmission parameter in a stable manner. To justify our theoretical findings, extensive numerical experiments have been conducted with both synthetic and real data for various infectious diseases.
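The reduction to a first-kind Volterra equation makes the ill-posedness concrete: naive inversion amplifies even small noise in the data, while a regularizing filter restores stability. A minimal numerical sketch using Tikhonov regularization (the kernel, noise level, and regularization parameter below are illustrative choices, not those of the dissertation):

```python
import numpy as np

# Discretize a first-kind Volterra equation  int_0^t k(t,s) u(s) ds = f(t)
# on a uniform grid with the rectangle rule; the resulting matrix A is
# lower triangular but severely ill-conditioned.
n, T = 200, 1.0
h = T / n
t = np.linspace(h, T, n)
k = lambda ti, si: np.exp(-(ti - si))        # illustrative smooth kernel
A = np.array([[h * k(ti, si) if si <= ti else 0.0 for si in t] for ti in t])

u_true = np.sin(2 * np.pi * t)               # "transmission rate" to recover
f_noisy = A @ u_true + 1e-3 * np.random.default_rng(1).standard_normal(n)

# Naive inversion amplifies the noise; Tikhonov regularization instead
# solves  min ||A u - f||^2 + lam ||u||^2 , damping unstable components.
lam = 1e-4
u_naive = np.linalg.solve(A, f_noisy)
u_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ f_noisy)

err_naive = np.linalg.norm(u_naive - u_true) / np.linalg.norm(u_true)
err_tik = np.linalg.norm(u_tik - u_true) / np.linalg.norm(u_true)
```

With these settings the regularized reconstruction tracks the true curve while the naive solve is dominated by amplified noise; the choice of `lam` trades bias against noise amplification, which is exactly what the "combination of regularizing filters" in the dissertation tunes.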

    Sequential Importance Resampling Particle Filter for Ambiguity Resolution

    Get PDF
    In this thesis, the sequential importance resampling particle filter for estimating the full geometry-based float solution state vector for Global Navigation Satellite System (GNSS) ambiguity resolution is implemented. The full geometry-based state vector, consisting of position, velocity, acceleration, and float ambiguities, is estimated using a particle filter in RTK mode. In contrast to utilizing multi-frequency and multi-constellation GNSS measurements, this study employed solely L1 GPS code and carrier phase observations. This approach simulates scenarios wherein the signal reception environment is suboptimal and only a restricted number of satellites are visible. However, it should be noted that the methodology outlined in this thesis can be expanded for cases involving multiple frequencies and constellations. The distribution of particles after the resampling step is used to compute an empirical covariance matrix Pk based on the incorporated observations at each epoch. This covariance matrix is then used to transform the distribution using the decorrelating Z transformation of the LAMBDA method [1]. The performance of a float solution based on a point mass representation is compared to the typically used extended Kalman filter (EKF) for searching the integer ambiguities using the three common search methods described in [2]: Integer Rounding, Integer Bootstrapping, and Integer Least Squares, with and without the Z transformation. As Bayesian estimators are able to include highly non-linear elements and accurately describe non-Gaussian posterior densities, the particle filter outperforms the EKF when a constraint leading to highly non-Gaussian distributions is added to the estimator. Such is the case of the map-aiding constraint, which integrates digital road maps with GPS observations to compute a more accurate position state.
The position accuracy of the particle filter solution, with and without the map-aiding constraint, is compared to that of the solution estimated with the EKF. The algorithm is tested on different segments of data and shows how the position convergence improves when adding digital road map information within the first thirty seconds of initializing the particle filter, in scenarios that include driving in a straight line, turning, and changing lanes. The effect of the map-aiding algorithm on the ambiguity domain is also assessed, showing that the convergence time of the float ambiguities improves when the position accuracy is improved by the constraint. The particle filter is able to weight the measurements according to any kind of distribution, unlike the EKF, which always assumes a Gaussian distribution. The performance of the PF with non-Gaussian measurements, such as measurements distorted by multipath, is assessed. Two additional steps are implemented: an outlier detection technique based on the predicted set of particles, and the use of a mixture of Gaussians to weight the measurements detected as outliers. The implemented outlier detection algorithm is based on the residual (or innovation) testing technique commonly applied in the EKF. The innovation and its covariance matrix are estimated from a predicted set of residuals using the transitional prior distribution and the measurement model. Then, the innovation is compared against the critical value of N(0, 1) at a significance level α. The mixture of Gaussians is the weighted sum of two Gaussians, one derived from the measurement noise matrix and the second a scaled version of the first describing the multipath error. This procedure de-weights the measurements affected by multipath and reduces the bias in the position estimate.
The proposed map-aiding algorithm improves the ambiguity convergence time by approximately 80%, while the de-weighting process enhances it by around 25% for the segments of the vehicle dataset that were analyzed. This work serves as a demonstration of cases wherein the particle filter addresses the limitations of the EKF in estimating the float solution in ambiguity resolution. Such limitations include constraints that give rise to non-Gaussian probability density functions and the utilization of a distinct likelihood function for outlier measurements, as opposed to the Gaussian assumption made by the EKF. The proposed map-aided particle filter can be implemented in real time to enhance the float ambiguity during the initial epochs after the filter has been initialized. This implementation proves beneficial in urban environments where there is a loss or complete obstruction of the GNSS signal.
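The core loop described above — predict, weight by a measurement likelihood, resample, then form the empirical covariance Pk from the particle cloud — can be sketched for a toy 1-D constant-velocity state; the dynamics, noise levels, and particle count below are illustrative, not those of the thesis:

```python
import numpy as np

rng = np.random.default_rng(42)

def sir_step(particles, weights, z, F, Q_std, R_std):
    """One sequential-importance-resampling step: propagate the particles,
    reweight with a Gaussian likelihood, then systematically resample."""
    n = particles.shape[0]
    # Predict: push each particle through the motion model plus process noise
    particles = particles @ F.T + rng.normal(0.0, Q_std, particles.shape)
    # Update: weight by the likelihood of the position measurement z
    weights = weights * np.exp(-0.5 * ((z - particles[:, 0]) / R_std) ** 2)
    weights /= weights.sum()
    # Systematic resampling
    positions = (rng.random() + np.arange(n)) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    particles = particles[idx]
    weights = np.full(n, 1.0 / n)
    # Empirical covariance of the resampled cloud (the matrix Pk above)
    Pk = np.cov(particles.T)
    return particles, weights, Pk

# Track a target moving at constant velocity 1 from noisy position fixes
F = np.array([[1.0, 1.0], [0.0, 1.0]])       # [position, velocity] transition
particles = rng.normal(0.0, 1.0, (2000, 2))
weights = np.full(2000, 1.0 / 2000)
for epoch in range(1, 31):
    z = 1.0 * epoch + rng.normal(0.0, 0.5)   # measurement of true position
    particles, weights, Pk = sir_step(particles, weights, z, F, 0.05, 0.5)
est = particles.mean(axis=0)
```

In the thesis, the same empirical covariance is what feeds the decorrelating Z transformation of the LAMBDA method, and the Gaussian likelihood above is what gets replaced by a mixture of Gaussians for measurements flagged as multipath outliers.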

    Cyclotomy and analytic geometry over F_1

    Full text link
    Geometry over the non-existent "field with one element" F_1, conceived by Jacques Tits [Ti] half a century ago, has recently found an incarnation in at least two related but different guises. In this paper I analyze the crucial role of roots of unity in this geometry and propose a version of the notion of "analytic functions" over F_1. The paper combines a focused survey with some new constructions. In the new version, several local additions and changes have been made and references added. Comment: 30 pages.

    Multivariate Statistical Machine Learning Methods for Genomic Prediction

    Get PDF
    This open access book, available under a CC BY 4.0 license, brings together the latest genome-based prediction models currently being used by statisticians, breeders, and data scientists. It provides an accessible way to understand the theory behind each statistical learning tool, the required pre-processing, the basics of model building, how to train statistical learning methods, the basic R scripts needed to implement each statistical learning tool, and the output of each tool. To that end, for each tool the book provides background theory, some elements of the R statistical software for its implementation, the conceptual underpinnings, and at least two illustrative examples with data from real-world genomic selection experiments. Lastly, worked-out examples help readers check their own comprehension. The book will greatly appeal to readers in plant (and animal) breeding, geneticists, and statisticians, as it provides in a very accessible way the necessary theory, the appropriate R code, and illustrative examples for a complete understanding of each statistical learning tool. In addition, it weighs the advantages and disadvantages of each tool.

    Policy Extraction via Online Q-Value Distillation

    Get PDF
    Recently, deep neural networks have been capable of solving complex control tasks in certain challenging environments. However, these deep learning policies continue to be hard to interpret, explain, and verify, which limits their practical applicability. Decision trees lend themselves well to explanation and verification tools but are not easy to train, especially in an online fashion. The aim of this thesis is to explore online tree construction algorithms and demonstrate the technique and effectiveness of distilling reinforcement learning policies into a Bayesian tree structure. We introduce Q-BSP Trees and an Ordered Sequential Monte Carlo training algorithm that helps condense the Q-function from fully trained Deep Q-Networks into the tree structure. Q-BSP Forests generate partitioning rules that transparently reconstruct the value function for all possible states. The method convincingly beats performance benchmarks provided by earlier policy distillation methods, resulting in performance closest to the original deep learning policy.
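The general idea of Q-value distillation — query a trained teacher for Q-values, then fit an interpretable partitioning structure to them and extract the greedy policy — can be sketched with a plain greedy regression tree. This is a simplified stand-in, not the thesis's Bayesian Q-BSP construction or its Ordered Sequential Monte Carlo training; the toy teacher below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "teacher": a toy Q-function over a 1-D state and 2 actions.
# (In the thesis the teacher would be a fully trained Deep Q-Network.)
def teacher_q(s):
    return np.stack([np.sin(3 * s), np.cos(3 * s)], axis=-1)

def build_tree(states, q, depth):
    """Greedily grow an axis-aligned partition tree regressing the teacher's
    Q-values; each leaf stores a constant Q-vector for its region."""
    if depth == 0 or len(states) < 8:
        return ("leaf", q.mean(axis=0))
    thr = np.median(states)                  # split the region at the median
    left, right = states <= thr, states > thr
    if left.all() or right.all():
        return ("leaf", q.mean(axis=0))
    return ("node", thr,
            build_tree(states[left], q[left], depth - 1),
            build_tree(states[right], q[right], depth - 1))

def tree_q(tree, s):
    while tree[0] == "node":
        tree = tree[2] if s <= tree[1] else tree[3]
    return tree[1]

# Distill: sample states, query the teacher once, fit the tree offline
states = rng.uniform(-1.0, 1.0, 4096)
tree = build_tree(states, teacher_q(states), depth=8)

# The extracted (transparent) policy is the argmax over leaf Q-vectors
policy = lambda s: int(np.argmax(tree_q(tree, s)))
```

The resulting partitioning rules can be read off the tree directly, which is the interpretability payoff the thesis pursues; the Bayesian treatment replaces the greedy splits with posterior inference over partition structures.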