
    Model error estimation in ensemble data assimilation

    A new methodology is proposed to estimate and account for systematic model error in linear filtering as well as in nonlinear ensemble-based filtering. Our results extend the work of Dee and Todling (2000) on constant bias errors to time-varying model errors. In contrast to existing methodologies, the new filter can also deal with the case where no dynamical model for the systematic error is available. In the latter case, the applicability is limited by a matrix rank condition which has to be satisfied in order for the filter to exist.

    The performance of the filter developed in this paper is limited by the availability and the accuracy of observations and by the variance of the stochastic model error component. The effect of these aspects on the estimation accuracy is investigated in several numerical experiments using the Lorenz (1996) model. Experimental results indicate that the availability of a dynamical model for the systematic error significantly reduces the variance of the model error estimates, but has only a minor effect on the estimates of the system state. The filter is able to estimate additive model error of any type, provided that the rank condition is satisfied and that the stochastic errors and measurement errors are significantly smaller than the systematic errors. The results of this study are encouraging. However, it remains to be seen how the filter performs in more realistic applications.
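    As a minimal sketch of the state-augmentation idea such filters build on (not the paper's exact time-varying formulation), one can append a bias term to the state and let a standard Kalman filter estimate both jointly. Here M, H, Q and R are the usual linear model, observation operator and noise covariances, a persistence model is assumed for the bias, and all names are illustrative.

```python
import numpy as np

def bias_augmented_kf_step(x, b, P, M, H, Q, R, y):
    """One Kalman filter step on the augmented state z = [x; b].

    The forecast is x_k = M x + b with a persistent bias (b_k = b),
    so the filter estimates the systematic error alongside the state.
    """
    n, m = x.size, b.size
    Ma = np.block([[M, np.eye(n, m)],
                   [np.zeros((m, n)), np.eye(m)]])
    z = Ma @ np.concatenate([x, b])                 # forecast mean
    Pa = Ma @ P @ Ma.T
    Pa[:n, :n] += Q                                 # stochastic error acts on x only
    Ha = np.hstack([H, np.zeros((H.shape[0], m))])  # observations see x only
    S = Ha @ Pa @ Ha.T + R
    K = np.linalg.solve(S, Ha @ Pa).T               # Kalman gain (S symmetric)
    z = z + K @ (y - Ha @ z)
    Pa = (np.eye(n + m) - K @ Ha) @ Pa
    return z[:n], z[n:], Pa
```

    Whether the bias is observable through H (the rank condition mentioned above) determines whether the augmented filter exists at all.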

    A method for automatic image registration based on distance-dependent planar projective transformations, oriented to images without common features

    Unpublished thesis of the Universidad Complutense de Madrid, Facultad de Ciencias Físicas, Departamento de Arquitectura de Computadores y Automática, defended on 18-12-2015.
    Multisensory data fusion oriented to image-based applications improves the accuracy, quality and availability of the data, and consequently the performance of robotic systems. It combines the information of a scene acquired from multiple, different sources into a unified representation of the 3D world scene, which is more enlightening and enriching for subsequent image processing, improving either the reliability, by using the redundant information, or the capability, by taking advantage of complementary information. Image registration is one of the most relevant steps in image fusion techniques. This procedure aims at the geometrical alignment of two or more images. Normally, this process relies on feature-matching techniques, which is a drawback when combining sensors that are not able to deliver common features. For instance, in the combination of ToF and RGB cameras, robust feature matching is not reliable. Typically, the fusion of these two sensors has been addressed by computing the cameras' calibration parameters for coordinate transformation between them. As a result, a low-resolution colour depth map is provided. To improve the resolution of these maps and reduce the loss of colour information, extrapolation techniques are adopted. A crucial issue for computing high-quality, accurate dense maps is the presence of noise in the depth measurements from the ToF camera, which is normally reduced by means of sensor calibration and filtering techniques. However, the filtering methods implemented for data extrapolation and denoising usually over-smooth the data, consequently reducing the accuracy of the registration procedure...
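    As a minimal sketch of the calibration-based mapping described above, the ToF depth image can be back-projected to 3D and reprojected into the RGB view, assuming pinhole intrinsics K_tof and K_rgb and a rigid extrinsic transform (R, t) between the two cameras; all names are illustrative.

```python
import numpy as np

def tof_depth_to_rgb(depth, K_tof, K_rgb, R, t):
    """Map every ToF pixel with depth d into RGB pixel coordinates,
    yielding the (low-resolution) colour-depth correspondences that
    extrapolation techniques then densify."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([us.ravel(), vs.ravel(), np.ones(h * w)])  # homogeneous pixels
    rays = np.linalg.inv(K_tof) @ pix     # back-project: X = d * K_tof^-1 [u, v, 1]
    X_tof = rays * depth.ravel()
    X_rgb = R @ X_tof + t[:, None]        # rigid transform into the RGB frame
    proj = K_rgb @ X_rgb
    uv_rgb = proj[:2] / proj[2]           # perspective division
    return uv_rgb.reshape(2, h, w), X_rgb[2].reshape(h, w)
```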

    The path inference filter: model-based low-latency map matching of probe vehicle data

    We consider the problem of reconstructing vehicle trajectories from sparse sequences of GPS points, for which the sampling interval is between 10 seconds and 2 minutes. We introduce a new class of algorithms, collectively called the path inference filter (PIF), that maps GPS data onto the road network in real time, for a variety of trade-offs and scenarios, and with high throughput. Numerous prior approaches in map matching can be shown to be special cases of the path inference filter presented in this article. We present an efficient procedure for automatically training the filter on new data, with or without ground-truth observations. The framework is evaluated on a large San Francisco taxi dataset and is shown to improve upon the current state of the art. This filter also provides insights into driving patterns. The path inference filter has been deployed at industrial scale inside the Mobile Millennium traffic information system, and is used to map data from vehicle fleets in San Francisco, Sacramento, Stockholm and Porto. (Preprint, 23 pages and 23 figures.)
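    Many of the map matchers the PIF generalizes share a dynamic-programming core: score each GPS point against candidate road segments, score candidate paths between consecutive points, and decode the best sequence. A minimal sketch of that core, with illustrative inputs (the paper's actual observation and path models are richer):

```python
import numpy as np

def viterbi_map_match(obs_scores, trans_scores):
    """Most likely candidate sequence by Viterbi decoding.

    obs_scores[t][i]      : log-likelihood of GPS point t given candidate i
    trans_scores[t][i][j] : log-likelihood of driving from candidate i at
                            point t to candidate j at point t+1
    """
    best = [np.asarray(obs_scores[0], dtype=float)]
    back = []
    for t in range(len(obs_scores) - 1):
        scores = (best[-1][:, None]
                  + np.asarray(trans_scores[t], dtype=float)
                  + np.asarray(obs_scores[t + 1], dtype=float)[None, :])
        back.append(scores.argmax(axis=0))  # best predecessor per candidate
        best.append(scores.max(axis=0))
    path = [int(best[-1].argmax())]         # backtrack from the best endpoint
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1]
```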

    Electrocardiogram Signal Analysis and Simulations for Non-Invasive Diagnosis - Model-Based and Data-Driven Approaches for the Estimation of Ionic Concentrations and Localization of Excitation Origins

    The electrocardiogram (ECG) is the standard technique for measuring the electrical activity of the heart. ECG devices are readily available, inexpensive, and allow noninvasive measurement. This is particularly important for the diagnosis of cardiovascular diseases (CVD), which, at a cost of 210 billion euros, are one of the main burdens on the European health care system and the cause of 3.9 million deaths there, corresponding to 45% of all deaths. Besides other risk factors, chronic kidney disease and structural changes of the cardiac tissue play a decisive role in the occurrence of CVD. This work therefore considers two pathologies related to CVD: electrolyte concentration changes in patients with chronic kidney disease, and ectopic foci that autonomously initiate excitations. In both projects, the overarching goal is the development of methods, with the help of simulated signals, to support diagnosis.
    In the first project, simulated ECGs help to optimize the signal processing chain for ECG-based estimation of potassium and calcium ion concentrations. The insights from this optimization feed into two patient-specific methods for potassium concentration estimation, which are in turn evaluated on patient data. The methods yielded a mean absolute error of 0.37 mmol/l for a patient-specific approach and 0.48 mmol/l for a global approach with an additional patient-specific correction. The advantages of these estimation methods over existing approaches are demonstrated. All developed algorithms are furthermore published under an open-source license.
    The second project aimed at localizing ectopic foci from the ECG without using the individual patient geometry. 1,766,406 simulated ECG signals (body surface potential maps, BSPMs) were generated to train two convolutional neural networks (CNNs). The first CNN estimates the onset and end of ventricular depolarization. The second CNN uses the depolarization information in the BSPM to estimate the excitation origin. The special architecture of this CNN allows it to represent multiple solutions, as can arise from ambiguities in the BSPM. The smallest median localization error was 1.54 mm on the test set of simulated signals and 37 mm on patient signals. The combination of both CNNs thus allows reliable localization of ectopic foci even on patient data, although no patient data were used during training.
    The results of these two projects demonstrate how ECG simulations can be used to develop and improve ECG signal processing methods and to support diagnosis. They also show the potential of combining simulations and CNNs, on the one hand to substitute for the mostly scarce clinical signals, and on the other hand to find models that are valid for multiple patients. The presented methods have the potential to speed up diagnosis and very likely to improve patients' therapy outcomes.
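    As a heavily hedged sketch of what the first of the two stages could look like (the actual architectures are not specified in the abstract, so every layer choice and name below is illustrative), a 1D CNN can regress the onset and end of depolarization from multichannel BSPM traces:

```python
import torch
import torch.nn as nn

class DepolarizationWindowNet(nn.Module):
    """Stage-1 sketch: regress (onset, end) of ventricular depolarization,
    in samples, from a BSPM segment of shape (batch, channels, samples)."""
    def __init__(self, n_channels):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over time
        )
        self.head = nn.Linear(64, 2)   # (onset, end)

    def forward(self, x):
        return self.head(self.features(x).squeeze(-1))
```

    A second network would then take the cropped depolarization window and output one or more candidate excitation origins, reflecting the ambiguities mentioned above.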

    Magnetoencephalography—theory, instrumentation, and applications to noninvasive studies of the working human brain

    Magnetoencephalography (MEG) is a noninvasive technique for investigating neuronal activity in the living human brain. The time resolution of the method is better than 1 ms and the spatial discrimination is, under favorable circumstances, 2-3 mm for sources in the cerebral cortex. In MEG studies, the weak magnetic fields (10 fT-1 pT) produced by electric currents flowing in neurons are measured with multichannel SQUID (superconducting quantum interference device) gradiometers. The sites in the cerebral cortex that are activated by a stimulus can be found from the detected magnetic-field distribution, provided that appropriate assumptions about the source render the solution of the inverse problem unique. Many interesting properties of the working human brain can be studied, including spontaneous activity and signal processing following external stimuli. For clinical purposes, the determination of the locations of epileptic foci is of interest. The authors begin with a general introduction and a short discussion of the neural basis of MEG. The mathematical theory of the method is then explained in detail, followed by a thorough description of MEG instrumentation, data analysis, and the practical construction of multi-SQUID devices. Finally, several MEG experiments performed in the authors' laboratory are described, covering studies of evoked responses and of spontaneous activity in both healthy and diseased brains. Many MEG studies by other groups are discussed briefly as well.
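    The classic way to make the inverse problem unique is the equivalent current dipole model: the forward model is linear in the dipole moment, B = L(r) q, so fitting reduces to a scan over candidate locations with a linear least-squares solve at each. A minimal sketch, assuming a leadfield(r) function supplied by a head model (e.g. a spherically symmetric conductor); all names are illustrative.

```python
import numpy as np

def fit_single_dipole(B_meas, grid, leadfield):
    """Scan candidate locations r; at each, solve for the moment q
    minimizing ||B_meas - L(r) q|| and keep the best fit.

    B_meas       : measured field pattern, one value per sensor
    leadfield(r) : n_sensors x 3 gain matrix of a unit dipole at r
    """
    best_err, best_r, best_q = np.inf, None, None
    for r in grid:
        L = leadfield(r)
        q, *_ = np.linalg.lstsq(L, B_meas, rcond=None)
        err = np.linalg.norm(B_meas - L @ q)
        if err < best_err:
            best_err, best_r, best_q = err, r, q
    return best_r, best_q, best_err
```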

    DEFORM'06 - Proceedings of the Workshop on Image Registration in Deformable Environments

    Preface
    These are the proceedings of DEFORM'06, the Workshop on Image Registration in Deformable Environments, associated with BMVC'06, the 17th British Machine Vision Conference, held in Edinburgh, UK, in September 2006. The goal of DEFORM'06 was to bring together people from different domains having interests in deformable image registration. In response to our Call for Papers, we received 17 submissions and selected 8 for oral presentation at the workshop. In addition to the regular papers, Andrew Fitzgibbon from Microsoft Research Cambridge gave an invited talk at the workshop. The conference website, including the online proceedings, remains open; see http://comsee.univ-bpclermont.fr/events/DEFORM06. We would like to thank the BMVC'06 co-chairs, Mike Chantler, Manuel Trucco and especially Bob Fisher for his great help in the local arrangements, Andrew Fitzgibbon, and the Programme Committee members, who provided insightful reviews of the submitted papers. Special thanks go to Marc Richetin, head of the CNRS Research Federation TIMS, which sponsored the workshop.
    August 2006. Adrien Bartoli, Nassir Navab, Vincent Lepetit

    PLANT: A Method for Detecting Changes of Slope in Noisy Trajectories

    Time traces obtained from a variety of biophysical experiments contain valuable information on underlying processes occurring at the molecular level. Accurate quantification of these data can help explain the details of the complex dynamics of biological systems. Here, we describe PLANT (Piecewise Linear Approximation of Noisy Trajectories), a segmentation algorithm that allows the reconstruction of time-trace data with constant noise as consecutive straight lines, from which changes of slope and their respective durations can be extracted. We present a general description of the algorithm and perform extensive simulations to characterize its strengths and limitations, providing a rationale for the performance of the algorithm in the different conditions tested. We further apply the algorithm to experimental data obtained from tracking the centroid position of lymphocytes migrating under the effect of a laminar flow and from single myosin molecules interacting with actin in a dual-trap force-clamp configuration.
    The authors gratefully acknowledge financial support from the European Commission (FP7-ICT-2011-7, grant number 288263), the Erasmus Mundus Doctorate Program Europhotonics (grant number 159224-1-2009-1-FR-ERA MUNDUS-EMJD), the Spanish Ministry of Economy and Competitiveness ("Severo Ochoa" Programme for Centres of Excellence in Research & Development SEV-2015-0522, and FIS2014-56107-R grants), the Generalitat de Catalunya through the CERCA program, the Italian Ministry of University and Research (FIRB "Futuro in Ricerca" 2013 grant n. RBFR13V4M2 and Flagship Project NANOMAX), Fundació Privada CELLEX (Barcelona), Ente Cassa di Risparmio di Firenze, the Human Frontier Science Program (GA RGP0027/2012) and LaserLab Europe 4 (GA 654148). C.M. acknowledges funding from the Spanish Ministry of Economy and Competitiveness (MINECO) and the European Social Fund (ESF) through the Ramón y Cajal program 2015 (RYC-2015-17896).
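    The abstract does not spell out PLANT's segmentation criterion, but the piecewise-linear idea can be illustrated with a generic top-down split-and-fit sketch: fit one line, and if the worst residual exceeds a noise-derived threshold, split there and recurse. The threshold tol and the minimum segment length below are illustrative assumptions, not PLANT's actual parameters.

```python
import numpy as np

def segment_trajectory(t, y, tol, min_len=4):
    """Top-down piecewise-linear segmentation (sketch, not PLANT itself).
    Returns a list of (start, end) index pairs, one per fitted segment."""
    slope, intercept = np.polyfit(t, y, 1)
    resid = np.abs(y - (slope * t + intercept))
    if resid.max() <= tol or len(t) < 2 * min_len:
        return [(0, len(t) - 1)]
    k = int(resid.argmax())
    k = min(max(k, min_len), len(t) - min_len)  # keep both halves fittable
    left = segment_trajectory(t[:k], y[:k], tol, min_len)
    right = segment_trajectory(t[k:], y[k:], tol, min_len)
    return left + [(s + k, e + k) for s, e in right]
```

    Slopes and durations then follow directly from refitting a line over each returned index range.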