135 research outputs found

    Seamless Multimodal Biometrics for Continuous Personalised Wellbeing Monitoring

    Artificially intelligent perception is increasingly present in the lives of every one of us. Vehicles are no exception, (...) In the near future, pattern recognition will have an even stronger role in vehicles, as self-driving cars will require automated ways to understand what is happening around (and within) them and act accordingly. (...) This doctoral work focused on advancing in-vehicle sensing through the research of novel computer vision and pattern recognition methodologies for both biometrics and wellbeing monitoring. The main focus was on electrocardiogram (ECG) biometrics, a trait well known for its potential for seamless driver monitoring. Major efforts were devoted to achieving improved identification and identity-verification performance in off-the-person scenarios, which are known for increased noise and variability. Here, end-to-end deep learning ECG biometric solutions were proposed, addressing important topics such as cross-database and long-term performance, waveform relevance through explainability, and interlead conversion. Face biometrics, a natural complement to the ECG in seamless unconstrained scenarios, was also studied in this work. The open challenges of masked face recognition and interpretability in biometrics were tackled in an effort to evolve towards algorithms that are more transparent, trustworthy, and robust to significant occlusions. Within the topic of wellbeing monitoring, improved solutions were proposed for multimodal emotion recognition in groups of people and for activity/violence recognition in in-vehicle scenarios. Finally, we also proposed a novel way to learn template security within end-to-end models, removing the need for separate encryption processes, and a self-supervised learning approach tailored to sequential data, in order to ensure data security and optimal performance. (...) Comment: Doctoral thesis presented and approved on the 21st of December 2022 to the University of Port
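    The identity-verification setting described above can be illustrated with a minimal sketch (not the thesis code): once a model maps a heartbeat or face to an embedding, verification reduces to thresholding a similarity score between the probe and the enrolled template. The `verify` helper, the example vectors, and the 0.7 threshold are all illustrative assumptions.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify(probe_embedding, enrolled_embedding, threshold=0.7):
    """Identity verification as commonly done in biometrics: accept the
    claimed identity if the embedding similarity clears a threshold.
    (Embeddings would come from a trained model; here they are toy
    vectors, and 0.7 is an arbitrary illustrative threshold.)"""
    return cosine_similarity(probe_embedding, enrolled_embedding) >= threshold

# A probe close to the enrolled template is accepted.
accepted = verify([0.9, 0.1, 0.8], [1.0, 0.0, 1.0])
```

The threshold trades off false acceptances against false rejections; in practice it would be tuned on a validation set.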

    New Computational Methods for Automated Large-Scale Archaeological Site Detection

    This doctoral thesis presents a series of innovative approaches, workflows and models in the field of computational archaeology for the automated large-scale detection of archaeological sites. New concepts, approaches and strategies are introduced, such as multitemporal lidar, hybrid machine learning, refinement, curriculum learning and blob analysis, as well as data augmentation methods applied for the first time in the field of archaeology. Multiple sources are used, such as lidar, multispectral satellite imagery, RGB photographs from UAV platforms, historical maps, and several combinations of sensors, data, and sources. 
The methods created during the development of this PhD have been evaluated in ongoing projects: Urbanization in Iberia and Mediterranean Gaul in the First Millennium BC, Detection of burial mounds using machine learning algorithms in the Northwest of the Iberian Peninsula, Drone-based Intelligent Archaeological Survey (DIASur), and Mapping Archaeological Heritage in South Asia (MAHSA), for which workflows adapted to each project's specific challenges have been designed. These new methods provide solutions to common archaeological survey problems found in similar large-scale site detection studies, such as low detection precision and scarce training data. The validated approaches for site detection presented as part of the PhD have been published as open-access papers with freely available code, so they can be implemented in other archaeological studies.
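    As a rough illustration of the blob analysis step mentioned above (a sketch, not the published code), connected regions of a binary prediction mask can be extracted and filtered by area, so that noise-sized detections are discarded before sites are reported. The `find_blobs` helper, the toy mask, and the area threshold are assumptions for illustration.

```python
from collections import deque

def find_blobs(mask, min_area=2):
    """Label 4-connected blobs of 1s in a binary mask and keep those
    with at least `min_area` pixels (smaller blobs are treated as noise)."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] == 1 and not seen[r][c]:
                # Breadth-first flood fill from this seed pixel.
                queue, blob = deque([(r, c)]), []
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] == 1 and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(blob) >= min_area:
                    blobs.append(blob)
    return blobs

# A toy "prediction mask": one 3-pixel blob and one isolated pixel.
mask = [
    [1, 1, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 1],
]
blobs = find_blobs(mask, min_area=2)  # the isolated pixel is filtered out
```

In a real pipeline the mask would be a model's per-pixel site prediction, and blob size or shape statistics would feed the refinement stage.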

    Template-Based Static Posterior Inference for Bayesian Probabilistic Programming

    In Bayesian probabilistic programming, a central problem is to estimate the normalised posterior distribution (NPD) of a probabilistic program with conditioning. Prominent approximate approaches to this problem include Markov chain Monte Carlo and variational inference, but neither can produce guaranteed outcomes within limited time. Moreover, most existing formal approaches that perform exact inference for NPD are restricted to programs with closed-form solutions or bounded loops/recursion. A recent work (Beutner et al., PLDI 2022) derived guaranteed bounds for NPD over programs with unbounded recursion; however, because this approach requires recursion unrolling, it suffers from the path explosion problem. Furthermore, previous approaches do not consider score-recursive probabilistic programs that allow score statements inside loops, which are non-trivial and require careful treatment to ensure the integrability of the normalising constant in NPD. In this work, we propose a novel automated approach that derives bounds for NPD via polynomial templates. Our approach can handle probabilistic programs with unbounded while loops and continuous distributions with infinite support. The novelties in our approach are three-fold: first, we use polynomial templates to circumvent the path explosion problem of recursion unrolling; second, we derive a novel multiplicative variant of the Optional Stopping Theorem that addresses the integrability issue in score-recursive programs; third, to increase the accuracy of the derived bounds, we propose a truncation technique that restricts a program to a bounded range of program values. Experiments over a wide range of benchmarks demonstrate that our approach is time-efficient and derives bounds for NPD that are comparable with (or tighter than) those of the recursion-unrolling approach (Beutner et al., PLDI 2022).
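    To make the setting concrete, the following sketch (illustrative, not the paper's method) shows a toy score-recursive program: an unbounded probabilistic while loop with a continuous distribution and a score statement inside the loop body. Its normalising constant is crudely estimated here by Monte Carlo, which gives no guarantees; this is exactly the quantity for which the template-based approach derives certified bounds instead. All names and the 0.5/0.8 constants are assumptions.

```python
import random

def run_program():
    """One execution of a toy score-recursive probabilistic program:
    an unbounded while loop, a continuous distribution with infinite
    support, and a score statement inside the loop (constants are
    illustrative). Returns (value, accumulated weight)."""
    x, weight = 0.0, 1.0
    while random.random() < 0.5:     # unbounded loop: continue w.p. 1/2
        x += random.gauss(0.0, 1.0)  # continuous, infinite support
        weight *= 0.8                # score 0.8 inside the loop body
    return x, weight

def estimate_normalising_constant(n=100_000, seed=0):
    """Crude Monte Carlo estimate of the normalising constant: the mean
    accumulated weight over many runs. Analytically it is
    Z = sum_k 0.5^(k+1) * 0.8^k = 5/6, but sampling only approximates it;
    a static analysis would instead bound it symbolically."""
    random.seed(seed)
    return sum(run_program()[1] for _ in range(n)) / n
```

The score inside the loop is what makes integrability delicate: had the factor been larger than 2, the geometric sum for Z would diverge.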

    Decision-Aware Actor-Critic with Function Approximation and Theoretical Guarantees

    Actor-critic (AC) methods are widely used in reinforcement learning (RL) and benefit from the flexibility of using any policy gradient method as the actor and any value-based method as the critic. The critic is usually trained by minimizing the TD error, an objective that is potentially uncorrelated with the true goal of achieving a high reward with the actor. We address this mismatch by designing a joint objective for training the actor and critic in a decision-aware fashion. We use the proposed objective to design a generic AC algorithm that can easily handle any function approximation. We explicitly characterize the conditions under which the resulting algorithm guarantees monotonic policy improvement, regardless of the choice of the policy and critic parameterization. Instantiating the generic algorithm results in an actor that involves maximizing a sequence of surrogate functions (similar to TRPO and PPO) and a critic that involves minimizing a closely connected objective. Using simple bandit examples, we provably establish the benefit of the proposed critic objective over the standard squared error. Finally, we empirically demonstrate the benefit of our decision-aware actor-critic framework on simple RL problems. Comment: 44 pages.
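    The standard setup the paper starts from can be sketched on a two-armed bandit (an illustrative toy, not the proposed decision-aware algorithm): a softmax actor updated by policy gradient and a critic trained by the usual squared error. All names, constants, and learning rates below are assumptions.

```python
import math
import random

def softmax_actor_critic(n_steps=5000, lr_actor=0.1, lr_critic=0.1, seed=0):
    """Minimal actor-critic on a two-armed bandit. The critic fits
    per-arm values with the standard squared-error update; the actor
    follows the policy gradient, using the critic's estimate as its
    reward signal. Returns the final softmax policy."""
    random.seed(seed)
    true_reward = [0.2, 0.8]            # arm 1 is the better arm
    theta = [0.0, 0.0]                  # actor: softmax preferences
    q = [0.0, 0.0]                      # critic: per-arm value estimates
    for _ in range(n_steps):
        exp_t = [math.exp(t) for t in theta]
        total = sum(exp_t)
        pi = [v / total for v in exp_t]
        a = 0 if random.random() < pi[0] else 1
        r = true_reward[a] + random.gauss(0.0, 0.1)
        q[a] += lr_critic * (r - q[a])  # minimise squared value error
        for b in (0, 1):                # policy-gradient step on the actor
            grad_log_pi = (1.0 if b == a else 0.0) - pi[b]
            theta[b] += lr_actor * q[a] * grad_log_pi
    return pi

final_policy = softmax_actor_critic()   # converges to prefer arm 1
```

The point of the paper is that the critic's squared-error objective above is merely a convenient default; it need not align with what the actor ultimately needs from the critic.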

    Evaluating Classifiers During Dataset Shift

    Deployment of a classifier into a machine learning application typically begins with training different types of algorithms on a subset of the available historical data and then evaluating them on datasets drawn from identical distributions. The goal of this evaluation process is to select the classifier believed to be most robust in maintaining good future performance, and then deploy that classifier to end-users who use it to make predictions on new data. Often, however, predictive models are deployed under conditions that differ from those used in training, meaning that dataset shift has occurred. In these situations, there is no guarantee that predictions made by the deployed model will be as reliable and accurate as they were during training. This study presented a framework that can be used by others when selecting a classifier for deployment, as well as the first comparative study evaluating machine learning classifier performance on synthetic datasets with different levels of prior-probability, covariate, and concept shift. The results showed the impact of dataset shift on the performance of different classifiers for two real-world datasets, one on teacher retention in Wisconsin and one on detecting fraud in testing. By using these methods as a proactive approach to evaluating classifiers on synthetic dataset shift, different classifiers would have been considered for deployment of both predictive models than when using only evaluation datasets drawn from identical distributions. The results from both real-world datasets also showed that no classifier dealt well with prior-probability shift, and that classifiers were affected less by covariate and concept shift than expected.
    Two supplemental demonstrations showed that the methodology can be extended to additional purposes of evaluating classifiers under dataset shift. Analyses of the effects of hyperparameter choices, as well as of actual dataset shift, on classifier performance showed that different hyperparameter configurations affect not only a classifier's performance in general but also how robust that classifier is to dataset shift.
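    A minimal sketch of prior-probability shift (illustrative, not the study's framework): a fixed classifier can keep its accuracy when the class-conditional error rates are symmetric, yet its precision collapses when positives become rare, which is the kind of degradation a synthetic-shift evaluation is designed to expose. All names and numbers here are assumptions.

```python
import random

def make_data(n, p_pos, seed):
    """Two unit-variance Gaussian classes on one feature, means -1 and +1;
    `p_pos` is the prior probability of the positive class."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        y = 1 if rng.random() < p_pos else 0
        x = rng.gauss(1.0 if y else -1.0, 1.0)
        data.append((x, y))
    return data

def accuracy(data, threshold=0.0):
    """A fixed-threshold classifier standing in for a trained model."""
    return sum((x > threshold) == bool(y) for x, y in data) / len(data)

def precision(data, threshold=0.0):
    """Fraction of flagged examples that are truly positive."""
    tp = sum(1 for x, y in data if x > threshold and y == 1)
    fp = sum(1 for x, y in data if x > threshold and y == 0)
    return tp / (tp + fp)

# Evaluation data drawn from the training distribution (balanced priors)...
acc_iid = accuracy(make_data(20_000, p_pos=0.5, seed=1))
prec_iid = precision(make_data(20_000, p_pos=0.5, seed=1))
# ...versus prior-probability shift: positives become rare in deployment.
acc_shift = accuracy(make_data(20_000, p_pos=0.05, seed=2))
prec_shift = precision(make_data(20_000, p_pos=0.05, seed=2))
# Accuracy barely moves (the two class-conditional error rates are equal),
# but precision collapses: most flagged examples are now false positives.
```

This is why evaluating on held-out data from the training distribution alone, as in the i.i.d. case above, can badly mislead classifier selection.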

    Geographic information extraction from texts

    A large volume of unstructured texts, containing valuable geographic information, is available online. This information, provided implicitly or explicitly, is useful not only for scientific studies (e.g., spatial humanities) but also for many practical applications (e.g., geographic information retrieval). Although substantial progress has been achieved in geographic information extraction from texts, there are still unsolved challenges and issues, ranging from methods, systems, and data to applications and privacy. This workshop will therefore provide a timely opportunity to discuss recent advances, new ideas, and concepts, and to identify research gaps in geographic information extraction.

    Neural Natural Language Processing for Long Texts: A Survey of the State-of-the-Art

    The adoption of Deep Neural Networks (DNNs) has greatly benefited Natural Language Processing (NLP) during the past decade. However, the demands of long document analysis are quite different from those of shorter texts, while the ever-increasing size of documents uploaded online renders automated understanding of long texts a critical area of research. This article has two goals: a) it overviews the relevant neural building blocks, thus serving as a short tutorial, and b) it surveys the state-of-the-art in long document NLP, mainly focusing on two central tasks: document classification and document summarization. Sentiment analysis for long texts is also covered, since it is typically treated as a particular case of document classification. Additionally, this article discusses the main challenges, issues, and current solutions related to long document NLP. Finally, the relevant publicly available annotated datasets are presented, in order to facilitate further research. Comment: 53 pages, 2 figures, 171 citations.
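    One common workaround for the fixed input length of standard neural encoders, often seen in long-document classification pipelines (a generic sketch, not something prescribed by the survey), is to split the document into overlapping sliding windows and pool per-window predictions into a document-level result. The `chunk_tokens` helper and its 512/384 defaults are illustrative assumptions.

```python
def chunk_tokens(tokens, max_len=512, stride=384):
    """Split a long token sequence into overlapping windows of at most
    `max_len` tokens, advancing `stride` tokens each time (so consecutive
    windows overlap by max_len - stride tokens). Per-window predictions
    are then pooled, e.g. averaged, into a document-level label."""
    chunks = []
    start = 0
    while start < len(tokens):
        chunks.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break  # last window reaches the end of the document
        start += stride
    return chunks

doc = list(range(1000))   # stand-in for a 1000-token document
windows = chunk_tokens(doc)
```

The overlap keeps sentences that straddle a window boundary fully visible in at least one window; hierarchical models and sparse-attention architectures are the survey's alternatives to this chunk-and-pool scheme.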