
    Deep active localization

    Mobile robots have made significant advances in recent decades and are now able to perform tasks that were once thought to be impossible. One critical factor that has enabled robots to perform these various challenging tasks is their ability to determine where they are located in a given environment (localization). Further automation is achieved by letting the robot choose its own actions instead of a human teleoperating it.
However, determining its pose (position + orientation) precisely and scaling this capability to larger environments has been a long-standing challenge in the field of mobile robotics. Traditional approaches to this task of active localization use an information-theoretic criterion for action selection and hand-crafted perceptual models. With a steady rise in available computation over the last three decades, the back-propagation algorithm has found its use in much deeper neural networks and in numerous applications. When labelled data is not available, the paradigm of reinforcement learning (RL) is used, where the agent learns by interacting with the environment. However, it is impractical for most RL algorithms to learn reasonably well from limited real-world experience alone. Hence, it is common practice to train RL-based models in a simulator and efficiently transfer (without any significant loss of performance) these trained models onto real robots. In this thesis, we propose an end-to-end differentiable method for learning to take informative actions for robot localization that is trainable entirely in simulation and then transferable onto real robot hardware with zero refinement. This is achieved by leveraging recent advancements in deep learning and reinforcement learning combined with domain randomization techniques. The system is composed of two learned modules: a convolutional neural network for perception, and a planning module trained with deep reinforcement learning. We leverage a multi-scale approach in the perceptual model since the accuracy needed to take actions using reinforcement learning is much lower than the accuracy needed for robot control. We demonstrate that the resulting system outperforms traditional approaches for either perception or planning. We also demonstrate our approach's robustness to different map configurations and other nuisance parameters through the use of domain randomization in training. The code has been released at https://github.com/montrealrobotics/dal and is compatible with the OpenAI gym framework, as well as the Gazebo simulator.
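    As a rough illustration of the two learned modules described in this abstract, the sketch below pairs a small convolutional perceptual network (observation to belief over a discretised pose grid) with a Q-network planner that picks a discrete action from that belief. Grid size, action set, and layer widths are illustrative assumptions and do not reproduce the released dal code.

```python
# Minimal sketch (not the released dal code) of the two learned modules the
# abstract describes: a convolutional perceptual model that turns the current
# observation into a likelihood over a discretised pose grid, and a deep-RL
# planner that picks the next action from the resulting belief.
# Grid size, action set, and layer sizes below are illustrative assumptions.
import torch
import torch.nn as nn

GRID = 11          # assumed coarse pose grid (x, y), per-cell belief
N_HEADINGS = 4     # assumed discretised orientations
N_ACTIONS = 3      # assumed action set: turn left, turn right, go forward


class PerceptionNet(nn.Module):
    """CNN: (map + observation) channels -> likelihood over poses."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, N_HEADINGS, 3, padding=1),
        )

    def forward(self, x):                      # x: (B, 2, GRID, GRID)
        logits = self.conv(x)                  # (B, N_HEADINGS, GRID, GRID)
        return torch.softmax(logits.flatten(1), dim=1)


class PlannerNet(nn.Module):
    """Q-network: belief over poses -> value of each discrete action."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(N_HEADINGS * GRID * GRID, 128), nn.ReLU(),
            nn.Linear(128, N_ACTIONS),
        )

    def forward(self, belief):
        return self.fc(belief)


if __name__ == "__main__":
    perception, planner = PerceptionNet(), PlannerNet()
    obs = torch.rand(1, 2, GRID, GRID)         # stand-in for map + scan channels
    belief = perception(obs)
    action = planner(belief).argmax(dim=1)     # greedy action from Q-values
    print(belief.shape, action.item())
```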

    Collaborative Deep Reinforcement Learning for Joint Object Search

    We examine the problem of joint top-down active search of multiple objects under interaction, e.g., a person riding a bicycle, cups held by the table, etc. Such objects under interaction can often provide contextual cues to each other that facilitate more efficient search. By treating each detector as an agent, we present the first collaborative multi-agent deep reinforcement learning algorithm to learn the optimal policy for joint active object localization, which effectively exploits such beneficial contextual information. We learn inter-agent communication through cross connections with gates between the Q-networks, facilitated by a novel multi-agent deep Q-learning algorithm with joint exploitation sampling. We verify our proposed method on multiple object detection benchmarks. Not only does our model help to improve the performance of state-of-the-art active localization models, it also reveals interesting co-detection patterns that are intuitively interpretable.
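    A minimal sketch of the gated cross-connection idea follows: two per-detector Q-networks exchange hidden features through learned sigmoid gates, so each agent's action values are conditioned on its partner's state. Feature sizes, the gating form, and the action space are assumptions, not the paper's exact architecture.

```python
# Hedged sketch of two per-detector Q-networks with gated cross connections,
# so each agent's Q-values depend on the other agent's hidden features.
import torch
import torch.nn as nn

FEAT = 64      # assumed per-agent state-feature size
N_ACTIONS = 9  # assumed bounding-box transformation actions per agent


class GatedPairQNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc_a = nn.Linear(FEAT, 128)
        self.enc_b = nn.Linear(FEAT, 128)
        # gates decide how much of the partner's hidden code to let through
        self.gate_ab = nn.Linear(128, 128)
        self.gate_ba = nn.Linear(128, 128)
        self.q_a = nn.Linear(256, N_ACTIONS)
        self.q_b = nn.Linear(256, N_ACTIONS)

    def forward(self, s_a, s_b):
        h_a = torch.relu(self.enc_a(s_a))
        h_b = torch.relu(self.enc_b(s_b))
        msg_to_a = torch.sigmoid(self.gate_ba(h_b)) * h_b   # b -> a message
        msg_to_b = torch.sigmoid(self.gate_ab(h_a)) * h_a   # a -> b message
        q_a = self.q_a(torch.cat([h_a, msg_to_a], dim=1))
        q_b = self.q_b(torch.cat([h_b, msg_to_b], dim=1))
        return q_a, q_b


if __name__ == "__main__":
    net = GatedPairQNet()
    q_a, q_b = net(torch.rand(1, FEAT), torch.rand(1, FEAT))
    print(q_a.argmax(1).item(), q_b.argmax(1).item())  # greedy joint action
```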

    Panoramic Annular Localizer: Tackling the Variation Challenges of Outdoor Localization Using Panoramic Annular Images and Active Deep Descriptors

    Visual localization is an attractive problem that estimates the camera location from database images based on the query image. It is a crucial task for various applications, such as autonomous vehicles, assistive navigation and augmented reality. The challenges of the task lie in the various appearance variations between query and database images, including illumination variations, dynamic object variations and viewpoint variations. To tackle those challenges, this paper proposes the Panoramic Annular Localizer, which incorporates a panoramic annular lens and robust deep image descriptors. The panoramic annular images captured by a single camera are processed and fed into the NetVLAD network to form the active deep descriptor, and sequential matching is utilized to generate the localization result. Experiments carried out on public datasets and in the field validate the proposed system. Comment: Accepted by ITSC 201
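    The retrieval step can be pictured as in the sketch below: given one global descriptor per database image and per query frame (stand-ins for the NetVLAD outputs used here), each query is matched by cosine similarity and the decision is smoothed over a short sequence of frames. Descriptor size, window length, and the assumption that the database is ordered along the route are illustrative, not taken from the paper.

```python
# Sketch of descriptor retrieval with sequential matching over query frames.
import numpy as np

def cosine_sim(queries, database):
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    d = database / np.linalg.norm(database, axis=1, keepdims=True)
    return q @ d.T                                  # (n_query, n_db)

def sequential_match(sim, window=5):
    """Average similarity over a sliding window of consecutive query frames
    (assuming the database is ordered along the route), then pick the best
    database index for each query frame."""
    n_q, _ = sim.shape
    matches = []
    for i in range(n_q):
        lo, hi = max(0, i - window + 1), i + 1
        matches.append(int(sim[lo:hi].mean(axis=0).argmax()))
    return matches

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    db = rng.normal(size=(100, 4096))    # 100 database descriptors (NetVLAD-sized guess)
    qs = db[40:50] + 0.1 * rng.normal(size=(10, 4096))   # noisy revisit of frames 40-49
    print(sequential_match(cosine_sim(qs, db)))
```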

    An inversion method based on random sampling for real-time MEG neuroimaging

    Magnetoencephalography (MEG) is a non-invasive neuroimaging technique with a high temporal resolution which can be successfully used in real-time applications, such as brain-computer interface training or neurofeedback rehabilitation. The localization of the active areas of the brain from MEG data results in a highly ill-posed and ill-conditioned inverse problem that requires fast and efficient inversion methods. In this paper we use an inversion method based on random spatial sampling to solve the MEG inverse problem. The method is fast, efficient and has a low computational load. The numerical tests show that the method can produce accurate maps of the electric activity inside the brain even in the case of deep neural sources.
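    One way to read "inversion based on random spatial sampling" is sketched below under stated assumptions: repeatedly solve a small Tikhonov-regularized least-squares problem b ≈ L q on a random subset of candidate source locations and average the per-source estimates. This is an illustration of the idea, not the authors' exact algorithm; problem sizes and the regularization weight are arbitrary.

```python
# Illustrative random-spatial-sampling inversion for b = L q
# (sensor data = lead field @ source amplitudes).  Not the paper's algorithm.
import numpy as np

def random_sampling_inverse(L, b, n_draws=200, subset=50, lam=1e-2, rng=None):
    rng = np.random.default_rng(rng)
    n_sensors, n_sources = L.shape
    q_hat = np.zeros(n_sources)
    counts = np.zeros(n_sources)
    for _ in range(n_draws):
        idx = rng.choice(n_sources, size=subset, replace=False)
        Ls = L[:, idx]
        # Tikhonov-regularized least squares on the sampled sub-problem
        q_s = np.linalg.solve(Ls.T @ Ls + lam * np.eye(subset), Ls.T @ b)
        q_hat[idx] += q_s
        counts[idx] += 1
    return q_hat / np.maximum(counts, 1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    L = rng.normal(size=(64, 1000))          # 64 sensors, 1000 candidate sources
    q_true = np.zeros(1000)
    q_true[123] = 1.0                         # single active source
    b = L @ q_true + 0.01 * rng.normal(size=64)
    est = random_sampling_inverse(L, b, rng=2)
    print(int(np.abs(est).argmax()))          # ideally near source index 123
```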

    Privacy-Preserving by Design: Indoor Positioning System Using Wi-Fi Passive TDOA

    Indoor localization systems have become increasingly important in a wide range of applications, including industry, security, logistics, and emergency services. However, the growing demand for accurate localization has heightened concerns over privacy, as many localization systems rely on active signals that can be misused by an adversary to track users' movements or manipulate their measurements. This paper presents PassiFi, a novel passive Wi-Fi time-based indoor localization system that effectively balances accuracy and privacy. PassiFi uses a passive Wi-Fi Time Difference of Arrival (TDoA) approach that ensures users' privacy and safeguards the integrity of their measurement data while still achieving high accuracy. The system adopts a fingerprinting approach to address multi-path and non-line-of-sight problems and utilizes deep neural networks to learn the complex relationship between TDoA and location. Evaluation in a real-world testbed demonstrates PassiFi's exceptional performance, surpassing traditional multilateration by 128% and achieving sub-meter accuracy on par with state-of-the-art active measurement systems, all while preserving privacy.
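    A hedged sketch of the fingerprinting stage follows: a small fully connected network regresses a 2-D position from a vector of passive TDoA measurements collected at known reference points. The number of TDoA features, layer widths, and the training loop are assumptions, not PassiFi's actual configuration.

```python
# Sketch: learn the TDoA -> location mapping from a fingerprint database.
import torch
import torch.nn as nn

N_TDOA = 6   # assumed number of TDoA features (anchor pairs)

model = nn.Sequential(
    nn.Linear(N_TDOA, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),            # predicted (x, y) in metres
)

def train(model, tdoa, xy, epochs=200, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(tdoa), xy)
        loss.backward()
        opt.step()
    return model

if __name__ == "__main__":
    # synthetic fingerprint database: random TDoA vectors with known positions
    tdoa = torch.rand(500, N_TDOA)
    xy = torch.rand(500, 2) * 10.0
    train(model, tdoa, xy)
    print(model(tdoa[:1]))        # predicted position for one fingerprint
```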

    Active Visual Localization in Partially Calibrated Environments

    Humans can robustly localize themselves without a map after they get lost, by following prominent visual cues or landmarks. In this work, we aim at endowing autonomous agents with the same ability. This ability is important in robotics applications yet very challenging when an agent is exposed to partially calibrated environments, where camera images with accurate 6-Degree-of-Freedom pose labels only cover part of the scene. To address the above challenge, we explore using Reinforcement Learning to search for a policy that generates intelligent motions so as to actively localize the agent given visual information in partially calibrated environments. Our core contribution is to formulate the active visual localization problem as a Partially Observable Markov Decision Process and propose an algorithmic framework based on Deep Reinforcement Learning to solve it. We further propose an indoor scene dataset ACR-6, which consists of both synthetic and real data and simulates challenging scenarios for active visual localization. We benchmark our algorithm against handcrafted baselines for localization and demonstrate that our approach significantly outperforms them on localization success rate. Comment: https://www.youtube.com/watch?v=DIH-GbytCPM&feature=youtu.b
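    The POMDP framing can be sketched as a gym-style interaction loop: the observation carries an image and the agent's current pose belief, actions move the camera, and an episode ends when localization succeeds or a step budget runs out. The ActiveLocEnv class and its reward below are hypothetical placeholders, not the paper's released environment or dataset.

```python
# Hypothetical gym-style environment loop for active visual localization.
import random

class ActiveLocEnv:
    ACTIONS = ["forward", "turn_left", "turn_right"]

    def __init__(self, max_steps=50):
        self.max_steps = max_steps

    def reset(self):
        self.steps = 0
        return {"image": None, "belief": 0.0}           # uninformative starting belief

    def step(self, action):
        self.steps += 1
        belief = random.random()                        # stand-in for pose-belief update
        localized = belief > 0.95
        reward = 1.0 if localized else -0.01            # assumed sparse reward + step cost
        done = localized or self.steps >= self.max_steps
        return {"image": None, "belief": belief}, reward, done, {}

if __name__ == "__main__":
    env = ActiveLocEnv()
    obs, done = env.reset(), False
    while not done:
        action = random.choice(ActiveLocEnv.ACTIONS)    # a trained DRL policy goes here
        obs, reward, done, _ = env.step(action)
    print("episode finished after", env.steps, "steps")
```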