
    Deep Learning Features at Scale for Visual Place Recognition

    The success of deep learning techniques in the computer vision domain has triggered a range of initial investigations into their utility for visual place recognition, all using generic features from networks that were trained for other types of recognition tasks. In this paper, we train, at large scale, two CNN architectures for the specific place recognition task and employ a multi-scale feature encoding method to generate condition- and viewpoint-invariant features. To enable this training to occur, we have developed a massive Specific PlacEs Dataset (SPED) with hundreds of examples of place appearance change at thousands of different places, as opposed to the semantic place type datasets currently available. This new dataset enables us to set up a training regime that interprets place recognition as a classification problem. We comprehensively evaluate our trained networks on several challenging benchmark place recognition datasets and demonstrate that they achieve an average 10% increase in performance over other place recognition algorithms and pre-trained CNNs. By analyzing the network responses and their differences from pre-trained networks, we provide insights into what a network learns when training for place recognition, and what these results signify for future research in this area. Comment: 8 pages, 10 figures. Accepted by the International Conference on Robotics and Automation (ICRA) 2017. This is the submitted version; the final published version may be slightly different.
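
    The paper's own networks are trained on SPED and are not reproduced here. Purely as a hedged illustration of how such condition-invariant descriptors are typically used at query time, the sketch below extracts one global CNN feature per image and matches a query place against a reference database by cosine similarity; the backbone, weights, and image tensors are placeholders, not the authors' trained models.

```python
# Hypothetical sketch: place matching with global CNN descriptors and cosine
# similarity. A generic ResNet backbone stands in for the SPED-trained networks.
import torch
import torch.nn.functional as F
import torchvision.models as models

backbone = models.resnet18(weights=None)      # place-recognition weights would be loaded here
backbone.fc = torch.nn.Identity()             # keep the 512-d global feature instead of class logits
backbone.eval()

def describe(image_batch):
    """image_batch: (N, 3, H, W) float tensor, already normalised."""
    with torch.no_grad():
        return F.normalize(backbone(image_batch), dim=1)

# Compare a query image against a database of reference places (stand-in tensors).
query = describe(torch.rand(1, 3, 224, 224))
refs = describe(torch.rand(10, 3, 224, 224))
scores = refs @ query.T                       # cosine similarities, shape (10, 1)
best_match = scores.argmax().item()
print(best_match, scores[best_match].item())
```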

    Implementing Best Practices for Co-Prescribing Naloxone in Your Agency: A Guide for Healthcare Professions

    From April 2020 to April 2021, 75,673 opioid overdose-related deaths were recorded in the United States. This is an increase of almost 20,000 over the preceding recording period, April 2019 to April 2020, when 56,064 deaths were recorded. Naloxone has been proven to save the lives of patients overdosing on opioids by reversing the drug's effects, and it has already produced a significant reduction in opioid overdose-related mortality. Pharmacists are now able to prescribe naloxone alongside opioid prescriptions without a script from a doctor, and EMS units are active in communities, utilizing naloxone in overdose cases. More importantly, healthcare providers need to continue increasing the amount of naloxone that is co-prescribed to their patients on opioids. Not only should more healthcare providers have access to naloxone, but they should also be able to explain to their patients how to use it. Motivating more healthcare providers to use naloxone is one of the fundamentals of our best-practices intervention. In this pilot study, conducted under a larger grant study that surveys physicians on attitudes and behaviors around co-prescribing naloxone, we began by surveying medical students and assessing their knowledge of naloxone before and after completing our Continuing Medical Education (CME) module, "Implementing Best Practices for Co-Prescribing Naloxone in Your Agency: A Guide for Healthcare Professions". Based on the preliminary data, all domains tested in the survey showed statistically significant improvement (p < 0.001) after going through the program. These results further reinforce that our CME training could be implemented in other healthcare sectors to further increase naloxone co-prescription rates by healthcare providers.

    Image features for visual teach-and-repeat navigation in changing environments

    We present an evaluation of standard image features in the context of long-term visual teach-and-repeat navigation of mobile robots, where the environment exhibits significant changes in appearance caused by seasonal weather variations and daily illumination changes. We argue that for long-term autonomous navigation, the viewpoint, scale and rotation invariance of the standard feature extractors is less important than their robustness to mid- and long-term changes in environment appearance. Therefore, we focus our evaluation on the robustness of image registration to variable lighting and naturally-occurring seasonal changes. We combine detection and description components of different feature extractors and evaluate their performance on five datasets collected by mobile vehicles in three different outdoor environments over the course of one year. Moreover, we propose a trainable feature descriptor based on a combination of evolutionary algorithms and Binary Robust Independent Elementary Features, which we call GRIEF (Generated BRIEF). In terms of robustness to seasonal changes, the most promising results were achieved by the SpG/CNN and STAR/GRIEF features, the latter being slightly less robust but faster to calculate.
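
    GRIEF itself is a trained binary descriptor and is not part of standard libraries. Purely as an illustration of the detector/descriptor pairing and cross-condition matching that such an evaluation performs, the hedged OpenCV sketch below scores image registration between two images of the same place; ORB stands in for the STAR/GRIEF combination, and the file names are hypothetical.

```python
# Illustrative sketch (not the GRIEF implementation): pair a detector with a
# binary descriptor and score matching across two images of the same place
# taken in different conditions, e.g. summer vs. winter.
import cv2

def match_score(img_a, img_b, max_keypoints=500):
    orb = cv2.ORB_create(nfeatures=max_keypoints)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    # Fraction of keypoints with a mutual nearest neighbour: a crude proxy for
    # robustness to the appearance change between the two images.
    return len(matches) / max(1, min(len(kp_a), len(kp_b)))

summer = cv2.imread("place_summer.png", cv2.IMREAD_GRAYSCALE)  # placeholder files
winter = cv2.imread("place_winter.png", cv2.IMREAD_GRAYSCALE)
print(match_score(summer, winter))
```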

    Computable Banach spaces via domain theory

    This paper extends the order-theoretic approach to computable analysis via continuous domains to complete metric spaces and Banach spaces. We employ the domain of formal balls to define a computability theory for complete metric spaces. For Banach spaces, the domain specialises to the domain of closed balls, ordered by reversed inclusion. We characterise computable linear operators as those which map computable sequences to computable sequences and are effectively bounded. We show that the domain-theoretic computability theory is equivalent to the well-established approach by Pour-El and Richards. 1 Introduction. This paper is part of a programme to introduce the theory of continuous domains as a new approach to computable analysis. Initiated by the various applications of continuous domain theory to modelling classical mathematical spaces and performing computations, as outlined in the recent survey paper by Edalat [6], the authors started this work with [9], which was concerned with co..
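
    For readers unfamiliar with the domain of formal balls, the standard order underlying this approach can be written as below; this is background notation (the usual Edalat–Heckmann definition), not a result specific to the paper.

```latex
% Order on the domain of formal balls $BX = X \times [0,\infty)$ of a metric space $(X, d)$:
% a formal ball refines another when it describes a smaller ball contained within it.
(x, r) \sqsubseteq (y, s) \iff d(x, y) \le r - s
% For a Banach space this is reversed inclusion of the corresponding closed balls,
% $\bar{B}(x, r) \supseteq \bar{B}(y, s)$, matching the domain of closed balls mentioned above.
```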

    A Holistic Approach to Reactive Mobile Manipulation

    We present the design and implementation of a taskable reactive mobile manipulation system. In contrast to related work, we treat the arm and base degrees of freedom as a holistic structure, which greatly improves the speed and fluidity of the resulting motion. At the core of this approach is a robust and reactive motion controller which can achieve a desired end-effector pose while avoiding joint position and velocity limits and ensuring the mobile manipulator is manoeuvrable throughout the trajectory. This can support sensor-based behaviours such as closed-loop visual grasping. As no planning is involved in our approach, the robot is never stationary thinking about what to do next. We show the versatility of our holistic motion controller by implementing a pick-and-place system using behaviour trees and demonstrate this task on a 9-degree-of-freedom mobile manipulator. Additionally, we provide an open-source implementation of our motion controller for both non-holonomic and omnidirectional mobile manipulators, available at jhavl.github.io/holistic.
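
    The controller described here is a richer reactive scheme that also handles joint limits and manoeuvrability; the much-simplified numpy sketch below only illustrates the holistic idea of solving for base and arm velocities together from a single end-effector error, using damped least squares rather than the paper's controller. The Jacobian, gains, and error values are made up.

```python
# Simplified sketch of whole-body resolved-rate control: base and arm joint
# rates are solved for together from one end-effector velocity error, instead
# of planning base and arm motion separately.
import numpy as np

def holistic_step(jacobian, ee_pose_error, gain=1.0, damping=0.01):
    """jacobian: 6 x n whole-body Jacobian (base DOFs stacked with arm DOFs).
    ee_pose_error: 6-vector (translation + orientation error) towards the goal."""
    desired_twist = gain * ee_pose_error
    j, jt = jacobian, jacobian.T
    # Damped least-squares keeps the step bounded near singular configurations.
    dq = jt @ np.linalg.solve(j @ jt + damping * np.eye(6), desired_twist)
    return dq  # commanded velocities for all n degrees of freedom

# Example with a hypothetical 9-DOF mobile manipulator (2 base + 7 arm DOFs).
J = np.random.rand(6, 9)
error = np.array([0.1, 0.0, 0.2, 0.0, 0.0, 0.05])
print(holistic_step(J, error))
```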

    QuadricSLAM: Dual quadrics as SLAM landmarks

    Research in Simultaneous Localization And Mapping (SLAM) is increasingly moving towards richer world representations involving objects and high level features that enable a semantic model of the world for robots, potentially leading to a more meaningful set of robot-world interactions. Many of these advances are grounded in state-of-the-art computer vision techniques primarily developed in the context of image-based benchmark datasets, leaving several challenges to be addressed in adapting them for use in robotics. In this paper, we derive a SLAM formulation that uses dual quadrics as 3D landmark representations, exploiting their ability to compactly represent the size, position and orientation of an object, and show how 2D bounding boxes (such as those typically obtained from visual object detection systems) can directly constrain the quadric parameters via a novel geometric error formulation. We develop a sensor model for deep-learned object detectors that addresses the challenge of partial object detections often encountered in robotics applications, and demonstrate how to jointly estimate the camera pose and constrained dual quadric parameters in factor graph based SLAM with a general perspective camera.
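
    As a hedged sketch of the geometry behind the bounding-box constraint (not the authors' full SLAM factors), the snippet below builds a dual quadric from an ellipsoid pose and semi-axes, projects it through a camera matrix to a dual conic, and reads off the predicted axis-aligned box; all numeric values are illustrative.

```python
# Dual-quadric landmark geometry: an ellipsoid as a 4x4 dual quadric Q*,
# projected through a 3x4 camera matrix P to a dual conic C* = P Q* P^T,
# from which an axis-aligned bounding box follows.
import numpy as np

def dual_quadric(pose, radii):
    """pose: 4x4 homogeneous transform of the ellipsoid; radii: (3,) semi-axes."""
    q_star = np.diag([radii[0]**2, radii[1]**2, radii[2]**2, -1.0])
    return pose @ q_star @ pose.T

def project_to_bbox(q_star, camera_matrix):
    c_star = camera_matrix @ q_star @ camera_matrix.T  # 3x3 dual conic
    u = c_star[0, 2] / c_star[2, 2]
    v = c_star[1, 2] / c_star[2, 2]
    # Tangent lines of the conic parallel to the image axes give the box extent.
    half_u = np.sqrt(c_star[0, 2]**2 - c_star[0, 0] * c_star[2, 2]) / abs(c_star[2, 2])
    half_v = np.sqrt(c_star[1, 2]**2 - c_star[1, 1] * c_star[2, 2]) / abs(c_star[2, 2])
    return (u - half_u, v - half_v, u + half_u, v + half_v)  # (u_min, v_min, u_max, v_max)

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P = K @ np.hstack([np.eye(3), [[0.0], [0.0], [5.0]]])   # object 5 m in front of the camera
Q = dual_quadric(np.eye(4), np.array([0.5, 0.3, 0.8]))
print(project_to_bbox(Q, P))
```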

    Online Monitoring of Object Detection Performance during Deployment

    During deployment, an object detector is expected to operate at a similar performance level reported on its testing dataset. However, when deployed onboard mobile robots that operate under varying and complex environmental conditions, the detector's performance can fluctuate and occasionally degrade severely without warning. Undetected, this can lead the robot to take unsafe and risky actions based on low-quality and unreliable object detections. We address this problem and introduce a cascaded neural network that monitors the performance of the object detector by predicting the quality of its mean average precision (mAP) on a sliding window of the input frames. The proposed cascaded network exploits the internal features from the deep neural network of the object detector. We evaluate our proposed approach using different combinations of autonomous driving datasets and object detectors.
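
    The exact cascaded architecture and feature taps are specific to the paper; the following stand-in sketch only illustrates the idea of a small temporal network that maps a sliding window of pooled detector features to a predicted quality class (e.g. binned mAP). The feature dimension, window length, and number of bins are assumptions.

```python
# Illustrative stand-in for a detection-performance monitor: pooled internal
# detector features over a sliding window of frames -> predicted quality bin.
import torch
import torch.nn as nn

class DetectionQualityMonitor(nn.Module):
    def __init__(self, feature_dim=256, num_quality_bins=3):
        super().__init__()
        self.temporal = nn.GRU(feature_dim, 128, batch_first=True)
        self.head = nn.Linear(128, num_quality_bins)

    def forward(self, window_features):
        # window_features: (batch, window, feature_dim), one pooled feature per frame
        _, hidden = self.temporal(window_features)
        return self.head(hidden[-1])  # logits over quality bins

monitor = DetectionQualityMonitor()
frames = torch.randn(4, 10, 256)      # 4 windows of 10 frames each (made-up shapes)
print(monitor(frames).softmax(dim=-1))
```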

    Density-aware NeRF Ensembles: Quantifying Predictive Uncertainty in Neural Radiance Fields

    We show that ensembling effectively quantifies model uncertainty in Neural Radiance Fields (NeRFs) if a density-aware epistemic uncertainty term is considered. The naive ensembles investigated in prior work simply average rendered RGB images to quantify the model uncertainty caused by conflicting explanations of the observed scene. In contrast, we additionally consider the termination probabilities along individual rays to identify epistemic model uncertainty due to a lack of knowledge about the parts of a scene unobserved during training. We achieve new state-of-the-art performance across established uncertainty quantification benchmarks for NeRFs, outperforming methods that require complex changes to the NeRF architecture and training regime. We furthermore demonstrate that NeRF uncertainty can be utilised for next-best view selection and model refinement.
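
    For context, the termination behaviour the density-aware term reasons about comes from standard NeRF volume rendering. The sketch below computes per-sample weights along a ray and the residual (non-terminating) probability mass, which is large for rays through space no ensemble member has observed; this is textbook rendering math, not the paper's exact uncertainty formulation.

```python
# Standard NeRF volume-rendering quantities along one ray:
# weights w_i = T_i * (1 - exp(-sigma_i * delta_i)), with T_i the transmittance,
# and the residual mass 1 - sum(w_i) that never terminates inside the scene.
import numpy as np

def ray_weights(sigmas, deltas):
    """sigmas: (N,) predicted densities along a ray; deltas: (N,) sample spacings."""
    alphas = 1.0 - np.exp(-sigmas * deltas)
    transmittance = np.concatenate([[1.0], np.cumprod(1.0 - alphas)[:-1]])
    weights = transmittance * alphas
    residual = 1.0 - weights.sum()   # probability mass that never terminated
    return weights, residual

sigmas = np.array([0.0, 0.1, 2.0, 5.0, 0.5])   # made-up densities
deltas = np.full(5, 0.1)
w, r = ray_weights(sigmas, deltas)
print(w, r)   # a large residual flags rays through unobserved space
```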

    FastSLAM using SURF features: An efficient implementation and practical experiences

    This paper describes how the recently published SURF features can be used as landmarks for an online FastSLAM algorithm that simultaneously estimates the robot pose and the pose of a large number of landmarks. An implementation with particular focus on efficient data structures, such as a two-stage landmark database and special balanced binary trees, is described. Practical results on outdoor datasets at 3 Hz, with about 3-6% error of total traveled distance, are shown.
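
    As a rough illustration of the bookkeeping FastSLAM requires (omitting the paper's two-stage database and balanced trees), the sketch below shows per-particle state: a robot pose plus an independent EKF per landmark, with SURF descriptors used for data association. Class and field names are hypothetical.

```python
# Per-particle FastSLAM state: each particle owns a robot pose and a small
# independent EKF (mean + covariance) for every landmark; landmark descriptors
# drive data association between observations and existing landmarks.
import numpy as np
from dataclasses import dataclass, field

@dataclass
class LandmarkEKF:
    mean: np.ndarray        # landmark position estimate
    cov: np.ndarray         # its covariance
    descriptor: np.ndarray  # SURF descriptor used for data association

@dataclass
class Particle:
    pose: np.ndarray                                   # (x, y, heading)
    weight: float = 1.0
    landmarks: list[LandmarkEKF] = field(default_factory=list)

    def associate(self, descriptor, max_dist=0.3):
        """Return the existing landmark with the closest descriptor, if close enough."""
        best, best_d = None, max_dist
        for lm in self.landmarks:
            d = np.linalg.norm(lm.descriptor - descriptor)
            if d < best_d:
                best, best_d = lm, d
        return best

particles = [Particle(pose=np.zeros(3)) for _ in range(100)]
```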