
    Visual-Inertial first responder localisation in large-scale indoor training environments.

    Accurately and reliably determining the position and heading of first responders undertaking training exercises can provide valuable insights into their situational awareness and give a larger context to the decisions made. Measuring first responder movement, however, requires an accurate and portable localisation system. Training exercises often take place in large-scale indoor environments with limited power infrastructure to support localisation. Indoor positioning technologies that use radio or sound waves require an extensive network of transmitters or receivers to be installed within the environment to ensure reliable coverage. These technologies also need power sources to operate, making their use impractical for this application. Inertial sensors are infrastructure-independent, low-cost, and low-power positioning devices attached to the person or object being tracked, but their localisation accuracy deteriorates over long-term tracking due to intrinsic biases and sensor noise. This thesis investigates how inertial sensor tracking can be improved by providing corrections from a visual sensor that uses passive infrastructure (fiducial markers) to calculate accurate position and heading values. Even though a visual sensor increases the accuracy of the localisation system, combining it with inertial sensors is not trivial, especially when the sensors are mounted on different parts of the human body and undergo different motion dynamics. Additionally, visual sensors have higher energy consumption, requiring more batteries to be carried by the first responder. This thesis presents a novel sensor fusion approach that loosely couples visual and inertial sensors to create a positioning system that accurately localises walking humans in large-scale indoor environments. Experimental evaluation of the devised localisation system indicates sub-metre accuracy over a 250 m indoor trajectory. The thesis also proposes two methods to improve the energy efficiency of the localisation system. The first is a distance-based error correction approach, which uses distance estimates from the foot-mounted inertial sensor to reduce the number of corrections required from the visual sensor. Results indicate a 70% decrease in energy consumption while maintaining sub-metre localisation accuracy. The second is a motion-type-adaptive error correction approach, which uses the human walking motion type (forward, backward, or sideways) to further optimise energy efficiency by modulating the operation of the visual sensor. Results indicate a 25% reduction in the number of corrections required to maintain sub-metre localisation accuracy. Overall, this thesis advances the state of the art by providing a sensor fusion solution for long-term sub-metre-accurate localisation, together with methods to reduce its energy consumption, making it more practical for use in first responder training exercises.
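
    To make the loosely coupled fusion idea concrete, below is a minimal sketch in Python (NumPy), assuming a 2D position state, an identity measurement model, and illustrative noise values; the step vectors, marker-fix interval, and covariances are hypothetical stand-ins, not the thesis's actual filter.

        import numpy as np

        rng = np.random.default_rng(0)

        def kalman_correct(x, P, z, R):
            # Standard Kalman update with an identity measurement model: blend
            # the drifting inertial estimate with an absolute fiducial-marker fix.
            K = P @ np.linalg.inv(P + R)
            return x + K @ (z - x), (np.eye(2) - K) @ P

        x_true = np.zeros(2)                  # ground-truth position
        x_est = np.zeros(2)                   # inertial dead-reckoning estimate
        P = np.eye(2) * 0.01                  # estimate covariance
        Q = np.eye(2) * 0.02                  # per-step inertial process noise
        R = np.eye(2) * 0.05                  # visual fix noise (markers are accurate)

        for t in range(250):                  # 250 one-metre steps
            step = np.array([1.0, 0.0])
            x_true = x_true + step
            x_est = x_est + step + rng.normal(0, 0.05, 2)  # IMU integrates with noise
            P = P + Q                         # uncertainty grows between fixes
            if t % 25 == 0:                   # marker visible every 25 steps
                z = x_true + rng.normal(0, 0.1, 2)  # absolute position from marker
                x_est, P = kalman_correct(x_est, P, z, R)

        print(f"final drift: {np.linalg.norm(x_est - x_true):.2f} m")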

    Localising Faster: Efficient and precise lidar-based robot localisation in large-scale environments

    This paper proposes a novel approach for global localisation of mobile robots in large-scale environments. Our method combines learning-based and filtering-based localisation to localise the robot efficiently and precisely by seeding Monte Carlo Localisation (MCL) with a deep-learned distribution. In particular, a fast localisation system rapidly estimates the 6-DOF pose through a deep probabilistic model (Gaussian Process Regression with a deep kernel), and a precise recursive estimator then refines the estimated robot pose according to the geometric alignment. More importantly, the Gaussian method (deep probabilistic localisation) and the non-Gaussian method (MCL) can be integrated naturally via importance sampling, so the two systems combine seamlessly and mutually benefit from each other. To verify the proposed framework, we provide a case study of large-scale localisation with a 3D lidar sensor. Our experiments on the Michigan NCLT long-term dataset show that the proposed method localises the robot in 1.94 s on average (median 0.8 s) with a precision of 0.75 m in a large-scale environment of approximately 0.5 km².
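
    A minimal sketch of the seeding idea, assuming the deep-probabilistic model's output can be summarised as a Gaussian over a reduced (x, y, yaw) pose and stubbing the geometric alignment score; both functions below are hypothetical placeholders rather than the paper's implementation.

        import numpy as np
        from scipy.stats import multivariate_normal

        rng = np.random.default_rng(1)

        def deep_pose_posterior():
            # Stand-in for the learned model's output: mean and covariance
            # over (x, y, yaw) rather than the full 6-DOF pose.
            return np.array([10.0, -4.0, 0.3]), np.diag([1.0, 1.0, 0.1])

        def alignment_likelihood(pose):
            # Stand-in for a scan-to-map geometric alignment score.
            true = np.array([10.5, -3.6, 0.25])
            return np.exp(-0.5 * np.sum((pose - true) ** 2 / np.array([0.5, 0.5, 0.05])))

        mean, cov = deep_pose_posterior()
        particles = rng.multivariate_normal(mean, cov, size=500)  # seed MCL from q(x)

        # Importance sampling: weight each particle by target likelihood over
        # proposal density, so the two estimators combine consistently.
        q = multivariate_normal(mean, cov).pdf(particles)
        w = np.array([alignment_likelihood(p) for p in particles]) / q
        w /= w.sum()

        pose = (w[:, None] * particles).sum(axis=0)  # weighted mean pose
        print("refined pose estimate:", pose)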

    Neural correlates of navigation in large-scale space

    Navigation and self-localisation are fundamental to spatial cognition. The cognitive map supporting these abilities is implemented in the hippocampal formation. Place cells in the hippocampus fire when the animal is at a specific location – a place field. They are thought to be involved in navigation and self-localisation but are usually studied in constrained environments, limiting the observable states. In this thesis, I present two experiments studying place cells in large open-field environments, a novel auditory cue-triggered navigational task, and a technical solution for conducting large-scale automated experiments. Place cells are frequently reactivated during immobility, rapidly replaying trajectories through environments. These replay events are thought to be involved in navigational planning. Using a novel automated cue-triggered navigational task in a large open-field environment, I show that replay is not associated with navigation to the goal. Instead, it occurs reliably at the end of successful trials, when an associated reward is received, but not during consumption of scattered pellets. The trajectories in these events are predictive of the animal's movement after, but not before, the reward. The number of place fields per cell, their size, and other properties have not been fully characterised. Using multiple large open-field environments of different sizes, I show that place field size, shape, and density change systematically with distance from walls. However, through a homeostatic mechanism, the mean firing rate and the proportion of co-active units in the population remain constant across environments, as does the accuracy of their spatial representation. Multiple place field properties are conserved by cells across environments, including the number of fields, which is quantified relative to environment size using a gamma-Poisson model. Place cell population models suggest two sub-populations, with uniform and boundary-dependent field distributions. These results provide a comprehensive account of place cell population statistics in environments of different sizes.
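
    The gamma-Poisson (negative binomial) view of field counts can be sketched as follows: each cell draws a field propensity from a gamma distribution whose mean scales with environment area, and its observed field count is Poisson given that propensity. The shape and rate values below are illustrative, not the thesis's fitted parameters.

        import numpy as np

        rng = np.random.default_rng(2)

        shape, rate_per_m2 = 2.0, 0.5      # hypothetical gamma parameters
        area = 4.0                         # environment area (arbitrary units)

        # lambda ~ Gamma(shape, scale), with mean = rate_per_m2 * area
        lam = rng.gamma(shape, (rate_per_m2 * area) / shape, size=10_000)
        counts = rng.poisson(lam)          # fields per cell

        # Method-of-moments recovery of the gamma shape from the counts:
        # for a gamma-Poisson mixture, Var = mean + mean^2 / shape.
        m, v = counts.mean(), counts.var()
        est_shape = m**2 / (v - m)
        print(f"mean fields/cell: {m:.2f}, estimated shape: {est_shape:.2f}")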

    Wild-Places: A Large-Scale Dataset for Lidar Place Recognition in Unstructured Natural Environments

    Many existing datasets for lidar place recognition are solely representative of structured urban environments and have recently been saturated in performance by deep learning based approaches. Natural and unstructured environments present many additional challenges for long-term localisation, but these environments are not represented in currently available datasets. To address this, we introduce Wild-Places, a challenging large-scale dataset for lidar place recognition in unstructured, natural environments. Wild-Places contains eight lidar sequences collected with a handheld sensor payload over the course of fourteen months, comprising a total of 67K undistorted lidar submaps along with accurate 6DoF ground truth. Our dataset contains multiple revisits both within and between sequences, allowing for both intra-sequence (i.e. loop closure detection) and inter-sequence (i.e. re-localisation) place recognition. We also benchmark several state-of-the-art approaches to demonstrate the challenges that this dataset introduces, particularly for long-term place recognition, as natural environments change over time. Our dataset and code will be available at https://csiro-robotics.github.io/Wild-Places.
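
    A generic sketch of the retrieval-style evaluation such a benchmark implies, with random stand-ins for global submap descriptors and ground-truth positions; the 3 m revisit radius and descriptor dimensionality are assumptions for illustration, not the paper's protocol.

        import numpy as np
        from scipy.spatial.distance import cdist

        rng = np.random.default_rng(3)

        db_desc = rng.normal(size=(1000, 256))        # database submap descriptors
        db_pos = rng.uniform(0, 500, size=(1000, 2))  # ground-truth positions (2D here)
        q_desc = db_desc[:100] + rng.normal(0, 0.1, (100, 256))  # noisy revisit queries
        q_pos = db_pos[:100]

        # Nearest-neighbour retrieval in descriptor space; a real intra-sequence
        # evaluation would also exclude temporally adjacent submaps.
        nn = cdist(q_desc, db_desc).argmin(axis=1)

        radius = 3.0                                  # revisit threshold in metres
        hits = np.linalg.norm(db_pos[nn] - q_pos, axis=1) <= radius
        print(f"recall@1: {hits.mean():.2f}")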

    RFID Localisation For Internet Of Things Smart Homes: A Survey

    The Internet of Things (IoT) enables numerous business opportunities in fields as diverse as e-health, smart cities, and smart homes, among many others. The IoT incorporates multiple long-range, short-range, and personal-area wireless networks and technologies into the design of IoT applications. Localisation in indoor positioning systems plays an important role in the IoT. Location-based IoT applications range from real-time tracking of objects and people to asset management, agriculture, assisted monitoring technologies for healthcare, and smart homes, to name a few. Among radio-frequency-based systems for indoor positioning, Radio Frequency Identification (RFID) is a key enabling technology for the IoT due to its cost-effectiveness, high readability rates, automatic identification and, importantly, its energy efficiency. This paper reviews the state-of-the-art RFID technologies in IoT smart home applications. It presents several comparative studies of RFID-based projects in smart homes and discusses the applications, techniques, algorithms, and challenges of adopting RFID technologies in IoT smart home systems.
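
    As a concrete illustration of one common RFID localisation technique (not a method from this survey), the sketch below inverts a log-distance path-loss model to estimate tag-to-reader ranges from RSSI, then trilaterates by linear least squares; the reader positions and path-loss constants are invented for the example.

        import numpy as np

        rng = np.random.default_rng(4)

        readers = np.array([[0.0, 0.0], [8.0, 0.0], [0.0, 6.0], [8.0, 6.0]])
        tag = np.array([3.0, 2.5])              # unknown tag position

        P0, n = -40.0, 2.2                      # RSSI at 1 m, path-loss exponent
        dist = np.linalg.norm(readers - tag, axis=1)
        rssi = P0 - 10 * n * np.log10(dist) + rng.normal(0, 1.0, 4)  # noisy readings

        ranges = 10 ** ((P0 - rssi) / (10 * n))  # invert the path-loss model

        # Linearised trilateration: subtract the first range equation from the rest.
        A = 2 * (readers[1:] - readers[0])
        b = (ranges[0]**2 - ranges[1:]**2
             + np.sum(readers[1:]**2, axis=1) - np.sum(readers[0]**2))
        est, *_ = np.linalg.lstsq(A, b, rcond=None)
        print("estimated tag position:", est)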

    Indoor wireless communications and applications

    Chapter 3 addresses challenges in radio link and system design in indoor scenarios. Given that most human activities take place in indoor environments, the need to support ubiquitous indoor data connectivity and location/tracking services has become even more important than in previous decades. The specific technical challenges addressed are (i) modelling complex indoor radio channels for effective antenna deployment, (ii) the potential of millimetre-wave (mm-wave) radios for supporting higher data rates, and (iii) feasible indoor localisation and tracking techniques, each summarised in a dedicated section of this chapter.
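
    To put a number on the mm-wave trade-off, the sketch below evaluates the free-space (Friis) path loss at 2.4 GHz and 60 GHz; real indoor channels add wall penetration and multipath losses on top of this baseline.

        import numpy as np

        c = 3e8  # speed of light, m/s

        def fspl_db(d_m, f_hz):
            # Free-space path loss: 20log10(d) + 20log10(f) + 20log10(4*pi/c)
            return (20 * np.log10(d_m) + 20 * np.log10(f_hz)
                    + 20 * np.log10(4 * np.pi / c))

        for f in (2.4e9, 60e9):
            print(f"{f/1e9:.1f} GHz at 10 m: {fspl_db(10, f):.1f} dB")
        # 60 GHz loses ~28 dB more than 2.4 GHz at the same distance
        # (20*log10(60/2.4)), which is why mm-wave needs directional antennas
        # or dense deployment indoors despite its wide bandwidth.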

    Learning Deployable Navigation Policies at Kilometer Scale from a Single Traversal

    Model-free reinforcement learning has recently been shown to be effective at learning navigation policies from complex image input. However, these algorithms tend to require large amounts of interaction with the environment, which can be prohibitively costly to obtain on robots in the real world. We present an approach for efficiently learning goal-directed navigation policies on a mobile robot from only a single coverage traversal of recorded data. The navigation agent learns an effective policy over a diverse action space in a large heterogeneous environment spanning more than 2 km of travel, through buildings and outdoor regions that collectively exhibit large variations in visual appearance, self-similarity, and connectivity. We compare pretrained visual encoders that enable precomputation of visual embeddings, achieving a throughput of tens of thousands of transitions per second at training time on a commodity desktop computer and allowing agents to learn from millions of trajectories of experience in a matter of hours. We propose multiple forms of computationally efficient stochastic augmentation to enable the learned policy to generalise beyond these precomputed embeddings, and demonstrate successful deployment of the learned policy on the real robot without fine-tuning, despite environmental appearance differences at test time. The dataset and code required to reproduce these results and apply the technique to other datasets and robots are made publicly available at rl-navigation.github.io/deployable.
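
    A minimal sketch of the precompute-then-augment pattern, with a random stand-in for the frozen encoder's cached outputs; the noise scale and dropout rate are illustrative choices, not the paper's settings.

        import numpy as np

        rng = np.random.default_rng(5)

        # Encode every frame of the traversal once with a frozen encoder
        # (stubbed here), then reuse the cache for millions of cheap samples.
        cache = rng.normal(size=(10_000, 512))

        def sample_batch(batch_size=256, noise=0.05, drop=0.1):
            # Perturb cached embeddings so the policy does not overfit to
            # the exact precomputed vectors.
            idx = rng.integers(0, len(cache), batch_size)
            x = cache[idx].copy()
            x += rng.normal(0, noise, x.shape)       # Gaussian embedding noise
            x *= rng.random(x.shape) > drop          # random feature dropout
            return x                                 # feed to the policy network

        batch = sample_batch()
        print(batch.shape)                           # (256, 512), augmented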