
    Visualization of Simultaneous Localization and Mapping using SVG

    Robotic systems often use simultaneous localization and mapping (SLAM) methods in their operations. Most of the computed data are stored as nested arrays with multiple levels and dimensions. SLAM data contain robot movements, object detections, and the relations between them. This system visualizes SLAM data as a map containing the robot's historical positions, the object positions, and the relations between objects and robot, shown as detection lines from each robot position, so that the map can be understood by the human eye. This paper describes the process of composing and converting the movement and detection data to prepare the information required to build the map. The map is composed by plotting every movement and detection into a polar coordinate area, and it is stored in a database for flexible future use. A commonly used web-based interface was chosen to display the map via a web browser; the map is generated by server-side scripts that transform the polar data into a full map.
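
    As a rough illustration of the conversion the paper describes, the sketch below projects polar detections recorded at each historical robot pose into Cartesian coordinates and emits them as an SVG map with a trajectory polyline and detection lines. The record layout and all values here are assumptions for illustration, not the paper's actual data format.

```python
# Minimal sketch (not the paper's code): polar SLAM records -> SVG map.
import math

poses = [(0.0, 0.0, 0.0), (1.0, 0.2, 0.1), (2.0, 0.5, 0.3)]   # x, y, heading
detections = {0: [(2.5, 0.8)], 1: [(2.0, -0.4)], 2: [(1.5, 0.0)]}  # pose index -> (range, bearing)

def to_cartesian(pose, rng, bearing):
    """Project a polar detection made at `pose` into world coordinates."""
    x, y, theta = pose
    return x + rng * math.cos(theta + bearing), y + rng * math.sin(theta + bearing)

def svg_map(poses, detections, scale=50, size=400):
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" width="{size}" height="{size}">']
    # Robot trajectory as a polyline through the historical positions.
    pts = " ".join(f"{size/2 + x*scale},{size/2 - y*scale}" for x, y, _ in poses)
    parts.append(f'<polyline points="{pts}" fill="none" stroke="blue"/>')
    for i, pose in enumerate(poses):
        for rng, bearing in detections.get(i, []):
            ox, oy = to_cartesian(pose, rng, bearing)
            x1, y1 = size/2 + pose[0]*scale, size/2 - pose[1]*scale
            x2, y2 = size/2 + ox*scale, size/2 - oy*scale
            # Detection line from the robot position to the observed object.
            parts.append(f'<line x1="{x1}" y1="{y1}" x2="{x2}" y2="{y2}" stroke="gray"/>')
            parts.append(f'<circle cx="{x2}" cy="{y2}" r="4" fill="red"/>')
    parts.append("</svg>")
    return "\n".join(parts)

print(svg_map(poses, detections))
```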

    Sensors, SLAM and Long-term Autonomy: A Review

    Simultaneous Localization and Mapping, commonly known as SLAM, has been an active research area in the field of robotics over the past three decades. To solve the SLAM problem, every robot is equipped with either a single sensor or a combination of similar or different sensors. This paper attempts to review, discuss, evaluate, and compare these sensors. Keeping an eye on the future, it also assesses the characteristics of these sensors against factors critical to the long-term autonomy challenge.

    Assessing the Viability of Complex Electrical Impedance Tomography (EIT) with a Spatially Distributed Sensor Array for Imaging of River Bed Morphology: a Proof of Concept Study

    This report was produced as part of a NERC-funded ‘Connect A’ project to establish a new collaborative partnership between the University of Worcester (UW) and Q-par Angus Ltd. The project aim was to assess the potential of using complex Electrical Impedance Tomography (EIT) to image river bed morphology. An assessment of the viability of sensors inserted vertically into the channel margins to provide real-time or near real-time monitoring of bed morphology is reported. Funding has enabled UW to carry out a literature review of the use of EIT and of existing methods used for river bed surveys, and to outline the requirements of potential end-users. Q-par Angus has led technical developments and assessed the viability of EIT for this purpose. EIT is one of a suite of tomographic imaging techniques and has already been used as an imaging tool for medical analysis, industrial processing, and geophysical site survey work. The method uses electrodes placed on the margins or boundary of the entity being imaged; a current is applied to some electrodes and measured on the remaining ones. Tomographic reconstruction uses algorithms to estimate the distribution of conductivity within the object and to produce an image of this distribution from the impedance measurements. The advantages of EIT lie in the inherent simplicity, low cost, and portability of the hardware, the high speed of data acquisition for real-time or near real-time monitoring, robust sensors, and the non-invasive manner in which the object is monitored. The need for sophisticated image reconstruction algorithms, and for images with adequate spatial resolution, are key challenges. A literature review of the use of EIT suggests that, despite its many other applications, to the best of our knowledge only one study to date has utilised EIT for river survey work (Sambuelli et al., 2002). That study supported the notion that EIT may provide an innovative and cost-effective way of describing river bed morphology. However, it used an invasive sensor array, so the potential for using EIT in a non-invasive way in a river environment is still to be tested. A review of existing methods to monitor river bed morphology indicates that a plethora of techniques have been applied by a range of disciplines, including fluvial geomorphology, ecology, and engineering; however, none provide non-invasive, low-cost assessments in real-time or near real-time. EIT therefore has the potential to meet requirements of end-users that no existing technique can accomplish. Work led by Q-par Angus Ltd. has assessed the technical requirements of the proposed approach, including probe design and deployment, sensor array parameters, data acquisition, image reconstruction, and test procedure. Consequently, the success of this collaboration, the literature review, and the identification of the proposed approach and its potential applications have encouraged the authors to seek further funding to test, develop, and market this approach through the development of a new environmental sensor.
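
    For a sense of the reconstruction step described above, the sketch below performs one linearized EIT difference-imaging update: it estimates the conductivity change in the imaged region from changes in boundary voltages via Tikhonov-regularized least squares. The Jacobian, dimensions, and regularization weight are placeholders; a real system would derive the Jacobian from a forward model of the electrode array.

```python
# Illustrative sketch only: one linearized EIT difference-imaging step.
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_pixels = 64, 256                    # measurement channels, image pixels
J = rng.standard_normal((n_meas, n_pixels))   # placeholder sensitivity (Jacobian) matrix
delta_v = rng.standard_normal(n_meas) * 0.01  # change in boundary voltages (v1 - v0)

lam = 1e-2  # Tikhonov regularization weight; trades resolution against noise
# Regularized least squares: delta_sigma = (J^T J + lam I)^-1 J^T delta_v
delta_sigma = np.linalg.solve(J.T @ J + lam * np.eye(n_pixels), J.T @ delta_v)
image = delta_sigma.reshape(16, 16)  # conductivity-change image of the bed region
```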

    Vision-Based Localization Algorithm Based on Landmark Matching, Triangulation, Reconstruction, and Comparison

    Many generic position-estimation algorithms are vulnerable to the ambiguity introduced by non-unique landmarks. Also, the available high-dimensional image data are not fully used when these techniques are extended to vision-based localization. This paper presents the landmark matching, triangulation, reconstruction, and comparison (LTRC) global localization algorithm, which is reasonably immune to ambiguous landmark matches. It extracts natural landmarks for the (rough) matching stage before generating a list of possible position estimates through triangulation. Reconstruction and comparison then rank the possible estimates. The LTRC algorithm has been implemented in an interpreted language on a robot equipped with a panoramic vision system. Empirical data show a remarkable improvement in accuracy compared with the established random sample consensus method. LTRC is also robust against inaccurate map data.
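
    The triangulation stage can be pictured with the minimal sketch below, which recovers a candidate robot position by intersecting the reverse bearing rays to two matched landmarks of known position. It assumes absolute (heading-compensated) bearings and is an illustration, not the authors' implementation.

```python
# Minimal two-landmark triangulation sketch (illustrative, not the LTRC code).
import math

def triangulate(lm_a, lm_b, bearing_a, bearing_b):
    """Robot position from absolute bearings to two known landmarks.

    The robot lies behind each landmark along the reverse bearing ray:
    P = A - t*(cos a, sin a) = B - s*(cos b, sin b); solve the 2x2 system.
    """
    ua = (math.cos(bearing_a), math.sin(bearing_a))
    ub = (math.cos(bearing_b), math.sin(bearing_b))
    # t*ua - s*ub = A - B  (two equations, unknowns t and s), by Cramer's rule.
    det = -ua[0] * ub[1] + ua[1] * ub[0]
    if abs(det) < 1e-9:
        raise ValueError("bearings are parallel; landmarks give no position fix")
    rx, ry = lm_a[0] - lm_b[0], lm_a[1] - lm_b[1]
    t = (-rx * ub[1] + ry * ub[0]) / det
    return lm_a[0] - t * ua[0], lm_a[1] - t * ua[1]

# Robot at (0, 0): landmark (3, 0) seen at bearing 0, landmark (0, 4) at pi/2.
print(triangulate((3.0, 0.0), (0.0, 4.0), 0.0, math.pi / 2))  # ~(0.0, 0.0)
```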

    Indoor wireless communications and applications

    Chapter 3 addresses challenges in radio link and system design in indoor scenarios. Given that most human activities take place in indoor environments, the need to support ubiquitous indoor data connectivity and location/tracking services has become even more important than in previous decades. The specific technical challenges addressed are (i) modelling complex indoor radio channels for effective antenna deployment, (ii) the potential of millimeter-wave (mm-wave) radios for supporting higher data rates, and (iii) feasible indoor localisation and tracking techniques; these are summarised in three dedicated sections of this chapter.
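
    As a small illustration of the kind of empirical indoor channel model item (i) refers to, the sketch below evaluates the log-distance path-loss model with log-normal shadowing. The exponent and shadowing deviation are typical textbook values for indoor links, not figures from the chapter.

```python
# Hedged illustration: log-distance path loss with shadow fading.
import math, random

def path_loss_db(d_m, d0_m=1.0, pl0_db=40.0, n=3.0, sigma_db=4.0):
    """PL(d) = PL(d0) + 10 n log10(d/d0) + X_sigma (log-normal shadowing)."""
    return pl0_db + 10 * n * math.log10(d_m / d0_m) + random.gauss(0, sigma_db)

tx_power_dbm = 20.0
for d in (1, 5, 10, 30):
    # Received power = transmit power minus path loss (antenna gains omitted).
    print(f"{d:>3} m: received ~{tx_power_dbm - path_loss_db(d):.1f} dBm")
```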

    Multimodal machine learning for intelligent mobility

    Scientific problems are solved by finding the optimal solution for a specific task. Some problems can be solved analytically, while others are solved using data-driven methods. The use of digital technologies to improve the transportation of people and goods, referred to as intelligent mobility, is one of the principal beneficiaries of data-driven solutions, and autonomous vehicles are at the heart of the developments that propel it. Due to the high dimensionality and complexity of real-world environments, data-driven solutions need to become commonplace in intelligent mobility, as it is near impossible to program decision-making logic for every eventuality manually. While recent data-driven developments such as deep learning allow machines to learn effectively from large datasets, applications of these techniques within safety-critical systems such as driverless cars remain scarce.

    Autonomous vehicles need to make context-driven decisions autonomously in the different environments in which they operate. The recent literature on driverless vehicle research is heavily focused on road and highway environments and has discounted pedestrianised areas and indoor environments. These unstructured environments tend to have more clutter and change rapidly over time. Therefore, for intelligent mobility to make a significant impact on human life, it is vital to extend its application beyond structured environments. To further advance intelligent mobility, researchers need to take cues from multiple sensor streams and multiple machine learning algorithms so that decisions can be robust and reliable; only then will machines be able to operate safely in unstructured and dynamic environments. Towards addressing these limitations, this thesis investigates data-driven solutions for crucial building blocks of intelligent mobility: multimodal sensor data fusion, machine learning, multimodal deep representation learning, and their application to intelligent mobility. This work demonstrates that mobile robots can use multimodal machine learning to derive a driving policy and therefore make autonomous decisions.

    To facilitate the autonomous decisions necessary for safe driving algorithms, we present algorithms for free-space detection and human activity recognition. Driving these decision-making algorithms are datasets collected throughout this study: the Loughborough London Autonomous Vehicle dataset and the Loughborough London Human Activity Recognition dataset. The datasets were collected using an autonomous platform designed and developed in-house as part of this research. The proposed framework for free-space detection is based on an active learning paradigm that leverages the relative uncertainty of multimodal sensor data streams (ultrasound and camera). It uses an online learning methodology to continuously update the learnt model whenever the vehicle encounters a new environment, enabling an autonomous vehicle to self-learn, evolve, and adapt to environments never encountered before. The results illustrate that this online learning mechanism is superior to one-off training of deep neural networks, which require large datasets to generalise to unfamiliar surroundings.

    The thesis takes the view that humans should be at the centre of any technological development related to artificial intelligence. This is imperative in intelligent mobility, where an autonomous vehicle should be aware of what humans are doing in its vicinity. Towards improving the robustness of human activity recognition, this thesis proposes a novel algorithm that classifies point-cloud data originating from light detection and ranging (LiDAR) sensors. The proposed algorithm leverages multimodality by using camera data to identify humans and segment the region of interest in the point-cloud data. The corresponding three-dimensional data are converted to a Fisher vector representation before being classified by a deep convolutional neural network. The proposed algorithm classifies the indoor activities performed by a human subject with an average precision of 90.3%, and compared with an alternative point-cloud classifier, PointNet [1], [2], the proposed framework outperformed it on all classes. The developed autonomous testbed for data collection and algorithm validation, together with the multimodal data-driven solutions for driverless cars, are the major contributions of this thesis. It is anticipated that these results and the testbed will have significant implications for the future of intelligent mobility by amplifying the development of intelligent driverless vehicles.
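
    The active-learning idea behind the free-space detector can be sketched as follows: frames where the camera and ultrasound models disagree, or where the fused prediction is uncertain, are queued for labelling and used to update the model online. The thresholds and the simple average fusion below are illustrative assumptions, not the thesis implementation.

```python
# Schematic sketch of uncertainty-driven sample selection for two modalities.
import numpy as np

def entropy(p):
    """Binary entropy (in nats) of P(free space); max is ln 2 ~ 0.693."""
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def select_for_labeling(p_cam, p_us, disagree_thr=0.4, entropy_thr=0.6):
    """Return indices of frames whose labels the online learner should request."""
    p_cam, p_us = np.asarray(p_cam), np.asarray(p_us)
    fused = 0.5 * (p_cam + p_us)             # simple average fusion
    disagreement = np.abs(p_cam - p_us)      # cross-modal conflict
    uncertain = entropy(fused) > entropy_thr # fused model unsure
    conflicted = disagreement > disagree_thr
    return np.nonzero(uncertain | conflicted)[0]

# Per-frame P(free space) from each modality for a batch of 5 frames.
idx = select_for_labeling([0.9, 0.55, 0.2, 0.5, 0.95], [0.85, 0.5, 0.9, 0.45, 0.9])
print(idx)  # frames to send for annotation and online model update
```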

    A Non-Rigid Map Fusion-Based RGB-Depth SLAM Method for Endoscopic Capsule Robots

    In the gastrointestinal (GI) tract endoscopy field, ingestible wireless capsule endoscopy is considered a minimally invasive novel diagnostic technology for inspecting the entire GI tract and diagnosing various diseases and pathologies. Since the development of this technology, medical device companies and many research groups have made significant progress toward turning such passive capsule endoscopes into robotic active capsule endoscopes that achieve almost all the functions of current active flexible endoscopes. However, robotic capsule endoscopy still faces challenges. One such challenge is the precise localization of these active devices in the 3D world, which is essential for a precise three-dimensional (3D) map of the inner organ. A reliable 3D map of the explored organ could assist doctors in making more intuitive and correct diagnoses. In this paper, we propose, to our knowledge for the first time in the literature, a visual simultaneous localization and mapping (SLAM) method developed specifically for endoscopic capsule robots. The proposed RGB-Depth SLAM method captures comprehensive, dense, globally consistent surfel-based maps of the inner organs explored by an endoscopic capsule robot in real time. This is achieved by dense frame-to-model camera tracking and windowed surfel-based fusion, coupled with frequent model refinement through non-rigid surface deformations.
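
    The windowed surfel-based fusion step can be pictured with the generic sketch below, which merges a new measurement into an existing surfel by confidence weight, as dense RGB-D SLAM systems of this family commonly do. The attribute set and weight cap are illustrative, not the paper's parameters.

```python
# Generic surfel-fusion bookkeeping sketch (not the paper's implementation).
import numpy as np

def fuse_surfel(surfel, meas, w_meas=1.0):
    """Merge a new measurement into an existing surfel by confidence weight."""
    w = surfel["weight"]
    total = w + w_meas
    for key in ("position", "normal", "color"):
        # Confidence-weighted running average of each surfel attribute.
        surfel[key] = (w * surfel[key] + w_meas * meas[key]) / total
    surfel["normal"] /= np.linalg.norm(surfel["normal"])  # keep unit length
    surfel["weight"] = min(total, 100.0)  # cap so the map can still deform
    return surfel

s = {"position": np.array([0.0, 0.0, 1.0]), "normal": np.array([0.0, 0.0, 1.0]),
     "color": np.array([120.0, 80.0, 70.0]), "weight": 4.0}
m = {"position": np.array([0.01, 0.0, 1.02]), "normal": np.array([0.05, 0.0, 1.0]),
     "color": np.array([118.0, 82.0, 71.0])}
print(fuse_surfel(s, m)["position"])
```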

    Use of Pattern Classification Algorithms to Interpret Passive and Active Data Streams from a Walking-Speed Robotic Sensor Platform

    In order to perform useful tasks for us, robots must be able to notice, recognize, and respond to objects and events in their environment. This requires the acquisition and synthesis of information from a variety of sensors. Here we investigate the performance of a number of sensor modalities in an unstructured outdoor environment, including the Microsoft Kinect, a thermal infrared camera, and a coffee-can radar. Special attention is given to acoustic echolocation measurements of approaching vehicles, where an acoustic parametric array propagates an audible signal to the oncoming target and the Kinect microphone array records the reflected backscattered signal. Although useful information about the target is hidden inside the noisy time-domain measurements, the dynamic wavelet fingerprint process (DWFP) is used to create a time-frequency representation of the data. A small-dimensional feature vector is then created for each measurement using an intelligent feature selection process, for use in statistical pattern classification routines. Using experimentally measured data from real vehicles at 50 m, this process correctly classifies vehicles into one of five classes with 94% accuracy. Fully three-dimensional simulations allow us to study the nonlinear beam propagation and interaction with real-world targets to improve classification results.
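
    A DWFP-style pipeline can be sketched as below: build a wavelet time-frequency image of the echo, binarize it at several magnitude levels into a "fingerprint", and collapse it into a small feature vector for a statistical classifier. The wavelet, scales, thresholds, and summary features here are assumptions, not the authors' tuned values.

```python
# Sketch of a wavelet-fingerprint feature pipeline under stated assumptions.
import numpy as np
import pywt  # PyWavelets

def wavelet_fingerprint(signal, scales=np.arange(1, 33), levels=(0.5, 0.7, 0.9)):
    coeffs, _ = pywt.cwt(signal, scales, "morl")   # time-frequency image
    mag = np.abs(coeffs) / np.abs(coeffs).max()    # normalize to [0, 1]
    # Stack binary slices at several magnitude levels: the "fingerprint".
    return np.stack([(mag > t).astype(np.uint8) for t in levels])

def feature_vector(fp):
    """Collapse the fingerprint into a few summary statistics per level."""
    return np.array([[layer.sum(), layer.sum(axis=1).argmax(),
                      layer.sum(axis=0).argmax()] for layer in fp]).ravel()

t = np.linspace(0, 1, 1024)
echo = np.sin(2 * np.pi * 60 * t) * np.exp(-((t - 0.3) / 0.05) ** 2)  # toy backscatter
fv = feature_vector(wavelet_fingerprint(echo))
print(fv)  # would feed a pattern classifier trained on labelled vehicle echoes
```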

    Robot Mapping and Navigation by Fusing Sensory Information


    Machining-based coverage path planning for automated structural inspection

    The automation of robotically delivered nondestructive evaluation inspection shares many aims with traditional manufacturing machining. This paper presents a new hardware and software system for automated thickness mapping of large-scale areas with multiple obstacles, employing computer-aided design (CAD)/computer-aided manufacturing (CAM)-inspired path planning to control a novel mobile robotic thickness-mapping inspection vehicle. A custom postprocessor provides the necessary translation from CAM numeric code through robotic kinematic control to combine and automate the overall process. The generalized steps to implement this approach on any mobile robotic platform are presented herein and applied, in this instance, to a novel thickness-mapping crawler. The inspection capabilities of the system were evaluated in an indoor mock-inspection scenario within a motion-tracking cell, providing quantitative figures for positional accuracy. Multiple thickness defects simulating corrosion features on a steel sample plate were combined with obstacles to be avoided during the inspection. A minimum thickness-mapping error of 0.21 mm and a mean path error of 4.41 mm were observed for a 2 m² carbon steel sample of 10 mm nominal thickness. This automated approach offers benefits in repeatability of area coverage, obstacle avoidance, and reduced path overlap, all of which directly lead to increased task efficiency and reduced inspection time for large structural assets.
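
    The CAM-inspired coverage planning can be pictured with the schematic sketch below: a boustrophedon (raster) sweep over a grid of the inspection area that skips cells occupied by obstacles. The grid size, step, and obstacle cells are invented for illustration.

```python
# Schematic boustrophedon coverage sketch (illustrative, not the paper's planner).
def raster_coverage(width, height, obstacles, step=1):
    """Return (x, y) waypoints of a raster sweep that avoids obstacle cells."""
    path = []
    for row, y in enumerate(range(0, height, step)):
        xs = list(range(0, width, step))
        if row % 2:            # alternate sweep direction on each pass
            xs.reverse()
        path.extend((x, y) for x in xs if (x, y) not in obstacles)
    return path

obstacles = {(2, 1), (3, 1), (2, 2)}  # cells the crawler must drive around
for waypoint in raster_coverage(5, 4, obstacles):
    print(waypoint)
```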