
    Model identification and model analysis in robot training

    Robot training is a fast and efficient method of obtaining robot control code. Many current machine learning paradigms used for this purpose, however, result in opaque models that are difficult, if not impossible, to analyse, which is an impediment in safety-critical applications or in scenarios where humans and robots share a workspace. In experiments with a Magellan Pro mobile robot we demonstrate that it is possible to obtain transparent models of sensor-motor couplings that are amenable to subsequent analysis, and show how such analysis can be used to refine and tune the models post hoc.
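
    The abstract leaves the model class unspecified; as a minimal sketch of what a transparent, analysable sensor-motor model can look like, the snippet below fits a low-order polynomial from (synthetic, hypothetical) sonar readings to a steering command and ranks the learnt couplings for post-hoc inspection. All data and names here are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Hypothetical training data: sonar readings (m) in four sectors and the
# steering command (rad/s) recorded while the robot was driven.
sonar = np.random.uniform(0.2, 4.0, size=(500, 4))
steer = 0.8 / sonar[:, 0] - 0.8 / sonar[:, 3] + 0.05 * np.random.randn(500)

# A low-order polynomial model stays transparent: every term is a named,
# inspectable sensor-motor coupling.
poly = PolynomialFeatures(degree=2, include_bias=False)
X = poly.fit_transform(sonar)
model = LinearRegression().fit(X, steer)

# Post-hoc analysis: rank terms by coefficient magnitude to see which
# couplings dominate, then prune or re-tune the small ones.
for name, coef in sorted(zip(poly.get_feature_names_out(), model.coef_),
                         key=lambda t: -abs(t[1]))[:5]:
    print(f"{name:12s} {coef:+.3f}")
```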

    Metric State Space Reinforcement Learning for a Vision-Capable Mobile Robot

    We address the problem of autonomously learning controllers for vision-capable mobile robots. We extend McCallum's (1995) Nearest-Sequence Memory algorithm to allow for general metrics over state-action trajectories. We demonstrate the feasibility of our approach by successfully running our algorithm on a real mobile robot. The algorithm is novel and unique in that it (a) explores the environment and learns directly on a mobile robot without using a hand-made computer model as an intermediate step, (b) does not require manual discretization of the sensor input space, (c) works in piecewise continuous perceptual spaces, and (d) copes with partial observability. Together this allows learning from much less experience compared to previous methods.
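
    The paper's actual metric and memory structure are not reproduced here; the sketch below is an assumed, simplified rendering of nearest-sequence value estimation with an exponentially decayed suffix metric. The data structures, the metric, and all constants are hypothetical.

```python
import numpy as np

def suffix_distance(hist_a, hist_b, decay=0.8):
    """Assumed metric over histories of (observation, action) steps:
    recent steps count more, mismatched actions add a fixed penalty."""
    d, w = 0.0, 1.0
    for (obs_a, act_a), (obs_b, act_b) in zip(reversed(hist_a), reversed(hist_b)):
        d += w * (np.linalg.norm(np.asarray(obs_a) - np.asarray(obs_b))
                  + (0.0 if act_a == act_b else 1.0))
        w *= decay
    return d

def q_estimate(memory, returns, current_hist, action, k=3):
    """Average the returns that followed the k stored suffixes nearest to
    the current history and ending with `action`.
    memory[i] is a stored history suffix; returns[i] is the return
    observed after it."""
    cands = [(suffix_distance(current_hist, mem[:-1]), ret)
             for mem, ret in zip(memory, returns) if mem[-1][1] == action]
    cands.sort(key=lambda t: t[0])
    return float(np.mean([r for _, r in cands[:k]])) if cands else 0.0
```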

    THE INFLUENCE OF MOBILE ROBOT CONTROL ALGORITHMS ON THE OBSTACLE AVOIDANCE PROCESS

    This article presents control algorithms for a mobile robot, based on an artificial neural network and on fuzzy logic. The distance to obstacles was measured with an ultrasonic sensor. The equipment used and the signal-processing algorithms are characterized. Tests were carried out on a wheeled mobile robot, and the influence of the control algorithms on the obstacle avoidance process was analysed.
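
    The article does not reproduce its membership functions or rule base; below is a minimal sketch of a fuzzy obstacle-avoidance controller of the kind described, driven by a single ultrasonic range reading. All memberships, rules, and constants are assumptions for illustration.

```python
def tri(x, a, b, c):
    """Triangular membership function; a == b or b == c gives a shoulder."""
    if x < a or x > c:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x > b:
        return (c - x) / (c - b)
    return 1.0

def fuzzy_avoid(distance_m):
    """Map one ultrasonic range reading to a (speed, turn_rate) command."""
    d = min(max(distance_m, 0.0), 2.0)       # clamp to the defined universe
    near = tri(d, 0.0, 0.0, 0.5)
    mid  = tri(d, 0.3, 0.7, 1.2)
    far  = tri(d, 0.9, 2.0, 2.0)
    # Rule base: near -> stop and turn hard; mid -> slow and steer away;
    # far -> drive on. Defuzzified by a weighted average of rule outputs.
    w = near + mid + far
    speed = (near * 0.0 + mid * 0.3 + far * 0.6) / w
    turn  = (near * 1.5 + mid * 0.5 + far * 0.0) / w
    return speed, turn
```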

    A modified model for the Lobula Giant Movement Detector and its FPGA implementation

    The Lobula Giant Movement Detector (LGMD) is a wide-field visual neuron located in the lobula layer of the locust nervous system. The LGMD increases its firing rate in response to both the velocity of an approaching object and its proximity; it responds to looming stimuli very quickly and triggers avoidance reactions, and it has been successfully applied in visual collision-avoidance systems for vehicles and robots. This paper introduces a modified neural model for the LGMD that provides additional depth-direction information for the movement. The proposed model retains the simplicity of the previous model, adding only a few new cells. It has been simplified and implemented on a Field Programmable Gate Array (FPGA), taking advantage of the inherent parallelism exhibited by the LGMD, and tested on real-time video streams. Experimental results demonstrate its effectiveness as a fast motion detector.
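
    As an illustration of the kind of model being modified, here is a minimal sketch of a classic LGMD-style looming detector on a grey-scale video stream. The layer structure follows the usual excitation/inhibition scheme; all constants, and the NumPy/SciPy rendering, are assumptions rather than the paper's FPGA implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

class LGMD:
    """Sketch of the classic P/I/S layer structure; constants are assumed."""
    def __init__(self, shape, w_inhib=0.6, gain=50.0, threshold=0.65):
        self.prev_frame = np.zeros(shape)
        self.prev_exc = np.zeros(shape)
        self.w_inhib, self.gain, self.threshold = w_inhib, gain, threshold

    def step(self, frame):
        frame = frame.astype(float) / 255.0
        exc = np.abs(frame - self.prev_frame)        # P layer: luminance change
        inh = uniform_filter(self.prev_exc, size=3)  # I layer: delayed, spread laterally
        s = np.maximum(exc - self.w_inhib * inh, 0.0)  # S layer: excitation minus inhibition
        self.prev_frame, self.prev_exc = frame, exc
        k = s.mean()                                 # membrane potential
        kf = 1.0 / (1.0 + np.exp(-self.gain * k))    # sigmoid activation
        return kf, kf > self.threshold               # fires as an object looms
```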

    Non-invasive classification of gas–liquid two-phase horizontal flow regimes using an ultrasonic Doppler sensor and a neural network

    The identification of flow pattern is a key issue in multiphase flow, which is encountered in the petrochemical industry, and gas-liquid flow regimes are difficult to identify objectively. This paper presents the feasibility of a clamp-on instrument for objective flow regime classification of two-phase flow using an ultrasonic Doppler sensor and an artificial neural network; the instrument records and processes the ultrasonic signals reflected from the two-phase flow. Experimental data were obtained on a horizontal test rig with a total pipe length of 21 m and an internal diameter of 5.08 cm, carrying air-water two-phase flow under slug, elongated bubble, stratified-wavy and stratified flow regimes. Multilayer perceptron neural networks (MLPNNs) are used to develop the classification model. The classifier requires input features that are representative of the signals; these are extracted by applying both power spectral density (PSD) and discrete wavelet transform (DWT) methods to the flow signals. A 1-of-C coding scheme was adopted to classify the extracted features into one of the four flow regime categories. To improve the performance of the flow regime classifier, a second-level neural network was incorporated that takes the outputs of the first-level networks as its input features; the combined model achieved a higher accuracy than the single networks. Classification accuracies are evaluated for both the PSD and the DWT features. With PSD features, the classifier misclassified 3 of the 24 test datasets, an accuracy of 87.5%; with DWT features, the network misclassified only one data point, classifying the flow patterns with an accuracy of 95.8%. This approach demonstrates that a clamp-on ultrasound sensor for flow regime classification would be feasible in industrial practice, and it is considerably more promising than other techniques because the sensor is non-invasive and non-radioactive.
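
    A minimal sketch of the described feature-extraction and classification chain, assuming SciPy, PyWavelets, and scikit-learn in place of whatever tooling the authors used. The sampling rate, window, wavelet, and layer sizes are assumptions; in the paper's cascade, a second-level MLP would additionally take the first-level outputs as its input features.

```python
import numpy as np
import pywt
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier

def psd_features(signal, fs=1_000_000, n_bands=16):
    """Coarse log-spectral envelope from Welch PSD (fs is an assumption)."""
    f, pxx = welch(signal, fs=fs, nperseg=1024)
    bands = np.array_split(pxx, n_bands)
    return np.log([b.mean() + 1e-12 for b in bands])

def dwt_features(signal, wavelet="db4", level=5):
    """Log-energy of each DWT sub-band (wavelet and depth are assumptions)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.log(np.mean(c ** 2) + 1e-12) for c in coeffs])

def train(X_train, y_train, feature_fn=dwt_features):
    """X_train: one Doppler record per row; y_train: flow regime labels
    (slug, elongated bubble, stratified-wavy, stratified)."""
    feats = np.array([feature_fn(x) for x in X_train])
    clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000)
    return clf.fit(feats, y_train)
```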

    Intelligent Navigation for a Solar Powered Unmanned Underwater Vehicle

    In this paper, an intelligent navigation system is proposed for an unmanned underwater vehicle powered by renewable energy and designed for shallow-water inspection in missions of long duration. The system is composed of an underwater vehicle that tows a surface vehicle. The surface vehicle is a small boat with photovoltaic panels, a methanol fuel cell and communication equipment, which provides energy and communication to the underwater vehicle. The underwater vehicle has sensors to monitor the underwater environment, such as sidescan sonar and a video camera in a flexible configuration, and sensors to measure the physical and chemical parameters of water quality along predefined paths over long distances. The underwater vehicle implements a biologically inspired neural architecture for autonomous intelligent navigation. Navigation is carried out by integrating a kinematic adaptive neuro-controller for trajectory tracking with an obstacle avoidance adaptive neuro-controller. The autonomous underwater vehicle is capable of operating during long periods of observation and monitoring, which makes it a good tool for observing large areas of sea, since the contribution of renewable energy allows it to operate for long periods of time. It correlates all sensor data with time and geodetic position. The vehicle has been used for monitoring the Mar Menor lagoon. Supported by the Coastal Monitoring System for the Mar Menor (CMS-463.01.08_CLUSTER) project funded by the Regional Government of Murcia, by the SICUVA project (Control and Navigation System for AUV Oceanographic Monitoring Missions, REF: 15357/PI/10) funded by the Seneca Foundation of the Regional Government of Murcia, and by the DIVISAMOS project (Design of an Autonomous Underwater Vehicle for Inspections and Oceanographic Missions, UPCT: DPI-2009-14744-C03-02) funded by the Spanish Ministry of Science and Innovation.
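
    The paper does not detail how the two neuro-controllers are integrated; the sketch below shows one plausible, purely illustrative blending rule in which avoidance takes over as the nearest sonar return closes in. The function, parameters, and blending rule are hypothetical, not the paper's adaptive architecture.

```python
import numpy as np

def blend_commands(track_cmd, avoid_cmd, min_range_m, safe_m=10.0):
    """Weight avoidance more heavily as the nearest obstacle closes in.

    track_cmd, avoid_cmd: (surge, yaw_rate) pairs from the trajectory-
    tracking and obstacle-avoidance controllers respectively.
    min_range_m: range to the nearest sonar return.
    """
    alpha = np.clip(min_range_m / safe_m, 0.0, 1.0)  # 1.0 = clear water
    return tuple(alpha * np.asarray(track_cmd)
                 + (1.0 - alpha) * np.asarray(avoid_cmd))
```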

    Multimodal machine learning for intelligent mobility

    Scientific problems are solved by finding the optimal solution for a specific task. Some problems can be solved analytically, while others are solved using data-driven methods. The use of digital technologies to improve the transportation of people and goods, referred to as intelligent mobility, is one of the principal beneficiaries of data-driven solutions, and autonomous vehicles are at the heart of the developments that propel it. Due to the high dimensionality and complexity of real-world environments, data-driven solutions need to become commonplace in intelligent mobility, as it is near impossible to manually program decision-making logic for every eventuality. While recent data-driven developments such as deep learning allow machines to learn effectively from large datasets, the application of such techniques within safety-critical systems such as driverless cars remains scarce.

    Autonomous vehicles need to make context-driven decisions autonomously in the different environments in which they operate. The recent literature on driverless-vehicle research is heavily focused on road and highway environments and has discounted pedestrianized areas and indoor environments. These unstructured environments tend to have more clutter and change rapidly over time; for intelligent mobility to make a significant impact on human life, it is therefore vital to extend its application beyond structured environments. To further advance intelligent mobility, researchers need to take cues from multiple sensor streams and multiple machine learning algorithms so that decisions can be robust and reliable; only then will machines be able to operate safely in unstructured and dynamic environments. Towards addressing these limitations, this thesis investigates data-driven solutions for crucial building blocks of intelligent mobility: multimodal sensor-data fusion, machine learning, and multimodal deep representation learning, together with their application to intelligent mobility. This work demonstrates that mobile robots can use multimodal machine learning to derive a driving policy and therefore make autonomous decisions.

    To facilitate the autonomous decisions needed for safe driving algorithms, we present algorithms for free-space detection and human activity recognition. These decision-making algorithms are driven by datasets collected throughout this study: the Loughborough London Autonomous Vehicle dataset and the Loughborough London Human Activity Recognition dataset, gathered using an autonomous platform designed and developed in-house as part of this research. The proposed framework for free-space detection is based on an active learning paradigm that leverages the relative uncertainty of multimodal sensor-data streams (ultrasound and camera) and uses an online learning methodology to continuously update the learnt model whenever the vehicle experiences new environments. The proposed free-space detection algorithm enables an autonomous vehicle to self-learn, evolve, and adapt to environments never encountered before, and the results illustrate that the online learning mechanism is superior to one-off training of deep neural networks, which require large datasets to generalize to unfamiliar surroundings.

    The thesis takes the view that humans should be at the centre of any technological development related to artificial intelligence; within intelligent mobility it is imperative that an autonomous vehicle be aware of what humans are doing in its vicinity. Towards improving the robustness of human activity recognition, this thesis proposes a novel algorithm that classifies point-cloud data originating from Light Detection and Ranging sensors. The proposed algorithm leverages multimodality by using camera data to identify humans and segment the region of interest in the point-cloud data; the corresponding 3-dimensional data are converted to a Fisher Vector representation before being classified by a deep convolutional neural network. The proposed algorithm classifies the indoor activities performed by a human subject with an average precision of 90.3% and, compared to an alternative point-cloud classifier, PointNet [1], [2], it outperformed on all classes. The developed autonomous testbed for data collection and algorithm validation, together with the multimodal data-driven solutions for driverless cars, are the major contributions of this thesis. It is anticipated that these results and the testbed will have significant implications for the future of intelligent mobility by accelerating the development of intelligent driverless vehicles.
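
    As one concrete reading of the activity-recognition pipeline, the sketch below encodes a segmented human point cloud as a simplified Fisher Vector over a GMM vocabulary (mean-gradient terms only, a common simplification). The component count, library choices, and all names are assumptions, not the thesis's implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(points, gmm):
    """Encode an (N, 3) point cloud as a Fisher Vector using only the
    gradients with respect to the GMM means."""
    q = gmm.predict_proba(points)                    # (N, K) soft assignments
    parts = []
    for k in range(gmm.n_components):
        diff = (points - gmm.means_[k]) / np.sqrt(gmm.covariances_[k])
        g = (q[:, k:k + 1] * diff).sum(axis=0)
        parts.append(g / (points.shape[0] * np.sqrt(gmm.weights_[k])))
    fv = np.concatenate(parts)
    return np.sign(fv) * np.sqrt(np.abs(fv))         # power normalisation

# The vocabulary is fitted once on training clouds (hypothetical data);
# each segmented subject is then encoded and passed to the CNN classifier.
gmm = GaussianMixture(n_components=16, covariance_type="diag", random_state=0)
# gmm.fit(np.vstack(training_clouds))
# fv = fisher_vector(segmented_cloud, gmm)
```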