
    Conceptual spatial representations for indoor mobile robots

    We present an approach for creating conceptual representations of human-made indoor environments using mobile robots. The concepts refer to spatial and functional properties of typical indoor environments. Following findings in cognitive psychology, our model is composed of layers representing maps at different levels of abstraction. The complete system is integrated in a mobile robot endowed with laser and vision sensors for place and object recognition. The system also incorporates a linguistic framework that actively supports the map acquisition process, and which is used for situated dialogue. Finally, we discuss the capabilities of the integrated system.
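
    As a rough illustration of the layered representation described above, the sketch below models a map with metric, navigation, topological, and conceptual layers, plus a small helper for producing phrases usable in situated dialogue. All class names and fields are hypothetical and only indicate the kind of structure involved, not the authors' implementation.

```python
# Hypothetical sketch of a layered spatial representation: each layer
# abstracts the one below it (metric data -> navigation graph ->
# topological areas -> conceptual labels). Names are illustrative only.
from dataclasses import dataclass, field


@dataclass
class Place:
    area_id: int
    category: str                                # e.g. "corridor", "office"
    objects: list = field(default_factory=list)  # recognized objects, e.g. "printer"


@dataclass
class ConceptualMap:
    metric_layer: list        # raw laser scans or an occupancy grid
    navigation_layer: dict    # graph of reachable nodes {node_id: [neighbour ids]}
    topological_layer: dict   # grouping of nodes into areas {area_id: [node ids]}
    conceptual_layer: dict    # {area_id: Place} with functional labels

    def describe(self, area_id: int) -> str:
        """Produce a phrase usable in situated dialogue."""
        place = self.conceptual_layer[area_id]
        if place.objects:
            return f"the {place.category} with the {place.objects[0]}"
        return f"a {place.category}"
```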

    Semantic labeling of places using information extracted from laser and vision sensor data

    Indoor environments can typically be divided into places with different functionalities like corridors, kitchens, offices, or seminar rooms. The ability to learn such semantic categories from sensor data enables a mobile robot to extend the representation of the environment, facilitating interaction with humans. As an example, natural language terms like corridor or room can be used to communicate the position of the robot in a map in a more intuitive way. In this work, we first propose an approach based on supervised learning to classify the pose of a mobile robot into semantic classes. Our method uses AdaBoost to boost simple features extracted from range data and vision into a strong classifier. We present two main applications of this approach. Firstly, we show how our approach can be utilized by a moving robot for an online classification of the poses traversed along its path using a hidden Markov model. Secondly, we introduce an approach to learn topological maps from geometric maps by applying our semantic classification procedure in combination with a probabilistic relaxation procedure. We finally show how to apply associative Markov networks (AMNs) together with AdaBoost for classifying complete geometric maps. Experimental results obtained in simulation and with real robots demonstrate the effectiveness of our approach in various indoor environments.
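
    The pipeline described in the abstract can be illustrated compactly: boost a few simple range-scan features into a place classifier, then smooth the per-pose predictions along the trajectory with an HMM forward filter. The features, class set, and transition handling below are illustrative assumptions, not the paper's exact setup.

```python
# Illustrative sketch (not the authors' code): AdaBoost over simple laser
# features for place classification, plus HMM forward filtering of the
# per-pose class probabilities along the robot's path.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

CLASSES = ["corridor", "room", "doorway"]          # assumed label set


def scan_features(scan: np.ndarray) -> np.ndarray:
    """A few simple features of one range scan (illustrative choices)."""
    return np.array([
        scan.mean(),                     # average free space
        scan.std(),                      # irregularity of the boundary
        scan.max() - scan.min(),         # elongation of the place
        np.mean(np.abs(np.diff(scan))),  # jaggedness between beams
    ])


def train_place_classifier(scans, labels):
    X = np.array([scan_features(s) for s in scans])
    return AdaBoostClassifier(n_estimators=100).fit(X, labels)


def hmm_filter(per_pose_probs, transition, prior):
    """Online (forward) smoothing of the classifier outputs along the path."""
    belief, smoothed = prior.copy(), []
    for p in per_pose_probs:             # p: classifier probabilities at one pose
        belief = p * (transition.T @ belief)
        belief /= belief.sum()
        smoothed.append(CLASSES[int(belief.argmax())])
    return smoothed
```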

    Multi-robot team formation control in the GUARDIANS project

    Purpose - The GUARDIANS multi-robot team is to be deployed in a large warehouse in smoke. The team is to assist firefighters in searching the warehouse in the event or danger of a fire. The large dimensions of the environment, together with the development of smoke which drastically reduces visibility, represent major challenges for search and rescue operations. The GUARDIANS robots guide and accompany the firefighters on site whilst indicating possible obstacles and the locations of danger and maintaining communication links.
    Design/methodology/approach - In order to fulfil the aforementioned tasks the robots need to exhibit certain behaviours. Among the basic behaviours are the capabilities to stay together as a group, that is, to generate a formation and to navigate while keeping this formation. The control model used to generate these behaviours is based on the so-called social potential field framework, which we adapt to the specific tasks required for the GUARDIANS scenario. All tasks can be achieved without central control, and some of the behaviours can be performed without explicit communication between the robots.
    Findings - The GUARDIANS environment requires flexible formations of the robot team: the formation has to adapt itself to the circumstances. The application has thus forced us to redefine the concept of a formation. Using graph-theoretic terminology, a formation may be stretched out as a path or be compact as a star or wheel. We have implemented the developed behaviours in simulation environments as well as on real ERA-MOBI robots, commonly referred to as Erratics. We discuss advantages and shortcomings of our model, based on the simulations as well as on the implementation with a team of Erratics.
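
    A minimal sketch of the social-potential-field idea behind these formation behaviours follows: each robot feels an attractive force towards teammates that are too far away and a repulsive force from those that are too close. The gains and the desired inter-robot spacing are illustrative values, not GUARDIANS parameters.

```python
# Sketch of a social-potential-field formation behaviour: the resultant
# force on one robot pulls it towards distant teammates and pushes it
# away from teammates that are closer than the desired spacing.
import numpy as np

D_DES = 1.5      # desired inter-robot distance [m] (assumed)
K_ATT = 0.8      # attraction gain (assumed)
K_REP = 1.2      # repulsion gain (assumed)


def formation_force(own_pos: np.ndarray, teammate_pos: list) -> np.ndarray:
    """Resultant 2D force on one robot from all teammates."""
    force = np.zeros(2)
    for other in teammate_pos:
        delta = other - own_pos
        dist = np.linalg.norm(delta)
        if dist < 1e-6:
            continue
        direction = delta / dist
        if dist > D_DES:                     # too far: pull together
            force += K_ATT * (dist - D_DES) * direction
        else:                                # too close: push apart
            force -= K_REP * (D_DES - dist) * direction
    return force
```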

    Online Mapping-Based Navigation System for Wheeled Mobile Robot in Road Following and Roundabout

    A road mapping and feature extraction method for mobile robot navigation in road-following and roundabout environments is presented in this chapter. In this work, online mapping of the mobile robot, employing a sensor fusion technique, is used to extract road characteristics such as road curbs, road borders, and roundabouts, which are then used with a path planning algorithm to enable the robot to move from a given start position to a predetermined goal. The sensor fusion is performed using several sensors, namely a laser range finder, a camera, and odometry, which are combined on a new wheeled mobile robot prototype to determine the optimal path of the robot and localize it within its environment. The local maps are developed using image preprocessing and processing algorithms and an artificial threshold of LRF signal processing to recognize road environment parameters such as road curbs, width, and roundabouts. The path planning in the road environments is accomplished using a novel approach, the so-called Laser Simulator, to find the trajectory in the local maps developed by sensor fusion. Results show the capability of the wheeled mobile robot to effectively recognize the road environments, build a local map, and find the path in both road following and roundabout.
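
    The threshold-based laser processing mentioned above can be illustrated with a small sketch that flags range discontinuities as candidate curb edges and estimates the road width between them. The jump threshold and the width heuristic are assumptions for illustration, not the chapter's implementation.

```python
# Illustrative sketch: find candidate road-curb edges in one laser scan by
# thresholding jumps between consecutive range readings, then estimate the
# road width from the lateral spread of the detected edge points.
import numpy as np

JUMP_THRESHOLD = 0.30   # [m] range discontinuity treated as a curb edge (assumed)


def curb_edges(ranges: np.ndarray, angles: np.ndarray) -> np.ndarray:
    """Return (x, y) points where the scan jumps more than the threshold."""
    jumps = np.abs(np.diff(ranges)) > JUMP_THRESHOLD
    idx = np.where(jumps)[0]
    xs = ranges[idx] * np.cos(angles[idx])
    ys = ranges[idx] * np.sin(angles[idx])
    return np.stack([xs, ys], axis=1)


def road_width(edge_points: np.ndarray) -> float:
    """Distance between leftmost and rightmost curb candidates (lateral axis)."""
    if len(edge_points) < 2:
        return float("nan")
    return edge_points[:, 1].max() - edge_points[:, 1].min()
```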

    Neural Network Memory Architectures for Autonomous Robot Navigation

    This paper highlights the significance of including memory structures in neural networks when the latter are used to learn perception-action loops for autonomous robot navigation. Traditional navigation approaches rely on global maps of the environment to overcome cul-de-sacs and plan feasible motions. Yet, maintaining an accurate global map may be challenging in real-world settings. A possible way to mitigate this limitation is to use learning techniques that forgo hand-engineered map representations and infer appropriate control responses directly from sensed information. An important but unexplored aspect of such approaches is the effect of memory on their performance. This work is a first thorough study of memory structures for deep-neural-network-based robot navigation, and offers novel tools to train such networks from supervision and quantify their ability to generalize to unseen scenarios. We analyze the separation and generalization abilities of feedforward, long short-term memory, and differentiable neural computer networks. We introduce a new method to evaluate the generalization ability by estimating the VC-dimension of networks with a final linear readout layer. We validate that the VC estimates are good predictors of actual test performance. The reported method can be applied to deep learning problems beyond robotics
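
    The final-linear-readout argument can be made concrete: the VC dimension of a linear classifier over d features is d + 1, so the width of the readout layer gives a simple handle on capacity. The module below is an illustrative PyTorch sketch with arbitrary sizes, not one of the networks studied in the paper.

```python
# Sketch of a recurrent navigation policy with a final linear readout.
# The VC-dimension estimate uses only the readout dimensionality.
import torch
import torch.nn as nn


class RecurrentNavPolicy(nn.Module):
    def __init__(self, n_obs: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.LSTM(input_size=n_obs, hidden_size=hidden,
                               batch_first=True)
        self.readout = nn.Linear(hidden, n_actions)   # final linear layer

    def forward(self, obs_seq, state=None):
        features, state = self.encoder(obs_seq, state)
        return self.readout(features), state

    def vc_dimension_estimate(self) -> int:
        # VC dimension of a linear classifier over d features is d + 1.
        return self.readout.in_features + 1
```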

    Integrated cockpit for A-129

    Weight, size, and mission requirements for the A-129 mandated an integrated system approach for the crew/cockpit interface design. Instead of the usual multitude of cockpit controls, indicators, gauges, and lights, the primary crew interface is a single multifunction keyboard and one or more multifunction CRT display units. This cockpit design approach imposed unusual constraints upon the system architecture to overcome the inherent information access limitations of a data input/output window that was restricted by the available space. The conceptual approach and resulting design of the A-129 cockpit with the intent to enhance the development of cockpit standardization are described

    Symbolic Trajectory Description in Mobile Robotics


    Accumulator-free Hough Transform for Sequence Collinear Points

    The perception, localization, and navigation of its environment are essential for autonomous mobile robots and vehicles. For that reason, a 2D laser rangefinder sensor is popularly used in mobile robot applications to measure the distance from the robot to its surrounding objects. The measurement data generated by the sensor are transmitted to the controller, where the data are processed by one or more suitable algorithms in several steps to extract the desired information. The Universal Hough Transform (UHT) is one of the appropriate and popular algorithms to extract primitive geometry such as straight lines, which will later be used in further steps of data processing. However, the UHT has high computational complexity and requires the so-called accumulator array, which makes it less suitable for real-time applications where high-speed, low-complexity computation is demanded. In this study, an Accumulator-free Hough Transform (AfHT) is proposed to reduce the computational complexity and eliminate the need for the accumulator array. The proposed algorithm is validated using measurement data from a 2D laser scanner and compared to the standard Hough Transform. The extracted values of AfHT show good agreement with those of UHT, but with a significant reduction in the complexity of the computation and the need for computer memory.
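
    The general idea of avoiding a voting accumulator for points that arrive in scan order can be sketched as follows: compute the normal-form line parameters (rho, theta) directly from consecutive point pairs and group runs whose parameters agree. This only illustrates the principle; the tolerances and the grouping rule are assumptions, not the AfHT algorithm itself.

```python
# Illustrative sketch: line extraction from sequentially ordered scan points
# without a global voting accumulator, by grouping consecutive point pairs
# whose (rho, theta) parameters are consistent.
import math

RHO_TOL = 0.05        # [m]  (assumed tolerance)
THETA_TOL = 0.05      # [rad] (assumed tolerance)


def line_params(p, q):
    """Normal form (rho, theta) of the line through two points p and q."""
    theta = math.atan2(p[0] - q[0], q[1] - p[1])
    rho = p[0] * math.cos(theta) + p[1] * math.sin(theta)
    return rho, theta


def segment_lines(points):
    """Group consecutive points whose pairwise line parameters agree."""
    segments, current, ref = [], [points[0]], None
    for p, q in zip(points, points[1:]):
        rho, theta = line_params(p, q)
        if ref is None or (abs(rho - ref[0]) < RHO_TOL
                           and abs(theta - ref[1]) < THETA_TOL):
            current.append(q)
            ref = (rho, theta) if ref is None else ref
        else:
            if len(current) > 2:
                segments.append(current)
            current, ref = [p, q], (rho, theta)
    if len(current) > 2:
        segments.append(current)
    return segments
```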