
    Remote Teleoperated and Autonomous Mobile Security Robot Development in Ship Environment

    We propose a wireless remote teleoperated and autonomous mobile security robot based on a multisensor system to monitor the ship/cabin environment. This allows the pilots in charge of monitoring to be away from the scene while feeling as if they were on site, monitoring and responding to any potential safety problems. The robot can also serve as a supplementary device for cabin safety crew members who are too busy or too tired to respond properly to a crisis, making a single crew member on duty in the cabin a possible option. When the robot detects something unusual in the cabin, it notifies the pilot, who can then teleoperate the robot to respond as needed. As a result, a cabin without any crew members on duty can be achieved with this type of robot/system.

    A Grammatical Approach to the Modeling of an Autonomous Robot

    Virtual Worlds Generator is a grammatical model proposed to define virtual worlds. It integrates the diversity of sensors and interaction devices, multimodality, and a virtual simulation system. Its grammar allows the definition and abstraction of the scenes of the virtual world as symbol strings, independently of the hardware used to represent the world or to interact with it. A case study is presented to explain how the proposed model can formalize a robot navigation system with multimodal perception and a hybrid control scheme. The result is an instance of the model grammar that implements the robotic system and is independent of the sensing devices used for perception and interaction. In conclusion, the Virtual Worlds Generator adds value to the simulation of virtual worlds, since the definition can be done formally and independently of the peculiarities of the supporting devices.

    Navigation and Control of Automated Guided Vehicle using Fuzzy Inference System and Neural Network Technique

    Automatic motion planning and navigation is the primary task of an Automated Guided Vehicle (AGV) or mobile robot. All such navigation systems consist of a data collection system, a decision-making system, and a hardware control system. Artificial-intelligence-based decision-making systems have become increasingly successful because they can handle large, complex calculations and perform well in unpredictable and imprecise environments. This research develops Fuzzy Logic and Neural Network based implementations for the navigation of an AGV, using the heading angle and obstacle distances as inputs to generate the velocity and steering angle as outputs. Fuzzy Inference Systems with Gaussian, Triangular, and Trapezoidal membership functions, as well as a feedforward backpropagation neural network, were developed, modelled, and simulated in MATLAB. The research evaluates these four decision-making systems and compares their performance. The hardware control for an AGV should be robust and precise; for practical implementation, a prototype that operates via DC servo motors and a gear system was constructed and installed on a commercial vehicle.
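The kind of fuzzy inference step this abstract describes can be sketched roughly as follows. The membership-function shapes, the two rules, and all numeric constants here are illustrative assumptions, not the parameters from the study (which was built in MATLAB):

```python
# Minimal sketch of one Mamdani-style fuzzy inference step for AGV
# steering. Membership functions, rules, and constants are
# illustrative assumptions, not the values used in the study.

def tri(x, a, b, c):
    """Triangular membership: rises from a, peaks at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def steering_angle(obstacle_dist_m, heading_err_deg):
    """Fuzzify the two inputs, fire two example rules, and defuzzify
    by a weighted average of singleton consequents."""
    near = tri(obstacle_dist_m, 0.0, 0.0, 1.0)   # obstacle is close
    clear = tri(obstacle_dist_m, 0.5, 2.0, 2.0)  # path is clear
    # Rule 1: IF obstacle NEAR THEN turn hard away (40 degrees)
    # Rule 2: IF path CLEAR    THEN steer toward the goal heading
    w1, out1 = near, 40.0
    w2, out2 = clear, heading_err_deg
    return (w1 * out1 + w2 * out2) / (w1 + w2 + 1e-9)

print(round(steering_angle(0.2, 5.0), 1))  # obstacle at 0.2 m -> 40.0
print(round(steering_angle(1.5, 5.0), 1))  # clear path -> 5.0
```

A real controller would use several membership sets per input and a full rule table; the weighted-average defuzzification shown is just the simplest common choice.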

    Vehicle recognition and tracking using a generic multi-sensor and multi-algorithm fusion approach

    This paper tackles the problem of improving the robustness of vehicle detection for Adaptive Cruise Control (ACC) applications. Our approach is based on multi-sensor, multi-algorithm data fusion for vehicle detection and recognition. The architecture combines two sensors: a frontal camera and a laser scanner. The improvement in robustness stems from two aspects. First, we addressed vision-based detection by developing an original approach based on fine gradient analysis, enhanced with a genetic AdaBoost-based algorithm for vehicle recognition. Then, we use the theory of evidence as a fusion framework to combine the confidence levels delivered by the algorithms in order to improve the 'vehicle versus non-vehicle' classification. The final architecture of the system is very modular, generic, and flexible in that it could be used for other detection applications or with other sensors or algorithms providing the same outputs. The system was successfully implemented on a prototype vehicle and was evaluated under real conditions over various multisensor databases and test scenarios, demonstrating very good performance.
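On a binary frame of discernment {V = vehicle, N = non-vehicle}, the evidence-theory fusion step mentioned above can be sketched with Dempster's rule of combination. The mass assignments given to the camera and laser classifiers below are illustrative assumptions, not values from the paper:

```python
# Sketch of Dempster's rule of combination for the binary
# 'vehicle versus non-vehicle' decision. Mass values assigned to
# each sensor/algorithm are illustrative assumptions.

def combine(m1, m2):
    """Dempster's rule for masses over V, N, and VN (total ignorance)."""
    def intersect(a, b):
        if a == "VN":
            return b
        if b == "VN":
            return a
        return a if a == b else None  # None = empty set (conflict)

    fused = {"V": 0.0, "N": 0.0, "VN": 0.0}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            s = intersect(a, b)
            if s is None:
                conflict += wa * wb
            else:
                fused[s] += wa * wb
    # Normalize out the conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in fused.items()}

camera = {"V": 0.6, "N": 0.1, "VN": 0.3}  # vision algorithm's masses
laser = {"V": 0.5, "N": 0.2, "VN": 0.3}   # laser algorithm's masses
fused = combine(camera, laser)
print(round(fused["V"], 3))  # agreement raises belief in 'vehicle': 0.759
```

When the two sources agree, the combined mass on V exceeds either source's mass alone, which is the mechanism behind the robustness gain the abstract reports.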

    GRASP News Volume 9, Number 1

    A report of the General Robotics and Active Sensory Perception (GRASP) Laboratory

    An annotated bibliography of multisensor integration

    In this paper we give an annotated bibliography of the multisensor integration literature.

    GRASP News: Volume 9, Number 1

    The past year at the GRASP Lab has been an exciting and productive period. As always, innovation and technical advancement arising from past research has led to unexpected questions and fertile areas for new research. New robots, new mobile platforms, new sensors and cameras, and new personnel have all contributed to the breathtaking pace of change. Perhaps the most significant change is the trend towards multi-disciplinary projects, most notably the multi-agent project (see inside for details on this and all the other new and ongoing projects). This issue of GRASP News covers the developments of 1992 and the first quarter of 1993.

    Intelligent Robotic Perception Systems

    Robotic perception relates to many applications in robotics where sensory data and artificial intelligence/machine learning (AI/ML) techniques are involved. Examples of such applications are object detection, environment representation, scene understanding, human/pedestrian detection, activity recognition, semantic place classification, and object modeling, among others. Robotic perception, in the scope of this chapter, encompasses the ML algorithms and techniques that empower robots to learn from sensory data and, based on the learned models, to react and make decisions accordingly. The recent developments in machine learning, namely deep-learning approaches, are evident, and consequently robotic perception systems are evolving in a way that makes new applications and tasks a reality. Recent advances in human-robot interaction, complex robotic tasks, intelligent reasoning, and decision-making are, to some extent, the result of the notable evolution and success of ML algorithms. This chapter covers recent and emerging topics and use-cases related to intelligent perception systems in robotics.

    Ultrasonic sensor configuration for mobile robot navigation systems to assist visually impaired person

    The ultrasonic sensor is one of the electronic components used in designing mobile robot navigation systems to assist visually impaired persons. However, no guideline or algorithm has been established so far to ease the selection of an ultrasonic sensor model and the determination of the optimum number and layout of the sensors. The purpose of this study is to obtain an algorithm that can be used as a guideline for selecting an appropriate ultrasonic component model and for determining, through theoretical calculations, the optimum number and layout of the chosen sensors for a mobile robot navigation system with 180° obstacle detection. All theoretical values obtained are compared with real-time data from an actual ultrasonic sensor placed on an experimental platform. This setup is run with different numbers and placements of the selected ultrasonic sensor, the HC-SR04, and the results are compared with the theoretical values for validation. The relevant equations are then used to calculate the number and layout of sensors for another ultrasonic sensor, the MA40B8, which was originally used for a 360° obstacle detection system, to show the correctness of the equations used in this study. The equations are shown to be valid both theoretically and experimentally, and the algorithm can be used to decide the optimum number and layout of ultrasonic sensors for 180° obstacle detection.
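A back-of-envelope version of the sensor-count question this study formalizes is the following: how many sensors, each with a given beam angle, are needed to tile a target arc with no gaps? The beam angles used below are nominal datasheet-style figures and are assumptions here, not values taken from the study:

```python
import math

# Rough sensor-count estimate: tile a coverage arc with beams of a
# given angular width. Beam angles are assumed nominal values; a real
# design must account for beam overlap, range, and mounting geometry.

def sensors_needed(coverage_deg, beam_deg):
    """Minimum number of sensors so adjacent beams tile the arc."""
    return math.ceil(coverage_deg / beam_deg)

print(sensors_needed(180, 15))  # e.g. a ~15 degree beam -> 12 sensors
print(sensors_needed(360, 15))  # full-circle variant -> 24 sensors
```

The study's contribution is precisely the part this sketch glosses over: validating such theoretical counts and layouts against real-time measurements from the physical sensors.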

    Sensor Augmented Virtual Reality Based Teleoperation Using Mixed Autonomy

    A multimodal teleoperation interface is introduced, featuring an integrated virtual reality (VR) based simulation augmented by sensors and image processing capabilities onboard the remotely operated vehicle. The proposed virtual reality interface fuses an existing VR model with live video feed and prediction states, thereby creating a multimodal control interface. VR addresses the typical limitations of video-based teleoperation caused by signal lag and a limited field of view, allowing the operator to navigate in a continuous fashion. The vehicle incorporates an onboard computer and a stereo vision system to facilitate obstacle detection. A vehicle adaptation system with a priori risk maps and a real-state tracking system enables temporary autonomous operation of the vehicle for local navigation around obstacles and automatic re-establishment of the vehicle's teleoperated state. The system provides real-time updates of the virtual environment based on anomalies encountered by the vehicle. The VR based multimodal teleoperation interface is expected to be more adaptable and intuitive when compared with other interfaces.