
    Design and Implementation of an Object-Detecting Mobile Robot System Based on Raspberry PI B+

    The Design of an Object-Detecting Mobile Robot System Based on Raspberry PI B+ describes an intelligent robot that can follow an object without needing a connection to an external device, because the on-board computer core can process the data directly. The robot follows objects with the aid of a camera integrated with a Raspberry PI module, and uses an Arduino UNO as the data processor for the drive and sensor sections; the sensors used are a gyroscope and an accelerometer, which serve to stabilize the robot. The working principle of the device is that the camera mounted on the Raspberry PI module captures objects that have been programmed; the data from the camera is then transmitted to the Arduino module, which drives the servos forward so the robot can reach the object. When the camera detects an object, the transmitter and receiver on the Raspberry PI and Arduino each transmit data with an output of 5 V. Keywords: Raspberry PI B+, Arduino UNO, Raspberry PI Camera, Gyro and Accelerometer Sensor, Servo, Mobile Robot
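The detect-transmit-steer pipeline described in this abstract can be sketched as a small command protocol between the Pi and the Arduino. The command letters, frame format, image width, and dead-zone threshold below are illustrative assumptions, not details taken from the paper:

```python
# Hypothetical sketch of the Pi -> Arduino link: map a detected object's
# horizontal position to a one-character drive command and frame it for
# a serial link. All names and constants here are illustrative.

def steering_command(object_cx: int, frame_width: int = 320, dead_zone: int = 20) -> str:
    """Map the detected object's horizontal centre to a drive command."""
    offset = object_cx - frame_width // 2
    if abs(offset) <= dead_zone:
        return "F"                        # object centred: drive forward
    return "L" if offset < 0 else "R"     # steer toward the object

def encode_frame(cmd: str) -> bytes:
    """Frame a one-character command as '<' cmd '>' for the serial link."""
    return b"<" + cmd.encode("ascii") + b">"

def decode_frame(frame: bytes) -> str:
    """Arduino-side decoding: strip the framing bytes."""
    assert frame.startswith(b"<") and frame.endswith(b">")
    return frame[1:-1].decode("ascii")
```

On real hardware the framed bytes would be written to the Arduino over a serial port (e.g. with pySerial); the framing characters let the receiver resynchronise after a dropped byte.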

    Visual Localization and Mapping in Dynamic and Changing Environments

    The real-world deployment of fully autonomous mobile robots depends on a robust SLAM (Simultaneous Localization and Mapping) system, capable of handling dynamic environments, where objects are moving in front of the robot, and changing environments, where objects are moved or replaced after the robot has already mapped the scene. This paper presents Changing-SLAM, a method for robust Visual SLAM in both dynamic and changing environments. This is achieved by using a Bayesian filter combined with a long-term data association algorithm. It also employs an efficient algorithm for dynamic keypoint filtering based on object detection that correctly identifies features inside the bounding box that are not dynamic, preventing a depletion of features that could cause lost tracks. Furthermore, a new RGB-D dataset, called the PUC-USP dataset, was developed especially for the evaluation of changing environments at the object level. Six sequences were created using a mobile robot, an RGB-D camera and a motion capture system. The sequences were designed to capture different scenarios that could lead to a tracking failure or a map corruption. To the best of our knowledge, Changing-SLAM is the first Visual SLAM system that is robust to both dynamic and changing environments, assuming neither a given camera pose nor a known map, while also being able to operate in real time. The proposed method was evaluated using benchmark datasets and compared with other state-of-the-art methods, proving to be highly accurate. Comment: 14 pages, 13 figures
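The keypoint-filtering idea in this abstract, discarding features on a detected dynamic object while keeping static background features that fall inside its bounding box, can be illustrated with a simple depth heuristic. The exact criterion used by Changing-SLAM may differ; this sketch only shows the concept, and all thresholds are assumptions:

```python
# Illustrative bounding-box keypoint filter: a point inside a detected
# dynamic object's box is dropped only if its depth is close to the
# object's median depth; background points seen through the box survive.
from statistics import median

def in_box(pt, box):
    (x, y), (x0, y0, x1, y1) = pt, box
    return x0 <= x <= x1 and y0 <= y <= y1

def filter_keypoints(keypoints, depths, boxes, rel_gap=0.2):
    """keypoints: [(x, y)]; depths: one depth per keypoint;
    boxes: dynamic-object boxes as (x0, y0, x1, y1)."""
    kept = []
    for p, d in zip(keypoints, depths):
        dynamic = False
        for box in boxes:
            if not in_box(p, box):
                continue
            inside = [dd for pp, dd in zip(keypoints, depths) if in_box(pp, box)]
            obj_depth = median(inside)
            if abs(d - obj_depth) <= rel_gap * obj_depth:
                dynamic = True    # depth matches the object: likely dynamic
                break
        if not dynamic:
            kept.append(p)        # outside all boxes, or background in a box
    return kept
```

Keeping those background points is what prevents the feature depletion the abstract mentions: the tracker still has static landmarks even when a moving object dominates the frame.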

    Under vehicle perception for high level safety measures using a catadioptric camera system

    In recent years, under-vehicle surveillance and the classification of vehicles have become an indispensable task for security measures in certain areas such as shopping centers, government buildings, army camps, etc. The main challenge in this task is monitoring the underframes of the vehicles. In this paper, we present a novel solution to achieve this aim. Our solution consists of three main parts: monitoring, detection and classification. In the first part we design a new catadioptric camera system in which the perspective camera points downwards to the catadioptric mirror mounted on the body of a mobile robot. Thanks to the catadioptric mirror, scenes opposite the direction of the camera's optical axis can be viewed. In the second part we use speeded-up robust features (SURF) in an object recognition algorithm. The fast appearance-based mapping algorithm (FAB-MAP) is exploited for the classification of the vehicles in the third part. The proposed technique was implemented in a laboratory environment
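A core step in SURF-based object recognition is matching descriptors between a query image and a stored model, commonly with a nearest-neighbour ratio test. The sketch below shows that matching step on toy descriptor vectors; it is a generic illustration, not the paper's implementation, and the 0.7 ratio is an assumed value:

```python
# Nearest-neighbour ratio test over descriptor vectors, the matching
# step commonly paired with SURF features. Descriptors are plain lists
# of floats here; in practice they would be 64-D SURF vectors.
import math

def ratio_test_matches(query, train, ratio=0.7):
    """For each query descriptor, find its two nearest train descriptors
    (Euclidean distance) and accept the match only if the closest one is
    clearly better than the runner-up."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    matches = []
    for qi, q in enumerate(query):
        ranked = sorted(range(len(train)), key=lambda ti: dist(q, train[ti]))
        d1, d2 = dist(q, train[ranked[0]]), dist(q, train[ranked[1]])
        if d1 < ratio * d2:                 # unambiguous best match
            matches.append((qi, ranked[0]))
    return matches
```

Ambiguous descriptors, whose two nearest neighbours are about equally close, are rejected, which is what keeps such a recognition pipeline robust to repetitive under-vehicle structure.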

    Gesture Recognition Application Based on Dynamic Time Warping (DTW) for an Omni-Wheel Mobile Robot

    This project presents the movement of an omni-wheel robot along a trajectory obtained from a gesture recognition system based on Dynamic Time Warping (DTW). A single camera is used as the input of the system and also serves as the reference for the movement of the omni-wheel robot. Several gesture recognition systems have been developed using various methods and different approaches. The movement of the omni-wheel robot uses DTW, which has the advantage of being able to compute the distance between two data vectors of different lengths; with this method we can measure the similarity between two sequences that vary in time and speed. DTW is widely applied to video, audio, graphics, and other data whose timing can be warped, in short finding the best alignment by minimizing the difference between two multidimensional signals. The DTW method is expected to make the gesture recognition system work optimally, with sufficiently high accuracy and real-time processing
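The distance measure described above can be sketched with the standard dynamic-programming formulation of DTW; this is the textbook algorithm, not the project's own code, shown here for 1-D sequences:

```python
# Minimal dynamic-programming DTW between two 1-D sequences: D[i][j] is
# the cost of the best monotonic alignment of a[:i] against b[:j].

def dtw_distance(a, b):
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # a-step (insertion)
                                 D[i][j - 1],      # b-step (deletion)
                                 D[i - 1][j - 1])  # diagonal (match)
    return D[n][m]
```

Because the alignment may repeat or skip samples, the same gesture performed slowly and quickly yields a small distance, which is exactly the property the abstract relies on for sequences of different lengths.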

    A Lens-Calibrated Active Marker Metrology System

    This paper presents a prototypical marker tracking system, MT, which is capable of recording multiple mobile robot trajectories in parallel for offline analysis. The system is also capable of providing trajectory data in real time to agents (such as robots in an arena) and implements several multi-agent operators to simplify agent-based perception. The latter characteristic minimises the normally expensive process of implementing agent-centric perceptual mechanisms and provides a means for multi-agent "global knowledge" (Parker 1993)

    Human-Machine Interface for Remote Training of Robot Tasks

    Regardless of their industrial or research application, the streamlining of robot operations is limited by the proximity of experienced users to the actual hardware. Be it massive open online robotics courses, crowd-sourcing of robot task training, or remote research on massive robot farms for machine learning, the need for an apt remote Human-Machine Interface is quite prevalent. The paper at hand proposes a novel solution to the programming/training of remote robots, employing an intuitive and accurate user interface that offers all the benefits of working with real robots without imposing delays and inefficiency. The system includes: a vision-based 3D hand detection and gesture recognition subsystem, a simulated digital twin of a robot as visual feedback, and the "remote" robot learning/executing trajectories using dynamic motion primitives. Our results indicate that the system is a promising solution to the problem of remote training of robot tasks. Comment: Accepted in IEEE International Conference on Imaging Systems and Techniques - IST201
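The dynamic motion primitives mentioned in this abstract represent a trajectory as a damped spring toward a goal plus a learned forcing term. The sketch below integrates a 1-D primitive with the forcing term set to zero; the gains and step size are illustrative choices, not the paper's parameters:

```python
# Sketch of a 1-D discrete dynamic motion primitive (DMP):
#   tau * v' = K*(g - x) - D*v + f(s),   tau * x' = v
# driven by a canonical system  tau * s' = -alpha * s.
# With the learned forcing term f omitted, the state simply converges
# to the goal along a critically damped path (D = 2*sqrt(K)).

def rollout_dmp(x0, goal, tau=1.0, K=100.0, D=20.0, dt=0.001, steps=3000):
    x, v, s, alpha = x0, 0.0, 1.0, 4.0
    traj = [x]
    for _ in range(steps):
        f = 0.0 * s                       # learned forcing term omitted here
        a = (K * (goal - x) - D * v + f) / tau
        v += a * dt                       # Euler integration of the dynamics
        x += (v / tau) * dt
        s += (-alpha * s / tau) * dt      # canonical phase decays to 0
        traj.append(x)
    return traj
```

In a trained primitive, f(s) would be a weighted sum of basis functions fitted to a demonstrated trajectory, so the same goal-convergence guarantee holds while the demonstrated shape is reproduced.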