
    Gesture Recognition Application based on Dynamic Time Warping (DTW) for Omni-Wheel Mobile Robot

    This project presents an omni-wheel robot that moves along a trajectory obtained from a gesture recognition system based on Dynamic Time Warping (DTW). A single camera provides the input to the system and serves as the reference for the robot's movement. Gesture recognition systems have previously been developed using a variety of methods and approaches; here, the robot's movement is driven by DTW, which has the advantage of being able to compute the distance between two data vectors of different lengths. With this method, the similarity between two sequences can be measured even when they differ in timing and speed. DTW is widely applied to video, audio, graphics, and other data that vary roughly linearly; in essence, it finds the best match by minimizing the cumulative difference between two multidimensional signals. The DTW-based gesture recognition system is expected to work reliably, with sufficiently high accuracy and real-time processing.
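    As a concrete illustration of the matching step described above, the following is a minimal sketch (not the authors' implementation) of the classic DTW dynamic-programming recurrence, which aligns two feature sequences of different lengths, as used here to compare gesture trajectories.

    # Minimal DTW sketch: dynamic programming over the local-distance matrix.
    import numpy as np

    def dtw_distance(seq_a: np.ndarray, seq_b: np.ndarray) -> float:
        """Return the DTW distance between two (length, dim) feature sequences."""
        n, m = len(seq_a), len(seq_b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])  # local distance
                # best of insertion, deletion, or match
                cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
        return float(cost[n, m])

    # Example: compare a stored gesture template with a new observation,
    # even though the two trajectories have different lengths.
    template = np.array([[0, 0], [1, 1], [2, 2], [3, 3]], dtype=float)
    observed = np.array([[0, 0], [0.9, 1.1], [2.1, 1.9], [2.5, 2.5], [3, 3]], dtype=float)
    print(dtw_distance(template, observed))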

    Aerial-Ground collaborative sensing: Third-Person view for teleoperation

    Rapid deployment and operation are key requirements in time-critical applications such as Search and Rescue (SaR). Efficiently teleoperated ground robots can support first responders in such situations. However, first-person-view teleoperation is sub-optimal in difficult terrain, while a third-person perspective can drastically increase teleoperation performance. Here, we propose a Micro Aerial Vehicle (MAV)-based system that can autonomously provide a third-person perspective to ground robots. While our approach is based on local visual servoing, it further leverages the global localization of several ground robots to seamlessly transfer between these ground robots in GPS-denied environments. In this way, one MAV can support multiple ground robots on demand. Furthermore, our system enables different visual detection regimes, enhanced operability, and return-home functionality. We evaluate our system in real-world SaR scenarios. Comment: Accepted for publication in the 2018 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR).
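    As a rough illustration of the local visual-servoing component mentioned above, the sketch below shows a simple proportional controller that keeps the tracked ground robot centered in the MAV camera image; the gain, image size, and command interface are hypothetical assumptions, not the authors' implementation.

    # Image-based servoing sketch: pixel error from the image center drives
    # a lateral velocity command for the MAV.
    import numpy as np

    KP_XY = 0.002                              # pixel-to-velocity gain (assumed)
    IMAGE_CENTER = np.array([320.0, 240.0])    # 640x480 image assumed

    def servo_command(target_px: np.ndarray) -> np.ndarray:
        """Map the ground robot's pixel position to an (vx, vy) MAV velocity."""
        error = target_px - IMAGE_CENTER       # pixel error from the image center
        return -KP_XY * error                  # proportional correction

    # Example: the detected robot sits right of center, so the MAV is commanded
    # to translate until the robot is re-centered in the view.
    print(servo_command(np.array([400.0, 250.0])))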

    Long-term experiments with an adaptive spherical view representation for navigation in changing environments

    Real-world environments such as houses and offices change over time, meaning that a mobile robot’s map will become out of date. In this work, we introduce a method to update the reference views in a hybrid metric-topological map so that a mobile robot can continue to localize itself in a changing environment. The updating mechanism, based on the multi-store model of human memory, incorporates a spherical metric representation of the observed visual features for each node in the map, which enables the robot to estimate its heading and navigate using multi-view geometry, as well as representing the local 3D geometry of the environment. A series of experiments demonstrates the persistence performance of the proposed system in real changing environments, including an analysis of its long-term stability.
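    As a rough sketch of how a multi-store memory model can drive reference-view updates, the example below keeps a short-term and a long-term feature store per map node, promoting repeatedly confirmed features and forgetting stale ones; the thresholds and data structures are illustrative assumptions, not the authors' method.

    # Multi-store update sketch: features enter short-term memory, are promoted
    # to the long-term reference view after repeated confirmation, and are
    # forgotten when they stop being re-observed.
    PROMOTE_AFTER = 3   # confirmations before a feature becomes long-term (assumed)
    FORGET_AFTER = 5    # consecutive misses before a long-term feature is dropped (assumed)

    class ReferenceView:
        def __init__(self):
            self.short_term = {}   # feature_id -> confirmation count
            self.long_term = {}    # feature_id -> consecutive miss count

        def update(self, observed_ids: set) -> None:
            # confirm short-term features or reset miss counts for long-term ones
            for fid in observed_ids:
                if fid in self.long_term:
                    self.long_term[fid] = 0
                else:
                    self.short_term[fid] = self.short_term.get(fid, 0) + 1
                    if self.short_term[fid] >= PROMOTE_AFTER:
                        self.long_term[fid] = 0
                        del self.short_term[fid]
            # age long-term features that were not re-observed this pass
            for fid in list(self.long_term):
                if fid not in observed_ids:
                    self.long_term[fid] += 1
                    if self.long_term[fid] >= FORGET_AFTER:
                        del self.long_term[fid]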

    Object Detection of Omnidirectional Vision Using PSO-Neural Network for Soccer Robot

    The vision system of a soccer robot is needed to recognize objects in the robot's environment. Omnidirectional vision systems have been widely developed to find objects such as the ball, the goalposts, and the white lines on the field, and to recover the distance and angle between an object and the robot. The main challenge in developing an omni-vision system is the image distortion introduced by the spherical mirror or lenses. This paper presents an efficient omni-vision system using spherical lenses for real-time object detection. To overcome the image distortion and computational complexity, the distance between object and robot is estimated from the spherical image using a neural network whose parameters are optimized by particle swarm optimization. The experimental results show the effectiveness of our approach in terms of accuracy and processing time.
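    As an illustrative sketch of the distance-estimation idea described above, the example below fits a tiny neural network that maps an object's pixel radius in the omnidirectional image to a metric distance, with the weights searched by a basic global-best particle swarm optimizer; the network size, PSO parameters, and toy data are assumptions, not the paper's setup.

    # PSO-trained neural network sketch for pixel-radius -> distance regression.
    import numpy as np

    rng = np.random.default_rng(0)

    # toy training data: pixel radius from the image center -> ground distance (assumed)
    radius_px = np.linspace(20, 200, 30).reshape(-1, 1)
    distance_m = 1.0 / (radius_px / 200.0) - 0.8    # stand-in for the true mapping

    def predict(weights: np.ndarray, x: np.ndarray) -> np.ndarray:
        """Tiny 1-5-1 network; weights packed as [w1(5), b1(5), w2(5), b2(1)]."""
        w1, b1, w2, b2 = weights[:5], weights[5:10], weights[10:15], weights[15]
        h = np.tanh(x * w1 + b1)                    # hidden layer, broadcast over samples
        return h @ w2 + b2

    def mse(weights: np.ndarray) -> float:
        return float(np.mean((predict(weights, radius_px) - distance_m.ravel()) ** 2))

    # basic global-best PSO over the 16 network parameters
    n_particles, dim = 30, 16
    pos = rng.normal(0, 1, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_err = pos.copy(), np.array([mse(p) for p in pos])
    gbest = pbest[pbest_err.argmin()].copy()

    for _ in range(200):
        r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos += vel
        err = np.array([mse(p) for p in pos])
        improved = err < pbest_err
        pbest[improved], pbest_err[improved] = pos[improved], err[improved]
        gbest = pbest[pbest_err.argmin()].copy()

    print("best training MSE:", mse(gbest))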

    Methods for Reliable Robot Vision with a Dioptric System

    Image processing