45,887 research outputs found

    Mobile robot control and guidance using computer vision

    Control theory is applied almost everywhere around us, from temperature regulation in our homes through modelling of market behaviour to spacecraft control. The reason for its widespread use is its performance and the elegance hidden in its mathematical foundations. Wheeled mobile robots are also widely used, especially in industry, because of their large load capacity. Highly accurate path tracking is important so that the robot can precisely follow the designed path. One of the simplest ways to mark a path is to use a line with different optical properties than its background. Several methods exist for detecting such a line; the most powerful and efficient is to use a camera with subsequent digital image processing, which is computationally demanding but yields a large amount of data. Although this principle (a guide line) may look obsolete, it is still used in many solutions because of its simplicity and low cost. Moreover, using a camera for general navigation (without a guide line) is a modern approach that is still under development. The objective of this work is to assemble a small mobile robot, create its mathematical model, design several controllers, and guide the robot along a marked line using a camera and computer vision.
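    The abstract does not spell out the thesis' image-processing pipeline or controller design, so the following is only a minimal sketch of the general idea it describes: threshold a camera frame to find a dark guide line, measure its lateral offset, and turn that offset into a steering command. The strip height, threshold and gain values are assumptions chosen for illustration.

```python
# Illustrative sketch only; thresholds, gains and function names are assumptions.
import cv2
import numpy as np

def line_offset(frame_bgr, threshold=60):
    """Estimate the lateral offset (in pixels) of a dark guide line in the
    bottom strip of the image. Negative means the line is left of centre;
    returns None if no line pixels are found."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    strip = gray[-40:, :]                                  # look only at the bottom strip
    _, mask = cv2.threshold(strip, threshold, 255, cv2.THRESH_BINARY_INV)
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None                                        # line lost
    cx = m["m10"] / m["m00"]                               # centroid column of the line
    return cx - strip.shape[1] / 2.0

def steering_command(offset_px, kp=0.005):
    """Simple proportional steering: angular velocity proportional to pixel offset."""
    return -kp * offset_px
```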

    Prediction of Human Trajectory Following a Haptic Robotic Guide Using Recurrent Neural Networks

    Social intelligence is an important requirement for enabling robots to collaborate with people. In particular, human path prediction is an essential capability for robots in that it prevents potential collision with a human and allows the robot to safely make larger movements. In this paper, we present a method for predicting the trajectory of a human who follows a haptic robotic guide without using sight, which is valuable for assistive robots that aid the visually impaired. We apply a deep learning method based on recurrent neural networks using multimodal data: (1) human trajectory, (2) movement of the robotic guide, (3) haptic input data measured from the physical interaction between the human and the robot, (4) human depth data. We collected actual human trajectory and multimodal response data through indoor experiments. Our model outperformed the baseline result while using only the robot data with the observed human trajectory, and it shows even better results when using additional haptic and depth data.
    Comment: 6 pages, Submitted to IEEE World Haptics Conference 201
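    The abstract states only that a recurrent network is trained on concatenated multimodal inputs to predict the human's future path; the exact architecture is not given. The sketch below is therefore an assumed illustration: a GRU encoder over per-timestep feature vectors with a linear head that outputs a fixed horizon of future (x, y) positions.

```python
# Minimal sketch, not the authors' architecture; layer sizes, the GRU and the
# 2-D position output over a fixed horizon are assumptions for illustration.
import torch
import torch.nn as nn

class TrajectoryRNN(nn.Module):
    def __init__(self, input_dim, hidden_dim=64, horizon=10):
        super().__init__()
        self.encoder = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, horizon * 2)     # (x, y) per future step
        self.horizon = horizon

    def forward(self, x):
        # x: (batch, time, input_dim) with human trajectory, robot motion,
        # haptic and depth features concatenated per timestep
        _, h = self.encoder(x)
        out = self.head(h[-1])
        return out.view(-1, self.horizon, 2)

model = TrajectoryRNN(input_dim=16)
pred = model(torch.randn(8, 30, 16))   # 8 sequences, 30 observed timesteps
print(pred.shape)                      # torch.Size([8, 10, 2])
```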

    A robotic wheelchair trainer: design overview and a feasibility study

    Background: Experiencing independent mobility is important for children with a severe movement disability, but learning to drive a powered wheelchair can be labor intensive, requiring hand-over-hand assistance from a skilled therapist.
    Methods: To improve accessibility to training, we developed a robotic wheelchair trainer that steers itself along a course marked by a line on the floor using computer vision, haptically guiding the driver's hand in appropriate steering motions using a force-feedback joystick as the driver tries to catch a mobile robot in a game of "robot tag". This paper provides a detailed design description of the computer vision and control system. In addition, we present data from a pilot study in which we used the chair to teach children without motor impairment aged 4-9 (n = 22) to drive the wheelchair in a single training session, in order to verify that the wheelchair could enable learning by the non-impaired motor system and to establish normative values of learning rates.
    Results and Discussion: Training with haptic guidance from the robotic wheelchair trainer improved the steering ability of children without motor impairment significantly more than training without guidance. We also report the results of a case study with one 8-year-old child with a severe motor impairment due to cerebral palsy, who replicated the single-session training protocol that the non-disabled children participated in. This child also improved steering ability after training with guidance from the joystick, by an amount even greater than the children without motor impairment.
    Conclusions: The system not only provided a safe, fun context for automating driver's training, but also enhanced motor learning by the non-impaired motor system, presumably by demonstrating, through intuitive movement and force of the joystick itself, exemplary control to follow the course. The case study indicates that a child with a motor system impaired by CP can also gain a short-term benefit from driver's training with haptic guidance.
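    The control law behind the haptic guidance is not given in the abstract, so the snippet below is only a hedged sketch of the general idea: a spring-like joystick force that pulls the driver's hand toward the steering command produced by the line-following controller. The stiffness, saturation limit and units are assumptions.

```python
# Hedged sketch; the actual guidance law and gains are not given in the abstract.
def guidance_force(desired_steer, driver_steer, stiffness=2.0, max_force=5.0):
    """Force-feedback command (arbitrary units) pulling the joystick toward the
    steering angle the vision-based line-following controller would choose."""
    error = desired_steer - driver_steer
    force = stiffness * error
    return max(-max_force, min(max_force, force))

# Example: controller wants +0.4 rad of steering, driver is holding -0.1 rad.
print(guidance_force(0.4, -0.1))   # 1.0 -> joystick is nudged toward the course
```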

    Incremental Learning for Robot Perception through HRI

    Scene understanding and object recognition are difficult to achieve yet crucial skills for robots. Recently, Convolutional Neural Networks (CNNs) have shown success in this task. However, there is still a gap between their performance on image datasets and real-world robotics scenarios. We present a novel paradigm for incrementally improving a robot's visual perception through active human interaction. In this paradigm, the user introduces novel objects to the robot by means of pointing and voice commands. Given this information, the robot visually explores the object and adds images of it to re-train the perception module. Our base perception module builds on recent developments in object detection and recognition using deep learning. Our method leverages state-of-the-art CNNs from off-line batch learning, human guidance, robot exploration and incremental on-line learning.
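    The abstract does not describe the detector or training schedule in detail, so the following is an assumed illustration of the incremental on-line learning step it mentions: fine-tune an existing classifier on images of an object the user has just introduced. The optimizer, learning rate and epoch count are placeholders, not the authors' settings.

```python
# Sketch only; the fine-tuning schedule and model are assumptions for illustration.
import torch
import torch.nn as nn
import torch.optim as optim

def incremental_update(model, new_images, new_label, epochs=3, lr=1e-4):
    """Fine-tune the perception model on images of a newly introduced object.

    new_images: tensor (N, C, H, W); new_label: int class index for the new object.
    """
    model.train()
    optimizer = optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    targets = torch.full((new_images.shape[0],), new_label, dtype=torch.long)
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(new_images), targets)   # logits over all known classes
        loss.backward()
        optimizer.step()
    model.eval()
    return model
```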

    Fast, Accurate Thin-Structure Obstacle Detection for Autonomous Mobile Robots

    Safety is paramount for mobile robotic platforms such as self-driving cars and unmanned aerial vehicles. This work is devoted to a task that is indispensable for safety yet was largely overlooked in the past -- detecting obstacles that are of very thin structure, such as wires, cables and tree branches. This is a challenging problem, as thin objects can be problematic for active sensors such as lidar and sonar, and even for stereo cameras. In this work, we propose to use video sequences for thin obstacle detection. We represent obstacles with edges in the video frames and reconstruct them in 3D using efficient edge-based visual odometry techniques. We provide both a monocular camera solution and a stereo camera solution. The former incorporates Inertial Measurement Unit (IMU) data to solve scale ambiguity, while the latter enjoys a novel, purely vision-based solution. Experiments demonstrated that the proposed methods are fast and able to detect thin obstacles robustly and accurately under various conditions.
    Comment: Appeared at IEEE CVPR 2017 Workshop on Embedded Vision
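    The paper's edge-based visual odometry pipeline is not detailed in the abstract; the sketch below only illustrates the generic stereo idea of lifting edge pixels (candidate thin obstacles) into 3D from a calibrated pair. The Canny thresholds, block-matching disparity step and the fx ≈ fy assumption are all illustrative choices, not the authors' method.

```python
# Illustrative sketch, not the paper's method; detector and matcher settings are assumptions.
import cv2
import numpy as np

def thin_obstacle_points(left_gray, right_gray, fx, cx, cy, baseline_m):
    """Return an Nx3 array of 3-D points lying on image edges.

    left_gray, right_gray: rectified 8-bit grayscale stereo images.
    Assumes fx == fy for simplicity.
    """
    edges = cv2.Canny(left_gray, 50, 150) > 0
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
    v, u = np.nonzero(edges & (disparity > 1.0))        # edge pixels with valid disparity
    z = fx * baseline_m / disparity[v, u]               # depth from disparity
    x = (u - cx) * z / fx
    y = (v - cy) * z / fx
    return np.stack([x, y, z], axis=1)
```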

    Deep Detection of People and their Mobility Aids for a Hospital Robot

    Robots operating in populated environments encounter many different types of people, some of whom have a heightened need for cautious interaction because of physical impairments or advanced age. Robots therefore need to recognize such demands to provide appropriate assistance, guidance or other forms of support. In this paper, we propose a depth-based perception pipeline that estimates the position and velocity of people in the environment and categorizes them according to the mobility aids they use: pedestrian, person in a wheelchair, person in a wheelchair with a person pushing them, person with crutches and person using a walker. We present a fast region proposal method that feeds a Region-based Convolutional Network (Fast R-CNN). With this, we speed up the object detection process by a factor of seven compared to a dense sliding window approach. We furthermore propose a probabilistic position, velocity and class estimator to smooth the CNN's detections and account for occlusions and misclassifications. In addition, we introduce a new hospital dataset with over 17,000 annotated RGB-D images. Extensive experiments confirm that our pipeline successfully keeps track of people and their mobility aids, even in challenging situations with multiple people from different categories and frequent occlusions. Videos of our experiments and the dataset are available at http://www2.informatik.uni-freiburg.de/~kollmitz/MobilityAids
    Comment: 7 pages, ECMR 2017, dataset and videos: http://www2.informatik.uni-freiburg.de/~kollmitz/MobilityAids
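    The abstract mentions a probabilistic position and velocity estimator for smoothing detections but does not specify it; a common realization of that idea is a constant-velocity Kalman filter over (x, y, vx, vy), sketched below. The noise parameters and matrices are assumptions, and the class filtering part is omitted.

```python
# Hedged sketch of detection smoothing; not the paper's actual estimator.
import numpy as np

class ConstantVelocityKF:
    def __init__(self, dt=0.1, q=0.5, r=0.2):
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)   # constant-velocity motion model
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)   # we only observe position
        self.Q = q * np.eye(4)
        self.R = r * np.eye(2)
        self.x = np.zeros(4)
        self.P = np.eye(4)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x

    def update(self, z):
        # z: (x, y) person position reported by the CNN detector
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x
```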

    Automated pick-up of suturing needles for robotic surgical assistance

    Robot-assisted laparoscopic prostatectomy (RALP) is a treatment for prostate cancer that involves complete or nerve-sparing removal of the prostate tissue that contains cancer. After removal, the bladder neck is sutured directly to the urethra. The procedure is called urethrovesical anastomosis and is one of the most dexterity-demanding tasks during RALP. Two suturing instruments and a pair of needles are used in combination to perform a running stitch during urethrovesical anastomosis. While robotic instruments provide enhanced dexterity to perform the anastomosis, it is still highly challenging and difficult to learn. In this paper, we present a vision-guided needle grasping method for automatically grasping a needle that has been inserted into the patient prior to anastomosis. We aim to automatically grasp the suturing needle in a position that avoids hand-offs and immediately enables the start of suturing. The full grasping process can be broken down into: a needle detection algorithm; an approach phase, where the surgical tool moves closer to the needle based on visual feedback; and a grasping phase, through path planning based on observed surgical practice. Our experimental results show examples of successful autonomous grasping that have the potential to simplify and decrease the operational time in RALP by assisting with a small component of urethrovesical anastomosis.
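    The abstract names three phases (needle detection, a visual-feedback approach, a grasp) without giving the algorithms, so the snippet below is purely an illustrative skeleton of that structure. The stubbed detector, the pose representation, the step fraction and the tolerance are hypothetical placeholders, not the authors' method or any surgical robot's API.

```python
# Purely illustrative skeleton of the three-phase structure; all values are assumptions.
import numpy as np

def detect_needle_position(image):
    # Stub for the needle-detection algorithm: returns a fixed 3-D point (metres)
    # expressed in the camera frame, just so the example runs end to end.
    return np.array([0.02, -0.01, 0.10])

def approach_and_grasp(get_image, tool_position, tolerance_m=0.001, step=0.1, max_iters=200):
    """Move the tool toward the detected needle until within tolerance, then grasp."""
    for _ in range(max_iters):
        target = detect_needle_position(get_image())       # detection phase
        error = target - tool_position
        if np.linalg.norm(error) < tolerance_m:
            return True, tool_position                      # grasping phase would start here
        tool_position = tool_position + step * error        # approach phase (visual feedback)
    return False, tool_position

ok, final_position = approach_and_grasp(lambda: None, np.zeros(3))
print(ok, final_position)
```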