4,123 research outputs found

    Survey of Visual and Force/Tactile Control of Robots for Physical Interaction in Spain

    Sensors provide robotic systems with the information required to perceive changes in unstructured environments and to modify their actions accordingly. The robotic controllers which process and analyze this sensory information are usually based on three types of sensors (visual, force/torque and tactile), which correspond to the most widespread robotic control strategies: visual servoing control, force control and tactile control. This paper presents a detailed review of the sensor architectures, algorithmic techniques and applications developed by Spanish researchers to implement these mono-sensor and multi-sensor controllers, the latter combining several sensors.
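    The visual servoing strategy named above is classically driven by the image-based control law v = -λ L⁺(s − s*), which maps the image-feature error to a camera velocity command. The Python sketch below is a minimal, hedged illustration of that textbook law for point features, not code from any of the surveyed systems; the feature coordinates, depths and gain are illustrative assumptions.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Standard interaction (image Jacobian) matrix of a normalized point
    feature (x, y) at depth Z, relating the 6-DOF camera twist to the
    feature's image velocity."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera twist v = -gain * pinv(L) @ (s - s*), one 2x6 block per point."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Example: four tracked point features, all at an assumed depth of 1 m.
s = [(0.1, 0.1), (-0.1, 0.1), (-0.1, -0.1), (0.1, -0.1)]
s_star = [(0.15, 0.15), (-0.15, 0.15), (-0.15, -0.15), (0.15, -0.15)]
print(ibvs_velocity(s, s_star, depths=[1.0] * 4))
```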

    Safe navigation and human-robot interaction in assistant robotic applications

    The abstract is provided in the attachment.

    Robots learn to behave: improving human-robot collaboration in flexible manufacturing applications

    The abstract is provided in the attachment.

    Safe cooperation between human operators and visually controlled industrial manipulators

    Industrial tasks can be improved substantially by making humans and robots collaborate in the same workspace. The main goal of this chapter is the development of a human-robot interaction system which enables this collaboration and guarantees the safety of the human operator. This system is composed of two subsystems: the human tracking system and the robot control system. The human tracking system deals with the precise real-time localization of the human operator in the industrial environment. It combines two systems: an inertial motion capture system and an Ultra-WideBand localization system. The robot control system is based on visual servoing. A safety behaviour which stops the normal path tracking of the robot is triggered when the robot and the human are too close. This safety behaviour has been implemented through a multi-threaded software architecture in order to share information between both systems. In this way, the localization measurements obtained by the human tracking system are processed by the robot control system to compute the minimum human-robot distance and determine whether the safety behaviour must be activated. Funded by the Spanish Ministry of Science and Innovation and the Spanish Ministry of Education through projects DPI2005-06222 and DPI2008-02647 and grant AP2005-1458.
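    The distance-based activation rule described above lends itself to a compact sketch. The following Python fragment is a hedged, minimal reconstruction of the idea, not the authors' implementation; the 0.5 m threshold, the point-set representation of human and robot, and the callback names are illustrative assumptions.

```python
import numpy as np

SAFETY_DISTANCE_M = 0.5  # assumed activation threshold, not from the chapter

def min_human_robot_distance(human_points, robot_points):
    """Smallest pairwise Euclidean distance between two 3-D point sets."""
    h = np.asarray(human_points, dtype=float)[:, None, :]  # (N, 1, 3)
    r = np.asarray(robot_points, dtype=float)[None, :, :]  # (1, M, 3)
    return float(np.linalg.norm(h - r, axis=2).min())

def control_step(human_points, robot_points, track_path, stop_robot):
    """One supervisory cycle: halt path tracking when the human is too close."""
    if min_human_robot_distance(human_points, robot_points) < SAFETY_DISTANCE_M:
        stop_robot()   # safety behaviour: stop the visually controlled manipulator
    else:
        track_path()   # continue normal visual-servoing path tracking
```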

    On-board Obstacle Avoidance in the Teleoperation of Unmanned Aerial Vehicles

    The teleoperation of unmanned aerial vehicles (UAVs), especially in cramped, GPS-restricted environments, poses many challenges. The presence of obstacles in an unfamiliar environment requires reliable state estimation and active collision-avoidance algorithms. In this dissertation, we present a collision-free indoor navigation system for a teleoperated quadrotor UAV. The platform is equipped with an on-board miniature computer and a minimal set of sensors for this task, and is self-sufficient with respect to external tracking systems and computation. The platform is capable of highly accurate state estimation, tracking of the velocity commanded by the user, and collision-free navigation. The robot estimates its state in a cascade architecture: the attitude of the platform is calculated with a complementary filter, and its linear velocity through a Kalman-filter integration of inertial and optical-flow measurements. An RGB-D camera provides visual feedback to the operator and depth measurements with which a probabilistic, robot-centric obstacle state is built using a bin-occupancy filter. The algorithm keeps tracking obstacles after they leave the sensor's field of view by updating their positions with the estimate of the robot's motion. The avoidance part of our navigation system is based on the model predictive control approach: by predicting possible future obstacle states, the UAV filters the operator's commands, altering them to prevent collisions. Experiments in obstacle-rich indoor and outdoor environments validate the efficiency of the proposed setup. Flying robots are highly prone to damage in case of control errors, as these will most likely cause them to fall to the ground; the development of algorithms for UAVs therefore demands a considerable amount of time and resources. In this dissertation we present two simulation methods, software-in-the-loop and hardware-in-the-loop simulation, to facilitate this process. Software-in-the-loop testing was used to develop and tune the state estimator for our robot using both simulated sensors and pre-recorded datasets of sensor measurements, e.g. from real robotic experiments. With hardware-in-the-loop simulation, we are able to command the robot simulated in Gazebo, a popular open-source ROS-enabled physics simulator, using the computational units embedded on our quadrotor UAVs. Hence, we can test in simulation not only the correct execution of algorithms, but also their computational feasibility directly on the robot's hardware. Lastly, we analyze the influence of the robot's motion on the visual feedback provided to the operator.
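    The attitude stage of the cascaded estimator described above is a standard complementary filter. The sketch below is a hedged, generic Python illustration of that technique, not the dissertation's code; the blending weight and the axis conventions are assumptions.

```python
import numpy as np

def complementary_filter(roll, pitch, gyro, accel, dt, alpha=0.98):
    """One filter update: propagate the attitude with the gyro rates, then
    correct the drift with the gravity direction seen by the accelerometer."""
    # Gyro integration (rad): accurate over short horizons, drifts over time.
    roll_g = roll + gyro[0] * dt
    pitch_g = pitch + gyro[1] * dt
    # Accelerometer attitude (rad): noisy, but drift-free near hover.
    roll_a = np.arctan2(accel[1], accel[2])
    pitch_a = np.arctan2(-accel[0], np.hypot(accel[1], accel[2]))
    # Blend: trust the gyro at high frequency, the accelerometer at low.
    return (alpha * roll_g + (1.0 - alpha) * roll_a,
            alpha * pitch_g + (1.0 - alpha) * pitch_a)

# Example: one 5 ms update with a small roll rate and a near-level accelerometer.
roll, pitch = complementary_filter(0.0, 0.0,
                                   gyro=(0.1, 0.0, 0.0),
                                   accel=(0.0, 0.3, 9.8),
                                   dt=0.005)
```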
    While some UAVs have the capacity to carry mechanically stabilized camera equipment, weight limits or other constraints may make mechanical stabilization impractical. With a fixed camera, the video stream is often unsteady due to the multirotor's movement and can impair the operator's situation awareness. There has been significant research on stabilizing videos by feature tracking, which recovers the camera motion and is then used to manipulate frames and stabilize the camera stream. We show that this process can be greatly simplified by instead using data from the UAV's on-board inertial measurement unit to stabilize the camera feed. Our results show that our algorithm successfully stabilizes the camera stream with the added benefit of requiring less computational power. We also propose a novel quadrotor design concept that decouples the orientation of the quadrotor from its lateral motion. In our design, the tilt angles of the propellers with respect to the quadrotor body are simultaneously controlled with two additional actuators by employing the parallelogram principle. After deriving the dynamic model of this design, we propose a controller for the platform based on feedback linearization. Simulation results confirm our theoretical findings, highlighting the improved motion capabilities of this novel design with respect to standard quadrotors.
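    The IMU-based stabilization described above amounts to counter-rotating each frame by the attitude reported by the IMU. The Python sketch below illustrates the idea for the roll axis only, using OpenCV for the warp; the roll-only correction and the rotation sign are illustrative assumptions, not the dissertation's implementation.

```python
import cv2
import numpy as np

def stabilize_frame(frame, roll_rad):
    """Rotate the image about its center to cancel the measured camera roll.
    The sign of the correction depends on the camera mounting convention."""
    h, w = frame.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), np.degrees(roll_rad), 1.0)
    return cv2.warpAffine(frame, M, (w, h))

# Example: level a frame captured while the quadrotor rolls 10 degrees.
frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a camera frame
stabilized = stabilize_frame(frame, np.radians(10.0))
```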

    SAFER: Search and Find Emergency Rover

    When disaster strikes and causes a structure to collapse, it poses a unique challenge to search and rescue teams as they assess the situation and search for survivors. Currently, very few tools exist that help these teams gather important information about the situation while allowing members to stay at a safe distance. SAFER, Search and Find Emergency Rover, is an unmanned, remotely operated vehicle that provides early reconnaissance to search and rescue teams so they have more information to prepare themselves for the dangers that lie inside the wreckage. Over the past year, this team has rebuilt a bare, non-operational chassis inherited from Roverwerx 2012 into a rugged and operational rover with increased functionality and reliability. SAFER uses a 360-degree camera to deliver real-time visual reconnaissance to the operator, who can remain safely stationed on the outskirts of the disaster. With strong drive motors providing enough torque to traverse steep obstacles and enough power to travel at up to 3 ft/s, SAFER can cover ground quickly and effectively over its 1-3 hour battery life, maximizing reconnaissance for the team. Additionally, SAFER carries 3 flashing beacons that the operator can drop when a victim is found, so that when team members do enter the scene they can easily locate victims. In the future, other teams may wish to improve upon this iteration by adding thermal imaging, air quality sensors, and potentially a robotic arm with a camera that can see into spaces too small for the entire rover to enter.

    Virtual and Mixed Reality in Telerobotics: A Survey


    Model-Based Environmental Visual Perception for Humanoid Robots

    The visual perception of a robot should answer two fundamental questions: What? and Where? To reply to these questions properly and efficiently, it is essential to establish a bidirectional coupling between the external stimuli and the internal representations. This coupling links the physical world with the inner abstraction models through sensor transformation, recognition, matching and optimization algorithms. The objective of this PhD is to establish this sensor-model coupling.
