1,105 research outputs found

    Adaptive Sampling-based Particle Filter for Visual-inertial Gimbal in the Wild

    Full text link
    In this paper, we present a Computer Vision (CV)-based tracking and fusion algorithm dedicated to a 3D-printed gimbal system on drones operating in natural environments. The gimbal system can robustly stabilize the camera orientation in challenging natural scenarios by using the skyline and ground plane as references. Our main contributions are the following: a) a lightweight ResNet-18 backbone network was trained from scratch and deployed on the Jetson Nano platform to segment the image into two classes (ground and sky); b) a geometric assumption derived from these natural cues enables robust visual tracking using the skyline and ground plane as references; c) an adaptive particle sampling scheme on the spherical surface flexibly fuses orientation estimates from multiple sensor sources. The whole algorithm pipeline is tested on our customized gimbal module, including the Jetson and other hardware components, with experiments performed on top of a building overlooking a real landscape. Comment: 6 pages, 9 figures, 2 pseudo codes, one table; accepted by ICRA 202
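    The following sketch, which is not the authors' implementation, illustrates one way an adaptive particle filter on the unit sphere could fuse a gyro-propagated "up" direction with a horizon-derived measurement; all function names, noise levels and the von Mises-Fisher-style weighting are assumptions (Python/NumPy):

        # Minimal sketch (not the paper's code): adaptive particle filter on S^2
        # fusing a gyro-propagated "up" direction with a horizon measurement.
        import numpy as np

        rng = np.random.default_rng(0)

        def normalize(v):
            return v / np.linalg.norm(v, axis=-1, keepdims=True)

        def propagate(particles, gyro, dt, noise=0.01):
            """Rotate every particle by the gyro increment plus sampling noise."""
            omega = gyro * dt + noise * rng.standard_normal(particles.shape)
            angle = np.linalg.norm(omega, axis=1, keepdims=True)
            axis = np.where(angle > 1e-9, omega / np.maximum(angle, 1e-9), 0.0)
            # Rodrigues rotation of each particle about its own axis/angle.
            cos_a, sin_a = np.cos(angle), np.sin(angle)
            cross = np.cross(axis, particles)
            dot = np.sum(axis * particles, axis=1, keepdims=True)
            return normalize(particles * cos_a + cross * sin_a + axis * dot * (1 - cos_a))

        def reweight(particles, up_meas, kappa=50.0):
            """Weight by angular agreement with the horizon-derived up direction."""
            w = np.exp(kappa * particles @ up_meas)   # von Mises-Fisher-like kernel
            return w / w.sum()

        def adaptive_resample(particles, w, n_min=100, n_max=1000):
            """Grow the particle set when the effective sample size collapses."""
            ess = 1.0 / np.sum(w ** 2)
            n_new = int(np.clip(n_max * (1.0 - ess / len(w)), n_min, n_max))
            idx = rng.choice(len(w), size=n_new, p=w)
            return particles[idx]

        # Usage: start around +Z, apply one gyro step and one horizon measurement.
        particles = normalize(np.array([0.0, 0.0, 1.0]) + 0.05 * rng.standard_normal((500, 3)))
        particles = propagate(particles, gyro=np.array([0.02, 0.0, 0.0]), dt=0.01)
        w = reweight(particles, up_meas=normalize(np.array([0.01, 0.0, 1.0])))
        particles = adaptive_resample(particles, w)
        print("estimated up direction:", normalize(particles.mean(axis=0)))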

    Push recovery with stepping strategy based on time-projection control

    Get PDF
    In this paper, we present a simple control framework for on-line push recovery with dynamic stepping properties. Because our robot has relatively heavy legs, we need to take swing dynamics into account and thus use a linear model called 3LP, which is composed of three pendulums to simulate swing and torso dynamics. Based on the 3LP equations, we formulate discrete LQR controllers and use a particular time-projection method to continuously adjust the next footstep location on-line during the motion. This adjustment, which is computed from both pelvis and swing-foot tracking errors, naturally takes the swing dynamics into account. The suggested adjustments are added to the Cartesian 3LP gaits and converted to joint-space trajectories through inverse kinematics. Fixed and adaptive foot-lift strategies also ensure sufficient ground clearance in perturbed walking conditions. The proposed structure is robust, yet uses very simple state estimation and basic position tracking. We rely on the physical series elastic actuators to absorb impacts while introducing simple laws to compensate for their tracking bias. Extensive experiments demonstrate the functionality of the different control blocks and prove the effectiveness of time-projection in extreme push-recovery scenarios. We also show self-produced and emergent walking gaits when the robot is subject to continuous dragging forces. These gaits are dynamically robust thanks to the relatively soft springs in the ankles and the absence of any Zero Moment Point (ZMP) control in our proposed architecture. Comment: 20 pages, journal paper
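    As a rough illustration of the time-projection idea (not the authors' 3LP formulation), the sketch below maps an intermediate tracking error back to the start of the phase, where a discrete LQR gain is defined, and uses that gain to compute a footstep adjustment; the pendulum-like matrices, step duration and cost weights are placeholder assumptions:

        # Hedged sketch of time-projection control on a generic linear phase model.
        import numpy as np
        from scipy.linalg import solve_discrete_are, expm

        # Continuous phase dynamics x_dot = A x + B u (placeholder values).
        A = np.array([[0.0, 1.0],
                      [9.81 / 0.8, 0.0]])       # LIP-like pendulum, 0.8 m CoM height
        B = np.array([[0.0],
                      [1.0]])
        T = 0.5                                  # step (phase) duration in seconds

        # Step-to-step discrete model and LQR gain defined at the phase start.
        Ad = expm(A * T)
        Bd = np.linalg.solve(A, (Ad - np.eye(2))) @ B   # zero-order-hold input matrix
        Q, R = np.eye(2), np.array([[0.1]])
        P = solve_discrete_are(Ad, Bd, Q, R)
        K = np.linalg.solve(R + Bd.T @ P @ Bd, Bd.T @ P @ Ad)

        def time_projected_adjustment(error_now, t):
            """Project the error at time t back to the phase start, then apply K."""
            Phi = expm(A * t)                    # state transition from 0 to t
            error_at_start = np.linalg.solve(Phi, error_now)
            return -K @ error_at_start           # footstep-location adjustment

        # Usage: a pelvis tracking error observed 0.2 s into the phase.
        print(time_projected_adjustment(np.array([0.05, 0.1]), t=0.2))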

    Imprecise dynamic walking with time-projection control

    Get PDF
    We present a new walking foot-placement controller based on 3LP, a 3D model of bipedal walking that is composed of three pendulums to simulate falling, swing and torso dynamics. Taking advantage of the linear equations and closed-form solutions of the 3LP model, our proposed controller projects intermediate states of the biped back to the beginning of the phase, for which a discrete LQR controller is designed. After the projection, a proper control policy is generated by this LQR controller and applied at the intermediate time. This control paradigm reacts to disturbances immediately and includes rules to account for swing dynamics and leg retraction. We apply it to a simulated Atlas robot in position control, always commanded to perform in-place walking. The stance hip joint in our robot keeps the torso upright to let the robot fall naturally, and the swing hip joint tracks the desired footstep location. Combined with simple Center of Pressure (CoP) damping rules in the low-level controller, our foot-placement strategy enables the robot to recover from strong pushes and produce periodic walking gaits when subject to persistent sources of disturbance, external or internal. These gaits are imprecise, i.e., they emerge from sources of asymmetry rather than from precisely imposing a desired velocity on the robot. In extreme conditions, the restrictive linearity assumptions of the 3LP model are often violated, but the system remains robust in our simulations. An extensive analysis of closed-loop eigenvalues, viable regions and sensitivity to push timings further demonstrates the strengths of our simple controller.
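    A small, self-contained sketch of the kind of closed-loop eigenvalue check mentioned above: with an assumed step-to-step error map A, input matrix B and footstep gain K (placeholders, not the 3LP matrices), walking is stable when all eigenvalues of A - BK lie inside the unit circle:

        # Illustrative step-to-step stability check with placeholder matrices.
        import numpy as np

        A = np.array([[1.2, 0.3],
                      [0.1, 0.9]])       # assumed step-to-step error dynamics
        B = np.array([[0.5],
                      [1.0]])            # assumed effect of a footstep adjustment
        K = np.array([[0.6, 0.4]])       # assumed LQR gain

        eigvals = np.linalg.eigvals(A - B @ K)
        print("closed-loop eigenvalues:", eigvals)
        print("stable step-to-step dynamics:", np.all(np.abs(eigvals) < 1.0))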

    Design and Autonomous Stabilization of a Ballistically Launched Multirotor

    Get PDF
    Aircraft that can launch ballistically and convert to autonomous, free-flying drones have applications in many areas such as emergency response, defense, and space exploration, where they can gather critical situational data using onboard sensors. This paper presents a ballistically launched, autonomously stabilizing multirotor prototype (SQUID, Streamlined Quick Unfolding Investigation Drone) with an onboard sensor suite, an autonomy pipeline, and passive aerodynamic stability. We demonstrate an autonomous transition from passive to vision-based active stabilization, confirming the ability of the multirotor to autonomously stabilize after a ballistic launch in a GPS-denied environment. Comment: Accepted to the 2020 International Conference on Robotics and Automation
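    As a hedged illustration only, a passive-to-active handover could be structured as a small state machine like the one below; the trigger conditions (a minimum time since launch and near-zero vertical speed around apogee) are assumptions for the example, not the authors' rule:

        # Assumed passive-to-active transition logic for a ballistically launched multirotor.
        from enum import Enum, auto

        class Phase(Enum):
            BALLISTIC = auto()      # passive aerodynamic stabilization after launch
            SPIN_UP = auto()        # arms unfolded, motors ramping
            ACTIVE = auto()         # vision-based (GPS-denied) active stabilization

        def next_phase(phase, t_since_launch, vertical_speed, motors_ready):
            if phase is Phase.BALLISTIC and t_since_launch > 0.5 and abs(vertical_speed) < 1.0:
                return Phase.SPIN_UP      # near apogee: unfold and start motors
            if phase is Phase.SPIN_UP and motors_ready:
                return Phase.ACTIVE       # hand over to the onboard autonomy pipeline
            return phase

        # Usage: walk through a launch.
        phase = Phase.BALLISTIC
        phase = next_phase(phase, t_since_launch=1.2, vertical_speed=0.4, motors_ready=False)
        phase = next_phase(phase, t_since_launch=1.3, vertical_speed=-0.2, motors_ready=True)
        print(phase)   # Phase.ACTIVE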

    On-board Obstacle Avoidance in the Teleoperation of Unmanned Aerial Vehicles

    Get PDF
    The teleoperation of unmanned aerial vehicles (UAVs), especially in cramped, GPS-restricted environments, poses many challenges. The presence of obstacles in an unfamiliar environment requires reliable state estimation and active algorithms to prevent collisions. In this dissertation, we present a collision-free indoor navigation system for a teleoperated quadrotor UAV. The platform is equipped with an on-board miniature computer and a minimal set of sensors for this task and is self-sufficient with respect to external tracking systems and computation. The platform is capable of highly accurate state estimation, tracking of the velocity commanded by the user, and collision-free navigation. The robot estimates its state in a cascade architecture: the attitude of the platform is calculated with a complementary filter and its linear velocity through a Kalman-filter integration of inertial and optical-flow measurements. An RGB-D camera provides visual feedback to the operator and depth measurements to build a probabilistic, robot-centric obstacle state with a bin-occupancy filter. The algorithm keeps track of obstacles even when they leave the field of view of the sensor by updating their positions with the estimate of the robot's motion. The avoidance part of our navigation system is based on a Model Predictive Control approach: by predicting possible future obstacle states, the UAV filters the operator commands, altering them to prevent collisions. Experiments in obstacle-rich indoor environments and outdoor flights with natural obstacles such as trees validate the efficiency of the proposed setup.
    Flying robots are highly prone to damage in case of operating or computation errors, as these will most likely cause them to crash into the ground or into obstacles. The development of algorithms for UAVs therefore requires a considerable amount of time and resources. In this dissertation, we present two simulation methods, software-in-the-loop and hardware-in-the-loop simulation, to facilitate this process. Software-in-the-loop testing was used for the development and tuning of the state estimator, using both simulated sensors and pre-recorded datasets of sensor measurements, e.g., from real robotic experiments. With hardware-in-the-loop simulation, we can command the robot simulated in Gazebo, a popular open-source ROS-enabled physics simulator, using the computational units embedded on our quadrotor UAVs. Hence, we can test in simulation not only the correct execution of the algorithms but also their computational feasibility directly on the robot's hardware.
    Lastly, we analyze the influence of the robot's motion on the visual feedback provided to the operator. While some UAVs can carry mechanically stabilized camera equipment, weight limits or other constraints may make mechanical stabilization impractical. With a fixed camera, the video stream is often unsteady due to the multirotor's movement and can impair the operator's situation awareness. There has been significant research on stabilizing video by tracking image features to estimate the camera motion, which is then used to manipulate frames and stabilize the stream. We show that this process can be greatly simplified by using data from the UAV's on-board inertial measurement unit to stabilize the camera feed. Our results show that our algorithm successfully stabilizes the camera stream with the added benefit of requiring less computational power. We also propose a novel quadrotor design concept that decouples the vehicle's orientation from its lateral motion. In our design, the tilt angles of the propellers with respect to the quadrotor body are simultaneously controlled with two additional actuators by employing the parallelogram principle. After deriving the dynamic model of this design, we propose a controller for the platform based on feedback linearization. Simulation results confirm our theoretical findings, highlighting the improved motion capabilities of this novel design with respect to standard quadrotors.
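    The following minimal sketch (with assumed gains and noise levels, not the thesis code) shows the flavor of such a cascaded estimator: a complementary filter blends gyro integration with the accelerometer tilt estimate, and a one-dimensional Kalman filter fuses accelerometer-predicted velocity with optical-flow velocity measurements:

        # Assumed-parameter sketch of a cascaded attitude/velocity estimator.
        def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
            """Trust the gyro at high frequency, the accelerometer at low frequency."""
            return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

        class VelocityKF:
            """Scalar Kalman filter: predict with body acceleration, correct with flow."""
            def __init__(self, q=0.05, r=0.1):
                self.v, self.p = 0.0, 1.0   # velocity estimate and its variance
                self.q, self.r = q, r       # process and measurement noise

            def predict(self, accel, dt):
                self.v += accel * dt
                self.p += self.q

            def update(self, flow_velocity):
                k = self.p / (self.p + self.r)        # Kalman gain
                self.v += k * (flow_velocity - self.v)
                self.p *= (1.0 - k)

        # Usage over one time step.
        roll = complementary_filter(angle=0.0, gyro_rate=0.1, accel_angle=0.02, dt=0.01)
        kf = VelocityKF()
        kf.predict(accel=0.3, dt=0.01)
        kf.update(flow_velocity=0.05)
        print(roll, kf.v)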

    Implementation of control algorithm for mechanical image stabilization

    Get PDF
    Cameras mounted on boats and in other similar environments can be hard to use if waves and wind cause unwanted motions of the camera that disturb the desired image. This problem can be addressed by mechanical image stabilization, which is the goal of this thesis. The mechanical image stabilization is achieved by controlling two stepper motors in a pan-tilt-zoom (PTZ) camera provided by Axis Communications. Pan and tilt indicate that the camera can be rotated around two axes that are perpendicular to one another. The thesis begins with the problem of orientation estimation, i.e., estimating how the camera is oriented with respect to a fixed coordinate system. Sensor fusion is used to fuse accelerometer and gyroscope data into a better estimate. Both Kalman and complementary filters are investigated and compared for this purpose; the Kalman filter is used in the final implementation due to its better performance. To hold a desired camera orientation, a compensation generator, in this thesis called the reference generator, is used. The name comes from the fact that it provides reference signals for the pan and tilt motors in order to compensate for external disturbances. The generator gets information from both the pan and tilt encoders and the Kalman filter. The encoders provide the camera position relative to the camera’s own chassis. If the compensation signals, which also serve as reference values for the inner pan-tilt control, are tracked by the pan and tilt motors, disturbances are suppressed. In the control design, a model obtained from system identification is used. The design and control simulations were carried out in the MATLAB extensions Control System Designer and Simulink, and a PID controller was chosen. The final part of the thesis describes the results from experiments carried out with the real process, i.e., the camera mounted in different setups, including on a robotic arm simulating sea conditions. The results show that the pan motor manages to track reference signals up to the required frequency of 1 Hz, whereas the tilt motor only manages 0.5 Hz and thereby falls below the required frequency. The results nevertheless demonstrate that the concept of the thesis is feasible.
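    A hedged sketch of how the reference generator and the inner PID loop could fit together, with assumed signal names and gains: the Kalman filter supplies the chassis orientation in a fixed frame, the encoder gives the camera angle relative to its own chassis, and the PID drives the tilt motor toward the compensating reference:

        # Assumed-gain sketch of a compensation (reference) generator plus inner PID loop.
        class PID:
            def __init__(self, kp, ki, kd):
                self.kp, self.ki, self.kd = kp, ki, kd
                self.integral, self.prev_error = 0.0, 0.0

            def step(self, error, dt):
                self.integral += error * dt
                derivative = (error - self.prev_error) / dt
                self.prev_error = error
                return self.kp * error + self.ki * self.integral + self.kd * derivative

        def tilt_reference(desired_world_angle, chassis_world_angle):
            """Camera-relative angle that cancels the chassis motion (e.g. wave roll)."""
            return desired_world_angle - chassis_world_angle

        pid = PID(kp=4.0, ki=0.5, kd=0.05)
        dt = 0.01
        encoder_angle = 0.0                      # camera angle w.r.t. its own chassis
        for _ in range(3):
            chassis_angle = 0.1                  # from the Kalman filter (IMU fusion)
            ref = tilt_reference(0.0, chassis_angle)
            rate_cmd = pid.step(ref - encoder_angle, dt)
            encoder_angle += rate_cmd * dt       # crude stepper/plant stand-in
        print(encoder_angle)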

    Multi-Sensor Methods for Mobile Radar Motion Capture and Compensation.

    Get PDF
    Ph.D. Thesis. University of Hawaiʻi at Mānoa 2017

    Momentum Control with Hierarchical Inverse Dynamics on a Torque-Controlled Humanoid

    Full text link
    Hierarchical inverse dynamics based on cascades of quadratic programs have been proposed for the control of legged robots. They have important benefits, but to the best of our knowledge they have never been implemented on a torque-controlled humanoid, where model inaccuracies, sensor noise and real-time computation requirements can be problematic. Using a reformulation of existing algorithms, we propose a simplification of the problem that allows us to achieve real-time control. Momentum-based control is integrated into the task hierarchy, and an LQR design approach is used to compute the desired associated closed-loop behavior and improve performance. Extensive experiments on various balancing and tracking tasks show very robust performance in the face of unknown disturbances, even when the humanoid is standing on one foot. Our results demonstrate that hierarchical inverse dynamics together with momentum control can be efficiently used for feedback control under real robot conditions. Comment: 21 pages, 11 figures, 4 tables; in Autonomous Robots (2015)
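    The sketch below is a deliberately simplified illustration of strict task prioritization: each equality task J_i x = b_i is solved in the nullspace of the tasks above it using pseudo-inverses. The actual controller solves cascades of quadratic programs with inequality constraints; this least-squares version, with placeholder tasks, only shows the hierarchy idea:

        # Simplified prioritized least-squares hierarchy (equality tasks only).
        import numpy as np

        def solve_hierarchy(tasks, n):
            """tasks: list of (J, b) with decreasing priority; returns x of size n."""
            x = np.zeros(n)
            N = np.eye(n)                               # nullspace projector so far
            for J, b in tasks:
                JN = J @ N
                # Correct the residual of this task without disturbing higher ones.
                x = x + N @ np.linalg.pinv(JN) @ (b - J @ x)
                N = N @ (np.eye(n) - np.linalg.pinv(JN) @ JN)
            return x

        # Usage: priority 1 fixes the sum of the first two variables, priority 2
        # pulls the third toward 1 (placeholder "momentum" vs "posture" tasks).
        tasks = [(np.array([[1.0, 1.0, 0.0]]), np.array([2.0])),
                 (np.array([[0.0, 0.0, 1.0]]), np.array([1.0]))]
        print(solve_hierarchy(tasks, n=3))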
