
    The Probabilistic Robot Kinematics Model and Its Application to Sensor Fusion

    Robots with elasticity in structural components can suffer from undesired end-effector positioning imprecision that exceeds the accuracy requirements for successful manipulation. We present the Probabilistic Product-of-Exponentials robot model, a novel approach to kinematic modeling of robots. It not only captures the robot's deterministic geometry but also models time-varying and configuration-dependent errors probabilistically. Our robot model allows the errors to be propagated along the kinematic chain and their influence on the end-effector pose to be computed. We apply this model in the context of sensor fusion for manipulator pose correction on two different robotic systems. The results of a simulation study, as well as of an experiment, demonstrate that probabilistic, configuration-dependent error modeling of the robot kinematics is crucial for improving pose estimation results.
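
    As a rough illustration of the underlying mechanics (a minimal sketch under assumed values, not the authors' formulation), the following Python snippet computes product-of-exponentials forward kinematics for a toy two-joint arm and propagates zero-mean Gaussian joint-level errors to the end-effector position by Monte Carlo sampling; the twists, home pose, and error magnitude are all illustrative.

```python
import numpy as np
from scipy.linalg import expm

def twist_hat(xi):
    """Map a 6-vector twist (v, w) to its 4x4 se(3) matrix."""
    v, w = xi[:3], xi[3:]
    W = np.array([[0, -w[2], w[1]],
                  [w[2], 0, -w[0]],
                  [-w[1], w[0], 0]])
    T = np.zeros((4, 4))
    T[:3, :3], T[:3, 3] = W, v
    return T

def fk_poe(twists, M, q):
    """Product of exponentials: T = exp(hat(xi1) q1) ... exp(hat(xin) qn) M."""
    T = np.eye(4)
    for xi, qi in zip(twists, q):
        T = T @ expm(twist_hat(xi) * qi)
    return T @ M

# Illustrative planar arm: two revolute z-axis joints, the second at x = 0.5 m.
twists = [np.array([0.0, 0.0, 0.0, 0.0, 0.0, 1.0]),
          np.array([0.0, -0.5, 0.0, 0.0, 0.0, 1.0])]   # v = -w x p for p = (0.5, 0, 0)
M = np.eye(4)
M[0, 3] = 1.0                                          # home end-effector pose

q = np.array([0.3, -0.6])                              # nominal configuration
sigma_q = 0.005                                        # assumed 5 mrad joint-level error

# Monte Carlo propagation of joint errors to the end-effector position.
rng = np.random.default_rng(0)
samples = np.array([fk_poe(twists, M, q + sigma_q * rng.standard_normal(2))[:3, 3]
                    for _ in range(2000)])
print("end-effector position covariance:\n", np.cov(samples.T))
```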

    Surround-view Fisheye BEV-Perception for Valet Parking: Dataset, Baseline and Distortion-insensitive Multi-task Framework

    Surround-view fisheye perception in valet parking scenes is fundamental and crucial to autonomous driving. Environmental conditions in parking lots, such as imperfect lighting and opacity, differ from those in common public datasets and substantially impact perception performance. Most existing networks trained on public datasets generalize suboptimally to these valet parking scenes, and are further affected by fisheye distortion. In this article, we introduce a new large-scale fisheye dataset, the Fisheye Parking Dataset (FPD), to promote research on diverse real-world surround-view parking cases. Notably, our compiled FPD exhibits excellent characteristics for different surround-view perception tasks. In addition, we propose a real-time distortion-insensitive multi-task framework, the Fisheye Perception Network (FPNet), which improves surround-view fisheye BEV perception through an enhanced fisheye distortion operation and lightweight multi-task designs. Extensive experiments validate the effectiveness of our approach and the dataset's exceptional generalizability. Comment: 12 pages, 11 figures.
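
    FPNet's internals are not detailed in the abstract; purely as a hedged illustration of the shared-backbone multi-task pattern it mentions, here is a minimal PyTorch sketch with a common encoder and two lightweight task heads. Layer sizes, task choices, and names are placeholders, and the distortion-handling components of FPNet are not modeled.

```python
import torch
import torch.nn as nn

class ToyMultiTaskNet(nn.Module):
    """Shared encoder with separate lightweight heads; illustrative only,
    not the FPNet architecture."""
    def __init__(self, num_seg_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(                        # shared features
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, num_seg_classes, 1)    # BEV segmentation logits
        self.det_head = nn.Conv2d(64, 6, 1)                  # per-cell box parameters

    def forward(self, bev_input):
        feats = self.encoder(bev_input)
        return self.seg_head(feats), self.det_head(feats)

net = ToyMultiTaskNet()
seg, det = net(torch.randn(1, 3, 128, 128))                  # dummy BEV tensor
print(seg.shape, det.shape)                                  # (1,4,32,32) (1,6,32,32)
```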

    Robot Assisted Object Manipulation for Minimally Invasive Surgery

    Robotic systems play an increasingly important role in facilitating minimally invasive surgical treatments. In robot-assisted minimally invasive surgery, surgeons remotely control instruments from a console to perform operations inside the patient. However, despite the advanced technological status of surgical robots, fully autonomous systems with decision-making capabilities are not yet available. In 2017, Yang et al. proposed a structure to classify research efforts toward the autonomy achievable with surgical robots, identifying six levels: no autonomy, robot assistance, task autonomy, conditional autonomy, high autonomy, and full autonomy. All commercially available platforms in robot-assisted surgery remain at level 0 (no autonomy). Although increasing the level of autonomy remains an open challenge, its adoption could introduce multiple benefits, such as decreasing surgeons' workload and fatigue and ensuring a consistent quality of procedures. Ultimately, allowing surgeons to interpret the ample, intelligent information from the system will enhance surgical outcomes and reflect positively on both patients and society. Three main capabilities are required to introduce automation into surgery: the surgical robot must move with high precision, have motion planning capabilities, and understand the surgical scene. Beyond these, depending on the type of surgery, other aspects such as compliance and stiffness may play a fundamental role. This thesis addresses three technological challenges encountered when pursuing these goals in the specific case of robot-object interaction: first, how to overcome the inaccuracy of cable-driven systems when executing fine, precise movements; second, how to plan different tasks in dynamically changing environments; and third, how the understanding of a surgical scene can be used to solve more than one manipulation task. To address the first challenge, a control scheme relying on accurate calibration is implemented to execute the pick-up of a surgical needle. Regarding the planning of surgical tasks, two approaches are explored: learning from demonstration to pick and place a surgical object, and a gradient-based approach to trigger a smoother object repositioning phase during intraoperative procedures. Finally, to improve scene understanding, this thesis develops a simulation environment in which multiple tasks can be learned from the surgical scene and then transferred to the real robot. Experiments showed that automating the pick-and-place of different surgical objects is possible: the robot was able to autonomously pick up a suturing needle, position a surgical device for intraoperative ultrasound scanning, and manipulate soft tissue for intraoperative organ retraction. Although automation of surgical subtasks has been demonstrated in this work, several challenges remain open, such as the ability of the developed algorithms to generalise across different environmental conditions and different patients.

    Geometric Calibration of Rotating Multi-Beam Laser Scanner Systems

    The introduction of lightweight and low-cost multi-beam laser scanners provides ample opportunities in positioning and mapping, as well as in automation and robotics. The fields of view (FOV) of these sensors can be further expanded by actuation, for example by rotation. These rotating multi-beam lidar (RMBL) systems can quickly provide expansive coverage of the geometry of a space, but the nature of the sensors and their actuation leaves room for improvement in accuracy and precision. Geometric calibration methods addressing this problem have been proposed, and this thesis reviews a selection of them, evaluating their performance on data samples collected using a custom RMBL platform and six Velodyne multi-beam sensors (one VLP-16 Lite, four VLP-16s, and one VLP-32C). The calibration algorithms under inspection are unsupervised and data-based, and they are quantitatively compared to a target-based calibration performed against a high-accuracy reference point cloud obtained with a terrestrial laser scanner. The data-based calibration methods are automatic plane detection and fitting, a method based on local planarity, and a method based on the information-theoretic concept of information entropy. Of these, the plane-fitting and entropy-based measures of point cloud quality obtain the best calibration results.
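
    To give a concrete feel for the entropy-style quality measure (a sketch under synthetic assumptions, not the thesis implementation): a boresight error between two halves of a scan smears a planar surface, a k-nearest-neighbour entropy proxy grows with that smear, and a grid search for minimum entropy recovers the correction angle. The scene, noise level, and error below are made up.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_entropy(points, k=5):
    """Entropy proxy: mean log distance to the k-th nearest neighbour.
    Crisper (better-calibrated) clouds score lower."""
    d, _ = cKDTree(points).query(points, k=k + 1)   # first hit is the point itself
    return np.mean(np.log(d[:, k]))

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

rng = np.random.default_rng(0)
wall = np.c_[rng.uniform(0, 5, 4000), rng.uniform(0, 5, 4000), np.zeros(4000)]
wall += 0.01 * rng.standard_normal(wall.shape)      # sensor noise on a flat wall

true_err = np.deg2rad(2.0)                          # unknown boresight tilt
half_a, half_b = wall[::2], wall[1::2] @ rot_x(true_err).T

# Grid search for the correction angle minimizing entropy of the merged cloud.
angles = np.deg2rad(np.linspace(-4, 4, 81))
scores = [knn_entropy(np.vstack([half_a, half_b @ rot_x(-a).T])) for a in angles]
print("estimated tilt: %.2f deg" % np.rad2deg(angles[np.argmin(scores)]))
```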

    Planetary Rover Inertial Navigation Applications: Pseudo Measurements and Wheel Terrain Interactions

    Accurate localization is a critical component of any robotic system. During planetary missions, these systems are often limited by energy sources and slow spacecraft computers. Proprioceptive localization (e.g., using an inertial measurement unit and wheel encoders) without external aiding is insufficient for accurate localization, mainly because of the integrated, unbounded errors of inertial navigation solutions and the drifting position information from wheel encoders caused by wheel slippage. For this reason, planetary rovers often utilize exteroceptive (e.g., vision-based) sensors. On the one hand, localization with proprioceptive sensors is straightforward, computationally efficient, and continuous. On the other hand, using exteroceptive sensors for localization slows the rover's driving speed, reduces its traversal rate, and makes the system sensitive to terrain features. Given the advantages and disadvantages of both methods, this thesis pursues two objectives: first, improving proprioceptive localization performance without significant changes to rover operations; second, enabling an adaptive traversal rate based on wheel-terrain interactions while keeping localization reliable. To achieve the first objective, we utilize zero-velocity updates, zero-angular-rate updates, and the non-holonomicity of the rover to improve localization performance in a computationally efficient way, even with limited sensor availability. Pseudo-measurements generated from proprioceptive sensors while the rover is stationary, together with non-holonomic constraints while it is traversing, improve localization performance without significant changes to rover operations. Through this work, a substantial improvement in localization performance is observed without the aid of additional exteroceptive sensor information. To achieve the second objective, we investigate the relationship between the estimated localization uncertainty and wheel-terrain interactions through the slip ratio. This relationship is modeled with a Gaussian process time-series implementation using slippage estimates gathered while the rover is moving. The method then predicts when to switch from moving to stationary conditions by mapping the predicted slippage into a prediction of localization uncertainty. Instead of a periodic stopping framework, the method introduced in this work is a slip-aware localization method that stops the rover more frequently in high-slip terrains and less frequently in low-slip terrains, while keeping proprioceptive localization reliable.
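
    The zero-velocity pseudo-measurement idea can be sketched in a toy filter (illustrative only, not the thesis implementation): a 1D Kalman filter dead-reckons from noisy accelerometer data, and whenever the rover is known to be stationary, a zero-velocity update is applied as a direct measurement of velocity, which arrests the drift. All noise levels below are assumed.

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])      # state: [position, velocity]
Q = np.diag([1e-5, 1e-4])                  # process noise (illustrative)
H = np.array([[0.0, 1.0]])                 # ZUPT observes velocity directly
R = np.array([[1e-6]])                     # pseudo-measurement noise

x, P = np.zeros(2), np.eye(2) * 1e-3
rng = np.random.default_rng(1)

for step in range(600):
    accel = 0.05 * rng.standard_normal()   # rover actually stationary; noisy IMU
    x = F @ x + np.array([0.5 * dt**2, dt]) * accel   # dead-reckoning prediction
    P = F @ P @ F.T + Q

    if step % 50 == 49:                    # rover known to be stopped: apply ZUPT
        y = 0.0 - H @ x                    # innovation: true velocity is zero
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P

print("position drift: %.4f m, velocity: %.4f m/s" % (x[0], x[1]))
```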

    Autonomous model building using vision and manipulation

    Robotic systems often require models in order to control themselves successfully and to interact with the world. Models take many forms, including kinematic models to plan motions, dynamics models to understand the interaction of forces, and models of 3D geometry to check for collisions, to name but a few. Traditionally, models are provided to the robotic system by the designers that build the system. However, for long-term autonomy it becomes important for the robot to be able to build and maintain models of itself and of objects it might encounter. In this thesis, the argument for enabling robotic systems to autonomously build models is advanced and explored. The main contribution of this research is to show how a layered approach can be taken to building models: a robot, starting with a limited amount of information, can autonomously build a number of models, including a kinematic model that describes the robot's body and allows it to plan and perform future movements. Key to this incremental, autonomous approach is the use of exploratory actions: actions the robot can perform in order to gain more information, either about itself or about an object with which it is interacting. A method is then presented whereby a robot, after being powered on, can home its joints using vision alone; traditional methods such as absolute encoders or limit switches are not required. The ability to interact with objects in order to extract information is one of the main advantages a robotic system has over a purely passive system when attempting to learn about or build models of objects. In light of this, the next contribution of this research looks beyond the robot's body and presents methods with which a robot can autonomously build models of objects in the world around it. The first class of objects examined is flat-pack cardboard boxes, a class of articulated objects with a number of interesting properties. It is shown how exploratory actions can be used to build a model of a flat-pack cardboard box and to locate any hinges the box may have. Specifically, it is shown how, when interacting with an object, a robot can combine haptic feedback from force sensors with visual feedback from a camera to obtain more information from an object than would be possible using a single sensor modality. The final contribution of this research is a series of exploratory actions for a robotic text-reading system that allow text to be found on an object and read. The text-reading system highlights how models of objects can take many forms, from a representation of their physical extents to the text written on them.

    Self-Localization of Humanoid Robot in a Soccer Field

    Master's thesis (Master of Engineering).

    Insect inspired visual motion sensing and flying robots

    Flying insects are masters of visual motion sensing. They use dedicated motion-processing circuits at low energy and computational cost. Building on observations of insect visual guidance, we developed visual motion sensors and bio-inspired autopilots dedicated to flying robots. Optic-flow-based visuomotor control systems have been implemented on an increasingly large number of sighted autonomous robots. In this chapter, we present how we designed and constructed local motion sensors and how we implemented bio-inspired visual guidance schemes on board several micro-aerial vehicles. A hyperacute sensor, in which retinal micro-scanning movements are performed via a small piezo-bender actuator, was mounted onto a miniature aerial robot. The OSCAR II robot is able to track a moving target accurately by exploiting the micro-scanning movement imposed on its eye's retina. We also present two interdependent control schemes, one driving the eye's angular position relative to the robot and the other driving the robot's body angular position with respect to a visual target, without any knowledge of the robot's orientation in the global frame. This "steering-by-gazing" control strategy, implemented on this lightweight (100 g) miniature sighted aerial robot, demonstrates the effectiveness of this biomimetic visual/inertial heading control strategy.
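
    The elementary motion detector at the heart of such insect-inspired sensors can be written in a few lines; the following is a minimal Hassenstein-Reichardt correlator sketch, with a first-order low-pass filter standing in for the delay element. The stimulus, time constant, and gains are illustrative, and the snippet is not the sensor described in the chapter.

```python
import numpy as np

def emd_response(s1, s2, dt=1e-3, tau=10e-3):
    """Hassenstein-Reichardt correlator: low-pass (delay) each photoreceptor
    signal, cross-multiply with its neighbour, and subtract the mirror arm.
    Positive output indicates motion from receptor 1 toward receptor 2."""
    alpha = dt / (tau + dt)                 # first-order low-pass coefficient
    d1 = d2 = 0.0
    out = np.zeros(len(s1))
    for i in range(len(s1)):
        d1 += alpha * (s1[i] - d1)          # delayed copy of s1
        d2 += alpha * (s2[i] - d2)          # delayed copy of s2
        out[i] = d1 * s2[i] - d2 * s1[i]    # correlate and oppose
    return out

# Moving grating: receptor 2 sees the same sinusoid as receptor 1, phase-lagged.
t = np.arange(0.0, 0.5, 1e-3)
s1 = 1.0 + np.sin(2 * np.pi * 8 * t)
s2 = 1.0 + np.sin(2 * np.pi * 8 * t - 0.8)  # lag => motion from 1 toward 2
print("mean EMD output: %.3f (sign encodes direction)" % emd_response(s1, s2).mean())
```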

    Toward autonomous underwater mapping in partially structured 3D environments

    Submitted in partial fulfillment of the requirements for the degree of Master of Science at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, February 2014. Motivated by inspection of complex underwater environments, we have developed a system for multi-sensor SLAM utilizing both structured and unstructured environmental features. We present a system for deriving planar constraints from sonar data and jointly optimizing the vehicle and plane positions as nodes in a factor graph. We also present a system for outlier rejection and smoothing of 3D sonar data, and for generating loop-closure constraints based on the alignment of smoothed submaps. Our factor-graph SLAM backend combines loop-closure constraints from sonar data with detections of visual fiducial markers from camera imagery, and produces an online estimate of the full vehicle trajectory and landmark positions. We evaluate our technique on an inspection of a decommissioned aircraft carrier, as well as on synthetic data and controlled indoor experiments, demonstrating improved trajectory estimates and reduced reprojection error in the final 3D map.
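
    A minimal flavour of such a factor-graph backend (a sketch, not the thesis system): 1D poses connected by noisy odometry factors plus a single loop-closure factor, solved as a nonlinear least-squares problem with scipy. Real systems use SE(3) poses, plane and fiducial landmarks, and incremental solvers, none of which are modeled here.

```python
import numpy as np
from scipy.optimize import least_squares

# Ground truth: four 1 m steps forward, then one edge back to the start.
odom_true = np.array([1.0, 1.0, 1.0, 1.0, -4.0])
rng = np.random.default_rng(2)
odom = odom_true + 0.05 * rng.standard_normal(5)      # noisy odometry factors
loop = 0.0                                            # loop closure: pose5 == pose0

def residuals(x):
    poses = np.concatenate([[0.0], x])                # pose0 anchored at origin
    r_odom = (poses[1:] - poses[:-1] - odom) / 0.05   # whitened odometry residuals
    r_loop = (poses[5] - poses[0] - loop) / 0.01      # tight loop-closure factor
    return np.append(r_odom, r_loop)

x0 = np.cumsum(odom)                                  # dead-reckoned initial guess
sol = least_squares(residuals, x0)
print("dead-reckoned final pose: %+.3f m" % x0[-1])
print("optimized final pose:     %+.3f m" % sol.x[-1])
```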

    Simultaneous Localization and Calibration for Cooperative Radio Navigation

    Cooperative radio localization and navigation systems can be used in scenarios where the reception of global navigation satellite system (GNSS) signals is impossible or impaired. While the benefit of cooperation has been highlighted by many papers, calibration is not widely considered, though it is equally important in practice. Utilizing the signal propagation time requires group-delay or ranging-bias calibration, and estimating the direction of arrival (DoA) requires antenna response calibration. Often, calibration parameters are determined only once, before operation. However, they are influenced by, for example, changing temperatures of radio frequency (RF) components or changing surroundings of the antennas. To cope with this, we derive a cooperative simultaneous localization and calibration (SLAC) algorithm based on Bayesian filtering, which estimates antenna responses and ranging biases simultaneously with positions and orientations. Through simulations, we show that the calibration parameters can be estimated during operation without additional sensors. We further prove the practical applicability of SLAC by evaluating measurement data from robotic rovers. With SLAC, both ranging and DoA estimation performance are improved, resulting in better position and orientation estimation accuracy. SLAC is thus able to provide reliable calibration and to mitigate model mismatch. Finally, we discuss open research questions and possible extensions of SLAC.
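
    The core idea, augmenting the state with calibration parameters, can be illustrated with a toy filter (a sketch under assumed geometry and noise, not the paper's algorithm): an extended Kalman filter estimates a 2D position together with an unknown constant ranging bias from range measurements to known anchors.

```python
import numpy as np

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pos, true_bias = np.array([4.0, 3.0]), 0.8        # metres (assumed)

x = np.array([1.0, 1.0, 0.0])                          # state: [px, py, bias]
P = np.diag([25.0, 25.0, 1.0])
R = 0.05**2                                            # range noise variance
rng = np.random.default_rng(3)

for _ in range(50):                                    # repeated ranging rounds
    for a in anchors:
        z = np.linalg.norm(true_pos - a) + true_bias \
            + 0.05 * rng.standard_normal()             # biased, noisy range
        dhat = np.linalg.norm(x[:2] - a)
        H = np.array([(x[0] - a[0]) / dhat,            # Jacobian of range + bias
                      (x[1] - a[1]) / dhat, 1.0])
        S = H @ P @ H + R                              # innovation variance
        K = P @ H / S                                  # Kalman gain
        x = x + K * (z - (dhat + x[2]))                # scalar EKF update
        P = (np.eye(3) - np.outer(K, H)) @ P

print("position:", np.round(x[:2], 3), " ranging bias: %.3f m" % x[2])
```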