
    Cameras and Inertial/Magnetic Sensor Units Alignment Calibration

    Because inertial/magnetic measurements suffer from external acceleration interference and magnetic disturbance, they are usually fused with visual data for drift-free orientation estimation, which plays an important role in a wide variety of applications, ranging from virtual reality, robotics, and computer vision to biomotion analysis and navigation. However, before data fusion can be performed, an alignment calibration must determine the difference between the sensor coordinate system and the camera coordinate system. Since the orientation estimation performance of the inertial/magnetic sensor unit is immune to the choice of the inertial/magnetic sensor frame origin, we ignore the translational difference by assuming that the sensor and camera coordinate systems share the same origin, and focus on the rotational alignment difference only in this paper. By exploiting the intrinsic restrictions among the coordinate transformations, the rotational alignment calibration problem is formulated as a simplified hand–eye equation AX = XB (A, X, and B are all rotation matrices). A two-step iterative algorithm is then proposed to solve this simplified hand–eye calibration task. Detailed laboratory validation has been performed, and the experimental results illustrate the effectiveness of the proposed alignment calibration method.
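    The rotation-only equation AX = XB also admits a classical closed-form least-squares treatment. As a point of reference only (the paper's own two-step iterative algorithm is not reproduced here), the following sketch solves the rotational hand–eye problem via the axis-angle log map and an orthogonal Procrustes step; all names are illustrative.

```python
# Reference sketch for rotation-only hand-eye calibration AX = XB.
# This is a standard log-map least-squares baseline, NOT the paper's
# two-step iterative algorithm.
import numpy as np
from scipy.spatial.transform import Rotation


def solve_rotation_hand_eye(As, Bs):
    """Find rotation X such that A_i X = X B_i for measured pairs (A_i, B_i).

    A_i X = X B_i implies A_i = X B_i X^T, so the axis-angle (log-map)
    vectors satisfy alpha_i = X beta_i: an orthogonal Procrustes problem
    with a closed-form SVD solution.
    """
    alphas = np.array([Rotation.from_matrix(A).as_rotvec() for A in As])
    betas = np.array([Rotation.from_matrix(B).as_rotvec() for B in Bs])
    M = alphas.T @ betas                              # sum of alpha_i beta_i^T
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])    # enforce det(X) = +1
    return U @ D @ Vt


if __name__ == "__main__":
    X_true = Rotation.random(random_state=0).as_matrix()
    Bs = [Rotation.random(random_state=i + 1).as_matrix() for i in range(10)]
    As = [X_true @ B @ X_true.T for B in Bs]          # so that A X = X B
    print(np.allclose(solve_rotation_hand_eye(As, Bs), X_true))  # True
```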

    Self-Contained Calibration of an Elastic Humanoid Upper Body with a Single Head-Mounted RGB Camera

    When a humanoid robot performs a manipulation task, it first makes a model of the world using its visual sensors and then plans the motion of its body in this model. For this, precise calibration of the camera parameters and the kinematic tree is needed. Besides the accuracy of the calibrated model, the calibration process should be fast and self-contained, i.e., no external measurement equipment should be used. Therefore, we extend our prior work on calibrating the elastic upper body of DLR's Agile Justin by now using only its internal head-mounted RGB camera. We use simple visual markers at the ends of the kinematic chain and one in front of the robot, mounted on a pole, to get measurements for the whole kinematic tree. To ensure that the task-relevant Cartesian error at the end-effectors is minimized, we introduce virtual noise to fit our imperfect robot model, so that the pixel error is weighted more strongly the farther the marker is from the camera. This correction reduces the Cartesian error by more than 20%, resulting in a final accuracy of 3.9 mm on average and 9.1 mm in the worst case. This way, we achieve the same precision as in our previous work, where an external Cartesian tracking system was used.
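    To make the distance-dependent weighting concrete, here is a minimal sketch of the underlying pinhole-camera argument: a one-pixel error corresponds to a metric error that grows linearly with depth, so scaling each pixel residual by z/f turns the reprojection cost into an approximately Cartesian one. The function name, the weighting scheme, and the projection parameters are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of distance-weighted reprojection residuals: far markers
# get larger weights so the optimizer minimizes metric (Cartesian) error
# rather than raw pixel error. Assumed pinhole model; illustrative only.
import numpy as np


def weighted_pixel_residuals(observed_px, marker_pts_cam, fx, fy, cx, cy):
    """Reprojection residuals rescaled to meters via marker depth.

    observed_px:     (N, 2) detected marker centers in pixels
    marker_pts_cam:  (N, 3) model-predicted marker points in camera frame
    """
    X, Y, Z = marker_pts_cam.T
    projected = np.stack([fx * X / Z + cx, fy * Y / Z + cy], axis=1)
    residual_px = observed_px - projected
    # metric size of one pixel at each marker's depth: z / f
    weight = Z[:, None] / np.array([fx, fy])
    return weight * residual_px        # approximately in meters
```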

    Real-to-Sim: Deep Learning with Auto-Tuning to Predict Residual Errors using Sparse Data

    Achieving highly accurate kinematic or simulator models that are close to the real robot can facilitate model-based controls (e.g., model predictive control or linear-quadratic regulators) and model-based trajectory planning (e.g., trajectory optimization), and can decrease the amount of learning time necessary for reinforcement learning methods. Thus, the objective of this work is to learn the residual errors between a kinematic and/or simulator model and the real robot. This is achieved using auto-tuning and neural networks, where the parameters of a neural network are updated using an auto-tuning method that applies equations from an Unscented Kalman Filter (UKF) formulation. Using this method, we model these residual errors with only small amounts of data, a necessity as we improve the simulator/kinematic model by learning directly from hardware operation. We demonstrate our method on robotic hardware (e.g., a manipulator arm) and show that, with the learned residual errors, we can further close the reality gap between kinematic models, simulations, and the real robot.
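    The core idea, updating the weights of a neural network with UKF equations instead of gradient descent, can be sketched in a few lines. The tiny one-hidden-layer network, the scalar output, and the noise constants below are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch: treat the flattened network weights as the state of an
# Unscented Kalman Filter and update them from sparse (input, residual)
# pairs. Network size and noise levels are illustrative assumptions.
import numpy as np


def forward(w, x, n_in, n_h):
    """Tiny one-hidden-layer network with scalar output."""
    W1 = w[:n_in * n_h].reshape(n_h, n_in)
    b1 = w[n_in * n_h:n_in * n_h + n_h]
    W2 = w[n_in * n_h + n_h:-1].reshape(1, n_h)
    b2 = w[-1]
    return float(W2 @ np.tanh(W1 @ x + b1) + b2)


def ukf_weight_update(w, P, x, y, n_in, n_h, q=1e-5, r=1e-3, kappa=0.0):
    """One UKF measurement update of the weight vector w from scalar y."""
    n = w.size
    P = P + q * np.eye(n)                      # random-walk process noise
    S = np.linalg.cholesky((n + kappa) * P)
    sigmas = np.vstack([w, w + S.T, w - S.T])  # 2n+1 sigma points
    Wm = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    Wm[0] = kappa / (n + kappa)
    zs = np.array([forward(s, x, n_in, n_h) for s in sigmas])
    z_hat = Wm @ zs                            # predicted measurement
    Pzz = Wm @ (zs - z_hat) ** 2 + r           # innovation variance
    Pwz = (Wm * (zs - z_hat)) @ (sigmas - w)   # state-measurement covariance
    K = Pwz / Pzz                              # Kalman gain
    return w + K * (y - z_hat), P - np.outer(K, K) * Pzz

# usage: start from small random weights and P = 1e-2 * np.eye(w.size),
# then call ukf_weight_update once per sparse residual measurement.
```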

    Development of new intelligent autonomous robotic assistant for hospitals

    Continuous technological development in modern societies has increased the quality of life and average life-span of people. This imposes an extra burden on the current healthcare infrastructure, which also creates the opportunity for developing new, autonomous, assistive robots to help alleviate this extra workload. The research question explored the extent to which a prototypical robotic platform could be created and implemented in a hospital environment, with the aim of assisting the hospital staff with daily tasks such as guiding patients and visitors, following patients to ensure safety, and making deliveries to and from rooms and workstations. In terms of major contributions, this thesis outlines five domains of the development of an actual robotic assistant prototype. Firstly, a comprehensive schematic design is presented in which mechanical, electrical, motor control and kinematics solutions have been examined in detail. Next, a new method has been proposed for assessing the intrinsic properties of different flooring types using machine learning to classify mechanical vibrations. Thirdly, the technical challenge of enabling the robot to simultaneously map and localise itself in a dynamic environment has been addressed, whereby leg detection is introduced to ensure that, whilst mapping, the robot is able to distinguish between people and the background. The fourth contribution is the integration of geometric collision prediction into stabilised dynamic navigation methods, thus optimising the robot's ability to update its path plan in real time in a dynamic environment. Lastly, the problem of detecting gaze at long distances has been addressed by means of a new eye-tracking hardware solution which combines infra-red eye tracking and depth sensing. The research serves both to provide a template for the development of comprehensive mobile assistive-robot solutions, and to address some of the inherent challenges currently present in introducing autonomous assistive robots in hospital environments.
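    As a rough illustration of the flooring-type contribution (the thesis's exact feature set and classifier are not given in this abstract), a vibration window can be reduced to spectral band energies and fed to an off-the-shelf classifier:

```python
# Hedged sketch of vibration-based floor-type classification. The
# band-energy features and RandomForest model are assumptions for
# illustration, not the thesis's published pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def vibration_features(window, n_bands=16):
    """Log band energies of one 1-D accelerometer window."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(window.size))) ** 2
    bands = np.array_split(spectrum, n_bands)
    return np.log1p([band.sum() for band in bands])


def train_floor_classifier(windows, labels):
    """windows: iterable of 1-D vibration snippets; labels: floor type each."""
    X = np.array([vibration_features(w) for w in windows])
    return RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
```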

    Memory-Based Active Visual Search for Humanoid Robots


    Study on Perception-Action Scheme for Human-Robot Musical Interaction in Wind Instrumental Play

    Degree system: new; Report number: Kou 3337; Degree type: Doctor of Engineering; Date conferred: 2011/2/25; Waseda University degree record number: Shin 564