370 research outputs found
Humanoid odometric localization integrating kinematic, inertial and visual information
We present a method for odometric localization of humanoid robots using standard sensing equipment, i.e., a monocular camera, an inertial measurement unit (IMU), joint encoders and foot pressure sensors. Data from all these sources are integrated using the prediction-correction paradigm of the Extended Kalman Filter. Position and orientation of the torso, defined as the representative body of the robot, are predicted through kinematic computations based on joint encoder readings; an asynchronous mechanism triggered by the pressure sensors is used to update the placement of the support foot. The correction step of the filter uses as measurements the torso orientation, provided by the IMU, and the head pose, reconstructed by a VSLAM algorithm. The proposed method is validated on the humanoid NAO through two sets of experiments: open-loop motions aimed at assessing the accuracy of localization with respect to a ground truth, and closed-loop motions where the humanoid pose estimates are used in real time as feedback signals for trajectory control.
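To make the prediction-correction structure concrete, below is a minimal sketch, not the authors' implementation: it assumes a simplified planar torso state [x, y, yaw], and the inputs `kinematic_delta` (encoder-based odometric increment) and `imu_yaw` (IMU orientation measurement) are hypothetical stand-ins for the quantities described in the abstract.

```python
# Minimal EKF sketch of the prediction-correction scheme described above.
# Assumes a planar torso state [x, y, yaw]; inputs are hypothetical.
import numpy as np

class TorsoEKF:
    def __init__(self):
        self.x = np.zeros(3)          # torso pose [x, y, yaw]
        self.P = np.eye(3) * 1e-3     # state covariance

    def predict(self, kinematic_delta, Q=np.diag([1e-4, 1e-4, 1e-5])):
        """Propagate the torso pose with an odometric increment (dx, dy, dyaw)
        computed from joint encoders through the support-foot kinematic chain."""
        dx, dy, dyaw = kinematic_delta
        c, s = np.cos(self.x[2]), np.sin(self.x[2])
        self.x += np.array([c * dx - s * dy, s * dx + c * dy, dyaw])
        F = np.array([[1, 0, -s * dx - c * dy],
                      [0, 1,  c * dx - s * dy],
                      [0, 0,  1]])
        self.P = F @ self.P @ F.T + Q

    def correct_yaw(self, imu_yaw, R=np.array([[1e-4]])):
        """Correct the torso yaw with the orientation reported by the IMU."""
        H = np.array([[0.0, 0.0, 1.0]])
        innov = np.array([np.arctan2(np.sin(imu_yaw - self.x[2]),
                                     np.cos(imu_yaw - self.x[2]))])
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x += (K @ innov).ravel()
        self.P = (np.eye(3) - K @ H) @ self.P
```

A correction with the VSLAM head pose would follow the same update pattern with a different measurement Jacobian.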
Humanoid robot navigation: getting localization information from vision
In this article, we present our work to provide a navigation and localization system on a constrained humanoid platform, the NAO robot, without modifying the robot's sensors. First, we implement a simple and lightweight version of classical monocular Simultaneous Localization and Mapping (SLAM) algorithms, adapted to the CPU and camera quality, which turns out to be insufficient on the platform for the moment. From our work on keypoint tracking, we identify that some keypoints can still be tracked accurately at little cost, and use them to build a visual compass. This compass is then used to correct the robot's walk, because it makes it possible to control the robot's orientation accurately.
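The sketch below illustrates the visual-compass idea under a pure-rotation, pinhole-camera assumption; it is not the authors' implementation, and the focal length `fx` and the keypoint tracks are assumed inputs.

```python
# Illustrative keypoint-based visual compass under a pure-rotation assumption.
# Sign convention depends on the camera frame; treat as a sketch, not the paper's method.
import numpy as np

def visual_compass_yaw(prev_pts, curr_pts, fx):
    """Estimate the yaw change (rad) between two frames from tracked keypoints.
    prev_pts, curr_pts: (N, 2) arrays of matched pixel coordinates; fx in pixels."""
    du = curr_pts[:, 0] - prev_pts[:, 0]      # horizontal pixel displacement
    dyaw = np.arctan2(du, fx)                 # per-keypoint rotation estimate
    return float(np.median(dyaw))             # median rejects outlier tracks
```

The estimated heading can then drive a simple proportional correction of the commanded walk direction so that the robot's orientation stays on the reference.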
Simultaneous Parameter Calibration, Localization, and Mapping
The calibration parameters of a mobile robot play a substantial role in navigation tasks. Often these parameters are subject to variations that depend either on changes in the environment or on the load of the robot. In this paper, we propose an approach to simultaneously estimate a map of the environment, the position of the on-board sensors of the robot, and its kinematic parameters. Our method requires no prior knowledge about the environment and relies only on a rough initial guess of the parameters of the platform. The proposed approach estimates the parameters online and is able to adapt to non-stationary changes of the configuration. We tested our approach in simulated environments and on a wide range of real-world data using different types of robotic platforms.
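As a minimal illustration of online kinematic calibration, the sketch below fits differential-drive wheel radii to reference motion estimates (e.g. from the mapping front-end) by least squares. It is a simplified stand-in for the paper's joint graph-based formulation; all names and the known-baseline assumption are illustrative.

```python
# Least-squares sketch of kinematic parameter calibration from reference motion,
# assuming a differential drive with a known wheel baseline.
import numpy as np

def estimate_wheel_radii(wheel_ticks, reference_motion, baseline):
    """wheel_ticks:      (N, 2) angular displacements (rad) of left/right wheels
    reference_motion: (N, 2) per-interval [forward distance, heading change]
                      estimated independently, e.g. by the SLAM front-end
    baseline:         wheel separation (m), assumed known here"""
    phi_l, phi_r = wheel_ticks[:, 0], wheel_ticks[:, 1]
    # forward distance: d      = (r_l*phi_l + r_r*phi_r) / 2
    # heading change:   dtheta = (r_r*phi_r - r_l*phi_l) / baseline
    A = np.vstack([np.column_stack([phi_l / 2, phi_r / 2]),
                   np.column_stack([-phi_l / baseline, phi_r / baseline])])
    b = np.concatenate([reference_motion[:, 0], reference_motion[:, 1]])
    radii, *_ = np.linalg.lstsq(A, b, rcond=None)
    return radii  # [r_l, r_r]
```

Re-running such a fit over a sliding window of recent data is one simple way to track non-stationary changes of the configuration.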
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those requiring low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
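The sketch below shows one simple way to represent the event stream described above and to accumulate events over a short time window into a frame for use by conventional frame-based algorithms; the field names and window-based accumulation are illustrative assumptions, not a specific sensor API.

```python
# Illustrative event representation and time-window accumulation.
from dataclasses import dataclass
import numpy as np

@dataclass
class Event:
    t: float        # timestamp in seconds (microsecond resolution in practice)
    x: int          # pixel column
    y: int          # pixel row
    polarity: int   # +1 brightness increase, -1 brightness decrease

def accumulate_events(events, height, width, t_start, t_end):
    """Sum event polarities per pixel over [t_start, t_end) into a 2D frame."""
    frame = np.zeros((height, width), dtype=np.float32)
    for e in events:
        if t_start <= e.t < t_end:
            frame[e.y, e.x] += e.polarity
    return frame
```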
Construction of a Continuous Task Execution System for Humanoid Robots Based on Accumulated State Estimation
Degree type: Doctorate by coursework. Examination committee: (Chief examiner) Associate Professor Kei Okada (The University of Tokyo), Professor Yoshihiko Nakamura (The University of Tokyo), Professor Masayuki Inaba (The University of Tokyo), Professor Yasuo Kuniyoshi (The University of Tokyo), Associate Professor Wataru Takano (The University of Tokyo). University of Tokyo (東京大学)
Agent and object aware tracking and mapping methods for mobile manipulators
The age of intelligent machines is upon us. They exist in our factories, our warehouses, our military, our hospitals, on our roads, and on the moon. Most of these things we call robots. When placed in a controlled or known environment such as an automotive factory or a distribution warehouse, they perform their given roles with exceptional efficiency, achieving far more than is within reach of a humble human being. Despite the remarkable success of intelligent machines in such domains, they have yet to be deployed wholeheartedly into our homes. The missing link between the robots we have now and the robots that are soon to come to our houses is perception.
Perception as we mean it here refers to a level of understanding beyond the collection and aggregation of sensory data. Much of the available sensory information is noisy and unreliable: our homes contain many reflective surfaces, repeating textures on large flat surfaces, and many disruptive moving elements, including humans. These environments also change over time, with objects frequently moving within and between rooms.
This idea of change in an environment is fundamental to robotic applications, as in most cases we expect robots to be effectors of such change. We can identify two particular challenges that must be solved for robots to make the jump to less structured environments: how to manage noise and disruptive elements in observational data, and how to understand the world as a set of changeable elements (objects) which move over time within a wider environment. In this thesis we look at one possible approach to solving each of these problems.
For the first challenge we use proprioception aboard a robot with an articulated arm to handle difficult and unreliable visual data caused both by the robot and the environment. We use sensor data aboard the robot to improve the pose tracking of a visual system when the robot moves rapidly, with high jerk, or when observing a scene with little visual variation.
For the second challenge, we build a model of the world on the level of rigid objects, and relocalise them both as they change location between different sequences and as they move. We use semantics, image keypoints, and 3D geometry to register and align objects between sequences, showing how their position has moved between disparate observations.
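One way to realign a rigid object between two observations, given matched 3D keypoints, is a Kabsch/Procrustes fit of a rotation and translation; the sketch below shows this standard step as a stand-in for the semantics-plus-keypoints-plus-geometry registration described above, not the thesis implementation.

```python
# Rigid alignment of matched 3D object keypoints (Kabsch/Procrustes), as a sketch.
import numpy as np

def rigid_align(src_pts, dst_pts):
    """Find R, t minimising ||R @ src + t - dst|| over matched 3D points.
    src_pts, dst_pts: (N, 3) arrays of corresponding object keypoints."""
    src_c = src_pts.mean(axis=0)
    dst_c = dst_pts.mean(axis=0)
    H = (src_pts - src_c).T @ (dst_pts - dst_c)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t   # object pose change between the two observations
```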
MonoSLAM: Real-time single camera SLAM
Published version
Robust dense visual SLAM using sensor fusion and motion segmentation
Visual simultaneous localisation and mapping (SLAM) is an important technique for enabling mobile robots to navigate autonomously within their environments. Using cameras, robots reconstruct a representation of their environment and simultaneously localise themselves within it. A dense visual SLAM system produces a high-resolution and detailed reconstruction of the environment which can be used for obstacle avoidance or semantic reasoning.
State-of-the-art dense visual SLAM systems demonstrate robust performance and impressive accuracy in ideal conditions. However, these techniques rest on requirements which limit the extent to which they can be deployed in real applications. Fundamentally, they require constant scene illumination, smooth camera motion and no moving objects in the scene. Overcoming these requirements is not trivial, and significant effort is needed to make dense visual SLAM approaches more robust to real-world conditions.
The objective of this thesis is to develop dense visual SLAM systems which are more robust to visually challenging real-world conditions. For this, we leverage sensor fusion and motion segmentation for situations where camera data is unsuitable.
The first contribution is a visual SLAM system for the NASA Valkyrie humanoid robot which is robust to the robot's operation. It is based on a sensor fusion approach which combines visual SLAM and leg odometry to demonstrate increased robustness to illumination changes and fast camera motion.
Second, we research methods for robust visual odometry in the presence of moving objects. We propose a formulation for joint visual odometry and motion segmentation that demonstrates increased robustness in scenes with moving objects compared to state-of-the-art approaches.
We then extend this method using inertial information from a gyroscope to compare the contributions of motion segmentation and motion prior integration for robustness to scene dynamics. As part of this study, we provide a dataset recorded in scenes with different numbers of moving objects.
In conclusion, we find that both motion segmentation and motion prior integration are necessary for achieving significantly better results in real-world conditions. While motion priors increase robustness, motion segmentation increases the accuracy of the reconstruction results through filtering of moving objects. Edinburgh Centre for Robotics; Engineering and Physical Sciences Research Council (EPSRC).