
    Robot Vision in the Language of Geometric Algebra


    Visually-guided walking reference modification for humanoid robots

    Humanoid robots are expected to assist humans in the future. As for any mobile robot, autonomy is an invaluable feature for a humanoid interacting with its environment. Autonomy, along with components from artificial intelligence, requires information from sensors. Vision sensors are widely accepted as the richest source of information about a robot's surroundings. Visual information can be exploited in tasks ranging from object recognition, localization and manipulation to scene interpretation, gesture identification and self-localization. Any autonomous action of a humanoid pursuing a high-level goal requires the robot to move between arbitrary waypoints, and thus inevitably relies on its self-localization abilities. Because disturbances accumulate along the path, accurate waypoint navigation can only be achieved by gathering feedback information from the environment. This thesis proposes a path planning and correction method for bipedal walkers based on visual odometry. A stereo camera pair is used to find distinguishable 3D scene points and track them over time, in order to estimate the 6-degrees-of-freedom position and orientation of the robot. The algorithm is developed and assessed on a benchmarking stereo video sequence taken from a wheeled robot, and then tested in experiments with the humanoid robot SURALP (Sabanci University Robotic ReseArch Laboratory Platform).
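    The loop this abstract outlines (find stereo scene points, triangulate them, track them over time, estimate the 6-DoF motion) maps naturally onto OpenCV. The sketch below is a minimal illustration of one step of such a loop under assumed inputs (grayscale rectified frames, known projection matrices P1/P2, intrinsics K); the function and variable names are made up here and are not taken from the thesis.

```cpp
// Sketch of one stereo visual-odometry step; identifiers are illustrative.
#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat relativePoseStep(const cv::Mat& prevLeft, const cv::Mat& prevRight,
                         const cv::Mat& currLeft,
                         const cv::Mat& P1, const cv::Mat& P2, const cv::Mat& K)
{
    // 1. Find distinguishable scene points in the previous left frame.
    std::vector<cv::Point2f> ptsL;
    cv::goodFeaturesToTrack(prevLeft, ptsL, 500, 0.01, 10);

    // 2. Locate the same points in the right frame (stereo matching) and
    //    in the next left frame (temporal tracking).
    std::vector<cv::Point2f> ptsR, ptsNext;
    std::vector<uchar> okStereo, okTrack;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prevLeft, prevRight, ptsL, ptsR, okStereo, err);
    cv::calcOpticalFlowPyrLK(prevLeft, currLeft, ptsL, ptsNext, okTrack, err);

    // 3. Triangulate stereo correspondences into homogeneous 3D points.
    cv::Mat pts4D;
    cv::triangulatePoints(P1, P2, ptsL, ptsR, pts4D);
    pts4D.convertTo(pts4D, CV_32F);

    // Keep points that survived both stereo matching and temporal tracking.
    std::vector<cv::Point3f> obj;
    std::vector<cv::Point2f> img;
    for (int i = 0; i < pts4D.cols; ++i) {
        if (!okStereo[i] || !okTrack[i]) continue;
        float w = pts4D.at<float>(3, i);
        obj.emplace_back(pts4D.at<float>(0, i) / w,
                         pts4D.at<float>(1, i) / w,
                         pts4D.at<float>(2, i) / w);
        img.push_back(ptsNext[i]);
    }

    // 4. Robust PnP gives the 6-DoF motion between consecutive frames;
    //    concatenating these transforms over time yields the odometry.
    cv::Mat rvec, tvec, R;
    cv::solvePnPRansac(obj, img, K, cv::noArray(), rvec, tvec);
    cv::Rodrigues(rvec, R);

    cv::Mat T = cv::Mat::eye(4, 4, CV_64F);
    R.copyTo(T(cv::Rect(0, 0, 3, 3)));
    tvec.copyTo(T(cv::Rect(3, 0, 1, 3)));
    return T;  // relative pose of the current frame w.r.t. the previous one
}
```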

    Estimating Epipolar Geometry With The Use of a Camera Mounted Orientation Sensor

    Context: Image processing and computer vision are rapidly becoming commonplace, and the amount of information about a scene, such as 3D geometry, that can be obtained from one or more images is steadily increasing thanks to rising resolutions, the wider availability of imaging sensors and an active research community. In parallel, advances in hardware design and manufacturing allow devices such as gyroscopes, accelerometers, magnetometers and GPS receivers to be included alongside imaging devices at the consumer level.
    Aims: This work investigates the use of orientation sensors in computer vision as sources of data to aid image processing and the determination of a scene's geometry, in particular the epipolar geometry of a pair of images, and devises a hybrid methodology from two sets of previous works in order to exploit the information available from orientation sensors alongside data gathered from image processing techniques.
    Method: A readily available consumer-level orientation sensor was used alongside a digital camera to capture images of a set of scenes and record the orientation of the camera. The fundamental matrix of each image pair was calculated using a variety of techniques, both incorporating data from the orientation sensor and excluding it.
    Results: Some methodologies could not produce an acceptable fundamental matrix for certain image pairs. A method from the literature that used an orientation sensor always produced a result; however, in cases where the hybrid or purely computer-vision methods also produced a result, the sensor-only method was found to be the least accurate.
    Conclusion: The results show that an orientation sensor capturing information alongside an imaging device can improve both the accuracy and reliability of scene-geometry calculations; however, noise from the orientation sensor can limit this accuracy, and further research is needed to determine the magnitude of this problem and methods of mitigation.
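    As a rough illustration of the two estimation routes this abstract compares, the sketch below contrasts a purely image-based fundamental-matrix estimate with one assembled from a sensor-supplied relative rotation via F = K^{-T} [t]_x R K^{-1}. The names, and the assumption that a unit translation direction t has already been estimated from the images, are mine rather than the dissertation's.

```cpp
// Two routes to the fundamental matrix; identifiers are illustrative.
#include <opencv2/opencv.hpp>
#include <vector>

// Skew-symmetric matrix [t]_x such that [t]_x * v = t x v.
static cv::Mat skew(const cv::Mat& t)
{
    return (cv::Mat_<double>(3, 3) <<
                        0.0, -t.at<double>(2),  t.at<double>(1),
             t.at<double>(2),              0.0, -t.at<double>(0),
            -t.at<double>(1),  t.at<double>(0),             0.0);
}

// Route 1: image data only -- robust 8-point estimate with RANSAC.
cv::Mat fundamentalFromImages(const std::vector<cv::Point2f>& pts1,
                              const std::vector<cv::Point2f>& pts2)
{
    return cv::findFundamentalMat(pts1, pts2, cv::FM_RANSAC, 3.0, 0.99);
}

// Route 2: relative rotation R taken from the orientation sensor. With R
// fixed, only the translation direction t remains to be estimated from the
// images; then F = K^{-T} [t]_x R K^{-1}, and inlying correspondences
// satisfy x2^T F x1 ~= 0.
cv::Mat fundamentalFromSensor(const cv::Mat& K, const cv::Mat& R,
                              const cv::Mat& t)
{
    cv::Mat Kinv = K.inv();
    return Kinv.t() * skew(t) * R * Kinv;
}
```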

    3D points recover from stereo video sequences based on open CV 2.1 libraries

    Master's dissertation in Mechanical Engineering. The purpose of this study was to implement a C++ program, using the OpenCV image-processing library's algorithms and the Microsoft Visual Studio 2008 development environment, to perform camera calibration and calibration-parameter optimization, stereo rectification and stereo correspondence, and to recover sets of 3D points from a pair of synchronized video sequences obtained from a stereo configuration. The study comprised two pretest laboratory sessions and one intervention laboratory session. Measurements included setting up different stereo configurations with two Phantom v9.1 high-speed cameras to capture video sequences of a MELFA RV-2AJ robot executing a simple 3D path, and additionally capturing video sequences of a planar calibration object, moved by a person, to calibrate each stereo configuration. Significant improvements were made from the pretest to the intervention session in minimizing procedural errors and choosing the best camera capture settings. The cameras' intrinsic and extrinsic parameters, the stereo relations and the disparity-to-depth matrix were better estimated in the final measurements, and the recovered set of 3D points (the 3D path) proved similar to the robot's actual 3D path.
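    The calibration-to-reconstruction chain this abstract describes maps closely onto standard OpenCV calls. The sketch below uses the modern C++ API rather than the OpenCV 2.1-era functions the study itself used; all names are illustrative, and the calibration correspondences from the planar target are assumed to have been collected beforehand.

```cpp
// Sketch of calibration -> rectification -> correspondence -> 3D recovery.
#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat recover3DPoints(const std::vector<std::vector<cv::Point3f>>& objPts,
                        const std::vector<std::vector<cv::Point2f>>& imgPtsL,
                        const std::vector<std::vector<cv::Point2f>>& imgPtsR,
                        cv::Size imageSize,
                        const cv::Mat& frameL, const cv::Mat& frameR)
{
    // 1. Calibrate each camera individually (intrinsics + distortion) ...
    cv::Mat K1, D1, K2, D2;
    std::vector<cv::Mat> rvecs, tvecs;
    cv::calibrateCamera(objPts, imgPtsL, imageSize, K1, D1, rvecs, tvecs);
    cv::calibrateCamera(objPts, imgPtsR, imageSize, K2, D2, rvecs, tvecs);

    // ... then estimate the stereo relation (R, T) between the two cameras,
    // keeping the per-camera intrinsics fixed (the default behaviour).
    cv::Mat R, T, E, F;
    cv::stereoCalibrate(objPts, imgPtsL, imgPtsR, K1, D1, K2, D2,
                        imageSize, R, T, E, F);

    // 2. Stereo rectification: row-aligned epipolar lines plus the
    //    disparity-to-depth matrix Q mentioned in the abstract.
    cv::Mat R1, R2, P1, P2, Q;
    cv::stereoRectify(K1, D1, K2, D2, imageSize, R, T, R1, R2, P1, P2, Q);

    cv::Mat mxL, myL, mxR, myR, rectL, rectR;
    cv::initUndistortRectifyMap(K1, D1, R1, P1, imageSize, CV_32FC1, mxL, myL);
    cv::initUndistortRectifyMap(K2, D2, R2, P2, imageSize, CV_32FC1, mxR, myR);
    cv::remap(frameL, rectL, mxL, myL, cv::INTER_LINEAR);
    cv::remap(frameR, rectR, mxR, myR, cv::INTER_LINEAR);

    // 3. Stereo correspondence: dense disparity via semi-global matching.
    cv::Ptr<cv::StereoSGBM> sgbm = cv::StereoSGBM::create(0, 128, 5);
    cv::Mat disp16, disp;
    sgbm->compute(rectL, rectR, disp16);
    disp16.convertTo(disp, CV_32F, 1.0 / 16.0);  // SGBM output is fixed-point

    // 4. Reproject disparities through Q to get one 3D point per pixel.
    cv::Mat points3D;
    cv::reprojectImageTo3D(disp, points3D, Q);
    return points3D;
}
```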