
    A Featured-Based Strategy for Stereovision Matching in Sensors with Fish-Eye Lenses for Forest Environments

    This paper describes a novel feature-based stereovision matching process for pairs of omnidirectional images of forest stands acquired with a stereovision sensor equipped with fish-eye lenses. The stereo analysis problem consists of the following steps: image acquisition, camera modelling, feature extraction, image matching and depth determination. Once the depths of significant points on the trees are obtained, the growing stock volume can be estimated from the geometrical camera model, which is the final goal. The key steps are feature extraction and image matching, and this paper is devoted solely to these two. In the first stage, a segmentation process extracts the trunks, which are the regions used as features; each feature is identified by a set of attributes or properties useful for matching. In the second stage, the features are matched by applying four well-known matching constraints: epipolar, similarity, ordering and uniqueness. The combination of the segmentation and matching processes for this specific kind of sensor is the main contribution of the paper. The method is tested with satisfactory results and compared against the human expert criterion.
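    The four constraints above lend themselves to a compact sketch. The snippet below is a minimal, hypothetical illustration (not the authors' implementation) of how trunk features could be paired under the epipolar, similarity, ordering and uniqueness constraints; the feature attributes (row, area, intensity) and thresholds are assumed, and the simple row-difference test stands in for the true fish-eye epipolar geometry.

        def match_features(left_feats, right_feats,
                           max_row_shift=5, min_similarity=0.8):
            """Greedily match trunk regions detected in the left and right images.
            Features are assumed to be dicts sorted by image column."""
            matches, used_right = {}, set()
            last_right = -1                                     # for the ordering constraint
            for i, lf in enumerate(left_feats):
                best_j, best_score = None, 0.0
                for j, rf in enumerate(right_feats):
                    if j in used_right or j <= last_right:      # uniqueness + ordering
                        continue
                    if abs(lf["row"] - rf["row"]) > max_row_shift:  # crude epipolar test
                        continue
                    score = similarity(lf, rf)                  # similarity constraint
                    if score > best_score:
                        best_j, best_score = j, score
                if best_j is not None and best_score >= min_similarity:
                    matches[i] = best_j
                    used_right.add(best_j)
                    last_right = best_j
            return matches

        def similarity(lf, rf):
            # Attribute-based similarity in [0, 1]; real attributes would come
            # from the trunk segmentation stage (width, texture, intensity, ...).
            da = abs(lf["area"] - rf["area"]) / max(lf["area"], rf["area"])
            di = abs(lf["intensity"] - rf["intensity"]) / 255.0
            return 1.0 - 0.5 * (da + di)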

    Real Time UAV Altitude, Attitude and Motion Estimation from Hybrid Stereovision

    Knowledge of altitude, attitude and motion is essential for an Unmanned Aerial Vehicle during critical maneuvers such as landing and take-off. In this paper we present a hybrid stereoscopic rig composed of a fisheye and a perspective camera for vision-based navigation. In contrast to classical stereoscopic systems based on feature matching, we propose methods which avoid matching between hybrid views. A plane-sweeping approach is proposed for estimating altitude and detecting the ground plane. Rotation and translation are then estimated by decoupling: the fisheye camera contributes to evaluating attitude, while the perspective camera contributes to estimating the scale of the translation. The motion can be estimated robustly at the correct scale thanks to the knowledge of the altitude. We propose a robust, real-time, accurate, exclusively vision-based approach with an embedded C++ implementation. Although this approach removes the need for any non-visual sensors, it can also be coupled with an Inertial Measurement Unit.
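    The plane-sweeping idea can be sketched briefly. The following hypothetical Python fragment (the paper reports an embedded C++ implementation, not this code) sweeps a set of candidate ground-plane distances, warps one view onto the other through the homography each candidate induces, and keeps the most photo-consistent hypothesis; the intrinsics K_a and K_b, rotation R, unit translation direction t_dir and plane normal n are assumed to come from calibration, and with a unit-norm baseline the recovered distance is expressed relative to the baseline length.

        import numpy as np
        import cv2

        def sweep_altitude(img_a, img_b, K_a, K_b, R, t_dir, n, candidates):
            """Return the candidate plane distance whose induced warp of img_a
            onto img_b is most photo-consistent (grayscale images assumed)."""
            best_d, best_score = None, -np.inf
            for d in candidates:
                # homography induced by the plane n . X = d (standard formula)
                H = K_b @ (R - np.outer(t_dir, n) / d) @ np.linalg.inv(K_a)
                warped = cv2.warpPerspective(img_a, H, (img_b.shape[1], img_b.shape[0]))
                mask = warped > 0
                if not mask.any():
                    continue
                # zero-mean normalized correlation as the consistency score
                a = warped[mask].astype(np.float32)
                b = img_b[mask].astype(np.float32)
                a -= a.mean(); b -= b.mean()
                denom = float(np.linalg.norm(a) * np.linalg.norm(b))
                if denom == 0.0:
                    continue
                score = float(a @ b) / denom
                if score > best_score:
                    best_d, best_score = d, score
            return best_d, best_score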

    Under vehicle perception for high level safety measures using a catadioptric camera system

    In recent years, under-vehicle surveillance and vehicle classification have become indispensable tasks for security measures in certain areas such as shopping centers, government buildings and army camps. The main challenge is to monitor the underframes of the vehicles. In this paper, we present a novel solution to achieve this aim. Our solution consists of three main parts: monitoring, detection and classification. In the first part we design a new catadioptric camera system in which a perspective camera points downwards at a catadioptric mirror mounted on the body of a mobile robot. Thanks to the catadioptric mirror, scenes in the direction opposite the camera's optical axis can be viewed. In the second part we use Speeded-Up Robust Features (SURF) in an object recognition algorithm. In the third part, the Fast Appearance-Based Mapping algorithm (FAB-MAP) is exploited for vehicle classification. The proposed technique is implemented in a laboratory environment.
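    As a rough illustration of the recognition part, the sketch below detects and matches local features between a stored under-vehicle template and a catadioptric frame and counts ratio-test inliers as a recognition score. It uses ORB from stock OpenCV as a freely available stand-in for SURF (which requires the non-free opencv-contrib build); the thresholds are illustrative and the FAB-MAP classification stage is not reproduced.

        import cv2

        orb = cv2.ORB_create(nfeatures=1000)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING)

        def recognition_score(template_gray, frame_gray, ratio=0.75):
            """Count ratio-test matches between a template and a frame;
            a higher count means stronger evidence the object is present."""
            _, des1 = orb.detectAndCompute(template_gray, None)
            _, des2 = orb.detectAndCompute(frame_gray, None)
            if des1 is None or des2 is None:
                return 0
            good = 0
            for pair in matcher.knnMatch(des1, des2, k=2):
                if len(pair) < 2:
                    continue
                m, n = pair
                if m.distance < ratio * n.distance:   # Lowe's ratio test
                    good += 1
            return good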

    MINHO@home

    This paper briefly describes the development of a mobile robot to participate in RoboCup@Home. The focus of this project is to integrate robotic knowledge into home applications and human interaction. The robot can move in all directions thanks to its omnidirectional drive with three Swedish wheels at 120° angles, and it can handle objects using an articulated arm with six degrees of freedom. It incorporates several vision systems allowing the robot to recognize faces and objects and to move autonomously in a domestic environment. Voice recognition and speech capabilities are also present.
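    The three-wheel omnidirectional drive mentioned above admits a simple inverse-kinematics sketch: a body-frame twist is mapped to individual wheel speeds. The wheel angles follow the 120° spacing described in the abstract, while the base radius L and the example command are assumed values, not the robot's actual parameters.

        import math

        WHEEL_ANGLES = [math.radians(a) for a in (0.0, 120.0, 240.0)]
        L = 0.20   # distance from the base centre to each wheel, metres (assumed)

        def body_to_wheel_speeds(vx, vy, omega):
            """Map a body-frame twist (vx, vy in m/s, omega in rad/s)
            to the tangential speed of each Swedish wheel (m/s)."""
            return [-math.sin(a) * vx + math.cos(a) * vy + L * omega
                    for a in WHEEL_ANGLES]

        # Example: strafe at 0.3 m/s while rotating at 0.5 rad/s
        print(body_to_wheel_speeds(0.0, 0.3, 0.5))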

    Inter-Row Tree Detection and Tracking Schemes For Structural Plantation Area

    In this work, inter-row tree detection and tracking techniques based on the Simultaneous Localization and Mapping (SLAM) method are developed specifically for a well-structured agricultural field where the trees are planted uniformly at a fixed spacing, leaving a number of inter-row spaces. These rows create opportunities for an autonomous vehicle to navigate between the trees to perform plantation activities such as scouting, monitoring, rowing, pesticide spraying and others. A new approach to detecting landmarks and navigating in the farm with lightweight sensors and low computational effort is proposed. In this method, the tree detection and diameter estimation techniques implement a modified tree-triangle diameter technique based on infrared sensors. Then, to cope with GPS signal problems during navigation and localization, a curve-based navigation approach is formulated. The path is planned as a third-order (cubic) Bezier curve by projecting a series of waypoints to create a continuous path from one point to another, and a trajectory plan is derived for the autonomous vehicle to follow these waypoints during navigation. At the same time, the mapping technique uses a memory-utilization method to ease localization as well as landmark mapping in a visual map expressed in two-dimensional coordinates. All of these functions are created, formulated and tested thoroughly on an embedded microcontroller development board using a dsPIC30F6014A chip mounted on an omnidirectional vehicle platform.
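    The cubic (third-order) Bezier segment used for path planning is easy to sketch: four control points define the curve, and sampling it yields the series of waypoints the trajectory plan follows. The control points and sample count below are purely illustrative.

        import numpy as np

        def cubic_bezier(p0, p1, p2, p3, n_points=20):
            """Sample n_points waypoints along a 2-D cubic Bezier curve."""
            p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
            t = np.linspace(0.0, 1.0, n_points)[:, None]
            return ((1 - t) ** 3 * p0
                    + 3 * (1 - t) ** 2 * t * p1
                    + 3 * (1 - t) * t ** 2 * p2
                    + t ** 3 * p3)

        # Example: a smooth segment from one inter-row entry point to the next
        waypoints = cubic_bezier((0.0, 0.0), (1.0, 0.5), (3.0, 0.5), (4.0, 0.0))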

    Catadioptric stereo-vision system using a spherical mirror

    In the computer vision field, the reconstruction of target surfaces is usually achieved with 3D optical scanners that integrate digital cameras and light emitters. However, these solutions are limited by their narrow field of view, which requires multiple acquisitions from different views to reconstruct complex free-form geometries. The combination of mirrors and lenses (catadioptric systems) can be adopted to overcome this issue. In this work, a stereo catadioptric optical scanner has been developed by assembling two digital cameras, a spherical mirror and a multimedia white-light projector. The adopted configuration defines a non-single-viewpoint system, so a non-central catadioptric camera model has been developed. An analytical solution to compute the projection of a scene point onto the image plane (forward projection) and vice versa (backward projection) is presented. The proposed optical setup allows omnidirectional stereo vision, enabling the reconstruction of target surfaces with a single acquisition. Preliminary results, obtained by measuring a hollow specimen, demonstrate the effectiveness of the described approach.
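    The backward projection for a spherical mirror can be sketched geometrically: a pixel's viewing ray from the perspective camera is intersected with the mirror sphere and reflected about the local normal, giving the ray along which the scene point must lie. The fragment below is only an assumed, simplified illustration of that step (the paper's analytical forward projection is not reproduced); the intrinsics K, sphere centre and radius are calibration values taken as given.

        import numpy as np

        def backproject(pixel, K, sphere_center, sphere_radius):
            """Pixel -> reflection point on the mirror and reflected scene ray.
            The perspective camera sits at the origin of its own frame."""
            # 1. pixel -> unit viewing ray in the camera frame
            d = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
            d /= np.linalg.norm(d)

            # 2. ray-sphere intersection: |t*d - c|^2 = r^2, nearest root
            c = np.asarray(sphere_center, dtype=float)
            b = d @ c
            disc = b * b - (c @ c - sphere_radius ** 2)
            if disc < 0:
                return None, None                 # the ray misses the mirror
            t = b - np.sqrt(disc)
            p = t * d                             # reflection point on the mirror

            # 3. reflect the incoming ray about the surface normal at p
            n = (p - c) / sphere_radius
            r = d - 2.0 * (d @ n) * n
            return p, r / np.linalg.norm(r)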

    Panoramic Stereovision and Scene Reconstruction

    With the advancement of research in robotics and computer vision, an increasingly large number of applications require the understanding of a scene in three dimensions, and a variety of systems have been deployed for this purpose. This thesis explores a novel 3D imaging technique based on catadioptric cameras in a stereoscopic arrangement. A secondary subsystem stabilizes the rig in the event that the cameras become misaligned during operation. The system offers a clear advantage as a cost-effective alternative to current state-of-the-art systems that achieve the same goal of 3D imaging. The compromise lies in the quality of depth estimation, which can be overcome with a different imager and calibration. The result is a panoramic disparity map generated by the system.
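    As a final illustration, once the two catadioptric views have been unwrapped into aligned panoramic images, a standard block matcher yields a panoramic disparity map of the kind mentioned above. The sketch below assumes the unwrapping and rectification have already been done and uses illustrative semi-global matching parameters, not values from the thesis.

        import cv2

        def panoramic_disparity(left_pano_gray, right_pano_gray):
            """Compute a disparity map from two rectified panoramic views."""
            sgbm = cv2.StereoSGBM_create(
                minDisparity=0,
                numDisparities=64,        # must be a multiple of 16
                blockSize=7,
                P1=8 * 7 * 7,
                P2=32 * 7 * 7,
                uniquenessRatio=10,
            )
            disparity = sgbm.compute(left_pano_gray, right_pano_gray)
            return disparity.astype('float32') / 16.0   # SGBM output is fixed-point x16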