
    A Novel Algorithm of Distance Calculation Based-on Grid-Edge-Depth-Map and Gyroscope for Visually-Impaired

    This paper presents a new algorithm for determining the distance of an object in front of a stereo camera mounted on a helmet. Using a stereo camera with a Sum of Absolute Differences matcher and a Sobel edge detector, our previous Grid-Edge-Depth map algorithm could calculate object distances up to 500 cm. A problem arises when a visually impaired person uses the device, because the stereo camera angle is not fixed: the unspecified angle caused by the helmet's movement influences the distance calculation. The proposed process computes the distance from the Grid-Edge-Depth map while compensating for the variable camera angle, using the x-axis angle reported by a gyroscope sensor placed on the stereo camera and a trigonometric formula. The distances measured by the system under these variable angles were then compared with the actual distances. The test was carried out with three scenarios, which required the user to stand at a distance of 100 cm, 125 cm, and 150 cm from a table, chair, or wall, with 30 tests for each scenario. The results showed an average accuracy of 96.05% across the three scenarios, indicating that the device is feasible to implement.
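    The abstract does not state the exact trigonometric formula, but a common way to compensate a range reading from a tilted camera is to project the slant range onto the horizontal plane using the cosine of the tilt angle. The sketch below assumes that cosine correction; the function and parameter names (correct_distance, tilt_deg) are illustrative, not taken from the paper.

```python
import math

def correct_distance(measured_cm: float, tilt_deg: float) -> float:
    """Project a slant-range reading onto the horizontal plane.

    measured_cm : distance reported by the Grid-Edge-Depth map (slant range)
    tilt_deg    : camera tilt around the x-axis reported by the gyroscope
    """
    return measured_cm * math.cos(math.radians(tilt_deg))

# Example: a 150 cm slant reading taken while the helmet is tilted 15 degrees
print(round(correct_distance(150.0, 15.0), 1))  # ~144.9 cm horizontal distance
```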

    The Modelling of Stereoscopic 3D Scene Acquisition

    The main goal of this work is to find a suitable method for calculating the best setting of a stereo pair of cameras viewing a scene, so as to enable spatial imaging. The method is based on a geometric model of a stereo pair of cameras of the kind currently used for the acquisition of 3D scenes. Based on selectable camera parameters and object positions in the scene, the resulting model allows calculation of the parameters of the stereo image pair that influence the quality of spatial imaging. To present the properties of the model on a simple 3D scene, an interactive application was created that, in addition to setting the camera and scene parameters and displaying the calculated parameters, also displays the modelled scene using perspective views and the modelled stereo pair as anaglyph images. The resulting modelling method can be used in practice to determine appropriate parameters of the camera configuration from a known arrangement of the objects in the scene. Conversely, for a given camera configuration, it can determine appropriate geometrical limits for arranging the objects in the displayed scene. This ensures that the resulting stereoscopic recording will be of good quality and comfortable for the observer.
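    The abstract does not list the model's equations, but the core geometric relationship behind any such stereo model is that, for a parallel camera pair, image disparity is proportional to the baseline and focal length and inversely proportional to scene depth. A minimal sketch of that relation, with illustrative names and values, is shown below.

```python
def disparity_px(focal_px: float, baseline_m: float, depth_m: float) -> float:
    """Horizontal disparity (in pixels) of a point at depth_m, for a parallel
    stereo rig with the given focal length (pixels) and baseline (metres)."""
    return focal_px * baseline_m / depth_m

# Example: 1400 px focal length, 65 mm baseline, object 2 m from the cameras
print(round(disparity_px(1400.0, 0.065, 2.0), 1))  # 45.5 px
```

    Sweeping depth_m over the range of object positions in a scene gives the range of disparities the stereo pair produces, which is the kind of quantity such a model constrains when judging whether the resulting imaging will be comfortable to view.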

    Reliable fusion of ToF and stereo depth driven by confidence measures

    In this paper we propose a framework for the fusion of depth data produced by a Time-of-Flight (ToF) camera and a stereo vision system. Initially, the depth data acquired by the ToF camera are upsampled by an ad-hoc algorithm based on image segmentation and bilateral filtering. In parallel, a dense disparity map is obtained using the Semi-Global Matching stereo algorithm. Reliable confidence measures are extracted for both the ToF and stereo depth data. In particular, the ToF confidence also accounts for the mixed-pixel effect, and the stereo confidence accounts for the relationship between the pointwise matching costs and the cost obtained by the semi-global optimization. Finally, the two depth maps are synergistically fused by enforcing the local consistency of the depth data, accounting for the confidence of the two data sources at each location. Experimental results clearly show that the proposed method produces accurate high-resolution depth maps and outperforms the compared fusion algorithms.
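    The paper's fusion step enforces local consistency of the depth data, which is more elaborate than a per-pixel blend; purely as an illustration of confidence-driven fusion, a minimal per-pixel weighted average (hypothetical fuse_depth, not the authors' method) could look like this:

```python
import numpy as np

def fuse_depth(d_tof, d_stereo, c_tof, c_stereo, eps=1e-6):
    """Confidence-weighted per-pixel blend of two depth maps (same shape).

    d_* are per-pixel depth estimates, c_* are per-pixel confidences in [0, 1].
    Where both confidences are ~0, the ToF estimate is kept as a fallback.
    """
    w = c_tof + c_stereo
    fused = (c_tof * d_tof + c_stereo * d_stereo) / np.maximum(w, eps)
    return np.where(w > eps, fused, d_tof)

# Tiny example: stereo is trusted at the first pixel, ToF at the second
d_tof    = np.array([2.0, 3.0]); c_tof    = np.array([0.2, 0.9])
d_stereo = np.array([2.4, 3.5]); c_stereo = np.array([0.8, 0.1])
print(fuse_depth(d_tof, d_stereo, c_tof, c_stereo))  # [2.32 3.05]
```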

    Structured Light-Based 3D Reconstruction System for Plants.

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection, and less than 13 mm of error for plant size, leaf size and internode distance.
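    To make the reported detection scores concrete: precision is the fraction of detected leaves that are real, and recall is the fraction of real leaves that are detected. The counts below are illustrative only (not from the paper) and are chosen to reproduce similar scores.

```python
def precision_recall(true_pos: int, false_pos: int, false_neg: int):
    """Standard detection metrics computed from per-leaf counts."""
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return precision, recall

# Hypothetical counts: 89 correctly detected leaves, 11 spurious detections,
# 3 leaves missed by the detector.
p, r = precision_recall(89, 11, 3)
print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.89, recall=0.97
```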

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces, and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion of technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.