5,877 research outputs found

    Occlusions as a Guide for Planning the Next View

    Get PDF
    To resolve the ambiguities caused by occlusions in images, we need to take sensor measurements from several different views. This paper presents a strategy for acquiring 3-D data of an unknown scene. We must first answer the question: what knowledge is adequate to perform a specific task? In the spirit of purposive vision, a system does not need to understand the complete scene to accomplish its task; it must only recognize the patterns and situations necessary for accomplishing it. We limit ourselves to range images obtained by a light stripe range finder. The a priori knowledge given to the system is the sensor geometry. The foci of attention are occluded regions, i.e., only the scene at the borders of the occlusions is modeled to compute the next move. Since the system knows the sensor geometry, it can resolve the appearance of occlusions by analyzing them. The problem of 3-D data acquisition is divided into two subproblems corresponding to two types of occlusion: an occlusion arises either when the reflected laser light does not reach the camera or when the directed laser light does not reach the scene surface. After taking a range image of the scene, the regions with no data due to the first kind of occlusion are extracted. The missing data are acquired by rotating the sensor system in the scanning plane, which is defined by the first scan. After a complete image of the surface illuminated from the first scanning plane has been built, the regions of missing data due to the second kind of occlusion are located. Then the directions of the next scanning planes for further 3-D data acquisition are computed.
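    The abstract above describes a procedure rather than code. As a rough illustration only, the Python sketch below mimics the two-stage strategy: find regions with no range data, attribute them to one of the two occlusion types, and plan either an in-plane sensor rotation or new scanning-plane directions. All function names and the placeholder rule for classifying occlusions are assumptions for this sketch, not the authors' implementation, which relies on the actual light-stripe sensor geometry.

```python
import numpy as np

def occlusion_mask(range_image):
    """Pixels with no range data (non-finite depth) are treated as occluded."""
    return ~np.isfinite(range_image)

def classify_occlusions(mask):
    """Split missing-data pixels into the two occlusion types.

    Placeholder rule: a real system would use the known sensor geometry
    (laser and camera directions) to decide which kind of shadow each
    region is; here the left half of the image stands in for camera-side
    occlusion and the right half for laser-side occlusion."""
    cols = np.arange(mask.shape[1])[None, :]
    camera_side = mask & (cols < mask.shape[1] // 2)
    laser_side = mask & ~camera_side
    return camera_side, laser_side

def plan_next_views(range_image, scan_plane_normal):
    """Two-stage next-view planning driven by the two occlusion types."""
    views = []
    camera_side, laser_side = classify_occlusions(occlusion_mask(range_image))

    # Stage 1: reflected laser light blocked from the camera -> rotate the
    # sensor within the scanning plane defined by the first scan.
    if camera_side.any():
        views.append(("rotate_in_scanning_plane", tuple(scan_plane_normal)))

    # Stage 2: directed laser light blocked from the surface -> derive the
    # directions of new scanning planes from the borders of those regions.
    if laser_side.any():
        border_rows = np.unique(np.argwhere(laser_side)[:, 0])
        for r in border_rows[:3]:  # a few candidate directions, for illustration
            views.append(("new_scanning_plane", ("border_row", int(r))))

    return views

if __name__ == "__main__":
    img = np.full((4, 6), 1.0)
    img[1:3, 2:5] = np.nan  # a block of missing data standing in for an occlusion
    print(plan_next_views(img, scan_plane_normal=(0.0, 0.0, 1.0)))
```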

    Toward automated earned value tracking using 3D imaging tools

    Get PDF

    Autonomous Sweet Pepper Harvesting for Protected Cropping Systems

    Full text link
    In this letter, we present a new robotic harvester (Harvey) that can autonomously harvest sweet pepper in protected cropping environments. Our approach combines effective vision algorithms with a novel end-effector design to enable successful harvesting of sweet peppers. Initial field trials in protected cropping environments, with two cultivars, demonstrate the efficacy of this approach, achieving a 46% success rate for unmodified crop and 58% for modified crop. Furthermore, for the more favourable cultivar we were also able to detach 90% of sweet peppers, indicating that improvements in the grasping success rate would result in greatly improved harvesting performance.
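    As a purely illustrative back-of-the-envelope check, and assuming the detachment and harvest figures refer to comparable trial conditions (which the abstract does not state explicitly), the snippet below shows why grasping appears to be the bottleneck: if 90% of peppers are detached but only 58% are successfully harvested, the remaining grasp/retention stages succeed only about 64% of the time.

```python
# Illustrative only: treats harvesting as detachment followed by successful
# grasp retention, which is our reading of the abstract, not a stated model.
detach_rate = 0.90    # peppers detached (more favourable cultivar)
harvest_rate = 0.58   # peppers successfully harvested (modified crop)

grasp_given_detach = harvest_rate / detach_rate
print(f"implied grasp/retention success: {grasp_given_detach:.0%}")  # ~64%
```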

    Exploring the Design Space of Immersive Urban Analytics

    Full text link
    Recent years have witnessed the rapid development and wide adoption of immersive head-mounted devices, such as HTC VIVE, Oculus Rift, and Microsoft HoloLens. These immersive devices have the potential to significantly extend the methodology of urban visual analytics by providing critical 3D context information and creating a sense of presence. In this paper, we propose a theoretical model to characterize the visualizations in immersive urban analytics. Furthermore, based on our comprehensive and concise model, we contribute a typology of methods for combining 2D and 3D visualizations that distinguishes between linked views, embedded views, and mixed views. We also propose a supporting guideline to assist users in selecting a proper view under certain circumstances by considering the visual geometry and spatial distribution of the 2D and 3D visualizations. Finally, based on existing work, possible future research opportunities are explored and discussed. Comment: 23 pages, 11 figures
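    The typology of linked, embedded, and mixed views lends itself to a small data-model sketch. The Python sketch below is only an illustration of the idea: the three category names come from the abstract, while the two boolean properties and the selection heuristic are invented here and are far simpler than the guideline proposed in the paper.

```python
from dataclasses import dataclass
from enum import Enum, auto

class ViewCombination(Enum):
    """Ways of combining 2D and 3D visualizations, per the paper's typology."""
    LINKED = auto()    # 2D and 3D shown separately, coordinated by interaction
    EMBEDDED = auto()  # 2D charts placed inside the 3D urban scene
    MIXED = auto()     # 2D and 3D blended into a single hybrid view

@dataclass
class VisSpec:
    planar_geometry: bool     # is the 2D content essentially flat/abstract?
    colocated_in_space: bool  # does the 2D data share the 3D scene's locations?

def suggest_combination(spec: VisSpec) -> ViewCombination:
    """Toy selection rule loosely inspired by the guideline of weighing
    visual geometry and spatial distribution; invented for illustration."""
    if not spec.colocated_in_space:
        return ViewCombination.LINKED
    return ViewCombination.EMBEDDED if spec.planar_geometry else ViewCombination.MIXED

print(suggest_combination(VisSpec(planar_geometry=True, colocated_in_space=True)))
```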