
    Stereo disparity facilitates view generalization during shape recognition for solid multipart objects

    Current theories of object recognition in human vision make different predictions about whether the recognition of complex, multipart objects should be influenced by shape information about surface depth orientation and curvature derived from stereo disparity. We examined this issue in five experiments using a recognition memory paradigm in which observers (N = 134) memorized and then discriminated sets of 3D novel objects at trained and untrained viewpoints under either mono or stereo viewing conditions. In order to explore the conditions under which stereo-defined shape information contributes to object recognition, we systematically varied the difficulty of view generalization by increasing the angular disparity between trained and untrained views. In one series of experiments, objects were presented from either previously trained views or untrained views rotated (15°, 30°, or 60°) along the same plane. In separate experiments we examined whether view generalization effects interacted with the vertical or horizontal plane of object rotation across 40° viewpoint changes. The results showed robust viewpoint-dependent performance costs: Observers were more efficient in recognizing learned objects from trained than from untrained views, and recognition was worse for extrapolated than for interpolated untrained views. We also found that performance was enhanced by stereo viewing but only at larger angular disparities between trained and untrained views. These findings show that object recognition is not based solely on 2D image information but that it can be facilitated by shape information derived from stereo disparity.
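    To make the viewpoint manipulation concrete, the sketch below (not the authors' stimulus code) generates untrained viewpoints at fixed angular offsets from a trained view using standard rotation matrices; the axis labels and Python/NumPy framing are illustrative assumptions.

    # Illustrative sketch only: untrained viewpoints as rotations of a trained
    # view, mirroring the 15/30/60-degree same-plane rotations and the
    # 40-degree rotations in the vertical or horizontal plane described above.
    import numpy as np

    def rotation_matrix(axis, angle_deg):
        """Rotation about the vertical axis ('y', horizontal-plane change) or a
        horizontal axis ('x', vertical-plane change)."""
        a = np.deg2rad(angle_deg)
        c, s = np.cos(a), np.sin(a)
        if axis == "y":
            return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
        if axis == "x":
            return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
        raise ValueError(axis)

    trained_view = np.eye(3)  # orientation at the trained viewpoint (assumed canonical)
    untrained_views = {
        (axis, angle): rotation_matrix(axis, angle) @ trained_view
        for axis in ("x", "y")
        for angle in (15, 30, 40, 60)
    }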

    Robust 3-Dimensional Object Recognition using Stereo Vision and Geometric Hashing

    We propose a technique that combines geometric hashing with stereo vision. The idea is to use the robustness of geometric hashing to spurious data to overcome the correspondence problem, while the stereo vision setup enables direct model matching using the 3-D object models. Furthermore, because the matching technique relies on the relative positions of local features, we should be able to perform robust recognition even with partially occluded objects. We tested this approach with simple geometric objects using a corner point detector. We successfully recognized objects even in scenes where the objects were partially occluded by other objects. For complicated scenes, however, the limited set of model features and the required amount of computing time sometimes became a problem.
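    A minimal sketch of the geometric-hashing idea for 3-D point features (such as the corner points mentioned above) follows. Offline, each ordered triplet of model points defines a local frame and the remaining points are hashed by their quantized coordinates in that frame; online, scene triplets vote for (model, basis) entries, so recognition can survive partial occlusion. The quantization step, data structures, and Python framing are assumptions, not the authors' implementation.

    # Sketch of geometric hashing over 3-D feature points (e.g., stereo corner points).
    # Brute-force over all triplets; intended only to illustrate the voting scheme.
    from collections import defaultdict
    from itertools import permutations
    import numpy as np

    QUANT = 0.05  # hash-bin size (assumed)

    def basis_from_triplet(p0, p1, p2):
        """Orthonormal frame defined by three non-collinear 3-D points, or None if degenerate."""
        e1 = p1 - p0
        n1 = np.linalg.norm(e1)
        if n1 < 1e-9:
            return None
        e1 = e1 / n1
        e2 = p2 - p0
        e2 = e2 - e1 * np.dot(e1, e2)
        n2 = np.linalg.norm(e2)
        if n2 < 1e-9:
            return None  # collinear triplet
        e2 = e2 / n2
        return p0, np.stack([e1, e2, np.cross(e1, e2)])  # origin and 3x3 rotation

    def hash_key(point, origin, frame):
        local = frame @ (point - origin)
        return tuple(np.round(local / QUANT).astype(int))

    def build_table(models):
        """models: dict name -> (N, 3) array of model feature points."""
        table = defaultdict(list)
        for name, pts in models.items():
            for i, j, k in permutations(range(len(pts)), 3):
                basis = basis_from_triplet(pts[i], pts[j], pts[k])
                if basis is None:
                    continue
                origin, frame = basis
                for m, p in enumerate(pts):
                    if m not in (i, j, k):
                        table[hash_key(p, origin, frame)].append((name, (i, j, k)))
        return table

    def recognize(scene_pts, table):
        """Return the (model, basis) entry with the most votes from scene triplets."""
        votes = defaultdict(int)
        for i, j, k in permutations(range(len(scene_pts)), 3):
            basis = basis_from_triplet(scene_pts[i], scene_pts[j], scene_pts[k])
            if basis is None:
                continue
            origin, frame = basis
            for m, p in enumerate(scene_pts):
                if m in (i, j, k):
                    continue
                for entry in table.get(hash_key(p, origin, frame), ()):
                    votes[entry] += 1
        return max(votes, key=votes.get) if votes else None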

    Stereo viewing modulates three-dimensional shape processing during object recognition: a high-density ERP study

    The role of stereo disparity in the recognition of 3-dimensional (3D) object shape remains an unresolved issue for theoretical models of the human visual system. We examined this issue using high-density (128 channel) recordings of event-related potentials (ERPs). A recognition memory task was used in which observers were trained to recognize a subset of complex, multipart, 3D novel objects under conditions of either (bi-) monocular or stereo viewing. In a subsequent test phase they discriminated previously trained targets from untrained distractor objects that shared either local parts, 3D spatial configuration, or neither dimension, across both previously seen and novel viewpoints. The behavioral data showed a stereo advantage for target recognition at untrained viewpoints. ERPs showed early differential amplitude modulations to shape similarity defined by local part structure and global 3D spatial configuration. This occurred initially during an N1 component around 145–190 ms poststimulus onset, and then subsequently during an N2/P3 component around 260–385 ms poststimulus onset. For mono viewing, amplitude modulation during the N1 was greatest between targets and distractors with different local parts for trained views only. For stereo viewing, amplitude modulation during the N2/P3 was greatest between targets and distractors with different global 3D spatial configurations and generalized across trained and untrained views. The results show that image classification is modulated by stereo information about the local part structure and the global 3D spatial configuration of object shape. The findings challenge current theoretical models that do not attribute functional significance to stereo input during the computation of 3D object shape.
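    As a rough illustration of the component analysis described above, the sketch below averages baseline-corrected epoch amplitudes within the reported N1 (145-190 ms) and N2/P3 (260-385 ms) windows; the array layout, sampling rate, and baseline length are assumptions, not details taken from the study.

    # Illustrative only: window-averaged ERP amplitudes for the two reported components.
    import numpy as np

    FS = 500.0   # sampling rate in Hz (assumed)
    T0 = 0.100   # seconds of pre-stimulus baseline per epoch (assumed)

    def window_mean(epochs, start_ms, end_ms):
        """epochs: (n_trials, n_channels, n_samples) baseline-corrected array.
        Returns the mean amplitude per trial and channel in the latency window."""
        start = int((T0 + start_ms / 1000.0) * FS)
        end = int((T0 + end_ms / 1000.0) * FS)
        return epochs[:, :, start:end].mean(axis=2)

    # e.g., compare target vs. distractor amplitudes in the N1 window:
    # n1_targets = window_mean(target_epochs, 145, 190)
    # n1_distractors = window_mean(distractor_epochs, 145, 190)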

    Long-range concealed object detection through active covert illumination

    © 2015 SPIE. When capturing a scene for surveillance, the addition of rich 3D data can dramatically improve the accuracy of object detection or face recognition. Traditional 3D techniques, such as geometric stereo, only provide a coarse-grained reconstruction of the scene and are ill-suited to fine analysis. Photometric stereo is a well-established technique providing dense, high-resolution reconstructions, using active artificial illumination of an object from multiple directions to gather surface information. It is typically used indoors, at short range.
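    For reference, classical Lambertian photometric stereo recovers an albedo-scaled surface normal per pixel by solving I = L g in a least-squares sense from images taken under known illumination directions. The sketch below illustrates that baseline technique only; it is not the long-range covert-illumination system described in the paper, and the variable names are illustrative.

    # Minimal Lambertian photometric-stereo sketch: per-pixel normals and albedo
    # from k images under known, distant illumination directions.
    import numpy as np

    def photometric_stereo(images, light_dirs):
        """images: (k, H, W) grayscale stack; light_dirs: (k, 3) unit vectors.
        Solves I = L @ g per pixel, where g = albedo * normal."""
        k, H, W = images.shape
        I = images.reshape(k, -1)                               # (k, H*W)
        G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)      # (3, H*W)
        albedo = np.linalg.norm(G, axis=0)
        normals = G / np.maximum(albedo, 1e-8)                  # unit normals where defined
        return normals.reshape(3, H, W), albedo.reshape(H, W)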

    Fast Multi-frame Stereo Scene Flow with Motion Segmentation

    We propose a new multi-frame method for efficiently computing scene flow (dense depth and optical flow) and camera ego-motion for a dynamic scene observed from a moving stereo camera rig. Our technique also segments out moving objects from the rigid scene. In our method, we first estimate the disparity map and the 6-DOF camera motion using stereo matching and visual odometry. We then identify regions inconsistent with the estimated camera motion and compute per-pixel optical flow only at these regions. This flow proposal is fused with the camera motion-based flow proposal using fusion moves to obtain the final optical flow and motion segmentation. This unified framework benefits all four tasks (stereo, optical flow, visual odometry, and motion segmentation), leading to overall higher accuracy and efficiency. Our method is currently ranked third on the KITTI 2015 scene flow benchmark. Furthermore, our CPU implementation runs in 2-3 seconds per frame, which is 1-3 orders of magnitude faster than the top six methods. We also report a thorough evaluation on challenging Sintel sequences with fast camera and object motion, where our method consistently outperforms OSF [Menze and Geiger, 2015], which is currently ranked second on the KITTI benchmark. (Comment: 15 pages. To appear at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017). Our results were submitted to the KITTI 2015 Stereo Scene Flow Benchmark in November 2016.)
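    The moving-object step rests on comparing measured optical flow against the flow a static scene would exhibit under the estimated camera motion. The sketch below illustrates that consistency test with OpenCV and NumPy; the threshold, parameter values, and function layout are assumptions, and the visual-odometry and fusion-move stages of the paper are omitted.

    # Sketch of an ego-motion consistency test: pixels whose measured flow deviates
    # strongly from the camera-motion-induced ("rigid") flow are flagged as moving.
    import cv2
    import numpy as np

    def rigid_flow(depth, K, R, t):
        """Flow induced by camera motion (R, t) for a static scene.
        depth: (H, W) metric depth; K: 3x3 camera intrinsics."""
        H, W = depth.shape
        u, v = np.meshgrid(np.arange(W), np.arange(H))
        pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # (3, H*W)
        pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)                # back-project
        pts2 = R @ pts + t.reshape(3, 1)                                   # apply ego-motion
        proj = K @ pts2
        proj = proj[:2] / np.clip(proj[2:], 1e-6, None)                    # re-project
        return (proj - pix[:2]).T.reshape(H, W, 2)

    def moving_object_mask(left0, left1, depth, K, R, t, thresh_px=3.0):
        """left0, left1: consecutive 8-bit grayscale frames from the left camera.
        Returns a boolean mask of pixels inconsistent with the estimated ego-motion."""
        flow = cv2.calcOpticalFlowFarneback(left0, left1, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        residual = np.linalg.norm(flow - rigid_flow(depth, K, R, t), axis=-1)
        return residual > thresh_px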

    CAD-based 3-D object recognition

    We propose an approach to 3-D object recognition using CAD-based geometry models for free-form surfaces. Geometry is modeled with rational B-splines by defining surface patches and then combining these into a volumetric model of the object. Characteristic features are then extracted from this model and subjected to a battery of tests to select an "optimal" subset of surface features which are robust with respect to the sensor being used (e.g., laser range finder versus passive stereo) and permit recognition of the object from any viewing position. These features are then organized into a "strategy tree" which defines the order in which the features are sought, and any corroboration required to justify issuing a hypothesis. We propose the use of geometric sensor data integration techniques as a means for formally selecting surface features on free-form objects in order to build recognition strategies. Previous work has dealt with polyhedra and generalized cylinders, whereas here we propose to apply the method to more general surfaces.
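    As a toy illustration of the "strategy tree" idea (here flattened to an ordered list of strategies), the sketch below issues an object hypothesis only when a primary feature and all of its required corroborating features are found; the data structures and feature tests are illustrative assumptions, not the authors' system.

    # Illustrative only: ordered feature-seeking with corroboration before a
    # hypothesis is issued, in the spirit of the strategy tree described above.
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class Strategy:
        primary: Callable[[Dict], bool]               # feature sought first (e.g., a surface patch type)
        corroborators: List[Callable[[Dict], bool]]   # features that must also be present
        hypothesis: str                               # object identity to hypothesize

    def run_strategies(strategies: List[Strategy], scene: Dict) -> List[str]:
        """Evaluate strategies in order; a hypothesis is issued only when the
        primary feature and every corroborating feature are found in the scene."""
        issued = []
        for s in strategies:
            if s.primary(scene) and all(check(scene) for check in s.corroborators):
                issued.append(s.hypothesis)
        return issued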