4 research outputs found

    3D Reconstruction from a Single Still Image Based on Monocular Vision of an Uncalibrated Camera

    We propose a framework that combines machine learning with dynamic optimization to reconstruct a 3D scene automatically from a single still image of an unstructured outdoor environment, using monocular vision from an uncalibrated camera. After a first segmentation of the image, a search-tree strategy based on Bayes' rule identifies the occlusion hierarchy of all regions. After a second, superpixel-level segmentation, the AdaBoost algorithm is applied to integrate depth cues from lighting, texture, and material. Finally, all of these factors are optimized under constrained conditions to obtain the full depth map of the image. The source image is then combined with its depth map, in point-cloud or bilinear-interpolation style, to realize the 3D reconstruction. Experiments comparing against typical methods on an associated database demonstrate that our method improves, to a certain extent, the plausibility of the estimated overall 3D structure of the scene, and it requires neither manual assistance nor any camera-model information.
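    As a rough illustration of that final integration step, here is a minimal sketch (not the authors' code) that back-projects an image and its depth map into a colored point cloud with a pinhole model. The focal length f and the centered principal point are assumptions made for this sketch; the paper itself works without any camera-model information.

        # Minimal sketch, assuming a pinhole camera: back-project an image and its
        # depth map into a colored 3D point cloud. The focal length `f` and the
        # principal point at the image center are illustrative assumptions.
        import numpy as np

        def depthmap_to_pointcloud(image, depth, f=500.0):
            """image: HxWx3 uint8, depth: HxW metric depths -> Nx6 rows [X, Y, Z, R, G, B]."""
            h, w = depth.shape
            cx, cy = w / 2.0, h / 2.0              # assumed principal point
            u, v = np.meshgrid(np.arange(w), np.arange(h))
            z = depth.astype(np.float64)
            x = (u - cx) * z / f                   # back-project pixel (u, v) at depth z
            y = (v - cy) * z / f
            points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
            colors = image.reshape(-1, 3).astype(np.float64)
            return np.hstack([points, colors])     # one colored 3D point per pixel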

    Three dimensional information estimation and tracking for moving objects detection using two cameras framework

    Calibration, matching and tracking are the major concerns in obtaining 3D information consisting of depth, direction and velocity. To recover depth, camera parameters and matched points are the two necessary inputs. Depth, direction and matched points can be obtained accurately if the cameras are well calibrated using traditional manual calibration. However, most traditional manual calibration methods are inconvenient to use because markers, or the real-world size of a reference object, must be provided or known. Self-calibration removes this limitation of traditional calibration, but does not by itself yield depth or matched points. Other approaches attempt to match corresponding objects using 2D visual information without calibration, but they suffer from low matching accuracy under large perspective distortion. This research focuses on obtaining 3D information with a self-calibrated tracking system, in which matching and tracking are performed under self-calibrated conditions. Three contributions are introduced to achieve these objectives. First, orientation correction is introduced to obtain better relationship matrices for matching during tracking. Second, once the relationship matrices are available, a post-processing method, status-based matching, is introduced to improve the object-matching result; this matching algorithm achieves a matching rate of almost 90%. Depth is estimated after the status-based matching. Third, tracking is performed on x-y coordinates and the estimated depth under self-calibrated conditions. Results show that the proposed self-calibrated tracking system successfully distinguishes the locations of objects even under occlusion in the field of view, and determines the direction and velocity of multiple moving objects.
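    For orientation, the sketch below shows the textbook disparity-to-depth relation Z = f * B / d that any such two-camera setup ultimately relies on; the focal length and baseline values here are illustrative assumptions, since the thesis obtains its parameters by self-calibration rather than manual calibration.

        # Minimal sketch, assuming a rectified two-camera setup with known focal
        # length f (pixels) and baseline B (meters): depth Z = f * B / d, where
        # d is the disparity between matched x-coordinates. Values are illustrative.
        import numpy as np

        def depth_from_disparity(x_left, x_right, f=700.0, baseline=0.12):
            """Matched x-coordinates (pixels) in each view -> depth in meters."""
            d = np.asarray(x_left, dtype=np.float64) - np.asarray(x_right, dtype=np.float64)
            d = np.where(np.abs(d) < 1e-6, np.nan, d)   # guard against zero disparity
            return f * baseline / d

        # A point seen at x=320 in the left view and x=300 in the right view
        # lies at 700 * 0.12 / 20 = 4.2 m from the cameras.
        print(depth_from_disparity([320.0], [300.0]))   # -> [4.2]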

    Vanishing Point Detection By Segment Clustering On The Projective Space

    The analysis of vanishing points on digital images provides strong cues for inferring the 3D structure of the depicted scene and can be exploited in a variety of computer vision applications. In this paper, we propose a method for estimating vanishing points in images of architectural environments that can be used for camera calibration and pose estimation, important tasks in large-scale 3D reconstruction. Our method performs automatic segment clustering in projective space - a direct transformation from the image space - instead of the traditional bounded accumulator space. Since it works in projective space, it handles finite and infinite vanishing points, without any special condition or threshold tuning. Experiments on real images show the effectiveness of the proposed method. We identify three orthogonal vanishing points and compute the estimation error based on their relation with the Image of the Absolute Conic (IAC) and based on the computation of the camera focal length. © 2012 Springer-Verlag.
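    As background for the projective-space formulation, here is a minimal sketch (an illustration, not the paper's algorithm) of the homogeneous-coordinate operations involved: an edge segment maps to a line via the cross product of its endpoints, two lines intersect at their cross product (so finite and infinite vanishing points are treated uniformly), and two orthogonal finite vanishing points yield a focal-length estimate through the IAC. The square-pixel and known-principal-point assumptions are simplifications made for this sketch.

        # Minimal sketch, assuming square pixels and a known principal point:
        # homogeneous-coordinate building blocks for vanishing point geometry.
        import numpy as np

        def segment_to_line(p1, p2):
            """Endpoints (x, y) -> homogeneous line l = p1 x p2."""
            return np.cross([p1[0], p1[1], 1.0], [p2[0], p2[1], 1.0])

        def intersect(l1, l2):
            """Two homogeneous lines -> their intersection (z = 0 means a point at infinity)."""
            return np.cross(l1, l2)

        def focal_from_orthogonal_vps(v1, v2, principal_point=(0.0, 0.0)):
            """Focal length f from v1^T * omega * v2 = 0 with IAC omega = diag(1/f^2, 1/f^2, 1).

            Requires two orthogonal, finite vanishing points (z components nonzero)."""
            cx, cy = principal_point
            x1, y1, z1 = v1
            x2, y2, z2 = v2
            # shift coordinates so the principal point becomes the origin
            a = (x1 - cx * z1) * (x2 - cx * z2) + (y1 - cy * z1) * (y2 - cy * z2)
            return np.sqrt(-a / (z1 * z2))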