11 research outputs found

    Recovery of 3-D shape of curved objects from multiple views

    Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996. Includes bibliographical references (p. 65-66). By Eugene S. Lin.

    Variable viewpoint reality: a prototype for real-time 3D reconstruction

    Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999. Includes bibliographical references (p. 70-72). By Owen W. Ozier.

    3D Face Modelling, Analysis and Synthesis

    Human faces have always been of special interest to researchers in the computer vision and graphics areas. There has been an explosion in the number of studies on accurately modelling, analysing and synthesising realistic faces for various applications. The importance of human faces emerges from the fact that they are an invaluable means of effective communication, recognition, behaviour analysis, conveying emotions, etc. Therefore, addressing the automatic visual perception of human faces efficiently could open up many influential applications in various domains, e.g. virtual/augmented reality, computer-aided surgery, security and surveillance, entertainment, and many more. However, the vast variability associated with the geometry and appearance of human faces captured in unconstrained videos and images renders their automatic analysis and understanding very challenging even today. The primary objective of this thesis is to develop novel methodologies of 3D computer vision for human faces that go beyond the state of the art and achieve unprecedented quality and robustness. In more detail, this thesis advances the state of the art in 3D facial shape reconstruction and tracking, fine-grained 3D facial motion estimation, expression recognition and facial synthesis with the aid of 3D face modelling. We give special attention to the case where the input comes from monocular imagery captured under uncontrolled settings, a.k.a. "in-the-wild" data. Such data are available in abundance on the internet nowadays. Analysing them pushes the boundaries of currently available computer vision algorithms and opens up many new crucial applications in industry.
    We define the four targeted vision problems in this thesis (3D facial reconstruction and tracking, fine-grained 3D facial motion estimation, expression recognition, and facial synthesis) as the four 3D-based essential systems for automatic facial behaviour understanding, and show how they rely on each other. Finally, to aid the research conducted in this thesis, we collect and annotate a large-scale dataset of videos of monocular facial performances. All of our proposed methods demonstrate very promising quantitative and qualitative results when compared to state-of-the-art methods.

    3D object reconstruction using computer vision : reconstruction and characterization applications for external human anatomical structures

    Doctoral thesis (Tese de doutoramento). Informatics Engineering, Faculdade de Engenharia, Universidade do Porto. 201

    Shape recovery from reflection.

    By Yingli Tian. Thesis (Ph.D.)--Chinese University of Hong Kong, 1996. Includes bibliographical references (leaves 202-222). Contents:

    Chapter 1: Introduction
        1.1 Physics-Based Shape Recovery Techniques
        1.2 Proposed Approaches to Shape Recovery in this Thesis
        1.3 Thesis Outline
    Chapter 2: Camera Model in Color Vision
        2.1 Introduction
        2.2 Spectral Linearization
        2.3 Image Balancing
        2.4 Spectral Sensitivity
        2.5 Color Clipping and Blooming
    Chapter 3: Extended Light Source Models
        3.1 Introduction
        3.2 A Spherical Light Model in 2D Coordinate System
            3.2.1 Basic Photometric Function for Hybrid Surfaces under a Point Light Source
            3.2.2 Photometric Function for Hybrid Surfaces under the Spherical Light Source
        3.3 A Spherical Light Model in 3D Coordinate System
            3.3.1 Radiance of the Spherical Light Source
            3.3.2 Surface Brightness Illuminated by One Point of the Spherical Light Source
            3.3.3 Surface Brightness Illuminated by the Spherical Light Source
            3.3.4 Rotating the Source-Object Coordinate to the Camera-Object Coordinate
            3.3.5 Surface Reflection Model
        3.4 Rectangular Light Model in 3D Coordinate System
            3.4.1 Radiance of a Rectangular Light Source
            3.4.2 Surface Brightness Illuminated by One Point of the Rectangular Light Source
            3.4.3 Surface Brightness Illuminated by a Rectangular Light Source
    Chapter 4: Shape Recovery from Specular Reflection
        4.1 Introduction
        4.2 Theory of the First Method
            4.2.1 Torrance-Sparrow Reflectance Model
            4.2.2 Relationship Between Surface Shapes from Different Images
        4.3 Theory of the Second Method
            4.3.1 Getting the Depth of a Reference Point
            4.3.2 Recovering the Depth and Normal of a Specular Point Near the Reference Point
            4.3.3 Recovering Local Shape of the Object by Specular Reflection
        4.4 Experimental Results and Discussions
            4.4.1 Experimental System and Results of the First Method
            4.4.2 Experimental System and Results of the Second Method
    Chapter 5: Shape Recovery from One Sequence of Color Images
        5.1 Introduction
        5.2 Temporal-color Space Analysis of Reflection
        5.3 Estimation of the Illuminant Color Ks
        5.4 Estimation of the Color Vector of the Body-reflection Component Kl
        5.5 Separating Specular and Body Reflection Components and Recovering Surface Shape and Reflectance
        5.6 Experimental Results and Discussions
            5.6.1 Results with Interreflection
            5.6.2 Results Without Interreflection
            5.6.3 Simulation Results
        5.7 Analysis of Various Factors on the Accuracy
            5.7.1 Effects of Number of Samples
            5.7.2 Effects of Noise
            5.7.3 Effects of Object Size
            5.7.4 Camera Optical Axis Not in Light Source Plane
            5.7.5 Camera Optical Axis Not Passing Through Object Center
    Chapter 6: Shape Recovery from Two Sequences of Images
        6.1 Introduction
        6.2 Method for 3D Shape Recovery from Two Sequences of Images
        6.3 Genetics-Based Method
        6.4 Experimental Results and Discussions
            6.4.1 Simulation Results
            6.4.2 Real Experimental Results
    Chapter 7: Shape from Shading for Non-Lambertian Surfaces
        7.1 Introduction
        7.2 Reflectance Map for Non-Lambertian Color Surfaces
        7.3 Recovering Non-Lambertian Surface Shape from One Color Image
            7.3.1 Segmenting Hybrid Areas from Diffuse Areas Using Hue Information
            7.3.2 Calculating Intensities of Specular and Diffuse Components on Hybrid Areas
            7.3.3 Recovering Shape from Shading
        7.4 Experimental Results and Discussions
            7.4.1 Simulation Results
            7.4.2 Real Experimental Results
    Chapter 8: Shape from Shading under Multiple Extended Light Sources
        8.1 Introduction
        8.2 Reflectance Map for a Lambertian Surface Under Multiple Rectangular Light Sources
        8.3 Recovering Surface Shape Under Multiple Rectangular Light Sources
        8.4 Experimental Results and Discussions
            8.4.1 Synthetic Image Results
            8.4.2 Real Image Results
    Chapter 9: Shape from Shading in Unknown Environments by Neural Networks
        9.1 Introduction
        9.2 Shape Estimation
            9.2.1 Shape Recovery Problem under Multiple Rectangular Extended Light Sources
            9.2.2 Forward Network Representation of Surface Normals
            9.2.3 Shape Estimation
        9.3 Application of the Neural Network in Shape Recovery
            9.3.1 Structure of the Neural Network
            9.3.2 Normalization of the Input and Output Patterns
        9.4 Experimental Results and Discussions
            9.4.1 Results for a Lambertian Surface under One Rectangular Light Source
            9.4.2 Results for a Lambertian Surface under Four Rectangular Light Sources
            9.4.3 Results for a Hybrid Surface under One Rectangular Light Source
            9.4.4 Discussions
    Chapter 10: Summary and Conclusions
        10.1 Summary of Results and Contributions
        10.2 Directions for Future Research
    Bibliography
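    The shape-from-shading chapters above all build on the notion of a reflectance map R(p, q), which predicts image brightness from the surface gradient. As a hedged illustration of that idea (not code from the thesis, which handles extended sources and hybrid surfaces), here is the classic Lambertian reflectance map for a distant point source:

```python
import numpy as np

def lambertian_reflectance(p, q, ps, qs):
    # Reflectance map R(p, q): brightness of a Lambertian surface patch
    # with gradient (p, q) = (dz/dx, dz/dy), lit by a distant point
    # source whose direction is (ps, qs) in gradient space.
    num = 1.0 + p * ps + q * qs
    den = np.sqrt(1.0 + p**2 + q**2) * np.sqrt(1.0 + ps**2 + qs**2)
    return np.maximum(0.0, num / den)  # self-shadowed points clamp to 0

# A patch facing the viewer under frontal lighting is maximally bright:
print(lambertian_reflectance(0.0, 0.0, 0.0, 0.0))  # → 1.0
```

    Shape-from-shading then inverts this map: given observed brightness, recover (p, q) at each pixel subject to smoothness or integrability constraints.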

    Advancements in multi-view processing for reconstruction, registration and visualization.

    The ever-increasing diffusion of digital cameras and the advancements in computer vision, image processing and storage capabilities have led, in recent years, to the wide diffusion of digital image collections. A set of digital images is usually referred to as a multi-view image set when the pictures cover different views of the same physical object or location. In multi-view datasets, correlations between images are exploited in many different ways to increase our capability to gather enhanced understanding and information of a scene. For example, a collection can be enhanced by leveraging the camera positions and orientations, or with information about the 3D structure of the scene. The range of applications of multi-view data is very wide, encompassing diverse fields such as image-based reconstruction, image-based localization, navigation of virtual environments, collective photographic retouching, computational photography, object recognition, etc. For all these reasons, the development of new algorithms to effectively create, process, and visualize this type of data is an active research trend. The thesis presents four advancements related to different aspects of multi-view data processing:

    - Image-based 3D reconstruction: a pre-processing algorithm, namely a special color-to-gray conversion, developed to improve the accuracy of image-based reconstruction algorithms. In particular, we show how different dense stereo matching results can be enhanced by a domain separation approach that pre-computes a single optimized numerical value for each image location.
    - Image-based appearance reconstruction: a multi-view processing algorithm that enhances the quality of the color transfer from multi-view images to a geo-referenced 3D model of a location of interest. The proposed approach computes virtual shadows and automatically segments shadowed regions in the input images, preventing those pixels from being used in subsequent texture synthesis.
    - 2D-to-3D registration: an unsupervised localization and registration system that recognizes a site framed in multi-view data and calibrates it against a pre-existing 3D representation. The system is accurate enough to seamlessly view input images correctly superimposed on the 3D location of interest, and it can validate the result in a completely unsupervised manner.
    - Visualization: PhotoCloud, a real-time client-server system for interactive exploration of high-resolution 3D models and up to several thousand photographs aligned over this 3D data. PhotoCloud supports any 3D model that can be rendered in a depth-coherent way and arbitrary multi-view image collections. Moreover, it tolerates 2D-to-2D and 2D-to-3D misalignments, and it provides scalable visualization of generic integrated 2D and 3D datasets by exploiting data duality. A set of effective 3D navigation controls, tightly integrated with innovative thumbnail bars, enhances user navigation.

    These advancements were developed in tourism and cultural heritage application contexts, but they are not limited to them.
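    The color-to-gray pre-processing idea in the first advancement can be illustrated with a generic contrast-preserving conversion. The sketch below projects RGB pixels onto their first principal component; this is a common illustrative stand-in, not the thesis's actual domain-separation algorithm:

```python
import numpy as np

def pca_color_to_gray(img):
    # Project RGB pixels onto their first principal component: a generic
    # contrast-preserving color-to-gray conversion that keeps chromatic
    # contrast a fixed-weight luminance formula can destroy.
    # Illustrative stand-in, NOT the thesis's domain-separation method.
    h, w, _ = img.shape
    X = img.reshape(-1, 3).astype(float)
    X = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)  # color covariance axes
    g = X @ Vt[0]                                     # max-variance projection
    g = (g - g.min()) / (np.ptp(g) + 1e-12)           # normalize to [0, 1]
    return g.reshape(h, w)

# A ramp in the red channel alone survives the conversion as a gray ramp:
img = np.zeros((1, 3, 3))
img[0, :, 0] = [0.0, 128.0, 255.0]
print(pca_color_to_gray(img))
```

    A stereo matcher then runs its cost computation on these single-channel values instead of a fixed luminance image.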

    Three-dimensional visualisation and quantitative characterisation of fossil fuel flames using tomography and digital imaging techniques

    This thesis describes the design, implementation and experimental evaluation of a prototype instrumentation system for the three-dimensional (3-D) visualisation and quantitative characterisation of fossil fuel flames. A review of methodologies and technologies for the 3-D visualisation and characterisation of combustion flames is given, together with a discussion of the main difficulties and technical requirements in their applications. A strategy incorporating optical sensing, digital image processing and tomographic reconstruction techniques is proposed. The strategy was directed towards the reconstruction of 3-D models of a flame and the subsequent quantification of its 3-D geometric, luminous and fluid dynamic parameters. Based on this strategy, a flame imaging system employing three identical synchronised RGB cameras has been developed. The three cameras, placed equidistantly and equiangularly on a semicircle around the flame, captured six simultaneous images of the flame from six different directions. Dedicated computing algorithms, based on image processing and tomographic reconstruction techniques, have been developed to reconstruct the 3-D models of a flame. A set of geometric, luminous and fluid dynamic parameters, including surface area, volume, length, circularity, luminosity and temperature, are determined from the 3-D models generated. Systematic design and experimental evaluation of the system on a gas-fired combustion rig are reported. The accuracy, resolution and validation of the system were also evaluated using purpose-designed templates, including a high-precision laboratory ruler, a colour flat panel and a tungsten lamp. The results obtained from the experimental evaluation are presented, and the relationships between the measured parameters and the corresponding operational conditions are quantified. Preliminary investigations were conducted on a coal-fired industry-scale combustion test facility. The multi-camera system was reconfigured to use only one camera due to restrictions at the site facility, so the property of rotational symmetry of the flame had to be assumed. Under such limited conditions, the imaging system proved able to provide a good reconstruction of the internal structures and luminosity variations inside the flame. Suggestions for future development of the technology are also reported.
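    The single-camera fallback described above, where rotational symmetry of the flame is assumed, is the classic setting for Abel-type emission tomography. The following sketch, assuming an idealised axisymmetric emitter and ignoring optics and attenuation, illustrates an onion-peeling reconstruction; it is not the thesis's actual reconstruction code:

```python
import numpy as np

# Onion-peeling sketch for the single-camera, axisymmetric case: each
# image row integrates the radial emission profile f(r) along parallel
# chords, giving an upper-triangular linear system P = A f that can be
# solved shell by shell from the outside in.

def onion_peel_matrix(n, dr=1.0):
    # A[i, j] = length of the chord at lateral offset y_i = i * dr
    # through the annular shell j (between radii j*dr and (j+1)*dr).
    A = np.zeros((n, n))
    for i in range(n):
        y = i * dr
        for j in range(i, n):
            chord_out = 2.0 * np.sqrt(max(((j + 1) * dr) ** 2 - y**2, 0.0))
            chord_in = 2.0 * np.sqrt(max((j * dr) ** 2 - y**2, 0.0))
            A[i, j] = chord_out - chord_in
    return A

n = 50
f_true = np.exp(-np.linspace(0.0, 3.0, n) ** 2)  # synthetic emission profile
A = onion_peel_matrix(n)
projection = A @ f_true          # line-of-sight integrals the camera records
f_rec = np.linalg.solve(A, projection)
print(np.max(np.abs(f_rec - f_true)))  # tiny: recovery exact up to rounding
```

    With three cameras and no symmetry assumption, the same linear-system view generalises to algebraic reconstruction over rays from all views.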

    Characterization of normal facial features and their association with genes

    Background: Craniofacial morphology has been reported to be highly heritable, but little is known about which genetic variants influence normal facial variation in the general population. Aim: To identify facial variation and explore phenotype-genotype associations in a 15-year-old population (2514 females and 2233 males). Subjects and Methods: The subjects involved in this study were recruited from the Avon Longitudinal Study of Parents and Children (ALSPAC). Three-dimensional (3D) facial images were obtained for each subject using two high-resolution Konica Minolta laser scanners. Twenty-one reproducible facial soft-tissue landmarks and one constructed mid-endocanthion point (men) were identified and their coordinates recorded. The 3D facial images were registered using Procrustes analysis (with and without scaling). Principal Component Analysis (PCA) was then employed to identify independent groups (principal components, PCs) of correlated landmark coordinates that represent key facial features contributing to normal facial variation. A novel surface-based method of facial averaging was employed to visualize facial variation. Facial parameters (distances, angles, and ratios) were also generated from the facial landmarks. Sex prediction based on facial parameters was explored using discriminant function analysis. A discovery-phase genome-wide association study (GWAS) was carried out for 2,185 ALSPAC subjects, and replication was undertaken in a further 1,622 ALSPAC individuals. Results: 14 (unscaled) and 17 (scaled) PCs were identified, explaining 82% of the total variance in facial form and shape. 250 facial parameters were derived (90 distances, 118 angles, 42 ratios). 24 facial parameters were found to provide a sex prediction efficiency of over 70%; 23 of these parameters are distances that describe variation in face height, nose width, and prominence of various facial structures.
    The 54 distances previously reported to be highly heritable and the 14 (unscaled) PCs were included in the discovery-phase GWAS. Four genetic associations with the distances were identified in the discovery analysis, and one of these, the association between the common intronic SNP rs7559271 in the PAX3 gene on chromosome 2 and the nasion to mid-endocanthion 3D distance (n-men), was replicated strongly (p = 4 × 10⁻⁷). The PAX3 gene encodes a transcription factor that plays a crucial role in fetal development, including that of the craniofacial bones. PAX3 contains two DNA-binding domains, a paired-box domain and a homeodomain. The protein made from the PAX3 gene directs the activity of other genes that signal neural crest cells to form specialized tissues such as craniofacial bones. Different PAX3 mutations may lead to non-functional PAX3 polypeptides and destroy the ability of the PAX3 protein to bind DNA and regulate the activity of other genes to form bones and other specific tissues. Conclusions: The variation in facial form and shape can be accurately quantified and visualized as a multidimensional statistical continuum with respect to the principal components. The derived PCs may be useful to identify and classify faces according to a scale of normality. A strong genetic association was identified between the common SNP rs7559271 in the PAX3 gene on chromosome 2 and the nasion to mid-endocanthion 3D distance (n-men); variation in this distance leads to nasal bridge prominence.
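    The Procrustes registration step used above can be sketched in miniature. This ordinary-Procrustes example in 2D is illustrative only; the study applied generalised Procrustes analysis (iterative alignment against a mean shape) to thousands of 3D landmark sets before running PCA:

```python
import numpy as np

def procrustes_align(X, Y):
    # Ordinary Procrustes: translate, scale and rotate landmark set Y
    # (k x d) into the frame of landmark set X, returning the aligned,
    # centred, unit-norm copy of Y. The optimal rotation comes from the
    # SVD of the cross-covariance of the two centred shapes.
    Xc = X - X.mean(axis=0)
    Xc = Xc / np.linalg.norm(Xc)
    Yc = Y - Y.mean(axis=0)
    Yc = Yc / np.linalg.norm(Yc)
    U, _, Vt = np.linalg.svd(Yc.T @ Xc)
    return Yc @ (U @ Vt)

# A rotated, scaled, shifted copy of 21 landmarks aligns back exactly:
rng = np.random.default_rng(0)
X = rng.normal(size=(21, 2))          # 21 landmarks, 2-D for brevity
t = 0.7
Rot = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
Y = 2.5 * X @ Rot.T + np.array([5.0, -3.0])
Xc = (X - X.mean(axis=0)) / np.linalg.norm(X - X.mean(axis=0))
print(np.allclose(procrustes_align(X, Y), Xc))  # → True
```

    Stacking many aligned landmark sets as row vectors and running PCA on the centred stack then yields the principal components of shape variation described in the abstract.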

    Spatial data acquisition from motion video

    Part of the GeoComputation '96 Special Issue 96/25. Geographic information systems are an important tool for the field of geocomputing. A key component of every system is the data: spatial data has traditionally been labour-intensive to collect, and hence expensive. This paper establishes a new method of acquiring spatial data from motion video. The proposed method is based upon the principles of photogrammetry, but allows position to be calculated with feature tracking rather than point correspondence. By doing so, it avoids many constraints imposed by previous solutions. The new method is demonstrated with linear and rotational motion.
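    The photogrammetric principle the paper builds on, recovering a 3D position from a feature observed in two frames of the video, can be sketched with textbook linear triangulation. The camera matrix, baseline and world point below are synthetic assumptions for illustration, not data from the paper:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    # Linear (DLT) triangulation: each pixel observation (u, v) under a
    # 3x4 projection matrix P contributes two homogeneous equations;
    # the 3-D point is the null vector of the stacked system.
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # homogeneous solution
    return X[:3] / X[3]        # de-homogenise

# Two frames of a "motion video": the camera translates 1 unit along x.
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.], [0.], [0.]])])
Xw = np.array([0.3, -0.2, 4.0, 1.0])      # homogeneous world point
x1 = (P1 @ Xw)[:2] / (P1 @ Xw)[2]         # tracked feature, frame 1
x2 = (P2 @ Xw)[:2] / (P2 @ Xw)[2]         # same feature, frame 2
print(triangulate(P1, P2, x1, x2))        # recovers [0.3, -0.2, 4.0]
```

    Feature tracking supplies the (x1, x2) correspondences across many frames; the paper's contribution is obtaining them by tracking rather than by explicit point correspondence.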