
    Low-cost single-pixel 3D imaging by using an LED array

    We propose a method to perform color imaging with a single photodiode by using structured light illumination generated with a low-cost color LED array. The LED array is used to generate a sequence of color Hadamard patterns which are projected onto the object by a simple optical system while the photodiode records the light intensity. A field programmable gate array (FPGA) controls the LED panel, allowing us to obtain refresh rates of up to 10 kHz. The system is extended to 3D imaging by simply adding a small number of photodiodes at different locations. The 3D shape of the object is obtained by using a non-calibrated photometric stereo technique. Experimental results are provided for an LED array with 32 × 32 elements.
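    As a rough, self-contained illustration of the measurement principle summarised above (not the authors' implementation), the Python sketch below simulates differential single-pixel measurements under Hadamard patterns and recovers the image by the inverse Hadamard transform; the 32 × 32 size mirrors the LED array and all data are synthetic.

```python
import numpy as np
from scipy.linalg import hadamard

# Hypothetical 32x32 scene, matching the LED array size in the abstract.
N = 32
scene = np.random.rand(N * N)          # unknown image (flattened), stand-in data

# Hadamard patterns: each row of H is one +1/-1 illumination pattern.
H = hadamard(N * N)
pos = (H > 0).astype(float)            # pattern displayed on the LED array
neg = (H < 0).astype(float)            # complementary pattern

# Single-photodiode measurements: one intensity value per projected pattern.
m = pos @ scene - neg @ scene          # differential measurement = H @ scene

# Reconstruction: Hadamard matrices satisfy H @ H.T = (N*N) * I.
recovered = (H.T @ m) / (N * N)
image = recovered.reshape(N, N)
print(np.allclose(image.flatten(), scene))   # True up to numerical error
```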

    A PCA approach to the object constancy for faces using view-based models of the face

    The analysis of object and face recognition by humans attracts a great deal of interest, mainly because of its many applications in various fields, including psychology, security, computer technology, medicine and computer graphics. The aim of this work is to investigate whether a PCA-based mapping approach can offer a new perspective on models of object constancy for faces in human vision. An existing system for facial motion capture and animation, developed for performance-driven animation of avatars, is adapted, improved and repurposed to study face representation in the context of viewpoint and lighting invariance. The main goal of the thesis is to develop and evaluate a new approach to viewpoint invariance that is view-based and allows mapping of facial variation between different views to construct a multi-view representation of the face. The thesis describes a computer implementation of a model that uses PCA to generate example-based models of the face. The work explores the joint encoding of expression and viewpoint using PCA and the mapping between view-specific PCA spaces. Simultaneous, synchronised video recording of six views of the face was used to construct multi-view representations, which helped to investigate how well multiple views could be recovered from a single view via the content-addressable memory property of PCA. A similar approach was taken to lighting invariance. Finally, the possibility of constructing a multi-view representation from asynchronous view-based data was explored. The results of this thesis have implications for a continuing research problem in computer vision – the problem of recognising faces and objects from different perspectives and in different lighting. It also provides a new approach to understanding viewpoint invariance and lighting invariance in human observers.
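    A minimal sketch of the view-based idea described above, view-specific PCA spaces linked by a learned mapping, is given below. It uses random arrays in place of face images and a plain least-squares mapping; the thesis's actual models and data differ.

```python
import numpy as np

def pca_basis(X, k):
    """Return the mean and top-k principal components of row-wise samples X."""
    mu = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]                          # components are rows of Vt

# Synthetic stand-in data: n paired samples of two views (e.g. frontal / profile).
rng = np.random.default_rng(0)
n, d, k = 200, 64 * 64, 10
frontal = rng.normal(size=(n, d))
profile = rng.normal(size=(n, d))

mu_f, B_f = pca_basis(frontal, k)              # view-specific PCA space, view 1
mu_p, B_p = pca_basis(profile, k)              # view-specific PCA space, view 2

# Project the paired training data into each view's coefficient space.
C_f = (frontal - mu_f) @ B_f.T
C_p = (profile - mu_p) @ B_p.T

# Learn a linear mapping between the two coefficient spaces (least squares).
M, *_ = np.linalg.lstsq(C_f, C_p, rcond=None)

# Predict the profile view of a new frontal sample via the mapping.
new_frontal = rng.normal(size=(1, d))
c = (new_frontal - mu_f) @ B_f.T
predicted_profile = c @ M @ B_p + mu_p
print(predicted_profile.shape)                 # (1, 4096)
```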

    3D Reconstruction using Active Illumination

    In this thesis we present a pipeline for 3D model acquisition. Generating 3D models of real-world objects is an important task in computer vision with many applications, such as 3D design, archaeology, entertainment, and virtual or augmented reality. The contribution of this thesis is threefold: we propose a calibration procedure for the cameras, we describe an approach for capturing and processing photometric normals using gradient illuminations in the hardware set-up, and finally we present a multi-view photometric stereo 3D reconstruction method. In order to obtain accurate results using multi-view and photometric stereo reconstruction, the cameras are calibrated geometrically and photometrically. For acquiring data, a light stage is used: a hardware set-up that allows the illumination to be controlled during acquisition. The procedure used to generate appropriate illuminations and to process the acquired data into accurate photometric normals is described. The core of the pipeline is a multi-view photometric stereo reconstruction method. In this method, we first generate a sparse reconstruction using the acquired images and computed normals. In the second step, the information from the normal maps is used to obtain a dense reconstruction of the object's surface. Finally, the reconstructed surface is filtered to remove artifacts introduced by the dense reconstruction step.
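    The normal-computation step can be illustrated with classical Lambertian photometric stereo, solved per pixel by least squares. The sketch below is generic (synthetic intensities, known light directions) and does not reproduce the thesis's gradient-illumination processing.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """
    Estimate per-pixel surface normals and albedo from images taken under
    known directional lights, assuming a Lambertian surface.

    images:     (m, h, w) array of grayscale intensities
    light_dirs: (m, 3) array of unit light directions
    """
    m, h, w = images.shape
    I = images.reshape(m, -1)                              # (m, h*w)
    # Solve L @ G = I for all pixels at once: G = albedo * normal.
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)     # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)
    return normals.reshape(3, h, w), albedo.reshape(h, w)

# Synthetic example: 4 lights, 8x8 image of random intensities.
rng = np.random.default_rng(1)
L = rng.normal(size=(4, 3))
L /= np.linalg.norm(L, axis=1, keepdims=True)
imgs = rng.random((4, 8, 8))
n, a = photometric_stereo(imgs, L)
print(n.shape, a.shape)        # (3, 8, 8) (8, 8)
```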

    Photo-Realistic Facial Details Synthesis from Single Image

    We present a single-image 3D face synthesis technique that can handle challenging facial expressions while recovering fine geometric details. Our technique employs expression analysis for proxy face geometry generation and combines supervised and unsupervised learning for facial detail synthesis. On proxy generation, we conduct emotion prediction to determine a new expression-informed proxy. On detail synthesis, we present a Deep Facial Detail Net (DFDN) based on a Conditional Generative Adversarial Net (CGAN) that employs both geometry and appearance loss functions. For geometry, we capture 366 high-quality 3D scans from 122 different subjects under 3 facial expressions. For appearance, we use an additional 20K in-the-wild face images and apply image-based rendering to accommodate lighting variations. Comprehensive experiments demonstrate that our framework can produce high-quality 3D faces with realistic details under challenging facial expressions.
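    As a loose, hypothetical sketch of how geometry, appearance and adversarial terms might be combined in a CGAN-style detail network (the abstract does not give the actual formulation or weights), consider the following PyTorch snippet; all tensors and weights are placeholders.

```python
import torch
import torch.nn as nn

# Placeholder tensors standing in for network outputs and targets.
pred_detail = torch.randn(1, 1, 64, 64)   # predicted detail/displacement map
gt_detail   = torch.randn(1, 1, 64, 64)   # ground-truth detail from 3D scans
pred_render = torch.randn(1, 3, 64, 64)   # rendered appearance of the prediction
gt_image    = torch.randn(1, 3, 64, 64)   # input photograph
disc_logits = torch.randn(1, 1)           # discriminator output on the fake sample

l1 = nn.L1Loss()
bce = nn.BCEWithLogitsLoss()

geometry_loss   = l1(pred_detail, gt_detail)      # supervised, scan-based term
appearance_loss = l1(pred_render, gt_image)       # image-based rendering term
adv_loss        = bce(disc_logits, torch.ones_like(disc_logits))  # fool the discriminator

# Hypothetical weighting; the paper's actual weights are not stated in the abstract.
total = geometry_loss + 0.1 * appearance_loss + 0.01 * adv_loss
print(total.item())
```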

    HOLOGRAPHICS: Combining Holograms with Interactive Computer Graphics

    Among all imaging techniques that have been invented throughout the last decades, computer graphics is one of the most successful tools today. Many areas in science, entertainment, education, and engineering would be unimaginable without the aid of 2D or 3D computer graphics. The reason for this success story might be its interactivity, which is an important property that is still not provided efficiently by competing technologies such as holography. While optical holography and digital holography are limited to presenting non-interactive content, electroholography or computer-generated holograms (CGH) facilitate the computer-based generation and display of holograms at interactive rates [2,3,29,30]. Holographic fringes can be computed either by rendering multiple perspective images and combining them into a stereogram [4], or by simulating the optical interference and calculating the interference pattern [5]. Once the fringes are computed, such a system dynamically visualizes them on a holographic display. Since creating an electrohologram requires processing, transmitting, and storing a massive amount of data, today's computer technology still sets the limits for electroholography. To overcome some of these performance issues, advanced reduction and compression methods have been developed that create truly interactive electroholograms. Unfortunately, most of these holograms are relatively small, low resolution, and cover only a small color spectrum. However, recent advances in consumer graphics hardware may reveal potential acceleration possibilities that can overcome these limitations [6]. In parallel with the development of computer graphics, and despite their non-interactivity, optical and digital holography have opened up new fields, including interferometry, copy protection, data storage, holographic optical elements, and display holograms. Display holography in particular has conquered several application domains. Museum exhibits often use optical holograms because they can present 3D objects with almost no loss in visual quality. In contrast to most stereoscopic or autostereoscopic graphics displays, holographic images can provide all depth cues (perspective, binocular disparity, motion parallax, convergence, and accommodation) and can theoretically be viewed simultaneously from an unlimited number of positions. Displaying artifacts virtually removes the need to build physical replicas of the original objects. In addition, optical holograms can be used to make engineering, medical, dental, archaeological, and other recordings for teaching, training, experimentation and documentation. Archaeologists, for example, use optical holograms to archive and investigate ancient artifacts [7,8]. Scientists can use hologram copies to perform their research without having access to the original artifacts or settling for inaccurate replicas. Optical holograms can store a massive amount of information on a thin holographic emulsion. This technology can record and reconstruct a 3D scene with almost no loss in quality. Natural color holographic silver halide emulsion with grain sizes of 8 nm is today's state of the art [14]. Today, computer graphics and raster displays offer megapixel resolution and the interactive rendering of megabytes of data. Optical holograms, however, provide terapixel resolution and can present information content in the range of terabytes in real time. Both are dimensions that will not be reached by computer graphics and conventional displays within the next few years – even if Moore's law continues to hold. Obviously, one has to choose between interactivity and quality when selecting a display technology for a particular application. While some applications require high visual realism and real-time presentation (which cannot be provided by computer graphics), others depend on user interaction (which is not possible with optical and digital holograms). Consequently, holography and computer graphics are being used as tools to solve individual research, engineering, and presentation problems within several domains. Until today, however, these tools have been applied separately. The intention of the project summarized in this chapter is to combine both technologies to create a powerful tool for science, industry and education. This has been referred to as HoloGraphics. Several possibilities have been investigated that allow merging computer-generated graphics and holograms [1]. The goal is to combine the advantages of conventional holograms (i.e. extremely high visual quality and realism, support for all depth cues and for multiple observers at no computational cost, space efficiency, etc.) with the advantages of today's computer graphics capabilities (i.e. interactivity, real-time rendering, simulation and animation, stereoscopic and autostereoscopic presentation, etc.). The results of these investigations are presented in this chapter.
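    The second fringe-computation route mentioned above (simulating optical interference) can be illustrated with a minimal numpy sketch: a single object point interfering with an on-axis plane reference wave. All parameters below are illustrative and not taken from the chapter.

```python
import numpy as np

wavelength = 633e-9                    # HeNe red, metres (illustrative)
k = 2 * np.pi / wavelength
pitch = 8e-6                           # hologram sample pitch, metres
n = 1024                               # hologram resolution (n x n samples)

x = (np.arange(n) - n / 2) * pitch
X, Y = np.meshgrid(x, x)
point = np.array([0.0, 0.0, 0.05])     # object point 5 cm behind the hologram plane

# Spherical wave from the object point, evaluated in the hologram plane.
r = np.sqrt((X - point[0])**2 + (Y - point[1])**2 + point[2]**2)
object_wave = np.exp(1j * k * r) / r

reference_wave = 1.0                   # unit-amplitude on-axis plane wave

# Recorded fringe pattern = intensity of the superposition.
fringes = np.abs(object_wave + reference_wave)**2
print(fringes.shape, fringes.dtype)    # (1024, 1024) float64
```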

    Photometric stereo and appearance capture

    Ph.D. (Doctor of Philosophy)

    Automatic face recognition using stereo images

    Face recognition is an important pattern recognition problem, in the study of both natural and artificial learning problems. Compared to other biometrics, it is non-intrusive, non-invasive and requires no participation from the subjects. As a result, it has many applications, ranging from human-computer interaction to access control and from law enforcement to crowd surveillance. In typical optical-image-based face recognition systems, the systematic variability arising from representing the three-dimensional (3D) shape of a face by a two-dimensional (2D) illumination intensity matrix is treated as random variability. Multiple examples of the face displaying varying pose and expressions are captured in different imaging conditions. The imaging environment, pose and expressions are strictly controlled and the images undergo rigorous normalisation and pre-processing. This may be implemented in a partially or a fully automated system. Although these systems report high classification accuracies (>90%), they lack versatility and tend to fail when deployed outside laboratory conditions. Recently, more sophisticated 3D face recognition systems harnessing the depth information have emerged. These systems usually employ specialist equipment such as laser scanners and structured light projectors. Although more accurate than 2D optical-image-based recognition, these systems are equally difficult to implement in a non-cooperative environment. Existing face recognition systems, both 2D and 3D, detract from the main advantages of face recognition and fail to fully exploit its non-intrusive capacity. This is either because they rely too much on subject co-operation, which is not always available, or because they cannot cope with noisy data. The main objective of this work was to investigate the role of depth information in face recognition in a noisy environment. A stereo-based system, inspired by human binocular vision, was devised using a pair of manually calibrated digital off-the-shelf cameras in a stereo setup to compute depth information. Depth values extracted from 2D intensity images using stereoscopy are extremely noisy, and as a result this approach to face recognition is rare. This was confirmed by the results of our experimental work. Noise in the set of correspondences, camera calibration and triangulation led to inaccurate depth reconstruction, which in turn led to poor classifier accuracy for both 3D surface matching and 2½D depth maps. Recognition experiments are performed on the Sheffield Dataset, consisting of 692 images of 22 individuals with varying pose, illumination and expressions.
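    For context, the depth recovery that such a stereo setup relies on reduces, after rectification and matching, to the standard triangulation relation Z = f·B/d. The short sketch below shows that step with made-up numbers and illustrates how noisy or missing disparities translate directly into noisy or missing depth.

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m):
    """
    Rectified two-camera depth: Z = f * B / d.
    disparity:  per-pixel disparity in pixels (noisy in practice)
    focal_px:   focal length in pixels
    baseline_m: distance between the two camera centres in metres
    """
    d = np.where(disparity > 0, disparity, np.nan)   # invalid matches -> NaN
    return focal_px * baseline_m / d

# Illustrative numbers only: 1000 px focal length, 12 cm baseline.
disp = np.array([[40.0, 0.0], [20.0, 10.0]])
print(depth_from_disparity(disp, 1000.0, 0.12))
# [[ 3. nan]
#  [ 6. 12.]]
```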

    Distortion Correction for Non-Planar Deformable Projection Displays through Homography Shaping and Projected Image Warping

    Video projectors have advanced from being tools for only delivering presentations on flat or planar surfaces to tools for delivering media content in applications such as augmented reality, simulated sports practice and invisible displays. With the use of non-planar surfaces for projection come geometric and radiometric distortions. This work focuses on correcting the geometric distortions that occur when images or video frames are projected onto static and deformable non-planar display surfaces. The distortion-correction process involves (i) detecting feature points from the camera images and creating a desired shape of the undistorted view through a 2D homography, (ii) transforming the feature points on the camera images to control points on the projected images, (iii) calculating Radial Basis Function (RBF) warping coefficients from the control points, and (iv) warping the projected image to obtain an undistorted image of the projection on the projection surface. Several novel aspects of this work have emerged, including (i) developing a theoretical framework that explains the cause of distortion and provides a general warping pattern to be applied to the projection, (ii) carrying out the distortion-correction process without the use of a distortion-measuring calibration image or structured light pattern, (iii) carrying out the distortion-correction process on a projection display that deforms with time, using a single uncalibrated projector and an uncalibrated camera, and (iv) optimising the distortion-correction process to operate in real time. The geometric distortion-correction process designed in this work has been tested both for static projection systems, in which the components remain fixed in position, and for dynamic projection systems, in which the positions of components or the shape of the display change with time. The results of these tests show that the geometric distortion-correction technique developed in this work improves the observed image geometry by as much as 31% based on a normalised correlation measure. The optimisation of the distortion-correction process resulted in a 98% improvement in its speed of operation, thereby demonstrating the applicability of the proposed approach to real projection systems with deformable projection displays.
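    Step (iii) above, fitting RBF warping coefficients to control-point correspondences, can be sketched generically as a small linear solve. The kernel choice and regularisation below are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

def rbf_warp_coefficients(src, dst, eps=1e-9):
    """
    Solve for RBF weights that map source control points to destination
    control points, using a Gaussian kernel (kernel choice is illustrative).
    src, dst: (n, 2) arrays of 2D control points.
    """
    d2 = np.sum((src[:, None, :] - src[None, :, :])**2, axis=-1)
    sigma2 = np.mean(d2) + eps
    K = np.exp(-d2 / sigma2)                                # (n, n) kernel matrix
    W = np.linalg.solve(K + eps * np.eye(len(src)), dst)    # (n, 2) weights
    return W, sigma2

def rbf_warp(points, src, W, sigma2):
    """Apply the learned warp to arbitrary 2D points."""
    d2 = np.sum((points[:, None, :] - src[None, :, :])**2, axis=-1)
    return np.exp(-d2 / sigma2) @ W

# Tiny example: four control points pulled inward, warp applied to one new point.
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)
dst = src * 0.9 + 0.05
W, s2 = rbf_warp_coefficients(src, dst)
print(rbf_warp(np.array([[0.5, 0.5]]), src, W, s2))
```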

    Color enhanced pipelines for reality-based 3D modeling of on site medium sized archeological artifacts

    The paper describes a color-enhanced processing system, applied as a case study to an artifact from the Pompeii archaeological area, developed in order to enhance different techniques for reality-based 3D model construction and visualization of archaeological artifacts. This processing allows reflectance properties to be rendered with perceptual fidelity on a consumer display and presents two main improvements over existing techniques: a. the color definition of the archaeological artifacts; b. the comparison between the range-based and photogrammetry-based pipelines, to understand their limits of use and their suitability to specific objects.
    Apollonio, F.I.; Ballabeni, M.; Gaiani, M. (2014). Color enhanced pipelines for reality-based 3D modeling of on site medium sized archeological artifacts. Virtual Archaeology Review, 5(10):59-76. https://doi.org/10.4995/var.2014.4218
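    One generic ingredient of such colour-characterisation work is a least-squares colour-correction matrix fitted to the patches of a colour target (for example a ColorChecker). The sketch below shows only that basic step with placeholder values, not the paper's full enhancement pipeline.

```python
import numpy as np

# Hypothetical linear-RGB values: 'measured' patches from a captured target and
# their 'reference' values (placeholders, not actual ColorChecker data).
rng = np.random.default_rng(2)
reference = rng.random((24, 3))                   # target values for 24 patches
true_M = np.array([[1.10, 0.05, 0.00],
                   [0.02, 0.95, 0.03],
                   [0.00, 0.04, 1.05]])
measured = reference @ true_M.T                   # simulated camera response

# Least-squares fit of a 3x3 correction matrix: measured @ M ~= reference.
M, *_ = np.linalg.lstsq(measured, reference, rcond=None)

def correct(rgb, M):
    """Apply the colour-correction matrix to linear-RGB pixels of shape (n, 3)."""
    return np.clip(rgb @ M, 0.0, 1.0)

print(np.allclose(correct(measured, M), reference))   # True
```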