
    Morphing a Stereogram into Hologram

    This paper develops a simple and fast method to reconstruct reality from stereoscopic images. We bring together ideas from robust optical flow techniques, morphing deformations and lightfield 3D rendering in order to create unsupervised multiview images of a scene. The reconstruction algorithm provides a good visualization of the virtual 3D imagery behind stereograms upon display on a headset-free Looking Glass 3D monitor. We discuss the possibility of applying the method to live 3D streaming optimized via an associated lookup table. (Comment: PDF, 8 pages, 4 figures)
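    The flow-plus-morphing idea above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: it assumes a dense left-to-right optical flow field is already available, and warps the left image by a fraction s of that flow to approximate an intermediate viewpoint.

```python
import numpy as np

def synthesize_view(left, flow, s):
    """Approximate an intermediate view by backward-warping the left
    image with a fraction s of the left-to-right optical flow
    (nearest-neighbour sampling for brevity)."""
    h, w = left.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Sample the left image at positions displaced by s * flow.
    src_x = np.clip(np.round(xs - s * flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys - s * flow[..., 1]).astype(int), 0, h - 1)
    return left[src_y, src_x]

# Toy example: a 4x4 image under a uniform 2-pixel horizontal flow.
img = np.arange(16, dtype=float).reshape(4, 4)
flow = np.zeros((4, 4, 2))
flow[..., 0] = 2.0                      # constant horizontal disparity
mid = synthesize_view(img, flow, 0.5)   # halfway view: 1-pixel shift
```

    A real pipeline would use sub-pixel interpolation and blend forward-warped left and right images; one way to realize the lookup-table optimization mentioned above would be to precompute the `src_x`/`src_y` index maps once per target view.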

    Statistical Modeling of Craniofacial Shape and Texture

    We present a fully-automatic statistical 3D shape modeling approach and apply it to a large dataset of 3D images, the Headspace dataset, thus generating the first public shape-and-texture 3D Morphable Model (3DMM) of the full human head. Our approach is the first to employ a template that adapts to the dataset subject before dense morphing; this is fully automatic and is achieved using 2D facial landmarking, projection to 3D shape, and mesh editing. In dense template morphing, we improve on the well-known Coherent Point Drift algorithm by incorporating iterative data-sampling and alignment. Our evaluations demonstrate that our method outperforms competing algorithms in correspondence accuracy and modeling ability. We propose a texture map refinement scheme to build high-quality texture maps and a texture model. We present several applications, including the first clinical use of craniofacial 3DMMs in the assessment of different types of surgical intervention applied to a craniosynostosis patient group.
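    The rigid alignment iterated inside CPD-style dense morphing can be sketched as a closed-form orthogonal Procrustes step. This is a minimal illustration, not the authors' code, and it assumes point correspondences are already given:

```python
import numpy as np

def rigid_align(template, target):
    """Closed-form rotation R and translation t minimising
    sum ||R @ p + t - q||^2 over corresponding points p, q
    (the Kabsch/Procrustes solution)."""
    mu_p, mu_q = template.mean(0), target.mean(0)
    P, Q = template - mu_p, target - mu_q
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (template.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T                 # D guards against reflections
    t = mu_q - R @ mu_p
    return R, t

# Toy check: recover a known 90-degree rotation of a 2D triangle.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
R_true = np.array([[0.0, -1.0], [1.0, 0.0]])
dst = src @ R_true.T
R, t = rigid_align(src, dst)
```

    CPD itself replaces the hard correspondences assumed here with soft GMM posteriors; the iterative data-sampling described in the abstract presumably alternates such alignment with correspondence re-estimation on subsampled points.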

    Expression Morphing Between Different Orientations

    How to generate new views from given reference images is an important and interesting topic in image-based rendering. Two important algorithms for this are field morphing and view morphing. Field morphing, an image-morphing algorithm, generates new views from two reference images taken at the same viewpoint; its most familiar result is morphing one person's face into another's. View morphing, a view-synthesis algorithm, generates in-between views from two reference views of the same object taken at different viewpoints; its result is often an animation that moves the object from the viewpoint of one reference image to that of the other. In this thesis, we propose a new framework that integrates field morphing and view morphing to solve the problem of expression morphing. Based on four reference images, we successfully generate a morph from one viewpoint with one expression to another viewpoint with a different expression. We also propose a new approach to eliminate artifacts that frequently occur in view morphing due to occlusions, and in field morphing due to unforeseen combinations of feature lines. We solve these problems by relaxing the monotonicity assumption to piece-wise monotonicity along the epipolar lines. Our experimental results demonstrate the efficiency of this approach in handling occlusions for more realistic synthesis of novel views.
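    The shared core of both algorithms is linear interpolation of corresponding geometry followed by a cross-dissolve of warped intensities. A toy sketch, not the thesis code, assuming corresponding feature points are already matched:

```python
import numpy as np

def morph(p0, p1, i0, i1, s):
    """Interpolate matched feature points and cross-dissolve the two
    (already warped) images. In view morphing this runs after the
    images are prewarped to parallel views, so the linear point
    interpolation corresponds to a physically valid camera motion."""
    points = (1.0 - s) * p0 + s * p1          # geometric interpolation
    image = (1.0 - s) * i0 + s * i1           # photometric cross-dissolve
    return points, image

p0 = np.array([[10.0, 20.0], [30.0, 40.0]])   # features in view 0
p1 = np.array([[20.0, 20.0], [50.0, 40.0]])   # matches in view 1
i0 = np.zeros((2, 2))                         # warped image 0
i1 = np.ones((2, 2))                          # warped image 1
pts, img = morph(p0, p1, i0, i1, 0.5)
```

    The thesis's piece-wise monotonicity relaxation would affect how correspondences along epipolar lines are established before this step, not the interpolation itself.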

    Creation of Large Scale Face Dataset Using Single Training Image

    Face recognition (FR) has become one of the most successful applications of image analysis and understanding in computer vision. Learning-based models are considered among the most favorable approaches to FR, which leads to the requirement of large training datasets to achieve high recognition accuracy. However, having only a limited number of face images available for training is a common problem in practical applications. This dissertation research proposes a new framework to create a face database from a single input image for training purposes. The proposed method integrates a 3D Morphable Model (3DMM) with the Differential Evolution (DE) algorithm. Benefitting from DE's strong search performance, 3D face models can be created from a single 2D image under various illumination and pose contexts. An image deformation technique is also introduced to enhance the quality of synthesized images. The experimental results demonstrate that the proposed method automatically creates a virtual 3D face dataset from a single 2D image with high performance. Moreover, the new dataset provides a large number of face images with abundant variations. The validation process shows only an insignificant difference between the input image and the 2D face image projected by the 3D model. Research work is progressing to consider a nonlinear manifold learning methodology to embed the synthetically created dataset of an individual, so that a test image of the person will be attracted to the respective manifold for accurate recognition.
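    The DE component can be illustrated with a minimal DE/rand/1/bin loop on a toy objective. This is a sketch, not the dissertation's fitting code; in the actual system the objective would presumably measure the discrepancy between the rendered 3DMM and the input photograph.

```python
import numpy as np

def differential_evolution(obj, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=0):
    """Minimal DE/rand/1/bin optimiser of the kind used to search
    3DMM pose/illumination parameters (toy sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([obj(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # Mutate: base vector plus scaled difference of two others.
            a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                 3, replace=False)
            mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)
            # Binomial crossover with at least one mutant gene.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            f = obj(trial)
            if f < fit[i]:                   # greedy selection
                pop[i], fit[i] = trial, f
    return pop[fit.argmin()], fit.min()

# Toy objective: sphere function, minimum at the origin.
best, val = differential_evolution(lambda x: float(np.sum(x ** 2)),
                                   np.array([[-5.0, 5.0]] * 3))
```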

    3D face structure extraction from images at arbitrary poses and under arbitrary illumination conditions

    With the advent of 9/11, face detection and recognition is becoming an important tool for securing homeland safety against potential terrorist attacks, by tracking and identifying suspects who might be trying to indulge in such activities. It is also a technology that has proven its usefulness to law enforcement agencies, helping to identify or narrow down a possible suspect from surveillance tape at a crime scene, or to quickly find a suspect based on descriptions from witnesses. In this thesis we introduce several improvements to morphable-model-based algorithms and make use of the 3D face structures extracted from multiple images to conduct illumination analysis and face recognition experiments. We present an enhanced Active Appearance Model (AAM), which possesses several sub-models that are independently updated to introduce more model flexibility and achieve better feature localization. Most appearance-based models suffer from the unpredictability of facial background, which can result in bad boundary extraction. To overcome this problem we propose a local projection model that accurately locates face boundary landmarks. We also introduce a novel and unbiased cost function that casts face alignment as an optimization problem, in which shape constraints obtained from direct motion estimation are incorporated to achieve a much higher convergence rate and more accurate alignment. Viewing angles are roughly categorized into four poses, and customized view-based AAMs align face images in each specific pose category. We also obtain individual 3D face structures by morphing a 3D generic face model to fit the individual faces. The face contour is dynamically generated so that the morphed face looks realistic. To overcome the correspondence problem between facial feature points on the generic and individual faces, we use an approach based on distance maps.
With the extracted 3D face structure we study the illumination effects on appearance based on spherical harmonic illumination analysis. By normalizing the illumination conditions across different facial images, we extract a global illumination-invariant texture map, which, jointly with the extracted 3D face structure in the form of cubic morphing parameters, completely encodes an individual face and allows for the generation of images at arbitrary pose and under arbitrary illumination. Face recognition is conducted based on the face shape matching error, texture error and illumination-normalized texture error. Experiments show that a higher face recognition rate is achieved by compensating for illumination effects. Furthermore, it is observed that fusing shape and texture information results in better performance than using either shape or texture information individually. Ph.D., Electrical Engineering -- Drexel University, 200
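    The spherical-harmonic analysis rests on the result that Lambertian reflectance is well approximated by the first nine harmonics of the lighting. A sketch of that 9-term basis evaluated at unit surface normals; the constants are the standard real SH normalisations, and this is an illustration rather than the thesis code:

```python
import numpy as np

def sh_basis9(normals):
    """First nine real spherical-harmonic basis functions evaluated
    at unit surface normals; intensity is then modelled as
    albedo * (basis @ lighting_coefficients)."""
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    c0 = 0.282095                 # Y_0^0
    c1 = 0.488603                 # Y_1^{-1}, Y_1^0, Y_1^1
    c2 = 1.092548                 # Y_2^{-2}, Y_2^{-1}, Y_2^1
    c3 = 0.315392                 # Y_2^0
    c4 = 0.546274                 # Y_2^2
    return np.stack([np.full_like(x, c0),
                     c1 * y, c1 * z, c1 * x,
                     c2 * x * y, c2 * y * z,
                     c3 * (3 * z ** 2 - 1),
                     c2 * x * z,
                     c4 * (x ** 2 - y ** 2)], axis=1)

normals = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
B = sh_basis9(normals)            # shape (2, 9)
```

    Illumination normalisation can then be cast as a small least-squares problem for the nine lighting coefficients, which are divided out of the observed intensities to leave an illumination-invariant texture.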

    The Tracking Performance of Distributed Recoverable Flight Control Systems Subject to High Intensity Radiated Fields

    It is known that high intensity radiated fields (HIRF) can produce upsets in digital electronics, and thereby degrade the performance of digital flight control systems. Such upsets, whether from natural or man-made sources, can change data values on digital buses and in memory, and affect CPU instruction execution. HIRF environments are also known to trigger common-mode faults, affecting multiple fault containment regions nearly simultaneously and hence reducing the benefits of n-modular redundancy and other fault-tolerant computing techniques. Thus, it is important to develop models which describe the integration of the embedded digital system, where the control law is implemented, as well as the dynamics of the closed-loop system. In this dissertation, theoretical tools are presented to analyze the relationship between the design choices for a class of distributed recoverable computing platforms and the tracking performance degradation of a digital flight control system implemented on such a platform while operating in a HIRF environment. Specifically, a tractable hybrid performance model is developed for a digital flight control system implemented on a computing platform inspired largely by the NASA family of fault-tolerant, reconfigurable computer architectures known as SPIDER (scalable processor-independent design for enhanced reliability). The focus is on the SPIDER implementation, which uses the computer communication system known as ROBUS-2 (reliable optical bus). A physical HIRF experiment was conducted at the NASA Langley Research Center in order to validate the theoretical tracking performance degradation predictions for a distributed Boeing 747 flight control system subject to a HIRF environment. An extrapolation of these results to scenarios that could not be physically tested is also presented.
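    The hybrid-model idea, a discrete upset/recovery process modulating continuous tracking dynamics, can be illustrated with a toy scalar example. This is purely illustrative; the dissertation's model of the SPIDER/ROBUS-2 platform and the Boeing 747 dynamics is far richer.

```python
import numpy as np

def simulate_tracking(p_upset=0.05, p_recover=0.5, steps=500, seed=1):
    """Toy hybrid model: a scalar tracking loop whose corrective
    feedback is lost while a two-state Markov chain is in the
    'upset' mode (illustration only)."""
    rng = np.random.default_rng(seed)
    upset = False
    error, errors = 0.0, []
    for _ in range(steps):
        # Mode transition: HIRF-induced upset, then eventual recovery.
        if not upset:
            upset = rng.random() < p_upset
        else:
            upset = rng.random() >= p_recover
        disturbance = rng.normal(0.0, 0.1)
        if upset:
            error = error + disturbance          # controller output lost
        else:
            error = 0.5 * error + disturbance    # nominal closed loop
        errors.append(error)
    return np.array(errors)

nominal = simulate_tracking(p_upset=0.0)
degraded = simulate_tracking(p_upset=0.2, p_recover=0.1)
```

    Longer or more frequent upset intervals let the tracking error random-walk before the loop recovers, which is the qualitative degradation the hybrid performance model quantifies.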

    Wearable performance

    This is the post-print version of the article. The official published version can be accessed from the link below. Copyright @ 2009 Taylor & Francis. Wearable computing devices worn on the body provide the potential for digital interaction in the world. A new stage of computing technology at the beginning of the 21st Century links the personal and the pervasive through mobile wearables. The convergence between the miniaturisation of microchips (nanotechnology), intelligent textile or interfacial materials production, advances in biotechnology and the growth of wireless, ubiquitous computing emphasises not only mobility but integration into clothing or the human body. In artistic contexts one expects such integrated wearable devices to have the two-way function of interface instruments (e.g. sensor data acquisition and exchange) worn for particular purposes, either for communication with the environment or various aesthetic and compositional expressions. 'Wearable performance' briefly surveys the context for wearables in the performance arts and distinguishes display and performative/interfacial garments. It then focuses on the authors' experiments with 'design in motion' and digital performance, examining prototyping at the DAP-Lab which involves transdisciplinary convergences between fashion and dance, interactive system architecture, electronic textiles, wearable technologies and digital animation. The concept of an 'evolving' garment design that is materialised (mobilised) in live performance between partners originates from DAP Lab's work with telepresence and distributed media addressing the 'connective tissues' and 'wearabilities' of projected bodies through a study of shared embodiment and perception/proprioception in the wearer (tactile sensory processing). Such notions of wearability are applied both to the immediate sensory processing on the performer's body and to the processing of the responsive, animate environment.