
    Assessing the match performance of non-ideal operational facial images using 3D image data.

    Biometric attributes are unique characteristics specific to an individual, which can be used in automated identification schemes. There have been considerable advancements in the field of face recognition recently, but challenges still exist. One of these challenges is pose variation, specifically roll, pitch, and yaw away from a frontal image. The goal of this problem report is to assess the improvement in facial recognition performance obtainable with commercial pose-correction software. This was done using pose-corrected images obtained in two ways: 1) non-frontal images generated and corrected using 3D facial scans (pseudo-pose-correction) and 2) the same non-frontal images corrected using FaceVACS DBScan. Two matchers were used to evaluate matching performance, namely Cognitec FaceVACS and MegaMatcher 5.0 SDK. A set of matching experiments was conducted using frontal, non-frontal, and pose-corrected images to assess the improvement in matching performance: 1) frontal (probe) to frontal (gallery) images, to generate the baseline; 2) non-ideal pose-varying (probe) to frontal (gallery); 3) pseudo-pose-corrected (probe) to frontal (gallery); and 4) auto-pose-corrected (probe) to frontal (gallery). Cumulative match characteristic (CMC) curves are used to evaluate the performance of the generated match scores. The matching results show better performance for pseudo-pose-corrected images than for the non-frontal images, with rank accuracy reaching 100% for angles that the matchers failed to detect in the non-frontal case. Of the two commercial matchers, Cognitec, whose software is optimized for non-frontal face images, showed better performance in detecting faces with angular rotations. MegaMatcher, which does not perform pose correction, was unable to detect the larger rotation angles: 50° and 60° in pitch and more than 40° in yaw; for coupled pitch/yaw it failed on 4 of the 8 combinations. The requirements of the facial recognition application will influence the decision to implement pose-correction tools.
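    As a concrete illustration of the CMC evaluation described above, the following sketch shows how a cumulative match characteristic curve might be computed from a probe-by-gallery score matrix. It is a minimal, generic Python example; the function name, array layout, and the assumption that probe i mates with gallery entry i are illustrative and are not taken from the report's actual pipeline.

        import numpy as np

        def cmc_curve(scores, max_rank=None):
            """Compute a cumulative match characteristic (CMC) curve.

            scores: (n_probes, n_gallery) similarity matrix; entry [i, j] is the
                    match score between probe i and gallery subject j. The true
                    mate of probe i is assumed to be gallery subject i.
            Returns an array whose element r is the fraction of probes whose
            true mate appears within the top (r + 1) ranked gallery entries.
            """
            n_probes, n_gallery = scores.shape
            max_rank = max_rank or n_gallery
            order = np.argsort(-scores, axis=1)                       # gallery sorted by descending score
            true_rank = np.argmax(order == np.arange(n_probes)[:, None], axis=1)
            return np.array([(true_rank <= r).mean() for r in range(max_rank)])

        # Toy example: 3 probes, 3 gallery subjects, probe i mates with gallery i.
        scores = np.array([[0.9, 0.2, 0.1],
                           [0.3, 0.4, 0.8],
                           [0.1, 0.7, 0.2]])
        print(cmc_curve(scores))   # approx. [0.33, 1.0, 1.0]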

    Function-based Intersubject Alignment of Human Cortical Anatomy

    Making conclusions about the functional neuroanatomical organization of the human brain requires methods for relating the functional anatomy of an individual's brain to population variability. We have developed a method for aligning the functional neuroanatomy of individual brains based on the patterns of neural activity that are elicited by viewing a movie. Instead of basing alignment on functionally defined areas, whose location is defined as the center of mass or the local maximum response, the alignment is based on patterns of response as they are distributed spatially both within and across cortical areas. The method is implemented in the two-dimensional manifold of an inflated, spherical cortical surface. The method, although developed using movie data, generalizes successfully to data obtained with another cognitive activation paradigm—viewing static images of objects and faces—and improves group statistics in that experiment as measured by a standard general linear model (GLM) analysis
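    The group statistics mentioned at the end are assessed with a standard general linear model (GLM). The sketch below illustrates, in generic terms, how such a GLM fit could be computed with ordinary least squares; the names, array shapes, and synthetic data are assumptions for illustration and do not reproduce the paper's analysis or its alignment method.

        import numpy as np

        def fit_glm(data, design):
            """data:   (n_timepoints, n_nodes) response time series
               design: (n_timepoints, n_regressors) design matrix (e.g. condition
                       regressors convolved with a haemodynamic response function)
               returns betas of shape (n_regressors, n_nodes)"""
            betas, *_ = np.linalg.lstsq(design, data, rcond=None)
            return betas

        # Synthetic example: 3 conditions, 200 timepoints, 500 cortical nodes.
        rng = np.random.default_rng(0)
        design = rng.standard_normal((200, 3))
        true_betas = rng.standard_normal((3, 500))
        data = design @ true_betas + 0.1 * rng.standard_normal((200, 500))
        print(fit_glm(data, design).shape)   # (3, 500)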

    Canonical Source Reconstruction for MEG

    We describe a simple and efficient solution to the problem of reconstructing electromagnetic sources into a canonical or standard anatomical space. Its simplicity rests upon incorporating subject-specific anatomy into the forward model in a way that eschews the need for cortical surface extraction. The forward model starts with a canonical cortical mesh, defined in a standard stereotactic space. The mesh is warped, in a nonlinear fashion, to match the subject's anatomy. This warping is the inverse of the transformation derived from spatial normalization of the subject's structural MRI image, using fully automated procedures that have been established for other imaging modalities. Electromagnetic lead fields are computed using the warped mesh, in conjunction with a spherical head model (which does not rely on individual anatomy). The ensuing forward model is inverted using an empirical Bayesian scheme that we have described previously in several publications. Critically, because anatomical information enters the forward model, there is no need to spatially normalize the reconstructed source activity. In other words, each source, comprising the mesh, has a predetermined and unique anatomical attribution within standard stereotactic space. This enables the pooling of data from multiple subjects and the reporting of results in stereotactic coordinates. Furthermore, it allows the graceful fusion of fMRI and MEG data within the same anatomical framework
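    A minimal sketch of the mesh-warping idea follows: canonical mesh vertices defined in standard space are mapped into subject space by sampling an inverse deformation field. The field layout, the voxel-to-mm affine, and the interpolation choice are assumptions made for illustration; this is not the SPM implementation.

        import numpy as np
        from scipy.ndimage import map_coordinates

        def warp_mesh(vertices_mm, deformation, grid_affine):
            """vertices_mm: (n, 3) canonical mesh vertices in standard space (mm)
               deformation: (X, Y, Z, 3) field giving subject-space mm coordinates
                            for each voxel of the standard-space grid
               grid_affine: (4, 4) voxel-to-mm affine of that grid
               returns (n, 3) vertices warped into subject space"""
            # Convert mm coordinates to voxel indices of the deformation grid.
            inv = np.linalg.inv(grid_affine)
            vox = inv[:3, :3] @ vertices_mm.T + inv[:3, 3:4]
            # Interpolate each component of the deformation field at the vertices.
            return np.stack([map_coordinates(deformation[..., k], vox, order=1)
                             for k in range(3)], axis=1)

        # Toy usage: identity affine and a field that shifts everything by 5 mm in x.
        grid = np.indices((10, 10, 10)).transpose(1, 2, 3, 0).astype(float)
        field = grid.copy()
        field[..., 0] += 5.0
        print(warp_mesh(np.array([[2.0, 3.0, 4.0]]), field, np.eye(4)))  # ~[[7., 3., 4.]]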

    Evaluating the anticipated outcomes of MRI seizure image from open-source tool- Prototype approach

    Epileptic seizure is an episode of abnormal neuronal activity in the brain, affecting nearly 70 million people worldwide (Ngugi et al., 2010). Many open-source neuroimaging tools are used for metabolic assessment and analysis. The scope of open-source tools such as MATLAB, Slicer 3D, BrainSuite 21a, SPM, and MedCalc is explained in this paper. MATLAB was used by 60% of the researchers for their image processing, and 10% of them used proprietary software. More than 30% of the researchers used other open-source software tools with their own processing techniques for the study of magnetic resonance seizure images.

    Three Dimensional Nonlinear Statistical Modeling Framework for Morphological Analysis

    This dissertation describes a novel three-dimensional (3D) morphometric analysis framework for building statistical shape models and identifying shape differences between populations. This research generalizes the use of anatomical atlases to more complex anatomy, such as irregular or flat bones and bones with deformity and irregular growth. The foundations of this framework are: 1) anatomical atlases, which allow the creation of homologous anatomical models across populations; 2) a statistical representation of the output models in a compact form that captures both local and global shape variation across populations; and 3) shape analysis using automated 3D landmarking and surface matching. The proposed framework has various applications in the clinical, forensic, and physical anthropology fields. Extensive research has been published in peer-reviewed image processing, forensic anthropology, physical anthropology, biomedical engineering, and clinical orthopedics conferences and journals. The discussion of existing methods for morphometric analysis, including manual and semi-automatic methods, addresses the need for automation of morphometric analysis and statistical atlases. Explanations of these existing methods for constructing statistical shape models, including the benefits and limitations of each, provide evidence of the necessity for such a novel algorithm. A novel approach was taken to achieve accurate point correspondence in the case of irregular and deformed anatomy. This was achieved using a scale-space approach to detect prominent scale-invariant features. These features were then matched and registered using a novel multi-scale method, utilizing both coordinate data and shape descriptors, followed by an overall surface deformation using a new constrained free-form deformation. Applications of the output statistical atlases are discussed, including forensic applications such as skull sexing, as well as physical anthropology applications such as asymmetry in clavicles. Clinical applications in pelvis reconstruction, the study of lumbar kinematics, and the measurement of bone and soft-tissue thickness are also discussed.
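    Once point correspondence across a population is established, a compact statistical shape model is commonly built as a point distribution model, i.e. principal component analysis over stacked vertex coordinates. The sketch below illustrates that generic step only; it does not reproduce the dissertation's scale-space feature matching or constrained free-form deformation.

        import numpy as np

        def build_shape_model(shapes, n_modes=5):
            """shapes: (n_subjects, n_points, 3) corresponding vertices
               returns (mean_shape, modes, variances) of the PCA model"""
            n_subjects, n_points, _ = shapes.shape
            X = shapes.reshape(n_subjects, -1)            # flatten to (n, 3*p)
            mean = X.mean(axis=0)
            U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
            variances = (s ** 2) / (n_subjects - 1)       # variance explained per mode
            return mean.reshape(n_points, 3), Vt[:n_modes], variances[:n_modes]

        def synthesize(mean_shape, modes, weights):
            """Generate a new shape from mode weights (e.g. +/- 2 sqrt(variance))."""
            return mean_shape + (weights @ modes).reshape(mean_shape.shape)

        # Toy usage: 20 synthetic subjects with 100 corresponding points each.
        rng = np.random.default_rng(0)
        shapes = rng.normal(size=(20, 100, 3))
        mean_shape, modes, var = build_shape_model(shapes, n_modes=3)
        new_shape = synthesize(mean_shape, modes, 2 * np.sqrt(var))   # +2 SD along each mode
        print(new_shape.shape)   # (100, 3)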

    State of the Art in Face Recognition

    Notwithstanding the tremendous effort to solve the face recognition problem, it is not yet possible to design a face recognition system that comes close to human performance. New computer vision and pattern recognition approaches need to be investigated. New knowledge and perspectives from fields such as psychology and neuroscience must also be incorporated into the current field of face recognition to design a robust face recognition system. Indeed, many more efforts are required to arrive at a human-like face recognition system. This book is an effort to narrow the gap between the previous state of face recognition research and its future state.

    Methods for the acquisition and analysis of volume electron microscopy data


    Fully automated landmarking and facial segmentation on 3D photographs

    Three-dimensional facial stereophotogrammetry provides a detailed representation of craniofacial soft tissue without the use of ionizing radiation. While manual annotation of landmarks serves as the current gold standard for cephalometric analysis, it is a time-consuming process and is prone to human error. The aim of this study was to develop and evaluate an automated cephalometric annotation method using a deep learning-based approach. Ten landmarks were manually annotated on 2897 3D facial photographs by a single observer. The automated landmarking workflow involved two successive DiffusionNet models and additional algorithms for facial segmentation. The dataset was randomly divided into a training and a test dataset. The training dataset was used to train the deep learning networks, whereas the test dataset was used to evaluate the performance of the automated workflow. The precision of the workflow was evaluated by calculating the Euclidean distances between the automated and manual landmarks and comparing them to the intra-observer and inter-observer variability of manual annotation and to a semi-automated landmarking method. The workflow was successful in 98.6% of all test cases. The deep learning-based landmarking method achieved precise and consistent landmark annotation. The mean precision of 1.69 (+/- 1.15) mm was comparable to the inter-observer variability (1.31 +/- 0.91 mm) of manual annotation. The Euclidean distance between the automated and manual landmarks was within 2 mm in 69% of cases. Automated landmark annotation on 3D photographs was achieved with the DiffusionNet-based approach. The proposed method allows quantitative analysis of large datasets and may be used in diagnosis, follow-up, and virtual surgical planning.
    Comment: 13 pages, 4 figures, 7 tables, repository https://github.com/rumc3dlab/3dlandmarkdetection
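    The precision metrics reported above (mean +/- SD Euclidean distance and the fraction of landmarks within 2 mm) can be computed as in the following sketch. The array shapes and the synthetic perturbation used in the toy example are assumptions for illustration and are not drawn from the paper or its repository.

        import numpy as np

        def landmark_precision(auto_pts, manual_pts, threshold_mm=2.0):
            """auto_pts, manual_pts: (n_subjects, n_landmarks, 3) coordinates in mm
               returns mean error, standard deviation, and the fraction of
               landmarks within `threshold_mm` of the manual annotation."""
            dists = np.linalg.norm(auto_pts - manual_pts, axis=-1)   # per-landmark errors
            return dists.mean(), dists.std(), (dists <= threshold_mm).mean()

        # Toy usage with random perturbations standing in for prediction error.
        rng = np.random.default_rng(1)
        manual = rng.uniform(0, 100, size=(50, 10, 3))
        auto = manual + rng.normal(0, 1.0, size=manual.shape)
        mean_err, sd_err, within_2mm = landmark_precision(auto, manual)
        print(f"{mean_err:.2f} +/- {sd_err:.2f} mm, {within_2mm:.0%} within 2 mm")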