
    Automatic landmark annotation and dense correspondence registration for 3D human facial images

    Dense surface registration of three-dimensional (3D) human facial images holds great potential for studies of human trait diversity, disease genetics, and forensics. Non-rigid registration is particularly useful for establishing dense anatomical correspondences between faces. Here we describe a novel non-rigid registration method for fully automatic 3D facial image mapping. This method comprises two steps: first, seventeen facial landmarks are automatically annotated, mainly via PCA-based feature recognition following 3D-to-2D data transformation. Second, an efficient thin-plate spline (TPS) protocol is used to establish the dense anatomical correspondence between facial images, under the guidance of the predefined landmarks. We demonstrate that this method is robust and highly accurate, even across different ethnicities. Average faces are calculated for individuals of Han Chinese and Uyghur origin. Fully automatic and computationally efficient, this method enables high-throughput analysis of human facial feature variation. Comment: 33 pages, 6 figures, 1 table
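    The landmark-guided TPS warp described above can be sketched in a few lines. The following is a minimal NumPy implementation of the standard thin-plate spline interpolant; the test landmark coordinates are illustrative assumptions, not the paper's seventeen facial landmarks or its exact protocol:

    ```python
    import numpy as np

    def tps_fit(src, dst):
        """Fit a 2D thin-plate spline mapping src landmarks onto dst landmarks.

        Solves the standard TPS linear system [[K, P], [P.T, 0]] for the
        radial weights w and the affine coefficients a."""
        n = src.shape[0]
        d = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
        with np.errstate(divide="ignore", invalid="ignore"):
            K = np.where(d > 0, d**2 * np.log(d), 0.0)  # U(r) = r^2 log r, U(0) = 0
        P = np.hstack([np.ones((n, 1)), src])           # affine part [1, x, y]
        L = np.zeros((n + 3, n + 3))
        L[:n, :n] = K
        L[:n, n:] = P
        L[n:, :n] = P.T
        rhs = np.zeros((n + 3, 2))
        rhs[:n] = dst
        params = np.linalg.solve(L, rhs)
        return params[:n], params[n:]                   # radial weights, affine part

    def tps_warp(pts, src, w, a):
        """Apply the fitted spline to an (m, 2) array of points."""
        d = np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=-1)
        with np.errstate(divide="ignore", invalid="ignore"):
            U = np.where(d > 0, d**2 * np.log(d), 0.0)
        return np.hstack([np.ones((len(pts), 1)), pts]) @ a + U @ w
    ```

    Because TPS is an exact interpolant, the fitted warp maps every source landmark precisely onto its target, while bending the surrounding space as smoothly as possible.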

    Pbx loss in cranial neural crest, unlike in epithelium, results in cleft palate only and a broader midface.

    Orofacial clefting represents the most common craniofacial birth defect. Cleft lip with or without cleft palate (CL/P) is genetically distinct from cleft palate only (CPO). Numerous transcription factors (TFs) regulate normal development of the midface, comprising the premaxilla, maxilla and palatine bones, through control of basic cellular behaviors. Within the Pbx family of genes encoding Three Amino-acid Loop Extension (TALE) homeodomain-containing TFs, we previously established that in the mouse, Pbx1 plays a preeminent role in midfacial morphogenesis, and Pbx2 and Pbx3 execute collaborative functions in domains of coexpression. We also reported that Pbx1 loss from cephalic epithelial domains, on a Pbx2- or Pbx3-deficient background, results in CL/P via disruption of a regulatory network that controls apoptosis at the seam of frontonasal and maxillary process fusion. Conversely, Pbx1 loss in cranial neural crest cell (CNCC)-derived mesenchyme on a Pbx2-deficient background results in CPO, a phenotype not yet characterized. In this study, we provide in-depth analysis of PBX1 and PBX2 protein localization from early stages of midfacial morphogenesis throughout development of the secondary palate. We further establish CNCC-specific roles of PBX TFs and describe the developmental abnormalities resulting from their loss in the murine embryonic secondary palate. Additionally, we compare and contrast the phenotypes arising from PBX1 loss in CNCC with those caused by its loss in the epithelium and show that CNCC-specific Pbx1 deletion affects only later secondary palate morphogenesis. Moreover, CNCC mutants exhibit perturbed rostro-caudal organization and broadening of the midfacial complex. Proliferation defects are pronounced in CNCC mutants at gestational day (E)12.5, suggesting altered proliferation of mutant palatal progenitor cells, consistent with roles of PBX factors in maintaining progenitor cell state. 
Although the craniofacial skeletal abnormalities in CNCC mutants do not result from overt patterning defects, osteogenesis is delayed, underscoring a critical role of PBX factors in CNCC morphogenesis and differentiation. Overall, the characterization of tissue-specific Pbx loss-of-function mouse models with orofacial clefting establishes these strains as unique tools to further dissect the complexities of this congenital craniofacial malformation. This study closely links PBX TALE homeodomain proteins to the variation in maxillary shape and size that occurs in pathological settings and during the evolution of midfacial morphology.

    Deep learning approach to Fourier ptychographic microscopy

    Convolutional neural networks (CNNs) have achieved tremendous success in solving complex inverse problems. The aim of this work is to develop a novel CNN framework to reconstruct video sequences of dynamic live cells captured using a computational microscopy technique, Fourier ptychographic microscopy (FPM). The unique feature of FPM is its capability to reconstruct images with both wide field-of-view (FOV) and high resolution, i.e. a large space-bandwidth product (SBP), by taking a series of low-resolution intensity images. For live cell imaging, a single FPM frame contains thousands of cell samples with different morphological features. Our idea is to fully exploit the statistical information provided by these large spatial ensembles so as to make predictions in a sequential measurement, without using any additional temporal dataset. Specifically, we show that it is possible to reconstruct high-SBP dynamic cell videos by a CNN trained only on the first FPM dataset captured at the beginning of a time-series experiment. Our CNN approach reconstructs a 12800×10800 pixel phase image in only ∼25 seconds, a 50× speedup compared to the model-based FPM algorithm. In addition, the CNN further reduces the required number of images in each time frame by ∼6×. Overall, this significantly improves the imaging throughput by reducing both the acquisition and computational times. The proposed CNN is based on the conditional generative adversarial network (cGAN) framework. We further propose a mixed loss function that combines the standard image domain loss and a weighted Fourier domain loss, which leads to improved reconstruction of the high frequency information. Additionally, we also exploit transfer learning so that our pre-trained CNN can be further optimized to image other cell types. 
Our technique demonstrates a promising deep learning approach to continuously monitor large live-cell populations over an extended time and gather useful spatial and temporal information with sub-cellular resolution. We would like to thank NVIDIA Corporation for supporting us with the GeForce Titan Xp through the GPU Grant Program. First author draft
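
    A mixed loss of this kind can be sketched as follows. This is a NumPy sketch, not the paper's implementation: the radial high-frequency weighting and the `alpha`/`hf_weight` balance are assumptions, since the paper's exact weights are not given in the abstract:

    ```python
    import numpy as np

    def mixed_loss(pred, target, alpha=0.1, hf_weight=2.0):
        """Image-domain L1 loss plus a weighted Fourier-domain loss.

        The radial weight emphasizes high spatial frequencies; alpha and
        hf_weight are illustrative assumptions, not the paper's values."""
        img_loss = np.mean(np.abs(pred - target))
        f_pred = np.fft.fftshift(np.fft.fft2(pred))
        f_tgt = np.fft.fftshift(np.fft.fft2(target))
        h, w = pred.shape
        yy, xx = np.mgrid[:h, :w]
        r = np.hypot(yy - h / 2, xx - w / 2)    # distance from the DC component
        weight = 1.0 + hf_weight * r / r.max()  # larger weight at high frequencies
        freq_loss = np.mean(weight * np.abs(f_pred - f_tgt)) / (h * w)
        return img_loss + alpha * freq_loss
    ```

    Up-weighting the Fourier residual away from the DC component penalizes errors in fine detail more heavily than a plain image-domain loss would, which is the stated motivation for the Fourier term.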

    A Review of Verbal and Non-Verbal Human-Robot Interactive Communication

    In this paper, an overview of human-robot interactive communication is presented, covering verbal as well as non-verbal aspects of human-robot interaction. Following a historical introduction and a motivation for fluid human-robot communication, ten desiderata are proposed, which provide an organizing axis for both recent and future research on human-robot communication. The ten desiderata are then examined in detail, culminating in a unifying discussion and a forward-looking conclusion.

    Calibration and segmentation of skin areas in hyperspectral imaging for the needs of dermatology

    Introduction: Hyperspectral imaging is one of the currently known imaging methods; it fills a gap left by visible-light imaging with conventional devices that use classical CCDs. A major problem in the study of the skin is its segmentation and the proper calibration of the results obtained. For this purpose, a dedicated automatic image analysis algorithm is proposed by the paper's authors. Material and method: The developed algorithm was tested on data acquired with a Specim camera. Images covered different body areas of healthy patients. The resulting data were anonymized and stored in the output, source .dat (ENVI file) and raw formats. The spectral range of the data obtained was 397 to 1030 nm. An image was recorded every 0.79 nm, which in total gave 800 2D images for each subject. A total of 36,000 2D images in .dat format and the same number in raw format were obtained from 45 full hyperspectral measurement sessions. As part of the paper, an image analysis algorithm using both known analysis methods and new ones developed by the authors was proposed. Among others, filtration with a median filter, the Canny filter, conditional opening and closing operations, and spectral analysis were used. The algorithm was implemented in Matlab and C and is used in practice. Results: The proposed method enables segmentation of the 36,000 measured 2D images with an error at the level of 7.8%. Segmentation is carried out fully automatically, based on a reference spectrum. In addition, brightness calibration of individual 2D images is performed for the subsequent wavelengths. For a few segmented areas, the analysis time on an Intel Core i5 CPU with 4 GB RAM does not exceed 10 s. Conclusions: The obtained results confirm the usefulness of the applied method for image analysis and processing in dermatological practice. In particular, it is useful in the quantitative evaluation of skin lesions. Such analysis can be performed fully automatically, without operator intervention.
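
    The spectral step of such a pipeline can be sketched as a spectral-angle match against a reference skin spectrum. This is a minimal NumPy sketch under assumptions: the spectral-angle formulation, the `max_angle` threshold, and the toy spectra are illustrative, not the paper's exact protocol:

    ```python
    import numpy as np

    def spectral_angle_map(cube, reference):
        """Per-pixel spectral angle (radians) between an (H, W, bands)
        hyperspectral cube and a reference skin spectrum of shape (bands,)."""
        dots = cube @ reference
        norms = np.linalg.norm(cube, axis=-1) * np.linalg.norm(reference)
        cos = np.clip(dots / np.maximum(norms, 1e-12), -1.0, 1.0)
        return np.arccos(cos)

    def segment_skin(cube, reference, max_angle=0.1):
        """Boolean skin mask: pixels whose spectrum lies within max_angle
        radians of the reference spectrum (threshold is an assumption)."""
        return spectral_angle_map(cube, reference) < max_angle
    ```

    Because the spectral angle ignores per-pixel magnitude, this kind of match is insensitive to overall brightness differences, which is why a separate brightness calibration step, as described above, can be performed independently.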

    Introduction to Facial Micro Expressions Analysis Using Color and Depth Images: A Matlab Coding Approach (Second Edition, 2023)

    The book provides a gentle introduction to the field of Facial Micro Expressions Recognition (FMER) using color and depth images, with the aid of the MATLAB programming environment. FMER is a subset of image processing and a multidisciplinary topic to analyze, so it requires familiarity with other areas of Artificial Intelligence (AI) such as machine learning, digital image processing, and psychology. This book therefore covers all of these topics for readers ranging from beginners to professionals in the field of AI, and even those without an AI background. Our goal is to provide a standalone introduction to FMER analysis in the form of theoretical descriptions for readers with no background in image processing, together with reproducible MATLAB practical examples. We also describe the basic definitions for FMER analysis and the MATLAB libraries used in the text, which helps the reader apply the experiments in real-world applications. We believe that this book is suitable for students, researchers, and professionals alike who need to develop practical skills, along with a basic understanding of the field. We expect that, after reading this book, the reader will feel comfortable with the key stages of the pipeline: color and depth image processing, color and depth image representation, classification, machine learning, facial micro-expression recognition, feature extraction, and dimensionality reduction. Comment: This is the second edition of the book