
    Artificial intelligence (AI) in rare diseases: is the future brighter?

    The amount of data collected and managed in (bio)medicine is ever-increasing, creating a need to collect, analyze, and characterize this information rapidly and efficiently. Artificial intelligence (AI), with an emphasis on deep learning, holds great promise in this area and is already being successfully applied to basic research, diagnosis, drug discovery, and clinical trials. Rare diseases (RDs), which are severely underrepresented in basic and clinical research, can particularly benefit from AI technologies. Of the more than 7,000 RDs described worldwide, only 5% have a treatment. The ability of AI technologies to integrate and analyze data from different sources (e.g., multi-omics, patient registries) can be used to overcome the challenges specific to RDs (e.g., low diagnostic rates, small patient numbers, geographical dispersion). Ultimately, AI-mediated knowledge of RDs could significantly boost therapy development. AI approaches are already being used in RDs, and this review aims to collect and summarize these advances. A section dedicated to congenital disorders of glycosylation (CDG), a particular group of orphan RDs that can serve as a potential study model for other common diseases and RDs, has also been included.

    The Use of Artificial Intelligence for the Classification of Craniofacial Deformities

    Positional cranial deformities are a common finding in toddlers, yet differentiating them from craniosynostosis can be challenging. The aim of this study was to train convolutional neural networks (CNNs) to classify craniofacial deformities based on 2D images generated using photogrammetry as a radiation-free imaging technique. A total of 487 patients with photogrammetry scans were included in this retrospective cohort study: children with craniosynostosis (n = 227), positional deformities (n = 206), and healthy children (n = 54). Three two-dimensional images were extracted from each photogrammetry scan, and the dataset was divided into training, validation, and test sets. Fine-tuned ResNet-152 networks were used for training, and performance was quantified using tenfold cross-validation. For the detection of craniosynostosis, sensitivity was 0.94 and specificity 0.85. For the differentiation of the five classes (trigonocephaly, scaphocephaly, positional plagiocephaly left, positional plagiocephaly right, and healthy), sensitivity ranged from 0.45 (positional plagiocephaly left) to 0.95 (scaphocephaly) and specificity ranged from 0.87 (positional plagiocephaly right) to 0.97 (scaphocephaly). We present a CNN-based approach to classify craniofacial deformities on two-dimensional images with promising results. A larger dataset would be required to also identify rarer forms of craniosynostosis. The chosen 2D approach enables future applications on digital cameras or smartphones.
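
    The fine-tuning setup described above follows a standard transfer-learning recipe. The sketch below illustrates it in PyTorch, assuming the three extracted 2D views per scan are organized in ImageFolder directories; the paths, hyperparameters, and data layout are illustrative assumptions, not details taken from the study.

```python
# Minimal sketch of fine-tuning a ResNet-152 for five-class craniofacial
# classification on 2D images. Paths and hyperparameters are illustrative.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_CLASSES = 5  # trigonocephaly, scaphocephaly, plagiocephaly left/right, healthy

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical directory layout: data/train/<class_name>/<image>.png
train_ds = datasets.ImageFolder("data/train", transform=transform)
train_dl = DataLoader(train_ds, batch_size=16, shuffle=True)

# Start from ImageNet weights and replace the classification head.
model = models.resnet152(weights=models.ResNet152_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):
    for images, labels in train_dl:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```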

    Fully automatic landmarking of 2D photographs identifies novel genetic loci influencing facial features

    We report a genome-wide association study for facial features in > 6,000 Latin Americans. We placed 106 landmarks on 2D frontal photographs using the cloud service platform Face++. After Procrustes superposition, genome-wide association testing was performed for 301 inter-landmark distances. We detected significant association (P-value < 5×10⁻⁸) for 42 genome regions. Of these, 9 regions have been previously reported in GWAS of facial features. In follow-up analyses, we replicated 26 of the 33 novel regions (in East Asians or Europeans). The replicated regions include 1q32.3, 3q21.1, 8p11.21, 10p11.1, and 22q12.1, all comprising strong candidate genes involved in craniofacial development. Furthermore, the 1q32.3 region shows evidence of introgression from archaic humans. These results provide novel biological insights into facial variation and establish that automatic landmarking of standard 2D photographs is a simple and informative approach for the genetic analysis of facial variation, suitable for the rapid analysis of large population samples.
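
    The phenotyping step (Procrustes superposition followed by inter-landmark distances used as GWAS traits) can be sketched as below. This is a simplified illustration on synthetic landmark arrays, using ordinary Procrustes alignment to a single reference; the study used Face++ landmarks, a generalized Procrustes superposition across all subjects, and a pre-selected set of 301 distances.

```python
# Sketch: align 2D facial landmark configurations and derive inter-landmark
# distances as candidate GWAS traits. Landmark data below are synthetic.
from itertools import combinations

import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(0)
n_subjects, n_landmarks = 100, 106
shapes = rng.normal(size=(n_subjects, n_landmarks, 2))  # placeholder landmarks

# Align every configuration to the first one (ordinary Procrustes; the study
# performed a generalized Procrustes superposition).
reference = shapes[0]
aligned = []
for shape in shapes:
    _, mtx2, _ = procrustes(reference, shape)  # centred, scaled, rotated copy
    aligned.append(mtx2)
aligned = np.array(aligned)

# All pairwise inter-landmark distances; the study analysed 301 selected ones.
pairs = list(combinations(range(n_landmarks), 2))
distances = np.array([
    [np.linalg.norm(subj[i] - subj[j]) for i, j in pairs]
    for subj in aligned
])
print(distances.shape)  # (n_subjects, n_pairs); each column is one trait
```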

    Loss-of-function variants in CUL3 cause a syndromic neurodevelopmental disorder

    Purpose: De novo variants in CUL3 (Cullin-3 ubiquitin ligase) have been strongly associated with neurodevelopmental disorders (NDDs), but no large case series have been reported so far. Here we aimed to collect sporadic cases carrying rare variants in CUL3, describe the genotype-phenotype correlation, and investigate the underlying pathogenic mechanism. Methods: Genetic data and detailed clinical records were collected via multi-center collaboration. Dysmorphic facial features were analyzed using GestaltMatcher. Variant effects on CUL3 protein stability were assessed using patient-derived T cells. Results: We assembled a cohort of 35 individuals with heterozygous CUL3 variants presenting a syndromic NDD characterized by intellectual disability with or without autistic features. Of these, 33 have loss-of-function (LoF) and two have missense variants. CUL3 LoF variants in patients may affect protein stability, leading to perturbations in protein homeostasis, as evidenced by decreased ubiquitin-protein conjugates in vitro. Specifically, we show that cyclin E1 (CCNE1) and 4E-BP1 (EIF4EBP1), two prominent substrates of CUL3, fail to be targeted for proteasomal degradation in patient-derived cells. Conclusion: Our study further refines the clinical and mutational spectrum of CUL3-associated NDDs, expands the spectrum of cullin RING E3 ligase-associated neuropsychiatric disorders, and suggests that haploinsufficiency via LoF variants is the predominant pathogenic mechanism.

    Study of human developmental anomalies: a model for phenotypic analysis

    Since the early 1990s, the Human Genome Project has enabled the emergence of numerous global techniques carrying the suffix "-omics": genomics, transcriptomics, proteomics, epigenomics, and so on. The global investigation of the full set of human phenotypes (the "phenome") has given rise to the corresponding field of "phenomics". The phenomic approach makes it possible to determine links between combinations of phenotypic traits. We wish to apply this approach to human malformations, and in particular to their combinations, which form well-characterized syndromes, associations, or sequences in only a small number of cases. To evaluate the feasibility of this approach, we carried out a pilot study establishing a database for the phenotypic description of fetal anomalies, in the following steps: a retrospective study of a series of fetal autopsies performed at CHU Sainte-Justine (Montreal, QC, Canada) between 2001 and 2006; the development of three thesauri and an ontology of human developmental anomalies; and the construction of a database in MySQL. This multicentric, hypothesis-generating database, accessible at http://www.malformations.org, allows us to easily retrieve the phenotypic data of the 543 recorded cases carrying a given anomaly, to describe them statistically, and to generate different types of hypotheses. It also allowed us to select 153 malformed fetuses that are the subject of an array comparative genomic hybridization (aCGH) study in search of underlying genomic anomalies.
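
    The kind of retrieval and frequency query such a database supports can be illustrated with the short sketch below. The two-table schema (cases, anomalies) and the example records are hypothetical, and SQLite is used only to keep the example self-contained; the actual MDB runs on MySQL and its real schema is not described in the abstract.

```python
# Sketch of an anomaly-frequency query against a hypothetical malformation
# database schema. SQLite stands in for MySQL for self-containment.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cases (case_id INTEGER PRIMARY KEY, autopsy_year INTEGER);
CREATE TABLE anomalies (case_id INTEGER, anomaly_term TEXT);
INSERT INTO cases VALUES (1, 2003), (2, 2004), (3, 2005);
INSERT INTO anomalies VALUES
    (1, 'cleft palate'), (1, 'polydactyly'),
    (2, 'cleft palate'), (3, 'ventricular septal defect');
""")

# Frequency of each anomaly term across all recorded cases.
for term, n_cases, pct in conn.execute("""
    SELECT anomaly_term,
           COUNT(DISTINCT case_id) AS n_cases,
           ROUND(100.0 * COUNT(DISTINCT case_id) /
                 (SELECT COUNT(*) FROM cases), 1) AS pct
    FROM anomalies
    GROUP BY anomaly_term
    ORDER BY n_cases DESC
"""):
    print(f"{term}: {n_cases} cases ({pct}%)")
```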

    GestaltMatcher Database - A global reference for facial phenotypic variability in rare human diseases

    The most important factor that complicates the work of dysmorphologists is the significant phenotypic variability of the human face. Next-Generation Phenotyping (NGP) tools that assist clinicians with recognizing characteristic syndromic patterns are particularly challenged when confronted with patients from populations different from their training data. To that end, we systematically analyzed the impact of genetic ancestry on facial dysmorphism. For that purpose, we established the GestaltMatcher Database (GMDB) as a reference dataset of medical images of patients with rare genetic disorders from around the world. We collected 10,980 frontal facial images - more than a quarter previously unpublished - from 8,346 patients, representing 581 rare disorders. Although the predominant ancestry is still European (67%), data from underrepresented populations have been increased considerably via global collaborations (19% Asian and 7% African). This includes previously unpublished reports for more than 40% of the African patients. The NGP analysis on this diverse dataset revealed characteristic performance differences depending on the composition of the training and test sets with respect to genetic relatedness. For clinical use of NGP, incorporating non-European patients resulted in a profound enhancement of GestaltMatcher performance: the top-5 accuracy rate increased by 11.29%. Importantly, this improvement in delineating the correct disorder from a facial portrait was achieved without decreasing performance on European patients. By design, GMDB complies with the FAIR principles, rendering the curated medical data findable, accessible, interoperable, and reusable, so that GMDB can also serve as data for training and benchmarking. In summary, our study of facial dysmorphism in a global sample revealed considerable cross-ancestral phenotypic variability confounding NGP, which should be counteracted by international efforts to increase data diversity. GMDB will serve as a vital reference database for clinicians and a transparent training set for advancing NGP technology.
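
    The top-5 accuracy figure quoted above counts a case as correctly classified when the true disorder appears among the five highest-ranked candidate syndromes. A minimal sketch of that metric, using synthetic scores in place of actual GestaltMatcher outputs, is shown below.

```python
# Top-k accuracy for syndrome classification: a case counts as a hit if the
# true disorder is among the k highest-scoring candidates. Data are synthetic.
import numpy as np

def top_k_accuracy(scores: np.ndarray, labels: np.ndarray, k: int = 5) -> float:
    """scores: (n_patients, n_disorders) score matrix;
    labels: (n_patients,) index of the true disorder per patient."""
    top_k = np.argsort(scores, axis=1)[:, -k:]        # k best-scoring disorders
    hits = (top_k == labels[:, None]).any(axis=1)     # true disorder among them?
    return float(hits.mean())

rng = np.random.default_rng(42)
scores = rng.random((1000, 581))                      # 581 disorders, as in GMDB
labels = rng.integers(0, 581, size=1000)
print(f"top-5 accuracy: {top_k_accuracy(scores, labels):.3f}")  # ~5/581 by chance
```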

    Pattern recognition in facial expressions: algorithms and applications

    Advisor: Hélio Pedrini. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação. Emotion recognition has become a relevant research topic for the scientific community, since it plays an essential role in the continuous improvement of human-computer interaction systems. It can be applied in various areas, such as medicine, entertainment, surveillance, biometrics, education, social networks, and affective computing. There are open challenges related to the development of emotion recognition systems based on facial expressions, such as data that reflect more spontaneous emotions and real scenarios. In this doctoral dissertation, we propose different methodologies for the development of emotion recognition systems based on facial expressions, as well as their applicability to other, similar problems. The first is an emotion recognition methodology for occluded facial expressions based on the Census Transform Histogram (CENTRIST). Occluded facial expressions are reconstructed using an algorithm based on Robust Principal Component Analysis (RPCA). Facial expression features are then extracted with CENTRIST, as well as with Local Binary Patterns (LBP), Local Gradient Coding (LGC), and an LGC extension. The generated feature space is reduced by applying Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), and the K-Nearest Neighbors (KNN) and Support Vector Machine (SVM) algorithms are used for classification. This method reached competitive accuracy rates for occluded and non-occluded facial expressions. The second methodology proposes dynamic facial expression recognition based on Visual Rhythms (VR) and Motion History Images (MHI), such that a fusion of both descriptors encodes appearance, shape, and motion information of the video sequences. For feature extraction, the Weber Local Descriptor (WLD), CENTRIST, Histogram of Oriented Gradients (HOG), and Gray-Level Co-occurrence Matrix (GLCM) are employed. This approach offers a new direction for dynamic facial expression recognition, together with an analysis of the relevance of facial parts. The third is an effective method for audio-visual emotion recognition based on speech and facial expressions. The methodology involves a hybrid neural network to extract audio and visual features from videos: for audio, a Convolutional Neural Network (CNN) based on the log Mel-spectrogram is used, whereas a CNN built on the Census Transform is employed for visual extraction. The audio and visual features are reduced by PCA and LDA and classified with KNN, SVM, Logistic Regression (LR), and Gaussian Naïve Bayes (GNB). This approach achieves competitive recognition rates, especially on spontaneous data. The fourth methodology investigates the problem of detecting Down syndrome from photographs: a geometric descriptor is proposed to extract facial features, and experiments performed on a public dataset show the effectiveness of the approach. The last methodology addresses the recognition of genetic syndromes in photographs, extracting facial attributes using deep neural network features and anthropometric measurements; experiments conducted on a public dataset achieve competitive recognition rates.
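
    The shared structure of the first, descriptor-based methodology (handcrafted features, PCA/LDA reduction, KNN/SVM classification) can be sketched roughly as follows. The sketch uses LBP histograms from scikit-image on synthetic grayscale crops purely as a stand-in; the thesis's CENTRIST/LGC descriptors, RPCA-based occlusion reconstruction, and its actual datasets are not reproduced here.

```python
# Rough sketch of a handcrafted-descriptor pipeline: LBP histograms -> PCA ->
# LDA -> SVM. Images and labels below are synthetic placeholders.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_classes = 300, 7                      # e.g. seven basic emotions
images = rng.integers(0, 256, size=(n_samples, 64, 64), dtype=np.uint8)
labels = rng.integers(0, n_classes, n_samples)

def lbp_histogram(img, p=8, r=1):
    """Uniform LBP histogram of an 8-bit grayscale face crop."""
    codes = local_binary_pattern(img, P=p, R=r, method="uniform")
    hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2), density=True)
    return hist

features = np.array([lbp_histogram(img) for img in images])

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=0)

clf = make_pipeline(
    PCA(n_components=8),                               # compact the descriptor space
    LinearDiscriminantAnalysis(n_components=n_classes - 1),
    SVC(kernel="linear"),
)
clf.fit(X_train, y_train)
print(f"accuracy on synthetic data: {clf.score(X_test, y_test):.2f}")
```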