128 research outputs found

    A Survey on Artificial Intelligence Techniques for Biomedical Image Analysis in Skeleton-Based Forensic Human Identification

    This paper presents the first survey on the application of AI techniques to the analysis of biomedical images for forensic human identification purposes. Human identification is of great relevance in today's society and, in particular, in medico-legal contexts. As a consequence, every technological advance introduced in this field can help meet the increasing need for accurate and robust tools for establishing and verifying human identity. We first describe the importance and applicability of forensic anthropology in many identification scenarios. We then present the main trends in the application of computer vision, machine learning, and soft computing techniques to the estimation of the biological profile, identification through comparative radiography and craniofacial superimposition, trauma and pathology analysis, and facial reconstruction. The potential and limitations of the employed approaches are described, and we conclude with a discussion of methodological issues and future research. Funding: Spanish Ministry of Science, Innovation and Universities; European Union (EU) PGC2018-101216-B-I00; Regional Government of Andalusia under grant EXAISFI P18-FR-4262; Instituto de Salud Carlos III; European Union (EU) DTS18/00136; European Commission H2020-MSCA-IF-2016 through the Skeleton-ID Marie Curie Individual Fellowship 746592; Spanish Ministry of Science, Innovation and Universities-CDTI, Neotec program 2019 EXP-00122609/SNEO-20191236; European Union (EU); Xunta de Galicia ED431G 2019/01; European Union (EU) RTI2018-095894-B-I0

    Analyzing fibrous tissue pattern in fibrous dysplasia bone images using deep R-CNN networks for segmentation

    Predictive health monitoring systems help detect threats to human health at an early stage, and evolving deep learning techniques in medical image analysis deliver efficient feedback quickly. Fibrous dysplasia (FD) is a genetic disorder triggered by a mutation in the guanine nucleotide-binding protein with alpha-stimulatory activity during human bone genesis. It slowly occupies the bone marrow, converts bone cells into fibrous tissue, weakens the bone structure, and can lead to permanent disability. This paper studies techniques for analyzing FD bone images with deep networks. In addition, a linear regression model is fitted to predict bone abnormality levels from the observed coefficients. Modern image processing begins with various image filters, which describe the edges, shades, and texture values of the receptive field. Different segmentation and edge detection mechanisms are applied to locate tumors, lesions, and fibrous tissue in the bone image, and the fibrous region is extracted using a region-based convolutional neural network (R-CNN) algorithm. The segmentation results are compared using accuracy metrics, and the segmentation loss decreases with each iteration. The overall loss is 0.24% and the accuracy is 99%; segmenting the masked region reaches 98% accuracy, and building the bounding boxes reaches 99% accuracy.
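The linear regression step mentioned above can be sketched in a few lines. This is a minimal illustration, not the paper's actual model: the feature values and coefficients below are synthetic stand-ins for the texture features and abnormality levels described in the abstract.

```python
import numpy as np

# Hypothetical stand-in data: each row holds three texture features extracted
# from a segmented fibrous region; y is the observed abnormality level.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(50, 3))
true_w = np.array([2.0, -1.0, 0.5])   # assumed "true" coefficients
y = X @ true_w + 0.3                  # noiseless linear relation, bias 0.3

# Ordinary least squares with an intercept column appended to the features.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(coef, 3))  # recovers the three weights and the intercept
```

With real data, the recovered coefficients would be inspected to see which features drive the predicted abnormality level.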

    Pattern recognition in facial expressions: algorithms and applications

    Advisor: Hélio Pedrini. Doctoral dissertation - Universidade Estadual de Campinas, Instituto de Computação. Abstract: Emotion recognition has become a relevant research topic for the scientific community, since it plays an essential role in the continuous improvement of human-computer interaction systems. It can be applied in various areas, for instance, medicine, entertainment, surveillance, biometrics, education, social networks, and affective computing.
There are some open challenges related to the development of emotion systems based on facial expressions, such as data that reflect more spontaneous emotions and real scenarios. In this doctoral dissertation, we propose different methodologies for the development of emotion recognition systems based on facial expressions, as well as their applicability to other, similar problems. The first is an emotion recognition methodology for occluded facial expressions based on the Census Transform Histogram (CENTRIST). Occluded facial expressions are reconstructed using an algorithm based on Robust Principal Component Analysis (RPCA). Extraction of facial expression features is then performed by CENTRIST, as well as Local Binary Patterns (LBP), Local Gradient Coding (LGC), and an LGC extension. The generated feature space is reduced by applying Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). K-Nearest Neighbor (KNN) and Support Vector Machine (SVM) algorithms are used for classification. This method reached competitive accuracy rates for occluded and non-occluded facial expressions. The second proposes dynamic facial expression recognition based on Visual Rhythms (VR) and Motion History Images (MHI), such that a fusion of both encodes appearance, shape, and motion information of the video sequences. For feature extraction, the Weber Local Descriptor (WLD), CENTRIST, the Histogram of Oriented Gradients (HOG), and the Gray-Level Co-occurrence Matrix (GLCM) are employed. This approach offers a new direction for dynamic facial expression recognition, together with an analysis of the relevance of facial parts. The third is an effective method for audio-visual emotion recognition based on speech and facial expressions. The methodology involves a hybrid neural network to extract audio and visual features from videos. For audio extraction, a Convolutional Neural Network (CNN) based on the log Mel-spectrogram is used, whereas a CNN built on the Census Transform is employed for visual extraction. The audio and visual features are reduced by PCA and LDA, and classified through KNN, SVM, Logistic Regression (LR), and Gaussian Naïve Bayes (GNB). This approach achieves competitive recognition rates, especially on a spontaneous data set. The penultimate methodology investigates the problem of detecting Down syndrome from photographs; a geometric descriptor is proposed to extract facial features, and experiments performed on a public data set show the effectiveness of the developed methodology. The last methodology addresses recognizing genetic disorders in photographs, extracting facial attributes using deep features and anthropometric measurements; experiments conducted on a public data set achieve competitive recognition rates. Doctorate in Computer Science. Grant 140532/2019-6, CNPq; CAPES.
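The PCA-plus-nearest-neighbour stage that recurs in these pipelines can be sketched in a few lines (LDA and the SVM/LR/GNB classifiers are omitted, and all descriptor values below are synthetic stand-ins for CENTRIST/LBP-style features):

```python
import numpy as np

# Toy descriptors for two expression classes whose separation lies along
# the first feature axis only; real descriptors are far higher dimensional.
rng = np.random.default_rng(1)
class0 = np.hstack([rng.normal(0.0, 0.1, (20, 1)), rng.normal(0.0, 0.1, (20, 4))])
class1 = np.hstack([rng.normal(1.0, 0.1, (20, 1)), rng.normal(0.0, 0.1, (20, 4))])
X = np.vstack([class0, class1])
y = np.array([0] * 20 + [1] * 20)

# PCA via eigendecomposition of the covariance matrix, keeping 2 components.
mean = X.mean(axis=0)
Xc = X - mean
cov = Xc.T @ Xc / (len(X) - 1)
vals, vecs = np.linalg.eigh(cov)
components = vecs[:, np.argsort(vals)[::-1][:2]]
Z = Xc @ components

# 1-nearest-neighbour prediction for a query that resembles class 1.
query = np.array([0.95, 0.0, 0.0, 0.0, 0.0])
q = (query - mean) @ components
pred = y[np.argmin(np.linalg.norm(Z - q, axis=1))]
print(pred)
```

Because the class separation carries most of the variance, the leading principal components preserve it, and the nearest neighbour in the reduced space still recovers the correct class.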

    Data-Driven Classification Methods for Craniosynostosis Using 3D Surface Scans

    This work addresses radiation-free classification of craniosynostosis, with additional emphasis on data augmentation and on using synthetic data as a substitute for clinical data. Motivation: Craniosynostosis is a condition affecting infants that leads to head deformities. Diagnosis using radiation-free 3D surface scans is a promising alternative to traditional computed tomography imaging. Due to its low prevalence and the difficulty of anonymization, clinical data are scarce. This work addresses these challenges by proposing new classification algorithms, creating synthetic data for the research community, and showing that clinical data can be fully replaced by synthetic data without degrading classification performance. Methods: A statistical shape model (SSM) of craniosynostosis patients is created and made publicly available. A 3D-to-2D conversion from the 3D mesh geometry to a 2D image is proposed, enabling the use of convolutional neural networks (CNNs) and data augmentation in the image domain. Three classification approaches (based on cephalometric measurements, on the SSM, and on the 2D images with a CNN) for distinguishing between three pathologies and a control group are proposed and evaluated. Finally, the clinical training data are fully replaced by synthetic data from an SSM and a generative adversarial network (GAN). Results: The proposed CNN classification outperformed competing approaches on a clinical data set of 496 subjects, achieving an F1-score of 0.964. Data augmentation increased the F1-score to 0.975. Attributions of the classification decision showed high amplitudes at parts of the head associated with craniosynostosis. Replacing the clinical data with synthetic data created with an SSM and a GAN still yielded an F1-score above 0.95 without the model having seen a single clinical subject. Conclusion: The proposed conversion of 3D geometry into a 2D encoded image improved the performance of existing classifiers and enabled data augmentation during training. Using an SSM and a GAN, clinical training data could be replaced by synthetic data. This work improves existing diagnostic approaches based on radiation-free recordings and demonstrates the usability of synthetic data, which can make clinical applications more objective, more interpretable, and less costly.
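The exact 3D-to-2D conversion is not specified in this abstract; the sketch below shows one plausible variant, mapping head-surface vertices into a 2D image via spherical coordinates. All vertex data here are synthetic, and the mapping is an assumption for illustration only.

```python
import numpy as np

# Synthetic "head surface": points on a unit sphere, radially perturbed.
rng = np.random.default_rng(2)
pts = rng.normal(size=(2000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
radii = 1.0 + 0.1 * pts[:, 2]          # slightly elongated along z
xyz = pts * radii[:, None]

# Spherical coordinates of each vertex.
theta = np.arctan2(xyz[:, 1], xyz[:, 0])            # azimuth, [-pi, pi]
phi = np.arccos(np.clip(xyz[:, 2] / radii, -1, 1))  # polar angle, [0, pi]

# Rasterize into a 32x64 radius map; the pixel value is the surface radius,
# so head shape becomes a 2D image that a CNN (and standard image
# augmentation) can consume.
H, W = 32, 64
u = np.clip(((theta + np.pi) / (2 * np.pi) * W).astype(int), 0, W - 1)
v = np.clip((phi / np.pi * H).astype(int), 0, H - 1)
img = np.zeros((H, W))
img[v, u] = radii                       # last vertex per bin wins
print(img.shape)
```

The appeal of such an encoding is that the entire image-augmentation toolbox (shifts, flips, noise) becomes applicable to mesh data.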

    Artificial Intelligence in Oral Health

    This Special Issue is intended to lay the foundation for AI applications focusing on oral health, including general dentistry, periodontology, implantology, oral surgery, oral radiology, orthodontics, and prosthodontics, among others.

    The Role of Transient Vibration of the Skull on Concussion

    Concussion is a traumatic brain injury, usually caused by a direct or indirect blow to the head, that affects brain function. The maximum mechanical impedance of the brain tissue occurs at 450±50 Hz and may be affected by the skull's resonant frequencies. After an impact to the head, vibration resonance of the skull damages the underlying cortex: the skull deforms and vibrates, like a bell, for 3 to 5 milliseconds, bruising the cortex. Furthermore, the deceleration forces the frontal and temporal cortex against the skull, eliminating a layer of cerebrospinal fluid. When the skull vibrates, the force spreads directly to the cortex, with no layer of cerebrospinal fluid to reflect the wave or cushion its force. To date, little research has investigated the effect of transient vibration of the skull. Therefore, the overall goal of the proposed research is to gain a better understanding of the role of transient skull vibration in concussion. This goal will be achieved by addressing three research objectives. First, an automatic MRI skull and brain segmentation technique is developed. Due to bone's weak magnetic resonance signal, MRI scans struggle to differentiate bone tissue from other structures. One of the most important components of a successful segmentation is high-quality ground truth labels; therefore, we introduce a deep learning framework for skull segmentation in which the ground truth labels are created from CT imaging using the standard tessellation language (STL). Furthermore, since the brain region will be important for future work, we explore a new initialization concept for the convolutional neural network (CNN) based on orthogonal moments to improve brain segmentation in MRI. Second, a novel automatic 2D and 3D method to align the facial skeleton is introduced. An important aspect of further impact analysis is the ability to precisely simulate the same point of impact on multiple bone models.
To perform this task, the skull must be precisely aligned in all anatomical planes. Therefore, we introduce a 2D/3D technique to align the facial skeleton that was initially developed for automatically calculating the craniofacial symmetry midline. The 2D version introduced the concept of using cephalometric landmarks and manual image-grid alignment to construct the training dataset. This concept was then extended to a 3D version in which the coronal and transverse planes are aligned using a CNN approach. As alignment in the sagittal plane is still undefined, a new alignment based on these techniques will be created for the sagittal plane using the Frankfort plane as a framework. Finally, the resonant frequencies of multiple skulls are assessed to determine how the skull's resonant-frequency vibrations propagate into the brain tissue. After applying material properties and a mesh to the skull, modal analysis is performed to assess the skull's natural frequencies. Finally, theories will be proposed regarding the relation between skull geometry, such as shape and thickness, and vibration-induced brain tissue injury, which may result in concussive injury.
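The modal analysis step reduces to the generalized eigenproblem K v = w² M v for the meshed skull's stiffness matrix K and mass matrix M. A minimal sketch on a toy two-degree-of-freedom spring-mass system (the real analysis would use full finite-element matrices with realistic bone properties):

```python
import numpy as np

# Toy 2-DOF spring-mass stand-in for a finite-element skull model.
K = np.array([[2.0, -1.0],
              [-1.0, 1.0]])  # stiffness matrix
M = np.eye(2)                # mass matrix (identity for simplicity)

# With M = I the generalized eigenproblem K v = w^2 M v reduces to a
# standard symmetric eigenproblem; eigenvalues are squared angular
# frequencies, returned in ascending order.
w_squared = np.linalg.eigvalsh(K)
natural_freqs_hz = np.sqrt(w_squared) / (2 * np.pi)
print(np.round(w_squared, 3))
```

Each eigenvalue gives one natural frequency, and the corresponding eigenvector is the mode shape, i.e., the spatial vibration pattern the skull takes at that frequency.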

    Applications of Artificial Intelligence in Biomedical Sciences

    This dissertation presents the contribution of AI to the biomedical sciences, particularly in drug development, image analysis, healthcare, radiomics, and clinical trials. It outlines the general theoretical context behind the evolution of Artificial Intelligence, as well as its applications. The first chapter covers the history of Artificial Intelligence, starting with its definition. The second chapter reviews the literature, underlining some of the most important milestones in the creation of Artificial Intelligence. As AI has been conducive to the development of many fields, it has been characterized by many experts as the biggest innovation of the century. Thus, the third chapter presents the different machine learning methods used in those fields. The last chapter, chapter 4, discusses the findings of the thesis as well as some new ways in which Artificial Intelligence could be beneficial.

    Applying novel machine learning technology to optimize computer-aided detection and diagnosis of medical images

    The purpose of developing Computer-Aided Detection (CAD) schemes is to assist physicians (i.e., radiologists) in interpreting medical imaging findings more accurately and in reducing inter-reader variability. In developing CAD schemes, Machine Learning (ML) plays an essential role because it is widely used to identify effective image features from complex datasets and integrate them optimally with classifiers, with the aim of helping clinicians detect early disease more accurately, classify disease types, and predict disease treatment outcomes. In my dissertation, across several studies, I assess the feasibility of developing novel CAD systems in medical imaging for different purposes. The first study aims to develop and evaluate a new computer-aided diagnosis (CADx) scheme based on the analysis of global mammographic image features to predict the likelihood of a case being malignant. The CADx scheme is applied to pre-process mammograms, generate two image maps in the frequency domain using the discrete cosine transform and the fast Fourier transform, compute bilateral image feature differences between the left and right breasts, and apply a support vector machine (SVM) to predict the likelihood of the case being malignant. This study demonstrates the feasibility of developing a high-performing CADx scheme for mammograms based on global image feature analysis. This new CADx approach is more efficient to develop and potentially more robust in future applications because it avoids the difficulty and possible errors of breast lesion segmentation. In the second study, to automatically identify a set of effective mammographic image features and build an optimal breast cancer risk stratification model, I investigate the advantages of applying a machine learning approach embedded with a locality preserving projection (LPP) based feature combination and regeneration algorithm to predict short-term breast cancer risk.
To this end, a computer-aided image processing scheme is applied to segment the fibro-glandular tissue depicted on mammograms and initially compute 44 features related to the bilateral asymmetry of mammographic tissue density distribution between the left and right breasts. Next, an embedded LPP algorithm optimizes the feature space and regenerates a new operational vector with 4 features using a maximal variance approach. This study demonstrates that applying the LPP algorithm effectively reduces feature dimensionality and yields higher and potentially more robust performance in predicting short-term breast cancer risk. In the third study, to classify malignant lesions more precisely, I investigate the feasibility of applying a random projection algorithm to build an optimal feature vector from the large, initially CAD-generated feature pool and improve the performance of the machine learning model. In this process, a CAD scheme is first applied to segment mass regions and initially compute 181 features. An SVM model embedded with the feature dimensionality reduction method is then built to predict the likelihood of lesions being malignant. This study demonstrates that the random projection algorithm is a promising method for generating optimal feature vectors that improve the performance of machine learning models for medical images. The last study aims to develop and test a new CAD scheme for chest X-ray images to detect coronavirus (COVID-19) infected pneumonia. To this end, the CAD scheme first applies two image preprocessing steps: removing the majority of the diaphragm regions, then processing the original image with a histogram equalization algorithm and a bilateral low-pass filter. The original image and the two filtered images are then used to form a pseudo color image.
This image is fed into the three input channels of a transfer learning-based convolutional neural network (CNN) model to classify chest X-ray images into three classes: COVID-19 infected pneumonia, other community-acquired non-COVID-19 pneumonia, and normal (non-pneumonia) cases. This study demonstrates that adding the two image preprocessing steps and generating a pseudo color image plays an essential role in developing a deep learning CAD scheme for chest X-ray images that improves accuracy in detecting COVID-19 infected pneumonia. In summary, across these studies I developed and presented several image pre-processing algorithms, feature extraction methods, and data optimization techniques as innovative approaches to quantitative imaging markers based on machine learning systems. The simulations and results show the discriminative performance of the proposed CAD schemes in different application fields, which can help radiologists in their assessments when diagnosing disease and improve their overall performance.
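The random-projection step from the third study can be sketched in a few lines. The 181-feature dimension matches the abstract; the data and the target dimensionality are illustrative stand-ins:

```python
import numpy as np

# Stand-in feature pool: 100 lesions x 181 CAD-generated features.
rng = np.random.default_rng(3)
X = rng.normal(size=(100, 181))

# Gaussian random projection to k dimensions; the 1/sqrt(k) scaling keeps
# pairwise distances approximately preserved (Johnson-Lindenstrauss lemma).
k = 20
R = rng.normal(size=(181, k)) / np.sqrt(k)
Z = X @ R
print(Z.shape)  # (100, 20)
```

The reduced vectors Z would then feed the SVM in place of the raw 181-dimensional pool, trading a small distance distortion for a much smaller, less overfitting-prone feature space.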

    Craniofacial Growth Series Volume 56

    Full text link
    https://deepblue.lib.umich.edu/bitstream/2027.42/153991/1/56th volume CF growth series FINAL 02262020.pdf
    Description of 56th volume CF growth series FINAL 02262020.pdf: Proceedings of the 46th Annual Moyers Symposium and 44th Moyers Presymposium