
    Speech Signal and Facial Image Processing for Obstructive Sleep Apnea Assessment

    Obstructive sleep apnea (OSA) is a common sleep disorder characterized by recurring breathing pauses during sleep caused by a blockage of the upper airway (UA). OSA is generally diagnosed through a costly procedure requiring an overnight stay of the patient at the hospital. This has led to proposals for less costly procedures based on the analysis of patients' facial images and voice recordings to help in OSA detection and severity assessment. In this paper we investigate the use of both image and speech processing to estimate the apnea-hypopnea index, AHI (which describes the severity of the condition), over a population of 285 male Spanish subjects suspected of suffering from OSA and referred to a Sleep Disorders Unit. Photographs and voice recordings were collected in a supervised but not highly controlled way, in order to test a scenario close to an OSA assessment application running on a mobile device (i.e., a smartphone or tablet). Spectral information in speech utterances is modeled by a state-of-the-art low-dimensional acoustic representation called the i-vector. A set of local craniofacial features related to OSA is extracted from images after detecting facial landmarks using Active Appearance Models (AAMs). Support vector regression (SVR) is applied to the facial features and i-vectors to estimate the AHI. The activities in this paper were funded by the Spanish Ministry of Economy and Competitiveness and the European Union (FEDER) as part of the TEC2012-37585-C02 (CMC-V2) project. The authors also thank Sonia Martinez Diaz for her effort in collecting the OSA database used in this study.
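The regression step above (fixed-dimensional feature vectors mapped to an AHI estimate with SVR) can be sketched as follows; the data, feature dimensionality, and hyperparameters are illustrative stand-ins, not the paper's actual i-vectors or settings:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-ins: 285 subjects, 50-dim feature vectors (the real work
# uses i-vectors and craniofacial measurements; dimensions here are arbitrary).
rng = np.random.default_rng(0)
X = rng.normal(size=(285, 50))
w = rng.normal(size=50)
y = X @ w + rng.normal(scale=0.5, size=285)  # synthetic AHI-like target

# Standardize features, then fit an RBF-kernel support vector regressor.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X, y)
pred = model.predict(X[:5])  # AHI estimates for the first five subjects
```

In practice the facial features and acoustic i-vectors would be fused (e.g., concatenated or combined at the score level) and the SVR hyperparameters tuned by cross-validation.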

    Shape-appearance-correlated active appearance model

    © 2016 Elsevier Ltd. Among the challenges faced by current active shape and appearance models, facial-feature localization in the wild, with occlusion, in a novel face image (i.e., in a generic environment) is regarded as one of the most difficult computer-vision tasks. In this paper, we propose an Active Appearance Model (AAM) to tackle facial-feature localization in such generic environments. Firstly, a fast face-model initialization scheme is proposed, based on the idea that the local appearance of feature points can be accurately approximated with locality constraints: nearest neighbours with poses and textures similar to a test face are retrieved from a training set to construct the initial face model. To further improve the fitting of the initial model to the test face, an orthogonal CCA (oCCA) is employed to increase the correlation between shape features and appearance features, both represented by Principal Component Analysis (PCA). With these two contributions, we propose a novel AAM, the shape-appearance-correlated AAM (SAC-AAM), whose optimization is solved using the recently proposed fast simultaneous inverse compositional (Fast-SIC) algorithm. Experimental results demonstrate a 5–10% improvement in fitting accuracy on controlled and semi-controlled datasets, and around a 10% improvement on in-the-wild face datasets, compared to other state-of-the-art AAM models.

    Synergistic Visualization And Quantitative Analysis Of Volumetric Medical Images

    The medical diagnosis process starts with an interview with the patient and continues with a physical exam. In practice, the medical professional may require additional screenings to reach a precise diagnosis. Medical imaging is one of the most frequently used non-invasive screening methods for acquiring insight into the human body. Medical imaging is not only essential for accurate diagnosis; it can also enable early prevention. Medical data visualization refers to projecting medical data into a human-understandable format on mediums such as 2D or head-mounted displays, without itself performing any interpretation that may lead to clinical intervention. In contrast to medical visualization, quantification refers to extracting information from the medical scan to enable clinicians to make fast and accurate decisions. Despite the extraordinary progress in both medical visualization and quantitative radiology, efforts to improve these two complementary fields are often pursued independently, and their synergistic combination is under-studied. Existing image-based software platforms mostly fail to reach routine clinical use owing to the lack of a unified strategy that guides clinicians both visually and quantitatively. Hence, there is an urgent need for a bridge connecting medical visualization and automatic quantification algorithms in the same software platform. In this thesis, we aim to fill this research gap by visualizing medical images interactively from anywhere, and by performing fast, accurate, and fully automatic quantification of the medical imaging data. To this end, we propose several innovative and novel methods.
    Specifically, we solve the following sub-problems of the ultimate goal: (1) direct web-based out-of-core volume rendering, (2) robust, accurate, and efficient learning-based algorithms to segment highly pathological medical data, (3) automatic landmarking for aiding diagnosis and surgical planning, and (4) novel artificial intelligence algorithms to determine the sufficient and necessary data for solving large-scale problems.

    Fast head profile estimation using curvature, derivatives and deep learning methods

    Fast estimation of head profile and posture has applications across many disciplines; for example, it can be used in sleep apnoea screening and orthodontic examination, or could support a suitable physiotherapy regime. Consequently, this thesis focuses on the investigation of methods to estimate head profile and posture efficiently and accurately, and results in the development and evaluation of datasets, features, and deep learning models that can achieve this. Accordingly, this thesis initially investigated properties of contour curves that could act as effective features for training machine learning models. Features based on curvature and on the first and second Gaussian derivatives were evaluated. These outperformed established features from the literature when training a long short-term memory (LSTM) recurrent neural network, and produced a significant speedup in execution time where pre-filtering of a sampled dataset was required. Following on from this, a new dataset of head profile contours was generated and annotated with anthropometric craniofacial landmarks, and a novel method of automatically improving the accuracy of the landmark positions was developed using ideas based on the curvature of a plane curve. The features identified here were extracted from the new head profile contour dataset and used to train LSTM recurrent neural networks. The best network, using Gaussian derivative features, achieved an accuracy of 91% and a macro F1 score of 91%, improvements of 51% and 71% respectively over the unprocessed contour feature. When using Gaussian derivative features, the network was able to regress landmarks accurately, with mean absolute errors ranging from 0 to 5.3 pixels and standard deviations ranging from 0 to 6.9 pixels. End-to-end machine learning approaches, in which a deep neural network learns the best features to use from the raw input data, were also investigated.
    Such an approach, using a one-dimensional temporal convolutional network, was able to match the previous classifiers in accuracy and macro F1 score, and showed comparable regression ability. However, this came at the expense of increased training and inference times: the network was an order of magnitude slower when classifying and regressing contours.

    Development of an Atlas-Based Segmentation of Cranial Nerves Using Shape-Aware Discrete Deformable Models for Neurosurgical Planning and Simulation

    Twelve pairs of cranial nerves arise from the brain or brainstem and control sensory functions such as vision, hearing, smell, and taste, as well as several motor functions of the head and neck, including facial expressions and eye movement. These cranial nerves are often difficult to detect in MRI data because of their thin anatomical structure, low imaging resolution, and image artifacts, which poses problems for neurosurgical planning and simulation. As a result, they may be at risk in neurosurgical procedures around the skull base, with potentially dire consequences such as loss of eyesight or hearing and facial paralysis. Consequently, it is of great importance to clearly delineate cranial nerves in medical images, both for avoidance in the planning of neurosurgical procedures and for targeting in the treatment of cranial nerve disorders. In this research, we propose to develop a digital atlas methodology for segmenting the cranial nerves from patient image data. The atlas is created from high-resolution MRI data based on a discrete deformable contour model called the 1-Simplex mesh. Each cranial nerve is modeled using its centerline and radius information, where the centerline is estimated semi-automatically by finding a shortest path between two user-defined end points. The cranial nerve atlas is then made more robust by integrating a Statistical Shape Model, so that the atlas can identify and segment nerves in images characterized by artifacts or low resolution. To the best of our knowledge, no such digital atlas methodology exists for segmenting cranial nerves from MRI data; our proposed system therefore offers important benefits to the neurosurgical community.
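The semi-automatic centerline step, a shortest path between two user-defined end points over an image-derived cost, can be sketched with a toy 2D Dijkstra; the actual method operates on 3D MRI intensities with a more elaborate cost function:

```python
import heapq
import numpy as np

def shortest_path(cost, start, goal):
    """Dijkstra over a 2D cost grid; returns the path as (row, col) tuples.
    A toy stand-in for centerline extraction, where low cost marks the
    bright tubular structure the path should follow."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = cost[start]
    pq = [(cost[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Low cost along the middle row mimics a nerve-like tubular structure;
# the two endpoints play the role of the user-defined seed points.
grid = np.ones((5, 8))
grid[2, :] = 0.1
path = shortest_path(grid, (2, 0), (2, 7))
```

The recovered path hugs the low-cost row, just as the centerline should hug the nerve; the radius information would then be estimated around each centerline point.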

    Assessing the existence of visual clues of human ovulation

    Is concealed human ovulation a myth? The author of this work tries to answer this question using a purpose-built, medium-sized database of tagged facial images. Analyzing possible facial modifications over the menstrual cycle is a formal tool for assessing the veracity of concealed ovulation. Ordinarily, human ovulation remains concealed: there is no visible external sign of the fertile period in humans. Such external signs are clearly visible in many animals, such as baboons, dogs, or elephants. Some are visual (baboons) and others biochemical (dogs); insects use pheromones, and other animals may use sounds to inform partners of their fertile period. The objective is not only to study visual signs of female ovulation but also to understand and explain automatic image-processing methods that could be used to extract precise landmarks from facial pictures. These could later be applied to studies of fluctuating asymmetry, a growing field in evolutionary biology whose development is held back by the time needed to extract landmarks manually. In this work we have tried to see whether any perceptible sign is present in the human face during ovulation, and how we can detect formal changes, if any, in facial appearance over the menstrual cycle. We photographed 50 women daily for 32 days, taking several photos of each woman per day. We then selected a set of up to 30 photos per woman covering the whole cycle, from which 600 were chosen to be manually tagged for verification purposes. The photos were organized in rating software that let human raters view the pictures and choose the two best-looking ones for each woman. These results were then checked to highlight the relation between the chosen photos and the ovulation period in the cycle.
    The results indicated that there are indeed some cues in the human face that could eventually hint at ovulation. Different automatic landmark detection methods were then applied to the pictures to highlight possible modifications of the face during the cycle. Although the precision of the tested methods is far from perfect, comparing these measurements against state-of-the-art beauty indexes shows a slight shift towards a prettier face during ovulation. The automatic methods tested were the Active Appearance Model (AAM), deep neural networks, and regression trees; for this kind of application, regression trees proved best. Future work is needed to firmly confirm these findings: the number of human raters should be increased, and a proper training database should be developed to allow a learning process specific to this problem. We also think that low-level image processing will be necessary to achieve the final precision that could reveal more details of possible changes in human faces.
    Human ovulation is generally considered "concealed", that is, without external signs. Yet ovulation and the menstrual cycle constitute an extremely important hormonal change that repeats every cycle, and believing that this change leaves no visible sign seems simplistic. Such external signs are very visible in animals like baboons, dogs, or elephants: some are visual (baboons) and others biochemical (dogs); insects use pheromones, and other animals may use sounds to inform partners of their fertile period. Humans have come to hide, or at least camouflage, such signs during evolution. The reasons for this are not clear and are not discussed in this dissertation.
    In the first part of this work, the author, after creating and annotating a medium-sized database of facial images, verifies whether signs of ovulation can be detected by other people, that is, whether modifications that are a priori invisible can be perceived unconsciously by an observer. In the second part, the author analyzes possible facial modifications during the cycle in a formal way, using facial measurements obtained with automatic image-analysis methods. The image database had to be created from scratch, since none existed in the literature: 50 women agreed to participate, and each was photographed daily for 32 days, with several photos taken per session. The photos were then culled to at most 30 per woman, and 600 of them were manually annotated to form the verification set, against which automatically obtained measurements can be checked. The objective is not only to study visual signs of female ovulation but also to test and explain automatic image-processing methods for extracting landmarks from facial images. Automated landmark extraction could later be applied to studies of fluctuating asymmetry, a growing field in evolutionary biology whose development is hampered by the prohibitive time needed to extract landmarks manually. Moreover, fluctuating-asymmetry studies based on a single photograph may prove invalid if temporary facial modifications exist.
    Temporary modifications, such as those during the menstrual cycle, show that phenotypic studies based on a single photograph cannot constitute a viable basis for establishing genotype-phenotype links. To test whether a perceptible sign is present in the human face during ovulation, the photos were organized in presentation software that let a human observer choose the two most attractive photos of each woman. These results were then analyzed to highlight the relation between the chosen photos and the ovulation period in the cycle. The results suggested that there are indeed cues in the face that could give information about the ovulation period: observers chose as most attractive the photos taken in the days immediately before or after ovulation; that is, the same woman clearly looked more attractive close to her ovulation date. The software also collects data about the observers for later analysis of their behavior towards the photographs; such data may shed light on why concealed ovulation evolved. Next, different automatic landmark-detection methods were applied to the images to detect facial modifications during the cycle. The precision of the tested methods, although not perfect, allows some relations between the modifications and the attractiveness indexes to be observed. The methods tested were the Active Appearance Model (AAM), Convolutional Neural Networks (CNN), and regression trees (Dlib-Rt). AAM and CNN were implemented in Python using the Keras library; Dlib-Rt was implemented in C++ using OpenCV. All the methods used are learning-based and sacrifice precision.
    Comparing the results of the automatic methods with those obtained manually indicated that learning-based methods may lack the precision needed for studies of fluctuating asymmetry or of fine facial modifications. Despite this, regression trees were observed to be the best of the tested methods for this kind of application. The data and measurements obtained constitute a database of cycle dates, facial measurements, social data, and attractiveness data that can be used in later work. Future work must firmly confirm these findings: the number of human raters should be increased, and a proper training database should be developed to allow a learning process specific to this problem. Low-level image processing will also be necessary to achieve the final precision that could reveal fine details of changes in human faces. Mapping the data and measurements onto the attractiveness index and applying data-mining methods may reveal exactly which modifications are involved during the menstrual cycle. The author also envisions using a true-depth camera to obtain depth and volume data that could refine the studies. Skin pigmentation and texture data should also be considered, so as to capture every kind of facial modification during the cycle. The data should also separate out women using chemical contraception, since such methods can interfere with hormone levels and introduce errors of appreciation. Finally, the same study could be carried out on men: since men do not undergo these hormonal changes, the appearance of any repeatable facial modification would indicate the existence of camouflaged factors.
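The fluctuating-asymmetry measurements discussed above reduce to comparing left-side landmarks with their mirrored right-side counterparts. A minimal sketch, with a hypothetical four-landmark layout and a known vertical midline, is:

```python
import numpy as np

def asymmetry_score(pts, pairs, midline_x):
    """Mean distance between each left landmark and the mirror image of its
    right-side counterpart across the vertical midline; 0 for a perfectly
    symmetric face. Landmark layout and midline are illustrative assumptions."""
    diffs = []
    for li, ri in pairs:
        mirrored = np.array([2 * midline_x - pts[ri, 0], pts[ri, 1]])
        diffs.append(np.linalg.norm(pts[li] - mirrored))
    return float(np.mean(diffs))

# Symmetric toy face: eye corners and mouth corners around the midline x = 0.
pts = np.array([[-1.0, 1.0], [1.0, 1.0],     # eye corners
                [-0.6, -1.0], [0.6, -1.0]])  # mouth corners
pairs = [(0, 1), (2, 3)]
print(asymmetry_score(pts, pairs, 0.0))  # → 0.0
```

Tracking this score over the cycle is one formal way to quantify the temporary facial modifications the study looks for; in practice it would be fed with the automatically detected landmarks (AAM, CNN, or Dlib-Rt) rather than hand-placed points.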