9 research outputs found

    Butterfly Image Classification Using Color Quantization Method on HSV Color Space and Local Binary Pattern

    Many methods are used and developed in image research, and image-based detection is widely applied in fields such as health and agriculture to extract new information. Combining several methods is a common way to improve results, and such a combination is the contribution tested in this study, which merges color feature extraction with texture feature extraction. Color features are extracted in the HSV color space, yielding 72 features, and texture features are extracted with the local binary pattern (LBP), yielding 256 features; merging the two results produces a 328-dimensional feature vector. The combined color and texture features are then classified. The butterfly image classification achieves an accuracy of 72%, and performance testing gives precision, recall, and F-measure of 76%, 72%, and 74%, respectively.
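    As an illustration of the feature construction described above, the sketch below builds the 72-bin HSV color histogram and the 256-bin LBP histogram and concatenates them into the 328-dimensional vector. The 8x3x3 split of hue, saturation, and value and the use of scikit-image are assumptions; the abstract only states the feature totals.

```python
# A minimal sketch of the 72-bin HSV + 256-bin LBP descriptor described above.
# The 8x3x3 HSV bin split is an assumption; the paper only gives the totals.
import numpy as np
from skimage import color, feature


def hsv_color_histogram(rgb, h_bins=8, s_bins=3, v_bins=3):
    """Quantize the HSV color space into h_bins*s_bins*v_bins cells (72 here)."""
    hsv = color.rgb2hsv(rgb)  # all channels scaled to [0, 1]
    hist, _ = np.histogramdd(
        hsv.reshape(-1, 3),
        bins=(h_bins, s_bins, v_bins),
        range=((0, 1), (0, 1), (0, 1)),
    )
    return hist.ravel() / hist.sum()  # 72 normalized color features


def lbp_histogram(rgb):
    """Basic 8-neighbor LBP yields codes 0..255, i.e. a 256-bin histogram."""
    gray = color.rgb2gray(rgb)
    codes = feature.local_binary_pattern(gray, 8, 1, method="default")
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()  # 256 normalized texture features


def butterfly_features(rgb):
    """Concatenate color and texture descriptors into the 328-dim vector."""
    return np.concatenate([hsv_color_histogram(rgb), lbp_histogram(rgb)])
```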

    Down syndrome detection using modified adaboost algorithm

    In the human body, genetic codes are stored in genes. All inherited traits are associated with these genes, which are grouped into structures called chromosomes. Typically, each cell contains 23 pairs of chromosomes, with each parent contributing half. When a person carries an extra partial or full copy of chromosome 21, the condition is called Down syndrome. It results in intellectual disability, reading impairment, developmental delay, and other medical abnormalities. There is no specific treatment for Down syndrome, so early detection and screening are the most effective strategies available. In this work, Down syndrome is recognized from a set of facial expression images. A geometric descriptor is employed to extract facial features from the image set, an AdaBoost method is applied to assemble the required data sets and to perform the categorization, and the extracted information is then used to train a neural network with the backpropagation algorithm. The presented model meets the requirement with 98.67% accuracy.
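    The sketch below outlines the described pipeline with scikit-learn stand-ins: a geometric feature matrix (a placeholder here, since the descriptor and data set are not public) is classified with AdaBoost, and the same features also train a backpropagation network. It is illustrative only and does not reproduce the paper's modified AdaBoost.

```python
# A hedged sketch of the described pipeline using scikit-learn stand-ins:
# placeholder geometric features feed an AdaBoost classifier, and the same
# features train a backpropagation network (MLPClassifier). Illustrative only.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 30))            # hypothetical geometric descriptor values
y = rng.integers(0, 2, 200)          # 1 = Down syndrome, 0 = control (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# AdaBoost over shallow trees, standing in for the modified AdaBoost step.
ada = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Backpropagation-trained network on the same descriptor, as in the abstract.
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)

print("AdaBoost accuracy:", ada.score(X_te, y_te))
print("MLP accuracy:     ", mlp.score(X_te, y_te))
```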

    Using Evolutionary Feature Selection Methods in Classification of EEG Signals

    Electroencephalography (EEG) signals are obtained by measuring electrical activity in the brain. Classifying these signals is important because it contributes to the diagnosis and treatment of disorders related to brain signals. This study uses the University of Bonn data set, the most widely used data set for the diagnosis of epilepsy in this field. To obtain meaningful results from this data set, which consists of signals from five different subjects, data filtering, feature extraction, and feature selection methods were applied and then compared in terms of their contribution to classification success. First, statistical features were extracted from the filtered data with the Discrete Wavelet Transform; then the feature subset giving the best classification result was selected with the Differential Evolution Algorithm. The classification performance of the data set with the selected features was tested with Support Vector Machines. The method yielded better results than similar studies in the literature for separating some classes, reaching accuracies of 0.98 and 0.94 for some two-class and three-class sets, respectively. Keywords: Electroencephalography signal analysis, Differential Evolution Algorithm, Feature Extraction, Feature Selection
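    A minimal sketch of the pipeline described above follows: DWT statistical features and differential-evolution feature selection scored by a cross-validated SVM. The wavelet ('db4'), decomposition level, sub-band statistics, and the 0.5 inclusion threshold are assumptions; the abstract specifies only the general method.

```python
# A sketch of the described pipeline, assuming PyWavelets ('db4', level 4) for
# the DWT, SciPy's differential evolution as the selector, and a cross-validated
# SVM as the fitness function. Wavelet, level, and the 0.5 threshold are assumed.
import numpy as np
import pywt
from scipy.optimize import differential_evolution
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC


def dwt_features(signal, wavelet="db4", level=4):
    """Statistical features (mean, std, energy) of each DWT sub-band."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([f(c) for c in coeffs
                     for f in (np.mean, np.std, lambda x: np.sum(x ** 2))])


def select_features(X, y):
    """Pick the feature subset that maximizes SVM cross-validation accuracy."""
    def fitness(weights):
        mask = weights > 0.5                      # continuous genes -> binary mask
        if not mask.any():
            return 1.0                            # penalize empty subsets
        acc = cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=5).mean()
        return -acc                               # DE minimizes, so negate accuracy

    bounds = [(0.0, 1.0)] * X.shape[1]
    result = differential_evolution(fitness, bounds, maxiter=30, popsize=15,
                                    seed=0, polish=False)
    return result.x > 0.5                         # boolean mask of selected features
```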

    Pattern recognition in facial expressions: algorithms and applications

    Advisor: Hélio Pedrini. Doctoral dissertation in Computer Science, Universidade Estadual de Campinas, Instituto de Computação. Funding: CNPq grant 140532/2019-6; CAPES.
    Emotion recognition has become a relevant research topic within the scientific community, since it plays an essential role in the continuous improvement of human-computer interaction systems. It can be applied in various areas, for instance, medicine, entertainment, surveillance, biometrics, education, social networks, and affective computing. There are some open challenges related to the development of emotion systems based on facial expressions, such as data that reflect more spontaneous emotions and real scenarios. In this doctoral dissertation, we propose different methodologies for the development of emotion recognition systems based on facial expressions, as well as their applicability to other, similar problems. The first is an emotion recognition methodology for occluded facial expressions based on the Census Transform Histogram (CENTRIST). Occluded facial expressions are reconstructed using an algorithm based on Robust Principal Component Analysis (RPCA). Facial expression features are then extracted with CENTRIST, as well as with Local Binary Patterns (LBP), Local Gradient Coding (LGC), and an LGC extension. The generated feature space is reduced by applying Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). K-Nearest Neighbor (KNN) and Support Vector Machine (SVM) algorithms are used for classification. This method reached competitive accuracy rates for occluded and non-occluded facial expressions. The second proposes dynamic facial expression recognition based on Visual Rhythms (VR) and Motion History Images (MHI), such that a fusion of both encodes appearance, shape, and motion information of the video sequences. For feature extraction, the Weber Local Descriptor (WLD), CENTRIST, the Histogram of Oriented Gradients (HOG), and the Gray-Level Co-occurrence Matrix (GLCM) are employed. This approach shows a new direction for dynamic facial expression recognition and includes an analysis of the relevance of facial parts. The third is an effective method for audio-visual emotion recognition based on speech and facial expressions. The methodology involves a hybrid neural network to extract audio and visual features from videos. For audio extraction, a Convolutional Neural Network (CNN) based on the log Mel-spectrogram is used, whereas a CNN built on the Census Transform is employed for visual extraction. The audio and visual features are reduced by PCA and LDA, and classified with KNN, SVM, Logistic Regression (LR), and Gaussian Naïve Bayes (GNB). This approach achieves competitive recognition rates, especially on a spontaneous data set. The fourth investigates the problem of detecting Down syndrome from photographs; a geometric descriptor is proposed to extract facial features, and experiments performed on a public data set show the effectiveness of the developed methodology. The last methodology addresses the recognition of genetic disorders in photographs, extracting facial attributes from deep neural network features and anthropometric measurements. Experiments are conducted on a public data set, achieving competitive recognition rates.
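    Several of the methodologies above share a reduce-then-classify stage: descriptor histograms are projected with PCA and LDA and then classified with KNN or SVM. The sketch below illustrates that stage with scikit-learn on placeholder data; component counts, data, and labels are assumptions.

```python
# A minimal sketch of the reduce-then-classify stage shared by the methodologies
# above: descriptor histograms (placeholder data here) are projected with PCA,
# then LDA, and classified with KNN and SVM. Dimensions and labels are assumed.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((300, 256))          # placeholder descriptor histograms
y = rng.integers(0, 7, 300)         # seven basic emotion labels (assumed)

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf"))]:
    pipe = make_pipeline(PCA(n_components=50),
                         LinearDiscriminantAnalysis(),
                         clf)
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name}: mean cross-validated accuracy = {acc:.3f}")
```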

    Computer vision as an aid in the differential diagnosis of Cushing's syndrome

    Objective: Cushing's syndrome is a rare disease characterized by clinical features that show morphological similarity with the metabolic syndrome. Distinguishing these diseases is a challenge in clinical practice. We have previously shown that computer vision technology can be a potentially useful diagnostic tool in Cushing's syndrome. In this follow-up study, we addressed the described problem by increasing the sample size and including controls matched by age and body mass index. Methods: 82 patients (22 male, 60 female) and 98 control subjects (32 male, 66 female) matched by age, gender, and body mass index were included. The control group consisted of patients with initially suspected, but biochemically excluded, Cushing's syndrome. Standardized frontal and profile facial digital photographs were acquired. The images were analyzed using specialized computer vision and classification software. A grid of nodes was semi-automatically placed on disease-relevant facial structures for analysis of texture and geometry. Classification accuracy was calculated using a leave-one-out cross-validation procedure with a maximum likelihood classifier. Results: The overall correct classification rates were 10/22 (45.5%) for male patients and 26/32 (81.3%) for male controls, and 34/60 (56.7%) for female patients and 43/66 (65.2%) for female controls. In subgroup analyses, correct classification rates were higher for iatrogenic than for endogenous Cushing's syndrome. Conclusion: Regarding the more difficult problem of detecting Cushing's syndrome within a study sample matched by body mass index, we found moderate classification accuracy by facial image analysis. Classification accuracy is most likely higher in a sample with healthy control subjects. Further studies might pursue a more advanced analysis and classification algorithm.
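    The evaluation protocol described above (leave-one-out cross-validation around a maximum-likelihood classifier) can be sketched as follows, with scikit-learn's QuadraticDiscriminantAnalysis standing in for a Gaussian maximum-likelihood classifier and placeholder features in place of the study's grid-based texture and geometry measurements.

```python
# A sketch of the evaluation protocol described above: leave-one-out
# cross-validation around a Gaussian maximum-likelihood classifier
# (QuadraticDiscriminantAnalysis as a stand-in). Features and labels are
# placeholders; the study's grid-based measurements are not public.
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.random((180, 40))            # placeholder texture + geometry features
y = rng.integers(0, 2, 180)          # 1 = Cushing's syndrome, 0 = control

clf = QuadraticDiscriminantAnalysis()
pred = cross_val_predict(clf, X, y, cv=LeaveOneOut())

# Per-class correct classification rates, mirroring the reporting above.
for label, name in [(1, "patients"), (0, "controls")]:
    idx = y == label
    print(f"{name}: {np.sum(pred[idx] == label)}/{idx.sum()} correctly classified")
```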

    Optimization strategies for face classification in software-assisted detection of acromegaly
