
    An efficient emotion classification system using EEG

    Emotion classification via Electroencephalography (EEG) is used to find the relationships between EEG signals and human emotions. There are many available channels, which consist of electrodes capturing brainwave activity. Some applications may require a reduced number of channels and frequency bands to shorten the computation time, facilitate human comprehensibility, and enable a practical wearable device. In prior research, different sets of channels and frequency bands have been used. In this study, a systematic way of selecting the set of channels and frequency bands was investigated, and the results show that similar accuracies can be achieved with a reduced number of channels and frequency bands. The study also proposed a method for selecting appropriate features using the ReliefF method. The experimental results showed that the method could reduce and select appropriate features confidently and efficiently. Moreover, the Fuzzy Support Vector Machine (FSVM) is used to improve emotion classification accuracy, as this research found that it handled the outliers typically present in EEG signals better than the Support Vector Machine (SVM). Furthermore, the FSVM is treated as a black-box model, yet some applications may need to provide comprehensible human rules. Therefore, rules are extracted using the Classification and Regression Trees (CART) approach to give the system human comprehensibility. The FSVM and rule extraction experiments showed that the FSVM performed better than the SVM in classifying the emotion of interest used in the experiments, and that rule extraction from the FSVM using CART (FSVM-CART) offered a good trade-off between classification accuracy and human comprehensibility.
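
    As a rough illustration of the channel/band selection step described above (not the authors' exact pipeline), the sketch below scores features with a minimal Relief-style weighting based on nearest same-class and different-class neighbours, keeps the top-ranked features, and trains a standard SVM; the fuzzy SVM and CART rule-extraction stages are omitted, and all data shapes and variable names are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def relief_scores(X, y, n_iter=100, seed=None):
    """Minimal binary Relief: reward features that separate an instance's
    nearest hit (same class) from its nearest miss (different class)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        dist = np.abs(X - X[i]).sum(axis=1)
        dist[i] = np.inf                      # exclude the instance itself
        same = (y == y[i])
        same[i] = False
        hit = np.argmin(np.where(same, dist, np.inf))
        miss = np.argmin(np.where(~same, dist, np.inf))
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n_iter

# Illustrative data: rows = trials, columns = (channel, band) power features.
X = np.random.rand(200, 32 * 5)               # 32 channels x 5 frequency bands
y = np.random.randint(0, 2, 200)              # binary emotion label

scores = relief_scores(X, y, seed=0)
keep = np.argsort(scores)[-20:]               # keep the 20 highest-ranked features
clf = SVC(kernel="rbf").fit(X[:, keep], y)    # ordinary SVM stands in for the FSVM
```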

    Disconnection of network hubs and cognitive impairment after traumatic brain injury.

    Traumatic brain injury affects brain connectivity by producing traumatic axonal injury. This disrupts the function of large-scale networks that support cognition. The best way to describe this relationship is unclear, but one elegant approach is to view networks as graphs: brain regions become nodes in the graph, and white matter tracts become the connections. The overall effect of an injury can then be estimated by calculating graph metrics of network structure and function. Here we test which graph metrics best predict the presence of traumatic axonal injury, as well as which are most highly associated with cognitive impairment. A comprehensive range of graph metrics was calculated from structural connectivity measures for 52 patients with traumatic brain injury, 21 of whom had microbleed evidence of traumatic axonal injury, and 25 age-matched controls. White matter connections between 165 grey matter brain regions were defined using tractography, and structural connectivity matrices were calculated from skeletonized diffusion tensor imaging data. This technique estimates injury at the centre of tracts but is insensitive to damage at tract edges. Graph metrics were calculated from the resulting connectivity matrices, and machine-learning techniques were used to select the metrics that best predicted the presence of traumatic brain injury. In addition, we used regularization and variable selection via the elastic net to predict patient behaviour on tests of information processing speed, executive function and associative memory. Support vector machines trained with graph metrics of white matter connectivity matrices from the microbleed group were able to identify patients with a history of traumatic brain injury with 93.4% accuracy, a result robust to different ways of sampling the data. Graph metrics were significantly associated with cognitive performance: information processing speed (R^2 = 0.64), executive function (R^2 = 0.56) and associative memory (R^2 = 0.25). These results were then replicated in a separate group of patients without microbleeds. The most influential graph metrics were betweenness centrality and eigenvector centrality, which measure the extent to which a given brain region connects other regions in the network. Reductions in betweenness centrality and eigenvector centrality were particularly evident within hub regions, including the cingulate cortex and caudate. Our results demonstrate that betweenness centrality and eigenvector centrality are reduced within network hubs, due to the impact of traumatic axonal injury on network connections. The dominance of these two metrics suggests that cognitive impairment after traumatic brain injury results from the disconnection of network hubs by traumatic axonal injury.
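
    A simplified sketch of the graph-metric step, assuming networkx and scikit-learn: build a weighted graph from each structural connectivity matrix, compute betweenness and eigenvector centrality per node, and feed the node-wise features to a support vector machine. The random matrices, group sizes and labels are placeholders, and the elastic-net behavioural regressions are not reproduced.

```python
import numpy as np
import networkx as nx
from sklearn.svm import SVC

def centrality_features(conn):
    """Node-wise betweenness and eigenvector centrality for one weighted
    connectivity matrix (regions x regions)."""
    G = nx.from_numpy_array(conn)
    # Betweenness relies on shortest paths, so convert weights to distances.
    for u, v, d in G.edges(data=True):
        d["distance"] = 1.0 / d["weight"] if d["weight"] > 0 else np.inf
    btw = nx.betweenness_centrality(G, weight="distance")
    eig = nx.eigenvector_centrality_numpy(G, weight="weight")
    n = conn.shape[0]
    return np.array([btw[i] for i in range(n)] + [eig[i] for i in range(n)])

# Illustrative data: one symmetric 165x165 connectivity matrix per subject.
rng = np.random.default_rng(0)
def random_conn(n=165):
    a = rng.random((n, n))
    a = (a + a.T) / 2
    np.fill_diagonal(a, 0)
    return a

X = np.stack([centrality_features(random_conn()) for _ in range(20)])
y = rng.integers(0, 2, 20)                 # 1 = patient, 0 = control (illustrative)
clf = SVC(kernel="linear").fit(X, y)       # classify injury status from graph metrics
```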

    Prediction of protein-protein interaction types using association rule based classification

    This article has been made available through the Brunel Open Access Publishing Fund. Copyright @ 2009 Park et al. Background: Protein-protein interactions (PPI) can be classified according to their characteristics into, for example, obligate or transient interactions. The identification and characterization of these PPI types may help in the functional annotation of new protein complexes and in the prediction of protein interaction partners by knowledge-driven approaches. Results: This work addresses pattern discovery of the interaction sites for four different interaction types in order to characterize them, and uses them for the prediction of PPI types employing Association Rule Based Classification (ARBC), which includes association rule generation and posterior classification. We incorporated domain information from protein complexes in SCOP proteins and identified 354 domain-interaction sites. 14 interface properties were calculated from amino acid and secondary structure composition and then used to generate a set of association rules characterizing these domain-interaction sites, employing the APRIORI algorithm. Our results regarding the classification of PPI types based on a set of discovered association rules show that the discriminative ability of association rules can significantly improve the predictive power of classification models. We also showed that the accuracy of the classification can be improved through the use of structural domain information and of secondary structure content. Conclusion: The advantage of our approach is that we can extract biologically significant information from the interpretation of the discovered association rules, in terms of the understandability and interpretability of the rules. A web application based on our method can be found at http://bioinfo.ssu.ac.kr/~shpark/picasso/. SHP was supported by the Korea Research Foundation Grant funded by the Korean Government (KRF-2005-214-E00050). JAR has been supported by the Programme Alβan, the European Union Programme of High Level Scholarships for Latin America, scholarship E04D034854CL. SK was supported by the Soongsil University Research Fund.
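
    As a loose illustration of the association-rule generation step (not the authors' ARBC implementation), the sketch below discretizes hypothetical interface properties into boolean items, mines frequent itemsets with the APRIORI implementation in mlxtend, and keeps only rules whose consequent is an interaction-type label; the property names, thresholds and support/confidence settings are all assumptions.

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Illustrative interaction-site records: boolean "items" obtained by
# thresholding interface properties, plus the interaction-type label.
records = pd.DataFrame({
    "high_hydrophobicity":  [True, True, False, True, False, True],
    "large_interface_area": [True, True, False, False, True, True],
    "high_helix_content":   [False, True, True, False, True, True],
    "type_obligate":        [True, True, False, False, False, True],
    "type_transient":       [False, False, True, True, True, False],
})

# Frequent itemsets over the boolean items, then rules filtered by confidence.
itemsets = apriori(records, min_support=0.3, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.8)

# Keep only class-association rules, i.e. rules predicting an interaction type.
class_rules = rules[rules["consequents"].apply(
    lambda c: any(str(item).startswith("type_") for item in c))]
print(class_rules[["antecedents", "consequents", "support", "confidence"]])
```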

    Machine Learning in Automated Text Categorization

    The automated categorization (or classification) of texts into predefined categories has witnessed a booming interest in the last ten years, due to the increased availability of documents in digital form and the ensuing need to organize them. In the research community the dominant approach to this problem is based on machine learning techniques: a general inductive process automatically builds a classifier by learning, from a set of preclassified documents, the characteristics of the categories. The advantages of this approach over the knowledge engineering approach (consisting of the manual definition of a classifier by domain experts) are very good effectiveness, considerable savings in terms of expert manpower, and straightforward portability to different domains. This survey discusses the main approaches to text categorization that fall within the machine learning paradigm. We will discuss in detail issues pertaining to three different problems, namely document representation, classifier construction, and classifier evaluation. Comment: Accepted for publication in ACM Computing Surveys.
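
    A minimal sketch of the inductive approach the survey describes, covering its three problems in order: document representation (TF-IDF vectors), classifier construction (a linear SVM learned from preclassified documents), and evaluation (macro-averaged F1 on held-out documents). The tiny corpus, category names and hyper-parameters are placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Placeholder corpus of preclassified documents (text, category).
docs = ["stock markets fell sharply", "the team won the final match",
        "central bank raises interest rates", "injury rules striker out of cup"] * 25
labels = ["finance", "sport", "finance", "sport"] * 25

X_train, X_test, y_train, y_test = train_test_split(
    docs, labels, test_size=0.25, random_state=0)

# Document representation: TF-IDF weighted bag of words.
vec = TfidfVectorizer(sublinear_tf=True, stop_words="english")
Xtr = vec.fit_transform(X_train)
Xte = vec.transform(X_test)

# Classifier construction: inductively learned from the training documents.
clf = LinearSVC().fit(Xtr, y_train)

# Classifier evaluation: macro-averaged F1 on the held-out documents.
print(f1_score(y_test, clf.predict(Xte), average="macro"))
```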

    Evaluation of dimensionality reduction methods applied to numerical weather models for solar radiation forecasting

    The interest in solar radiation prediction has increased greatly in recent times among the scientific community. In this context, Machine Learning techniques have shown their ability to learn accurate prediction models. The aim of this paper is to go one step further and automatically achieve interpretability during the learning process by performing dimensionality reduction on the input variables. To this end, three non-standard multivariate feature selection approaches are applied, based on the adaptation of strong learning algorithms to the feature selection task, as well as a battery of classic dimensionality reduction models. The goal is to obtain robust sets of features that not only improve prediction accuracy but also provide more interpretable and consistent results. Real data from the Weather Research and Forecasting model, which produces a very large number of variables, is used as the input. As is to be expected, the results prove that dimensionality reduction in general is a useful tool for improving performance, as well as for easing the interpretability of the results. In fact, the proposed non-standard methods offer important accuracy improvements, and one of them provides an intuitive and reduced selection of features and mesoscale nodes (around 10% of the initial variables, centered on three specific nodes). This work has been partially supported by the projects TIN2014-54583-C2-2-R, TEC2014-52289-R and TEC2016-81900-REDT of the Spanish Interministerial Commission of Science and Technology (MICYT), and by the Comunidad Autónoma de Madrid under project PRICAM P2013ICE-2933.
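
    As a rough sketch of the kind of comparison described above (not the paper's specific methods or data), the snippet below contrasts a classic projection method (PCA) with an embedded multivariate selector (Lasso, standing in for "strong learner adapted to feature selection") on synthetic regression data that plays the role of the NWP grid variables, then compares downstream accuracy with the same learner. All sizes and names are illustrative.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.decomposition import PCA
from sklearn.linear_model import LassoCV
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for NWP output: many correlated input variables,
# one target (solar radiation at the site of interest).
X, y = make_regression(n_samples=300, n_features=200, n_informative=20,
                       noise=5.0, random_state=0)

# Classic dimensionality reduction: project onto the leading components.
X_pca = PCA(n_components=20, random_state=0).fit_transform(X)

# Embedded multivariate selection: keep variables with non-zero Lasso weights.
lasso = LassoCV(cv=5, random_state=0).fit(X, y)
X_sel = X[:, np.flatnonzero(lasso.coef_)]

# Compare forecasting accuracy of the reduced inputs with the same learner.
model = RandomForestRegressor(n_estimators=100, random_state=0)
for name, Z in [("all variables", X), ("PCA", X_pca), ("Lasso-selected", X_sel)]:
    r2 = cross_val_score(model, Z, y, cv=5, scoring="r2").mean()
    print(f"{name}: {Z.shape[1]} features, CV R^2 = {r2:.2f}")
```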