
    Accountable, Explainable Artificial Intelligence Incorporation Framework for a Real-Time Affective State Assessment Module

    The rapid growth of artificial intelligence (AI) and machine learning (ML) solutions has seen them adopted across various industries. However, concern over ‘black-box’ approaches has led to increasing demand for high accuracy, transparency, accountability, and explainability in AI/ML systems. This work contributes an accountable, explainable AI (AXAI) framework for delineating and assessing AI systems. The framework has been incorporated into the development of a real-time, multimodal affective state assessment system.

    Parameterization and R-Peak Error Estimations of ECG Signals Using Independent Component Analysis

    Principal component analysis (PCA) is used to reduce the dimensionality of electrocardiogram (ECG) data prior to performing independent component analysis (ICA). A PCA variance estimator newly developed by the author has been applied to detect the true, actual, and false peaks of ECG data files. This paper also examines the ability of ICA to parameterize ECG signals, which is necessary at times. Independent components (ICs) of properly parameterized ECG signals are more readily interpretable than the measurements themselves, or their ICs. The original ECG recordings and the samples are corrected by statistical measures to estimate the noise statistics of ECG signals and find the reconstruction errors. The capability of ICA is demonstrated by finding the true, false, and actual peaks of around 25–50 ECG files from the CSE (Common Standards for Electrocardiography) database. In the present work, the joint approximate diagonalization of eigen-matrices (JADE) algorithm is applied to 3-channel ECG. ICA processing of different cases is dealt with, and the R-peak magnitudes of the ECG waveforms before and after applying ICA are found and marked. The ICA results indicate that in most cases the percentage reconstruction error is very small. The developed PCA variance estimator, together with the quadratic spline wavelet, gave a sensitivity of 97.47% before applying ICA and 98.07% after ICA processing.
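
    The PCA-then-ICA pipeline described above can be sketched in a few lines of numpy. This is a toy illustration on synthetic stand-ins for ECG sources, not CSE data, and it uses a symmetric FastICA iteration in place of the paper's JADE algorithm; the mixing matrix and source shapes are assumptions for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 4, 2000)

# Synthetic stand-ins for ECG content: a periodic spiky "R-peak-like"
# train and a sinusoidal interference component.
s1 = np.exp(-((t % 1.0 - 0.5) ** 2) / 0.001)
s2 = np.sin(2 * np.pi * 3.3 * t)
S = np.c_[s1, s2]

A = np.array([[1.0, 0.6],
              [0.4, 1.0],
              [0.8, -0.5]])        # hypothetical 3-channel mixing matrix
X = S @ A.T                        # observed "3-channel ECG"

# PCA step: centre the data and whiten with the top-2 eigenvectors
Xc = X - X.mean(axis=0)
evals, evecs = np.linalg.eigh(Xc.T @ Xc / len(Xc))
top = np.argsort(evals)[::-1][:2]
E, D = evecs[:, top], evals[top]
Z = Xc @ (E / np.sqrt(D))          # whitened, dimension-reduced data

# ICA step: FastICA fixed-point iteration with a tanh contrast and
# symmetric decorrelation (standing in for JADE)
W = rng.standard_normal((2, 2))
for _ in range(200):
    G = np.tanh(Z @ W.T)
    W_new = (G.T @ Z) / len(Z) - np.diag((1 - G ** 2).mean(axis=0)) @ W
    u, _, vt = np.linalg.svd(W_new)
    W = u @ vt                     # enforce W W^T = I

Y = Z @ W.T                                        # recovered sources
X_rec = Z @ (E * np.sqrt(D)).T + X.mean(axis=0)    # PCA reconstruction
rec_err = np.linalg.norm(X - X_rec) / np.linalg.norm(X)
```

    Because the synthetic data are exactly rank-2, the PCA reconstruction error here is essentially zero; on real multichannel ECG the retained variance and the reported percentage error would depend on how many components are kept.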

    Structured Dropout for Weak Label and Multi-Instance Learning and Its Application to Score-Informed Source Separation

    Many success stories involving deep neural networks are instances of supervised learning, where available labels power gradient-based learning methods. Creating such labels, however, can be expensive, and thus there is increasing interest in weak labels, which only provide coarse information, with uncertainty regarding time, location, or value. Using such labels often leads to considerable challenges for the learning process. Current methods for weak-label training often employ standard supervised approaches that additionally reassign or prune labels during the learning process. The information gain, however, is often limited, as only the importance of labels for which the network already yields reasonable results is boosted. We propose treating weak-label training as an unsupervised problem and use the labels to guide the representation learning to induce structure. To this end, we propose two autoencoder extensions: class activity penalties and structured dropout. We demonstrate the capabilities of our approach in the context of score-informed source separation of music.
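
    A minimal numpy sketch of the structured-dropout idea: hidden units are partitioned into per-class groups, and units belonging to classes absent from a sample's weak label are zeroed, while units of active classes receive ordinary dropout. The group size and the hard zeroing rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

n_classes, units_per_class = 3, 4
hidden_dim = n_classes * units_per_class   # hidden units partitioned by class

def structured_dropout_mask(weak_labels, p_drop=0.5):
    """Per-sample mask over hidden units.

    Units of classes absent from the weak label are always zeroed;
    units of active classes get Bernoulli dropout with rate p_drop.
    """
    n = weak_labels.shape[0]
    # expand the (n, n_classes) label matrix to (n, hidden_dim)
    class_mask = np.repeat(weak_labels, units_per_class, axis=1)
    bernoulli = rng.random((n, hidden_dim)) >= p_drop
    return class_mask * bernoulli

# weak labels: which classes are active in each sample (e.g. which
# instruments a score says are playing somewhere in the excerpt)
weak_labels = np.array([[1, 0, 0],
                        [1, 1, 0],
                        [0, 0, 1]])
mask = structured_dropout_mask(weak_labels, p_drop=0.0)  # structure only
```

    During training the mask would multiply the autoencoder's hidden activations, so the reconstruction of each sample can only use the unit groups its weak label permits, which induces the class-wise structure used for source separation.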

    A Novel Semi-Supervised Methodology for Extracting Tumor Type-Specific MRS Sources in Human Brain Data

    Background: The clinical investigation of human brain tumors often starts with a non-invasive imaging study, providing information about the tumor extent and location, but little insight into the biochemistry of the analyzed tissue. Magnetic resonance spectroscopy can complement imaging by supplying a metabolic fingerprint of the tissue. This study analyzes single-voxel magnetic resonance spectra, which represent signal information in the frequency domain. Given that a single voxel may contain a heterogeneous mix of tissues, signal source identification is a relevant challenge for the problem of tumor type classification from the spectroscopic signal.
    Methodology/Principal Findings: Non-negative matrix factorization techniques have recently shown their potential for the identification of meaningful sources from brain tissue spectroscopy data. In this study, we use a convex variant of these methods that is capable of handling negatively-valued data and generating sources that can be interpreted as tumor class prototypes. A novel approach to convex non-negative matrix factorization is proposed, in which prior knowledge about class information is utilized in model optimization. Class-specific information is integrated into this semi-supervised process by setting the metric of a latent variable space where the matrix factorization is carried out. The reported experimental study comprises 196 cases of different tumor types drawn from two international, multi-center databases. The results indicate that the proposed approach outperforms a purely unsupervised process by achieving near-perfect correlation of the extracted sources with the mean spectra of the tumor types. It also improves tissue type classification.
    Conclusions/Significance: We show that source extraction by unsupervised matrix factorization benefits from the integration of the available class information, operating in a semi-supervised learning manner, for discriminative source identification and brain tumor labeling from single-voxel spectroscopy data. We are confident that the proposed methodology has wider applicability for biomedical signal processing.
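
    For intuition about the matrix-factorization core, here is a numpy sketch of the classical Lee–Seung multiplicative updates for standard NMF on synthetic nonnegative "spectra". This is not the paper's method: the convex variant that handles mixed-sign data and the class-informed latent metric are beyond this sketch, and the prototypes and mixing weights are fabricated stand-ins for tumor-class mean spectra.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic nonnegative "spectra": 60 noisy mixtures of two ground-truth
# prototypes (stand-ins for tumor-class mean spectra; not MRS data).
d, n, k = 40, 60, 2
P = np.abs(rng.standard_normal((d, k)))            # true source prototypes
C = rng.dirichlet(np.ones(k), size=n).T            # per-spectrum mixing weights
V = P @ C + 0.01 * np.abs(rng.standard_normal((d, n)))

# Classical Lee-Seung multiplicative updates for V ~ W H, W, H >= 0
W = np.abs(rng.standard_normal((d, k)))
H = np.abs(rng.standard_normal((k, n)))
eps = 1e-9                                         # guards against division by zero
err0 = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
for _ in range(300):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

    The multiplicative form keeps both factors nonnegative at every step, which is what makes the extracted columns of W interpretable as additive source prototypes; the convex variant constrains the sources further to be weighted combinations of actual data points.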

    Studying the validity of the ABCDisCo method of filtering QCD Instanton events from multiple perturbative background theories

    Yang-Mills theories within the Standard Model of particle physics predict topologically non-trivial objects which describe the tunnelling between classically degenerate vacuum states in Minkowski spacetime. Though predicted theoretically as long ago as the 1960s, no experimental evidence for these objects has yet been found. One example of such objects – known as “QCD (quantum chromodynamic) instantons” – should be frequently created at the Large Hadron Collider, but they have yet to be identified in the existing data, as they are eclipsed by large numbers of background events. This work aimed to deploy the ABCDisCo [1] method of filtering instanton events from background events on simulated and real experimental data samples. It also appraises the extent to which the performance of the machine learning algorithms (neural networks) employed by this method is insensitive to changes in the expected theoretical behavior of the perturbative QCD backgrounds.
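
    The data-driven step at the heart of ABCD-style methods is simple arithmetic: if two discriminants are statistically independent for background (the role the DisCo decorrelation plays for the learned classifiers), the background count in the signal region A can be predicted from the three control regions as N_A ≈ N_B · N_C / N_D. A toy numpy illustration with independent background variables; the thresholds and sample size are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two background discriminants, independent by construction
x = rng.random(200_000)
y = rng.random(200_000)
cx, cy = 0.8, 0.7                 # cuts defining the signal region A

A = np.sum((x > cx) & (y > cy))   # signal region (to be predicted)
B = np.sum((x > cx) & (y <= cy))  # control regions
C = np.sum((x <= cx) & (y > cy))
D = np.sum((x <= cx) & (y <= cy))

# ABCD estimate: valid only if x and y are independent for background,
# which is exactly what decorrelated classifiers are trained to ensure
predicted_A = B * C / D
```

    The "insensitivity" question the work appraises corresponds to whether this independence, and hence the estimate, survives when the assumed perturbative QCD background model is varied.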

    A Survey of the Project “Unsupervised Adaptive P300 BCI in the Framework of Chaotic Theory and Stochastic Theory”

    In this paper we present a survey of the work done in the project “Unsupervised Adaptive P300 BCI in the framework of chaotic theory and stochastic theory”. We summarise the following papers: (Mohammed J. Alhaddad, 2011), (Mohammed J. Alhaddad & Kamel M, 2012), (Mohammed J. Alhaddad, Kamel, & Al-Otaibi, 2013), (Mohammed J. Alhaddad, Kamel, & Bakheet, 2013), (Mohammed J. Alhaddad, Kamel, & Al-Otaibi, 2014), (Mohammed J. Alhaddad, Kamel, & Bakheet, 2014), (Mohammed J. Alhaddad, Kamel, & Kadah, 2014), (Mohammed J. Alhaddad, Kamel, Makary, Hargas, & Kadah, 2014), (Mohammed J. Alhaddad, Mohammed, Kamel, & Hagras, 2015). We developed a new pre-processing method for denoising P300-based brain-computer interface data that allows better performance with a lower number of channels and blocks. The new denoising technique is based on a modified version of spectral subtraction denoising and works on each temporal signal channel independently, thus offering seamless integration with existing pre-processing and allowing low channel counts to be used. We also developed a novel approach for brain-computer interface data that requires no prior training. The proposed approach is based on an interval type-2 fuzzy logic classifier which is able to handle the users’ uncertainties to produce better prediction accuracies than other competing classifiers such as BLDA or RFLDA. In addition, the generated type-2 fuzzy classifier is learnt from data via genetic algorithms to produce a small number of rules, each with only one antecedent, to maximize transparency and interpretability for the clinician. We also employ a feature selection system based on ensemble neural-network recursive feature selection, which is able to find the effective time instances within the effective sensors in relation to a given P300 event.
    The basic principle of this new class of techniques is that the trial with the true activation signal within each block has to be different from the rest of the trials within that block. Hence, a measure that is sensitive to this dissimilarity can be used to make a decision based on a single block without any prior training. The new methods were verified in various experiments performed on standard data sets and on real data sets obtained from subject experiments performed in the BCI lab at King Abdulaziz University. The results were compared to the classification results of the same data using previous methods. Enhanced performance in the different experiments, quantitatively assessed using classification block accuracy as well as bit rate estimates, was confirmed. The produced type-2 fuzzy logic classifier learns simple rules which are easy to understand, explaining the events in question, and gives better accuracies than BLDA or RFLDA on various human subjects on the standard and real-world data sets.
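
    The per-channel spectral-subtraction step described above can be sketched as: estimate a noise magnitude spectrum from a noise-only reference, subtract it from the noisy magnitude spectrum, and resynthesize with the original phase. A minimal numpy sketch on a synthetic P300-like bump; the signal shape, noise level, and half-wave rectification are illustrative assumptions, not the project's exact modified algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
fs, n = 256, 1024
t = np.arange(n) / fs

clean = np.exp(-((t - 2.0) ** 2) / 0.01)   # P300-like bump (illustrative)
noisy = clean + 0.5 * rng.standard_normal(n)

def spectral_subtraction(sig, noise_ref):
    """Subtract an estimated noise magnitude spectrum, keep the noisy phase."""
    S = np.fft.rfft(sig)
    noise_mag = np.abs(np.fft.rfft(noise_ref))     # noise estimate from a reference
    mag = np.maximum(np.abs(S) - noise_mag, 0.0)   # half-wave rectified subtraction
    return np.fft.irfft(mag * np.exp(1j * np.angle(S)), n=len(sig))

noise_ref = 0.5 * rng.standard_normal(n)   # separate noise-only recording
denoised = spectral_subtraction(noisy, noise_ref)
```

    Because the subtraction operates on each channel's spectrum independently, it slots in before any existing spatial filtering or classification stage, which is the "seamless integration" property noted above.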