
    Diffuse pattern learning with Fuzzy ARTMAP and PASS

    Fuzzy ARTMAP is compared to a classifier system (CS) called PASS (predictive adaptive sequential system). Previously reported results on a benchmark classification task suggest that Fuzzy ARTMAP systems perform better and are more parsimonious than systems based on the CS architecture. The tasks considered here differ from ordinary classification tasks in the amount of output uncertainty associated with input categories. To be successful, learning systems must identify not only the correct input categories but also the most likely outputs for those categories. Performance under various types of diffuse patterns is investigated using a simulated scenario.
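    For readers unfamiliar with the first system in this comparison, the following is a minimal sketch of the Fuzzy ART dynamics that underlie Fuzzy ARTMAP: complement coding, the category choice function, the vigilance test, and fast learning. The parameter values and random inputs are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal sketch of the Fuzzy ART dynamics underlying Fuzzy ARTMAP.
# Simplification: on a vigilance failure a full implementation resets the
# winner and searches the remaining categories; here we just add a node.
import numpy as np

def complement_code(x):
    """Complement-code an input in [0, 1]^d to (x, 1 - x)."""
    return np.concatenate([x, 1.0 - x])

def fuzzy_art_step(I, weights, alpha=0.001, rho=0.75, beta=1.0):
    """Present one coded input; return the index of the chosen category."""
    if weights:
        # Choice function T_j = |I ^ w_j| / (alpha + |w_j|), where ^ = min.
        T = [np.minimum(I, w).sum() / (alpha + w.sum()) for w in weights]
        j = int(np.argmax(T))
        # Vigilance test: the match ratio must reach rho.
        if np.minimum(I, weights[j]).sum() / I.sum() >= rho:
            weights[j] = beta * np.minimum(I, weights[j]) + (1 - beta) * weights[j]
            return j
    weights.append(I.copy())  # recruit an uncommitted node: w_new = I
    return len(weights) - 1

weights = []
for x in np.random.default_rng(0).random((20, 2)):
    fuzzy_art_step(complement_code(x), weights)
print(f"{len(weights)} categories formed")
```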

    Cancer diagnosis using deep learning: A bibliographic review

    In this paper, we first describe the basics of the field of cancer diagnosis, covering the steps of cancer diagnosis followed by the typical classification methods used by doctors, to give readers a historical view of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although they are not considered very efficient for obtaining better performance. Moreover, with all types of readers in mind, the basic evaluation criteria are also discussed. The criteria include the receiver operating characteristic curve (ROC curve), area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Previously used methods are considered inefficient, calling for better and smarter methods of cancer diagnosis. Artificial intelligence for cancer diagnosis is gaining attention as a way to define better diagnostic tools. In particular, deep neural networks can be successfully used for intelligent image analysis. The basic framework of how such machine learning works on medical imaging is provided in this study, i.e., pre-processing, image segmentation, and post-processing. The second part of this manuscript describes the different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DANs), restricted Boltzmann machines (RBM), stacked autoencoders (SAE), convolutional autoencoders (CAE), recurrent neural networks (RNNs), long short-term memory (LSTM), the multi-scale convolutional neural network (M-CNN), and the multi-instance learning convolutional neural network (MIL-CNN). For each technique, we provide Python code, to allow interested readers to experiment with the cited algorithms on their own diagnostic problems. The third part of this manuscript compiles the successfully applied deep learning models for different types of cancers. Considering the length of the manuscript, we restrict ourselves to the discussion of breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to provide researchers who opt to implement deep learning and artificial neural networks for cancer diagnosis with from-scratch knowledge of the state-of-the-art achievements.
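    To give a concrete flavour of the CNN family discussed above, here is a minimal Keras sketch of a binary classifier of the kind surveyed (e.g., benign vs. malignant image patches). The input shape, layer sizes, and optimizer are illustrative assumptions, not taken from any specific reviewed paper; the AUC metric ties back to the evaluation criteria listed above.

```python
# A minimal CNN for binary medical-image classification, illustrating the
# pipeline the review describes: pre-processed patch in, decision out.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(128, 128, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),                       # regularization
    layers.Dense(1, activation="sigmoid"),     # benign vs. malignant
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
model.summary()
```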

    Automated Interpretation of the Background EEG Using Fuzzy Logic

    A new framework is described for managing uncertainty and for dealing with artefact corruption to introduce objectivity into the interpretation of the electroencephalogram (EEG). Conventionally, EEG interpretation is time consuming and subjective, and is known to show significant inter- and intra-personnel variation. A need thus exists to automate the interpretation of the EEG to provide a more consistent and efficient assessment. However, automated analysis of EEGs by computers is complicated by two major factors: the difficulty of adequately capturing, in machine form, the skills and subjective expertise of the experienced electroencephalographer, and the lack of a reliable means of dealing with the range of EEG artefacts (signal contamination). In this thesis, a new framework is described which introduces objectivity into two important outcomes of clinical evaluation of the EEG, namely the clinical factual report and the clinical 'conclusion', by capturing the subjective expertise of the electroencephalographer and dealing with the problem of artefact corruption. The framework is separated into two stages to assist piecewise optimisation and to cater for different requirements. The first stage, 'quantitative analysis', relies on novel digital signal processing algorithms and cluster analysis techniques to reduce data and to identify and describe background activities in the EEG. To deal with artefact corruption, an artefact removal strategy, based on new reliable techniques for artefact identification, is used to ensure that only artefact-free activities are used in the analysis. The outcome is a quantitative analysis which efficiently describes the background activity in the record and can support future clinical investigations in neurophysiology. In clinical practice, many of the EEG features are described by clinicians in natural language terms, such as 'very high', 'extremely irregular', 'somewhat abnormal', etc. The second stage of the framework, 'qualitative analysis', captures the subjectivity and linguistic uncertainty expressed by the clinical experts, using novel intelligent models based on fuzzy logic, to provide an analysis closely comparable to the clinical interpretation made in practice. The outcome of this stage is an EEG report with qualitative descriptions to complement the quantitative analysis. The system was evaluated using EEG records from 1 patient with Alzheimer's disease and 2 age-matched normal controls for the factual report, and 3 patients with Alzheimer's disease and 7 age-matched normal controls for the 'conclusion'. Good agreement was found between factual reports produced by the system and factual reports produced by qualified clinicians. Further, the 'conclusion' produced by the system achieved 100% discrimination between the two subject groups. After a thorough evaluation, the system should significantly aid the process of EEG interpretation and diagnosis.
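    A minimal sketch of the kind of fuzzy mapping the 'qualitative analysis' stage performs: a quantitative EEG feature is graded by overlapping linguistic terms via membership functions. The chosen feature (dominant alpha frequency) and the breakpoints are illustrative assumptions, not the thesis's actual rule base.

```python
# Mapping a quantitative EEG feature to linguistic terms with fuzzy
# membership functions, as in the qualitative-analysis stage.
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def describe_alpha(freq_hz):
    """Grade a dominant alpha frequency (Hz) by illustrative terms."""
    terms = {
        "low":    trimf(freq_hz, 6.0, 8.0, 9.5),
        "normal": trimf(freq_hz, 8.5, 10.0, 11.5),
        "high":   trimf(freq_hz, 10.5, 12.0, 14.0),
    }
    return max(terms, key=terms.get), terms

label, memberships = describe_alpha(9.2)
print(label, {k: round(float(v), 2) for k, v in memberships.items()})
```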

    Breaking Sticks and Ambiguities with Adaptive Skip-gram

    The recently proposed Skip-gram model is a powerful method for learning high-dimensional word representations that capture rich semantic relationships between words. However, Skip-gram, like most prior work on learning word representations, does not take word ambiguity into account and maintains only a single representation per word. Although a number of Skip-gram modifications have been proposed to overcome this limitation and learn multi-prototype word representations, they either require a known number of word meanings or learn them using greedy heuristic approaches. In this paper we propose the Adaptive Skip-gram model, a nonparametric Bayesian extension of Skip-gram capable of automatically learning the required number of representations for all words at the desired semantic resolution. We derive an efficient online variational learning algorithm for the model and empirically demonstrate its efficiency on a word-sense induction task.
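    The "breaking sticks" of the title refers to the stick-breaking construction used in nonparametric Bayesian models, which is what lets the number of senses per word adapt to the data. Below is a minimal sketch of how it produces prior sense probabilities; the concentration parameter and truncation level are illustrative assumptions.

```python
# Stick-breaking construction: beta_k ~ Beta(1, alpha),
# pi_k = beta_k * prod_{j<k} (1 - beta_j). Smaller alpha concentrates
# mass on fewer senses; larger alpha spreads it over more.
import numpy as np

def stick_breaking(alpha, truncation, rng):
    """Draw a (truncated) vector of prior sense probabilities."""
    betas = rng.beta(1.0, alpha, size=truncation)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas)[:-1]])
    return betas * remaining

rng = np.random.default_rng(0)
pi = stick_breaking(alpha=1.0, truncation=10, rng=rng)
print(pi.round(3), "sum =", pi.sum().round(3))
```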

    CES-513 Stages for Developing Control Systems Using EMG and EEG Signals: A Survey

    Bio-signals such as EMG (electromyography), EEG (electroencephalography), EOG (electrooculogram), and ECG (electrocardiogram) have recently been deployed to develop control systems that improve the quality of life of disabled and elderly people. This technical report reviews the current deployment of these state-of-the-art control systems and explains some challenging issues. In particular, the stages for developing EMG- and EEG-based control systems are categorized, namely data acquisition, data segmentation, feature extraction, classification, and controller. Some related bio-control applications are outlined. Finally, a brief conclusion is presented.
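    As a hedged illustration of the segmentation and feature-extraction stages categorized above, the sketch below slides a window over a raw EMG channel and computes common time-domain features (MAV, RMS, waveform length, zero crossings). The window sizes and the synthetic signal are assumptions for demonstration only.

```python
# Segmentation + time-domain feature extraction for one EMG channel.
import numpy as np

def emg_features(window):
    mav = np.mean(np.abs(window))                 # mean absolute value
    rms = np.sqrt(np.mean(window ** 2))           # root mean square
    wl = np.sum(np.abs(np.diff(window)))          # waveform length
    zc = np.sum(np.signbit(window[:-1]) != np.signbit(window[1:]))  # zero crossings
    return np.array([mav, rms, wl, zc])

def segment(signal, win=200, step=100):
    """Overlapping windows -> one feature row per window."""
    return np.stack([emg_features(signal[i:i + win])
                     for i in range(0, len(signal) - win + 1, step)])

rng = np.random.default_rng(0)
emg = rng.normal(scale=0.1, size=2000)            # stand-in for a raw recording
X = segment(emg)                                  # feature matrix for a classifier
print(X.shape)
```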

    Solving multiple-criteria R&D project selection problems with a data-driven evidential reasoning rule

    In this paper, a likelihood-based evidence acquisition approach is proposed to acquire evidence from experts' assessments as recorded in historical datasets. A data-driven evidential reasoning rule based model is then introduced into the R&D project selection process, combining multiple pieces of evidence with different weights and reliabilities. As a result, total belief degrees and overall performance scores can be generated for ranking and selecting projects. Finally, a case study on R&D project selection for the National Natural Science Foundation of China (NSFC) is conducted to show the effectiveness of the proposed model. The data-driven evidential reasoning rule based model for project evaluation and selection (1) uses experimental data to represent experts' assessments as belief distributions over the set of final funding outcomes, and through these historical statistics helps experts and applicants understand the funding probability associated with a given assessment grade, (2) captures the mapping relationships between the evaluation grades and the final funding outcomes by using historical data, and (3) provides a way to make fair decisions by taking experts' reliabilities into account. In the data-driven evidential reasoning rule based model, experts play different roles in accordance with their reliabilities, which are determined by their previous review track records, making the selection process more interpretable and fairer. The newly proposed model reduces the time-consuming panel review work for both managers and experts, and significantly improves the efficiency and quality of the project selection process. Although the model is demonstrated for project selection in the NSFC, it can be generalized to other funding agencies or industries. Comment: 20 pages, forthcoming in International Journal of Project Management (2019).
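    For intuition about reliability-aware evidence combination, here is a simplified sketch that discounts each expert's belief distribution by a reliability factor (Shafer discounting) and fuses the results with Dempster's rule. This stands in for, and is not identical to, the evidential reasoning (ER) rule used in the paper; the grade names, belief values, and reliabilities are made up.

```python
# Reliability-discounted evidence combination over funding grades.
GRADES = ["fund", "discuss", "reject"]

def discount(beliefs, reliability):
    """Shafer discounting: scale singleton masses by the reliability and
    move the remainder to total ignorance (mass on the whole frame THETA)."""
    m = {g: reliability * p for g, p in beliefs.items()}
    m["THETA"] = 1.0 - reliability * sum(beliefs.values())
    return m

def combine(m1, m2):
    """Dempster's rule for singleton grades plus the frame THETA."""
    joint = {g: m1[g] * m2[g] + m1[g] * m2["THETA"] + m1["THETA"] * m2[g]
             for g in GRADES}
    joint["THETA"] = m1["THETA"] * m2["THETA"]
    conflict = 1.0 - sum(joint.values())
    return {k: v / (1.0 - conflict) for k, v in joint.items()}

expert_a = discount({"fund": 0.7, "discuss": 0.2, "reject": 0.1}, reliability=0.9)
expert_b = discount({"fund": 0.5, "discuss": 0.4, "reject": 0.1}, reliability=0.6)
print(combine(expert_a, expert_b))
```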

    Software Effort Estimation Accuracy Prediction of Machine Learning Techniques: A Systematic Performance Evaluation

    Software effort estimation accuracy is a key factor in effective planning and control, and in delivering a successful software project within budget and schedule. Both overestimation and underestimation are key challenges for future software development, hence there is a continuous need for accuracy in software effort estimation (SEE). Researchers and practitioners are striving to identify which machine learning estimation technique gives more accurate results based on evaluation measures, datasets, and other relevant attributes. Authors of related research are generally not aware of previously published results of machine learning effort estimation techniques. The main aim of this study is to help researchers know which machine learning technique yields the most promising effort estimation accuracy for software development. In this paper, the performance of machine learning ensemble techniques is compared with that of solo techniques based on the two most commonly used accuracy evaluation metrics. We used the systematic literature review methodology proposed by Kitchenham and Charters, which includes searching for the most relevant papers, applying quality assessment criteria, extracting data, and drawing results. We evaluated the state-of-the-art accuracy performance of 28 selected studies (14 ensemble, 14 solo) using the Mean Magnitude of Relative Error (MMRE) and PRED(25) as a set of reliable accuracy metrics to answer the research questions stated in this study. We found that machine learning techniques are the most frequently implemented in the construction of ensemble effort estimation (EEE) techniques. The results of this study reveal that EEE techniques usually yield more promising estimation accuracy than solo techniques. Comment: Pages: 27 Figures: 15 Tables:
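    The two accuracy metrics used throughout the review are compact enough to state as code: MRE is |actual - predicted| / actual, MMRE is its mean over projects, and PRED(25) is the fraction of projects with MRE <= 0.25. The effort values below are invented purely for illustration.

```python
# MMRE and PRED(25), the standard SEE accuracy metrics.
import numpy as np

def mmre(actual, predicted):
    """Mean Magnitude of Relative Error."""
    return np.mean(np.abs(actual - predicted) / actual)

def pred(actual, predicted, level=0.25):
    """Fraction of estimates with MRE at or below the given level."""
    mre = np.abs(actual - predicted) / actual
    return np.mean(mre <= level)

actual = np.array([120.0, 340.0, 95.0, 560.0])       # e.g. person-hours
predicted = np.array([100.0, 360.0, 130.0, 500.0])
print(f"MMRE = {mmre(actual, predicted):.3f}, "
      f"PRED(25) = {pred(actual, predicted):.2f}")
```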

    A New Computer-Aided Diagnosis System with Modified Genetic Feature Selection for BI-RADS Classification of Breast Masses in Mammograms

    Mammography remains the most prevalent imaging tool for early breast cancer screening. The language used to describe abnormalities in mammographic reports is based on the Breast Imaging Reporting and Data System (BI-RADS). Assigning a correct BI-RADS category to each examined mammogram is a strenuous and challenging task, even for experts. This paper proposes a new and effective computer-aided diagnosis (CAD) system to classify mammographic masses into four BI-RADS assessment categories. The mass regions are first enhanced by means of histogram equalization and then semi-automatically segmented using the region growing technique. A total of 130 handcrafted BI-RADS features are then extracted from the shape, margin, and density of each mass, together with the mass size and the patient's age, as specified in BI-RADS mammography. A modified feature selection method based on the genetic algorithm (GA) is then proposed to select the most clinically significant BI-RADS features. Finally, a back-propagation neural network (BPN) is employed for classification, and its accuracy is used as the fitness function in the GA. A set of 500 mammogram images from the Digital Database for Screening Mammography (DDSM) is used for evaluation. Our system achieves a classification accuracy, positive predictive value, negative predictive value, and Matthews correlation coefficient of 84.5%, 84.4%, 94.8%, and 79.3%, respectively. To the best of our knowledge, this is the best current result for BI-RADS classification of breast masses in mammography, which makes the proposed system promising for supporting radiologists in deciding proper patient management based on the automatically assigned BI-RADS categories.
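    To make the selection loop concrete, here is a hedged sketch of genetic feature selection with classifier accuracy as the fitness, in the spirit of the proposed method. The bit-mask encoding, GA settings, and the scikit-learn MLP (standing in for the paper's BPN) are illustrative assumptions, not the authors' implementation.

```python
# GA-based feature selection: individuals are boolean masks over features,
# fitness is cross-validated accuracy of a small neural network.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def fitness(mask, X, y):
    if not mask.any():
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

def ga_select(X, y, pop_size=12, generations=10, rng=None):
    rng = rng or np.random.default_rng(0)
    pop = rng.random((pop_size, X.shape[1])) < 0.5       # random bit masks
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][:pop_size // 2]]  # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, X.shape[1])             # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(X.shape[1]) < 0.05          # bit-flip mutation
            children.append(np.where(flip, ~child, child))
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind, X, y) for ind in pop])
    return pop[np.argmax(scores)]

# Tiny synthetic demo (replace with the 130 BI-RADS features in practice).
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 10))
y = (X[:, 0] + X[:, 3] > 0).astype(int)
print("selected features:", np.flatnonzero(ga_select(X, y)))
```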