260 research outputs found

    Self-adaptive parameter and strategy based particle swarm optimization for large-scale feature selection problems with multiple classifiers

    This work was partially supported by the National Natural Science Foundation of China (61403206, 61876089, 61876185), the Natural Science Foundation of Jiangsu Province (BK20141005), the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (14KJB520025), the Engineering Research Center of Digital Forensics, Ministry of Education, and the Priority Academic Program Development of Jiangsu Higher Education Institutions. Peer reviewed. Postprint.

    Particle swarm optimization with state-based adaptive velocity limit strategy

    Velocity limit (VL) has been widely adopted in many variants of particle swarm optimization (PSO) to prevent particles from searching outside the solution space. Several adaptive VL strategies have been introduced with which the performance of PSO can be improved. However, the existing adaptive VL strategies adjust the VL simply as a function of the iteration count, leading to unsatisfactory optimization results because the VL can become incompatible with the current searching state of the particles. To deal with this problem, a novel PSO variant with a state-based adaptive velocity limit strategy (PSO-SAVL) is proposed. In PSO-SAVL, the VL is adaptively adjusted based on evolutionary state estimation (ESE): a high VL is set for the global searching state and a low VL for the local searching state. In addition, limit handling strategies have been modified and adopted to improve the capability of avoiding local optima. The good performance of PSO-SAVL has been experimentally validated on a wide range of benchmark functions with 50 dimensions, and its satisfactory scalability to high-dimensional and large-scale problems is also verified. The merits of the individual strategies in PSO-SAVL are verified in dedicated experiments. A sensitivity analysis of the relevant hyper-parameters in the state-based adaptive VL strategy is conducted, and insights into how to select these hyper-parameters are also discussed. Comment: 33 pages, 8 figures.
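The core idea above, deriving the velocity limit from an estimate of the swarm's evolutionary state rather than from the iteration count, can be sketched in a few lines. This is a minimal illustration rather than the authors' implementation: the spread-based state estimate, the VL bounds (5% to 50% of the search range), and all PSO coefficients below are assumptions.

```python
import numpy as np

def pso_savl(obj, dim=5, n=20, iters=200, lo=-10.0, hi=10.0, seed=0):
    """Minimal PSO sketch with a state-based adaptive velocity limit (VL)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest, pbest_f = x.copy(), np.array([obj(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        # Crude evolutionary-state estimate: normalized mean distance of
        # the particles to the swarm centroid (~1: global, ~0: local).
        spread = np.linalg.norm(x - x.mean(axis=0), axis=1).mean()
        state = min(spread / (hi - lo), 1.0)
        # High VL while exploring, low VL once the swarm has contracted.
        vl = (0.05 + 0.45 * state) * (hi - lo)
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        v = np.clip(v, -vl, vl)          # enforce the adaptive velocity limit
        x = np.clip(x + v, lo, hi)       # keep particles inside the bounds
        f = np.array([obj(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Sphere function as a stand-in benchmark.
best, val = pso_savl(lambda p: float(np.sum(p * p)))
```

Replacing the `state` line with a constant reproduces a fixed-VL PSO, which makes the effect of the adaptive limit easy to compare on any benchmark function.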

    A Tent Lévy Flying Sparrow Search Algorithm for Feature Selection: A COVID-19 Case Study

    The "Curse of Dimensionality" induced by the rapid development of information science might have a negative impact when dealing with big datasets. In this paper, we propose a variant of the sparrow search algorithm (SSA), called the Tent Lévy flying sparrow search algorithm (TFSSA), and use it in a wrapper framework to select the best subset of features for classification purposes. SSA is a recently proposed algorithm that has not been systematically applied to feature selection problems. After verification on the CEC2020 benchmark functions, TFSSA is used to select the feature combination that maximizes classification accuracy and minimizes the number of selected features. The proposed TFSSA is compared with nine algorithms from the literature. Nine evaluation metrics are used to properly evaluate and compare the performance of these algorithms on twenty-one datasets from the UCI repository. Furthermore, the approach is applied to the coronavirus disease (COVID-19) dataset, yielding an average classification accuracy of 93.47% with an average of 2.1 selected features. Experimental results confirm the advantages of the proposed algorithm in improving classification accuracy and reducing the number of selected features compared to other wrapper-based algorithms.
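Wrapper-based selection of the kind TFSSA performs optimizes a single fitness that trades classification error against subset size. The sketch below is hypothetical: a 1-nearest-neighbor wrapper on synthetic data, with plain bit-flip hill climbing standing in for the Tent Lévy sparrow search, and the weight `alpha = 0.99` chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: only the first 3 of 10 features carry class signal.
X = rng.normal(size=(200, 10))
y = (X[:, :3].sum(axis=1) > 0).astype(int)
X[y == 1, :3] += 1.5          # separate the classes along informative features

def accuracy(mask):
    """Leave-one-out 1-NN accuracy on the selected feature subset."""
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return 0.0
    Z = X[:, cols]
    d = np.linalg.norm(Z[:, None] - Z[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)           # a point may not be its own neighbor
    return float((y[d.argmin(axis=1)] == y).mean())

def fitness(mask, alpha=0.99):
    # Wrapper objective: weighted sum of error rate and subset-size ratio.
    return alpha * (1 - accuracy(mask)) + (1 - alpha) * mask.mean()

# Bit-flip hill climbing as a stand-in for the metaheuristic search loop.
mask = rng.random(10) < 0.5
best_f = fitness(mask)
for _ in range(200):
    cand = mask.copy()
    cand[rng.integers(10)] ^= True        # flip one feature in or out
    f = fitness(cand)
    if f < best_f:
        mask, best_f = cand, f
```

Swapping the hill-climbing loop for any population-based metaheuristic (SSA, PSO, GA) leaves `fitness` unchanged, which is the point of the wrapper formulation.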

    Improvement of the Brain Storm Optimization Algorithm for Optimization Problems

    University of Toyama, Doctoral Dissertation (Science and Engineering) No. 170, Yu Yang (于洋), awarded 2020/3/24, University of Toyama

    Automated Semantic Understanding of Human Emotions in Writing and Speech

    Affective Human Computer Interaction (A-HCI) will be critical for the success of new technologies that will be prevalent in the 21st century. If cell phones and the internet are any indication, there will be continued rapid development of automated assistive systems that help humans live better, more productive lives. These will not be just passive systems such as cell phones, but active assistive systems like robot aides used in hospitals, homes, entertainment rooms, offices, and other work environments. Such systems will need to properly deduce human emotional state before they determine how best to interact with people. This dissertation explores and extends the body of knowledge related to Affective HCI. New semantic methodologies are developed and studied for reliable and accurate detection of human emotional states and magnitudes in written and spoken speech, and for mapping emotional states and magnitudes to 3-D facial expression outputs. The automatic detection of affect in language is based on natural language processing and machine learning approaches. Two affect corpora were developed to perform this analysis. Emotion classification is performed at the sentence level using a step-wise approach which incorporates sentiment flow and sentiment composition features. For emotion magnitude estimation, a regression model was developed to predict the evolving emotional magnitude of actors. Emotional magnitudes at any point during a story or conversation are determined by 1) the previous emotional state magnitude; 2) new text and speech inputs that might act upon that state; and 3) information about the context the actors are in. Acoustic features are also used to capture additional information from the speech signal. Evaluation of the automatic understanding of affect is performed by testing the model on a testing subset of the newly extended corpus. To visualize actor emotions as perceived by the system, a methodology was also developed to map predicted emotion class magnitudes to 3-D facial parameters using vertex-level mesh morphing. The developed sentence-level emotion state detection approach achieved classification accuracies as high as 71% for the neutral vs. emotion classification task in a test corpus of children's stories. After class re-sampling, the results of the step-wise classification methodology on a test subset of a medical drama corpus achieved accuracies in the 56% to 84% range for each emotion class and polarity. For emotion magnitude prediction, the developed recurrent (prior-state feedback) regression model using both text-based and acoustic-based features achieved correlation coefficients in the range of 0.69 to 0.80. This prediction function was modeled using a non-linear approach based on Support Vector Regression (SVR) and performed better than other approaches based on Linear Regression or Artificial Neural Networks.
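The recurrent (prior-state feedback) regression described above predicts the current magnitude from the previous magnitude plus new input features. A toy sketch, with ordinary least squares standing in for the dissertation's Support Vector Regression and a synthetic magnitude series as stand-in data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy emotional-magnitude series: the next magnitude depends on the
# previous state plus a "text/acoustic" input (entirely synthetic).
T = 300
u = rng.normal(size=T)                    # stand-in for text/acoustic features
m = np.zeros(T)
for t in range(1, T):
    m[t] = 0.8 * m[t - 1] + 0.5 * u[t] + 0.05 * rng.normal()

# Recurrent regression: predict m[t] from (m[t-1], u[t]).
# Plain least squares is used here in place of SVR for brevity.
Phi = np.column_stack([m[:-1], u[1:], np.ones(T - 1)])
w, *_ = np.linalg.lstsq(Phi, m[1:], rcond=None)
pred = Phi @ w
r = np.corrcoef(pred, m[1:])[0, 1]        # correlation, the metric reported
```

At inference time the model's own prediction would be fed back as the prior state, which is what makes the formulation recurrent rather than a plain feature regression.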

    Accurate dosimetry for microbeam radiation therapy

    Microbeam Radiation Therapy (MRT) is an emergent treatment modality that uses spatially fractionated synchrotron x-ray beams. MRT has been identified as a promising treatment concept that might be applied to patients with malignant central nervous system (CNS) tumors for whom, at the current stage of development, no satisfactory therapy is yet available. The use of a fractionated beam allows better skin sparing and better tolerance of healthy tissue to high dose rates. MRT consists of a stereotactic irradiation with a highly collimated, quasi-parallel array of narrow beams, 50 µm wide and spaced 400 µm apart, of synchrotron-generated x-rays with energies ranging from 0 to 600 keV. The European Synchrotron Radiation Facility (ESRF) as an x-ray source allows a very small beam divergence and an extremely high dose rate. The dose deposited on the path of the primary photons (peak dose), of several hundred grays (Gy), is well tolerated by normal tissues while providing a higher therapeutic index for various tumor models in rodents. The high dose rate demands an accurate and reproducible dosimetry protocol to ensure that the delivered dose matches the prescribed dose. MRT is by definition a non-conventional irradiation method, so dosimetric errors are larger than in conventional treatments for two reasons: (i) the reference conditions recommended by the American Association of Physicists in Medicine (AAPM) or the International Atomic Energy Agency (IAEA) cannot be established, and (ii) the measurement of absorbed dose to water in composite fields is not standardized. This PhD is focused on bridging the gap between Monte Carlo (MC) simulated values of output factors (OF) and peak-to-valley dose ratios (PVDR) and experimental measurements. Several aspects of the irradiation setup, such as insertion devices on the path of the x-ray beam, are accounted for, as well as the internal structure of the dosimeters. Each contribution to the OF and PVDR is quantified to correct the measurements.

    Evolutionary Computation 2020

    Intelligent optimization is based on the mechanisms of computational intelligence: refine a suitable feature model, design an effective optimization algorithm, and then obtain an optimal or satisfactory solution to a complex problem. Intelligent algorithms are key tools for ensuring global optimization quality, fast optimization efficiency and robust optimization performance. Intelligent optimization algorithms have been studied by many researchers, leading to improvements in the performance of algorithms such as the evolutionary algorithm, the whale optimization algorithm, the differential evolution algorithm, and particle swarm optimization. Studies in this arena have also resulted in breakthroughs in solving complex problems including the green shop scheduling problem, the severely nonlinear problem of one-dimensional geodesic electromagnetic inversion, error and bug finding in software, the 0-1 knapsack problem, the traveling salesman problem, and the logistics distribution center siting problem. The editors are confident that this book can open a new avenue for further improvement and discoveries in the area of intelligent algorithms. The book is a valuable resource for researchers interested in understanding the principles and design of intelligent algorithms.

    Machine Learning Approaches to Predict Recurrence of Aggressive Tumors

    Cancer recurrence is the major cause of cancer mortality. Despite tremendous research efforts, there is a dearth of biomarkers that reliably predict the risk of cancer recurrence. Currently available biomarkers and tools in the clinic have limited usefulness for accurately identifying patients at higher risk of recurrence. Consequently, cancer patients suffer from either under- or over-treatment. Recent advances in machine learning and image analysis have facilitated the development of techniques that translate digital images of tumors into a rich source of new data. Leveraging these computational advances, my work addresses the unmet need for risk-predictive biomarkers in Triple Negative Breast Cancer (TNBC), Ductal Carcinoma in situ (DCIS), and Pancreatic Neuroendocrine Tumors (PanNETs). I have developed unique, clinically facile models that determine the risk of recurrence, whether local, invasive, or metastatic, in these tumors. All models employ hematoxylin and eosin (H&E) stained digitized images of patient tumor samples as the primary source of data. The TNBC (n=322) models identified unique signatures from a panel of 133 protein biomarkers relevant to breast cancer to predict the site of metastasis (brain, lung, liver, or bone) for TNBC patients. Even our least significant model (bone metastasis) offered superior prognostic value over clinicopathological variables (Hazard Ratio [HR] of 5.123 vs. 1.397, p < 0.05). A second model predicted 10-year recurrence risk in women with DCIS treated with breast-conserving surgery by identifying prognostically relevant features of tumor architecture from digitized H&E slides (n=344), using a novel two-step classification approach. In the validation cohort, our DCIS model provided a significantly higher HR (6.39) than any clinicopathological marker (p < 0.05). The third model is a deep-learning based, multi-label (annotation followed by metastasis association), whole-slide image analysis pipeline (n=90) that identified a PanNET high-risk group with over an 8x higher risk of metastasis (versus the low-risk group, p < 0.05), regardless of confounding clinical variables. These machine-learning based models may guide treatment decisions and demonstrate proof-of-principle that computational pathology has tremendous clinical utility.

    Implementing decision tree-based algorithms in medical diagnostic decision support systems

    As a branch of healthcare, medical diagnosis can be defined as finding the disease based on the signs and symptoms of the patient. To this end, the required information is gathered from different sources such as physical examination, medical history and general information about the patient. The development of smart classification models for medical diagnosis is of great interest among researchers, mainly because machine learning and data mining algorithms are capable of detecting hidden trends between the features of a database. Hence, classifying medical datasets using smart techniques paves the way to the design of more efficient medical diagnostic decision support systems. Several databases have been provided in the literature to investigate different aspects of diseases. As an alternative to the available diagnosis tools/methods, this research involves the machine learning algorithms Classification and Regression Tree (CART), Random Forest (RF) and Extremely Randomized Trees or Extra Trees (ET) for the development of classification models that can be implemented in computer-aided diagnosis systems. As a decision tree (DT), CART is fast to create, and it applies to both quantitative and qualitative data. For classification problems, RF and ET combine a number of weak learners such as CART to develop models for classification tasks. We employed the Wisconsin Breast Cancer Database (WBCD), the Z-Alizadeh Sani dataset for coronary artery disease (CAD), and the databanks gathered in Ghaem Hospital's dermatology clinic on the response of patients with common and/or plantar warts to cryotherapy and/or immunotherapy. To classify the breast cancer type based on the WBCD, the RF and ET methods were employed; the developed RF and ET models forecast the WBCD type with 100% accuracy in all cases. To choose the proper treatment approach for warts, as well as for CAD diagnosis, the CART methodology was employed. The findings of the error analysis revealed that the proposed CART models attain the highest precision for the applications of interest, and no model in the literature rivals them. The outcome of this study supports the idea that methods like CART, RF and ET not only improve diagnosis precision, but also reduce the time and expense needed to reach a diagnosis. However, since these strategies are highly sensitive to the quality and quantity of the supplied data, more extensive databases with a greater number of independent parameters might be required for further practical implications of the developed models.
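All three methods (CART, RF, ET) build on impurity-based splitting; RF and ET aggregate many randomized trees of this kind. A minimal, hypothetical one-split (decision stump) version of the CART criterion on synthetic data, not the clinical datasets above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a diagnostic dataset: one informative feature.
X = rng.normal(size=(300, 4))
y = (X[:, 2] > 0.3).astype(int)

def gini(labels):
    """Gini impurity of a binary label set."""
    p = np.bincount(labels, minlength=2) / len(labels)
    return 1.0 - np.sum(p * p)

def best_stump(X, y):
    """Pick the feature/threshold pair minimizing weighted Gini impurity."""
    best = (np.inf, 0, 0.0)
    for j in range(X.shape[1]):
        for thr in np.quantile(X[:, j], np.linspace(0.1, 0.9, 17)):
            left = X[:, j] <= thr
            if left.all() or not left.any():
                continue
            score = left.mean() * gini(y[left]) + (~left).mean() * gini(y[~left])
            if score < best[0]:
                best = (score, j, thr)
    return best[1], best[2]

j, thr = best_stump(X, y)
left = X[:, j] <= thr
# Predict the majority class on each side of the split.
pred = np.where(left, np.bincount(y[left]).argmax(), np.bincount(y[~left]).argmax())
acc = float((pred == y).mean())
```

A full CART recurses this split on each child node; RF applies it to bootstrap samples with random feature subsets, and ET additionally randomizes the thresholds.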