
    Combined optimization of feature selection and algorithm parameters in machine learning of language

    Comparative machine learning experiments have become an important methodology in empirical approaches to natural language processing, (i) to investigate which machine learning algorithms have the 'right bias' to solve specific natural language processing tasks, and (ii) to investigate which sources of information add to accuracy in a learning approach. Using automatic word sense disambiguation as an example task, we show that with the methodology currently used in comparative machine learning experiments, the results may often not be reliable because of the influence of, and interaction between, feature selection and algorithm parameter optimization. We propose genetic algorithms as a practical approach to achieve both higher accuracy within a single approach and more reliable comparisons.
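    The joint search this abstract describes can be sketched in miniature. The toy illustration below is not the paper's system: the fitness function is synthetic (standing in for cross-validated word sense disambiguation accuracy), and all sizes, rates, and names are chosen for the example. A single chromosome encodes both a feature-selection bitmask and one continuous algorithm parameter, so the genetic algorithm optimizes them together rather than in isolation.

```python
import random

random.seed(42)

N_FEATURES = 8   # hypothetical feature count for the illustration
POP_SIZE = 30
GENERATIONS = 40

def fitness(chrom):
    """Toy stand-in for cross-validated accuracy of a learner."""
    mask, param = chrom
    informative = sum(mask[:3])       # reward selecting features 0-2
    noisy = sum(mask[3:])             # penalize selecting the rest
    return informative - 0.5 * noisy - abs(param - 0.5)

def random_chromosome():
    # a chromosome = feature bitmask + one continuous algorithm parameter
    return ([random.randint(0, 1) for _ in range(N_FEATURES)], random.random())

def mutate(chrom):
    mask, param = chrom
    mask = [bit ^ (random.random() < 0.1) for bit in mask]    # bit flips
    param = min(1.0, max(0.0, param + random.gauss(0, 0.1)))  # jitter
    return (mask, param)

def crossover(a, b):
    point = random.randrange(1, N_FEATURES)
    return (a[0][:point] + b[0][point:], (a[1] + b[1]) / 2)

pop = [random_chromosome() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP_SIZE // 2]     # truncation selection (elitist)
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print(best)
```

    Because the top half of each generation survives unchanged, the best chromosome found so far is never lost, which is one reason elitist selection makes comparisons between runs more stable.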

    Machine Learning tools for global PDF fits

    The use of machine learning algorithms in theoretical and experimental high-energy physics has seen impressive progress in recent years, with applications ranging from trigger selection to jet substructure classification and detector simulation, among many others. In this contribution, we review the machine learning tools used in the NNPDF family of global QCD analyses. These include multi-layer feed-forward neural networks for the model-independent parametrisation of parton distributions and fragmentation functions, genetic and covariance matrix adaptation algorithms for training and optimisation, and closure testing for the systematic validation of the fitting methodology.
    Comment: 12 pages, 9 figures, to appear in the proceedings of the XXIIIth Quark Confinement and the Hadron Spectrum conference, 1-6 August 2018, University of Maynooth, Ireland.
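    The parametrisation idea can be illustrated with a minimal sketch. This is not the collaboration's code: the layer sizes are hypothetical, the weights are random rather than fitted, and the preprocessing exponents are illustrative. It only shows the general NNPDF-style form, a small feed-forward network multiplied by a preprocessing factor that enforces the expected small-x and large-x behaviour.

```python
import numpy as np

rng = np.random.default_rng(0)

# tiny 2-5-1 feed-forward net; sizes are hypothetical, weights unfitted
W1 = rng.normal(size=(5, 2)); b1 = rng.normal(size=5)
W2 = rng.normal(size=(1, 5)); b2 = rng.normal(size=1)

def nn(x):
    # NNPDF-style (x, log x) input pair feeding a tanh hidden layer
    h = np.tanh(W1 @ np.array([x, np.log(x)]) + b1)
    return float((W2 @ h + b2)[0])

def pdf(x, alpha=1.1, beta=3.0):
    # preprocessing factor x^(1-alpha) (1-x)^beta enforces the boundary
    # behaviour; alpha and beta values here are illustrative, not fitted
    return x ** (1 - alpha) * (1 - x) ** beta * nn(x)

print(pdf(0.1))
```

    In a real fit the weights would be trained (e.g. with the genetic or covariance-matrix-adaptation optimisers the abstract mentions) against experimental data; the preprocessing factor guarantees the parametrisation vanishes at x = 1 regardless of the network output.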

    Combination of Evolutionary Algorithms with Experimental Design, Traditional Optimization and Machine Learning

    Evolutionary algorithms alone cannot solve optimization problems very efficiently, since these algorithms involve many random (not very rational) decisions. Combining evolutionary algorithms with other techniques has proven to be an efficient optimization methodology. In this talk, I will explain the basic ideas of our three algorithms along this line: (1) the orthogonal genetic algorithm, which treats crossover/mutation as an experimental design problem; (2) the multiobjective evolutionary algorithm based on decomposition (MOEA/D), which uses decomposition techniques from traditional mathematical programming in a multiobjective evolutionary algorithm; and (3) regularity-model-based multiobjective estimation of distribution algorithms (RM-MEDA), which use the regularity property and machine learning methods to improve multiobjective evolutionary algorithms.
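    The decomposition idea behind MOEA/D can be shown on a toy bi-objective problem. The sketch below only illustrates the scalarisation step: a set of weight vectors turns one multiobjective problem into many scalar subproblems, each of which traces out a different point on the Pareto front. Random sampling stands in for the evolutionary operators and neighbourhood updates of the real algorithm, and the objectives are chosen purely for the example.

```python
import random

random.seed(1)

# toy bi-objective problem: minimize both f1 and f2 over x in [-1, 3]
def f1(x): return x * x
def f2(x): return (x - 2) ** 2

N_SUB = 11  # number of scalar subproblems / weight vectors

# evenly spread weight vectors (w1, w2) with w1 + w2 = 1
weights = [(i / (N_SUB - 1), 1 - i / (N_SUB - 1)) for i in range(N_SUB)]

def scalarized(x, w):
    # weighted-sum decomposition: one scalar objective per weight vector
    return w[0] * f1(x) + w[1] * f2(x)

# crude per-subproblem search: random sampling stands in for the
# crossover/mutation/neighbourhood machinery of the real MOEA/D
candidates = [random.uniform(-1, 3) for _ in range(2000)]
front = [min(candidates, key=lambda x: scalarized(x, w)) for w in weights]
print(front)
```

    For these two quadratics the weighted-sum optimum is x = 2·w2, so the eleven subproblems approximate eleven evenly spaced points on the Pareto set [0, 2]; the real algorithm additionally shares good solutions between subproblems with similar weights.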

    Quantum autoencoders via quantum adders with genetic algorithms

    The quantum autoencoder is a recent paradigm in the field of quantum machine learning, which may enable an enhanced use of resources in quantum technologies. To this end, quantum neural networks with fewer nodes in the inner than in the outer layers have been considered. Here, we propose a useful connection between approximate quantum adders and quantum autoencoders. Specifically, this link allows us to employ optimized approximate quantum adders, obtained with genetic algorithms, for the implementation of quantum autoencoders for a variety of initial states. Furthermore, we can also directly optimize the quantum autoencoders via genetic algorithms. Our approach opens a different path for the design of quantum autoencoders on controllable quantum platforms.

    A NEAT Approach to Malware Classification

    Current malware detection software often relies on machine learning, which is seen as an improvement over signature-based techniques. Problems with a machine-learning-based approach can arise when malware writers modify their code with the intent to evade detection. This leads to a cat-and-mouse situation in which new models must constantly be trained to detect new malware variants. In this research, we experiment with genetic algorithms as a means of evolving machine learning models to detect malware. Genetic algorithms, which simulate natural selection, provide a way for models to adapt to continuous changes in malware families and thereby improve detection rates. Specifically, we use the Neuro-Evolution of Augmenting Topologies (NEAT) algorithm to optimize machine learning classifiers based on decision trees and neural networks. We compare the performance of our NEAT approach to standard models, including random forest and support vector machines.
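    The evolve-to-adapt loop can be sketched in a much-simplified form. NEAT itself also evolves the network topology; the illustration below (not the paper's method) evolves only the weights of a fixed single-neuron classifier on a synthetic two-feature "malware vs. benign" dataset, purely to show the mutation-and-selection cycle by which a population of models can track a shifting target.

```python
import random

random.seed(7)

# synthetic dataset: label is 1 ("malware") when feature1 + feature2 > 1
pts = [(random.random(), random.random()) for _ in range(200)]
data = [(p, 1 if p[0] + p[1] > 1 else 0) for p in pts]

def predict(w, p):
    # fixed one-neuron classifier: weights w[0], w[1] plus bias w[2]
    return 1 if w[0] * p[0] + w[1] * p[1] + w[2] > 0 else 0

def accuracy(w):
    return sum(predict(w, p) == y for p, y in data) / len(data)

# evolve the 3 weights: truncation selection plus Gaussian mutation
pop = [[random.gauss(0, 1) for _ in range(3)] for _ in range(20)]
for _ in range(50):
    pop.sort(key=accuracy, reverse=True)
    survivors = pop[:5]               # elitism: best models always survive
    pop = survivors + [[g + random.gauss(0, 0.2)
                        for g in random.choice(survivors)]
                       for _ in range(15)]

best = max(pop, key=accuracy)
print(accuracy(best))
```

    When the "malware family" drifts (i.e. the labeling rule changes), the same loop can simply continue from the current population instead of retraining from scratch, which is the adaptability argument the abstract makes.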

    Improving Patient Care with Machine Learning: A Game-Changer for Healthcare

    Machine learning has revolutionized the field of healthcare by offering tremendous potential to improve patient care across various domains. This research study aimed to explore the impact of machine learning in healthcare and identify key findings in several areas.

    Machine learning algorithms demonstrated the ability to detect diseases at an early stage and facilitate accurate diagnoses by analyzing extensive medical data, including patient records, lab results, imaging scans, and genetic information. This capability holds the potential to improve patient outcomes and increase survival rates.

    The study highlighted that machine learning can generate personalized treatment plans by analyzing individual patient data, considering factors such as medical history, genetic information, and treatment outcomes. This personalized approach enhances treatment effectiveness, reduces adverse events, and contributes to improved patient outcomes.

    Predictive analytics utilizing machine learning techniques showed promise in patient monitoring by leveraging real-time data such as vital signs, physiological information, and electronic health records. By providing early warnings, these systems allow healthcare providers to intervene proactively, preventing adverse events and enhancing patient safety.

    Machine learning also played a significant role in precision medicine and drug discovery. By analyzing vast biomedical datasets, including genomics, proteomics, and clinical trial information, machine learning algorithms identified novel drug targets, predicted drug efficacy and toxicity, and optimized treatment regimens. This accelerated drug discovery process holds the potential to provide more effective and personalized treatment options.

    The study also emphasized the value of machine learning in pharmacovigilance and adverse event detection. By analyzing big data from the FDA Adverse Event Reporting System (FAERS), machine learning algorithms uncovered hidden associations between drugs, medical products, and adverse events, aiding in early detection and monitoring of drug-related safety issues. This finding contributes to improved patient safety and reduced occurrences of adverse events.

    The research also demonstrated the remarkable potential of machine learning in medical imaging analysis. Deep learning algorithms trained on large datasets were able to detect abnormalities in various medical images, facilitating faster and more accurate diagnoses. This technology reduces human error and ultimately leads to improved patient outcomes.

    While machine learning offers immense benefits, ethical considerations such as patient privacy, algorithm bias, and transparency must be addressed for responsible implementation. Healthcare professionals should remain central to decision-making processes, utilizing machine learning as a tool to enhance their expertise rather than replace it. This study showcases the transformative potential of machine learning in revolutionizing healthcare and improving patient care.

    Genetic Algorithm Modeling with GPU Parallel Computing Technology

    We present a multi-purpose genetic algorithm designed and implemented with GPGPU/CUDA parallel computing technology. The model was derived from a multi-core CPU serial implementation, named GAME, already successfully tested and validated on astrophysical massive data classification problems through a web application resource (DAMEWARE) specialized in data mining based on machine learning paradigms. Since genetic algorithms are inherently parallel, the GPGPU computing paradigm made it possible to exploit the internal training features of the model, permitting a strong optimization in terms of processing performance and scalability.
    Comment: 11 pages, 2 figures, refereed proceedings; Neural Nets and Surroundings, Proceedings of 22nd Italian Workshop on Neural Nets, WIRN 2012; Smart Innovation, Systems and Technologies, Vol. 19, Springer.
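    The "inherently parallel" observation is that each individual's fitness can be evaluated independently of all the others, so the evaluation step maps directly onto many parallel workers. The sketch below (not the GAME code) uses a Python thread pool as a stand-in for the GPU grid of the CUDA implementation; the fitness function is a hypothetical placeholder, and real speedups for CPU-bound Python fitness would need processes or an actual GPU kernel.

```python
from concurrent.futures import ThreadPoolExecutor

def fitness(individual):
    # hypothetical placeholder fitness: sum of squared genes
    return sum(g * g for g in individual)

# a small population of 3-gene individuals, chosen for the example
population = [[i, i + 1, i + 2] for i in range(8)]

# Each evaluation is independent, so all of them can run concurrently.
# A thread pool stands in here for the GPU grid of the CUDA version.
with ThreadPoolExecutor(max_workers=4) as pool:
    scores = list(pool.map(fitness, population))

print(scores)
```

    Selection, crossover, and mutation then operate on the gathered scores exactly as in the serial version; only the evaluation loop changes, which is why porting a CPU genetic algorithm to the GPU can yield large speedups without redesigning the model.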