819 research outputs found

    Mining a Small Medical Data Set by Integrating the Decision Tree and t-test

    Although several researchers have used statistical methods to show that aspiration followed by injection of 95% ethanol left in situ (retention) is an effective treatment for ovarian endometriomas, very few discuss the different conditions that can produce different recovery rates for patients. This study therefore combines a statistical method with decision-tree techniques to analyze the postoperative status of ovarian endometriosis patients under different conditions. Since the collected data set is small, containing only 212 records, all of the data are used as training data. Consequently, instead of generating rules directly from the resulting tree, we first use the value of each node as a cut point to generate all possible rules from the tree. Then, using the t-test, we verify the candidate rules to discover useful descriptive rules. Experimental results show that our approach can find new and interesting knowledge about recurrent ovarian endometriomas under different conditions.
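
    As an illustration of the two-stage procedure this abstract describes, the following Python sketch fits a decision tree, treats every internal-node threshold as a candidate cut point, and keeps only the cuts whose outcome difference passes a t-test. The synthetic data, feature count, and significance level are assumptions for illustration, not the paper's data or code.

```python
# Minimal sketch: decision-tree thresholds as candidate cut points,
# verified with Welch's t-test. Data below is a synthetic stand-in.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(212, 4))                          # 212 records, 4 hypothetical clinical features
y = (X[:, 0] + rng.normal(size=212) > 0).astype(int)   # hypothetical recurrence label

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Every internal node contributes one candidate rule: feature <= threshold.
t = tree.tree_
for node in range(t.node_count):
    if t.children_left[node] == -1:        # leaf node: no split here
        continue
    f, thr = t.feature[node], t.threshold[node]
    left, right = y[X[:, f] <= thr], y[X[:, f] > thr]
    stat, p = ttest_ind(left, right, equal_var=False)
    if p < 0.05:                           # keep only statistically supported cuts
        print(f"feature {f} <= {thr:.2f}: p = {p:.4f}")
```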

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as the final publication of the COST Action IC1406 "High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)" project. Long considered important pillars of the scientific method, modelling and simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predicting and analysing natural and complex systems in science and engineering. As their level of abstraction rises to give a better discernment of the domain at hand, their representation becomes increasingly demanding of computational and data resources. High Performance Computing, in turn, typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication, and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.

    Analytic Prediction of Hepatitis Using the Logistic Regression Algorithm

    Hepatitis is an inflammation of the liver and one of the diseases affecting the health of millions of people worldwide, across all age groups. Predicting the outcome of this disease is quite challenging; the main difficulty for public health care services is the limited scope of clinical diagnosis at an early stage. By applying machine learning techniques to existing data, namely by inferring diagnostic rules to reveal trends in hepatitis patient data and identify the factors affecting patients with hepatitis, the diagnosis process can be made more reliable and health care improved. One approach to this prediction task is regression, which models the relationship between independent variables and a dependent variable. Using the hepatitis data set from the UCI Machine Learning repository, this study applies a logistic regression model that achieves an accuracy of 83.33%.
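
    The following is a minimal sketch of the kind of logistic-regression pipeline this abstract describes, written with scikit-learn. The synthetic stand-in data, feature count, and preprocessing choices are assumptions for illustration; the study itself used the UCI hepatitis data set.

```python
# Minimal sketch: logistic regression for a binary hepatitis outcome.
# Synthetic stand-in features replace the UCI hepatitis data here.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(155, 6))                     # ~155 records, as in the UCI set
logit = X @ rng.normal(size=6)                    # latent linear risk score
y = (1 / (1 + np.exp(-logit)) > 0.5).astype(int)  # binary outcome (live/die)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.4f}")
```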

    A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications

    Particle swarm optimization (PSO) is a heuristic global optimization method, originally proposed by Kennedy and Eberhart in 1995, and is now one of the most commonly used optimization techniques. This survey presents a comprehensive investigation of PSO. On the one hand, we review advances in PSO, including its modifications (quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topologies (fully connected, von Neumann, ring, star, random, etc.), hybridizations (with genetic algorithms, simulated annealing, tabu search, artificial immune systems, ant colony algorithms, artificial bee colony, differential evolution, harmonic search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementations (multicore, multiprocessor, GPU, and cloud computing). On the other hand, we survey applications of PSO in the following fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology. We hope this survey will be useful to researchers studying PSO algorithms.
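
    For readers new to PSO, a bare-bones global-best variant in the Kennedy-Eberhart style can be written in a few lines, as sketched below. Each particle tracks its personal best while the swarm tracks a global best, and velocities blend inertia with cognitive and social pulls. The inertia and acceleration coefficients here are common textbook defaults, not values prescribed by the survey.

```python
# Minimal global-best PSO sketch, minimising the sphere function.
import numpy as np

def pso(f, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))      # particle positions
    v = np.zeros_like(x)                            # particle velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()            # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + cognitive pull (personal best) + social pull (global best)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        better = val < pbest_val                    # update improved personal bests
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

best_x, best_f = pso(lambda p: np.sum(p ** 2))      # sphere function test
print(best_x, best_f)
```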

    An overview on structural health monitoring: From the current state-of-the-art to new bio-inspired sensing paradigms

    In the last decades, the field of structural health monitoring (SHM) has grown exponentially. Yet several technical constraints persist that prevent the full realization of its potential. To upgrade current state-of-the-art technologies, researchers have started to look at nature's creations, giving rise to a new field called 'biomimetics', which operates across the border between living and non-living systems. The highly optimised and time-tested performance of biological assemblies continues to inspire the development of bio-inspired artificial counterparts that can potentially outperform conventional systems. After a critical appraisal of the current status of SHM, this paper presents a review of selected works on neural, cochlea and immune-inspired algorithms implemented in the field of SHM, including a brief survey of advances in bio-inspired sensor technology for SHM. In parallel with this engineering progress, a deeper understanding of the biological patterns most suitable for transfer into multimodal SHM systems is fundamental to fostering new scientific breakthroughs. Hence, grounded in the dissection of three selected human biological systems, a framework for new bio-inspired sensing paradigms is outlined, aimed at guiding the identification of tailored attributes to transplant from nature to SHM.

    Influence of Training Data Distribution on the Performance of Supervised Machine Learning Algorithms

    Almost all areas of life require banknotes, and some, such as banks, transportation companies, and casinos, require them in large quantities. Banknotes are therefore an essential component of everyday activity, especially activity related to finance. Technological advances such as scanners and copy machines give anyone the opportunity to commit a crime: counterfeiting banknotes. Many people still find it difficult to distinguish a genuine banknote from a counterfeit one, because counterfeits are produced with a high degree of resemblance to the genuine article. Against this background, the authors carry out a classification process to distinguish genuine banknotes from counterfeit ones. The classification uses supervised learning methods and compares accuracy across different distributions of training data. The supervised learning methods used are Support Vector Machine (SVM), K-Nearest Neighbor (K-NN), and Naïve Bayes. K-NN achieves the highest specificity, sensitivity, and accuracy of the three methods at every training-data proportion of 30%, 50%, and 80%: with 30% and 50% training data, specificity is 0.99, sensitivity 1.00, and accuracy 0.99, while with 80% training data, specificity, sensitivity, and accuracy are all 1.00. This shows that the distribution of training data influences the performance of supervised machine learning algorithms; for K-NN, the more training data, the better the accuracy.
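
    A minimal sketch of the comparison this abstract describes follows: the three classifiers are trained on 30%, 50%, and 80% splits and scored on the held-out remainder. Synthetic stand-in data replaces the UCI banknote-authentication set the study used, so the printed numbers will differ from those reported.

```python
# Minimal sketch: SVM, k-NN, and naive Bayes compared across training splits.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in: 1372 samples and 4 features, mirroring the UCI set's shape.
X, y = make_classification(n_samples=1372, n_features=4, n_informative=4,
                           n_redundant=0, random_state=0)

models = {"SVM": SVC(), "k-NN": KNeighborsClassifier(), "NB": GaussianNB()}
for train_frac in (0.3, 0.5, 0.8):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=train_frac, random_state=0, stratify=y)
    for name, clf in models.items():
        acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
        print(f"train={train_frac:.0%}  {name}: accuracy={acc:.3f}")
```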

    Swarm Intelligence

    Swarm Intelligence has emerged as one of the most studied branches of artificial intelligence during the last decade, constituting the fastest growing stream in the bio-inspired computation community. A clear trend can be deduced by analyzing some of the most renowned scientific databases: the interest aroused by this branch has increased at a notable pace in recent years. This book describes the prominent theories and recent developments of Swarm Intelligence methods and their application across the fields of engineering, offering a great opportunity for researchers, lecturers, and practitioners interested in Swarm Intelligence, optimization problems, and artificial intelligence.