Detecting Heart Attacks Using Learning Classifiers
Cardiovascular diseases (CVDs) have emerged as a critical global threat to human life. Diagnosing these diseases is a complex challenge, particularly for inexperienced doctors, as their symptoms can be mistaken for signs of aging or for similar conditions. Early detection of heart disease can help prevent heart failure, making it crucial to develop effective diagnostic techniques. Machine Learning (ML) techniques have gained popularity among researchers for identifying new patients based on past data. While various forecasting techniques have been applied to different medical datasets, accurate and timely detection of heart attacks remains elusive. This article presents a comprehensive comparative analysis of various ML techniques, including Decision Tree, Support Vector Machines, Random Forest, Extreme Gradient Boosting (XGBoost), Adaptive Boosting, Multilayer Perceptron, Gradient Boosting, K-Nearest Neighbor, and Logistic Regression. These classifiers are implemented and evaluated in Python using data from over 300 patients obtained from the Kaggle cardiovascular repository in CSV format. The classifiers categorize patients into two groups: those with a heart attack and those without. Performance evaluation metrics such as recall, precision, accuracy, and the F1-measure are employed to assess the classifiers’ effectiveness. The results highlight the XGBoost classifier as a promising tool for accurate diagnosis in the medical domain, demonstrating the highest predictive accuracy (95.082%) with a computation time of 0.07995 s on the dataset, compared to the other classifiers.
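The comparison described above can be sketched with scikit-learn. This is a minimal, hypothetical stand-in: the Kaggle patient CSV is not reproduced here, so a synthetic 300-sample binary problem replaces it, and XGBoost and the Multilayer Perceptron are omitted to keep the sketch dependency-free.

```python
# Hedged sketch of a classifier comparison on accuracy/precision/recall/F1.
# The synthetic dataset is a stand-in for the ~300-patient Kaggle CSV.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import (RandomForestClassifier,
                              GradientBoostingClassifier, AdaBoostClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

X, y = make_classification(n_samples=300, n_features=13, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

classifiers = {
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
    "RandomForest": RandomForestClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "GradientBoosting": GradientBoostingClassifier(random_state=0),
    "KNN": KNeighborsClassifier(),
    "LogisticRegression": LogisticRegression(max_iter=1000),
}

results = {}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)            # learn from the "past data"
    pred = clf.predict(X_te)       # classify held-out "new patients"
    results[name] = {
        "accuracy": accuracy_score(y_te, pred),
        "precision": precision_score(y_te, pred),
        "recall": recall_score(y_te, pred),
        "f1": f1_score(y_te, pred),
    }

for name, m in results.items():
    print(f"{name:20s} acc={m['accuracy']:.3f} f1={m['f1']:.3f}")
```

On the real dataset one would load the CSV instead of `make_classification` and add the boosted and neural models the article evaluates.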
Evolutionary algorithm-based analysis of gravitational microlensing lightcurves
A new algorithm developed to perform autonomous fitting of gravitational
microlensing lightcurves is presented. The new algorithm is conceptually
simple, versatile and robust, and parallelises trivially; it combines features
of extant evolutionary algorithms with some novel ones, and fares well on the
problem of fitting binary-lens microlensing lightcurves, as well as on a number
of other difficult optimisation problems. Success rates in excess of 90% are
achieved when fitting synthetic though noisy binary-lens lightcurves, allowing
no more than 20 minutes per fit on a desktop computer; this success rate is
shown to compare very favourably with that of both a conventional (iterated
simplex) algorithm, and a more state-of-the-art, artificial neural
network-based approach. As such, this work provides proof of concept for the
use of an evolutionary algorithm as the basis for real-time, autonomous
modelling of microlensing events. Further work is required to investigate how
the algorithm will fare when faced with more complex and realistic microlensing
modelling problems; it is, however, argued here that the use of parallel
computing platforms, such as inexpensive graphics processing units, should
allow fitting times to be constrained to under an hour, even when dealing with
complicated microlensing models. In any event, it is hoped that this work might
stimulate some interest in evolutionary algorithms, and that the algorithm
described here might prove useful for solving microlensing and/or more general
model-fitting problems. Comment: 14 pages, 3 figures; accepted for publication in MNRAS
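The fitting loop the abstract describes can be illustrated generically. This is a toy sketch, not the paper's algorithm: the binary-lens model and its specialised operators are not reproduced, and a single Gaussian bump stands in for a microlensing lightcurve, fitted with a plain evolutionary loop (elitism, crossover, mutation).

```python
# Hedged sketch: evolutionary fitting of a noisy synthetic "lightcurve".
# The Gaussian bump is a stand-in for a real (binary-lens) magnification curve.
import math
import random

def model(t, t0, width, amp):
    # baseline flux 1.0 plus a single magnification bump
    return 1.0 + amp * math.exp(-((t - t0) / width) ** 2)

random.seed(1)
times = [i / 10 for i in range(100)]
true_params = (5.0, 1.5, 2.0)
data = [model(t, *true_params) + random.gauss(0, 0.01) for t in times]

def chi2(params):
    # sum of squared residuals against the noisy data
    return sum((model(t, *params) - d) ** 2 for t, d in zip(times, data))

def mutate(p, scale=0.1):
    return tuple(x + random.gauss(0, scale) for x in p)

def crossover(a, b):
    # pick each parameter from one of the two parents
    return tuple(random.choice(pair) for pair in zip(a, b))

# random initial population of (t0, width, amp) triples
pop = [(random.uniform(0, 10), random.uniform(0.1, 5), random.uniform(0, 5))
       for _ in range(50)]

for gen in range(100):
    pop.sort(key=chi2)
    elite = pop[:10]                                  # keep the best fits
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(40)]
    pop = elite + children

best = min(pop, key=chi2)
print("best-fit params:", best)
```

Each fitness evaluation here is trivial; in the real problem each evaluation requires solving the lens equation, which is why the paper's emphasis on trivial parallelisation matters.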
Machine Learning in Automated Text Categorization
The automated categorization (or classification) of texts into predefined
categories has witnessed a booming interest in the last ten years, due to the
increased availability of documents in digital form and the ensuing need to
organize them. In the research community the dominant approach to this problem
is based on machine learning techniques: a general inductive process
automatically builds a classifier by learning, from a set of preclassified
documents, the characteristics of the categories. The advantages of this
approach over the knowledge engineering approach (consisting in the manual
definition of a classifier by domain experts) are a very good effectiveness,
considerable savings in terms of expert manpower, and straightforward
portability to different domains. This survey discusses the main approaches to
text categorization that fall within the machine learning paradigm. We will
discuss in detail issues pertaining to three different problems, namely
document representation, classifier construction, and classifier evaluation. Comment: Accepted for publication in ACM Computing Surveys
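The inductive process the survey describes — building a classifier automatically from preclassified documents — can be shown in a few lines. This is an illustrative sketch with a hypothetical toy corpus, using TF–IDF vectors for document representation and logistic regression for classifier construction.

```python
# Hedged sketch: learning a text categorizer from preclassified documents.
# The four-document corpus is an invented stand-in.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_docs = [
    "the striker scored a late goal",           # sports
    "midfield pressing won the match",          # sports
    "the central bank raised interest rates",   # finance
    "stocks fell as bond yields climbed",       # finance
]
train_labels = ["sports", "sports", "finance", "finance"]

# document representation (TF-IDF) + classifier construction (logistic regression)
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_docs, train_labels)

print(clf.predict(["the goalkeeper saved the match"]))
```

Classifier evaluation, the survey's third problem, would then proceed by scoring such predictions on a held-out set of preclassified documents.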
Development and validation of the social emotional competence questionnaire (SECQ)
Reliable and valid measures of children’s and adolescents’ social emotional
competence (SEC) need to be developed in order to assess their social
emotional development and to provide appropriate interventions in child and
adolescent development. A pool of 25 items was created for the Social
Emotional Competence Questionnaire (SECQ) that represented five dimensions
of SEC: self-awareness, social awareness, self-management, relationship
management and responsible decision-making. A series of four studies is
reported relating to the development and validation of the measure.
Confirmatory factor analyses of the responses of 444 fourth-graders showed an
acceptable fit of the model. The model was replicated with another 356
secondary school students. Additional studies revealed good internal
consistency. The significant correlations among the five SEC components and
academic performance provided evidence for the predictive validity of the
instrument. With multiple samples, these results showed that the scale holds
promise as a reliable, valid measure of SEC.
Model-Based Problem Solving through Symbolic Regression via Pareto Genetic Programming.
Pareto genetic programming methodology is extended by additional generic model selection and generation strategies that (1) drive the modeling engine to creation of models of reduced non-linearity and increased generalization capabilities, and (2) improve the effectiveness of the search for robust models by goal softening and adaptive fitness evaluations. In addition to the new strategies for model development and model selection, this dissertation presents a new approach for analysis, ranking, and compression of given multi-dimensional input-response data for the purpose of balancing the information content of undesigned data sets.
Proceedings of the ECAI International Workshop on Neural-Symbolic Learning and Reasoning (NeSy 2006)
Leo: Lagrange Elementary Optimization
Global optimization problems are frequently solved using the practical and
efficient methods of evolutionary computation, but as the problem grows more
complex, efficiency and scalability suffer. The purpose of this research is
therefore to introduce the Lagrange Elementary Optimization (Leo), a
self-adaptive evolutionary method inspired by the remarkable accuracy of
vaccinations based on the albumin quotient of human blood. The algorithm
develops intelligent agents using their fitness function values after gene
crossing; these genes direct the search agents during both exploration and
exploitation. This paper presents the main objective of the Leo algorithm
along with the inspiration and motivation for the concept. To demonstrate its
precision, the proposed algorithm is validated against a variety of test
functions, including 19 traditional benchmark functions and the CECC06 2019
test functions. The results of Leo on the 19 classic benchmark functions are
evaluated against DA, PSO, and GA separately, and two other recent
algorithms, FDO and LPB, are also included in the evaluation. In addition,
Leo is tested on ten CECC06 2019 functions against the DA, WOA, SSA, FDO,
LPB, and FOX algorithms. The cumulative outcomes demonstrate Leo's capacity
to improve the initial population and move toward the global optimum.
Different standard measurements are used to verify and prove the stability of
Leo in both the exploration and exploitation phases, and statistical analysis
supports the findings of the proposed research. Finally, novel applications
in the real world are introduced to demonstrate the practicality of
Leo. Comment: 28 pages
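The benchmark methodology the abstract relies on can be sketched generically. Leo itself is not reproduced here; this is a hypothetical harness of the kind used to compare metaheuristics on classic test functions (sphere and Rastrigin shown), with plain random search standing in as a baseline optimizer.

```python
# Hedged sketch: evaluating an optimizer on classic benchmark functions.
# Random search is only a baseline stand-in, not the Leo algorithm.
import math
import random

def sphere(x):
    # unimodal benchmark; global minimum 0 at the origin
    return sum(v * v for v in x)

def rastrigin(x):
    # highly multimodal benchmark; global minimum 0 at the origin
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

def random_search(f, dim=5, bounds=(-5.12, 5.12), evals=20000, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(evals):
        x = [rng.uniform(*bounds) for _ in range(dim)]
        fx = f(x)
        if best is None or fx < best:
            best = fx
    return best

for name, f in [("sphere", sphere), ("rastrigin", rastrigin)]:
    print(f"{name}: best found = {random_search(f):.4f}")
```

A real comparison, as in the paper, would run each algorithm many times per function and report mean, standard deviation, and statistical significance rather than a single best value.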