Enhanced Version of Multi-algorithm Genetically Adaptive for Multiobjective optimization
Abstract: Multi-objective evolutionary algorithms (MOEAs) are well-established population-based techniques for solving various search and optimization problems. MOEAs employ different evolutionary operators to evolve populations of solutions, approximating the set of optimal solutions of the problem at hand in a single simulation run. Different evolutionary operators suit different problems, and the use of multiple operators with a self-adaptive capability can further improve the performance of existing MOEAs. This paper suggests an enhanced version of a genetically adaptive multi-algorithm for multi-objective optimization (AMALGAM), which includes differential evolution (DE), particle swarm optimization (PSO), simulated binary crossover (SBX), the Pareto archived evolution strategy (PAES), and simplex crossover (SPX) for population evolution during the course of optimization. We examine the performance of this enhanced version of AMALGAM experimentally over two test suites: the ZDT test problems and the test instances designed for the special session on MOEA competition at the 2009 IEEE Congress on Evolutionary Computation (CEC'09). The suggested algorithm found better approximate solutions on most test problems in terms of the inverted generational distance (IGD) metric.
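The IGD indicator used as the quality metric above has a compact standard definition: the mean Euclidean distance from each point of a reference Pareto front to its nearest obtained solution. A minimal NumPy sketch (the toy fronts are invented purely for illustration):

```python
import numpy as np

def igd(reference_front, obtained_set):
    """Inverted generational distance: mean distance from each reference
    Pareto-front point to its nearest obtained solution. Lower is better,
    rewarding both convergence and spread of the approximation set."""
    ref = np.asarray(reference_front, dtype=float)
    obt = np.asarray(obtained_set, dtype=float)
    # pairwise distance matrix of shape (|ref|, |obt|)
    d = np.linalg.norm(ref[:, None, :] - obt[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

# toy bi-objective reference front and two candidate approximations
front = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
close = np.array([[0.1, 1.0], [0.5, 0.6], [1.0, 0.1]])
far   = np.array([[0.5, 1.5], [1.5, 0.5]])
```

A set that hugs the front (`close`) scores lower than a distant one (`far`), which is the sense in which "better approximate solutions in terms of IGD" is meant.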
Bayesian belief networks for dementia diagnosis and other applications: a comparison of hand-crafting and construction using a novel data driven technique
The Bayesian network (BN) formalism is a powerful representation for
encoding domains characterised by uncertainty. However, before it
can be used it must first be constructed, which is a major challenge
for any real-life problem. There are two broad approaches, namely
the hand-crafted approach, which relies on a human expert, and the
data-driven approach, which relies on data. The former approach is
useful; however, issues such as human bias can introduce errors into
the model. We have conducted a literature review of the
expert-driven approach, selected a number of common methods, and
engineered a framework to assist non-BN experts with
expert-driven construction of BNs. The latter construction approach
uses algorithms to construct the model from a data set. However,
construction from data is provably NP-hard.
To solve this problem, approximate, heuristic algorithms have been
proposed; in particular, algorithms that assume an order between the
nodes, therefore reducing the search space. However, traditionally,
this approach relies on an expert providing the order among the
variables
--- an expert may not always be available, or may be unable to
provide the order. Nevertheless, if a good order is available, these
order-based algorithms have demonstrated good performance. More
recent approaches attempt to ``learn'' a good order then use the
order-based algorithm to discover the structure. To eliminate the
need for order information during construction, we propose a search
in the entire space of Bayesian network structures --- we present a
novel approach for carrying out this task, and we demonstrate its
performance against existing algorithms that search in the entire
space and the space of orders.
Finally, we employ the hand-crafting framework to construct models
for the task of diagnosis in a ``real-life'' medical domain,
dementia diagnosis. We collect real dementia data from clinical
practice, and we apply the data-driven algorithms developed to
assess the concordance between the reference models developed by
hand and the models derived from real clinical data.
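The order-based idea described above can be sketched in a few lines. The following is a minimal K2-style greedy search with a BIC score on discrete data, an assumption chosen for illustration rather than the thesis's actual algorithm; the function names and toy example are hypothetical:

```python
import math
import random
from collections import Counter

def family_loglik(data, child, parents):
    """Log-likelihood contribution of one node given its parent set
    (data: list of tuples of small non-negative ints, one per variable)."""
    counts = Counter((tuple(row[p] for p in parents), row[child]) for row in data)
    parent_counts = Counter(tuple(row[p] for p in parents) for row in data)
    return sum(n * math.log(n / parent_counts[pa]) for (pa, _), n in counts.items())

def bic_family(data, child, parents, arities):
    """BIC score of one family: log-likelihood minus a complexity penalty."""
    k = (arities[child] - 1) * math.prod(arities[p] for p in parents)
    return family_loglik(data, child, parents) - 0.5 * k * math.log(len(data))

def learn_structure(data, order, arities, max_parents=2):
    """K2-style greedy search: under a fixed node order, each node may only
    draw parents from its predecessors, so the graph is acyclic by
    construction -- this is how an order shrinks the search space."""
    parents = {v: [] for v in order}
    for i, v in enumerate(order):
        best = bic_family(data, v, parents[v], arities)
        improved = True
        while improved and len(parents[v]) < max_parents:
            improved = False
            for c in order[:i]:          # predecessors only
                if c in parents[v]:
                    continue
                s = bic_family(data, v, parents[v] + [c], arities)
                if s > best:
                    best, chosen, improved = s, c, True
            if improved:
                parents[v].append(chosen)
    return parents

# toy data: X1 is a copy of X0, X2 is independent noise
random.seed(0)
data = []
for _ in range(200):
    x0 = random.randint(0, 1)
    data.append((x0, x0, random.randint(0, 1)))
g = learn_structure(data, order=[0, 1, 2], arities={0: 2, 1: 2, 2: 2})
```

On this toy data the search recovers X0 as X1's parent. Searching the entire space of structures, as the thesis proposes, removes the dependence on `order` but must then handle cycle avoidance explicitly.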
Platelet Diagnostics: A novel liquid biomarker
The aim of this thesis is to find a novel liquid biomarker for the detection of cancer and for treatment optimization. The first chapter gives an introduction to the oncology biomarker field and focuses on platelets and their role in cancer. In part 1, we evaluate extracellular vesicles (EVs). EVs are small vesicles released by all types of cells, including tumor cells, into the circulation. They carry protein kinases and can be isolated from plasma. We demonstrate that AKT and ERK kinase protein levels in EVs reflect the cellular expression levels, and that treatment with kinase inhibitors alters their concentration, depending on the clinical response to the drug. EVs may therefore provide a promising biosource of biomarkers for the monitoring of treatment responses. Part 2 starts with reviews describing the function and role of platelets in greater depth. Chapter 3 focuses on thrombocytogenesis and several biological processes in which platelets play a role; furthermore, the RNA-processing machineries harboured by platelets are discussed. Both chapters 3 and 4 evaluate the changes platelets undergo after being exposed to a tumor and its environment. The exchange of biomolecules with tumor cells results in educated platelets, so-called tumor-educated platelets (TEPs). TEPs play a role in several hallmarks of cancer and have the ability to respond to systemic alterations, making them an interesting biomarker. In chapter 5 the diagnostic potential of platelets is discussed. We determine their potential by sequencing the RNA of 283 platelet samples, of which 228 are from patients with cancer and 55 from healthy controls, reaching an accuracy of 96%. Furthermore, we are able to pinpoint the location of the primary tumor with an accuracy of 71%. In part 3, our developed thromboSeq platform is taken to the next level, and several potential confounding factors, such as age and comorbidity, are taken into account.
We show that particle swarm optimization (PSO)-enhanced algorithms enable efficient selection of RNA biomarker panels. In a validation cohort we apply these algorithms to non-small-cell lung cancer and reach an accuracy of 88% in late-stage (n=518) and 81% in early-stage disease. Finally, in chapter 7 we describe our wet- and dry-lab protocols in detail. This includes platelet RNA isolation, mRNA amplification, and preparation for next-generation sequencing. The dry-lab protocol describes the automated pre-processing of FASTQ files to quantified gene counts, quality controls, data normalization and correction, and swarm-intelligence-enhanced support vector machine (SVM) algorithm development. Part 4 focuses on central nervous system (CNS) malignancies, especially glioblastoma. Chapter 8 gives an overview of the different liquid biomarkers for diffuse glioma, the most common primary CNS malignancy. In chapter 9 we assess the specificity of platelet education by glioblastoma by comparing the RNA profile of TEPs from glioblastoma patients with those of patients with a neuroinflammatory disease or brain metastases, resulting in a detection accuracy of 80%. Secondly, analysis of patients with glioblastoma versus healthy controls in an independent validation series provides a detection accuracy of 95%. Furthermore, we describe the potential value of platelets as a monitoring biomarker for patients with glioma, distinguishing pseudoprogression from real tumor progression. In part 5 thromboSeq is applied to breast cancer diagnostics, both as a screening tool in the general population and in a high-risk population, women with BRCA mutations. In chapter 11 we apply our technique for the first time to an inflammatory condition, multiple sclerosis (MS): platelet RNA is used as input for the development of a diagnostic MS classifier capable of detecting MS with 80% accuracy in the independent validation series.
In the final part we conclude this thesis with a general discussion of the main findings and suggestions for future research.
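As a rough illustration of how swarm intelligence can drive biomarker-panel selection, the sketch below runs a binary PSO over gene-inclusion masks. A nearest-centroid classifier stands in for the SVM described in the thesis, and the data, seed, and all parameters are synthetic assumptions, not thromboSeq's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def centroid_accuracy(X, y, mask):
    """Panel score: training accuracy of a nearest-centroid classifier on
    the selected columns (a lightweight stand-in for an SVM)."""
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    c0, c1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return float((pred == y).mean())

def pso_select(X, y, n_particles=20, iters=40, w=0.7, c1=1.5, c2=1.5):
    """Binary PSO over inclusion masks: a sigmoid of each particle's
    real-valued velocity gives the probability of including each gene."""
    n_genes = X.shape[1]
    pos = rng.random((n_particles, n_genes)) < 0.1   # start with sparse panels
    vel = rng.normal(0.0, 0.1, (n_particles, n_genes))
    pbest = pos.copy()
    pbest_score = np.array([centroid_accuracy(X, y, p) for p in pos])
    gbest = pbest[pbest_score.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, n_genes))
        vel = (w * vel
               + c1 * r1 * (pbest.astype(float) - pos)
               + c2 * r2 * (gbest.astype(float) - pos))
        pos = rng.random((n_particles, n_genes)) < 1.0 / (1.0 + np.exp(-vel))
        scores = np.array([centroid_accuracy(X, y, p) for p in pos])
        better = scores > pbest_score
        pbest[better], pbest_score[better] = pos[better], scores[better]
        gbest = pbest[pbest_score.argmax()].copy()
    return gbest

# synthetic "expression" data: only genes 0 and 1 carry the class signal
n_samples = 120
y = np.repeat([0, 1], n_samples // 2)
X = rng.normal(0.0, 1.0, (n_samples, 30))
X[y == 1, :2] += 3.0
panel = pso_select(X, y)
```

The swarm converges on masks that include the informative genes; in a real setting the score function would be cross-validated SVM performance rather than training accuracy.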
Investigating hybrids of evolution and learning for real-parameter optimization
In recent years, more and more advanced techniques have been developed in the field
of hybridizing evolution and learning, which means that more applications can benefit
from this progress. One example of these advanced techniques is the Learnable
Evolution Model (LEM), which adopts learning as a guide for the general evolutionary
search. Despite this trend and the progress in LEM, many ideas and approaches still
deserve further investigation and testing. For this purpose, this thesis develops a
number of new algorithms that combine further learning algorithms with evolution in
different ways. With these developments, we aim to understand the effects of and
relations between evolution and learning, and also to achieve better performance in
solving complex problems.
The machine learning algorithms combined into the standard Genetic Algorithm (GA)
are the supervised learning method k-nearest-neighbors (KNN), the Entropy-Based Discretization
(ED) method, and the decision tree learning algorithm ID3. We test these algorithms
on various real-parameter function optimization problems, especially the functions
in the special session on CEC 2005 real-parameter function optimization. Additionally, a
medical cancer chemotherapy treatment problem is solved in this thesis by some of our
hybrid algorithms.
The performances of these algorithms are compared with standard genetic algorithms
and other well-known contemporary evolution-and-learning hybrid algorithms, including
the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) and variants of
Estimation of Distribution Algorithms (EDAs).
Several important results emerge from our experiments with these algorithms. Among
them, we found that even very simple learning methods, when hybridized properly with
the evolutionary procedure, can provide significant performance improvements; and
when more complex learning algorithms are incorporated, the resulting algorithms are
very promising and compete well against state-of-the-art hybrid algorithms, both on
well-defined real-parameter function optimization problems and on a practical,
evaluation-expensive problem.
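One common way to hybridize a GA with KNN, sketched below as a general illustration rather than the thesis's specific algorithms, is to use a KNN regressor over previously evaluated points as a surrogate that pre-screens offspring, spending true (possibly expensive) evaluations only on the candidates the surrogate favors. All names and parameters here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    """Classic real-parameter benchmark: minimum 0 at the origin."""
    return float((x ** 2).sum())

def knn_predict(archive_x, archive_f, x, k=5):
    """Surrogate fitness of x: mean fitness of its k nearest evaluated points."""
    d = np.linalg.norm(archive_x - x, axis=1)
    return archive_f[np.argsort(d)[:k]].mean()

def knn_ga(f, dim=5, pop_size=20, gens=60, screen=4, sigma=0.3):
    pop = rng.uniform(-5, 5, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    ax, af = pop.copy(), fit.copy()          # archive of all true evaluations
    for _ in range(gens):
        children = []
        for _ in range(pop_size):
            # binary tournament selection of two parents
            i, j = rng.integers(pop_size, size=2)
            a = pop[i] if fit[i] < fit[j] else pop[j]
            i, j = rng.integers(pop_size, size=2)
            b = pop[i] if fit[i] < fit[j] else pop[j]
            # KNN screening: propose several mutants of the midpoint,
            # truly evaluate only the one the surrogate likes best
            cands = [(a + b) / 2 + rng.normal(0, sigma, dim) for _ in range(screen)]
            children.append(min(cands, key=lambda c: knn_predict(ax, af, c)))
        cf = np.array([f(c) for c in children])
        ax = np.vstack([ax, children])
        af = np.concatenate([af, cf])
        # (mu + lambda) survivor selection keeps the best of parents + children
        allx = np.vstack([pop, children])
        allf = np.concatenate([fit, cf])
        keep = np.argsort(allf)[:pop_size]
        pop, fit = allx[keep], allf[keep]
    return pop[0], fit[0]

best_x, best_f = knn_ga(sphere)
```

The screening step is where the hybrid saves evaluations: `screen` candidates are generated per offspring slot, but only one true call to `f` is made, which matters most on evaluation-expensive problems like the chemotherapy treatment task mentioned above.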
High-Performance Modelling and Simulation for Big Data Applications
This open access book was prepared as a Final Publication of the COST Action IC1406 "High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)" project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. As their level of abstraction rises to afford better discernment of the domain at hand, their representations become increasingly demanding of computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication, and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.