Explicit memory schemes for evolutionary algorithms in dynamic environments
Copyright © 2007 Springer-Verlag. Problem optimization in dynamic environments has attracted growing interest from the evolutionary computation community in recent years due to its importance in real-world optimization problems. Several approaches have been developed to enhance the performance of evolutionary algorithms for dynamic optimization problems, of which the memory scheme is a major one. This chapter investigates the application of explicit memory schemes for evolutionary algorithms in dynamic environments. Two kinds of explicit memory scheme, direct memory and associative memory, are studied within two classes of evolutionary algorithms, genetic algorithms and univariate marginal distribution algorithms, for dynamic optimization problems. Based on a series of systematically constructed dynamic test environments, experiments are carried out to investigate these explicit memory schemes, and the performance of the direct and associative memory schemes is compared and analysed. The experimental results show the efficiency of the memory schemes for evolutionary algorithms in dynamic environments, especially when the environment changes cyclically. The results also indicate that the effect of the memory schemes depends not only on the dynamic problems and dynamic environments but also on the evolutionary algorithm used.
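The direct memory scheme described above can be sketched in a few lines: keep an explicit archive of the best solution found in each environment, and re-insert the archived solutions into the population when the environment changes. The problem (a OneMax variant whose target pattern flips), the parameters, and the memory-update rule below are illustrative assumptions, not the chapter's actual experimental setup.

```python
import random

N_BITS, POP, MEMORY_SIZE = 20, 30, 5

def fitness(ind, target):
    # OneMax variant: count positions matching the current target pattern
    return sum(1 for a, b in zip(ind, target) if a == b)

def evolve(generations=60, change_every=20, seed=0):
    rng = random.Random(seed)
    target = [1] * N_BITS
    pop = [[rng.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
    memory = []          # explicit archive of good past solutions
    best = pop[0]
    for gen in range(generations):
        if gen > 0 and gen % change_every == 0:
            target = [1 - b for b in target]            # environment changes
            pop[:len(memory)] = [m[:] for m in memory]  # retrieve memory
        pop.sort(key=lambda ind: fitness(ind, target), reverse=True)
        best = pop[0][:]
        if best not in memory:                 # archive the current best
            memory = ([best] + memory)[:MEMORY_SIZE]
        # truncation selection plus bit-flip mutation
        parents = pop[:POP // 2]
        pop = [[b ^ (rng.random() < 0.05) for b in rng.choice(parents)]
               for _ in range(POP)]
    return fitness(best, target), memory

best_fit, memory = evolve()
```

Because the environment here changes cyclically (the target simply flips back and forth), the archived solutions remain useful after each change, which is the setting in which the chapter reports memory schemes performing best.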
Is population structure sufficient to generate area-level inequalities in influenza rates? An examination using agent-based models
Background: In New Haven County, CT (NHC), influenza hospitalization rates have been shown to increase with census tract poverty in multiple influenza seasons. Though multiple factors have been hypothesized to cause these inequalities, including population structure, differential vaccine uptake, and differential access to healthcare, the impact of each in generating the observed inequalities remains unknown. If we quantify the proportion of observed inequalities that hypothesized factors can generate, we can design interventions targeting the factors with the greatest explanatory power. Here, we ask whether population structure is sufficient to generate the observed area-level inequalities in NHC. To our knowledge, this is the first use of simulation models to examine the causes of differential poverty-related influenza rates. Methods: Using agent-based models with a census-informed, realistic representation of household size, age structure, and population density in NHC census tracts, and of contact rates in workplaces, schools, households, and neighborhoods, we measured poverty-related differential influenza attack rates over the course of an epidemic with a 23% overall clinical attack rate. We examined the role of asthma prevalence rates, as well as individual contact rates and infection susceptibility, in generating the observed area-level influenza inequalities. Results: Simulated attack rates (AR) among adults increased with census tract poverty level (F = 30.5; P < 0.001) in an epidemic caused by a virus similar to A(H1N1)pdm09. We detected a steeper, earlier influenza rate increase in high-poverty census tracts, a finding that we corroborate with a temporal analysis of NHC surveillance data during the 2009 H1N1 pandemic. The ratio of the simulated adult AR in the highest- to lowest-poverty tracts was 33% of the ratio observed in surveillance data. Increasing individual contact rates in the neighborhood did not increase simulated area-level inequalities. When we modified individual susceptibility so that it was inversely proportional to household income, inequalities in AR between high- and low-poverty census tracts were comparable to those observed in reality. Discussion: To our knowledge, this is the first study to use simulations to probe the causes of observed inequalities in influenza disease patterns. Knowledge of the causes and their relative explanatory power will allow us to design interventions that have the greatest impact on reducing inequalities. Conclusion: Differential exposure due to population structure in our realistic simulation model explains a third of the observed inequality. Differential susceptibility to disease due to prevailing chronic conditions, vaccine uptake, and smoking should be considered in future models in order to quantify the role of additional factors in generating influenza inequalities.
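The susceptibility experiment lends itself to a compact sketch: a toy agent-based epidemic in which each agent's probability of infection on contact scales inversely with household income. The income levels, the well-mixed contact process, and the recovery rate below are invented for illustration and are far cruder than the census-informed model used in the study.

```python
import random

def simulate(n=2000, days=100, contacts_per_day=8, base_p=0.03, seed=1):
    rng = random.Random(seed)
    # three illustrative income strata; susceptibility inverse to income
    income = [rng.choice([20_000, 40_000, 80_000]) for _ in range(n)]
    ref = 40_000
    susceptibility = [ref / inc for inc in income]
    state = ["S"] * n
    for i in rng.sample(range(n), 10):   # seed infections
        state[i] = "I"
    for _ in range(days):
        infected = [i for i in range(n) if state[i] == "I"]
        for i in infected:
            # well-mixed contacts (no household/workplace structure here)
            for _ in range(contacts_per_day):
                j = rng.randrange(n)
                if state[j] == "S" and rng.random() < base_p * susceptibility[j]:
                    state[j] = "I"
        for i in infected:               # crude recovery, 0.2 per day
            if rng.random() < 0.2:
                state[i] = "R"
    def attack_rate(group):
        return sum(1 for i in group if state[i] != "S") / len(group)
    low = [i for i in range(n) if income[i] == 20_000]
    high = [i for i in range(n) if income[i] == 80_000]
    return attack_rate(low), attack_rate(high)

low_ar, high_ar = simulate()
```

Even this stripped-down model reproduces the qualitative pattern the abstract describes: when susceptibility is inversely proportional to income, the low-income group experiences a higher attack rate.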
Adaptive mutation using statistics mechanism for genetic algorithms
Copyright © 2004 Springer-Verlag
Evolutionary approaches to signal decomposition in an application service management system
The increased demand for autonomous control in enterprise information systems has generated interest in efficient global search methods for multivariate datasets, in order to search for original elements in time-series patterns and to build causal models of system interactions, utilization dependencies, and performance characteristics. In this context, activity signal deconvolution is a necessary step towards effective adaptive control in Application Service Management. The paper investigates the potential of population-based metaheuristic algorithms, particularly variants of particle swarm optimization, genetic algorithms, and differential evolution, for activity signal deconvolution when the application performance model is unknown a priori. In our approach, the Application Service Management system is treated as a black or grey box, and activity signal deconvolution is formulated as a search problem that decomposes time series outlining the relations between action signals and the utilization and execution time of resources. Experiments are conducted using a queue-based computing system model as a test-bed under different load conditions and search configurations. Special attention is paid to high-dimensional scenarios, testing whether large-scale multivariate data analyses can obtain a near-optimal signal decomposition solution in a short time. The experimental results reveal the benefits, qualities, and drawbacks of the various metaheuristic strategies selected for a given signal deconvolution problem, and confirm the potential of evolutionary-type search to effectively explore the search space even in high-dimensional cases. The approach and the algorithms investigated can be useful in supporting human administrators, or in enhancing the effectiveness of feature extraction schemes that feed the decision blocks of autonomous controllers.
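The search formulation in this abstract can be illustrated with a minimal differential-evolution sketch, one of the metaheuristic families the paper compares: the observed utilization series is modelled as a weighted sum of known activity signals, and DE searches for the mixing weights that minimize the squared reconstruction error. The synthetic signals, the true weights, and the DE parameters below are assumptions for illustration, not the paper's test-bed.

```python
import random

def de_decompose(signals, observed, pop_size=30, gens=200, F=0.7, CR=0.9, seed=2):
    """DE/rand/1 search for mixing weights w minimizing ||sum_k w_k s_k - observed||^2."""
    rng = random.Random(seed)
    dim = len(signals)

    def error(w):
        return sum((sum(wk * s[t] for wk, s in zip(w, signals)) - observed[t]) ** 2
                   for t in range(len(observed)))

    pop = [[rng.uniform(0, 2) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            # pick three distinct individuals other than the target
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            # mutation + crossover: each component comes from a + F*(b - c)
            # with probability CR, otherwise from the target vector
            trial = [a[k] + F * (b[k] - c[k]) if rng.random() < CR else pop[i][k]
                     for k in range(dim)]
            if error(trial) < error(pop[i]):   # greedy selection
                pop[i] = trial
    return min(pop, key=error)

# synthetic example: observed series mixes two activity signals as 1.5*s1 + 0.5*s2
s1 = [1, 0, 1, 0, 1, 0, 1, 0]
s2 = [0, 1, 1, 0, 0, 1, 1, 0]
observed = [1.5 * a + 0.5 * b for a, b in zip(s1, s2)]
weights = de_decompose([s1, s2], observed)
```

The same black-box formulation carries over to the other metaheuristics in the paper: only the population update rule changes, while the reconstruction-error objective stays fixed.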
What makes a problem hard for a genetic algorithm? Some anomalous results and their explanation
What makes a problem easy or hard for a genetic algorithm (GA)? This question has become increasingly important as people have tried to apply the GA to ever more diverse types of problems. Much previous work on this question has studied the relationship between GA performance and the structure of a given fitness function when it is expressed as a Walsh polynomial. The work of Bethke, Goldberg, and others has produced certain theoretical results about this relationship. In this article we review these theoretical results, and then discuss a number of seemingly anomalous experimental results reported by Tanese concerning the performance of the GA on a subclass of Walsh polynomials, some members of which were expected to be easy for the GA to optimize. Tanese found that the GA was poor at optimizing all functions in this subclass, that partitioning a single large population into a number of smaller independent populations seemed to improve performance, and that hill climbing outperformed both the original and partitioned forms of the GA on these functions. These results seemed to contradict several commonly held expectations about GAs.
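As background to the discussion above, a fitness function expressed as a Walsh polynomial is straightforward to evaluate: for a bit string x and a partition index j (both length-L bit vectors), the Walsh function psi_j(x) is +1 or -1 according to the parity of the bits x shares with j, and the fitness is a weighted sum of Walsh functions. The coefficients below are illustrative only; they are not Tanese's actual functions.

```python
def walsh(j, x):
    # psi_j(x) = (-1)^(number of positions where both j and x have a 1)
    return -1 if bin(j & x).count("1") % 2 else 1

def fitness(x, coeffs):
    # coeffs maps partition index j -> Walsh coefficient w_j
    return sum(w * walsh(j, x) for j, w in coeffs.items())

# example: f(x) = 2*psi_0(x) + 1*psi_3(x) over 2-bit strings
coeffs = {0b00: 2.0, 0b11: 1.0}
values = {x: fitness(x, coeffs) for x in range(4)}
# → {0: 3.0, 1: 1.0, 2: 1.0, 3: 3.0}
```

Since any fitness function over bit strings can be written in this basis, the Walsh coefficients give a compact handle on how much each bit partition contributes, which is what the theoretical results reviewed here exploit.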
Galactic and Extragalactic Samples of Supernova Remnants: How They Are Identified and What They Tell Us
Supernova remnants (SNRs) arise from the interaction between the ejecta of a supernova (SN) explosion and the surrounding circumstellar and interstellar medium. Some SNRs, mostly nearby ones, can be studied in great detail. However, to understand SNRs as a whole, large samples of SNRs must be assembled and studied. Here, we describe the radio, optical, and X-ray techniques which have been used to identify and characterize almost 300 Galactic SNRs and more than 1200 extragalactic SNRs. We then discuss which types of SNRs are being found and which are not. We examine the degree to which the luminosity functions, surface-brightness distributions, and multi-wavelength comparisons of the samples can be interpreted to determine the class properties of SNRs, and describe efforts to establish the type of SN explosion associated with a SNR. We conclude that, in order to better understand the class properties of SNRs, it is more important to study (and obtain additional data on) the SNRs in galaxies with extant samples at multiple wavelength bands than it is to obtain samples of SNRs in other galaxies. Comment: Final 2016 draft of a chapter in "Handbook of Supernovae", edited by Athem W. Alsabti and Paul Murdin. Final version available at https://doi.org/10.1007/978-3-319-20794-0_90-
Facilitating the development of controlled vocabularies for metabolomics technologies with text mining
BACKGROUND: Many bioinformatics applications rely on controlled vocabularies or ontologies to consistently interpret and seamlessly integrate information scattered across public resources. Experimental data sets from metabolomics studies need to be integrated with one another, but also with data produced by other types of omics studies in the spirit of systems biology, hence the pressing need for vocabularies and ontologies in metabolomics. However, it is time-consuming and non-trivial to construct these resources manually. RESULTS: We describe a methodology for the rapid development of controlled vocabularies, a study originally motivated by the need for vocabularies describing metabolomics technologies. We present case studies involving two controlled vocabularies (for nuclear magnetic resonance spectroscopy and gas chromatography) whose development is currently underway as part of the Metabolomics Standards Initiative. The initial vocabularies were compiled manually, providing a total of 243 and 152 terms. A total of 5,699 and 2,612 new terms were acquired automatically from the literature. The analysis of the results showed that full-text articles (especially the Materials and Methods sections) are the major source of technology-specific terms, as opposed to paper abstracts. CONCLUSIONS: We suggest a text mining method for efficient corpus-based term acquisition as a way of rapidly expanding a set of controlled vocabularies with the terms used in the scientific literature. We adopted an integrative approach, combining relatively generic software and data resources for time- and cost-effective development of a text mining tool for the expansion of controlled vocabularies across various domains, as a practical alternative to both manual term collection and tailor-made named entity recognition methods.
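The corpus-based term acquisition idea can be sketched with a simple frequency-ratio heuristic: a word becomes a candidate term if it is markedly more frequent in the domain corpus (e.g. Materials and Methods sections) than in a background corpus. The tiny corpora, the add-one smoothing, and the thresholds below are illustrative assumptions, not the paper's actual method.

```python
from collections import Counter

def acquire_terms(domain_docs, background_docs, min_ratio=2.0, min_count=2):
    """Return words whose relative frequency in the domain corpus exceeds
    their (smoothed) relative frequency in the background corpus by min_ratio."""
    dom = Counter(w for d in domain_docs for w in d.lower().split())
    bg = Counter(w for d in background_docs for w in d.lower().split())
    total_d, total_b = sum(dom.values()), sum(bg.values())
    terms = []
    for w, c in dom.items():
        if c < min_count:          # ignore rare words
            continue
        rel_d = c / total_d
        rel_b = (bg[w] + 1) / (total_b + 1)   # add-one smoothing
        if rel_d / rel_b >= min_ratio:
            terms.append(w)
    return sorted(terms)

# toy domain (method-section-like) and background corpora
domain = ["NMR spectra acquired on a spectrometer",
          "spectrometer calibration for NMR spectra"]
background = ["the study was conducted over two years",
              "results were analysed and reported"]
terms = acquire_terms(domain, background)
# → ['nmr', 'spectra', 'spectrometer']
```

In practice the candidates would then be reviewed by curators before entering a controlled vocabulary, since a frequency ratio alone cannot distinguish technology-specific terms from domain-typical but generic words.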
Application of geographic information systems and simulation modelling to dental public health: Where next?
Public health research in dentistry has used geographic information systems since the 1960s. Since then, the methods used in the field have matured, moving beyond simple spatial associations to the use of complex spatial statistics and, on occasion, simulation modelling. Many analyses remain descriptive in nature, however, and the use of more advanced spatial simulation methods within dental public health remains rare, despite the potential they offer the field. This review introduces a new approach to the geographical analysis of oral health outcomes in neighbourhoods and small-area geographies through two novel simulation methods: spatial microsimulation and agent-based modelling. Spatial microsimulation is a population synthesis technique used to combine survey data with Census population totals to create representative individual-level population datasets, allowing for the use of individual-level data previously unavailable at small spatial scales. Agent-based models are computer simulations capable of capturing interactions and feedback mechanisms, both of which are key to understanding health outcomes. Because of these dynamic and interactive processes, the method has an advantage over traditional statistical techniques such as regression analysis, which often isolate elements from each other when testing for statistical significance. This article discusses the current state of spatial analysis within the dental public health field, before reviewing each of the methods, their applications, and their advantages and limitations. Directions and topics for future research are also discussed, before addressing the potential to combine the two methods in order to further utilize their advantages. Overall, this review highlights the promise these methods offer, not just for making methodological advances, but also for adding to our ability to test and better understand theoretical concepts and pathways.