
    A robust statistical estimation of the basic parameters of single stellar populations. I. Method

    The colour-magnitude diagrams of resolved single stellar populations, such as open and globular clusters, have provided the best natural laboratories to test stellar evolution theory. Whilst a variety of techniques have been used to infer the basic properties of these simple populations, systematic uncertainties arise from the purely geometrical degeneracy produced by the similar shape of isochrones of different ages and metallicities. Here we present an objective and robust statistical technique which lifts this degeneracy to a great extent through the use of a key observable: the number of stars along the isochrone. Through extensive Monte Carlo simulations we show that, for instance, we can infer the four main parameters (age, metallicity, distance and reddening) in an objective way, along with robust confidence intervals and their full covariance matrix. We show that systematic uncertainties due to field contamination, unresolved binaries, and the initial or present-day stellar mass function are either negligible or well under control. This technique provides, for the first time, a proper way to infer with unprecedented accuracy the fundamental properties of simple stellar populations, in an easy-to-implement algorithm. Comment: 17 pages, 12 figures, MNRAS, in press
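    The core idea, comparing binned star counts in the colour-magnitude diagram against Monte Carlo realisations of a model population over a parameter grid, can be illustrated with a toy sketch. Everything below (the fake isochrone shape, the two-parameter grid, the bin sizes) is a hypothetical stand-in for illustration, not the paper's actual method.

```python
# Toy sketch: binned Poisson-likelihood fit of a CMD over a parameter grid,
# using Monte Carlo sampling of the model. NOT the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)

def toy_isochrone(colour, age, dist_mod):
    """Hypothetical isochrone: magnitude as a simple function of colour."""
    return dist_mod + 4.0 + 2.5 * age * colour  # purely illustrative shape

# Synthetic "observed" cluster: stars scattered around a true isochrone.
true_age, true_dm = 1.0, 10.0
colour = rng.uniform(0.2, 1.2, size=500)
mag = toy_isochrone(colour, true_age, true_dm) + rng.normal(0.0, 0.05, size=500)

# Bin the CMD; the bin counts carry the "number of stars along the isochrone".
c_edges = np.linspace(0.0, 1.5, 31)
m_edges = np.linspace(8.0, 18.0, 61)
obs, _, _ = np.histogram2d(colour, mag, bins=[c_edges, m_edges])

def expected_counts(age, dm, n_stars=500):
    """Model prediction for the binned CMD under the given parameters."""
    c = rng.uniform(0.2, 1.2, size=20000)           # Monte Carlo sampling of the model
    m = toy_isochrone(c, age, dm) + rng.normal(0.0, 0.05, size=c.size)
    h, _, _ = np.histogram2d(c, m, bins=[c_edges, m_edges])
    return n_stars * h / h.sum() + 1e-9             # normalise to expected star counts

def poisson_loglike(obs, exp):
    return np.sum(obs * np.log(exp) - exp)

# Brute-force grid over two of the four parameters, for illustration only.
ages = np.linspace(0.5, 1.5, 21)
dms = np.linspace(9.0, 11.0, 21)
ll = np.array([[poisson_loglike(obs, expected_counts(a, d)) for d in dms] for a in ages])
i, j = np.unravel_index(np.argmax(ll), ll.shape)
print(f"best-fit age={ages[i]:.2f}, distance modulus={dms[j]:.2f}")
```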

    Genetic learning particle swarm optimization

    Social learning in particle swarm optimization (PSO) helps collective efficiency, whereas individual reproduction in genetic algorithms (GA) facilitates global effectiveness. This observation has recently led to hybridizing PSO with GA for performance enhancement. However, existing work uses a mechanistic parallel superposition, and research has shown that constructing superior exemplars in PSO is more effective. Hence, this paper first develops a new framework that organically hybridizes PSO with another optimization technique for “learning.” This leads to a generalized “learning PSO” paradigm, the *L-PSO. The paradigm is composed of two cascading layers, the first for exemplar generation and the second for particle updates as in a normal PSO algorithm. Using genetic evolution to breed promising exemplars for PSO, a specific novel *L-PSO algorithm is proposed in the paper, termed genetic learning PSO (GL-PSO). In particular, genetic operators are used to generate exemplars from which particles learn and, in turn, the historical search information of particles guides the evolution of the exemplars. By performing crossover, mutation, and selection on the historical information of particles, the constructed exemplars are not only well diversified but also of high quality. Under such guidance, both the global search ability and the search efficiency of PSO are enhanced. The proposed GL-PSO is tested on 42 benchmark functions widely adopted in the literature. Experimental results verify the effectiveness, efficiency, robustness, and scalability of GL-PSO.
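    A minimal sketch of the GL-PSO idea follows: exemplars are bred from particles' personal bests with crossover, mutation and selection, and each particle then performs an otherwise standard PSO update towards its exemplar. The specific operators, parameter values and benchmark function below are assumptions for illustration, not the paper's exact algorithm.

```python
# Simplified sketch of genetic-learning PSO: genetic operators breed exemplars from
# personal bests; particles then learn from those exemplars in a standard PSO update.
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):                       # benchmark objective to minimise
    return np.sum(x * x, axis=-1)

n, dim, iters = 30, 10, 200
w, c = 0.7, 1.5                      # inertia and acceleration (assumed values)
pm = 0.1                             # mutation probability (assumed)

x = rng.uniform(-5, 5, (n, dim))
v = np.zeros((n, dim))
pbest, pbest_f = x.copy(), sphere(x)
gbest = pbest[np.argmin(pbest_f)]
exemplar = pbest.copy()

for _ in range(iters):
    # --- exemplar generation by genetic operators on personal bests ---
    for i in range(n):
        mate = pbest[rng.integers(n)]
        mask = rng.random(dim) < 0.5                 # uniform crossover with gbest/mate
        child = np.where(mask, pbest[i], 0.5 * (mate + gbest))
        mut = rng.random(dim) < pm                   # random-reset mutation
        child[mut] = rng.uniform(-5, 5, mut.sum())
        if sphere(child) < sphere(exemplar[i]):      # selection: keep the better exemplar
            exemplar[i] = child
    # --- standard PSO update, learning from the bred exemplars ---
    r = rng.random((n, dim))
    v = w * v + c * r * (exemplar - x)
    x = x + v
    f = sphere(x)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

print("best value found:", pbest_f.min())
```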

    A Survey on Compiler Autotuning using Machine Learning

    Since the mid-1990s, researchers have been trying to use machine-learning-based approaches to solve a number of different compiler optimization problems. These techniques primarily enhance the quality of the obtained results and, more importantly, make it feasible to tackle two main compiler optimization problems: optimization selection (choosing which optimizations to apply) and phase ordering (choosing the order in which to apply them). The compiler optimization space continues to grow due to the advancement of applications, the increasing number of compiler optimizations, and new target architectures. Generic optimization passes in compilers cannot fully leverage newly introduced optimizations and, therefore, cannot keep up with the pace of increasing options. This survey summarizes and classifies the recent advances in using machine learning for compiler optimization, particularly on the two major problems of (1) selecting the best optimizations and (2) the phase ordering of optimizations. The survey highlights the approaches taken so far, the obtained results, the fine-grained classification among different approaches and, finally, the influential papers of the field. Comment: version 5.0 (updated September 2018); preprint version of our accepted journal paper at ACM CSUR 2018 (42 pages). This survey will be updated quarterly here (send me your newly published papers to be added in a subsequent version). History: Received November 2016; Revised August 2017; Revised February 2018; Accepted March 2018
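    As a concrete illustration of optimization selection, the sketch below predicts a flag set for a new program from static features using a nearest-neighbour model; the feature set, training data and flag choices are invented for this example and are not taken from the survey.

```python
# Minimal sketch of ML-driven optimization selection: predict a good compiler flag set
# from static program features with 1-nearest-neighbour. All data here is hypothetical.
import numpy as np

# Training data: rows = programs, columns = [num_loops, avg_loop_depth, num_branches, mem_ops_ratio]
features = np.array([
    [12, 3.0, 40, 0.6],
    [ 2, 1.0,  5, 0.2],
    [30, 4.5, 90, 0.7],
    [ 8, 2.0, 25, 0.3],
], dtype=float)
# Best-performing flag set previously found (e.g. by exhaustive search) for each program.
best_flags = ["-O3 -funroll-loops", "-O1", "-O3 -funroll-loops -fvectorize", "-O2"]

def predict_flags(new_features):
    """Return the flag set of the nearest training program in feature space."""
    scale = features.std(axis=0) + 1e-12         # crude per-feature normalisation
    d = np.linalg.norm((features - new_features) / scale, axis=1)
    return best_flags[int(np.argmin(d))]

print(predict_flags(np.array([10, 2.5, 35, 0.5])))
```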

    Multiobjective genetic programming for financial portfolio management in dynamic environments

    Multiobjective (MO) optimisation is a useful technique for evolving portfolio optimisation solutions that span a range from high-return/high-risk to low-return/low-risk. The resulting Pareto front approximates the risk/reward Efficient Frontier [Mar52] and simplifies the choice of investment model for a given client’s attitude to risk. However, the financial market is continuously changing, and it is essential to ensure that MO solutions capture true relationships between financial factors rather than merely overfitting the training data. Research on evolutionary algorithms in dynamic environments has been directed towards adapting the algorithm to improve its suitability for retraining whenever a change is detected. Little research has focused on how to assess and quantify the success of multiobjective solutions in unseen environments. The multiobjective nature of the problem adds a unique requirement for judging the robustness of solutions: in addition to examining whether solutions remain optimal in the new environment, we need to ensure that the solutions’ relative positions previously identified on the Pareto front are not altered. This thesis investigates the performance of Multiobjective Genetic Programming (MOGP) in the dynamic real-world problem of portfolio optimisation. The thesis provides new definitions and statistical metrics based on phenotypic cluster analysis to quantify the robustness of both the solutions and the Pareto front. Focusing on the critical period between an environment change and when retraining occurs, four techniques to improve the robustness of solutions are examined: the use of a validation data set; diversity preservation; a novel variation on mating restriction; and a combination of diversity enhancement and mating restriction. In addition, a preliminary investigation of using the robustness metrics to quantify the severity of change for optimum tracking in a dynamic portfolio optimisation problem is carried out. Results show that the techniques used offer statistically significant improvement in the solutions’ robustness, although not on all the robustness criteria simultaneously. Combining mating restriction with diversity enhancement provided the best robustness results while also greatly enhancing the quality of solutions.
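    The risk/return trade-off at the heart of the thesis can be sketched by sampling random portfolios on synthetic assets and keeping the non-dominated set as a rough approximation of the efficient frontier; the asset statistics and the brute-force Pareto filter below are illustrative assumptions, not the thesis's MOGP approach.

```python
# Sketch: random long-only portfolios on synthetic assets, reduced to a Pareto
# (non-dominated) set in the (maximise return, minimise risk) sense.
import numpy as np

rng = np.random.default_rng(2)
mu = np.array([0.05, 0.08, 0.12, 0.15])                  # expected returns (hypothetical)
cov = np.diag([0.02, 0.05, 0.10, 0.18])                  # covariance matrix (hypothetical)

w = rng.dirichlet(np.ones(4), size=2000)                 # random portfolio weights
ret = w @ mu
risk = np.sqrt(np.einsum("ij,jk,ik->i", w, cov, w))

def pareto_mask(ret, risk):
    """Keep portfolios not dominated by any other (higher return AND lower risk)."""
    keep = np.ones(len(ret), dtype=bool)
    for i in range(len(ret)):
        dominated = (ret >= ret[i]) & (risk <= risk[i]) & ((ret > ret[i]) | (risk < risk[i]))
        if dominated.any():
            keep[i] = False
    return keep

front = pareto_mask(ret, risk)
print(f"{front.sum()} non-dominated portfolios out of {len(ret)}")
```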

    Intelligent systems in manufacturing: current developments and future prospects

    Global competition and rapidly changing customer requirements are demanding increasing changes in manufacturing environments. Enterprises are required to constantly redesign their products and continuously reconfigure their manufacturing systems. Traditional approaches to manufacturing systems do not fully satisfy this new situation. Many authors have proposed that artificial intelligence will bring the flexibility and efficiency needed by manufacturing systems. This paper is a review of artificial intelligence techniques used in manufacturing systems. The paper first defines the components of a simplified intelligent manufacturing system (IMS) and the different Artificial Intelligence (AI) techniques to be considered, and then shows how these AI techniques are used within the components of an IMS.

    Control of quantum phenomena: Past, present, and future

    Quantum control is concerned with active manipulation of physical and chemical processes on the atomic and molecular scale. This work presents a perspective on progress in the field of control over quantum phenomena, tracing the evolution of theoretical concepts and experimental methods from early developments to the most recent advances. The current experimental successes would be impossible without the development of intense femtosecond laser sources and pulse shapers. The two most critical theoretical insights were (1) realizing that ultrafast atomic and molecular dynamics can be controlled via manipulation of quantum interferences and (2) understanding that optimally shaped ultrafast laser pulses are the most effective means for producing the desired quantum interference patterns in the controlled system. Finally, these theoretical and experimental advances were brought together by the crucial concept of adaptive feedback control, which is a laboratory procedure employing measurement-driven, closed-loop optimization to identify the best shapes of femtosecond laser control pulses for steering quantum dynamics towards the desired objective. Optimization in adaptive feedback control experiments is guided by a learning algorithm, with stochastic methods proving to be especially effective. Adaptive feedback control of quantum phenomena has found numerous applications in many areas of the physical and chemical sciences, and this paper reviews the extensive experiments. Other subjects discussed include quantum optimal control theory, quantum control landscapes, the role of theoretical control designs in experimental realizations, and real-time quantum feedback control. The paper concludes with a perspective on open research directions that are likely to attract significant attention in the future. Comment: Review article, final version (significantly updated), 76 pages, accepted for publication in New J. Phys. (Focus issue: Quantum control)
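    The closed-loop logic of adaptive feedback control (propose a pulse shape, measure the outcome, keep the change if the yield improves) can be sketched with a toy stochastic search; the simulated yield function and pulse parametrisation below are stand-ins for the laboratory measurement and are not drawn from the review.

```python
# Toy sketch of closed-loop adaptive feedback control: a (1+1)-style stochastic search
# reshapes a discretised "pulse" to maximise a noisy, simulated yield measurement.
import numpy as np

rng = np.random.default_rng(3)
n_bins = 32                                      # spectral/temporal bins of the pulse shaper
target = np.sin(np.linspace(0, np.pi, n_bins))   # hidden "ideal" pulse shape (toy)

def measured_yield(pulse):
    """Stand-in for the experimental signal: high when the pulse matches the target."""
    return -np.sum((pulse - target) ** 2) + rng.normal(0, 0.01)   # with measurement noise

pulse = rng.uniform(0, 1, n_bins)                # initial random pulse shape
best = measured_yield(pulse)
step = 0.1

for _ in range(2000):                            # closed loop: propose, measure, accept/reject
    trial = np.clip(pulse + rng.normal(0, step, n_bins), 0, 1)
    y = measured_yield(trial)
    if y > best:
        pulse, best = trial, y

print("final yield:", best)
```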

    Updating, Upgrading, Refining, Calibration and Implementation of Trade-Off Analysis Methodology Developed for INDOT

    As part of the ongoing evolution towards integrated highway asset management, the Indiana Department of Transportation (INDOT), through SPR studies in 2004 and 2010, sponsored research that developed an overall framework for asset management. This was intended to foster decision support for alternative investments across the program areas on the basis of a broad range of performance measures, against the background of the various alternative actions or spending amounts that could be applied to the several different asset types in the different program areas. The 2010 study also developed theoretical constructs for scaling and amalgamating the different performance measures and for analyzing the different kinds of trade-offs. The research products from the present study include this technical report, which shows how the theoretical underpinnings of the methodology developed for INDOT in 2010 have been updated, upgraded, and refined. The report also includes a case study that shows how the trade-off analysis framework has been calibrated using available data. Supplemental to the report is Trade-IN Version 1.0, a set of flexible and easy-to-use spreadsheets that implement the trade-off framework. With this framework, and using data available now or in the future, INDOT’s asset managers are better positioned to quantify and comprehend the relationships between budget levels and system-wide performance, the relationships between different pairs of conflicting or non-conflicting performance measures under a given budget limit, and the consequences, in terms of system-wide performance, of funding shifts across the management systems or program areas.
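    The scaling-and-amalgamation step can be sketched as mapping each raw performance measure onto a common value scale and combining the scaled values with weights before comparing budget splits; the response functions, bounds and weights below are invented placeholders, not the calibrated models delivered with Trade-IN.

```python
# Sketch of scaling and amalgamating performance measures across two program areas
# and enumerating budget splits. All numbers and functions are hypothetical.
import numpy as np

# Hypothetical performance response to spending (in $M) in two program areas.
def pavement_condition(spend):  return 60 + 30 * (1 - np.exp(-spend / 20))
def bridge_condition(spend):    return 55 + 35 * (1 - np.exp(-spend / 30))

def scale(value, worst, best):
    """Map a raw performance measure onto a common 0-1 value scale."""
    return np.clip((value - worst) / (best - worst), 0, 1)

budget, weights = 50.0, (0.6, 0.4)            # total budget and assumed weights
best_split, best_score = None, -1.0
for pav in np.linspace(0, budget, 51):        # enumerate candidate budget splits
    brg = budget - pav
    score = (weights[0] * scale(pavement_condition(pav), 50, 95)
             + weights[1] * scale(bridge_condition(brg), 50, 95))
    if score > best_score:
        best_split, best_score = (pav, brg), score

print(f"best split (pavement, bridge) = {best_split}, amalgamated score = {best_score:.3f}")
```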

    Homogenization of plain weave composites with imperfect microstructure: Part II--Analysis of real-world materials

    A two-layer statistically equivalent periodic unit cell is offered to predict the macroscopic response of plain weave multilayer carbon-carbon textile composites. Falling short in describing the most typical geometrical imperfections of these material systems, the original formulation presented in (Zeman and Šejnoha, International Journal of Solids and Structures, 41 (2004), pp. 6549--6571) is substantially modified, now allowing for nesting and mutual shift of individual layers of textile fabric in all three directions. Yet the most valuable asset of the present formulation is seen in the possibility of reflecting the influence of non-negligible meso-scale porosity through a system of oblate spheroidal voids introduced in between the two layers of the unit cell. Numerical predictions of both the effective thermal conductivities and elastic stiffnesses, and their comparison with available laboratory data and with results derived using the Mori-Tanaka averaging scheme, support the credibility of the present approach as well as the reliability of the local mechanical properties found from nanoindentation tests performed directly on the analyzed composite samples. Comment: 28 pages, 14 figures
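    For readers unfamiliar with the reference scheme, the sketch below applies Mori-Tanaka averaging in its simplest setting, an isotropic matrix with spherical voids; the paper itself deals with anisotropic plies and oblate spheroidal voids, so this is only an illustration of the averaging idea, with made-up moduli.

```python
# Simplified illustration of Mori-Tanaka averaging: effective bulk and shear moduli of
# an isotropic matrix containing spherical voids with porosity f.

def mori_tanaka_voids(K_m, G_m, f):
    """Mori-Tanaka estimate for spherical voids in an isotropic matrix."""
    K_eff = 4.0 * K_m * G_m * (1.0 - f) / (4.0 * G_m + 3.0 * f * K_m)
    F_m = G_m * (9.0 * K_m + 8.0 * G_m) / (6.0 * (K_m + 2.0 * G_m))
    G_eff = G_m * F_m * (1.0 - f) / (f * G_m + F_m)
    return K_eff, G_eff

# Example: matrix with K = 10 GPa, G = 5 GPa and 5 % porosity (illustrative numbers).
K_eff, G_eff = mori_tanaka_voids(10.0, 5.0, 0.05)
print(f"K_eff = {K_eff:.2f} GPa, G_eff = {G_eff:.2f} GPa")
```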

    Indicator-based MONEDA: A Comparative Study of Scalability with Respect to Decision Space Dimensions

    Proceedings of: 2011 IEEE Congress on Evolutionary Computation (CEC), New Orleans, LA, June 5-8, 2011. The multi-objective neural EDA (MONEDA) was proposed with the aim of overcoming some difficulties of current MOEDAs. MONEDA has been shown to yield relevant results when confronted with complex problems. Furthermore, its performance has been shown to adapt adequately to problems with many objectives. Nevertheless, one key issue remains to be studied: MONEDA's scalability with regard to the number of decision variables. This paper has a two-fold purpose. On the one hand, we propose a modification of MONEDA that incorporates an indicator-based selection mechanism based on the HypE algorithm, while, on the other, we assess the indicator-based MONEDA when solving some complex two-objective problems, in particular problems UF1 to UF7 of the CEC 2009 MOP competition, configured with a progressively increasing number of decision variables. This work was supported by projects CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, SINPROB, CAM CONTEXTS S2009/TIC-1485 and DPS2008-07029-C02-02.
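    The indicator-based selection step can be sketched for two objectives, where exclusive hypervolume contributions are exact and cheap to compute: the point contributing the least hypervolume is discarded, in the spirit of HypE's environmental selection. The front and reference point below are arbitrary examples; the code illustrates the indicator only, not MONEDA itself.

```python
# Sketch of hypervolume-based (HypE-style) environmental selection for a bi-objective
# minimisation front: remove the point with the smallest exclusive contribution.
import numpy as np

def hypervolume_2d(front, ref):
    """Exact hypervolume of a non-dominated bi-objective front (minimisation)."""
    pts = front[np.argsort(front[:, 0])]          # sort by first objective
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)      # add the horizontal strip of this point
        prev_f2 = f2
    return hv

def drop_least_contributor(front, ref):
    """Remove the point whose exclusive hypervolume contribution is smallest."""
    total = hypervolume_2d(front, ref)
    contrib = [total - hypervolume_2d(np.delete(front, i, axis=0), ref)
               for i in range(len(front))]
    return np.delete(front, int(np.argmin(contrib)), axis=0)

front = np.array([[0.1, 0.9], [0.3, 0.55], [0.35, 0.5], [0.6, 0.3], [0.9, 0.1]])
ref = np.array([1.1, 1.1])
print(drop_least_contributor(front, ref))
```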