42 research outputs found

    On the evolutionary optimisation of many conflicting objectives

    This inquiry explores the effectiveness of a class of modern evolutionary algorithms, represented by Non-dominated Sorting Genetic Algorithm (NSGA) components, for solving optimisation tasks with many conflicting objectives. Optimiser behaviour is assessed for a grid of mutation and recombination operator configurations. Performance maps are obtained for the dual aims of proximity to, and distribution across, the optimal trade-off surface. Performance sweet-spots for both variation operators are observed to contract as the number of objectives is increased. Classical settings for recombination are shown to be suitable for small numbers of objectives but to correspond to very poor performance for higher numbers of objectives, even when large population sizes are used. Explanations for this behaviour are offered via the concepts of dominance resistance and active diversity promotion.
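
    Dominance resistance can be made concrete with a short, illustrative experiment (a sketch of the general phenomenon, not code from the paper): as the number of objectives grows, an ever-larger fraction of randomly sampled solutions is mutually non-dominated, so Pareto ranking loses selection pressure.

```python
import numpy as np

rng = np.random.default_rng(0)

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation)."""
    return bool(np.all(a <= b) and np.any(a < b))

def nondominated_fraction(n_points, n_objectives):
    """Fraction of random points in [0, 1]^m that no other point dominates."""
    pts = rng.random((n_points, n_objectives))
    survivors = sum(
        not any(dominates(pts[j], pts[i]) for j in range(n_points) if j != i)
        for i in range(n_points)
    )
    return survivors / n_points

for m in (2, 3, 5, 10):
    print(m, nondominated_fraction(200, m))  # fraction rises toward 1 with m
```

    With almost every candidate non-dominated at high objective counts, selection pressure toward the trade-off surface collapses, which is consistent with the contracting operator sweet-spots reported above.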

    Generalized decomposition and cross entropy methods for many-objective optimization

    Decomposition-based algorithms for multi-objective optimization problems have increased in popularity in the past decade. Although their convergence to the Pareto optimal front (PF) is in several instances superior to that of Pareto-based algorithms, the problem of selecting a way to distribute or guide these solutions in a high-dimensional space has not been explored. In this work, we introduce a novel concept which we call generalized decomposition. Generalized decomposition provides a framework with which the decision maker (DM) can guide the underlying evolutionary algorithm toward specific regions of interest, or the entire Pareto front, with the desired distribution of Pareto optimal solutions. Additionally, it is shown that generalized decomposition simplifies many-objective problems by unifying the three performance objectives of multi-objective evolutionary algorithms (convergence to the PF, evenly distributed Pareto optimal solutions, and coverage of the entire front) into only one: convergence. A framework established on generalized decomposition, together with an estimation of distribution algorithm (EDA) based on low-order statistics, namely the cross-entropy method (CE), is created to illustrate the benefits of the proposed concept for many-objective problems. This choice of EDA also enables a test of the hypothesis that EDAs based on low-order statistics can have performance comparable to that of more elaborate EDAs.
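
    The paper's algorithm is not reproduced here, but its two ingredients can be sketched together: a weighted Chebyshev subproblem (the decomposition side) minimised by a cross-entropy loop whose search model is a diagonal Gaussian, i.e. low-order statistics only. The test function, weights and parameters below are illustrative assumptions, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def chebyshev(f, w, z):
    """Weighted Chebyshev scalarisation: g(f | w, z) = max_i w_i |f_i - z_i|."""
    return np.max(w * np.abs(f - z))

def objectives(x):
    """Hypothetical bi-objective test function (stand-in for a benchmark)."""
    return np.array([x[0], 1.0 - np.sqrt(x[0]) + np.sum(x[1:] ** 2)])

def cross_entropy_min(w, z, dim=5, pop=50, n_elite=10, iters=100):
    """CE method with low-order statistics: track only mean and std of elites."""
    mu, sigma = np.full(dim, 0.5), np.full(dim, 0.3)
    for _ in range(iters):
        xs = np.clip(rng.normal(mu, sigma, (pop, dim)), 0.0, 1.0)
        scores = np.array([chebyshev(objectives(x), w, z) for x in xs])
        elites = xs[np.argsort(scores)[:n_elite]]
        mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-12
    return mu

# One subproblem; generalized decomposition chooses many such w so that the
# resulting Pareto-optimal solutions land in a desired distribution.
print(objectives(cross_entropy_min(w=np.array([0.5, 0.5]), z=np.zeros(2))))
```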

    On the effect of scalarising norm choice in a ParEGO implementation

    Computationally expensive simulations play an increasing role in engineering design, but their use in multi-objective optimization is heavily resource-constrained. Specialist optimizers, such as ParEGO, exist for this setting, but little knowledge is available to guide their configuration. This paper uses a new implementation of ParEGO to examine three hypotheses relating to a key configuration parameter: the choice of scalarising norm. Two hypotheses concern the theoretical trade-off between convergence speed and the ability to capture an arbitrary Pareto front geometry. Experiments confirm these hypotheses in the bi-objective setting, but the trade-off is largely unseen in many-objective settings. A third hypothesis concerns the ability of dynamic norm-scheduling schemes to overcome the trade-off. Experiments using a simple scheme offer partial support for the hypothesis in the bi-objective setting but no support in many-objective contexts. Norm scheduling is tentatively recommended for bi-objective problems for which the Pareto front geometry is concave or unknown.
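
    The configuration parameter under study can be sketched directly. The original ParEGO scalarises with the augmented Tchebycheff (infinity-norm) function; the generalisation examined here replaces that norm with a weighted Lp norm. A minimal sketch, with illustrative parameter values:

```python
import numpy as np

def scalarise(f, w, p, rho=0.05):
    """Weighted Lp scalarisation of a normalised objective vector f.

    p = 1 recovers a weighted sum; p = inf recovers the Chebyshev term of
    the augmented Tchebycheff function used by the original ParEGO.
    """
    wf = w * f
    main = np.max(wf) if np.isinf(p) else np.sum(wf ** p) ** (1.0 / p)
    return main + rho * np.sum(wf)  # small augmentation term

f, w = np.array([0.2, 0.7]), np.array([0.5, 0.5])
for p in (1, 2, 10, np.inf):
    print(p, scalarise(f, w, p))
```

    The hypothesised trade-off follows from this choice: low norms tend to converge faster but cannot express solutions on concave Pareto front regions, whereas the infinity norm can capture arbitrary front geometries.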

    Toward a unified framework for model calibration and optimisation in virtual engineering workflows

    When designing a new product it is often advantageous to use virtual engineering as either a replacement for, or an assistant to, more traditional prototyping. Virtual engineering consists of two main stages: (i) development of the simulation model; (ii) use of the model in design optimisation. There is a vast literature on both of these stages in isolation, but virtually no studies have considered them in combination. Crucially, however, the model calibration and design optimisation processes draw on the same resource budget for simulation evaluations. When evaluations are expensive, there may be advantages in treating the two stages as one combined problem. This study lays out a joint framework by which such problems can be expressed through a unified mathematical notation. A previously published case study is reviewed within the context of this framework, and directions for further development are discussed.
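
    The framework itself is notational, but its central observation, that both stages spend from a single evaluation budget, can be sketched as follows (the interface below is hypothetical, not the paper's notation):

```python
def virtual_engineering(total_budget, calibrate_step, optimise_step, calib_share=0.3):
    """Toy sketch: calibration and optimisation draw on one evaluation budget.

    calibrate_step / optimise_step are hypothetical callables, each costing
    one expensive simulation evaluation; calib_share fixes the split, which
    a combined treatment of the two stages would instead decide adaptively.
    """
    evaluations = 0
    model = None
    while evaluations < calib_share * total_budget:  # stage (i): calibration
        model = calibrate_step(model)
        evaluations += 1
    design = None
    while evaluations < total_budget:                # stage (ii): optimisation
        design = optimise_step(model, design)
        evaluations += 1
    return model, design

# Hypothetical stand-ins: each call represents one expensive simulation run.
model, design = virtual_engineering(
    total_budget=100,
    calibrate_step=lambda model: "calibrated-model",
    optimise_step=lambda model, design: "improved-design",
)
print(model, design)
```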

    Towards understanding the cost of adaptation in decomposition-based optimization algorithms

    Decomposition-based methods are an increasingly popular choice for a posteriori multi-objective optimization. However, the ability of such methods to describe a trade-off surface depends on the choice of weighting vectors defining the set of subproblems to be solved. Recent adaptive approaches have sought to progressively modify the weighting vectors to obtain a desirable distribution of solutions. This paper argues that adaptation imposes a non-negligible cost, in terms of convergence, on decomposition-based algorithms. To test this hypothesis, the process of adaptation is abstracted and then subjected to experimentation on established problems involving between three and eleven conflicting objectives. The results show that adaptive approaches require longer traversals through objective space than fixed-weight approaches. Since fixed weights cannot, in general, be specified in advance, it is concluded that the new wave of decomposition-based methods offers no immediate panacea for the well-known conflict between convergence and distribution afflicting Pareto-based a posteriori methods.
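
    For context, the fixed-weight baseline can be made concrete: a standard simplex-lattice (Das-Dennis style) construction generates the weighting vectors that define the subproblems, after which an adaptive method would relocate them mid-run. This is a background sketch, not the paper's abstraction of adaptation.

```python
from itertools import combinations

def simplex_lattice(m, H):
    """All weight vectors w_i = k_i / H with k_i >= 0 and sum k_i = H.

    These define the subproblems of fixed-weight decomposition; adaptive
    schemes instead move such vectors during the run, which this paper
    argues costs extra traversal through objective space.
    """
    weights = []
    for bars in combinations(range(H + m - 1), m - 1):  # stars-and-bars
        parts, prev = [], -1
        for b in bars:
            parts.append(b - prev - 1)
            prev = b
        parts.append(H + m - 2 - prev)
        weights.append([k / H for k in parts])
    return weights

vectors = simplex_lattice(3, 4)
print(len(vectors), vectors[0])  # 15 vectors for three objectives
```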

    Commentary on Robinson et al. (2021): Evaluating theories of change for public health policies using computer model discovery methods

    Recent developments in computer modelling, known as model discovery, could help to confirm the mechanisms underpinning Robinson and colleagues' important early findings on the effectiveness of minimum unit pricing, and to test the complete theory of change underpinning this crucial evaluation.

    Beyond behaviour: how health inequality theory can enhance our understanding of the ‘alcohol-harm paradox’

    There are large socioeconomic inequalities in alcohol-related harm. The alcohol harm paradox (AHP) is the consistent finding that lower socioeconomic groups consume the same as, or less than, higher socioeconomic groups yet experience greater rates of harm. To date, alcohol researchers have predominantly taken an individualised behavioural approach to understanding the AHP. This paper calls for a new approach which draws on theories of health inequality, specifically the social determinants of health, fundamental cause theory, the political economy of health and eco-social models. These theories comprise several interwoven causal mechanisms, including genetic inheritance, the role of social networks, the unequal availability of wealth and other resources, the psychosocial experience of lower socioeconomic position, and the accumulation of these experiences over time. To date, research exploring the causes of the AHP has often lacked clear theoretical underpinning. Drawing on these theoretical approaches in alcohol research would not only address this gap but would also result in a structured effort to identify the causes of the AHP. Given the present lack of clear evidence in favour of any specific theory, it is difficult to conclude whether one theory should take primacy in future research efforts. However, drawing on any of these theories would shift how we think about the causes of the paradox, from health behaviour in isolation to the wider context of complex interacting mechanisms between individuals and their environment. Meanwhile, computer simulations have the potential to test the competing theoretical perspectives, both in the abstract and empirically via synthesis of the disparate existing evidence base. Overall, making greater use of existing theoretical frameworks in alcohol epidemiology would offer novel insights into the AHP and generate knowledge of how to intervene to mitigate inequalities in alcohol-related harm.

    The normative underpinnings of population-level alcohol use: An individual-level simulation model

    Background. By defining what is “normal,” appropriate, expected, and unacceptable, social norms shape human behavior. However, the individual-level mechanisms through which social norms impact population-level trends in health-relevant behaviors are not well understood. Aims. To test the ability of social norms mechanisms to predict changes in population-level drinking patterns. Method. An individual-level model was developed to simulate the dynamic normative mechanisms and behavioral rules underlying drinking behavior over time. The model encompassed descriptive and injunctive drinking norms and their impact on the frequency and quantity of alcohol use. A microsynthesis initialized in 1979 was used as a demographically representative synthetic U.S. population. Three experiments were performed to test the modelled normative mechanisms. Results. Overall, the experiments showed limited influence of normative interventions on population-level alcohol use. An increase in the desire to drink led to the most meaningful changes in the population’s drinking behavior. The findings of the experiments underline the importance of autonomy, that is, the degree to which an individual is susceptible to normative influence. Conclusion. The model was able to predict theoretically plausible changes in drinking patterns at the population level through the impact of social mechanisms. Future applications of the model could be used to plan norms interventions pertaining to alcohol use as well as other health behaviors.
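
    A toy rendering of the modelled mechanisms (a sketch only; variable names, functional forms and parameters are our assumptions, not the published model): each agent is pulled toward the descriptive norm, capped by the injunctive norm, with the pull damped by the agent's autonomy and supplemented by an intrinsic desire term.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1000
quantity = rng.gamma(2.0, 1.5, N)  # drinks per occasion (synthetic population)
autonomy = rng.random(N)           # 1 = immune to normative influence

def step(quantity, injunctive_limit=4.0, desire=0.0):
    """One update of the toy normative mechanism."""
    descriptive = quantity.mean()                # what others are seen to do
    target = min(descriptive, injunctive_limit)  # capped by what is approved
    pull = (1.0 - autonomy) * (target - quantity)
    return np.maximum(0.0, quantity + 0.1 * pull + desire)

for _ in range(100):
    quantity = step(quantity)  # raise desire > 0 to shift the whole population
print(quantity.mean())
```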

    The long-term effectiveness and cost-effectiveness of public health interventions; how can we model behavior? A review

    The effectiveness and cost of a public health intervention depend on complex human behaviors, yet health economic models typically make simplified assumptions about behavior, based on little theory or evidence. This paper reviews existing methods across disciplines for incorporating behavior within simulation models, to explore which methods could be used within health economic models and to highlight areas for further research; this may lead to better-informed model predictions. The most promising methods identified for improving the modeling of the causal pathways of behavior-change interventions include econometric analyses, structural equation models, data mining and agent-based modeling, the last of which has the advantage of being able to incorporate the non-linear, dynamic influences on behavior, including social and spatial networks. Twenty-two studies were identified which quantify behavioral theories within simulation models. These studies highlight the importance of combining individual decision-making and interactions with the environment, and demonstrate the importance of social norms in determining behavior. However, there are many theoretical and practical limitations to quantifying behavioral theory. Further research is needed on the use of agent-based models for health economic modeling, and on the potential use of behavior maintenance theories and data mining.
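
    The advantage claimed for agent-based modeling can be illustrated with a minimal sketch (not drawn from any reviewed study): behavior diffusing over a social network is exactly the kind of dynamic, interaction-driven influence that cohort-style health economic models cannot easily represent.

```python
import random

random.seed(0)

n = 200
# Random peer network (self-links possible; harmless in a toy sketch).
neighbours = [random.sample(range(n), 5) for _ in range(n)]
behaviour = [random.random() for _ in range(n)]  # e.g. scaled weekly consumption

for _ in range(50):
    behaviour = [
        0.9 * behaviour[i]
        + 0.1 * sum(behaviour[j] for j in neighbours[i]) / 5  # partial conformity
        for i in range(n)
    ]

print(sum(behaviour) / n)
```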

    Using multi-objective grammar-based genetic programming to integrate multiple social theories in agent-based modeling

    Different theoretical mechanisms have been proposed for explaining complex social phenomena. For example, explanations for observed trends in population alcohol use have been postulated based on norm theory, role theory, and others. Many mechanism-based models of phenomena attempt to translate a single theory into a simulation model. However, single theories often represent only a partial explanation for the phenomenon. Integrating theories together, computationally, represents a promising way of improving the explanatory capability of generative social science. This paper presents a framework for such integrative model discovery, based on multi-objective grammar-based genetic programming (MOGGP). The framework is demonstrated using two separate theory-driven models of alcohol use dynamics, based on norm theory and role theory respectively. The proposed integration considers how the sequence of decisions to consume the next drink in a drinking occasion may be influenced by factors from the different theories. A new grammar is constructed based on this integration. The MOGGP model discovery process finds new hybrid models that outperform the existing single-theory models and the baseline hybrid model. Future work should consider and further refine the role of domain experts in defining the meaningfulness of models identified by MOGGP.
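
    The core mechanic of grammar-based genetic programming can be shown in miniature: a linear genome of integer codons is mapped through a grammar to produce a candidate decision rule. The grammar below is a deliberately tiny assumption of ours, mixing one norm-theory predicate and one role-theory predicate to echo the proposed integration; the paper's actual grammar is richer.

```python
import random

random.seed(4)

GRAMMAR = {
    "<rule>": [["<term>"],
               ["(", "<term>", " and ", "<term>", ")"],
               ["(", "<term>", " or ", "<term>", ")"]],
    "<term>": [["norm_pressure > ", "<const>"],
               ["role_strain > ", "<const>"]],
    "<const>": [["0.2"], ["0.5"], ["0.8"]],
}

def derive(genome, symbol="<rule>", pos=0, depth=0):
    """Grammatical-evolution style mapping: codons choose productions."""
    if symbol not in GRAMMAR:  # terminal symbol: emit as-is
        return symbol, pos
    if depth > 8:              # depth cap: fall back to the first production
        production = GRAMMAR[symbol][0]
    else:
        production = GRAMMAR[symbol][genome[pos % len(genome)] % len(GRAMMAR[symbol])]
        pos += 1
    out = ""
    for s in production:
        piece, pos = derive(genome, s, pos, depth + 1)
        out += piece
    return out, pos

genome = [random.randrange(100) for _ in range(20)]
rule, _ = derive(genome)
print(rule)  # e.g. (norm_pressure > 0.5 or role_strain > 0.2)
```

    MOGGP then evolves populations of such genomes, scoring each derived rule against multiple fitness objectives and applying non-dominated selection to find the hybrid models described above.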