
    A Profile Likelihood Analysis of the Constrained MSSM with Genetic Algorithms

    The Constrained Minimal Supersymmetric Standard Model (CMSSM) is one of the simplest and most widely-studied supersymmetric extensions to the Standard Model of particle physics. Nevertheless, current data do not sufficiently constrain the model parameters in a way completely independent of priors, statistical measures and scanning techniques. We present a new technique for scanning supersymmetric parameter spaces, optimised for frequentist profile likelihood analyses and based on Genetic Algorithms. We apply this technique to the CMSSM, taking into account existing collider and cosmological data in our global fit. We compare our method to the MultiNest algorithm, an efficient Bayesian technique, paying particular attention to the best-fit points and implications for particle masses at the LHC and dark matter searches. Our global best-fit point lies in the focus point region. We find many high-likelihood points in both the stau co-annihilation and focus point regions, including a previously neglected section of the co-annihilation region at large m_0. We show that there are many high-likelihood points in the CMSSM parameter space commonly missed by existing scanning techniques, especially at high masses. This has a significant influence on the derived confidence regions for parameters and observables, and can dramatically change the entire statistical inference of such scans. Comment: 47 pages, 8 figures; Fig. 8, Table 7 and more discussion added to Sec. 3.4.2 in response to referee's comments; accepted for publication in JHEP
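
    The scanning idea can be illustrated with a minimal, hypothetical sketch: a genetic algorithm evolves a population of parameter points towards high likelihood, every point ever visited is archived, and the profile likelihood in one parameter is read off as the maximum likelihood found in each bin of that parameter. The two-parameter toy likelihood, the operator choices and all settings below are illustrative assumptions, not the CMSSM fit or the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_like(theta):
    # Toy two-parameter likelihood with two separated modes, loosely standing in
    # for distinct high-likelihood regions of a parameter space (illustrative only).
    m0, m12 = theta
    peak1 = -0.5 * (((m0 - 0.2) / 0.05) ** 2 + ((m12 - 0.7) / 0.10) ** 2)
    peak2 = -0.5 * (((m0 - 0.8) / 0.15) ** 2 + ((m12 - 0.3) / 0.05) ** 2)
    return np.logaddexp(peak1, peak2)

def ga_scan(pop_size=200, generations=100, mut_sigma=0.05):
    """Evolve parameter points towards high likelihood and archive every point visited."""
    pop = rng.random((pop_size, 2))          # points in the unit square
    archive = []
    for _ in range(generations):
        fitness = np.array([log_like(p) for p in pop])
        archive.append(np.column_stack([pop, fitness]))
        # Tournament selection: keep the fitter point of random pairs.
        idx = rng.integers(0, pop_size, (pop_size, 2))
        winners = np.where(fitness[idx[:, 0]] > fitness[idx[:, 1]], idx[:, 0], idx[:, 1])
        parents = pop[winners]
        # Uniform crossover followed by Gaussian mutation.
        partners = parents[rng.permutation(pop_size)]
        mask = rng.random((pop_size, 2)) < 0.5
        children = np.where(mask, parents, partners)
        pop = np.clip(children + rng.normal(0, mut_sigma, children.shape), 0, 1)
    return np.vstack(archive)

def profile_likelihood(samples, bins=40):
    """Profile over m12: maximum log-likelihood found in each m0 bin."""
    edges = np.linspace(0, 1, bins + 1)
    prof = np.full(bins, -np.inf)
    which = np.digitize(samples[:, 0], edges) - 1
    for b in range(bins):
        sel = samples[which == b]
        if len(sel):
            prof[b] = sel[:, 2].max()
    return edges, prof

samples = ga_scan()
edges, prof = profile_likelihood(samples)
print("global best log-likelihood:", samples[:, 2].max())
```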

    Adaptive algorithms for history matching and uncertainty quantification

    Numerical reservoir simulation models are the basis for many decisions regarding the prediction, optimisation, and improvement of production performance of oil and gas reservoirs. Because of uncertainty in model parameters, history matching is required to calibrate models to the dynamic behaviour of the reservoir. A set of history-matched models is then used for reservoir performance prediction and for economic and risk assessment of different development scenarios. Various algorithms are employed to search and sample parameter space in history matching and uncertainty quantification problems. The choice of algorithm and its implementation, via a number of control parameters, have a significant impact on the effectiveness and efficiency of the algorithm and thus on the quality of results and the speed of the process. This thesis is concerned with the investigation, development, and implementation of improved and adaptive algorithms for reservoir history matching and uncertainty quantification problems. A set of evolutionary algorithms is considered and applied to history matching. The shared characteristic of the applied algorithms is adaptation by balancing exploration and exploitation of the search space, which can lead to improved convergence and diversity. This includes the use of estimation of distribution algorithms, which implicitly adapt their search mechanism to the characteristics of the problem. Hybridising them with genetic algorithms, multiobjective sorting algorithms, and real-coded, multi-model and multivariate Gaussian-based models can help these algorithms to adapt even further and improve their performance. Finally, diversity measures are used to develop an explicit, adaptive algorithm and to control the algorithm's performance based on the structure of the problem. Uncertainty quantification in a Bayesian framework can be carried out by resampling the search space using Markov chain Monte Carlo sampling algorithms. Common criticisms of these are their low efficiency and the need for control-parameter tuning. A Metropolis-Hastings sampling algorithm with an adaptive multivariate Gaussian proposal distribution and a K-nearest-neighbour approximation has been developed and applied.
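
    The summary mentions a Metropolis-Hastings sampler with an adaptive multivariate Gaussian proposal. A minimal sketch of that general idea (Haario-style adaptation of the proposal covariance from the chain history, run on a hypothetical two-parameter objective rather than a reservoir model) might look like this:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_posterior(x):
    # Stand-in objective: a correlated Gaussian "misfit" in two parameters (illustrative).
    cov = np.array([[1.0, 0.8], [0.8, 1.0]])
    return -0.5 * x @ np.linalg.solve(cov, x)

def adaptive_mh(n_steps=20000, adapt_start=1000, eps=1e-6):
    dim = 2
    x = np.zeros(dim)
    lp = log_posterior(x)
    samples = np.empty((n_steps, dim))
    prop_cov = np.eye(dim) * 0.1          # initial proposal covariance
    scale = 2.38 ** 2 / dim               # classic adaptive-Metropolis scaling factor
    accepted = 0
    for i in range(n_steps):
        proposal = rng.multivariate_normal(x, prop_cov)
        lp_new = log_posterior(proposal)
        if np.log(rng.random()) < lp_new - lp:
            x, lp = proposal, lp_new
            accepted += 1
        samples[i] = x
        # After a burn-in, periodically re-estimate the proposal covariance
        # from the chain history so the proposal adapts to the target shape.
        if i >= adapt_start and i % 500 == 0:
            emp_cov = np.cov(samples[:i].T)
            prop_cov = scale * emp_cov + eps * np.eye(dim)
    return samples, accepted / n_steps

samples, acc_rate = adaptive_mh()
print("acceptance rate:", round(acc_rate, 3))
print("posterior mean estimate:", samples[10000:].mean(axis=0))
```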

    Detailed modelling and optimization of crystallization process

    Advisor: Rubens Maciel Filho. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Química. This work is focused on crystallization, a process widely used in industry, especially for the production of high added-value particles in the pharmaceutical and fine chemistry industries. Although it is a process of established utilization, its mechanisms, modeling and the real control of its operation still require research and study. This thesis presents considerations and developments on detailed deterministic modeling of the process and its optimization with both deterministic and stochastic methods. The modeling is discussed in detail, and the numerical methods developed in the literature for solving the population balance, which is part of the model, are reviewed with a focus on crystallization processes and on their main advantages and drawbacks. Preliminary studies on the improvement of batch cooling crystallization point to the need to optimize the cooling operating policy. Since the deterministic Sequential Quadratic Programming optimization method proves inefficient for this optimization problem, the use of the Genetic Algorithm (GA), a stochastic optimization method well established in the literature, is evaluated for the global optimum search for this process, in a pioneering application of this optimization technique to crystallization processes. Since the GA requires many runs with different values of its parameters in order to increase the probability of reaching the global optimum (or its neighborhood), an original, general and relatively simple procedure is developed and proposed for detecting the set of algorithm parameters with significant influence on the optimization response. The proposed methodology is applied to general case studies of different complexities and proves very useful in preliminary studies via GA. The procedure is then applied to the optimization of the cooling profile of a batch cooling crystallization process. The results presented in this thesis indicate that deterministic optimization methods do not deal well with high-dimensional problems, leading to local optima, whereas the evolutionary methods are able to approach the region of the global optimum, although at the cost of slow execution. The developed procedure for detecting the significant GA parameters is a relevant contribution of the thesis and can be applied to any optimization problem, of any complexity and dimensionality.
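
    The procedure for detecting which GA control parameters significantly influence the optimization response is not spelled out in the abstract; one common way to set up such a screening study, sketched here purely as an assumed illustration (a two-level factorial design over three GA parameters with replicated runs and a rough main-effect test, not the thesis' actual procedure), is:

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)

def run_ga(pop_size, crossover_rate, mutation_rate, seed):
    """Placeholder for a full GA run on the target problem, returning the best
    objective value found. The synthetic response used here, and its sensitivity
    to each parameter, is purely illustrative."""
    r = np.random.default_rng(seed)
    return (0.002 * pop_size + 5.0 * mutation_rate
            + 0.1 * crossover_rate + r.normal(0, 0.3))

# Two-level full factorial design over three GA control parameters, replicated.
levels = {
    "pop_size": (50, 200),
    "crossover_rate": (0.6, 0.9),
    "mutation_rate": (0.01, 0.1),
}
replicates = 5
names = list(levels)
rows = []
for combo in itertools.product(*(levels[n] for n in names)):
    for _ in range(replicates):
        rows.append((*combo, run_ga(*combo, seed=int(rng.integers(1_000_000_000)))))
results = np.array(rows)

# Run-to-run noise estimated from the replicate spread within each design point.
grouped = results[:, -1].reshape(-1, replicates)
noise = grouped.std(axis=1, ddof=1).mean()

# Main effect of each parameter: mean response at its high level minus mean
# response at its low level, flagged when it exceeds a rough two-sigma threshold.
runs_per_level = len(results) // 2
threshold = 2 * noise * np.sqrt(2 / runs_per_level)
for j, name in enumerate(names):
    lo, hi = levels[name]
    effect = (results[results[:, j] == hi, -1].mean()
              - results[results[:, j] == lo, -1].mean())
    flag = "<- significant" if abs(effect) > threshold else ""
    print(f"{name:15s} main effect = {effect:+.3f} {flag}")
```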

    An application of genetic algorithms to chemotherapy treatment.

    The present work investigates methods for optimising cancer chemotherapy within the bounds of clinical acceptability and making this optimisation easily accessible to oncologists. Clinical oncologists wish to be able to improve existing treatment regimens in a systematic, effective and reliable way. In order to satisfy these requirements a novel approach to chemotherapy optimisation has been developed, which utilises Genetic Algorithms in an intelligent search process for good chemotherapy treatments. The following chapters consequently address various issues related to this approach. Chapter 1 gives some biomedical background to the problem of cancer and its treatment. The complexity of the cancer phenomenon, as well as the multi-variable and multi-constrained nature of chemotherapy treatment, strongly support the use of mathematical modelling for predicting and controlling the development of cancer. Some existing mathematical models, which describe the proliferation process of cancerous cells and the effect of anti-cancer drugs on this process, are presented in Chapter 2. Having mentioned the control of cancer development, the relevance of optimisation and optimal control theory becomes evident for achieving the optimal treatment outcome subject to the constraints of cancer chemotherapy. A survey of traditional optimisation methods applicable to the problem under investigation is given in Chapter 3 with the conclusion that the constraints imposed on cancer chemotherapy and general non-linearity of the optimisation functionals associated with the objectives of cancer treatment often make these methods of optimisation ineffective. Contrariwise, Genetic Algorithms (GAs), featuring the methods of evolutionary search and optimisation, have recently demonstrated in many practical situations an ability to quickly discover useful solutions to highly-constrained, irregular and discontinuous problems that have been difficult to solve by traditional optimisation methods. Chapter 4 presents the essence of Genetic Algorithms, as well as their salient features and properties, and prepares the ground for the utilisation of Genetic Algorithms for optimising cancer chemotherapy treatment. The particulars of chemotherapy optimisation using Genetic Algorithms are given in Chapter 5 and Chapter 6, which present the original work of this thesis. In Chapter 5 the optimisation problem of single-drug chemotherapy is formulated as a search task and solved by several numerical methods. The results obtained from different optimisation methods are used to assess the quality of the GA solution and the effectiveness of Genetic Algorithms as a whole. Also, in Chapter 5 a new approach to tuning GA factors is developed, whereby the optimisation performance of Genetic Algorithms can be significantly improved. This approach is based on statistical inference about the significance of GA factors and on regression analysis of the GA performance. Being less computationally intensive compared to the existing methods of GA factor adjusting, the newly developed approach often gives better tuning results. Chapter 6 deals with the optimisation of multi-drug chemotherapy, which is a more practical and challenging problem. Its practicality can be explained by oncologists' preferences to administer anti-cancer drugs in various combinations in order to better cope with the occurrence of drug resistant cells. 
    However, the imposition of strict toxicity constraints on combinations of anticancer drugs makes the optimisation problem of multi-drug chemotherapy very difficult to solve, especially when complex treatment objectives are considered. Nevertheless, the experimental results of Chapter 6 demonstrate that this problem is tractable for Genetic Algorithms, which are capable of finding good chemotherapeutic regimens in different treatment situations. On the basis of these results, a decision was made to encapsulate Genetic Algorithms into an independent optimisation module and to embed this module into a more general and user-oriented environment - the Oncology Workbench. The particulars of this encapsulation and embedding are also given in Chapter 6. Finally, Chapter 7 concludes the present work by summarising the contributions made to the knowledge of the subject treated and by outlining the directions for further investigations. The main contributions are: (1) a novel application of the Genetic Algorithm technique in the field of cancer chemotherapy optimisation, (2) the development of a statistical method for tuning the values of GA factors, and (3) the development of a robust and versatile optimisation utility for a clinically usable decision support system. The latter contribution of this thesis creates an opportunity to widen the application domain of Genetic Algorithms within the field of drug treatments and to allow more clinicians to benefit from utilising the GA optimisation
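
    As a purely illustrative sketch of the multi-drug formulation described above, a schedule can be encoded as a matrix of doses per drug and treatment cycle, with a fitness that rewards a tumour-kill proxy and penalises violation of a cumulative toxicity budget; all coefficients, operators and settings below are hypothetical, not the thesis' actual model.

```python
import numpy as np

rng = np.random.default_rng(3)

N_DRUGS, N_CYCLES = 3, 10
EFFICACY = np.array([1.0, 0.7, 0.5])       # per-unit tumour-kill proxy (illustrative)
TOXICITY = np.array([0.8, 0.5, 0.3])       # per-unit toxicity proxy (illustrative)
MAX_CUM_TOX = 6.0                          # cumulative toxicity budget (illustrative)
MAX_DOSE = 1.0                             # per-cycle dose cap

def fitness(schedule):
    """schedule: (N_DRUGS, N_CYCLES) doses. Reward kill, penalise constraint violation."""
    kill = (EFFICACY @ schedule).sum()
    cum_tox = (TOXICITY @ schedule).sum()
    penalty = 100.0 * max(0.0, cum_tox - MAX_CUM_TOX)
    return kill - penalty

def evolve(pop_size=100, generations=200, mut_sigma=0.05):
    pop = rng.random((pop_size, N_DRUGS, N_CYCLES)) * MAX_DOSE
    for _ in range(generations):
        fit = np.array([fitness(s) for s in pop])
        # Rank-based (truncation) selection of parents.
        parents = pop[np.argsort(fit)[::-1][: pop_size // 2]]
        # Offspring: random parent pairs, arithmetic crossover, Gaussian mutation.
        a = parents[rng.integers(0, len(parents), pop_size)]
        b = parents[rng.integers(0, len(parents), pop_size)]
        w = rng.random((pop_size, 1, 1))
        children = w * a + (1 - w) * b + rng.normal(0, mut_sigma, a.shape)
        pop = np.clip(children, 0, MAX_DOSE)
    fit = np.array([fitness(s) for s in pop])
    return pop[fit.argmax()], fit.max()

best_schedule, best_fit = evolve()
print("best penalised fitness found:", round(float(best_fit), 2))
```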

    Towards understanding tree root profiles: simulating hydrologically optimal strategies for root distribution

    In this modelling study, differences in vertical root distributions measured in four contrasting forest locations in the Netherlands were investigated. Root distributions are seen as a reflection of the plant's optimisation strategy, based on hydrological grounds. The 'optimal' root distribution is defined as the one that maximises the water uptake from the root zone over a period of ten years. The optimal root distributions of four forest locations with completely different soil physical characteristics are calculated using the soil hydrological model SWIF. Two different model configurations for root interactions were tested: the standard model configuration in which one single root profile was used (SWIF-NC), and a model configuration in which two root profiles compete for the same available water (SWIF-C). The root profiles were parameterised with genetic algorithms. The fitness of a certain root profile was defined as the amount of water uptake over a simulation period of ten years. The root profiles of SWIF-C were optimised using an evolutionary game. The results showed clear differences in optimal root distributions between the various sites and also between the two model configurations. Optimisation with SWIF-C resulted in root profiles that were easier to interpret in terms of feasible biological strategies. Preferential water uptake in wetter soil regions was an important factor for interpretation of the simulated root distributions. As the optimised root profiles still showed differences from measured profiles, this analysis is presented not as the final solution for explaining differences in root profiles of vegetation but as a first step in using optimisation theory to increase understanding of the root profiles of trees. Keywords: forest hydrology, optimisation, root
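
    A minimal sketch of the optimisation set-up described above, with an assumed placeholder standing in for the SWIF water-uptake simulation: the root profile is encoded as fractions per soil layer, and a simple evolutionary loop maximises the simulated uptake. Everything below, including the toy availability profile, is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(5)

N_LAYERS = 10          # soil layers from the surface downwards

def water_uptake(root_fractions):
    """Placeholder for a soil-hydrological simulation such as SWIF: returns total
    water uptake over the simulation period for a given vertical root distribution.
    Here it simply rewards roots placed where (toy) water availability is high."""
    availability = np.linspace(1.0, 0.3, N_LAYERS)   # wetter near the surface (assumption)
    return float(availability @ root_fractions)

def to_fractions(genes):
    """Map unconstrained genes to a root profile that sums to one (softmax)."""
    e = np.exp(genes - genes.max())
    return e / e.sum()

def optimise_profile(pop_size=60, generations=300, mut_sigma=0.1):
    pop = rng.normal(0, 1, (pop_size, N_LAYERS))
    for _ in range(generations):
        fit = np.array([water_uptake(to_fractions(g)) for g in pop])
        parents = pop[np.argsort(fit)[::-1][: pop_size // 4]]    # truncation selection
        children = parents[rng.integers(0, len(parents), pop_size)]
        pop = children + rng.normal(0, mut_sigma, children.shape)
    fit = np.array([water_uptake(to_fractions(g)) for g in pop])
    return to_fractions(pop[fit.argmax()])

best_profile = optimise_profile()
print("optimal root fraction per layer:", np.round(best_profile, 3))
```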

    Spatial energetics: a thermodynamically-consistent methodology for modelling resource acquisition, distribution, and end-use networks in nature and society

    Resource acquisition, distribution, and end-use (RADE) networks are ubiquitous in natural and human-engineered systems, connecting spatially-distributed points of supply and demand to provide the energy and material resources required by these systems for growth and maintenance. A clear understanding of the dynamics of these networks is crucial to protect those supported and impacted by them, but past modelling efforts are limited in their explicit consideration of spatial size and topology, which are necessary for a thermodynamically realistic representation of the energetics of these networks. This thesis attempts to address these limitations by developing a spatially-explicit modelling framework for generalised energetic resource flows, as occurring in ecological and coupled socio-ecological systems. The methodology utilises equations from electrical engineering to operationalise the first and second laws of thermodynamics in flow calculations, and places these within an optimisation algorithm to replicate the selective pressure to maximise resource transfer and consumption and minimise energetic transport costs. The framework is applied to the nectar collection networks of A. mellifera as a proof of concept. The promising performance of the methodology in calculating the energetics of these networks in a flow-conserving manner, replicating attributes of foraging networks, and generating network structures consistent with those of known RADE networks demonstrates the validity of the methodology and suggests several potential avenues for future refinement and application.
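
    A minimal sketch of the electrical-network analogy on an assumed toy network: treating resource potential like voltage and link conductance like inverse resistance, flows follow from a graph-Laplacian solve and the transport cost appears as dissipation. The optimisation layer described in the abstract is omitted, and the network itself is hypothetical.

```python
import numpy as np

# Small illustrative network: node 0 is a source, node 4 a sink, others are junctions.
# Edges are (i, j, conductance); conductance plays the role of 1/resistance.
edges = [(0, 1, 2.0), (0, 2, 1.0), (1, 2, 1.0), (1, 3, 1.5), (2, 3, 0.5), (3, 4, 2.0)]
n_nodes = 5
injection = np.zeros(n_nodes)
injection[0], injection[4] = 1.0, -1.0   # one unit of resource in at the source, out at the sink

def solve_flows(edges, injection):
    """Solve the linear 'Kirchhoff' flow problem: potentials from the weighted graph
    Laplacian, then per-edge flows and total dissipation (the transport cost)."""
    L = np.zeros((n_nodes, n_nodes))
    for i, j, g in edges:
        L[i, i] += g
        L[j, j] += g
        L[i, j] -= g
        L[j, i] -= g
    # Fix the potential of node 4 to zero so the reduced system is non-singular.
    keep = list(range(n_nodes - 1))
    phi = np.zeros(n_nodes)
    phi[keep] = np.linalg.solve(L[np.ix_(keep, keep)], injection[keep])
    flows = {(i, j): g * (phi[i] - phi[j]) for i, j, g in edges}
    dissipation = sum(g * (phi[i] - phi[j]) ** 2 for i, j, g in edges)
    return phi, flows, dissipation

phi, flows, dissipation = solve_flows(edges, injection)
print("node potentials:", np.round(phi, 3))
print("edge flows:", {e: round(f, 3) for e, f in flows.items()})
print("total dissipation (transport cost):", round(dissipation, 4))
```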

    Automatic identification of mechanical parts for robotic disassembly using deep neural network techniques

    This work addressed the automatic visual identification of mechanical objects from 3D camera scans, and is part of a wider project focusing on automatic disassembly for remanufacturing. The main challenge of the task was the intrinsic uncertainty about the state of end-of-life products, which required a highly robust identification system. The use of point cloud models also implied the need to deal with significant computational overheads. The state-of-the-art PointNet deep neural network was chosen as the classifier system, due to its learning capabilities, suitability for processing 3D models, and ability to recognise objects irrespective of their pose. To obviate the need for collecting a large set of training models, it was decided that PointNet would be trained using examples generated from 3D CAD models, and used on scans of real objects. Different tests were carried out to assess PointNet's ability to deal with imprecise sensor readings and partial views. Due to pandemic-related restrictions on access to the lab, it was not possible to collect a sufficiently systematic set of scans of physical objects. Various tests were thus carried out using combinations of CAD models of mechanical and everyday objects, primitive geometric shapes, and real scans of everyday objects from popular machine vision benchmarks. The investigation confirmed PointNet's ability to recognise complex mechanical objects and irregular everyday shapes with good accuracy, generalising the results of learning from geometric shapes and CAD models. The performance of PointNet was not significantly affected by the use of partial views of the objects, a very common case in industrial applications. PointNet showed some limitations when tasked with recognising noisy scenes, and a practical solution was suggested to minimise this problem. To reduce the computational complexity of training a deep architecture on large data sets of 3D scenes, a predator-prey coevolutionary scheme was devised. The proposed algorithm evolves subsets of the training set, selecting for these subsets the most difficult examples. The remaining training samples are discarded by the evolutionary procedure, which thus reduces the number of examples presented to the classifier. The experimental results showed that this economy of training samples reduces the execution time of the learning procedure without affecting the neural network's recognition accuracy. This simplification of the learning procedure is of general importance for the whole deep learning field, since practical implementations are often hindered by the complexity of the training process.
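
    A much-simplified sketch of the predator-prey idea: candidate training subsets are scored by how difficult they are for the current model, the hardest subsets survive and spawn mutated copies, and the classifier trains only on the hardest subset found so far. A toy logistic classifier on synthetic vectors stands in for PointNet on point clouds; every detail below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic 2-class data standing in for the 3D-scan classification task.
X = rng.normal(0, 1, (2000, 10))
w_true = rng.normal(0, 1, 10)
y = (X @ w_true + rng.normal(0, 1.0, 2000) > 0).astype(float)

def train(w, X, y, lr=0.1, steps=50):
    """Gradient-descent training of a logistic 'network' on the given subset."""
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def loss(w, X, y):
    p = np.clip(1 / (1 + np.exp(-(X @ w))), 1e-9, 1 - 1e-9)
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean())

SUBSET, POP, GENS = 200, 20, 30
w = np.zeros(10)
subsets = [rng.choice(len(X), SUBSET, replace=False) for _ in range(POP)]
for gen in range(GENS):
    # "Predator" step: subsets are scored by how hard they are for the current model.
    difficulty = np.array([loss(w, X[s], y[s]) for s in subsets])
    order = np.argsort(difficulty)[::-1]
    survivors = [subsets[i] for i in order[: POP // 2]]
    # Offspring subsets: copy a survivor and mutate a few of its indices.
    children = []
    for _ in range(POP - len(survivors)):
        child = survivors[rng.integers(len(survivors))].copy()
        swap = rng.integers(0, SUBSET, 20)
        child[swap] = rng.choice(len(X), 20, replace=False)
        children.append(child)
    subsets = survivors + children
    # "Prey" step: the classifier trains only on the hardest subset found.
    w = train(w, X[subsets[0]], y[subsets[0]])

print("full-set loss after coevolutionary training:", round(loss(w, X, y), 3))
```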

    Shape and topology optimisation for manufactured products


    Speciation in the Deep Sea: Multi-Locus Analysis of Divergence and Gene Flow between Two Hybridizing Species of Hydrothermal Vent Mussels

    Background: Reconstructing the history of divergence and gene flow between closely-related organisms has long been a difficult task of evolutionary genetics. Recently, new approaches based on coalescent theory have been developed to test for the existence of gene flow during the process of divergence. The deep sea is a motivating place to apply these new approaches. Differentiation by adaptation can be driven by the heterogeneity of the hydrothermal environment, while populations should not have been strongly perturbed by climatic oscillations, the main cause of geographic isolation at the surface. Methodology/Principal Findings: Samples of DNA sequences were obtained for seven nuclear loci and a mitochondrial locus in order to conduct a multi-locus analysis of divergence and gene flow between two closely related and hybridizing species of hydrothermal vent mussels, Bathymodiolus azoricus and B. puteoserpentis. The analysis revealed that (i) the two species started to diverge approximately 0.760 million years ago, (ii) the B. azoricus population size was 2 to 5 times greater than those of B. puteoserpentis and the ancestral population, and (iii) gene flow between the two species occurred over the complete species range and was mainly asymmetric, at least for the chromosomal regions studied. Conclusions/Significance: A long history of gene flow has been detected between the two Bathymodiolus species. However, it proved very difficult to conclusively distinguish secondary introgression from ongoing parapatric differentiation. As powerful as coalescent approaches can be, we are confronted by the fact that natural populations often deviate from the standard assumptions of the underlying model. A more direct observation of the history of recombination at one of the seven loci studied suggests an initial period of allopatric differentiation during which recombination was blocked between lineages. Even in the deep sea, geographic isolation may well be a crucial promoter of speciation.
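
    The kind of question asked here, whether gene flow accompanied divergence, can be illustrated with an assumed toy structured-coalescent simulation: the distribution of between-species coalescence times differs between a strict-isolation scenario and an isolation-with-migration scenario. The simplified two-lineage model and all parameter values below are illustrative only, not the analysis used in the study.

```python
import numpy as np

rng = np.random.default_rng(4)

def pairwise_coalescence_time(t_split, n_e, n_anc, mig_rate):
    """Backward-in-time coalescence of one lineage from each of two populations
    that split t_split generations ago (simple structured coalescent with
    symmetric migration at rate mig_rate per lineage per generation)."""
    t = 0.0
    pops = [0, 1]                                # current deme of each lineage
    while t < t_split:
        coal_rate = 1.0 / (2 * n_e) if pops[0] == pops[1] else 0.0
        total_rate = coal_rate + 2 * mig_rate    # either lineage may migrate
        if total_rate == 0.0:
            break                                # nothing can happen before the split
        wait = rng.exponential(1.0 / total_rate)
        if t + wait >= t_split:
            break                                # next event would fall after the split
        t += wait
        if rng.random() < coal_rate / total_rate:
            return t                             # coalescence before the split
        i = rng.integers(2)                      # otherwise one lineage migrates
        pops[i] = 1 - pops[i]
    # In the ancestral panmictic population, only coalescence remains.
    return t_split + rng.exponential(2 * n_anc)

T_SPLIT, N_E, N_ANC = 100_000, 50_000, 30_000    # illustrative values only
with_mig = [pairwise_coalescence_time(T_SPLIT, N_E, N_ANC, 1e-5) for _ in range(2000)]
no_mig = [pairwise_coalescence_time(T_SPLIT, N_E, N_ANC, 0.0) for _ in range(2000)]
print("mean between-species coalescence time, with gene flow:", int(np.mean(with_mig)))
print("mean between-species coalescence time, strict isolation:", int(np.mean(no_mig)))
```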