
    G-SELC: Optimization by sequential elimination of level combinations using genetic algorithms and Gaussian processes

    Identifying promising compounds from a vast collection of feasible compounds is an important yet challenging problem in the pharmaceutical industry. An efficient solution to this problem will help reduce expenditure at the early stages of drug discovery. In an attempt to solve this problem, Mandal, Wu and Johnson [Technometrics 48 (2006) 273--283] proposed the SELC algorithm. Although powerful, it fails to extract substantial information from the data to guide the search efficiently, as the methodology is not based on any statistical modeling. The proposed approach uses Gaussian Process (GP) modeling to improve upon SELC, and is hence named G-SELC. The performance of the proposed methodology is illustrated using four- and five-dimensional test functions. Finally, we implement the new algorithm on a real pharmaceutical data set to find a group of chemical compounds with optimal properties. Comment: Published at http://dx.doi.org/10.1214/08-AOAS199 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
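    The guiding idea, fitting a Gaussian Process surrogate to the level combinations evaluated so far and ranking unexplored combinations by their predictive promise, can be sketched as follows. This is only a minimal illustration, assuming a plain RBF kernel, an expected-improvement score, and a toy design with three factors at three levels each; the function names, length-scale and data are invented for the sketch and are not taken from the paper.

        import numpy as np
        from scipy.stats import norm

        def rbf_kernel(A, B, length_scale=1.0):
            # Squared-exponential kernel between the rows of A and B.
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-0.5 * d2 / length_scale ** 2)

        def gp_posterior(X_train, y_train, X_cand, noise=1e-6, length_scale=1.0):
            # Standard GP regression: predictive mean and variance at the candidates.
            K = rbf_kernel(X_train, X_train, length_scale) + noise * np.eye(len(X_train))
            Ks = rbf_kernel(X_train, X_cand, length_scale)
            L = np.linalg.cholesky(K)
            alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
            mu = Ks.T @ alpha
            v = np.linalg.solve(L, Ks)
            var = np.clip(1.0 - (v ** 2).sum(0), 1e-12, None)  # prior variance is 1 for this kernel
            return mu, var

        def expected_improvement(mu, var, y_best):
            # How much each candidate is expected to improve on the best response so far.
            sigma = np.sqrt(var)
            z = (mu - y_best) / sigma
            return (mu - y_best) * norm.cdf(z) + sigma * norm.pdf(z)

        # Toy setting: 3 factors, each with levels {0, 1, 2}, 8 runs already evaluated.
        rng = np.random.default_rng(0)
        X_train = rng.integers(0, 3, size=(8, 3)).astype(float)
        y_train = -((X_train - 1.5) ** 2).sum(1) + rng.normal(0, 0.1, 8)  # stand-in response
        X_cand = np.array([[i, j, k] for i in range(3) for j in range(3) for k in range(3)], float)

        mu, var = gp_posterior(X_train, y_train, X_cand)
        ei = expected_improvement(mu, var, y_train.max())
        print("most promising level combination:", X_cand[np.argmax(ei)])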

    Approximating Pareto frontier using a hybrid line search approach

    The aggregation of objectives is one of the simplest and most widely used approaches in multiple criteria programming. However, it is well known that this technique can fail in several respects when determining the Pareto frontier. This paper proposes a new approach for multicriteria optimization, which aggregates the objective functions and uses a line search method to locate an approximate efficient point. Once the first Pareto solution is obtained, a simplified version of the method is used, in the context of Pareto dominance, to obtain a set of efficient points that ensures a thorough distribution of solutions along the Pareto frontier. In its current form, the proposed technique is well suited to problems with multiple objectives (it is not limited to bi-objective problems) and requires the objective functions to be twice continuously differentiable. To assess the effectiveness of this approach, experiments were performed and the results compared with two recent, well-known population-based metaheuristics, ParEGO and NSGA-II. Compared to ParEGO and NSGA-II, the proposed approach not only achieves better convergence to the Pareto frontier but also yields a good distribution of solutions. From a computational point of view, both stages of the line search converge within a short time (on average about 150 ms for the first stage and about 20 ms for the second). Apart from this, the proposed technique is very simple and easy to implement and use for solving multiobjective problems. CNCSIS IDEI 2412, Romania.
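    As a rough sketch of the overall scheme, one can aggregate the objectives with a weighted sum, solve each scalarized problem with a gradient-based routine, and then keep only the non-dominated points, as below. The test objectives, the weight sweep, and the use of SciPy's BFGS solver (which performs line searches internally) are assumptions standing in for the paper's hybrid line search, not a reproduction of it.

        import numpy as np
        from scipy.optimize import minimize

        # Two smooth, conflicting objectives on a shared variable x in R^2
        # (a standard convex test pair, not the problems used in the paper).
        def f1(x): return x[0] ** 2 + x[1] ** 2
        def f2(x): return (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2

        def scalarized(x, w):
            # Weighted-sum aggregation of the two objectives.
            return w * f1(x) + (1.0 - w) * f2(x)

        def dominates(a, b):
            # a dominates b if it is no worse in every objective and better in at least one.
            return np.all(a <= b) and np.any(a < b)

        # Sweep the weight and solve each aggregated problem with a line-search based
        # quasi-Newton method (BFGS here).
        candidates = []
        for w in np.linspace(0.05, 0.95, 19):
            res = minimize(scalarized, x0=np.zeros(2), args=(w,), method="BFGS")
            candidates.append(np.array([f1(res.x), f2(res.x)]))

        # Keep only the non-dominated points as the approximate Pareto frontier.
        frontier = [p for p in candidates
                    if not any(dominates(q, p) for q in candidates if q is not p)]
        print(len(frontier), "non-dominated points approximate the frontier")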

    Modelling and planning reliable wireless sensor networks based on multi-objective optimization genetic algorithm with changeable length

    Wireless sensor networks (WSNs) have shown their potential in various applications, bringing benefits to users in many working areas. However, due to the diversity of deployment environments and resource constraints, it is difficult to predict the performance of a topology. Besides connectivity, the coverage, cost, network longevity and service quality should all be considered during the planning procedure. Efficiently planning a reliable WSN is therefore a challenging task that requires designers to cope with comprehensive and interdisciplinary knowledge. A WSN planning method is proposed in this work to tackle these challenges and efficiently deploy reliable WSNs. First, the above metrics are modeled more comprehensively and practically than in other works. In particular, a 3D ray-tracing method is used to model the radio link and sensing signal, which are sensitive to obstruction by obstacles; network routing is constructed using the AODV protocol; and the network longevity, packet delay and packet drop rate are obtained by simulating practical events in the WSNet simulator, which, to the best of our knowledge, is the first time a network simulator has been incorporated into a planning algorithm. Moreover, a multi-objective optimization algorithm is developed to cater for the characteristics of WSNs. The network size is changeable during evolution, while crossovers and mutations are limited by certain constraints to eliminate invalid modifications and improve computational efficiency. The capability of providing multiple optimized solutions simultaneously allows users to make their own decisions, and the results are more comprehensively optimized than those of other state-of-the-art algorithms. Practical WSN deployments are also realized for both indoor and outdoor environments, and the measurements coincide well with the generated optimized topologies, which demonstrates the efficiency and reliability of the proposed algorithm.
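    A stripped-down sketch of the changeable-length idea is given below: each chromosome is a variable-length list of sensor coordinates, mutation may add, drop or move a node within the deployment area, and selection keeps the Pareto set over coverage and node count. The area size, sensing radius, objectives and operators are illustrative assumptions; the paper's 3D ray tracing, AODV routing and WSNet-based metrics are not reproduced here.

        import random

        AREA, RADIUS, GRID = 100.0, 18.0, 10          # deployment square, sensing radius, grid density
        TARGETS = [(AREA * (i + .5) / GRID, AREA * (j + .5) / GRID)
                   for i in range(GRID) for j in range(GRID)]

        def coverage(nodes):
            # Fraction of target grid points within sensing range of at least one node.
            hit = sum(any((tx - x) ** 2 + (ty - y) ** 2 <= RADIUS ** 2 for x, y in nodes)
                      for tx, ty in TARGETS)
            return hit / len(TARGETS)

        def objectives(nodes):
            # Both objectives expressed as values to minimize: uncovered fraction and node count.
            return (1.0 - coverage(nodes), len(nodes))

        def mutate(nodes):
            # Length-changing mutation: add, drop, or move one node, kept inside the area.
            nodes = list(nodes)
            op = random.choice(("add", "drop", "move"))
            if op == "add" or len(nodes) <= 1:
                nodes.append((random.uniform(0, AREA), random.uniform(0, AREA)))
            elif op == "drop":
                nodes.pop(random.randrange(len(nodes)))
            else:
                i = random.randrange(len(nodes))
                x, y = nodes[i]
                nodes[i] = (min(AREA, max(0.0, x + random.gauss(0, 5))),
                            min(AREA, max(0.0, y + random.gauss(0, 5))))
            return nodes

        def dominates(a, b):
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        # Evolve a population of variable-length topologies and keep its Pareto set.
        pop = [[(random.uniform(0, AREA), random.uniform(0, AREA))
                for _ in range(random.randint(3, 12))] for _ in range(40)]
        for _ in range(100):
            pop += [mutate(random.choice(pop)) for _ in range(40)]
            scored = [(objectives(p), p) for p in pop]
            pareto = [p for s, p in scored if not any(dominates(t, s) for t, _ in scored)]
            pop = pareto + random.sample(pop, max(0, 40 - len(pareto)))  # elitist refill

        for s, p in sorted((objectives(p), p) for p in pareto):
            print(len(p), "nodes ->", round(100 * (1 - s[0]), 1), "% coverage")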

    Modeling and Optimization of Stochastic Process Parameters in Complex Engineering Systems

    For quality engineering researchers and practitioners, a wide range of statistical tools and techniques are available for use in the manufacturing industry. The goal in applying these tools has always been to improve or optimize a product or process in terms of efficiency, production cost, or product quality. While tremendous progress has been made in the design of quality optimization models, there remains a significant gap between existing research and the needs of the industrial community. Contemporary manufacturing processes are inherently more complex - they may involve multiple stages of production or require the assessment of multiple quality characteristics. New and emerging fields, such as nanoelectronics and molecular biometrics, demand degrees of precision and estimation that are not attainable with current tools and measures. And since most researchers focus on a specific type of characteristic or a given set of conditions, there are many critical industrial processes to which existing models are not applicable. Thus, the objective of this research is to improve existing techniques by expanding not only their range of applicability but also their ability to model a given process more realistically. Several quality models are proposed that seek greater precision in the estimation of process parameters and the removal of assumptions that limit their breadth and scope. An extension is made to examine the effectiveness of these models both under non-standard conditions and in areas that have not previously been investigated. Following an in-depth literature review, various quality models are proposed, and numerical examples are used to validate these methodologies.

    An Integrated Probability-Based Approach for Multiple Response Surface Optimization

    Nearly all real-life systems have multiple quality characteristics, for which individual modeling and optimization approaches cannot provide a balanced, compromising solution. Since performance, cost, schedule, and consistency remain the basics of any design process, design configurations are expected to meet several conflicting requirements at the same time. Correlation between responses and model parameter uncertainty demand extra scrutiny and prevent practitioners from studying responses in isolation. Like any other multi-objective problem, the multi-response optimization problem requires trade-offs and compromises, which in turn make the available algorithms difficult to generalize to all design problems. Although multiple modeling and optimization approaches have been widely used in different industries, and several software applications are available, there is no perfect solution to date, and this is likely to remain so in the future. Therefore, the problem-specific structure, diversity, and complexity of the available approaches require careful consideration by quality engineers in their applications.
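    One way to read the probability-based idea is to estimate, at each candidate setting, the joint probability that all correlated responses fall inside their specification limits, and then to maximize that probability. The sketch below does this by Monte Carlo sampling from an assumed multivariate normal around fitted response surfaces; the response models, residual covariance, specification limits and grid search are invented for illustration and are not the paper's formulation.

        import numpy as np

        rng = np.random.default_rng(1)

        # Assumed second-order response surfaces for two correlated responses
        # (illustrative coefficients, not fitted to any real data).
        def mean_responses(x):
            y1 = 80 + 5 * x[0] - 3 * x[1] - 2 * x[0] ** 2 - 1.5 * x[1] ** 2 + x[0] * x[1]
            y2 = 50 - 2 * x[0] + 4 * x[1] - 1.0 * x[0] ** 2 - 2.0 * x[1] ** 2
            return np.array([y1, y2])

        SIGMA = np.array([[4.0, 2.5],      # residual covariance capturing the
                          [2.5, 9.0]])     # correlation between the two responses
        SPEC_LOW = np.array([78.0, 45.0])  # lower specification limits
        SPEC_HIGH = np.array([90.0, 55.0]) # upper specification limits

        def prob_conformance(x, n_draws=5000):
            # Monte Carlo estimate of P(all responses within their spec limits at setting x).
            draws = rng.multivariate_normal(mean_responses(x), SIGMA, size=n_draws)
            inside = np.all((draws >= SPEC_LOW) & (draws <= SPEC_HIGH), axis=1)
            return inside.mean()

        # Grid search over the coded factor space [-1, 1]^2 for the most reliable setting.
        grid = np.linspace(-1, 1, 21)
        best_p, best_x = max((prob_conformance((a, b)), (a, b)) for a in grid for b in grid)
        print("best setting:", tuple(round(float(v), 2) for v in best_x),
              "estimated conformance probability:", round(best_p, 3))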

    Incorporating Memory and Learning Mechanisms Into Meta-RaPS

    Due to the rapid increase in the dimensionality and complexity of real-life problems, it has become more difficult to find optimal solutions using exact mathematical methods alone. The need to find near-optimal solutions in an acceptable amount of time is a challenge that motivates the development of more sophisticated approaches. One answer to this challenge is the implementation of metaheuristic approaches; a more powerful answer might be reached by incorporating intelligence into metaheuristics. Meta-RaPS (Metaheuristic for Randomized Priority Search) is a metaheuristic that creates high-quality solutions for discrete optimization problems. It is proposed that incorporating memory and learning mechanisms into Meta-RaPS, which is currently classified as a memoryless metaheuristic, can help the algorithm produce higher-quality results. The proposed Meta-RaPS versions were created by taking different perspectives on learning. The first approach taken is Estimation of Distribution Algorithms (EDA), a stochastic learning technique that creates a probability distribution for each decision variable to generate new solutions. The second Meta-RaPS version was developed by utilizing a machine learning algorithm, Q-learning, which has been successfully applied to optimization problems whose output is a sequence of actions. In the third Meta-RaPS version, Path Relinking (PR) was implemented as a post-optimization method in which the algorithm learns good attributes by memorizing the best solutions and follows them to reach better solutions. The fourth proposed version of Meta-RaPS presented another form of learning through its ability to adaptively tune its parameters. The efficiency of these approaches motivated us to redesign Meta-RaPS by removing the improvement phase and adding a more sophisticated Path Relinking method. The new Meta-RaPS could solve even the largest problems in much less time while maintaining solution quality. To evaluate their performance, all introduced versions were tested on the 0-1 Multidimensional Knapsack Problem (MKP). Among the proposed algorithms, Meta-RaPS PR and Meta-RaPS Q-learning showed the best and worst performance, respectively. Nevertheless, they all performed better than other approaches to the 0-1 MKP in the literature.
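    A compact sketch of the construction-with-memory idea on the 0-1 MKP is given below: a Meta-RaPS-style greedy randomized construction whose item priorities are biased by an EDA-like probability vector updated from the incumbent best solution. The instance data, priority rule, parameter values and update rule are illustrative assumptions, not the dissertation's implementations of EDA, Q-learning or Path Relinking.

        import random

        random.seed(7)

        # A small random 0-1 multidimensional knapsack instance (illustrative data).
        N_ITEMS, N_DIMS = 30, 5
        profit = [random.randint(10, 100) for _ in range(N_ITEMS)]
        weight = [[random.randint(5, 30) for _ in range(N_ITEMS)] for _ in range(N_DIMS)]
        capacity = [int(0.5 * sum(w)) for w in weight]

        def feasible(sol, item):
            # True if `item` can be added without violating any knapsack constraint.
            return all(sum(weight[d][j] for j in sol) + weight[d][item] <= capacity[d]
                       for d in range(N_DIMS))

        def value(sol):
            return sum(profit[j] for j in sol)

        # Priority rule: profit per unit of aggregate weight (a common MKP heuristic).
        def density(j):
            return profit[j] / (1 + sum(weight[d][j] for d in range(N_DIMS)))

        def construct(prob, p_best=0.6, restrict=0.15):
            # Greedy randomized construction: with probability p_best take the top-priority
            # feasible item, otherwise pick randomly among the best restrict-fraction of them.
            sol, remaining = [], list(range(N_ITEMS))
            while remaining:
                cand = [j for j in remaining if feasible(sol, j)]
                if not cand:
                    break
                cand.sort(key=lambda j: density(j) * (0.5 + prob[j]), reverse=True)
                if random.random() < p_best:
                    pick = cand[0]
                else:
                    pick = random.choice(cand[:max(1, int(restrict * len(cand)))])
                sol.append(pick)
                remaining.remove(pick)
            return sol

        # EDA-style memory: item-inclusion probabilities nudged toward the best solution found.
        prob = [0.5] * N_ITEMS
        best_sol, best_val = [], 0
        for _ in range(200):
            sol = construct(prob)
            if value(sol) > best_val:
                best_sol, best_val = sol, value(sol)
            for j in range(N_ITEMS):
                target = 1.0 if j in best_sol else 0.0
                prob[j] += 0.1 * (target - prob[j])   # learning rate 0.1

        print("best profit found:", best_val, "using", len(best_sol), "items")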