54 research outputs found

    Benchmarks for evaluating optimization algorithms and benchmarking MATLAB derivative-free optimizers for practitioners’ rapid access

    MATLABŸ has five built-in derivative-free optimizers (DFOs), comprising two direct search algorithms (simplex search, pattern search) and three heuristic algorithms (simulated annealing, particle swarm optimization, and the genetic algorithm), plus a few in the official user repository, such as Powell's conjugate (PC) direct search recommended by MathWorksŸ. To help a practicing engineer or scientist choose the MATLAB DFO most suitable for their application at hand, this paper presents a set of five benchmarking criteria for optimization algorithms and then uses four widely adopted benchmark problems to evaluate the DFOs systematically. Comprehensive tests indicate that PC is most suitable for a unimodal or relatively simple problem, whilst the genetic algorithm (with elitism in MATLAB, GAe) is most suitable for a relatively complex, multimodal, or unknown problem. This paper also provides an amalgamated scoring system and a decision tree for specific objectives, in addition to recommending the GAe for optimizing structures and categories as well as for offline global search, together with PC for local parameter tuning or online adaptation. To verify these recommendations, all six DFOs are further tested in a case study optimizing a popular nonlinear filter. The results corroborate the benchmarking results. It is expected that the benchmarking system will help practitioners select optimizers for practical applications.
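
    A minimal sketch of the kind of head-to-head comparison described above, written in Python with SciPy stand-ins for the MATLAB optimizers: Nelder-Mead simplex plays the role of the direct-search DFOs and differential evolution the role of the heuristic global ones. The Rosenbrock and Rastrigin test functions are illustrative choices here, not the paper's benchmark suite.

        import numpy as np
        from scipy.optimize import minimize, differential_evolution, rosen

        def rastrigin(x):
            """Multimodal benchmark: global minimum 0 at x = 0."""
            x = np.asarray(x)
            return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

        bounds = [(-5.12, 5.12)] * 5
        x0 = np.full(5, 3.0)

        for name, fun in [("Rosenbrock (unimodal)", rosen),
                          ("Rastrigin (multimodal)", rastrigin)]:
            # Direct search: a local, derivative-free simplex method.
            local = minimize(fun, x0, method="Nelder-Mead")
            # Heuristic global search, loosely analogous to GAe here.
            best = differential_evolution(fun, bounds, seed=0)
            print(f"{name}: simplex f={local.fun:.3g}, global f={best.fun:.3g}")

    On the unimodal problem both methods reach comparable minima, while on the multimodal one the simplex search tends to stall in a local basin, mirroring the recommendation pattern described above.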

    Real-Time Central Processing Unit (CPU) Performance Analysis Using the Benchmarking Method

    Technology is advancing rapidly in terms of performance, graphics, bandwidth, and more, affecting many areas of life and work and driving changes in devices and in the performance of the central processing unit. The business world, at both small and large scales, now relies on advances in information technology for the smooth running of its operations. The method used, benchmarking, is a process of profiling hardware and processes against a yardstick of expected performance. The test procedure evaluates central processing unit (CPU) performance at the hardware level, covering the processor, RAM, graphics, and other components. The tests conducted on the CPUs show that with an i3 processor, RAM usage was 3.1 GB, GPU usage 3%, disk usage 1%, network usage 7.7 Mbps, and power supply usage very low; with an i5 processor, RAM usage was 4.2 GB, GPU 0%, disk 0%, network 7.7 Mbps, and power supply usage low; with an i7 processor, RAM usage was 2.5 GB, GPU 9%, disk 9%, network 104 Kbps, and power supply usage high.
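
    A minimal sketch of this kind of real-time resource sampling in Python using the psutil library; the one-second sampling window and the choice of counters are assumptions, and GPU and power-supply readings require vendor-specific tooling, so they are omitted here.

        import psutil

        # Sample CPU utilization over a 1-second window.
        cpu_pct = psutil.cpu_percent(interval=1.0)
        ram = psutil.virtual_memory()        # total/used bytes and percent
        disk = psutil.disk_io_counters()     # cumulative read/write bytes
        net = psutil.net_io_counters()       # cumulative sent/received bytes

        print(f"CPU: {cpu_pct:.1f}%")
        print(f"RAM used: {ram.used / 2**30:.2f} GiB ({ram.percent}%)")
        print(f"Disk I/O so far: {disk.read_bytes + disk.write_bytes} bytes")
        print(f"Network so far: {net.bytes_sent + net.bytes_recv} bytes")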

    Derivative-Free Optimization with Proxy Models for Oil Production Platforms Sharing a Subsea Gas Network

    The deployment of offshore platforms for the extraction of oil and gas from subsea reservoirs presents unique challenges, particularly when multiple platforms are connected by a subsea gas network. In the Santos basin, the aim is to maximize oil production while maintaining safe and sustainable levels of CO2 content and pressure in the gas stream. To address these challenges, a novel methodology has been proposed that uses boundary conditions to coordinate the use of shared resources among the platforms. This approach decouples the optimization of oil production on the platforms from the coordination of shared resources, allowing for more efficient and effective operation of the offshore oilfield. In addition to this methodology, a fast and accurate proxy model has been developed for gas pipeline networks. This model allows for efficient optimization of the gas flow through the network, taking into account the physical and operational constraints of the system. In experiments, the use of the proposed proxy model in tandem with derivative-free optimization algorithms resulted in an average error of less than 5% in pressure calculations and a processing time up to 1000 times faster than the phenomenological simulator. These results demonstrate the effectiveness and efficiency of the proposed methodology in optimizing oil production on offshore platforms connected by a subsea gas network, while maintaining safe and sustainable levels of CO2 content and pressure in the gas stream.
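
    A minimal sketch of the proxy-plus-derivative-free-optimization pattern the abstract describes, in Python. A Gaussian-process surrogate stands in for the pipeline-network proxy and a toy quadratic stands in for the phenomenological simulator; both, along with the pressure cap and the production measure, are assumptions for illustration.

        import numpy as np
        from scipy.optimize import differential_evolution
        from sklearn.gaussian_process import GaussianProcessRegressor

        def simulator(q):
            """Toy stand-in for the slow phenomenological simulator:
            maps two gas flow rates to a network pressure."""
            return 0.05 * q[0]**2 + 0.03 * q[1]**2 + 0.2 * q[0] * q[1]

        # Fit the fast proxy on a handful of expensive simulator runs.
        rng = np.random.default_rng(0)
        X = rng.uniform(0.0, 10.0, size=(50, 2))
        y = np.array([simulator(q) for q in X])
        proxy = GaussianProcessRegressor().fit(X, y)

        # The derivative-free search evaluates only the cheap proxy;
        # the pressure limit enters as a penalty term.
        P_MAX = 4.0

        def objective(q):
            pressure = proxy.predict(q.reshape(1, -1))[0]
            production = q.sum()  # toy production measure
            return -production + 1e3 * max(0.0, pressure - P_MAX)

        res = differential_evolution(objective, bounds=[(0, 10), (0, 10)], seed=0)
        print("flow rates:", res.x, "objective:", res.fun)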

    Gradient boosting in automatic machine learning: feature selection and hyperparameter optimization

    The goal of automatic machine learning (AutoML) is to automate all aspects of model selection in (supervised) predictive modeling. This thesis deals with gradient boosting techniques in the context of AutoML, with a focus on gradient tree boosting and component-wise gradient boosting. Both techniques share a common methodology, but their goals are quite different: while gradient tree boosting is widely used in machine learning as a powerful prediction algorithm, the strength of component-wise gradient boosting lies in feature selection and the modeling of high-dimensional data. Extensions of component-wise gradient boosting to multidimensional prediction functions are considered as well. The challenge of hyperparameter optimization for these algorithms is discussed, focusing on Bayesian optimization and efficient early-stopping strategies. The difficulty of optimizing these algorithms is shown by a large-scale random search over the hyperparameters of machine learning algorithms, which can form the foundation of new AutoML and meta-learning approaches. Furthermore, advanced feature selection strategies are summarized and a new method based on shadow features is introduced. Finally, an AutoML approach based on the results and best practices for feature selection and hyperparameter optimization is proposed, with the goal of simplifying and stabilizing AutoML while maintaining high prediction accuracy. This approach is compared to AutoML methods that use much more complex search spaces and ensembling techniques. Four software packages for the statistical programming language R have been newly developed or extended as part of this thesis: mlrMBO, a general framework for Bayesian optimization; autoxgboost, an automatic machine learning framework that heavily utilizes gradient tree boosting; compboost, a modular framework for component-wise boosting written in C++; and gamboostLSS, a framework for component-wise boosting of generalized additive models for location, scale, and shape.
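
    A minimal sketch of shadow-feature selection of the kind the thesis builds on: each feature gets a permuted copy that carries no signal by construction, and only features whose importance beats the strongest shadow are kept. The thresholding rule and the gradient-boosting importance measure used here are assumptions modeled on the general shadow-feature idea, not the thesis's exact algorithm.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import GradientBoostingClassifier

        X, y = make_classification(n_samples=500, n_features=10,
                                   n_informative=3, random_state=0)
        rng = np.random.default_rng(0)

        # Shadow features: each column permuted independently, so any
        # apparent importance they earn is pure noise.
        shadows = rng.permuted(X, axis=0)
        X_aug = np.hstack([X, shadows])

        model = GradientBoostingClassifier(random_state=0).fit(X_aug, y)
        imp = model.feature_importances_
        real, shadow = imp[:X.shape[1]], imp[X.shape[1]:]

        # Keep only features more important than the best shadow.
        selected = np.where(real > shadow.max())[0]
        print("selected feature indices:", selected)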

    Vision 2040: A Roadmap for Integrated, Multiscale Modeling and Simulation of Materials and Systems

    Over the last few decades, advances in high-performance computing, new materials characterization methods, and, more recently, an emphasis on integrated computational materials engineering (ICME) and additive manufacturing have been a catalyst for multiscale modeling and simulation-based design of materials and structures in the aerospace industry. While these advances have driven significant progress in the development of aerospace components and systems, that progress has been limited by persistent technology and infrastructure challenges that must be overcome to realize the full potential of integrated materials and systems design and simulation modeling throughout the supply chain. As a result, NASA's Transformational Tools and Technology (TTT) Project sponsored a study (performed by a diverse team led by Pratt & Whitney) to define the potential 25-year future state required for integrated multiscale modeling of materials and systems (e.g., load-bearing structures) to accelerate the pace and reduce the expense of innovation in future aerospace and aeronautical systems. This report describes the findings of this 2040 Vision study (e.g., the 2040 vision state; the required interdependent core technical work areas, or Key Elements (KEs); identified gaps and actions to close those gaps; and major recommendations). It constitutes a community consensus document, as it is the result of input from over 450 professionals obtained via: 1) four society workshops (AIAA, NAFEMS, and two TMS), 2) a community-wide survey, and 3) the establishment of 9 expert panels (one per KE), consisting on average of 10 non-team members from academia, government, and industry, to review and update content and to prioritize gaps and actions. The study envisions the development of a cyber-physical-social ecosystem comprised of experimentally verified and validated computational models, tools, and techniques, along with the associated digital tapestry, that impacts the entire supply chain to enable cost-effective, rapid, and revolutionary design of fit-for-purpose materials, components, and systems. Although the vision focuses on aeronautics and space applications, it is believed that other engineering communities (e.g., automotive, biomedical, etc.) can benefit as well from the proposed framework with only minor modifications. Finally, it is TTT's hope and desire that this vision provides both public and private research and development decision makers with the strategic guidance needed to make the proposed 2040 vision state a reality and thereby significantly advance the United States' global competitiveness.

    Compilation and Code Optimization for Data Analytics

    The trade-offs between the use of modern high-level and low-level programming languages in constructing complex software artifacts are well known. High-level languages allow for greater programmer productivity: abstraction and genericity allow for the same functionality to be implemented with significantly less code compared to low-level languages. Modularity, object-orientation, functional programming, and powerful type systems allow programmers not only to create clean abstractions and protect them from leaking, but also to define code units that are reusable and easily composable, and software architectures that are adaptable and extensible. The abstraction, succinctness, and modularity of high-level code help to avoid software bugs and facilitate debugging and maintenance. The use of high-level languages comes at a performance cost: increased indirection due to abstraction, virtualization, and interpretation, and superfluous work, particularly in the form of temporary memory allocation and deallocation to support objects and encapsulation. As a result, the cost of high-level languages for performance-critical systems may seem prohibitive. The vision of abstraction without regret argues that it is possible to use high-level languages for building performance-critical systems that allow for both productivity and high performance, instead of trading off the former for the latter. In this thesis, we realize this vision for building different types of data analytics systems. Our means of achieving this is by employing compilation. The goal is to compile away expensive language features -- to compile high-level code down to efficient low-level code.
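
    A minimal sketch of the compile-away-abstraction idea in Python: a small, composable expression tree (the high-level abstraction) is translated once into a flat function, so no tree walking or virtual dispatch remains on the per-row hot path. The tiny expression language is an assumption for illustration; the thesis targets full data analytics systems and genuinely low-level code.

        # High-level, composable expression tree (the abstraction).
        class Col:
            def __init__(self, name):
                self.name = name
            def compile(self):
                return f"row[{self.name!r}]"

        class Mul:
            def __init__(self, a, b):
                self.a, self.b = a, b
            def compile(self):
                return f"({self.a.compile()} * {self.b.compile()})"

        def compile_predicate(expr, threshold):
            """One-time code generation: returns a flat function with
            the interpretive overhead compiled away."""
            return eval(f"lambda row: {expr.compile()} > {threshold}")

        pred = compile_predicate(Mul(Col("price"), Col("qty")), 100)
        rows = [{"price": 3.0, "qty": 40}, {"price": 2.0, "qty": 10}]
        print([r for r in rows if pred(r)])  # [{'price': 3.0, 'qty': 40}]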

    Modeling for inversion in exploration geophysics

    Seismic inversion, and more generally geophysical exploration, aims at better understanding the Earth's subsurface, which is one of today's most important challenges. Firstly, it contains natural resources that are critical to our technologies, such as water, minerals, and oil and gas. Secondly, monitoring the subsurface in the context of CO2 sequestration, earthquake detection, and global seismology is of major interest with regard to safety and environmental hazards. However, the technologies to monitor the subsurface or find resources are scientifically extremely challenging. Seismic inversion can be formulated as a mathematical optimization problem that minimizes the difference between field-recorded data and numerically modeled synthetic data. Solving this optimization problem then requires numerically modeling wave propagation, thousands of times, in large three-dimensional representations of part of the Earth's subsurface. The mathematical and computational complexity of this problem therefore calls for software design that abstracts these requirements and facilitates algorithm and software development. My thesis addresses some of the challenges that arise from these problems, mainly the computational cost and access to the right software for research and development. In the first part, I discuss a performance metric that improves on the current runtime-only benchmarks in exploration geophysics. This metric, the roofline model, first provides insight at the hardware level into the performance of a given implementation relative to the maximum achievable performance. Second, this study demonstrates that the choice of numerical discretization has a major impact on the achievable performance depending on the hardware at hand, and shows that a framework that is flexible with respect to the discretization parameters is necessary. In the second part, I introduce and describe Devito, a symbolic finite-difference DSL that provides a high-level interface to the definition of partial differential equations (PDEs) such as the wave equation. From the symbolic definition of a PDE, Devito generates and compiles highly optimized C code on the fly to compute the solution. The combination of the high-level abstractions and the just-in-time compiler enables research in geophysical exploration and PDE-constrained optimization based on the paradigm of separation of concerns. This allows researchers to concentrate on their respective fields of study while having access to computationally performant solvers with a flexible and easy-to-use interface, enabling them to successfully implement complex representations of the physics. The second part of my thesis is split into two sub-parts: the first describes the symbolic application programming interface (API), and the second describes and benchmarks the just-in-time compiler. I end my thesis with concluding remarks, the latest developments, and a brief description of projects that were enabled by Devito.
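
    For reference, the roofline model bounds attainable throughput by min(peak FLOP rate, memory bandwidth × operational intensity), which is what makes the discretization choice hardware-dependent. Below is a minimal sketch of the Devito workflow the abstract describes, following the public API shown in the library's tutorials; the grid size, wave speed, and time step are illustrative assumptions.

        from devito import Grid, TimeFunction, Eq, Operator, solve

        # Symbolic setup: a 2-D grid and a time-dependent field.
        grid = Grid(shape=(101, 101), extent=(1.0, 1.0))
        u = TimeFunction(name="u", grid=grid, time_order=2, space_order=2)

        # Acoustic wave equation u_tt = c**2 * laplace(u), rearranged
        # into an update for the next time step.
        c = 1.5
        pde = u.dt2 - c**2 * u.laplace
        stencil = Eq(u.forward, solve(pde, u.forward))

        # Devito generates and JIT-compiles optimized C for this stencil.
        op = Operator([stencil])
        op.apply(time=100, dt=0.001)
        print(u.data.shape)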

    Management, Technology and Learning for Individuals, Organisations and Society in Turbulent Environments

    This book presents the collection of fifty papers presented at the Second International Conference on BUSINESS SUSTAINABILITY 2011 - Management, Technology and Learning for Individuals, Organisations and Society in Turbulent Environments, held in Póvoa de Varzim, Portugal, from the 22nd to the 24th of June, 2011. The main motive for the meeting was the growing awareness of the importance of the sustainability issue. This importance has emerged from the growing uncertainty of market behaviour, which leads to the characterization of the market, i.e. the environment, as turbulent. Indeed, the characterization of the environment as uncertain and turbulent reflects the fact that the traditional technocratic and/or socio-technical approaches cannot effectively and efficiently deal with the present situation. In other words, the rise of the sustainability issue means the quest for new instruments to deal with uncertainty and/or turbulence. The sustainability issue has a complex nature, and solutions are sought in a wide range of domains and instruments to achieve and manage it. The domains range from environmental sustainability (referring to the natural environment) through organisational and business sustainability towards social sustainability. As for the instruments for sustainability, they range from traditional engineering and management methodologies towards "soft" instruments such as knowledge, learning, and creativity. The papers in this book address virtually the whole sustainability problem space to a greater or lesser extent. However, although the uncertainty and/or turbulence, or in other words the dynamic properties, come from the coupling of management, technology, learning, individuals, organisations and society, meaning that everything is at the same time effect and cause, we wanted to put the emphasis on business, with the intention of addressing primarily companies and their businesses. For this reason, the main title of the book is "Business Sustainability 2.0", but with the approach of coupling Management, Technology and Learning for individuals, organisations and society in Turbulent Environments. Also, the notation "2.0" is meant to promote the publication as a step further from our previous publication, "Business Sustainability I", as would be the case for a new version of software. As for the Second International Conference on BUSINESS SUSTAINABILITY itself, its particularity was that it served primarily as a learning environment in which the papers published in this book were the ground for further individual and collective growth in the understanding and perception of sustainability and in the capacity for building new instruments for business sustainability. In that respect, the methodology of the conference was basically dialogical, meaning it promoted dialogue on the papers while also including formal paper presentations. In this way, the conference presented a rich space for satisfying different authors' and participants' needs. Additionally, to promote the widest and most global learning environment and participation, and in accordance with the Conference's assumed mission to promote Proactive Generative Collaborative Learning, the Conference Organisation shares openly with the community the papers presented in this book, as well as the papers presented at the previous Conference(s). These papers can be accessed from the conference webpage (http://labve.dps.uminho.pt/bs11).
    In these terms, this book can also be understood as a complementary instrument for the Conference authors and participants, as well as for the wider readership interested in sustainability issues. The book brought together 107 authors from 11 countries, namely Australia, Belgium, Brazil, Canada, France, Germany, Italy, Portugal, Serbia, Switzerland, and the United States of America. The authors ranged from senior and renowned scientists to young researchers, providing a rich learning environment. In closing, the editors hope that this book will be useful, meeting the expectations of the authors and the wider readership, serving to enhance individual and collective learning, and giving incentive to further scientific development and the creation of new papers. The editors would also like to use this opportunity to announce their intention to continue with new editions of the conference and subsequent editions of the accompanying books on the subject of BUSINESS SUSTAINABILITY, the third of which is planned for the year 2013.
    • 

    corecore