Black-box Optimisation for Buildings and Its Enhancement by Advanced Communication Infrastructure
The solution of repeated fixed-horizon trajectory optimization problems of
processes that are either too difficult or too complex to be described by physics-based
models can pose formidable challenges. Very often, soft-computing
methods - e.g. black-box modeling and evolutionary optimization - are used.
These approaches are ineffective or even computationally intractable for
searching high-dimensional parameter spaces. In this paper, a structured
iterative process is described for addressing such problems: the starting point is
a simple parameterization of the trajectory with a small number of parameters;
once values for these parameters have been selected so that this simpler
problem is solved satisfactorily, a refinement procedure increases the number
of parameters and the optimization is repeated. This successive parameter
refinement and optimization process can yield effective solutions after only a few
iterations. To illustrate the applicability of the proposed approach we
investigate the problem of dynamic optimization of the operation of HVAC
(heating, ventilation, and air conditioning) systems, and illustrative simulation
results are presented. Finally, the development of advanced communication and
interoperability components is described, addressing the problem of how the
proposed algorithm could be deployed in realistic contexts.
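The coarse-to-fine procedure described in this abstract can be illustrated with a minimal Python sketch. The linear interpolation between control points, the Nelder-Mead solver, and the toy setpoint-tracking objective are all illustrative assumptions, not the parameterization or solver the paper itself uses:

```python
import numpy as np
from scipy.optimize import minimize

def coarse_to_fine_optimize(cost, horizon=24, levels=(2, 4, 8)):
    """Optimize a trajectory by progressively refining its parameterization.

    `cost` maps a full-horizon trajectory (array of length `horizon`) to a
    scalar; `levels` gives the number of control points at each refinement
    stage. Each stage warm-starts from the upsampled previous solution.
    """
    params = np.zeros(levels[0])
    for n in levels:
        if len(params) != n:
            # Upsample the previous solution to the finer parameterization.
            params = np.interp(np.linspace(0, 1, n),
                               np.linspace(0, 1, len(params)), params)

        def expanded_cost(p):
            # Expand the control points to a full trajectory, then score it.
            traj = np.interp(np.linspace(0, 1, horizon),
                             np.linspace(0, 1, len(p)), p)
            return cost(traj)

        params = minimize(expanded_cost, params, method="Nelder-Mead").x
    return params

# Toy example: track a sinusoidal setpoint profile over 24 steps.
target = np.sin(np.linspace(0, np.pi, 24))
best = coarse_to_fine_optimize(lambda t: np.sum((t - target) ** 2))
```

Because each stage starts from the interpolated solution of the previous, cheaper stage, the high-dimensional final search begins close to a good trajectory, which is the point of the proposed refinement loop.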
Optimisation of Mobile Communication Networks - OMCO NET
The mini-conference “Optimisation of Mobile Communication Networks” focuses on advanced methods for search and optimisation applied to wireless communication networks. It is sponsored by the Research & Enterprise Fund of Southampton Solent University.
The conference strives to widen knowledge of advanced search methods capable of optimising wireless communication networks. The aim is to provide a forum for the exchange of recent knowledge, new ideas, and trends in this progressive and challenging area. The conference will popularise successful new approaches to resolving hard tasks such as minimisation of transmit power, and cooperative and optimal routing.
Simulation-Based Sailboat Trajectory Optimization using On-Board Heterogeneous Computers
A dynamic programming-based algorithm adapted to on-board heterogeneous computers for simulation-based trajectory optimization was studied in the context of high-performance sailing. The algorithm can efficiently utilize all OpenCL-capable devices, starting the computation (if necessary, in single precision) on a GPU and finalizing it (if necessary, in double precision) with the use of a CPU. The serial and parallel versions of the algorithm are presented in detail. Possible extensions of the basic algorithm are also described. The experimental results show that contemporary heterogeneous on-board/mobile computers can be treated as micro HPC platforms. They offer high performance (the OpenCL-capable GPU was found to accelerate the optimization routine 41-fold) while remaining energy and cost efficient. The simulation-based approach has the potential to give very accurate results, as the mathematical model upon which the simulator is based may be as complex as required. The black-box-represented performance measure and the use of OpenCL make the presented approach applicable to many trajectory optimization problems.
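The dynamic programming core of such a trajectory optimizer can be shown with a minimal serial sketch. The grid layout, move set, and cost model below are assumptions for illustration only; the paper's OpenCL implementation and sailing performance model are far richer:

```python
import numpy as np

def dp_route(cost):
    """Minimal-cost left-to-right route through a cost grid.

    From column j, row i the route may move to column j+1, rows i-1..i+1.
    Returns the total cost of the best route and its row sequence.
    """
    rows, cols = cost.shape
    value = np.full((rows, cols), np.inf)
    value[:, -1] = cost[:, -1]            # terminal costs
    choice = np.zeros((rows, cols), dtype=int)
    for j in range(cols - 2, -1, -1):     # backward DP sweep
        for i in range(rows):
            moves = [k for k in (i - 1, i, i + 1) if 0 <= k < rows]
            best = min(moves, key=lambda k: value[k, j + 1])
            value[i, j] = cost[i, j] + value[best, j + 1]
            choice[i, j] = best
    start = int(np.argmin(value[:, 0]))
    path = [start]
    for j in range(cols - 1):             # forward pass recovers the route
        path.append(int(choice[path[-1], j]))
    return value[start, 0], path

# Toy grid: a cheap "corridor" along row 1 (e.g. favorable wind).
grid = np.array([[3., 3., 3., 3.],
                 [1., 1., 1., 1.],
                 [3., 3., 3., 3.]])
total, path = dp_route(grid)  # route follows the corridor, total cost 4.0
```

The inner loop over rows is embarrassingly parallel within each column, which is what makes this kind of sweep a natural fit for the GPU offload the abstract describes.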
Machine Learning for Fluid Mechanics
The field of fluid mechanics is rapidly advancing, driven by unprecedented
volumes of data from field measurements, experiments and large-scale
simulations at multiple spatiotemporal scales. Machine learning offers a wealth
of techniques to extract information from data that could be translated into
knowledge about the underlying fluid mechanics. Moreover, machine learning
algorithms can augment domain knowledge and automate tasks related to flow
control and optimization. This article presents an overview of the history,
current developments, and emerging opportunities of machine learning for fluid
mechanics. It outlines fundamental machine learning methodologies and discusses
their uses for understanding, modeling, optimizing, and controlling fluid
flows. The strengths and limitations of these methods are addressed from the
perspective of scientific inquiry that considers data as an inherent part of
modeling, experimentation, and simulation. Machine learning provides a powerful
information processing framework that can enrich, and possibly even transform,
current lines of fluid mechanics research and industrial applications. Comment: To appear in the Annual Review of Fluid Mechanics, 202
Probabilistic Modeling Processes for Oil and Gas
Various uncertainties are studied to support the safe and effective development of hydrocarbon deposits and the rational operation of oil and gas systems (OGS). Original models and methods, applicable in education and practice for solving systems-engineering problems, are proposed. These models allow natural and technogenic threats to oil and gas systems to be analyzed on a probabilistic level for a given prognostic time. Transformation and adaptation of the models are demonstrated by examples connected with non-destructive testing. Measures of counteraction to threats for the typical manufacturing processes of gas preparation equipment at an enterprise are analyzed. Risks for pipelines pumping liquefied natural gas across South America are predicted. Results of probabilistic modeling of sea gas- and oil-producing systems from the point of view of their vulnerability (including various scenarios of possible terrorist influence) are analyzed and interpreted.
Model Predictive Evolutionary Temperature Control via Neural-Network-Based Digital Twins
In this study, we propose a population-based, data-driven intelligent controller that leverages neural-network-based digital twins for hypothesis testing. Initially, a diverse set of control laws is generated using genetic programming with the digital twin of the system, facilitating a robust response to unknown disturbances. During inference, the trained digital twin is utilized to virtually test alternative control actions for a multi-objective optimization task associated with each control action. Subsequently, the best policy is applied to the system. To evaluate the proposed model predictive control pipeline, experiments are conducted on a multi-mode heat transfer test rig. The objective is to achieve homogeneous cooling over the surface, minimizing the occurrence of hot spots and energy consumption. The measured variable vector comprises high dimensional infrared camera measurements arranged as a sequence (655,360 inputs), while the control variable includes power settings for fans responsible for convective cooling (3 outputs). Disturbances are induced by randomly altering the local heat loads. The findings reveal that by utilizing an evolutionary algorithm on measured data, a population of control laws can be effectively learned in the virtual space. This empowers the system to deliver robust performance. Significantly, the digital twin-assisted, population-based model predictive control (MPC) pipeline emerges as a superior approach compared to individual control models, especially when facing sudden and random changes in local heat loads. Leveraging the digital twin to virtually test alternative control policies leads to substantial improvements in the controller's performance, even with limited training data.
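The virtual-testing loop described above, i.e. evaluating a population of candidate control actions on a digital twin and applying the best one, can be sketched as follows. The linear thermal "twin", the scalar state, and the energy-penalty weight are all illustrative assumptions; the actual system uses a neural-network twin and high-dimensional camera inputs:

```python
import numpy as np

rng = np.random.default_rng(0)

def twin(state, action):
    """Stand-in digital twin: temperature relaxes toward an assumed
    ambient of 35 degrees; fan power (0..1) provides extra cooling."""
    return state + 0.1 * (35.0 - state) - 0.5 * action

def mpc_step(state, target, n_candidates=64):
    """Virtually test a population of actions on the twin, keep the best."""
    actions = rng.uniform(0.0, 1.0, n_candidates)  # candidate fan powers
    predicted = twin(state, actions)               # batch one-step rollout
    # Multi-objective score: tracking error plus a small energy penalty.
    scores = (predicted - target) ** 2 + 0.01 * actions
    return actions[int(np.argmin(scores))]

state, target = 40.0, 30.0
for _ in range(20):
    state = twin(state, mpc_step(state, target))   # apply best policy
```

Because every candidate is scored on the twin rather than the real plant, the population search costs nothing in hardware time, which is the key economy the abstract highlights.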
Multidisciplinary Design Optimization for Space Applications
Multidisciplinary Design Optimization (MDO) has been increasingly studied in aerospace engineering with the main purpose of reducing monetary and schedule costs. The traditional design approach of optimizing each discipline separately and manually iterating to achieve good solutions is substituted by exploiting the interactions between the disciplines and concurrently optimizing every subsystem. The target of the research was the development of a flexible software suite capable of concurrently optimizing the design of a rocket-propellant launch vehicle for multiple objectives. The possibility of combining the advantages of global and local searches has been exploited both in the MDO architecture and in the selected and self-developed optimization methodologies. These have been compared according to computational efficiency and performance criteria. Results have been critically analyzed to identify the most suitable optimization approach for the targeted MDO problem.
Spatio-Temporal Patterns act as Computational Mechanisms governing Emergent behavior in Robotic Swarms
Our goal is to control a robotic swarm without removing its swarm-like nature; in other words, we aim to intrinsically control a robotic swarm's emergent behavior. Past attempts at governing robotic swarms or their self-coordinating emergent behavior have proven ineffective, largely due to the swarm's inherent randomness (making it difficult to predict) and utter simplicity (swarms lack a leader, any kind of centralized control, long-range communication, global knowledge, and complex internal models, and only operate on a couple of basic, reactive rules). The main problem is that emergent phenomena themselves are not fully understood, despite being at the forefront of current research. Research into 1D and 2D cellular automata has uncovered a hidden computational layer which bridges the micro-macro gap (i.e., how individual behaviors at the micro level influence the global behaviors on the macro level). We hypothesize that embedded computational mechanisms also lie at the heart of a robotic swarm's emergent behavior. To test this theory, we simulated robotic swarms (represented as both particles and dynamic networks) and then designed local rules to induce various types of intelligent, emergent behaviors (as well as designing genetic algorithms to evolve robotic swarms with emergent behaviors). Finally, we analyzed these robotic swarms and successfully confirmed our hypothesis; analyzing their developments and interactions over time revealed various forms of embedded spatiotemporal patterns which store, propagate, and parallel-process information across the swarm according to some internal, collision-based logic (solving the mystery of how simple robots are able to self-coordinate and allow global behaviors to emerge across the swarm).
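A classic minimal example of simple local rules producing emergent global order is the Vicsek-style alignment model sketched below. This is a generic illustration, not the paper's own rule sets or network representation; all parameters are assumed values:

```python
import numpy as np

rng = np.random.default_rng(1)

def step(pos, angle, radius=0.5, noise=0.1, speed=0.03, size=5.0):
    """One swarm update: each agent aligns its heading with neighbours
    within `radius` (periodic box), plus a little random noise."""
    new_angle = np.empty_like(angle)
    for i in range(len(pos)):
        # Shortest displacement to every agent under periodic boundaries.
        d = (pos - pos[i] + size / 2) % size - size / 2
        near = np.hypot(d[:, 0], d[:, 1]) < radius
        # Circular mean heading of the neighbourhood (self included).
        new_angle[i] = np.arctan2(np.sin(angle[near]).mean(),
                                  np.cos(angle[near]).mean())
    new_angle += rng.uniform(-noise, noise, len(angle))
    pos = (pos + speed * np.c_[np.cos(new_angle), np.sin(new_angle)]) % size
    return pos, new_angle

def order(angle):
    """Global alignment: 1 means identical headings, near 0 means random."""
    return np.hypot(np.sin(angle).mean(), np.cos(angle).mean())

n = 100
pos = rng.uniform(0, 5.0, (n, 2))
angle = rng.uniform(-np.pi, np.pi, n)
for _ in range(200):
    pos, angle = step(pos, angle)   # order emerges from local alignment
```

No agent knows the global heading, yet the order parameter rises well above its random-start value, which is precisely the kind of micro-to-macro emergence the abstract investigates.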
Deriving Protein Structures Efficiently by Integrating Experimental Data into Biomolecular Simulations
Proteins are molecular nanomachines in biological cells. They are essential building blocks of all known forms of life, from single-celled organisms to humans, and perform diverse functions, such as transporting oxygen in the blood or forming a component of hair. Disruptions of their physiological function, however, can cause severe degenerative diseases such as Alzheimer's and Parkinson's. Developing effective therapies for such protein misfolding diseases requires a deep understanding of the molecular structure and dynamics of proteins. Since proteins are too small to be resolved by light microscopy and can therefore only be observed indirectly, experimental structural data are usually ambiguous. This problem can be solved in silico by physical modeling of biomolecular dynamics. In this field, data-assisted molecular dynamics simulations have become established as a new paradigm for assembling the individual pieces of data into a coherent overall picture of the encoded protein structure. The structural data are incorporated as an integral part of a physics-based model. In this work, I investigate how so-called structure-based models can be used to complement ambiguous structural data and extract the information they contain. These models provide an efficient description of the dynamics resulting from the evolutionarily optimized native structure of a protein. With my systematic simulation method XSBM, biological small-angle X-ray scattering data can be interpreted as physical protein structures at minimal computational cost. The performance of such data-assisted methods depends strongly on the simulation parameters used. A major challenge is to weight experimental information and theoretical knowledge appropriately relative to each other.
In this work, I show how the corresponding simulation parameter spaces can be explored efficiently with computational intelligence methods and how functional parameters can be selected, in order to optimize the performance of complex physics-based simulation techniques. I present FLAPS, a data-driven metaheuristic optimization method for fully automatic, reproducible parameter search for biomolecular simulations. FLAPS is an adaptive particle-swarm-based algorithm inspired by the behavior of natural bird flocks and fish schools, which can solve the general problem of the relative weighting of different criteria in multivariate optimization. Alongside massive advances in the use of artificial intelligence for protein structure prediction, performance-optimized data-assisted simulations enable detailed insights into the complex relationship between biomolecular structure, dynamics, and function. Such computational methods can connect the individual puzzle pieces of experimental structural information and thus deepen our understanding of proteins as the fundamental building blocks of life.
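For orientation, the particle swarm family of algorithms that FLAPS builds on can be sketched in a few lines. This is textbook PSO, not FLAPS itself; the objective function and all parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def pso(f, dim, n=30, iters=100, w=0.7, c1=1.5, c2=1.5, bound=5.0):
    """Minimal particle swarm optimizer minimizing f over [-bound, bound]^dim.

    Each particle is pulled toward its personal best (`pbest`) and the
    swarm's global best (`g`), mimicking flocking behavior."""
    x = rng.uniform(-bound, bound, (n, dim))       # positions
    v = np.zeros((n, dim))                         # velocities
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)            # personal-best scores
    g = pbest[np.argmin(pval)]                     # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))           # stochastic weights
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        better = val < pval                        # update personal bests
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmin(pval)]
    return g, pval.min()

# Toy 4-dimensional objective standing in for a parameter-weighting problem.
best, best_val = pso(lambda p: np.sum(p ** 2), dim=4)
```

FLAPS extends this scheme with adaptivity and an automatic relative weighting of multiple score criteria; the sketch only conveys the underlying swarm dynamics.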