Finding High-Dimensional D-Optimal Designs for Logistic Models via Differential Evolution
D-optimal designs are frequently used in controlled experiments to obtain the most accurate estimate of model parameters at minimal cost. Finding them can be a challenging task, especially when there are many factors in a nonlinear model. As the number of factors becomes large and the factors interact with one another, there are many more variables to optimize and the D-optimal design problem becomes high-dimensional and non-separable. Consequently, premature convergence issues arise: candidate solutions get trapped in local optima, and the classical gradient-based optimization approaches to search for D-optimal designs rarely succeed. We propose a specially designed version of differential evolution (DE), a representative gradient-free optimization approach, to solve such high-dimensional optimization problems. The proposed DE uses a new novelty-based mutation strategy to explore the various regions of the search space. Regions are explored differently from previously explored regions, and the diversity of the population is preserved. The proposed novelty-based mutation strategy is combined with two common DE mutation strategies to balance exploration and exploitation at the early and medium stages of the evolution. Additionally, we adapt the control parameters of DE as the evolution proceeds. Using logistic models with several factors on various design spaces as examples, our simulation results show that our algorithm can find D-optimal designs efficiently and outperforms its competitors. As an application, we apply our algorithm to re-design a 10-factor car refueling experiment with discrete and continuous factors and selected pairwise interactions. Our algorithm consistently outperformed the other algorithms and found a more efficient D-optimal design for the problem.
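The mutation-plus-crossover loop that DE variants such as this one build on can be sketched as follows. The novelty-based strategy itself is not specified in the abstract, so this minimal sketch uses the classic DE/rand/1/bin scheme on a toy objective; all function and parameter names are illustrative, not the authors'.

```python
import random

def de_rand_1_bin(objective, bounds, pop_size=20, F=0.8, CR=0.9,
                  generations=200, seed=1):
    """Classic DE/rand/1/bin: mutate with a scaled difference of two random
    population vectors, apply binomial crossover, then greedy selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [objective(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            # Mutation: v = x_a + F * (x_b - x_c), clipped to the box.
            v = [min(max(pop[a][d] + F * (pop[b][d] - pop[c][d]),
                         bounds[d][0]), bounds[d][1]) for d in range(dim)]
            # Binomial crossover: each coordinate comes from v with prob. CR;
            # one forced coordinate guarantees the trial differs from the parent.
            jrand = rng.randrange(dim)
            u = [v[d] if (rng.random() < CR or d == jrand) else pop[i][d]
                 for d in range(dim)]
            fu = objective(u)
            if fu <= fit[i]:        # greedy selection
                pop[i], fit[i] = u, fu
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

# Toy usage: minimize the 5-dimensional sphere function.
x, fx = de_rand_1_bin(lambda x: sum(t * t for t in x), [(-5.0, 5.0)] * 5)
```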
Performance Analysis and Improvement of Parallel Differential Evolution
Differential evolution (DE) is an effective global evolutionary optimization algorithm used to solve global optimization problems, mainly in continuous domains. Research in this field has mostly focused on improving DE's ability to find better global solutions; however, the computational performance of DE is also of great interest, especially when the problem scale is large. This paper first analyzes the design of parallel computation for DE, which can readily be executed with the Math Kernel Library (MKL) and the Compute Unified Device Architecture (CUDA). We then describe the essence of the exponential crossover operator and point out that it is ill-suited to parallel computation. We therefore propose a new exponential crossover operator (NEC) that can be executed in parallel with MKL/CUDA. Extensive experiments show that the new crossover operator speeds up DE greatly. Finally, we test the new parallel DE structure and show that it is much faster than the original.
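The NEC operator itself is not described in the abstract, but the parallelization obstacle it addresses can be seen in the classic exponential crossover, sketched below: the copied segment length depends on a sequential run of Bernoulli trials, so the coordinates cannot be decided independently the way binomial crossover allows. Names here are illustrative.

```python
import random

def exponential_crossover(parent, mutant, CR, rng):
    """Classic DE exponential crossover: copy a contiguous (circular) block of
    mutant genes into the parent. The block length is a sequential run of
    Bernoulli(CR) successes, which is why this operator resists the
    element-wise vectorization that binomial crossover permits."""
    dim = len(parent)
    trial = list(parent)
    start = rng.randrange(dim)      # first position always taken from mutant
    j = start
    while True:
        trial[j] = mutant[j]
        j = (j + 1) % dim
        # Stop when the geometric run of CR-successes ends or we wrap around.
        if rng.random() >= CR or j == start:
            break
    return trial

rng = random.Random(0)
trial = exponential_crossover([0.0] * 8, [1.0] * 8, CR=0.5, rng=rng)
# The mutant genes (1.0) always form one contiguous block, circularly.
```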
Differential Evolution Optimal Parameters Tuning with Artificial Neural Network
Differential evolution (DE) is a simple and efficient population-based stochastic algorithm for solving global numerical optimization problems. DE largely depends on its parameter values and search strategy, and knowledge on how to tune the best values of these parameters is scarce. This paper presents a consistent methodology for tuning optimal parameters. At the heart of the methodology is an artificial neural network (ANN) that learns to draw links between algorithm performance and parameter values. To do so, first a data set is generated and normalized, then the ANN approach is applied, and finally the best parameter values are extracted. The proposed method is evaluated on a set of 24 test problems from the Black-Box Optimization Benchmarking (BBOB) benchmark. Experimental results show that three distinct cases may arise with the application of this method; for each case, specifications about the procedure to follow are given. Finally, a comparison with four tuning rules is performed to verify and validate the proposed method's performance. This study provides a thorough insight into optimal parameter tuning, which may be of great use to practitioners. The authors appreciate the support of the government of the Basque Country through the research program grants ELKARTEK 20/71 and ELKARTEK KK-2019/00099.
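The generate-normalize-extract pipeline described above can be sketched in miniature. The ANN surrogate and the BBOB runs are replaced here by a stand-in score function and a plain lookup over the sampled grid, so everything below (the `performance` function, the grid, the constants) is a hypothetical illustration of the data flow, not the paper's procedure.

```python
import random

# Hypothetical benchmark score for a parameter setting (F, CR); in the paper
# this would come from running DE on BBOB test problems, not from a formula.
def performance(F, CR, rng):
    return -((F - 0.6) ** 2 + (CR - 0.9) ** 2) + rng.gauss(0.0, 0.001)

rng = random.Random(0)

# 1) Generate a data set over a grid of candidate parameter values.
data = [(F / 10.0, CR / 10.0, performance(F / 10.0, CR / 10.0, rng))
        for F in range(1, 11) for CR in range(1, 11)]

# 2) Normalize the scores to [0, 1], as one would before training an ANN.
scores = [s for _, _, s in data]
lo, hi = min(scores), max(scores)
normalized = [(F, CR, (s - lo) / (hi - lo)) for F, CR, s in data]

# 3) Extract the best parameter values. A trained ANN would additionally let
#    us interpolate between grid points; here we simply take the best sample.
best_F, best_CR, best_score = max(normalized, key=lambda t: t[2])
```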
Analysis of the computational complexity of solving random satisfiability problems using branch and bound search algorithms
The computational complexity of solving random 3-Satisfiability (3-SAT) problems is investigated. 3-SAT is a representative example of hard computational tasks; it consists in deciding whether a set of alpha N randomly drawn logical constraints involving N Boolean variables can be satisfied altogether or not. Widely used solving procedures, such as the Davis-Putnam-Logemann-Loveland (DPLL) algorithm, perform a systematic search for a solution through a sequence of trials and errors represented by a search tree. In the present study, we identify, using theory and numerical experiments, easy (size of the search tree scaling polynomially with N) and hard (exponential scaling) regimes as a function of the ratio alpha of constraints per variable. The typical complexity is explicitly calculated in the different regimes, in very good agreement with numerical simulations. Our theoretical approach is based on the analysis of the growth of the branches in the search tree under the operation of DPLL. On each branch, the initial 3-SAT problem is dynamically turned into a more generic 2+p-SAT problem, where p and 1-p are the fractions of constraints involving three and two variables respectively. The growth of each branch is monitored by the dynamical evolution of alpha and p and is represented by a trajectory in the static phase diagram of the random 2+p-SAT problem. Depending on whether or not the trajectories cross the boundary between phases, single branches or full trees are generated by DPLL, resulting in easy or hard resolutions.
Comment: 37 RevTeX pages, 15 figures; submitted to Phys. Rev.
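The search-tree procedure the abstract analyzes can be sketched as a bare-bones DPLL with unit propagation and splitting; this is a generic textbook version, not the instrumented solver used in the study.

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL. Clauses are sets of nonzero ints; -v means 'not v'.
    Returns a satisfying assignment (dict var -> bool) or None."""
    if assignment is None:
        assignment = {}
    clauses = [set(c) for c in clauses]
    # Unit propagation: repeatedly satisfy one-literal clauses.
    changed = True
    while changed:
        changed = False
        for c in clauses:
            if len(c) == 1:
                lit = next(iter(c))
                assignment = {**assignment, abs(lit): lit > 0}
                new = []
                for d in clauses:
                    if lit in d:
                        continue            # clause satisfied, drop it
                    if -lit in d:
                        d = d - {-lit}      # literal falsified, shrink clause
                        if not d:
                            return None     # empty clause: contradiction
                    new.append(d)
                clauses = new
                changed = True
                break
    if not clauses:
        return assignment                   # all clauses satisfied
    # Split on a variable: this branching is what builds the search tree.
    var = abs(next(iter(clauses[0])))
    for val in (True, False):
        lit = var if val else -var
        result = dpll(clauses + [{lit}], assignment)
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
model = dpll([{1, 2}, {-1, 3}, {-2, -3}])
```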
Slip-Mediated Dewetting of Polymer Microdroplets
Classical hydrodynamic models predict that infinite work is required to move
a three-phase contact line, defined here as the line where a liquid/vapor
interface intersects a solid surface. Assuming a slip boundary condition, in
which the liquid slides against the solid, such an unphysical prediction is
avoided. In this article, we present the results of experiments in which a
contact line moves and where slip is a dominating and controllable factor.
Spherical cap shaped polystyrene microdroplets, with non-equilibrium contact
angle, are placed on solid self-assembled monolayer coatings from which they
dewet. The relaxation is monitored using \textit{in situ} atomic force
microscopy. We find that slip has a strong influence on the droplet evolution, both on the transient non-spherical shapes and on the contact line dynamics. The observations are in agreement with scaling analysis and boundary element numerical integration of the governing Stokes equations, including a Navier slip boundary condition.
Comment: 19 pages, 4 figures + 6 figures in supporting information
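The Navier slip boundary condition invoked above relates the tangential fluid velocity at the solid to the local shear rate through a slip length; the notation below is the standard one, not necessarily this paper's own symbols.

```latex
% Navier slip condition at the solid surface z = 0:
% the tangential velocity u is proportional to the shear rate,
% with proportionality constant b, the slip length.
u \big|_{z=0} = b \, \frac{\partial u}{\partial z} \bigg|_{z=0}
% b -> 0 recovers the classical no-slip condition;
% large b corresponds to strong slip, the regime probed here.
```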
Active Brownian Particles. From Individual to Collective Stochastic Dynamics
We review theoretical models of individual motility as well as collective
dynamics and pattern formation of active particles. We focus on simple models
of active dynamics with a particular emphasis on nonlinear and stochastic
dynamics of such self-propelled entities in the framework of statistical
mechanics. Examples of such active units in complex physico-chemical and
biological systems are chemically powered nano-rods, localized patterns in
reaction-diffusion systems, motile cells or macroscopic animals. Based on the
description of individual motion of point-like active particles by stochastic
differential equations, we discuss different velocity-dependent friction
functions, the impact of various types of fluctuations and calculate
characteristic observables such as stationary velocity distributions or
diffusion coefficients. Finally, we consider not only the free and confined
individual active dynamics but also different types of interaction between
active particles. The resulting collective dynamical behavior of large
assemblies and aggregates of active units is discussed, and an overview of some recent results on spatiotemporal pattern formation in such systems is given.
Comment: 161 pages, Review, Eur Phys J Special-Topics, accepted
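The stochastic description of individual motion mentioned above can be made concrete with a small simulation. The review covers several velocity-dependent friction functions; the Rayleigh-Helmholtz form used here is one standard choice among them, and the parameter values are illustrative.

```python
import math
import random

def simulate_active_particle(alpha=1.0, beta=1.0, D=0.01, dt=1e-3,
                             steps=100_000, seed=3):
    """Euler-Maruyama integration of a 1-D active particle with
    Rayleigh-Helmholtz velocity-dependent friction:
        dv = -(beta*v**2 - alpha)*v*dt + sqrt(2*D)*dW.
    For alpha > 0 the 'friction' pumps energy at small speeds, so |v|
    settles near the deterministic fixed point sqrt(alpha/beta)."""
    rng = random.Random(seed)
    v = 0.0
    speeds = []
    for step in range(steps):
        drift = -(beta * v * v - alpha) * v
        v += drift * dt + math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)
        if step >= steps // 2:          # discard transient, then record
            speeds.append(abs(v))
    return sum(speeds) / len(speeds)    # mean stationary speed

# With alpha = beta = 1 and weak noise, the mean speed sits near 1.
mean_speed = simulate_active_particle()
```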
Viscosity of fragile glass-forming melts obtained by high rate calorimetry
Knowledge about the temperature-dependent viscosity behaviour η(T) of a glass melt holds strong interest for both technology and science, due to hot-forming and cooling processes as well as for gaining deeper knowledge about the melt structure, e.g. the fragility of a melt or the role of the network formers and modifiers. Unfortunately, for some glass systems it is not easy to measure the entire viscosity range above the glass transition temperature Tg with comparatively slow viscometric methods, because of crystallisation occurring during the measurements. This crystallisation strongly influences the viscosity of the resulting heterogeneous systems. As a result, a gap in viscosity data is obtained between Tg and the temperature of the stable liquid. Further, it has been demonstrated in the literature that one can superimpose the -lg qc vs. 1/Tf curve derived from calorimetry experiments onto the lg η vs. 1/T relationship by using a so-called parallel shift factor lg K. Thus, K enables retrieving η without the need to conduct viscosity measurements. This is of crucial importance, since the proneness to crystallise of the glasses studied here prevents determination of the liquid viscosity by rheological methods within a wide temperature range above Tg. This cumulative doctoral thesis was designed to test a combined experimental approach based on calorimetry and viscometry to narrow the gap in viscosity data, caused by fast crystallisation, for several fragile glass systems. A new commercial high-rate calorimeter was combined with conventional calorimetry to cover cooling and heating rates up to 40,000 K s^-1 (2,400,000 K min^-1) and thereby obtain viscosities from Tg down to η = 10^4.9 Pa s by applying the parallel shift factors Konset, Kpeak and Kend.
In detail, the fragile melts tested in this study are silicate glasses (lithium disilicate, diopside and the standard glass DGG I), a fluorophosphate glass (N-PK52A), tellurite glasses (pure TeO2 and two sodium tellurite glasses) and Zr-based bulk metallic glasses (Vitreloy 105 and AMZ4). This combined experimental approach was found to be suitable for narrowing the η(T) gap for the mentioned melts. New data were obtained from the glass transition region down to η = 10^6.3 Pa s for the fluorophosphate glass, 10^5.5 Pa s for the silicate glasses, 10^5.3 Pa s for the tellurite glasses and 10^4.9 Pa s for the metallic glasses. A subsequent highly constrained parameterisation of the available viscosity data makes it possible to describe the entire viscosity range; a smaller gap remains in the viscosity data of all glasses after this experimental procedure. Additionally, for pure TeO2, whose weak glass-forming ability and strong tendency to crystallise prevent reliable measurements in the undercooled state, the parameterisation provides the first experimentally determined viscosity curve. In the case of the metallic glasses, the already-available measured viscosity data from the literature suggest the existence of an unusual viscosity behaviour, the fragile-to-strong (F-S) transition. The new data and fits provided in this thesis narrow down the appearance of this F-S transition to a region at higher temperature.
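The parallel-shift construction described above can be stated compactly. In the abstract's notation, with qc the cooling rate and Tf the fictive temperature, the superposition amounts to the relation below; this is the standard form of the shift-factor argument, written out here for clarity rather than quoted from the thesis.

```latex
% Parallel shift between calorimetric and viscometric data:
% the -lg q_c vs. 1/T_f curve, shifted by the constant lg K,
% falls onto the lg(eta) vs. 1/T curve, i.e.
\lg \eta(T_f) = \lg K - \lg q_c
% so a measured cooling rate q_c together with a calibrated K
% yields eta without a direct viscosity measurement.
```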