Iteration Complexity of Variational Quantum Algorithms
There has been much recent interest in near-term applications of quantum
computers. Variational quantum algorithms (VQA), wherein an optimization
algorithm implemented on a classical computer evaluates a parametrized quantum
circuit as an objective function, are a leading framework in this space.
In this paper, we analyze the iteration complexity of VQA, that is, the
number of steps VQA requires until the iterates satisfy a surrogate measure of
optimality. We argue that although VQA procedures incorporate algorithms that
can, in the idealized case, be modeled as classic procedures in the
optimization literature, the particular nature of noise in near-term devices
invalidates the direct application of off-the-shelf analyses of these
algorithms. Specifically, the form of the noise makes the evaluations of the
objective function via circuits biased, necessitating the perspective of
convergence analysis of variants of these classical optimization procedures,
wherein the evaluations exhibit systematic bias. We apply our reasoning to the
most commonly used procedures, including SPSA and the parameter shift rule, which can
be seen as zeroth-order, or derivative-free, optimization algorithms with
biased function evaluations. We show that the asymptotic rate of convergence is
unaffected by the bias, but the level of bias contributes unfavorably to both
the constant therein, and the asymptotic distance to stationarity.
Comment: 39 pages, 11 figures
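The contrast between biased and unbiased evaluations can be made concrete with a small sketch. Below is a minimal SPSA loop (an illustration, not the paper's analysis) applied to a toy objective with a constant bias and small shot noise standing in for hardware noise; all names and constants are assumptions. A constant bias cancels in the two-point gradient estimate, which is exactly the idealized case the abstract contrasts with the state-dependent bias of real devices.

```python
import numpy as np

def spsa_minimize(f, x0, iters=500, a=0.1, c=0.1, alpha=0.602, gamma=0.101, seed=0):
    """SPSA: estimate the gradient from two noisy function evaluations
    per iteration along a random simultaneous perturbation."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(1, iters + 1):
        ak = a / k**alpha                              # decaying step size
        ck = c / k**gamma                              # decaying perturbation size
        delta = rng.choice([-1.0, 1.0], size=x.shape)  # Rademacher directions
        # Two-point estimate; a constant bias in f cancels in the difference.
        ghat = (f(x + ck * delta) - f(x - ck * delta)) / (2 * ck * delta)
        x = x - ak * ghat
    return x

# Toy "circuit evaluation": a quadratic plus a constant bias
# (hypothetical stand-in for device noise) and small shot noise.
noise_rng = np.random.default_rng(1)
def noisy_objective(x):
    return np.sum((x - 1.0) ** 2) + 0.3 + 0.01 * noise_rng.standard_normal()

x_star = spsa_minimize(noisy_objective, x0=np.zeros(3))
```

Despite the bias of 0.3, the iterates approach the true minimizer at (1, 1, 1); a bias that depends on x would instead shift the stationary point, as the paper's analysis quantifies.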
Towards a solution of the closure problem for convective atmospheric boundary-layer turbulence
We consider the closure problem for turbulence in the dry convective atmospheric boundary
layer (CBL). Transport in the CBL is carried by small scale eddies near the surface and large
plumes in the well mixed middle part up to the inversion that separates the CBL from the
stably stratified air above. An analytically tractable model based on a multivariate Delta-PDF
approach is developed. It is an extension of the model of Gryanik and Hartmann [1] (GH02)
that additionally includes a term for background turbulence. An exact solution is derived
in which all higher-order moments (HOMs) are expressed through second-order moments,
correlation coefficients, and the skewness. The solution provides a proof of the extended
universality hypothesis of GH02, a refinement of the Millionshchikov hypothesis
(quasi-normality of fourth-order moments, FOMs). The refined hypothesis states that CBL
turbulence can be considered the result of a linear interpolation between the Gaussian
and the strongly skewed turbulence regimes.
Although the extended universality hypothesis was confirmed by field measurements, LES,
and DNS (see e.g. [2-4]), several questions remained open. The new model answers them,
explaining the universality of the functional form of the HOMs, the significant scatter
in the values of the coefficients, and the source of the seemingly magic linear
interpolation. Finally, the closures predicted by the model are tested against
measurements and LES data. Other issues of CBL turbulence, e.g. the familiar
kurtosis-skewness relationships and the relation of the area-coverage parameters of
plumes (so-called filling factors) to the HOMs, are also discussed.
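The delta-PDF building block behind such closures can be checked numerically. The sketch below is an illustration only (the model in the abstract additionally carries a background-turbulence term): it builds a two-delta PDF normalized to zero mean and unit variance and verifies the limiting kurtosis-skewness relation K = S^2 + 1, the strongly skewed end of the interpolation; the Gaussian end has S = 0, K = 3.

```python
import numpy as np

def two_delta_moments(a):
    """Moments of a two-delta PDF: mass a at w1, mass 1-a at w2,
    with locations chosen so the mean is 0 and the variance is 1."""
    w1 = np.sqrt((1 - a) / a)
    w2 = -np.sqrt(a / (1 - a))
    p = np.array([a, 1 - a])
    w = np.array([w1, w2])
    S = np.sum(p * w**3)   # skewness (= third moment; mean 0, variance 1)
    K = np.sum(p * w**4)   # kurtosis (= fourth moment)
    return S, K

S, K = two_delta_moments(0.3)  # asymmetric plumes: updraft area fraction 0.3
```

For any such two-point distribution the identity K = S^2 + 1 holds exactly, which is why closures of this family reduce fourth-order moments to skewness.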
Optimal Estimation Methodologies for Panel Data Regression Models
This survey discusses the main aspects of optimal estimation methodologies
for panel data regression models. In particular, we present current
methodological developments for modeling stationary panel data as well as
robust methods for estimation and inference in nonstationary panel data
regression models. Some applications from the network econometrics and
high-dimensional statistics literature are also discussed within a stationary
time-series environment.
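As a concrete baseline for stationary panel estimation, the within (fixed-effects) estimator can be sketched in a few lines; the data below are simulated and the implementation is a minimal illustration, not a method from the survey.

```python
import numpy as np

def within_estimator(y, X, groups):
    """Fixed-effects (within) estimator for y_it = x_it' beta + a_i + e_it.

    Demeaning each entity's observations removes the fixed effect a_i;
    beta is then recovered by OLS on the demeaned data."""
    y = np.asarray(y, float).copy()
    X = np.asarray(X, float).copy()
    for g in np.unique(groups):
        m = groups == g
        y[m] -= y[m].mean()
        X[m] -= X[m].mean(axis=0)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Simulated panel: 50 entities x 10 periods, true beta = (2.0, -1.0),
# with regressors deliberately correlated with the entity effects.
rng = np.random.default_rng(0)
n, t = 50, 10
groups = np.repeat(np.arange(n), t)
effects = rng.standard_normal(n)[groups]
X = rng.standard_normal((n * t, 2)) + effects[:, None]
y = X @ np.array([2.0, -1.0]) + effects + 0.1 * rng.standard_normal(n * t)
beta_hat = within_estimator(y, X, groups)
```

Pooled OLS would be biased here because the regressors correlate with the entity effects; demeaning removes that bias.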
Untangling hotel industry’s inefficiency: An SFA approach applied to a renowned Portuguese hotel chain
The present paper explores the technical efficiency of four hotels of the Teixeira Duarte Group, a renowned Portuguese hotel chain. An efficiency ranking of these four hotel units, all located in Portugal, is established using Stochastic Frontier Analysis (SFA). This methodology makes it possible to discriminate between measurement error and systematic inefficiency in the estimation process, enabling investigation of the main causes of inefficiency. Several suggestions for efficiency improvement are offered for each hotel studied.
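The key idea of SFA, splitting the error into symmetric noise and one-sided inefficiency, can be illustrated with a moment-based "corrected OLS" variant of the normal/half-normal frontier (a simpler cousin of the maximum-likelihood SFA used in such studies). All data and parameter values below are simulated assumptions, not figures from the paper.

```python
import numpy as np

def cols_frontier(y, X):
    """Corrected OLS for the production frontier y = X b + v - u,
    with noise v ~ N(0, s_v^2) and inefficiency u = |N(0, s_u^2)|.
    The negative skewness of the OLS residuals identifies s_u."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ b
    m2, m3 = np.mean(e**2), np.mean(e**3)
    # Half-normal: third central moment of (v - u) is -sqrt(2/pi)*(4/pi - 1)*s_u^3
    kappa = np.sqrt(2 / np.pi) * (4 / np.pi - 1)
    s_u = (max(-m3, 0.0) / kappa) ** (1 / 3)
    s_v = np.sqrt(max(m2 - (1 - 2 / np.pi) * s_u**2, 0.0))
    b = b.copy()
    b[0] += s_u * np.sqrt(2 / np.pi)   # shift the intercept up to the frontier
    return b, s_u, s_v

# Simulated frontier: intercept 1.0, slope 0.5, s_v = 0.1, s_u = 0.3.
rng = np.random.default_rng(0)
n = 5000
x = rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 0.5 * x + 0.1 * rng.standard_normal(n) - 0.3 * np.abs(rng.standard_normal(n))
b_hat, s_u_hat, s_v_hat = cols_frontier(y, X)
```

Recovering s_u and s_v separately is precisely the "discrimination between measurement error and systematic inefficiency" the abstract refers to.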
Advanced and novel modeling techniques for simulation, optimization and monitoring chemical engineering tasks with refinery and petrochemical unit applications
Engineers predict, optimize, and monitor processes to improve safety and profitability. Models automate these tasks and determine precise solutions. This research studies and applies advanced and novel modeling techniques to automate and aid engineering decision-making. Advancements in computational ability have improved modeling software's ability to mimic industrial problems. Simulations are increasingly used to explore new operating regimes and design new processes. In this work, we present a methodology for creating structured mathematical models, useful tips to simplify models, and a novel repair method that improves convergence by populating quality initial conditions for the simulation's solver. A crude oil refinery application is presented, including simulation, simplification tips, and the repair strategy implementation. A crude oil scheduling problem is also presented, which can be integrated with production unit models. Recently, stochastic global optimization (SGO) has shown success in finding global optima of complex nonlinear processes. When performing SGO on simulations, model convergence can become an issue. The computational load can be decreased by 1) simplifying the model and 2) finding a synergy between the model solver repair strategy and the optimization routine by using the initial conditions as points to perturb the neighborhood being searched. Here, a simplifying technique for merging the crude oil scheduling problem and the vertically integrated online refinery production optimization is demonstrated. To optimize refinery production, a stochastic global optimization technique is employed. Process monitoring has been vastly enhanced through a data-driven modeling technique, Principal Component Analysis (PCA). As opposed to first-principles models, which make assumptions about the structure of the model describing the process, data-driven techniques make no assumptions about the underlying relationships.
Data-driven techniques search for a projection that maps the data into a space that is easier to analyze. Feature extraction techniques, commonly dimensionality reduction techniques, have been explored intensively to better capture nonlinear relationships. These techniques can extend data-driven modeling's process-monitoring use to nonlinear processes. Here, we employ a novel nonlinear process-monitoring scheme that utilizes Self-Organizing Maps. The techniques and implementation methodology are applied to the publicly studied Tennessee Eastman Process and to an industrial polymerization unit.
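The linear PCA monitoring baseline that the SOM-based scheme extends can be sketched compactly: fault-free training data defines a low-dimensional subspace, and new samples are scored by variation inside it (Hotelling T^2) and residual outside it (SPE, or Q). The data and component count below are illustrative assumptions.

```python
import numpy as np

def pca_monitor(X_train, X_new, n_comp=2):
    """PCA-based process monitoring: Hotelling T^2 and SPE (Q) statistics."""
    mu, sd = X_train.mean(0), X_train.std(0)
    Z = (X_train - mu) / sd
    # Principal directions from the SVD of the scaled training data.
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:n_comp].T                        # loadings
    lam = (S[:n_comp] ** 2) / (len(Z) - 1)   # component variances
    Zn = (X_new - mu) / sd
    T = Zn @ P                               # scores in the retained subspace
    t2 = np.sum(T**2 / lam, axis=1)          # Hotelling T^2
    resid = Zn - T @ P.T                     # part outside the subspace
    spe = np.sum(resid**2, axis=1)           # squared prediction error
    return t2, spe

# Training data with a built-in correlation (second variable tracks the first);
# a fault that breaks that correlation shows up in SPE even at modest magnitude.
rng = np.random.default_rng(0)
x1 = rng.standard_normal(500)
X_train = np.column_stack([x1, x1 + 0.05 * rng.standard_normal(500),
                           rng.standard_normal(500)])
t2, spe = pca_monitor(X_train, np.array([[1.0, 1.0, 0.0],    # consistent sample
                                         [1.0, -1.0, 0.0]])) # broken correlation
```

The correlation-breaking sample has a large SPE while the consistent one does not, which is the detection mechanism nonlinear extensions such as SOMs generalize.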
Statistical Learning and Stochastic Process for Robust Predictive Control of Vehicle Suspension Systems
Predictive controllers play an important role in today's industry because of their ability
to compute optimal control signals for nonlinear systems in real time.
Due to their mathematical properties, such controllers are well suited for control problems
with constraints. Also, these controllers can be equipped with different types
of optimization and learning modules. The main goal of this thesis is to explore the potential of predictive controllers for a challenging automotive problem known as active vehicle suspension control.
In this context, it is intended to explore both modeling and optimization modules
using different statistical methodologies ranging from statistical learning to random process
control. Among the variants of predictive controllers, the learning-based model predictive
controller (LBMPC) has attracted growing interest in the control
community due to its structural flexibility and optimal performance. The current investigation
will contribute to the improvement of LBMPC by adopting different statistical learning
strategies and forecasting methods to improve the efficiency and robustness of learning
performed in LBMPC. Also, advanced probabilistic tools such as reinforcement learning,
absorbing state stochastic process, graphical modelling, and bootstrapping are used to
quantify different sources of uncertainty which can affect the performance of the LBMPC
when it is used for vehicle suspension control. Moreover, a comparative study is conducted
using gradient-based as well as deterministic and stochastic direct search optimization
algorithms for calculating the optimal control commands.
By combining the well-established control and statistical theories, a novel variant of
LBMPC is developed which not only affords stability and robustness, but also surpasses
a wide range of conventional controllers for the vehicle suspension control problem. The
findings of the current investigation should interest researchers in the automotive
industry (in particular those interested in automotive control), as several open issues regarding the potential of statistical tools for improving the performance of controllers for
the vehicle suspension problem are addressed.
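The core receding-horizon idea behind all MPC variants can be sketched on a toy plant. The code below is a minimal unconstrained sketch on a double integrator (a crude stand-in for one axis of a suspension model); the thesis's LBMPC additionally handles constraints and learned model corrections, which this does not attempt.

```python
import numpy as np

def mpc_step(A, B, x, horizon=20, r=0.1):
    """One receding-horizon step for x_{k+1} = A x_k + B u_k:
    minimize sum ||x_k||^2 + r*||u_k||^2 over the horizon (no constraints),
    solve in one batch least-squares pass, apply only the first input."""
    n, m = B.shape
    # Stacked prediction over the horizon: X = F x + G U.
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(horizon)])
    G = np.zeros((horizon * n, horizon * m))
    for i in range(horizon):
        for j in range(i + 1):
            G[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    # Normal equations of min ||F x + G U||^2 + r ||U||^2.
    H = G.T @ G + r * np.eye(horizon * m)
    U = np.linalg.solve(H, -G.T @ (F @ x))
    return U[:m]   # receding horizon: apply only u_0, then re-solve

# Double integrator with sampling time dt; regulate a unit displacement to zero.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
x = np.array([1.0, 0.0])
for _ in range(100):
    x = A @ x + B @ mpc_step(A, B, x)
```

Re-solving at every step is what makes the scheme a feedback controller rather than an open-loop plan, and is the hook where LBMPC inserts its learned model updates.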
Information-Theoretic Active Perception for Multi-Robot Teams
Multi-robot teams that intelligently gather information have the potential to transform industries as diverse as agriculture, space exploration, mining, environmental monitoring, search and rescue, and construction. Despite large amounts of research effort on active perception problems, significant challenges remain. In this thesis, we present a variety of information-theoretic control policies that enable teams of robots to efficiently estimate different quantities of interest. Although these policies are intractable in general, we develop a series of approximations that make them suitable for real-time use.
We begin by presenting a unified estimation and control scheme based on Shannon's mutual information that lets small teams of robots equipped with range-only sensors track a single static target. By creating approximate representations, we substantially reduce the complexity of this approach, letting the team track a mobile target. We then scale this approach to larger teams that need to localize a large and unknown number of targets.
We also examine information-theoretic control policies to autonomously construct 3D maps with ground and aerial robots. By using Cauchy-Schwarz quadratic mutual information, we show substantial computational improvements over similar information-theoretic measures. To map environments faster, we adopt a hierarchical planning approach which incorporates trajectory optimization so that robots can quickly determine feasible and locally optimal trajectories. Finally, we present a high-level planning algorithm that enables heterogeneous robots to cooperatively construct maps.
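In the simplest greedy case, information-theoretic active perception reduces to picking the observation that targets the most uncertainty. The toy sketch below uses total cell entropy over an occupancy map as a crude surrogate for the mutual-information objectives the thesis develops; the map values and candidate views are invented for illustration.

```python
import numpy as np

def cell_entropy(p):
    """Binary entropy (bits) of occupancy probabilities, safe at 0 and 1."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def pick_next_view(prob_map, views):
    """Greedy view selection: choose the candidate view whose visible
    cells carry the most total uncertainty (entropy)."""
    gains = [cell_entropy(prob_map[list(v)]).sum() for v in views]
    return int(np.argmax(gains))

# Occupancy probabilities: 0.5 = unknown, near 0 or 1 = confidently mapped.
prob_map = np.array([0.5, 0.9, 0.5, 0.05, 0.5])
views = [(0, 1), (2, 3), (0, 4)]   # cell indices visible from each view
best = pick_next_view(prob_map, views)
```

The view covering two fully unknown cells wins; true mutual-information objectives additionally model sensor noise and measurement dependence, which this surrogate ignores.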
Particle Swarm Optimization
Particle swarm optimization (PSO) is a population-based stochastic optimization technique inspired by the social behavior of bird flocking and fish schooling. PSO shares many similarities with evolutionary computation techniques such as Genetic Algorithms (GA). The system is initialized with a population of random solutions and searches for optima by updating generations. However, unlike GA, PSO has no evolution operators such as crossover and mutation. In PSO, the potential solutions, called particles, fly through the problem space by following the current optimum particles. This book represents the contributions of top researchers in this field and will serve as a valuable tool for professionals in this interdisciplinary field.
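The particle update described above fits in a few lines. The sketch below is a canonical PSO loop; the inertia weight w and acceleration coefficients c1, c2 are common textbook defaults, not values prescribed by the book.

```python
import numpy as np

def pso_minimize(f, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical PSO: each particle keeps a velocity with inertia w and is
    pulled toward its personal best and the swarm's global best."""
    rng = np.random.default_rng(seed)
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # random initial swarm
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()             # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Sphere function in 5 dimensions; global minimum at the origin.
lo, hi = np.full(5, -5.0), np.full(5, 5.0)
best_x, best_f = pso_minimize(lambda z: np.sum(z**2), lo, hi)
```

Note the absence of crossover and mutation: the only search operators are the velocity update and the personal/global attractors, which is the contrast with GA drawn above.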
Estimation and control of non-linear and hybrid systems with applications to air-to-air guidance
Issued as Progress report, and Final report, Project no. E-21-67