Deterministic Mean-field Ensemble Kalman Filtering
The proof of convergence of the standard ensemble Kalman filter (EnKF) from
Legland et al. (2011) is extended to non-Gaussian state-space models. A
density-based deterministic approximation of the mean-field-limit EnKF
(DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given
a certain minimal order of convergence between the two, the convergence result
carries over to the deterministic filter approximation, which is therefore
asymptotically superior to the standard EnKF in sufficiently low dimension.
The fidelity of the approximation to the true distribution is also established
using an extension of the total variation metric to random measures. It is
limited by a Gaussian bias term arising from the non-linearity/non-Gaussianity
of the model, which is present for both the DMFEnKF and the standard EnKF.
Numerical results support and extend the theory.
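For context, the standard (stochastic) EnKF analysis step that the mean-field and deterministic variants above build on can be sketched in a few lines; the toy scalar model below is illustrative and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_analysis(ensemble, y, H, R):
    """One stochastic EnKF analysis step with perturbed observations.
    ensemble: (N, d) forecast members; y: (m,) observation;
    H: (m, d) linear observation operator; R: (m, m) noise covariance."""
    N, d = ensemble.shape
    C = np.cov(ensemble, rowvar=False).reshape(d, d)   # sample covariance
    K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)       # Kalman gain
    # perturbing the observations keeps the posterior ensemble spread consistent
    perturbed = y + rng.multivariate_normal(np.zeros(len(y)), R, size=N)
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

# toy scalar example: wide prior around 0, direct observation y = 1.0
ens = rng.normal(0.0, 2.0, size=(500, 1))
post = enkf_analysis(ens, np.array([1.0]), np.eye(1), 0.5 * np.eye(1))
```

The mean-field limit replaces the sample covariance C by the covariance of the underlying law; the DMFEnKF of the abstract approximates that law deterministically, via a PDE solver and quadrature, rather than with random particles.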
Adaptive matched field processing in an uncertain propagation environment
Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, January 1992.
Adaptive array processing algorithms have achieved widespread use because they are
very effective at rejecting unwanted signals (i.e., controlling sidelobe levels) and in
general have very good resolution (i.e., have narrow mainlobes). However, many
adaptive high-resolution array processing algorithms suffer a significant degradation
in performance in the presence of environmental mismatch. This sensitivity to environmental
mismatch is of particular concern in problems such as long-range acoustic
array processing in the ocean where the array processor's knowledge of the propagation
characteristics of the ocean is imperfect. An Adaptive Minmax Matched Field
Processor has been developed which combines adaptive matched field processing and
minmax approximation techniques to achieve the effective interference rejection characteristic
of adaptive processors while limiting the sensitivity of the processor to
environmental mismatch.
The derivation of the algorithm is carried out within the framework of minmax
signal processing. The optimal array weights are those which minimize the maximum
conditional mean squared estimation error at the output of a linear weight-and-sum
beamformer. The error is conditioned on the propagation characteristics of the environment
and the maximum is evaluated over the range of environmental conditions in
which the processor is expected to operate. The theorems developed using this framework
characterize the solutions to the minmax array weight problem, and relate the
optimal minmax array weights to the solution to a particular type of Wiener filtering
problem. This relationship makes possible the development of an efficient algorithm
for calculating the optimal minmax array weights and the associated estimate of the
signal power emitted by a source at the array focal point. An important feature of
this algorithm is that it is guaranteed to converge to an exact solution for the array
weights and estimated signal power in a finite number of iterations. The Adaptive Minmax Matched Field Processor can also be interpreted as a two-stage
Minimum Variance Distortionless Response (MVDR) Matched Field Processor.
The first stage of this processor generates an estimate of the replica vector of the signal
emitted by a source at the array focal point, and the second stage is a traditional
MVDR Matched Field Processor implemented using the estimate of the signal replica
vector.
Computer simulations using several environmental models and types of environmental
uncertainty have shown that the resolution and interference rejection capability
of the Adaptive Minmax Matched Field Processor is close to that of a traditional
MVDR Matched Field Processor which has perfect knowledge of the characteristics
of the propagation environment and far exceeds that of the Bartlett Matched Field
Processor. In addition, the simulations show that the Adaptive Minmax Matched
Field Processor is able to maintain its accuracy, resolution and interference rejection
capability when its knowledge of the environment is only approximate, and is therefore
much less sensitive to environmental mismatch than is the traditional MVDR
Matched Field Processor.
Funding: the National Science Foundation, the General Electric Foundation, the Office
of Naval Research, the Defense Advanced Research Projects Agency, and the Woods Hole
Oceanographic Institution.
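The MVDR processor used as the second stage above has a standard closed form: minimize the output power subject to unit gain toward the replica vector. A minimal sketch, with a purely illustrative covariance and replica vector:

```python
import numpy as np

def mvdr_weights(R, d):
    """MVDR weights: minimize w^H R w subject to the distortionless
    constraint w^H d = 1, giving w = R^{-1} d / (d^H R^{-1} d)."""
    rd = np.linalg.solve(R, d)
    return rd / (d.conj() @ rd)

def mvdr_power(R, d):
    """MVDR estimate of the signal power at the array focal point."""
    return 1.0 / np.real(d.conj() @ np.linalg.solve(R, d))

# toy example: 4-sensor array, white noise plus a unit-power source along d
d = np.ones(4) / 2.0                    # unit-norm replica vector
R = 0.1 * np.eye(4) + np.outer(d, d)    # noise + signal covariance
w = mvdr_weights(R, d)
```

With this covariance the power estimate is 1.1: the unit signal power plus the 0.1 white-noise power passed by the unit-gain constraint. Environmental mismatch enters through an incorrect replica vector d, which is what the minmax first stage is designed to guard against.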
Evolutionary Decomposition of Complex Design Spaces
This dissertation investigates the support of conceptual engineering design through the
decomposition of multi-dimensional search spaces into regions of high performance. Such
decomposition helps the designer identify optimal design directions by the elimination of
infeasible or undesirable regions within the search space. Moreover, high levels of
interaction between the designer and the model increase overall domain knowledge and
significantly reduce uncertainty relating to the design task at hand.
The aim of the research is to develop the archetypal Cluster Oriented Genetic Algorithm
(COGA), which achieves search-space decomposition by using variable mutation
(vmCOGA) to promote diverse search and an Adaptive Filter (AF) to extract solutions of
high performance [Parmee 1996a, 1996b]. Since COGAs are primarily used to decompose
design domains of unknown nature within a real-time environment, the elimination of
a priori knowledge, speed and robustness are paramount. Furthermore, a COGA should
promote in-depth exploration of the entire search space, sampling all optima and their
surrounding areas. Finally, any proposed system should allow for trouble-free integration
within a Graphical User Interface environment.
The replacement of the variable mutation strategy with a number of algorithms that
increase search-space sampling is investigated. Utility is then increased by incorporating
a control mechanism that maintains optimal performance by adapting each algorithm
throughout the search by means of a feedback measure based upon population convergence.
Robustness is greatly improved by modifying the Adaptive Filter through the introduction
of a process that ensures more accurate modelling of the evolving population.
The performance of each prospective algorithm is assessed on a suite of two-dimensional
test functions using a set of novel performance metrics. A six-dimensional
test function is also developed where the areas of high performance are explicitly known,
thus allowing for evaluation under conditions of increased dimensionality. Further
complexity is introduced by two real-world models described by both continuous and
discrete parameters. These relate to the design of conceptual airframes and cooling-hole
geometries within a gas turbine.
Results are promising and indicate significant improvement over the vmCOGA in terms of
all desired criteria. This further supports the utilisation of COGA as a decision support
tool during the conceptual phase of design.
Funding: British Aerospace plc, Warton and Rolls Royce plc, Filto
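The two core COGA ideas described above, variable mutation driven by a population-convergence feedback measure and a filter that retains only high-performance solutions, can be caricatured in a few lines. This is a generic sketch, not the vmCOGA or Adaptive Filter of the dissertation; the landscape, thresholds, and parameters are all illustrative:

```python
import random

random.seed(1)

def fitness(x):
    """Toy 1-D landscape with two regions of high performance."""
    return max(0.0, 1.0 - (x - 2.0) ** 2) + max(0.0, 0.8 - (x + 2.0) ** 2)

def variance(pop):
    mean = sum(pop) / len(pop)
    return sum((p - mean) ** 2 for p in pop) / len(pop)

def evolve(pop, generations=60):
    for _ in range(generations):
        # variable mutation: widen the mutation when the population has
        # converged, so the whole search space keeps being sampled
        sigma = 1.5 if variance(pop) < 0.5 else 0.5
        parents = sorted(pop, key=fitness, reverse=True)[: len(pop) // 2]
        pop = [p + random.gauss(0.0, sigma) for p in parents for _ in (0, 1)]
    return pop

pop = evolve([random.uniform(-5.0, 5.0) for _ in range(40)])
# crude stand-in for the Adaptive Filter: keep only high-performance solutions
good = [p for p in pop if fitness(p) > 0.5]
```

The filtered set `good` is the decomposition product: the regions of the search space it occupies are the "regions of high performance" presented to the designer.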
Practical implementation of nonlinear time series methods: The TISEAN package
Nonlinear time series analysis is becoming a more and more reliable tool for
the study of complicated dynamics from measurements. The concept of
low-dimensional chaos has proven to be fruitful in the understanding of many
complex phenomena despite the fact that very few natural systems have actually
been found to be low dimensional deterministic in the sense of the theory. In
order to evaluate the long term usefulness of the nonlinear time series
approach as inspired by chaos theory, it will be important that the
corresponding methods become more widely accessible. This paper, while not a
proper review on nonlinear time series analysis, tries to make a contribution
to this process by describing the actual implementation of the algorithms, and
their proper usage. Most of the methods require the choice of certain
parameters for each specific time series application. We will try to give
guidance in this respect. The scope and selection of topics in this article, as
well as the implementational choices that have been made, correspond to the
contents of the software package TISEAN which is publicly available from
http://www.mpipks-dresden.mpg.de/~tisean . In fact, this paper can be seen as
an extended manual for the TISEAN programs. It fills the gap between the
technical documentation and the existing literature, providing the necessary
entry points for a more thorough study of the theoretical background.
Comment: 27 pages, 21 figures, downloadable software at
http://www.mpipks-dresden.mpg.de/~tisea
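One of the basic building blocks underlying most TISEAN routines is the time-delay embedding of a scalar series into state-space vectors. TISEAN itself is a compiled package; the sketch below is only an illustrative Python rendering of the construction, with an arbitrary toy signal:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay embedding: map a scalar series x into vectors
    (x[i], x[i + tau], ..., x[i + (dim - 1) * tau])."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# toy series: a noiseless sine of period 100, embedded in 2-D with
# delay tau = 25 (a quarter period), which unfolds it into a circle
t = np.arange(1000)
x = np.sin(2 * np.pi * t / 100)
emb = delay_embed(x, dim=2, tau=25)
```

The choice of embedding dimension and delay is exactly the kind of per-application parameter selection the paper gives guidance on; here the quarter-period delay makes the second coordinate the cosine of the first, so the embedded trajectory lies on the unit circle.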
Optimisation of Mobile Communication Networks - OMCO NET
The mini-conference “Optimisation of Mobile Communication Networks” focuses on advanced methods for search and optimisation applied to wireless communication networks. It is sponsored by the Research & Enterprise Fund of Southampton Solent University.
The conference strives to widen knowledge of advanced search methods capable of optimising wireless communication networks. The aim is to provide a forum for the exchange of recent knowledge, new ideas and trends in this progressive and challenging area. The conference will popularise new successful approaches to resolving hard tasks such as minimisation of transmit power, cooperative and optimal routing.
The instanton method and its numerical implementation in fluid mechanics
A precise characterization of structures occurring in turbulent fluid flows
at high Reynolds numbers is one of the last open problems of classical physics.
In this review we discuss recent developments related to the application of
instanton methods to turbulence. Instantons are saddle point configurations of
the underlying path integrals. They are equivalent to minimizers of the related
Freidlin-Wentzell action and known to be able to characterize rare events in
such systems. While there is an impressive body of work concerning their
analytical description, this review focuses on the question on how to compute
these minimizers numerically. In a short introduction we present the relevant
mathematical and physical background before we discuss the stochastic Burgers
equation in detail. We present algorithms to compute instantons numerically by
an efficient solution of the corresponding Euler-Lagrange equations. A second
focus is the discussion of a recently developed numerical filtering technique
that allows instantons to be extracted from direct numerical simulations. In the
following we present modifications of the algorithms to make them efficient
when applied to two- or three-dimensional fluid dynamical problems. We
illustrate these ideas using the two-dimensional Burgers equation and the
three-dimensional Navier-Stokes equations.
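As a toy version of the minimization the review is about, one can compute the instanton of a 1-D Ornstein-Uhlenbeck escape problem by minimizing the discretized Freidlin-Wentzell action with pinned endpoints. This is far simpler than the Burgers or Navier-Stokes settings; the drift, horizon, and discretization are all illustrative:

```python
import numpy as np
from scipy.optimize import minimize

# toy setting: dX = -X dt + sqrt(eps) dW; the instanton from x = 0 to x = 1
# minimizes S[x] = 1/2 * integral over [0, T] of (xdot - f(x))^2 dt, f(x) = -x
T, N = 5.0, 100
dt = T / N

def action(interior):
    x = np.concatenate(([0.0], interior, [1.0]))   # pin x(0) = 0, x(T) = 1
    xdot = np.diff(x) / dt
    xmid = 0.5 * (x[:-1] + x[1:])                  # midpoint rule for f(x)
    return 0.5 * np.sum((xdot + xmid) ** 2) * dt

x0 = np.linspace(0.0, 1.0, N + 1)[1:-1]            # straight-line initial path
res = minimize(action, x0, method="L-BFGS-B")
instanton = np.concatenate(([0.0], res.x, [1.0]))
```

For this gradient drift the minimal action approaches twice the potential barrier, i.e. 1.0 for large T, and the minimizing path spends most of its time near the origin before a late exponential climb; the numerical minimizer reproduces both features. The algorithms discussed in the review solve the corresponding Euler-Lagrange equations efficiently in high-dimensional PDE settings instead of relying on generic optimization as done here.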
Adaptive Multi-Fidelity Modeling for Efficient Design Exploration Under Uncertainty
This thesis introduces a novel multi-fidelity modeling framework designed to address the practical challenges encountered in aerospace vehicle design when 1) multiple low-fidelity models exist, 2) each low-fidelity model may be correlated with the high-fidelity model in only part of the design domain, and 3) models may contain noise or uncertainty. The proposed approach approximates a high-fidelity model by consolidating multiple low-fidelity models using the localized Galerkin formulation. In addition, two adaptive sampling methods are developed to construct an accurate model efficiently. The first acquisition formulation, expected effectiveness, searches for the global optimum and is useful for modeling engineering objectives. The second acquisition formulation, expected usefulness, identifies feasible design domains and is useful for constrained design exploration. The proposed methods can be applied to engineering systems with complex and demanding simulation models.
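The localized Galerkin consolidation is the thesis's own formulation; as a generic stand-in for the underlying idea, locally weighting each low-fidelity model by its agreement with sparse high-fidelity samples, one might sketch the following. The kernel weighting, toy models, and all names are illustrative assumptions, not the thesis's method:

```python
import numpy as np

def consolidate(lofi_models, x_hf, y_hf, x_query, bandwidth=1.0):
    """Blend low-fidelity models with weights chosen, per query point, by how
    well each model matches nearby high-fidelity samples (Gaussian kernel)."""
    preds = np.array([m(x_query) for m in lofi_models])       # (k, n_query)
    w = np.empty_like(preds)
    for j, xq in enumerate(x_query):
        kern = np.exp(-0.5 * ((x_hf - xq) / bandwidth) ** 2)
        for i, m in enumerate(lofi_models):
            mse = np.sum(kern * (m(x_hf) - y_hf) ** 2) / np.sum(kern)
            w[i, j] = 1.0 / (mse + 1e-9)                      # inverse local error
    w /= w.sum(axis=0)
    return np.sum(w * preds, axis=0)

# toy: high fidelity is sin(x); each low-fidelity model drifts on one side,
# mimicking models correlated with the truth in only part of the domain
lofi = [lambda x: np.sin(x) + 0.02 * np.maximum(x, 0.0),   # good for x <= 0
        lambda x: np.sin(x) - 0.02 * np.minimum(x, 0.0)]   # good for x >= 0
x_hf = np.linspace(-3.0, 3.0, 7)                           # sparse hi-fi data
y_hf = np.sin(x_hf)
xq = np.linspace(-3.0, 3.0, 61)
yq = consolidate(lofi, x_hf, y_hf, xq)
```

Because the weights are local, each model dominates only where it tracks the high-fidelity data, so the blend beats either model used globally; the adaptive sampling formulations then decide where to spend the next expensive high-fidelity evaluation.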