
    Hilbert-space methods in experimental design


    Simulation-based optimal Bayesian experimental design for nonlinear systems

    The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction, particularly in situations where experiments are time-consuming and expensive to conduct. We propose a general mathematical framework and an algorithmic approach for optimal experimental design with nonlinear simulation-based models; in particular, we focus on finding sets of experiments that provide the most information about targeted sets of parameters. Our framework employs a Bayesian statistical setting, which provides a foundation for inference from noisy, indirect, and incomplete data, and a natural mechanism for incorporating heterogeneous sources of information. An objective function is constructed from information theoretic measures, reflecting expected information gain from proposed combinations of experiments. Polynomial chaos approximations and a two-stage Monte Carlo sampling method are used to evaluate the expected information gain. Stochastic approximation algorithms are then used to make optimization feasible in computationally intensive and high-dimensional settings. These algorithms are demonstrated on model problems and on nonlinear parameter estimation problems arising in detailed combustion kinetics.
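
    As a concrete illustration of the two-stage (nested) Monte Carlo evaluation of expected information gain mentioned above, the sketch below estimates the expected information gain of a toy one-parameter nonlinear model with a Gaussian likelihood and maximizes it over a grid of scalar designs. The forward model and all constants are invented, and the paper's polynomial chaos and stochastic approximation accelerations are omitted.

```python
import numpy as np

def expected_information_gain(forward, design, n_outer=500, n_inner=500,
                              sigma=0.1, rng=None):
    """Two-stage Monte Carlo estimate of expected information gain (EIG)
    for a Gaussian likelihood y = forward(theta, design) + N(0, sigma^2):
    EIG(d) = E_{theta,y}[ log p(y|theta,d) - log p(y|d) ],
    with the evidence p(y|d) estimated by an inner Monte Carlo average."""
    rng = np.random.default_rng(rng)
    theta_outer = rng.standard_normal(n_outer)   # prior draws, theta ~ N(0,1)
    theta_inner = rng.standard_normal(n_inner)   # fresh draws for the evidence
    y = forward(theta_outer, design) + sigma * rng.standard_normal(n_outer)

    # log p(y_i | theta_i, d): Gaussian log-likelihood at the generating parameter
    log_lik = -0.5 * ((y - forward(theta_outer, design)) / sigma) ** 2

    # log p(y_i | d) ~= log[(1/M) sum_j p(y_i | theta_j, d)]  (inner average)
    resid = (y[:, None] - forward(theta_inner[None, :], design)) / sigma
    log_evid = np.logaddexp.reduce(-0.5 * resid ** 2, axis=1) - np.log(n_inner)

    # Gaussian normalization constants cancel between the two log terms.
    return np.mean(log_lik - log_evid)

# Toy nonlinear forward model, invented: sensitivity to theta varies with d.
forward = lambda theta, d: theta ** 3 * d + np.exp(-np.abs(0.2 - d)) * theta
designs = np.linspace(0.0, 1.0, 11)
eigs = [expected_information_gain(forward, d, rng=0) for d in designs]
print("best design:", designs[int(np.argmax(eigs))])
```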

    Control of quantum phenomena: Past, present, and future

    Quantum control is concerned with active manipulation of physical and chemical processes on the atomic and molecular scale. This work presents a perspective on progress in the field of control over quantum phenomena, tracing the evolution of theoretical concepts and experimental methods from early developments to the most recent advances. The current experimental successes would be impossible without the development of intense femtosecond laser sources and pulse shapers. The two most critical theoretical insights were (1) realizing that ultrafast atomic and molecular dynamics can be controlled via manipulation of quantum interferences and (2) understanding that optimally shaped ultrafast laser pulses are the most effective means for producing the desired quantum interference patterns in the controlled system. Finally, these theoretical and experimental advances were brought together by the crucial concept of adaptive feedback control, which is a laboratory procedure employing measurement-driven, closed-loop optimization to identify the best shapes of femtosecond laser control pulses for steering quantum dynamics towards the desired objective. Optimization in adaptive feedback control experiments is guided by a learning algorithm, with stochastic methods proving to be especially effective. Adaptive feedback control of quantum phenomena has found numerous applications in many areas of the physical and chemical sciences, and this paper reviews the extensive body of experiments. Other subjects discussed include quantum optimal control theory, quantum control landscapes, the role of theoretical control designs in experimental realizations, and real-time quantum feedback control. The paper concludes with a prospective of open research directions that are likely to attract significant attention in the future.
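
    As a minimal illustration of the closed-loop idea, the sketch below replaces the laboratory with an invented noisy yield function of 16 pulse-shaper phase pixels and uses simple stochastic hill-climbing in place of the evolutionary algorithms typically employed; nothing here models a real experiment.

```python
import numpy as np

rng = np.random.default_rng(1)

def measured_yield(pulse_shape):
    """Stand-in for the laboratory measurement: maps pulse-shaper settings
    (spectral phase pixels here) to a control yield, plus detection noise.
    Purely illustrative; a real loop would drive the experiment instead."""
    target = np.sin(np.linspace(0, np.pi, pulse_shape.size))
    return -np.sum((pulse_shape - target) ** 2) + 0.01 * rng.standard_normal()

# Closed-loop optimization: propose a perturbed pulse shape, keep it if the
# measured yield improves (a minimal stochastic learning algorithm; adaptive
# feedback experiments typically use evolutionary algorithms instead).
shape = np.zeros(16)                  # 16 phase pixels, all initially flat
best = measured_yield(shape)
for _ in range(2000):
    trial = shape + 0.05 * rng.standard_normal(shape.size)
    y = measured_yield(trial)
    if y > best:
        shape, best = trial, y
print(f"final yield: {best:.4f}")
```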

    Change-point Problem and Regression: An Annotated Bibliography

    The problems of identifying changes at unknown times and of estimating the location of changes in stochastic processes are referred to as the change-point problem or, in the Eastern literature, as "disorder". The change-point problem, first introduced in the quality control context, has since developed into a fundamental problem in the areas of statistical control theory, stationarity of a stochastic process, estimation of the current position of a time series, testing and estimation of change in the patterns of a regression model, and most recently in the comparison and matching of DNA sequences in microarray data analysis. Numerous methodological approaches have been implemented in examining change-point models. Maximum-likelihood estimation, Bayesian estimation, isotonic regression, piecewise regression, quasi-likelihood and non-parametric regression are among the methods that have been applied to resolve challenges in change-point problems. Grid-searching approaches have also been used to examine the change-point problem. Statistical analysis of change-point problems depends on the method of data collection. If the data collection is ongoing until some random time, then the appropriate statistical procedure is called sequential. If, however, a large finite set of data is collected with the purpose of determining whether at least one change-point occurred, then this may be referred to as non-sequential. Not surprisingly, both the former and the latter have a rich literature, with much of the earlier work focusing on sequential methods inspired by applications in quality control for industrial processes. In the regression literature, the change-point model is also referred to as two- or multiple-phase regression, switching regression, segmented regression, two-stage least squares (Shaban, 1980), or broken-line regression. The area of the change-point problem has been the subject of intensive research in the past half-century. The subject has evolved considerably and found applications in many different areas. It seems rather impossible to summarize all of the research carried out over the past 50 years on the change-point problem. We have therefore confined ourselves to those articles on change-point problems which pertain to regression. The important branch of sequential procedures in change-point problems has been left out entirely; we refer the readers to the seminal review papers by Lai (1995, 2001). The so-called structural change models, which occupy a considerable portion of the research in the area of change-point, particularly among econometricians, have not been fully considered; we refer the reader to Perron (2005) for an updated review in this area. Articles on change-point in time series are considered only if the methodologies presented in the paper pertain to regression analysis.
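
    As an illustration of the grid-searching approach mentioned above, the sketch below fits a two-phase (broken-line) regression to synthetic data by trying every candidate split and keeping the one with the smallest total residual sum of squares; the data and the minimum segment length are invented for the example.

```python
import numpy as np

def fit_rss(x, y):
    """Residual sum of squares of a simple least-squares line through (x, y)."""
    A = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sum((y - A @ coef) ** 2)

def grid_search_changepoint(x, y, min_seg=3):
    """Return the index k minimizing RSS(1..k) + RSS(k+1..n) over a grid
    of candidate change points (non-sequential, single-change setting)."""
    best_k, best_rss = None, np.inf
    for k in range(min_seg, len(x) - min_seg):
        rss = fit_rss(x[:k], y[:k]) + fit_rss(x[k:], y[k:])
        if rss < best_rss:
            best_k, best_rss = k, rss
    return best_k

# Synthetic two-phase data: slope changes from 1 to -2 at x = 5.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 100)
y = np.where(x < 5, x, 5 - 2 * (x - 5)) + 0.3 * rng.standard_normal(x.size)
k = grid_search_changepoint(x, y)
print(f"estimated change point near x = {x[k]:.2f}")
```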

    Optimal Design of Validation Experiments for the Prediction of Quantities of Interest

    Numerical predictions of quantities of interest measured within physical systems rely on the use of mathematical models that should be validated, or at best, not invalidated. Model validation usually involves the comparison of experimental data (outputs from the system of interest) and model predictions, both obtained at a specific validation scenario. The design of this validation experiment should be directly relevant to the objective of the model, that of predicting a quantity of interest at a prediction scenario. In this paper, we address two specific issues arising when designing validation experiments. The first issue consists in determining an appropriate validation scenario in cases where the prediction scenario cannot be carried out in a controlled environment. The second issue concerns the selection of observations when the quantity of interest cannot be readily observed. The proposed methodology involves the computation of influence matrices that characterize the response surface of given model functionals. Minimization of the distance between influence matrices allows one to select a validation experiment most representative of the prediction scenario. We illustrate our approach on two numerical examples. The first example considers the validation of a simple model based on an ordinary differential equation governing an object in free fall, highlighting the importance of the choice of the validation experiment. The second numerical experiment focuses on the transport of a pollutant and demonstrates the impact that the choice of the quantity of interest has on the validation experiment to be performed.
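
    A minimal sketch of the influence-matrix idea, under simplifying assumptions: sensitivities of a toy two-output model are approximated by finite differences, and the feasible scenario whose influence matrix is closest (in Frobenius norm) to that of the unreachable prediction scenario is selected. The model and scenario ranges are invented, and the paper's treatment of observation selection is not reproduced.

```python
import numpy as np

def influence_matrix(model, params, scenario, eps=1e-6):
    """Finite-difference sensitivities of the model outputs with respect to
    the parameters at a given scenario: rows = outputs, cols = parameters."""
    base = np.atleast_1d(model(params, scenario))
    cols = []
    for i in range(params.size):
        p = params.copy()
        p[i] += eps
        cols.append((np.atleast_1d(model(p, scenario)) - base) / eps)
    return np.column_stack(cols)

# Toy model with two outputs and two parameters; s is the scenario variable.
def model(p, s):
    return np.array([p[0] * np.exp(-p[1] * s), p[0] * s / (1.0 + p[1] * s)])

params = np.array([2.0, 0.5])
prediction_scenario = 3.0               # cannot be run in the lab, by assumption
candidates = np.linspace(0.1, 2.0, 20)  # scenarios we *can* run

M_pred = influence_matrix(model, params, prediction_scenario)
dists = [np.linalg.norm(influence_matrix(model, params, s) - M_pred, "fro")
         for s in candidates]
best = candidates[int(np.argmin(dists))]
print(f"most representative validation scenario: s = {best:.2f}")
```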

    Enabling Automated, Reliable and Efficient Aerodynamic Shape Optimization With Output-Based Adapted Meshes

    Simulation-based aerodynamic shape optimization has been greatly pushed forward during the past several decades, largely due to the developments of computational fluid dynamics (CFD), geometry parameterization methods, mesh deformation techniques, sensitivity computation, and numerical optimization algorithms. Effective integration of these components has made aerodynamic shape optimization a highly automated process, requiring less and less human interference. Mesh generation, on the other hand, has become the main overhead of setting up the optimization problem. Obtaining a good computational mesh is essential for accurate output predictions in CFD simulations, and it therefore significantly affects the reliability of optimization results. However, this is in general a nontrivial task, heavily reliant on the user's experience, and it becomes harder with emerging high-fidelity requirements or in the design of novel configurations. Moreover, mesh quality and the associated numerical errors are typically only studied before and after the optimization, leaving the design search path exposed to numerical errors.

    This work tackles these issues by integrating an additional component, output-based mesh adaptation, within traditional aerodynamic shape optimization. First, we develop a more suitable error estimator for optimization problems by taking into account errors in both the objective and constraint outputs. The localized output errors are then used to drive mesh adaptation to achieve the desired accuracy on both the objective and constraint outputs. With the variable fidelity offered by the adaptive meshes, multi-fidelity optimization frameworks are developed to tightly couple mesh adaptation and shape optimization. The objective functional and its sensitivity are first evaluated on an initial coarse mesh, which is then adapted as the shape optimization proceeds. The effort to set up the optimization is minimal, since the initial mesh can be fairly coarse and easy to generate. Meanwhile, the proposed framework saves computational cost by reducing the mesh size at the early stages of the optimization, when the design is far from optimal, and by avoiding exhaustive searches on low-fidelity meshes when the outputs are inaccurate.

    To further improve the computational efficiency, we also introduce new methods to accelerate the error estimation and mesh adaptation using machine learning techniques. Surrogate models are developed to predict the localized output error and the optimal mesh anisotropy to guide the adaptation. The proposed machine learning approaches demonstrate good performance in two-dimensional test problems, encouraging further study and development to incorporate them within aerodynamic optimization techniques.

    Although CFD has been extensively used in aircraft design and optimization, design automation, reliability, and efficiency are largely limited by the mesh generation process and the fixed-mesh optimization paradigm. With emerging high-fidelity requirements and the further development of unconventional configurations, CFD-based optimization has to be made more accurate and more efficient to achieve higher design reliability and lower computational cost. Furthermore, future aerodynamic optimization needs to avoid unnecessary overhead in mesh generation and optimization setup to further automate the design process. The author expects the methods developed in this work to be key enablers of more automated, reliable, and efficient aerodynamic shape optimization, making CFD-based optimization a more powerful tool in aircraft design.

    PhD dissertation, Aerospace Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/163034/1/cgderic_1.pd
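
    The adapt-as-you-optimize loop described above can be caricatured in one dimension. In the sketch below, a trapezoidal quadrature stands in for the CFD solver, Richardson comparison against a refined mesh stands in for adjoint-based output error estimation, and uniform refinement stands in for anisotropic adaptation; the coupling rule (refine when the estimated discretization error dominates the recent optimization progress) is only a plausible stand-in for the thesis's framework.

```python
import numpy as np

def simulate(a, n):
    """Toy 'CFD' output: trapezoidal quadrature of an a-dependent integrand
    on an n-panel mesh, standing in for a flow solution plus objective."""
    t = np.linspace(0.0, 1.0, n + 1)
    y = (t - a) ** 2 + a * np.sin(5.0 * t)
    return float(np.sum((y[:-1] + y[1:]) * np.diff(t)) / 2.0)

def error_estimate(a, n):
    """Stand-in for adjoint-based output error: compare against a
    uniformly refined mesh (Richardson-style)."""
    return abs(simulate(a, 2 * n) - simulate(a, n))

a, n, step = 0.9, 4, 0.1          # initial design, coarse mesh, descent step
for it in range(50):
    # Optimize on the current mesh: one finite-difference gradient step.
    g = (simulate(a + 1e-6, n) - simulate(a, n)) / 1e-6
    a_new = a - step * g
    progress = abs(simulate(a_new, n) - simulate(a, n))
    a = a_new
    # Adapt only when the discretization error dominates the optimization
    # progress, so fidelity is added once the design actually demands it.
    if error_estimate(a, n) > 0.5 * max(progress, 1e-12) and n < 1024:
        n *= 2
print(f"design a = {a:.4f} on a {n}-panel mesh, J = {simulate(a, n):.6f}")
```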

    Why experimenters should not randomize, and what they should do instead

    This paper discusses experimental design for the case that (i) we are given a distribution of covariates from a pre-selected random sample, and (ii) we are interested in the average treatment effect (ATE) of some binary treatment. We show that in general there is a unique optimal non-random treatment assignment if there are continuous covariates. We argue that experimenters should choose this assignment. The optimal assignment minimizes the risk (e.g., expected squared error) of treatment effect estimators. We provide explicit expressions for the risk, and discuss algorithms which minimize it. The objective of controlled trials is to have treatment groups which are similar a priori (balanced), so we can "compare apples with apples." The expressions for risk derived in this paper provide an operationalization of the notion of balance. The intuition for our non-randomization result is similar to the reasons for not using randomized estimators: adding noise can never decrease risk. The formal setup we consider is decision-theoretic and nonparametric. In simulations and in an application to project STAR, we find that optimal designs have mean squared errors up to 20% lower than randomized designs.
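
    A small sketch of the deterministic-assignment idea: with continuous covariates, enumerate all equal-sized treatment/control splits and pick the one minimizing an imbalance measure. The squared distance between group covariate means used here is a simple stand-in for the paper's risk expressions, and the data are synthetic.

```python
import numpy as np
from itertools import combinations

def imbalance(X, treat_idx):
    """Simple covariate imbalance: squared distance between treated and
    control covariate means (a stand-in for the paper's risk expression)."""
    mask = np.zeros(len(X), dtype=bool)
    mask[list(treat_idx)] = True
    return np.sum((X[mask].mean(axis=0) - X[~mask].mean(axis=0)) ** 2)

rng = np.random.default_rng(0)
X = rng.standard_normal((12, 3))          # 12 units, 3 continuous covariates

# Deterministic optimal assignment: enumerate all 6-vs-6 splits and keep the
# one minimizing imbalance; with continuous covariates ties are negligible,
# so the minimizer is (generically) unique.
best = min(combinations(range(12), 6), key=lambda idx: imbalance(X, idx))
print("treated units:", best, "imbalance:", round(imbalance(X, best), 6))
```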

    Model-Oriented Data Analysis: Proceedings of an IIASA Workshop, Eisenach, GDR, March 9-13, 1987

    The main topics of this workshop were (1) optimal experimental design, (2) regression analysis, and (3) model testing and applications. Under the topic "Optimal experimental design", new optimality criteria based on asymptotic properties of relevant statistics were discussed. The use of additional restrictions on the designs was also discussed, inadequate and nonlinear models were considered, and Bayesian approaches to the design problem in the nonlinear case were a focal point of the special session. It was emphasized that experimental design is a field of much current interest. During the sessions devoted to "Regression analysis", it became clear that substantial progress has been made in statistics for nonlinear models. Here, besides the asymptotic behavior of several estimators, the non-asymptotic properties of some interesting statistics were discussed. The distribution of the maximum-likelihood (ML) estimator in normal models and alternative estimators to the least-squares or ML estimators were discussed intensively. Several approaches to "resampling" were considered in connection with linear, nonlinear and semiparametric models. Some new results were reported concerning simulated likelihoods, which provide a powerful tool for statistics in several types of models. The advantages and problems of bootstrapping, jackknifing and related methods were considered in a number of papers. Under the topic "Model testing and applications", the papers covered a broad spectrum of problems. Methods for the detection of outliers and the consequences of transformations of data were discussed. Furthermore, robust regression methods, empirical Bayesian approaches and the stability of estimators were considered, together with numerical problems in data analysis and the use of computer packages.

    An application of genetic algorithms to chemotherapy treatment.

    The present work investigates methods for optimising cancer chemotherapy within the bounds of clinical acceptability and for making this optimisation easily accessible to oncologists. Clinical oncologists wish to be able to improve existing treatment regimens in a systematic, effective and reliable way. In order to satisfy these requirements, a novel approach to chemotherapy optimisation has been developed, which utilises Genetic Algorithms in an intelligent search process for good chemotherapy treatments. The following chapters consequently address various issues related to this approach. Chapter 1 gives some biomedical background to the problem of cancer and its treatment. The complexity of the cancer phenomenon, as well as the multi-variable and multi-constrained nature of chemotherapy treatment, strongly supports the use of mathematical modelling for predicting and controlling the development of cancer. Some existing mathematical models, which describe the proliferation process of cancerous cells and the effect of anti-cancer drugs on this process, are presented in Chapter 2. Given the goal of controlling cancer development, the relevance of optimisation and optimal control theory becomes evident for achieving the optimal treatment outcome subject to the constraints of cancer chemotherapy. A survey of traditional optimisation methods applicable to the problem under investigation is given in Chapter 3, with the conclusion that the constraints imposed on cancer chemotherapy and the general non-linearity of the optimisation functionals associated with the objectives of cancer treatment often make these methods of optimisation ineffective. In contrast, Genetic Algorithms (GAs), featuring the methods of evolutionary search and optimisation, have recently demonstrated in many practical situations an ability to quickly discover useful solutions to highly-constrained, irregular and discontinuous problems that have been difficult to solve by traditional optimisation methods. Chapter 4 presents the essence of Genetic Algorithms, as well as their salient features and properties, and prepares the ground for the utilisation of Genetic Algorithms for optimising cancer chemotherapy treatment.

    The particulars of chemotherapy optimisation using Genetic Algorithms are given in Chapter 5 and Chapter 6, which present the original work of this thesis. In Chapter 5, the optimisation problem of single-drug chemotherapy is formulated as a search task and solved by several numerical methods. The results obtained from different optimisation methods are used to assess the quality of the GA solution and the effectiveness of Genetic Algorithms as a whole. Also in Chapter 5, a new approach to tuning GA factors is developed, whereby the optimisation performance of Genetic Algorithms can be significantly improved. This approach is based on statistical inference about the significance of GA factors and on regression analysis of the GA performance. Being less computationally intensive than the existing methods of GA factor adjusting, the newly developed approach often gives better tuning results. Chapter 6 deals with the optimisation of multi-drug chemotherapy, which is a more practical and challenging problem. Its practicality can be explained by oncologists' preference for administering anti-cancer drugs in various combinations in order to better cope with the occurrence of drug-resistant cells. However, the strict toxicity constraints imposed on combinations of anti-cancer drugs make the optimisation problem of multi-drug chemotherapy very difficult to solve, especially when complex treatment objectives are considered. Nevertheless, the experimental results of Chapter 6 demonstrate that this problem is tractable to Genetic Algorithms, which are capable of finding good chemotherapeutic regimens in different treatment situations. On the basis of these results, a decision has been made to encapsulate Genetic Algorithms into an independent optimisation module and to embed this module into a more general and user-oriented environment, the Oncology Workbench. The particulars of this encapsulation and embedding are also given in Chapter 6.

    Finally, Chapter 7 concludes the present work by summarising the contributions made to the knowledge of the subject treated and by outlining the directions for further investigations. The main contributions are: (1) a novel application of the Genetic Algorithm technique in the field of cancer chemotherapy optimisation, (2) the development of a statistical method for tuning the values of GA factors, and (3) the development of a robust and versatile optimisation utility for a clinically usable decision support system. The latter contribution of this thesis creates an opportunity to widen the application domain of Genetic Algorithms within the field of drug treatments and to allow more clinicians to benefit from utilising GA optimisation.
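
    A hedged sketch of the Genetic Algorithm approach to multi-drug schedules described above: the code below evolves dose matrices (drugs by cycles) under a toxicity budget enforced by a penalty. The log-kill tumour model, kill rates, toxicity cap and GA settings are all invented for illustration and are not the thesis's model or its Oncology Workbench implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N_DRUGS, N_CYCLES, POP, GENS = 2, 8, 60, 200
KILL = np.array([0.28, 0.20])     # illustrative log-kill rates per unit dose
GROWTH, TOX_CAP = 0.15, 10.0      # tumour regrowth per cycle, toxicity budget

def fitness(doses):
    """Penalised objective: final log tumour burden under a simple log-kill
    model, plus a penalty when cumulative toxicity exceeds the cap."""
    log_burden = 0.0
    for t in range(N_CYCLES):
        log_burden += GROWTH - KILL @ doses[:, t]
    toxicity = doses.sum()
    return -log_burden - 50.0 * max(0.0, toxicity - TOX_CAP)

pop = rng.uniform(0, 1.5, size=(POP, N_DRUGS, N_CYCLES))   # encoded regimens
for _ in range(GENS):
    scores = np.array([fitness(ind) for ind in pop])
    new = []
    for _ in range(POP):
        # Tournament selection of two parents, uniform crossover, mutation.
        i, j = rng.integers(POP, size=2)
        a = pop[i] if scores[i] >= scores[j] else pop[j]
        i, j = rng.integers(POP, size=2)
        b = pop[i] if scores[i] >= scores[j] else pop[j]
        mask = rng.random(a.shape) < 0.5
        child = np.where(mask, a, b) + 0.05 * rng.standard_normal(a.shape)
        new.append(np.clip(child, 0.0, None))   # doses stay non-negative
    pop = np.array(new)

best = max(pop, key=fitness)
print("best regimen fitness:", round(fitness(best), 3))
print("total dose per drug:", best.sum(axis=1).round(2))
```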

    High Power Gain Guided Index Antiguided Fiber Lasers and Amplifiers

    Increasing the core size of high-power fiber lasers and amplifiers is highly desirable in order to mitigate unwanted nonlinear optical effects and raise the optical damage threshold. If the core size of a conventional index-guided (IG) optical fiber is increased, the fiber becomes multimode, because it is very difficult to control and fine-tune the index step between the core and cladding to satisfy the single-mode condition. Siegman proposed gain-guided index-antiguided (GG-IAG) fibers as a possible platform for ultra-large-core single-mode operation in lasers and amplifiers. In this thesis, the beam-quality factor M² for the fundamental LP01 mode of a step-index fiber with finite and infinite cladding diameter is calculated in the presence of gain as a function of the complex generalized V number. The numerical results agree with analytical work obtained in our group. It is shown that the M² value of a single-mode gain-guided fiber laser can be arbitrarily large. The results are important for the interpretation of the beam-quality measurements in recent experiments on single-mode gain-guided fiber lasers. It is also shown that the conventional infinite-cladding-diameter approximation cannot be used for index-antiguided gain-guided fibers, and a rigorous analysis is required for accurate prediction of the beam-quality factors reported in recent experimental measurements. We also highlight the key reasons behind the poor power efficiency observed in multiple experiments on GG-IAG fiber amplifiers and lasers. We show that by properly designing the fiber's geometrical characteristics, it is possible to considerably improve the power efficiency of GG-IAG fiber amplifiers in end-pumping schemes.
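
    The second-moment definition of the beam-quality factor used in such analyses, M² = 4π σ_x σ_f, with σ_x the standard deviation of the near-field intensity and σ_f that of the far-field (spatial-frequency) intensity, is easy to evaluate numerically. The 1D sketch below verifies M² ≈ 1 for a Gaussian and shows a structured, chirped mode (invented for illustration, not the LP01 gain-guided analysis of the thesis) giving M² > 1.

```python
import numpy as np

def second_moment(u, w):
    """Centered second moment (variance) of the weight w over coordinate u."""
    w = w / w.sum()
    mean = np.sum(u * w)
    return np.sum((u - mean) ** 2 * w)

def m_squared_1d(x, field):
    """Second-moment beam quality: M^2 = 4*pi*sigma_x*sigma_f, where sigma_x
    is the std of the near-field intensity and sigma_f the std of the
    far-field (Fourier) intensity. Equals 1 for an ideal Gaussian mode."""
    dx = x[1] - x[0]
    f = np.fft.fftshift(np.fft.fftfreq(x.size, dx))    # spatial frequency
    far = np.fft.fftshift(np.fft.fft(field))           # far-field amplitude
    sx = np.sqrt(second_moment(x, np.abs(field) ** 2))
    sf = np.sqrt(second_moment(f, np.abs(far) ** 2))
    return 4.0 * np.pi * sx * sf

x = np.linspace(-1e-3, 1e-3, 8192)        # transverse coordinate, meters
w = 25e-6                                 # illustrative mode-field radius
gauss = np.exp(-(x / w) ** 2)
print(f"Gaussian mode:   M^2 = {m_squared_1d(x, gauss):.3f}")   # ~1.0
# An invented leaky-like mode with heavier tails and a quadratic phase chirp
# broadens the far field and therefore raises M^2:
leaky = np.exp(-(np.abs(x) / w) ** 1.2) * np.exp(1j * 8e9 * x ** 2)
print(f"structured mode: M^2 = {m_squared_1d(x, leaky):.3f}")   # > 1
```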