240 research outputs found

    Positive Definite Solutions of the Nonlinear Matrix Equation $X+A^{\mathrm{H}}\bar{X}^{-1}A=I$

    This paper is concerned with the positive definite solutions to the matrix equation $X+A^{\mathrm{H}}\bar{X}^{-1}A=I$, where $X$ is the unknown and $A$ is a given complex matrix. By introducing and studying a matrix operator on complex matrices, it is shown that the existence of positive definite solutions of this class of nonlinear matrix equations is equivalent to the existence of positive definite solutions of the nonlinear matrix equation $W+B^{\mathrm{T}}W^{-1}B=I$, which has been extensively studied in the literature, where $B$ is a real matrix uniquely determined by $A$. It is also shown that if the considered nonlinear matrix equation has a positive definite solution, then it has maximal and minimal solutions. Bounds on the positive definite solutions are also established in terms of the matrix $A$. Finally, some sufficient conditions and necessary conditions for the existence of positive definite solutions of the equations are proposed.
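    For the real equation $W+B^{\mathrm{T}}W^{-1}B=I$, a standard approach in the literature is the fixed-point iteration $W_{k+1}=I-B^{\mathrm{T}}W_k^{-1}B$ started from $W_0=I$, which converges to the maximal positive definite solution whenever a solution exists. A minimal NumPy sketch; the matrix B below is an arbitrary illustrative choice satisfying the sufficient condition $\sigma_{\max}(B)\le 1/2$:

```python
import numpy as np

def solve_plus_equation(B, iters=200, tol=1e-12):
    """Fixed-point iteration W_{k+1} = I - B^T W_k^{-1} B from W_0 = I.
    Converges monotonically to the maximal positive definite solution
    of W + B^T W^{-1} B = I when a solution exists (e.g. when
    sigma_max(B) <= 1/2)."""
    n = B.shape[0]
    I = np.eye(n)
    W = I.copy()
    for _ in range(iters):
        W_new = I - B.T @ np.linalg.solve(W, B)
        if np.linalg.norm(W_new - W) < tol:
            return W_new
        W = W_new
    return W

# Illustrative matrix with sigma_max(B) < 1/2, so a solution is guaranteed.
B = 0.2 * np.array([[1.0, 0.5], [0.0, 1.0]])
W = solve_plus_equation(B)
residual = np.linalg.norm(W + B.T @ np.linalg.solve(W, B) - np.eye(2))
print(residual)  # close to machine precision
```

    The residual confirms that the fixed point satisfies the equation, and the eigenvalues of $W$ stay positive along the iteration.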

    Asynchronous and Multiprecision Linear Solvers - Scalable and Fault-Tolerant Numerics for Energy Efficient High Performance Computing

    Asynchronous methods minimize idle times by removing synchronization barriers and therefore allow efficient usage of computer systems. Their implied high tolerance of communication latencies also improves fault tolerance. As asynchronous methods further enable use of the power- and energy-saving mechanisms provided by the hardware, they are suitable candidates for the highly parallel and heterogeneous hardware platforms expected in the near future.

    Multi-layer health-aware economic predictive control of a pasteurization pilot plant

    This paper proposes two health-aware economic predictive control strategies that aim to minimize the damage of components in a pasteurization plant. The damage is assessed with a rainflow-counting algorithm that allows estimating the components' fatigue. Using the results of this algorithm, a simplified model that characterizes the health of the system is developed and integrated into the predictive controller. The overall control objective is modified by adding an extra criterion that takes the accumulated damage into account. The first strategy is a single-layer predictive controller with an integral action that eliminates the steady-state error introduced by the extra criterion. To further reduce both accumulated damage and operational cost, the single-layer approach is extended to a multi-layer control scheme in which the dynamic optimization problem is solved on the model at two different time scales. Finally, to assess the trade-off between minimal accumulated damage and operational cost, both control strategies are compared in simulation on a utility-scale pasteurization plant.
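    Rainflow counting itself is beyond a short sketch, but the step from counted cycles to accumulated fatigue damage is typically Palmgren-Miner linear damage summation. A minimal illustration, assuming a hypothetical power-law S-N curve (the abstract does not specify the exact form of the fatigue model, so the parameters below are invented for demonstration):

```python
import numpy as np

def miner_damage(cycle_amplitudes, cycle_counts, S_f=1000.0, m=3.0):
    """Palmgren-Miner linear damage accumulation.
    For each counted stress amplitude S_i (e.g. from a rainflow count),
    a power-law S-N curve N_i = (S_f / S_i)**m gives cycles to failure;
    accumulated damage is D = sum(n_i / N_i), with failure predicted at
    D >= 1.  S_f and m are hypothetical material parameters."""
    S = np.asarray(cycle_amplitudes, dtype=float)
    n = np.asarray(cycle_counts, dtype=float)
    N_fail = (S_f / S) ** m
    return float(np.sum(n / N_fail))

# Example: a cycle histogram as a rainflow counter might return it.
D = miner_damage([100.0, 200.0, 300.0], [10.0, 5.0, 1.0])
print(D)  # 0.077: fraction of fatigue life consumed
```

    A health-aware controller can then penalize predicted increments of $D$ in its cost function, which is the role of the extra criterion described above.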

    Efficient Contextual Measures for Classification of Multispectral Image Data

    The most common method for labeling multispectral image data classifies each pixel entirely on the basis of its own spectral signature. Such a method neither utilizes contextual information in the image nor incorporates secondary information related to the scene. This exclusion is generally due to the poor cost/performance efficiency of most contextual algorithms and a lack of knowledge concerning how to relate variables from different sources. In this research, several efficient spatial context measures are developed from different structural models for four-nearest-neighbor neighborhoods. Most of these measures rely on simple manipulations of the label probabilities generated by a noncontextual classifier; they are computationally efficient and improve classification accuracy over the noncontextual result. Among other schemata, the measures include: average label probabilities in a neighborhood; label probabilities combined as a function of a metric in the label probability space; and context expressed through semantic constraints within a Bayesian framework. In addition, an efficient implementation of a contextual classifier based on compound decision theory is developed through a simplification of the structure of the contextual prior probability. No accuracy is lost through the simplification, but computational speed is increased 15-fold. Finally, a procedure to combine label probabilities from independent data sources is proposed: a mechanism that weights the label probabilities from each source as a function of its independent classification accuracy is created and evaluated.
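    The first of these measures, averaging label probabilities over four-nearest-neighbor neighborhoods, can be sketched as follows. The array layout and the choice to average only over the neighbors that exist at image borders are assumptions made for illustration:

```python
import numpy as np

def contextual_relabel(probs):
    """Smooth per-pixel class probabilities with a four-nearest-neighbor
    average, then relabel each pixel by the maximum smoothed probability.
    probs: array of shape (H, W, C) from a noncontextual classifier.
    Border pixels average over however many neighbors they have."""
    H, W, C = probs.shape
    acc = np.zeros_like(probs)
    cnt = np.zeros((H, W, 1))
    for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        # destination pixels (y, x) receive probs[y + dy, x + dx]
        dst_y = slice(max(-dy, 0), H + min(-dy, 0))
        dst_x = slice(max(-dx, 0), W + min(-dx, 0))
        src_y = slice(max(dy, 0), H + min(dy, 0))
        src_x = slice(max(dx, 0), W + min(dx, 0))
        acc[dst_y, dst_x] += probs[src_y, src_x]
        cnt[dst_y, dst_x] += 1
    smoothed = acc / cnt
    return smoothed.argmax(axis=-1)

# One noisy pixel disagreeing with its neighborhood gets voted back.
probs = np.full((5, 5, 2), [0.8, 0.2])
probs[2, 2] = [0.3, 0.7]
labels = contextual_relabel(probs)
print(labels[2, 2])  # 0: neighbors overrule the noisy local evidence
```

    This captures why the measure is cheap: it is a single pass over shifted copies of the probability maps, with no iterative relaxation.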

    Efficient Numerical Solution of Large Scale Algebraic Matrix Equations in PDE Control and Model Order Reduction

    Matrix Lyapunov and Riccati equations are an important tool in mathematical systems theory. They are the key ingredients in balancing-based model order reduction techniques and linear quadratic regulator problems. For small and moderately sized problems these equations are solved by techniques with at least cubic complexity, which prohibits their usage in large-scale applications. Around the year 2000, solvers for large-scale problems were introduced. The basic idea is to compute a low-rank decomposition of the quadratic and dense solution matrix and thereby reduce the memory and computational complexity of the algorithms. In this thesis, efficiency-enhancing techniques for the solution of large-scale matrix equations by the low-rank alternating directions implicit (ADI) iteration are introduced and discussed, and their applicability to real-world systems is demonstrated. The thesis is structured in seven central chapters. After the introduction, chapter 2 presents the basic concepts and notation used throughout the thesis. Chapter 3 introduces a collection of test examples, spanning from easily scalable academic test systems to badly conditioned technical applications, which are used to demonstrate the features of the solvers. Chapters 4 and 5 describe the basic solvers and the modifications made to render them applicable to an even larger class of problems. The following two chapters treat the application of the solvers in the context of model order reduction and linear quadratic optimal control of PDEs. The final chapter presents the extensive numerical tests undertaken with the solvers proposed in the prior chapters. Some conclusions and an appendix complete the thesis.
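    The low-rank ADI idea can be illustrated with a minimal single-shift sketch for the Lyapunov equation $AX+XA^{\mathrm{T}}+BB^{\mathrm{T}}=0$; the thesis treats far more general solvers, and the test matrix and shift choice below are illustrative only:

```python
import numpy as np

def lr_adi_single_shift(A, B, p, steps):
    """Low-rank ADI for A X + X A^T + B B^T = 0 with one real shift p < 0.
    V_1 = sqrt(-2p) (A + pI)^{-1} B and V_{j+1} = (I - 2p (A + pI)^{-1}) V_j;
    stacking the V_j as columns of Z yields the low-rank factor X ~ Z Z^T,
    so the dense solution matrix is never formed during the iteration."""
    n = A.shape[0]
    ApI = A + p * np.eye(n)
    V = np.sqrt(-2.0 * p) * np.linalg.solve(ApI, B)
    Z = V
    for _ in range(steps - 1):
        V = V - 2.0 * p * np.linalg.solve(ApI, V)
        Z = np.hstack([Z, V])
    return Z

# Small symmetric stable test problem; p = -sqrt(lam_min * lam_max) is the
# optimal single shift for a symmetric negative definite A.
n = 10
A = -(2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
B = np.ones((n, 1))
lam = np.linalg.eigvalsh(-A)
p = -np.sqrt(lam.min() * lam.max())
Z = lr_adi_single_shift(A, B, p, steps=80)
X = Z @ Z.T
res = np.linalg.norm(A @ X + X @ A.T + B @ B.T) / np.linalg.norm(B @ B.T)
print(res)  # small relative residual
```

    In practice a cycle of several shifts, sparse solves for $(A+pI)$, and column compression of $Z$ are what make the method scale; this sketch only shows the recurrence itself.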

    Circuit mechanisms for the chemical modulation of cortex-wide network interactions and behavioral variability

    Influential theories postulate distinct roles of catecholamines and acetylcholine in cognition and behavior. However, previous physiological work reported similar effects of these neuromodulators on the response properties (specifically, the gain) of individual cortical neurons. Here, we show a double dissociation between the effects of catecholamines and acetylcholine at the level of large-scale interactions between cortical areas in humans. A pharmacological boost of catecholamine levels increased cortex-wide interactions during a visual task, but not rest. An acetylcholine boost decreased interactions during rest, but not task. Cortical circuit modeling explained this dissociation by differential changes in two circuit properties: the local excitation-inhibition balance (more strongly increased by catecholamines) and intracortical transmission (more strongly reduced by acetylcholine). The inferred catecholaminergic mechanism also predicted noisier decision-making, which we confirmed for both perceptual and value-based choice behavior. Our work highlights specific circuit mechanisms for shaping cortical network interactions and behavioral variability by key neuromodulatory systems.

    Advanced data analysis for traction force microscopy and data-driven discovery of physical equations

    The plummeting cost of collecting and storing data and the increasingly available computational power in the last decade have led to the emergence of new data analysis approaches in various scientific fields. Frequently, the new statistical methodology is employed for analyzing data involving incomplete or unknown information. In this thesis, new statistical approaches are developed for improving the accuracy of traction force microscopy (TFM) and for data-driven discovery of physical equations. TFM is a versatile method for the reconstruction of a spatial image of the traction forces exerted by cells on elastic gel substrates. The traction force field is calculated from a linear mechanical model connecting the measured substrate displacements with the sought-for cell-generated stresses in real or Fourier space, which is an inverse and ill-posed problem. This inverse problem is commonly solved using regularization methods. Here, we systematically test the performance of new regularization methods and Bayesian inference for quantifying the parameter uncertainty in TFM. We compare two classical schemes, L1- and L2-regularization, with three previously untested schemes, namely Elastic Net regularization, Proximal Gradient Lasso, and Proximal Gradient Elastic Net. We find that Elastic Net regularization, which combines L1 and L2 regularization, outperforms all other methods with regard to accuracy of traction reconstruction. Next, we develop two methods, Bayesian L2 regularization and Advanced Bayesian L2 regularization, for automatic, optimal L2 regularization. We further combine the Bayesian L2 regularization with the computational speed of Fast Fourier Transform algorithms to develop a fully automated method for noise reduction and robust, standardized traction-force reconstruction that we call Bayesian Fourier transform traction cytometry (BFTTC). This method is made freely available as a software package with a graphical user interface for intuitive usage.
Using synthetic data and experimental data, we show that these Bayesian methods enable robust reconstruction of traction without requiring a difficult selection of regularization parameters specifically for each data set. Next, we employ our methodology developed for the solution of inverse problems for automated, data-driven discovery of ordinary differential equations (ODEs), partial differential equations (PDEs), and stochastic differential equations (SDEs). To find the equations governing a measured time-dependent process, we construct dictionaries of non-linear candidate equations. These candidate equations are evaluated using the measured data. With this approach, one can construct a likelihood function for the candidate equations. Optimization yields a linear, inverse problem which is to be solved under a sparsity constraint. We combine Bayesian compressive sensing using Laplace priors with automated thresholding to develop a new approach, namely automatic threshold sparse Bayesian learning (ATSBL). ATSBL is a robust method to identify ODEs, PDEs, and SDEs involving Gaussian noise, which is also referred to as type I noise. We extensively test the method with synthetic datasets describing physical processes. For SDEs, we combine data-driven inference using ATSBL with a novel entropy-based heuristic for discarding data points with high uncertainty. Finally, we develop an automatic iterative sampling optimization technique akin to umbrella sampling. With it, we demonstrate that data-driven inference of SDEs can be substantially improved through feedback during the inference process if the stochastic process under investigation can be manipulated either experimentally or in simulations.
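A minimal illustration of the dictionary-based sparse-regression idea, using plain sequentially thresholded least squares rather than ATSBL (which the thesis develops as a Bayesian refinement of this kind of baseline):

```python
import numpy as np

def stls(Theta, dxdt, threshold=0.1, iters=10):
    """Sequentially thresholded least squares: fit all dictionary
    coefficients, zero out the small ones, and refit on the surviving
    columns.  A simple sparse-regression baseline for selecting the
    active terms of a candidate equation dictionary."""
    xi = np.linalg.lstsq(Theta, dxdt, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        big = ~small
        if big.any():
            xi[big] = np.linalg.lstsq(Theta[:, big], dxdt, rcond=None)[0]
    return xi

# Synthetic data from dx/dt = -2 x; derivatives taken analytically for clarity.
t = np.linspace(0.0, 2.0, 200)
x = np.exp(-2.0 * t)
dxdt = -2.0 * x
Theta = np.column_stack([np.ones_like(x), x, x ** 2])  # dictionary [1, x, x^2]
xi = stls(Theta, dxdt)
print(xi)  # the x term is identified with coefficient -2; the rest vanish
```

With noisy measured data the derivative estimation and the coefficient selection both become ill-posed, which is precisely where the Bayesian machinery and the entropy-based point rejection described above come in.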