
    Multi-Quality Auto-Tuning by Contract Negotiation

    A characteristic challenge of software development is the management of omnipresent change. Classically, this constant change is driven by customers changing their requirements. The wish to optimally leverage available resources opens another source of change: the software system's environment. Software is tailored to specific platforms (e.g., hardware architectures), resulting in many variants of the same software optimized for different environments. If the environment changes, a different variant is to be used, i.e., the system has to reconfigure to the variant optimized for the new situation. The automation of such adjustments is the subject of research on self-adaptive systems. The basic principle is a control loop, as known from control theory: the system (and its environment) is continuously monitored, the collected data is analyzed, and decisions for or against a reconfiguration are computed and realized. The central problems in this field addressed in this thesis are the management of interdependencies between non-functional properties of the system, the handling of multiple decision criteria, and scalability. In this thesis, a novel approach to self-adaptive software--Multi-Quality Auto-Tuning (MQuAT)--is presented, which provides design and operation principles for software systems that automatically provide the best possible utility to the user at the least possible cost. For this purpose, a component model has been developed, enabling the software developer to design and implement self-optimizing software systems in a model-driven way. This component model allows for the specification of the structure as well as the behavior of the system. The notion of quality contracts is utilized to cover the non-functional behavior and, especially, the dependencies between non-functional properties of the system. At runtime, the component model covers the runtime state of the system. This runtime model is used in combination with the contracts to generate optimization problems in different formalisms: Integer Linear Programming (ILP), Pseudo-Boolean Optimization (PBO), Ant Colony Optimization (ACO), and Multi-Objective Integer Linear Programming (MOILP). Standard solvers are applied to derive solutions to these problems, which represent reconfiguration decisions if the identified configuration differs from the current one. Each approach is empirically evaluated in terms of its scalability, showing the feasibility of all approaches except ACO, the superiority of ILP over PBO, and the limits of each approach: 100 component types for ILP, 30 for PBO, 10 for ACO, and 30 for 2-objective MOILP. In the presence of more than two objective functions, the MOILP approach is shown to be infeasible.
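
    To make the mapping from runtime model to optimization problem concrete, the following sketch encodes a toy configuration problem as an ILP: one binary variable per candidate implementation, with contract-derived utility and resource figures as coefficients. It is a minimal illustration using the PuLP library, not MQuAT's actual problem generator; all component names and numbers are invented.

        # Minimal illustrative ILP (not the MQuAT generator): pick exactly one
        # implementation per component type, maximize contract-derived utility,
        # respect an energy budget. Names and numbers are invented.
        from pulp import LpProblem, LpVariable, LpMaximize, lpSum, PULP_CBC_CMD

        # Hypothetical runtime model: implementation -> (utility, energy demand).
        implementations = {
            "Sorter":  {"QuickSortImpl": (8, 3), "MergeSortImpl": (6, 2)},
            "Encoder": {"GpuEncoderImpl": (9, 5), "CpuEncoderImpl": (5, 2)},
        }
        energy_budget = 6

        prob = LpProblem("reconfiguration", LpMaximize)
        x = {(c, i): LpVariable(f"use_{c}_{i}", cat="Binary")
             for c, impls in implementations.items() for i in impls}

        # Objective: total utility of the selected configuration.
        prob += lpSum(u * x[c, i] for c, impls in implementations.items()
                      for i, (u, e) in impls.items())
        # Exactly one implementation per component type.
        for c, impls in implementations.items():
            prob += lpSum(x[c, i] for i in impls) == 1
        # Contract-derived resource constraint.
        prob += lpSum(e * x[c, i] for c, impls in implementations.items()
                      for i, (u, e) in impls.items()) <= energy_budget

        prob.solve(PULP_CBC_CMD(msg=False))
        print([key for key, var in x.items() if var.value() == 1])

    The solver's selection represents the reconfiguration decision; changing the resource rows or objective corresponds to different contract terms.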

    Sparse Cholesky factorization by Kullback-Leibler minimization

    We propose to compute a sparse approximate inverse Cholesky factor L of a dense covariance matrix Θ by minimizing the Kullback-Leibler divergence between the Gaussian distributions N(0, Θ) and N(0, L^{-⊀}L^{-1}), subject to a sparsity constraint. Surprisingly, this problem has a closed-form solution that can be computed efficiently, recovering the popular Vecchia approximation in spatial statistics. Based on recent results on the approximate sparsity of inverse Cholesky factors of Θ obtained from pairwise evaluation of Green's functions of elliptic boundary-value problems at points {x_i}_{1≀i≀N} ⊂ ℝ^d, we propose an elimination ordering and sparsity pattern that allows us to compute Δ-approximate inverse Cholesky factors of such Θ in computational complexity O(N log(N/Δ)^d) in space and O(N log(N/Δ)^{2d}) in time. To the best of our knowledge, this is the best asymptotic complexity for this class of problems. Furthermore, our method is embarrassingly parallel, automatically exploits low-dimensional structure in the data, and can perform Gaussian-process regression in linear (in N) space complexity. Motivated by its optimality properties, we propose ways of applying our method to the joint covariance of training and prediction points in Gaussian-process regression, greatly improving stability and computational cost. Finally, we show how to apply our method to the important setting of Gaussian processes with additive noise, sacrificing neither accuracy nor computational complexity.
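
    The closed-form solution is compact enough to state in a few lines: with the diagonal index listed first in each column's sparsity set s_i, the KL-optimal column is Θ[s_i, s_i]^{-1} e_1 / sqrt((Θ[s_i, s_i]^{-1})_{11}). The sketch below evaluates this formula column by column in NumPy; the kernel, point set, and nearest-neighbor pattern are placeholder choices, and the paper's scalable ordering and aggregation are not reproduced.

        # Direct per-column evaluation of the KL-optimal sparse inverse
        # Cholesky factor (the Vecchia approximation); illustrative only.
        import numpy as np

        def kl_optimal_factor(Theta, pattern):
            """pattern[i]: row indices s_i of column i, with i listed first.
            Returns lower-triangular L with L @ L.T approximating inv(Theta)."""
            N = Theta.shape[0]
            L = np.zeros((N, N))
            for i, s in enumerate(pattern):
                e1 = np.zeros(len(s)); e1[0] = 1.0
                v = np.linalg.solve(Theta[np.ix_(s, s)], e1)  # Theta[s,s]^{-1} e_1
                L[s, i] = v / np.sqrt(v[0])
            return L

        # Toy example: exponential kernel on sorted 1D points; each column keeps
        # its own index plus the next k-1 points (its nearest later neighbors).
        rng = np.random.default_rng(0)
        pts = np.sort(rng.uniform(size=200))
        Theta = np.exp(-np.abs(pts[:, None] - pts[None, :]))
        pattern = [list(range(i, min(i + 5, 200))) for i in range(200)]
        L = kl_optimal_factor(Theta, pattern)
        P = np.linalg.inv(Theta)
        print(np.linalg.norm(L @ L.T - P) / np.linalg.norm(P))  # small relative error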

    Target tracking using a joint acoustic video system

    In this paper, we present a particle filter that exploits multimodal information for robust target tracking. We demonstrate a Bayesian framework for combining acoustic and video information using a state-space approach. A proposal strategy for joint acoustic and video state-space tracking with particle filters is given by carefully placing the random support of the joint filter where the final posterior is likely to lie. Using the Kullback-Leibler divergence measure, it is shown that the joint filter's posterior estimate decreases the worst-case divergence of the individual modalities. Hence, the joint tracking filter is robust against video and acoustic occlusions. We also introduce a time-delay variable into the joint state space to handle the acoustic-video data synchronization issue caused by acoustic propagation delay. Computer simulations are presented with field and synthetic data to demonstrate the filter's performance.
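
    The basic fusion mechanism can be sketched with a bootstrap particle filter: particle weights are multiplied by the likelihoods of both modalities, so a momentarily occluded modality simply contributes a flat likelihood while the other keeps the track alive. This is a generic illustration with invented scalar models, not the paper's joint proposal strategy or its time-delay handling.

        # Generic bootstrap particle filter fusing two modalities by multiplying
        # their likelihoods; the scalar state and observation models are invented
        # and much simpler than the paper's joint acoustic-video state space.
        import numpy as np

        rng = np.random.default_rng(1)
        N = 500                                    # number of particles
        x = rng.normal(0.0, 1.0, N)                # 1D target state (e.g., bearing)
        w = np.full(N, 1.0 / N)                    # importance weights

        def step(x, w, z_acoustic, z_video, q=0.1, r_a=0.5, r_v=0.2):
            x = x + rng.normal(0.0, q, x.size)     # propagate: random-walk dynamics
            lik = (np.exp(-0.5 * ((z_acoustic - x) / r_a) ** 2)   # acoustic model
                   * np.exp(-0.5 * ((z_video - x) / r_v) ** 2))   # video model
            w = w * lik
            w = w / w.sum()
            if 1.0 / np.sum(w ** 2) < x.size / 2:  # resample when ESS drops
                idx = rng.choice(x.size, x.size, p=w)
                x, w = x[idx], np.full(x.size, 1.0 / x.size)
            return x, w

        for z_a, z_v in [(0.10, 0.05), (0.30, 0.25), (0.50, 0.55)]:
            x, w = step(x, w, z_a, z_v)
        print(np.sum(w * x))                       # posterior mean estimate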

    Acoustic multi target tracking using direction-of-arrival batches

    In this paper, we propose a particle filter acoustic direction-of-arrival (DOA) tracker to track multiple maneuvering targets using a state-space approach. The particle filter determines its state vector using a batch of DOA estimates. The filter likelihood treats the observations as an image, using template models derived from the state update equation, and also incorporates the possibility of missing data as well as spurious DOA observations. The particle filter handles multiple targets using a partitioned state-vector approach. The particle filter solution is compared with three other methods: the extended Kalman filter, the Laplacian filter, and another particle filter that uses the acoustic microphone outputs directly. We discuss the advantages and disadvantages of these methods for our problem. In addition, we demonstrate an autonomous system for multiple-target DOA tracking with automatic target initialization and deletion. The initialization system uses a track-before-detect approach and employs the matching pursuit idea to initialize multiple targets. Computer simulations are presented to show the performance of the algorithms.
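
    As a rough illustration of a likelihood that tolerates both missed detections and clutter, the sketch below scores a batch of DOA measurements against a predicted bearing with a Gaussian-plus-uniform mixture. It is a simplified stand-in for the paper's template-based image likelihood; all parameters are invented.

        # Illustrative batch likelihood with missed and spurious detections:
        # each DOA measurement is either target-generated (Gaussian around the
        # predicted bearing) or clutter (uniform over [0, 360) degrees).
        import numpy as np

        def doa_batch_loglik(z_batch, theta_pred, p_det=0.9, sigma=2.0):
            """z_batch: DOA measurements (degrees); theta_pred: predicted DOA."""
            clutter = 1.0 / 360.0
            gauss = (np.exp(-0.5 * ((z_batch - theta_pred) / sigma) ** 2)
                     / (np.sqrt(2.0 * np.pi) * sigma))
            # Per-measurement mixture of the detected-target and clutter models.
            return np.sum(np.log(p_det * gauss + (1.0 - p_det) * clutter))

        # Two plausible target measurements near 40 degrees plus one outlier.
        print(doa_batch_loglik(np.array([41.0, 39.5, 120.0]), 40.0))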

    A computational multi-scale approach for brittle materials

    Materials of industrial interest often show a complex microstructure which directly influences their macroscopic material behavior. For simulations on the component scale, multi-scale methods may exploit this microstructural information. This work is devoted to a multi-scale approach for brittle materials. Based on a homogenization result for free discontinuity problems, we present FFT-based methods to compute the effective crack energy of heterogeneous materials with complex microstructures.
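
    The "FFT-based" part refers to evaluating periodic differential operators on a voxelized microstructure by pointwise multiplication in Fourier space, the building block such solvers iterate inside an optimization loop. Below is a minimal sketch of that building block only (a spectrally exact gradient component on a periodic 2D grid; the grid size and test field are arbitrary), not the effective crack energy computation itself.

        # Core building block of FFT-based solvers: applying a periodic
        # differential operator as a pointwise multiplication in Fourier space.
        import numpy as np

        n = 64                                           # voxels per edge, cell [0,1)^2
        X, Y = np.meshgrid(np.arange(n) / n, np.arange(n) / n, indexing="ij")
        u = np.sin(2 * np.pi * X) * np.cos(4 * np.pi * Y)   # periodic test field

        xi = 2j * np.pi * np.fft.fftfreq(n, d=1.0 / n)   # i * angular frequency
        du_dx = np.fft.ifft2(xi[:, None] * np.fft.fft2(u)).real
        exact = 2 * np.pi * np.cos(2 * np.pi * X) * np.cos(4 * np.pi * Y)
        print(np.max(np.abs(du_dx - exact)))             # ~1e-12: spectrally exact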

    Fault-tolerant software: dependability/performance trade-offs, concurrency and system support

    As the use of computer systems becomes more and more widespread in applications that demand high levels of dependability, these applications themselves are growing in complexity at a rapid rate, especially in areas that require concurrent and distributed computing. Such complex systems are very prone to faults and errors. No matter how rigorously fault avoidance and fault removal techniques are applied, software design faults often remain in systems when they are delivered to customers. In fact, residual software faults are becoming a significant underlying cause of system failures and lack of dependability. There is a tremendous need for systematic techniques for building dependable software, including fault tolerance techniques that ensure that software-based systems operate dependably even when potential faults are present. However, although there has been a large amount of research in the area of fault-tolerant software, existing techniques are not yet sufficiently mature as a practical engineering discipline for realistic applications. In particular, they are often inadequate when applied to highly concurrent and distributed software. This thesis develops new techniques for building fault-tolerant software, addresses the problem of achieving high levels of dependability in concurrent and distributed object systems, and studies system-level support for implementing dependable software. Two schemes are developed: the t/(n-1)-VP approach is aimed at increasing software reliability and controlling additional complexity, while the SCOP approach presents an adaptive way of dynamically adjusting software reliability and efficiency aspects. As a more general framework for constructing dependable concurrent and distributed software, the Coordinated Atomic (CA) Action scheme is examined thoroughly. Key properties of CA actions are formalized, a conceptual model and mechanisms for handling application-level exceptions are devised, and object-based diversity techniques are introduced to cope with potential software faults. These three schemes are evaluated analytically and validated by controlled experiments. System-level support is also addressed with a multi-level system architecture. An architectural pattern for implementing fault-tolerant objects is documented in detail to capture existing solutions and our previous experience. An industrial safety-critical application, the Fault-Tolerant Production Cell, is used as a case study to examine most of the concepts and techniques developed in this research.
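
    The schemes above build on design diversity: several independently developed variants compute the same result and an adjudicator masks a faulty one. The sketch below shows only this generic idea with a majority voter; the thesis's t/(n-1)-VP and SCOP schemes add redundancy management and adaptive, phased execution that are not modeled here.

        # Generic design-diversity sketch: independently developed variants and
        # a majority-vote adjudicator that masks a faulty variant.
        from collections import Counter

        def variant_a(x): return x * x
        def variant_b(x): return x ** 2
        def variant_c(x): return x * x + 1          # seeded design fault

        def vote(variants, x):
            results = [v(x) for v in variants]
            value, count = Counter(results).most_common(1)[0]
            if count <= len(variants) // 2:          # no majority among variants
                raise RuntimeError("adjudication failed: no majority result")
            return value

        print(vote([variant_a, variant_b, variant_c], 7))   # 49, fault masked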

    Reduced complexity adaptive filtering algorithms with applications to communications systems

    This thesis develops new adaptive filtering algorithms suitable for communications applications, with the aim of reducing the computational complexity of the implementation. Low computational complexity of the adaptive filtering algorithm can, for example, reduce the required power consumption of the implementation. A low power consumption is important in wireless applications, particularly at the mobile terminal side, where the physical size of the mobile terminal and long battery life are crucial. We focus on the implementation of two types of adaptive filters: linearly-constrained minimum-variance (LCMV) adaptive filters and conventional training-based adaptive filters. For LCMV adaptive filters, normalized data-reusing algorithms are proposed which can trade off convergence speed and computational complexity by varying the number of data-reuses in the coefficient update. Furthermore, we propose a transformation of the input signal to the LCMV adaptive filter which properly reduces the dimension of the coefficient update. It is shown that transforming the input signal using successive Householder transformations renders a particularly efficient implementation. The approach allows any unconstrained adaptation algorithm to be applied to linearly constrained problems. In addition, a family of algorithms is proposed using the framework of set-membership filtering (SMF). These algorithms combine a bounded error specification on the adaptive filter with the concept of data-reusing. The resulting algorithms have low average computational complexity because the coefficient update is not performed at each iteration. In addition, the adaptation algorithm can be adjusted to achieve a desired computational complexity by allowing a variable number of data-reuses for the filter update. Finally, we propose a framework combining sparse update in time with sparse update of filter coefficients. Such partial-update (PU) adaptive filters are suitable for applications where the required order of the adaptive filter conflicts with tight constraints on processing power.
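
    The set-membership idea is easiest to see in the simplest member of this family, the set-membership NLMS update: the coefficients move only when the output error leaves a prescribed bound, and then only just far enough to re-enter it. The sketch below is a textbook-style SM-NLMS in NumPy, not one of the thesis's data-reusing variants; the channel, bound, and noise level are invented.

        # Textbook-style set-membership NLMS: update only when |error| > gamma,
        # and then by the smallest step that brings it back onto the bound.
        import numpy as np

        def sm_nlms(x_buf, d, w, gamma=0.05, delta=1e-8):
            e = d - w @ x_buf
            if abs(e) <= gamma:                 # inside the error bound: skip update
                return w, False
            mu = 1.0 - gamma / abs(e)           # smallest step back onto the bound
            return w + mu * e * x_buf / (x_buf @ x_buf + delta), True

        # Identify an unknown 4-tap channel from noisy observations.
        rng = np.random.default_rng(3)
        h = np.array([1.0, -0.5, 0.25, 0.1])
        w = np.zeros(4)
        x = rng.normal(size=1000)
        updates = 0
        for n in range(4, 1000):
            x_buf = x[n - 4:n][::-1]            # most recent sample first
            d = h @ x_buf + 0.01 * rng.normal()
            w, updated = sm_nlms(x_buf, d, w)
            updates += updated
        print(np.round(w, 3), f"updates: {updates} of 996")

    The printed estimate approaches h while the coefficient update runs for only a fraction of the input samples, which is the source of the low average complexity.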

    Inference, Computation, and Games

    In this thesis, we use statistical inference and competitive games to design algorithms for computational mathematics. In the first part, comprising chapters two through six, we use ideas from Gaussian process statistics to obtain fast solvers for differential and integral equations. We begin by observing the equivalence of conditional (near-)independence of a Gaussian process and the (near-)sparsity of the Cholesky factors of its precision and covariance matrices. This implies the existence of a large class of dense matrices with almost sparse Cholesky factors, thereby greatly increasing the scope of application of sparse Cholesky factorization. Using an elimination ordering and sparsity pattern motivated by the screening effect in spatial statistics, we can compute approximate Cholesky factors of the covariance matrices of Gaussian processes admitting a screening effect in near-linear computational complexity. These include many popular smoothness priors such as the Matérn class of covariance functions. In the special case of Green's matrices of elliptic boundary value problems (with possibly unknown elliptic operators of arbitrarily high order, with possibly rough coefficients), we can use tools from numerical homogenization to prove the exponential accuracy of our method. This result improves the state of the art for solving general elliptic integral equations and provides the first proof of an exponential screening effect. We also derive a fast solver for elliptic partial differential equations, with accuracy-vs-complexity guarantees that improve upon the state of the art. Furthermore, the resulting solver is performant in practice, frequently beating established algebraic multigrid libraries such as AMGCL and Trilinos on a series of challenging problems in two and three dimensions. Finally, for any given covariance matrix, we obtain a closed-form expression for its optimal (in terms of Kullback-Leibler divergence) approximate inverse-Cholesky factorization subject to a sparsity constraint, recovering the Vecchia approximation and factorized sparse approximate inverses. Our method is highly robust, embarrassingly parallel, and further improves our asymptotic results on the solution of elliptic integral equations. We also provide a way to apply our techniques to sums of independent Gaussian processes, resolving a major limitation of existing methods based on the screening effect. As a result, we obtain fast algorithms for large-scale Gaussian process regression problems with possibly noisy measurements. In the second part of this thesis, comprising chapters seven through nine, we study continuous optimization through the lens of competitive games. In particular, we consider competitive optimization, where multiple agents attempt to minimize conflicting objectives. In the single-agent case, the updates of gradient descent are minimizers of quadratically regularized linearizations of the loss function. We propose to generalize this idea by using the Nash equilibria of quadratically regularized linearizations of the competitive game as updates ("linearize the game"). We provide fundamental reasons why the natural notion of linearization for competitive optimization problems is given by the multilinear (as opposed to linear) approximation of the agents' loss functions. The resulting algorithm, which we call competitive gradient descent (CGD), thus provides a natural generalization of gradient descent to competitive optimization. By using ideas from information geometry, we extend CGD to competitive mirror descent (CMD), which can be applied to a vast range of constrained competitive optimization problems. CGD and CMD resolve the cycling problem of simultaneous gradient descent and show promising results on problems arising in constrained optimization, robust control theory, and generative adversarial networks. Finally, we point out the GAN-dilemma, which refutes the common interpretation of GANs as approximate minimizers of a divergence obtained in the limit of a fully trained discriminator. Instead, we argue that GAN performance relies on the implicit competitive regularization (ICR) due to the simultaneous optimization of generator and discriminator, and support this hypothesis with results on low-dimensional model problems and GANs on CIFAR10.
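
    The cycling problem and its resolution can be seen on the smallest possible example. On the scalar bilinear zero-sum game f(x, y) = x * y, simultaneous gradient descent/ascent spirals away from the equilibrium at the origin, while the CGD update, the Nash equilibrium of the regularized local bilinear approximation, converges to it. A minimal sketch; the step size and iteration count are arbitrary.

        # Cycling of simultaneous gradient descent/ascent vs. convergence of
        # CGD on the scalar bilinear game f(x, y) = x * y.
        def simgd(x, y, eta=0.2):
            return x - eta * y, y + eta * x          # grad_x f = y, grad_y f = x

        def cgd(x, y, eta=0.2):
            # Closed-form zero-sum CGD update for f = x * y (D_xy f = 1):
            #   x+ = x - eta/(1 + eta^2) * (y + eta * x)
            #   y+ = y + eta/(1 + eta^2) * (x - eta * y)
            c = eta / (1.0 + eta ** 2)
            return x - c * (y + eta * x), y + c * (x - eta * y)

        x1 = y1 = x2 = y2 = 1.0
        for _ in range(300):
            x1, y1 = simgd(x1, y1)
            x2, y2 = cgd(x2, y2)
        print(f"SimGD: ({x1:.1f}, {y1:.1f})   CGD: ({x2:.1e}, {y2:.1e})")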