
    Program Synthesis and Linear Operator Semantics

    For deterministic and probabilistic programs we investigate the problem of program synthesis and program optimisation (with respect to non-functional properties) in the general setting of global optimisation. This approach is based on the representation of the semantics of programs and program fragments in terms of linear operators, i.e. as matrices. We exploit in particular the fact that we can automatically generate the representation of the semantics of elementary blocks. These can then be used to compositionally assemble the semantics of a whole program, i.e. the generator of the corresponding Discrete Time Markov Chain (DTMC). We also utilise a generalised version of Abstract Interpretation, suitable for this linear-algebraic or functional-analytical framework, in order to formulate semantical constraints (invariants) and optimisation objectives (for example, performance requirements). Comment: In Proceedings SYNT 2014, arXiv:1407.493
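    To make the linear-operator view concrete, here is a minimal Python/NumPy sketch (not the authors' tool): elementary blocks over a small, made-up finite state space are given matrix semantics, sequential composition becomes matrix multiplication, probabilistic choice becomes a convex combination, and the composed result is the transition matrix of the induced DTMC. The state space, updates, and probabilities below are toy assumptions.

```python
# Toy illustration of compositional matrix semantics for a probabilistic program.
import numpy as np

states = [0, 1, 2, 3]          # assumed toy state space: the value of one variable x
n = len(states)

def assign(f):
    """Matrix semantics of the deterministic update x := f(x) (mod 4)."""
    M = np.zeros((n, n))
    for s in states:
        M[s, f(s) % n] = 1.0
    return M

def choice(p, M1, M2):
    """Probabilistic choice: behave like M1 with probability p, else like M2."""
    return p * M1 + (1 - p) * M2

def seq(*Ms):
    """Sequential composition of program fragments."""
    T = np.eye(n)
    for M in Ms:
        T = T @ M
    return T

inc = assign(lambda x: x + 1)           # x := x + 1
reset = assign(lambda x: 0)             # x := 0
body = choice(0.7, inc, reset)          # with prob 0.7 increment, else reset

T = seq(body, body)                     # two iterations of the body
assert np.allclose(T.sum(axis=1), 1.0)  # rows sum to 1: a valid DTMC transition matrix
print(T)
```

    Invariants and optimisation objectives would then become constraints on (functions of) such matrices, which is where a generalised Abstract Interpretation and global optimisation enter in the paper's setting.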

    Special Session on Industry 4.0

    No abstract available

    Tensor Computation: A New Framework for High-Dimensional Problems in EDA

    Many critical EDA problems suffer from the curse of dimensionality, i.e. the very fast-scaling computational burden produced by a large number of parameters and/or unknown variables. This phenomenon may be caused by multiple spatial or temporal factors (e.g. 3-D field-solver discretizations and multi-rate circuit simulation), nonlinearity of devices and circuits, a large number of design or optimization parameters (e.g. full-chip routing/placement and circuit sizing), or extensive process variations (e.g. variability/reliability analysis and design for manufacturability). The computational challenges generated by such high-dimensional problems are generally hard to handle efficiently with traditional EDA core algorithms based on matrix and vector computation. This paper presents "tensor computation" as an alternative general framework for the development of efficient EDA algorithms and tools. A tensor is a high-dimensional generalization of a matrix and a vector, and is a natural choice for both storing and efficiently solving high-dimensional EDA problems. This paper gives a basic tutorial on tensors, demonstrates some recent examples of EDA applications (e.g., nonlinear circuit modeling and high-dimensional uncertainty quantification), and suggests further open EDA problems where the use of tensor computation could be advantageous. Comment: 14 figures. Accepted by IEEE Trans. CAD of Integrated Circuits and Systems
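    As a rough illustration of why low-rank tensor representations help against the curse of dimensionality (a sketch with assumed toy sizes, not taken from the paper): a d-way array with n points per axis stores n^d entries, while a rank-r CP (canonical polyadic) factorization stores only d·n·r numbers and can still evaluate individual entries.

```python
# Dense 3-way tensor vs. its rank-r CP factors: same information, far less storage.
import numpy as np

n, d, r = 20, 3, 2                              # assumed toy sizes: 3-way tensor, rank 2
rng = np.random.default_rng(0)

# Factor matrices of a rank-r tensor: T[i,j,k] = sum_s A[i,s] * B[j,s] * C[k,s]
A, B, C = (rng.standard_normal((n, r)) for _ in range(d))
T_full = np.einsum('is,js,ks->ijk', A, B, C)    # dense reconstruction: n**3 = 8000 entries

full_storage = T_full.size                      # 8000 numbers
cp_storage = sum(f.size for f in (A, B, C))     # d * n * r = 120 numbers
print(full_storage, cp_storage)

# A single entry never requires forming the dense tensor:
i, j, k = 3, 7, 11
entry = np.sum(A[i] * B[j] * C[k])
assert np.isclose(entry, T_full[i, j, k])
```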

    Driven by Compression Progress: A Simple Principle Explains Essential Aspects of Subjective Beauty, Novelty, Surprise, Interestingness, Attention, Curiosity, Creativity, Art, Science, Music, Jokes

    I argue that data becomes temporarily interesting by itself to some self-improving, but computationally limited, subjective observer once he learns to predict or compress the data in a better way, thus making it subjectively simpler and more beautiful. Curiosity is the desire to create or discover more non-random, non-arbitrary, regular data that is novel and surprising not in the traditional sense of Boltzmann and Shannon but in the sense that it allows for compression progress because its regularity was not yet known. This drive maximizes interestingness, the first derivative of subjective beauty or compressibility, that is, the steepness of the learning curve. It motivates exploring infants, pure mathematicians, composers, artists, dancers, comedians, yourself, and (since 1990) artificial systems. Comment: 35 pages, 3 figures, based on KES 2008 keynote and ALT 2007 / DS 2007 joint invited lecture
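    One hedged way to read "interestingness as the first derivative of compressibility" in code: reward new data by how many bytes an adaptive compressor saves on it once it can exploit regularities learned from past data. The toy below is my illustration, not the paper's algorithm; zlib with a preset dictionary is a crude stand-in for the observer's learned model, and all data is made up.

```python
# Compression progress as a toy intrinsic reward signal.
import zlib

def code_length(data: bytes, model: bytes = b"") -> int:
    """Compressed length of `data`, optionally using past data as a zlib dictionary."""
    c = zlib.compressobj(level=9, zdict=model) if model else zlib.compressobj(level=9)
    return len(c.compress(data) + c.flush())

past = b"the quick brown fox jumps over the lazy dog. " * 40   # history already seen
regular_new = b"the lazy dog jumps over the quick brown fox. " * 2  # novel but learnable
random_new = bytes(range(256)) * 2                              # no regularity the model can exploit

def compression_progress(new: bytes, history: bytes) -> int:
    # Bytes saved on the new data once the regularities of the history are exploited.
    return code_length(new) - code_length(new, model=history)

print(compression_progress(regular_new, past))  # positive: interesting
print(compression_progress(random_new, past))   # roughly zero: boring
```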

    Robust H∞ control for networked systems with random packet losses

    In this paper, the robust H∞ control problem is considered for a class of networked systems with random communication packet losses. Because of the limited bandwidth of the channels, such random packet losses could occur, simultaneously, in the communication channels from the sensor to the controller and from the controller to the actuator. The random packet loss is assumed to obey a Bernoulli random binary distribution, and the parameter uncertainties are norm-bounded and enter into both the system and output matrices. In the presence of random packet losses, an observer-based feedback controller is designed to robustly exponentially stabilize the networked system in the mean-square sense and to achieve the prescribed H∞ disturbance-rejection attenuation level. Both the stability-analysis and controller-synthesis problems are thoroughly investigated. It is shown that the controller-design problem under consideration is solvable if certain linear matrix inequalities (LMIs) are feasible. A simulation example is exploited to demonstrate the effectiveness of the proposed LMI approach.
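    For flavour, here is a much simplified LMI feasibility check in Python/CVXPY, not the LMIs derived in the paper: state feedback instead of an observer, no parameter uncertainty, no H∞ level, and a single Bernoulli loss channel. If the LMI is feasible, the matrix P certifies mean-square stability of this toy closed loop for the chosen packet-arrival probability. All numbers are made up.

```python
# Mean-square stability of x_{k+1} = (A + g_k*B*K) x_k with g_k ~ Bernoulli(p):
# feasible if there is P > 0 with  p*(A+BK)' P (A+BK) + (1-p)*A' P A - P < 0,
# which is an LMI in P once the gain K is fixed.
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
K = np.array([[-4.0, -6.0]])     # assumed given stabilizing gain
p = 0.8                          # probability the control packet gets through

Acl = A + B @ K                  # dynamics when the packet arrives
n = A.shape[0]
eps = 1e-6

P = cp.Variable((n, n), symmetric=True)
lmi = p * Acl.T @ P @ Acl + (1 - p) * A.T @ P @ A - P
constraints = [P >> eps * np.eye(n), lmi << -eps * np.eye(n)]

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print("mean-square stable for p = %.2f:" % p, prob.status == cp.OPTIMAL)
```

    The paper's full LMIs additionally encode the norm-bounded uncertainties, the observer-based controller structure, losses on both channels, and the prescribed H∞ attenuation level.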

    Probabilistic micromechanics and macromechanics of polymer matrix composites

    A probabilistic evaluation of an eight-ply graphite-epoxy quasi-isotropic laminate was completed using the Integrated Composite Analyzer (ICAN) in conjunction with Monte Carlo simulation and Fast Probability Integration (FPI) techniques. Probabilistic input included fiber and matrix properties, fiber misalignment, fiber volume ratio, void volume ratio, ply thickness, and ply layup angle. Cumulative distribution functions (CDFs) for select laminate properties are given. To reduce the number of simulations, an FPI technique was used to generate CDFs for the select properties in the absence of fiber misalignment. These CDFs were compared to a second Monte Carlo simulation done without fiber misalignment effects. It was found that FPI requires fewer simulations than Monte Carlo techniques to obtain the cumulative distribution functions. Furthermore, FPI provides valuable information regarding the sensitivities of composite properties to the constituent properties, fiber volume ratio, and void volume ratio.
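    A toy Monte Carlo in this spirit is sketched below. ICAN and FPI are not reproduced; a simple rule of mixtures stands in for the micromechanics, and all distributions and values are invented, purely to show how sampled constituent properties turn into an empirical CDF of a laminate property.

```python
# Monte Carlo propagation of constituent-property scatter to a ply modulus CDF.
import numpy as np

rng = np.random.default_rng(1)
N = 20_000

Ef = rng.normal(230e9, 230e9 * 0.05, N)    # fiber modulus [Pa], 5% scatter (assumed)
Em = rng.normal(3.5e9, 3.5e9 * 0.05, N)    # matrix modulus [Pa] (assumed)
vf = rng.normal(0.60, 0.02, N)             # fiber volume ratio (assumed)
vv = np.abs(rng.normal(0.02, 0.01, N))     # void volume ratio (assumed)

# Longitudinal modulus by rule of mixtures, degraded by void content.
E11 = (Ef * vf + Em * (1.0 - vf)) * (1.0 - vv)

# Empirical CDF: sort the samples; the CDF value at the i-th sample is i/N.
E11_sorted = np.sort(E11)
cdf = np.arange(1, N + 1) / N

for q in (0.05, 0.50, 0.95):
    print(f"{q:.0%} quantile of E11: {np.quantile(E11, q) / 1e9:.1f} GPa")
```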