
    Preserving constraints in horizontal model transformations

    Graph rewriting is gaining credibility in the model transformation field, and tools are increasingly used to specify transformation activities. However, their use is often limited by special features of graph transformation approaches that may be unfamiliar to experts in the modeling domain. On the other hand, transformations for specific domains may require special constraints to be enforced on transformation results. Preserving such constraints by manually defining graph transformations can be a cumbersome and error-prone activity. We explore the problem of ensuring that possible violations of constraints following a transformation are repaired in a way coherent with the intended meaning of the transformation. In particular, we consider the use of transformation units within the DPO approach for intra-model transformations, where the modeling language is expressed via a type graph and graph conditions. We derive additional rules in a unit from a declarative rule expressing the principal objective of the transformation, so that the constraints set by the type graph and the graph conditions hold after the application of the unit. The approach is illustrated with reference to a diagrammatic reasoning system.
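    A minimal, hypothetical sketch of the repair idea described above (not the paper's DPO formalism or any tool's API): a transformation unit first applies the main rule and then applies derived repair rules until the graph conditions of the modeling language hold again. Graphs, rules, and conditions are plain Python stand-ins.

        # Illustrative sketch only (not the paper's DPO machinery): graphs are dicts,
        # rules and conditions are ordinary functions.  A rule returns a rewritten
        # graph, or None when it is not applicable.
        def apply_unit(graph, main_rule, repair_rules, conditions, max_steps=100):
            graph = main_rule(graph)                      # principal objective of the unit
            for _ in range(max_steps):
                if all(cond(graph) for cond in conditions):
                    return graph                          # constraints restored
                for rule in repair_rules:
                    repaired = rule(graph)
                    if repaired is not None:              # repair rule was applicable
                        graph = repaired
                        break
                else:
                    raise RuntimeError("no repair rule applicable; unit fails")
            raise RuntimeError("repair did not terminate within max_steps")

        # Toy constraint: every node of type "Task" must have an outgoing edge to an owner.
        g = {"nodes": {"t1": "Task"}, "edges": set()}
        add_task = lambda g: {"nodes": {**g["nodes"], "t2": "Task"}, "edges": set(g["edges"])}
        owner_ok = lambda g: all(any(e[0] == n for e in g["edges"])
                                 for n, t in g["nodes"].items() if t == "Task")
        def attach_owner(g):
            for n, t in g["nodes"].items():
                if t == "Task" and not any(e[0] == n for e in g["edges"]):
                    owner = "o_" + n
                    return {"nodes": {**g["nodes"], owner: "Owner"},
                            "edges": set(g["edges"]) | {(n, owner)}}
            return None

        print(apply_unit(g, add_task, [attach_owner], [owner_ok]))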

    Yang-Mills Theory on a Cylinder Coupled to Point Particles

    We study a model of quantum Yang-Mills theory with a finite number of gauge invariant degrees of freedom. The gauge field has only a finite number of degrees of freedom since we assume that space-time is a two-dimensional cylinder. We couple the gauge field to matter, modeled by either one or two nonrelativistic point particles. These problems can be solved without any gauge fixing, by generalizing the canonical quantization methods of Ref. [rajeev] to the case including matter. For this, we make use of the geometry of the space of connections, which has the structure of a principal fiber bundle with an infinite-dimensional fiber. We are able to reduce both problems to finite-dimensional, exactly solvable quantum mechanics problems. In the case of one particle, we find that the ground state energy diverges in the limit of infinite radius of space, consistent with confinement. In the case of two particles, this does not happen if they can form a color singlet bound state ('meson').
    Comment: 37 pages, UR-1327 ER-40685-77

    Variational methods, multisymplectic geometry and continuum mechanics

    This paper presents a variational and multisymplectic formulation of both compressible and incompressible models of continuum mechanics on general Riemannian manifolds. A general formalism is developed for non-relativistic first-order multisymplectic field theories with constraints, such as the incompressibility constraint. The results obtained in this paper set the stage for multisymplectic reduction and for the further development of Veselov-type multisymplectic discretizations and numerical algorithms. The latter will be the subject of a companion paper.
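    For orientation only, the standard Cartan form of a first-order Lagrangian field theory, the basic object such multisymplectic formulations build on, can be written as follows; this is a textbook expression, not a formula quoted from the paper.

        \Theta_L \;=\; \frac{\partial L}{\partial v^{a}_{\mu}}\, dy^{a} \wedge d^{n}x_{\mu}
        \;+\; \Bigl( L - \frac{\partial L}{\partial v^{a}_{\mu}}\, v^{a}_{\mu} \Bigr)\, d^{\,n+1}x,
        \qquad \Omega_L = -\, d\Theta_L,

    where (x^{\mu}, y^{a}, v^{a}_{\mu}) are coordinates on the first jet bundle (base point, field value, first derivatives), the base space X is (n+1)-dimensional, and d^{n}x_{\mu} = \partial_{\mu} \lrcorner\, d^{\,n+1}x.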

    Wide baseline stereo matching with convex bounded-distortion constraints

    Finding correspondences in wide baseline setups is a challenging problem. Existing approaches have focused largely on developing better feature descriptors for correspondence and on accurate recovery of epipolar line constraints. This paper focuses on the challenging problem of finding correspondences once approximate epipolar constraints are given. We introduce a novel method that integrates a deformation model. Specifically, we formulate the problem as finding the largest number of corresponding points related by a bounded distortion map that obeys the given epipolar constraints. We show that, while the set of bounded distortion maps is not convex, the subset of maps that obey the epipolar line constraints is convex, allowing us to introduce an efficient algorithm for matching. We further utilize a robust cost function for matching and employ majorization-minimization for its optimization. Our experiments indicate that our method finds significantly more accurate maps than existing approaches.
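    As a hedged illustration of the setting where approximate epipolar constraints are already given (this is not the paper's bounded-distortion matching algorithm), candidate correspondences can first be pruned by their distance to the epipolar lines induced by an assumed fundamental matrix F:

        # Illustrative sketch, not the paper's method: keep only candidate matches
        # whose points lie close to each other's epipolar lines under F.
        import numpy as np

        def epipolar_inliers(pts1, pts2, F, tol=2.0):
            # pts1, pts2: (N, 2) candidate correspondences; F: 3x3 fundamental matrix.
            ones = np.ones((len(pts1), 1))
            x1 = np.hstack([pts1, ones])                   # homogeneous coordinates
            x2 = np.hstack([pts2, ones])
            l2 = x1 @ F.T                                  # epipolar lines in image 2
            l1 = x2 @ F                                    # epipolar lines in image 1
            num = np.abs(np.sum(x2 * l2, axis=1))          # |x2^T F x1|
            d2 = num / np.linalg.norm(l2[:, :2], axis=1)   # point-to-line distance, image 2
            d1 = num / np.linalg.norm(l1[:, :2], axis=1)   # point-to-line distance, image 1
            return (d1 < tol) & (d2 < tol)

        # Hypothetical usage: F of a rectified (purely horizontal-translation) stereo pair,
        # for which corresponding points must share the same row.
        F = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, -1.0], [0.0, 1.0, 0.0]])
        pts1 = np.array([[100.0, 120.0], [300.0, 240.0]])
        pts2 = np.array([[110.0, 118.0], [500.0, 400.0]])
        print(epipolar_inliers(pts1, pts2, F, tol=3.0))    # [ True False]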

    A Graph Rewriting Approach for Transformational Design of Digital Systems

    Transformational design integrates design and verification. It combines “correctness by construction” and design creativity through the use of pre-proven behaviour-preserving transformations as design steps. The formal aspects of this methodology are hidden in the transformations. A constraint is the availability of a design representation with a compositional formal semantics. Graph representations are useful design representations because of their visualisation of design information. In this paper, graph rewriting theory, as developed over the last twenty years in mathematics, is shown to be a useful basis for a formal framework for transformational design. The semantic aspects of graphs, which are not part of graph rewriting theory, are included through the use of attributed graphs. The attribute algebra used, table algebra, is a relation algebra derived from database theory. The combination of graph rewriting, table algebra and transformational design is new.

    DROP: Dimensionality Reduction Optimization for Time Series

    Dimensionality reduction is a critical step in scaling machine learning pipelines. Principal component analysis (PCA) is a standard tool for dimensionality reduction, but performing PCA over a full dataset can be prohibitively expensive. As a result, theoretical work has studied the effectiveness of iterative, stochastic PCA methods that operate over data samples. However, termination conditions for stochastic PCA either execute for a predetermined number of iterations, or until convergence of the solution, frequently sampling too many or too few datapoints for end-to-end runtime improvements. We show how accounting for downstream analytics operations during DR via PCA allows stochastic methods to efficiently terminate after operating over small (e.g., 1%) subsamples of input data, reducing whole-workload runtime. Leveraging this, we propose DROP, a DR optimizer that enables speedups of up to 5x over Singular-Value-Decomposition-based PCA techniques, and exceeds conventional approaches like FFT and PAA by up to 16x in end-to-end workloads.
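    A minimal sketch of the underlying idea, terminating stochastic PCA with a data-dependent criterion rather than a fixed iteration count (an assumed workflow, not the actual DROP optimizer): run Oja-style updates over small samples and stop once a downstream proxy, here reconstruction error on a holdout sample, stops improving.

        # Illustrative sketch, not the DROP system: stochastic PCA over small samples,
        # stopped when holdout reconstruction error stops improving.
        import numpy as np

        def sampled_pca(X, k, batch=256, lr=0.1, tol=1e-4, max_iters=1000, seed=0):
            rng = np.random.default_rng(seed)
            n, d = X.shape
            W, _ = np.linalg.qr(rng.standard_normal((d, k)))       # random orthonormal start
            holdout = X[rng.choice(n, size=min(n, 1024), replace=False)]
            prev_err = np.inf
            for _ in range(max_iters):
                S = X[rng.choice(n, size=min(n, batch), replace=False)]
                G = S.T @ (S @ W) / len(S)                          # sample covariance times W
                W, _ = np.linalg.qr(W + lr * G)                     # Oja-style step, re-orthonormalized
                err = np.mean((holdout - holdout @ W @ W.T) ** 2)   # downstream proxy: reconstruction
                if prev_err - err < tol:                            # improvement too small: stop early
                    break
                prev_err = err
            return W

        # Example usage on synthetic data with a few dominant directions.
        rng = np.random.default_rng(1)
        X = 0.1 * rng.standard_normal((5000, 50))
        X[:, :5] += 5.0 * rng.standard_normal((5000, 5))
        W = sampled_pca(X - X.mean(axis=0), k=5)
        print(W.shape)   # (50, 5) projection matrix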