
    A design method for parallel programs


    Feasible approach for the computer implementation of parametric visual calculating

    Thesis (S.M. in Architecture Studies)--Massachusetts Institute of Technology, Dept. of Architecture, 2013. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 62-66).
    Computational design tools in architecture currently fall into two broad categories: tools for representation and tools for generative design, including scripting. However, both categories address only the relatively methodical aspects of designing, and do little to support the design freedom and serendipitous creativity that, for example, iterative sketching affords. Calculating with visual rules provides an explicit notation for such artistic processes of seeing and drawing. Shape grammars have validated this approach by formalizing many existing designs and styles as visual rule-sets. In this way, visual rules store and transfer design knowledge. Visual calculating in a more general sense supports creativity by allowing a designer to apply any rule she wants, and to capriciously see and re-see the design. In contrast to other explicit design methodologies, visual calculating defines a decomposition into parts only after the design is calculated, thus allowing formalization without impeding design freedom. Located at the intersection of design and computation, the computer implementation of visual calculating presents an opportunity for more designerly computational design tools. Since parametric visual calculating affords the largest set of design possibilities, its computer implementation will allow flexible, rule-based design tools that intelligently combine design freedom with computational processing power. In order to compute with shapes, a symbolic representation for shapes is necessary. This thesis examines several symbolic representations for shapes, with especially close attention given to graph-based representations, since graphs are well suited to representing parametric shapes. Based on this analysis, the thesis proposes a new graph for parametric shapes that is clearer, more compact and closer to the original formulation of visual calculating than existing approaches, while also strongly supporting design freedom. The thesis provides algorithms and heuristics to construct this "inverted" graph, for both connected and unconnected shapes.
    By Thomas Alois Wortmann. S.M. in Architecture Studies.
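
    To make the graph-based representation concrete: one very reduced way to encode a shape made of line segments as a graph is to take the segments as nodes and record an edge wherever two segments meet. The Python sketch below is a generic intersection graph with illustrative names, not the thesis's "inverted" graph for parametric shapes.

        from itertools import combinations

        # A shape given as 2D line segments ((x1, y1), (x2, y2)); names are illustrative.
        shape = [((0, 0), (2, 0)), ((0, 0), (0, 2)), ((0, 2), (2, 0))]

        def intersection(s1, s2):
            # Toy segment intersection: proper crossings and shared endpoints only;
            # collinear overlaps (which matter for maximal lines) are not handled.
            (x1, y1), (x2, y2) = s1
            (x3, y3), (x4, y4) = s2
            den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
            if den == 0:
                return None
            t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
            u = ((x1 - x3) * (y1 - y2) - (y1 - y3) * (x1 - x2)) / den
            if 0 <= t <= 1 and 0 <= u <= 1:
                return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
            return None

        # Graph: nodes are segments, an edge stores where two segments meet.
        graph = {i: {} for i in range(len(shape))}
        for i, j in combinations(range(len(shape)), 2):
            p = intersection(shape[i], shape[j])
            if p is not None:
                graph[i][j] = graph[j][i] = p
        print(graph)   # {0: {1: (0.0, 0.0), 2: (2.0, 0.0)}, 1: {...}, 2: {...}}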

    Non-Markovian Monte Carlo Algorithm for the Constrained Markovian Evolution in QCD

    We revisit the challenging problem of finding an efficient Monte Carlo (MC) algorithm that solves the constrained evolution equations for initial-state QCD radiation. The type of the parton (quark, gluon) and the energy fraction x of the parton exiting the emission chain (entering the hard process) are predefined, i.e. constrained throughout the evolution. Such a constraint is mandatory for any realistic MC for the initial-state QCD parton shower. We add one important condition: the MC algorithm must not require a priori knowledge of the exact numerical solutions of the evolution equations, as is the case in the popular "Markovian MC for backward evolution". Our aim is to find at least one solution of this problem that would function in practice. Finding such a solution seems to be well within the reach of currently available computer CPUs and the sophistication of modern MC techniques. We describe in this work the first example of an efficient solution of this kind. Its numerical implementation is still restricted to pure gluon-strahlung. As expected, it is not in the class of so-called Markovian MCs; for this reason we refer to it as belonging to a class of non-Markovian MCs. We show that the numerical results of our new MC algorithm agree very well (to 0.2%) with those of our other, unconstrained Markovian MC program and with the non-MC program QCDnum16. This provides a proof of existence of the new class of MC techniques, to be exploited in precision perturbative QCD calculations for the Large Hadron Collider.
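
    To illustrate what "constrained" means here (a toy sketch only, not the paper's non-Markovian algorithm): a forward Markovian shower only learns the final energy fraction x after the chain is generated, so the naive way to impose the constraint is rejection, which is exactly the inefficiency a dedicated constrained algorithm is meant to avoid. A minimal Python toy with a made-up 1/z splitting density:

        import numpy as np

        rng = np.random.default_rng(0)

        def forward_chain(t_max=5.0, eps=1e-3):
            # Unconstrained (Markovian) toy shower: emissions at unit rate in the
            # evolution variable t, each multiplying x by z drawn from p(z) ~ 1/z on [eps, 1].
            x, t = 1.0, 0.0
            while True:
                t += rng.exponential(1.0)
                if t > t_max:
                    return x
                x *= eps ** rng.uniform()      # inverse-CDF sample of p(z) ~ 1/z

        def constrained_by_rejection(x_lo, x_hi, n_accept=200):
            # Naive constrained MC: keep only chains whose final x lands in [x_lo, x_hi].
            # Correct but wasteful; this waste is what a constrained algorithm avoids.
            accepted, tried = [], 0
            while len(accepted) < n_accept:
                tried += 1
                x = forward_chain()
                if x_lo <= x <= x_hi:
                    accepted.append(x)
            return np.array(accepted), n_accept / tried

        samples, acceptance = constrained_by_rejection(0.01, 0.02)
        print(acceptance)    # small acceptance rate, i.e. many wasted chains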

    An Arbitrary Curvilinear Coordinate Method for Particle-In-Cell Modeling

    A new approach to the kinetic simulation of plasmas in complex geometries, based on the Particle-in-Cell (PIC) simulation method, is explored. In the two-dimensional (2d) electrostatic version of our method, called the Arbitrary Curvilinear Coordinate PIC (ACC-PIC) method, all essential PIC operations are carried out in 2d on a uniform grid on the unit-square logical domain and mapped to a nonuniform boundary-fitted grid on the physical domain. As the resulting logical-grid equations of motion are not separable, we have developed an extension of the semi-implicit Modified Leapfrog (ML) integration technique to preserve the symplectic nature of the logical-grid particle mover. A generalized, curvilinear-coordinate formulation of Poisson's equation to solve for the electrostatic fields on the uniform logical grid is also developed. In our formulation, the plasma charge density is computed on the logical grid from the particles' positions on the logical domain. That is, the plasma particles are weighted to the uniform logical grid, and the self-consistent mean electrostatic fields obtained from the solution of the logical-grid Poisson equation are interpolated to the particle positions on the logical grid. This process eliminates the complexity associated with weighting and interpolation on the nonuniform physical grid and allows us to run the PIC method on arbitrary boundary-fitted meshes.
    Comment: Submitted to Computational Science & Discovery, December 201
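
    As a small illustration of the logical-grid particle weighting described above, here is a generic cloud-in-cell deposition on the unit square (a sketch, not the ACC-PIC code itself; the mapping to the physical boundary-fitted grid and the curvilinear Poisson solve are not shown):

        import numpy as np

        def deposit_cic(positions, charges, nx, ny):
            # Cloud-in-cell (bilinear) charge deposition on a node-centred uniform grid
            # covering the unit-square logical domain, as in a standard 2d PIC step.
            rho = np.zeros((nx + 1, ny + 1))
            dx, dy = 1.0 / nx, 1.0 / ny
            for (x, y), q in zip(positions, charges):
                i, j = min(int(x / dx), nx - 1), min(int(y / dy), ny - 1)
                fx, fy = x / dx - i, y / dy - j          # fractional offsets in the cell
                rho[i, j]         += q * (1 - fx) * (1 - fy)
                rho[i + 1, j]     += q * fx * (1 - fy)
                rho[i, j + 1]     += q * (1 - fx) * fy
                rho[i + 1, j + 1] += q * fx * fy
            return rho / (dx * dy)                       # return charge density, not raw charge

        rng = np.random.default_rng(0)
        rho = deposit_cic(rng.uniform(size=(10000, 2)), np.ones(10000), 32, 32)

    Field interpolation back to the particles is the transpose of the same bilinear weighting, which is what keeps the scheme self-consistent on the logical grid.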

    A numerical algorithm for $L_2$ semi-discrete optimal transport in 3D

    This paper introduces a numerical algorithm to compute the $L_2$ optimal transport map between two measures $\mu$ and $\nu$, where $\mu$ derives from a density $\rho$ defined as a piecewise linear function (supported by a tetrahedral mesh), and where $\nu$ is a sum of Dirac masses. I first give an elementary presentation of some known results on optimal transport and then observe a relation with another problem (optimal sampling). This relation gives simple arguments to study the objective functions that characterize both problems. I then propose a practical algorithm to compute the optimal transport map between a piecewise linear density and a sum of Dirac masses in 3D. In this semi-discrete setting, Aurenhammer et al. [8th Symposium on Computational Geometry conf. proc., ACM (1992)] showed that the optimal transport map is determined by the weights of a power diagram. The optimal weights are computed by minimizing a convex objective function with a quasi-Newton method. To evaluate the value and gradient of this objective function, I propose an efficient and robust algorithm that computes, at each iteration, the intersection between a power diagram and the tetrahedral mesh that defines the measure $\mu$. The numerical algorithm is tested and evaluated on several datasets, with up to hundreds of thousands of tetrahedra and one million Dirac masses.
    Comment: 23 pages, 14 figures
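
    The structure of the optimization can be sketched in a few lines if the exact power-diagram/tetrahedral-mesh intersection is replaced by a Monte Carlo surrogate (an assumption made purely for illustration, with uniform samples standing in for the density):

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        X = rng.uniform(size=(20000, 3))       # fixed samples standing in for the density rho
        Y = rng.uniform(size=(50, 3))          # Dirac locations of nu
        nu = np.full(len(Y), 1.0 / len(Y))     # Dirac masses

        def neg_dual(psi):
            # Concave Kantorovich dual of semi-discrete OT with cost |x - y|^2:
            #   g(psi) = sum_i nu_i psi_i + E_mu[ min_i (|x - y_i|^2 - psi_i) ],
            #   dg/dpsi_i = nu_i - mu(Lag_i(psi)), where Lag_i is Laguerre (power) cell i.
            d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1) - psi
            idx = d2.argmin(axis=1)                                   # Laguerre-cell assignment
            value = nu @ psi + d2.min(axis=1).mean()
            cell_mass = np.bincount(idx, minlength=len(Y)) / len(X)   # estimate of mu(Lag_i)
            return -value, -(nu - cell_mass)                          # negate: minimize -g

        res = minimize(neg_dual, np.zeros(len(Y)), jac=True, method="L-BFGS-B")
        psi = res.x   # power-diagram weights; each Laguerre cell now carries mass ~ nu_i

    In the paper, the sample-based cell masses above are replaced by the exact mass of each Laguerre cell, obtained by intersecting the power diagram with the tetrahedral mesh, which makes the gradient used by the quasi-Newton method exact.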

    Risk-sensitive Inverse Reinforcement Learning via Semi- and Non-Parametric Methods

    The literature on Inverse Reinforcement Learning (IRL) typically assumes that humans take actions in order to minimize the expected value of a cost function, i.e., that humans are risk neutral. Yet, in practice, humans are often far from risk neutral. To fill this gap, the objective of this paper is to devise a framework for risk-sensitive IRL in order to explicitly account for a human's risk sensitivity. To this end, we propose a flexible class of models based on coherent risk measures, which allow us to capture an entire spectrum of risk preferences from risk-neutral to worst-case. We propose efficient non-parametric algorithms based on linear programming, and semi-parametric algorithms based on maximum likelihood, for inferring a human's underlying risk measure and cost function in a rich class of static and dynamic decision-making settings. The resulting approach is demonstrated on a simulated driving game with ten human participants. Our method is able to infer and mimic a wide range of qualitatively different driving styles, from highly risk-averse to risk-neutral, in a data-efficient manner. Moreover, comparisons of the Risk-Sensitive (RS) IRL approach with a risk-neutral model show that the RS-IRL framework more accurately captures observed participant behavior, both qualitatively and quantitatively, especially in scenarios where catastrophic outcomes such as collisions can occur.
    Comment: Submitted to the International Journal of Robotics Research. Revision 1: (i) clarified minor technical points; (ii) revised the proof of Theorem 3 to hold under weaker assumptions; (iii) added figures and expanded discussions to improve readability.
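
    For a concrete sense of the risk measures involved: Conditional Value-at-Risk (CVaR) is one standard coherent risk measure that sweeps from the risk-neutral mean toward the worst case as its level increases. A minimal empirical estimate via the Rockafellar-Uryasev representation (illustrative only; the paper works with a general class of coherent risk measures, not this particular estimator):

        import numpy as np

        def cvar(losses, alpha=0.9):
            # CVaR_alpha(Z) = min_t { t + E[max(Z - t, 0)] / (1 - alpha) }; the minimum
            # is attained at the alpha-quantile (VaR), so plug that in directly.
            z = np.asarray(losses, dtype=float)
            t = np.quantile(z, alpha)
            return t + np.mean(np.maximum(z - t, 0.0)) / (1.0 - alpha)

        rng = np.random.default_rng(1)
        costs = rng.lognormal(sigma=1.0, size=100_000)   # heavy right tail
        print(np.mean(costs))        # risk-neutral expected cost
        print(cvar(costs, 0.95))     # tail-sensitive cost, much larger for this heavy tail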

    Notions of optimal transport theory and how to implement them on a computer

    This article gives an introduction to optimal transport, a mathematical theory that makes it possible to measure distances between functions (or distances between more general objects), to interpolate between objects, or to enforce mass/volume conservation in certain computational physics simulations. Optimal transport is a rich scientific domain with active research communities, both on its theoretical aspects and on more applied considerations such as geometry processing and machine learning. This article aims to explain the main principles behind the theory of optimal transport, to introduce the notions involved, and, more importantly, to show how they relate, so that the reader can grasp an intuition of the elegant theory that structures them. We then consider a specific setting, called semi-discrete, where a continuous function is transported to a discrete sum of Dirac masses. Studying this specific setting naturally leads to an efficient computational algorithm that uses classical notions of computational geometry, such as a generalization of Voronoi diagrams called Laguerre diagrams.
    Comment: 32 pages, 17 figures
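
    The Laguerre-diagram idea mentioned at the end can be shown with a brute-force assignment (a sketch, just to illustrate how per-site weights generalize Voronoi cells):

        import numpy as np

        def laguerre_assign(points, sites, weights):
            # Laguerre (power) cell of a point x: argmin_i |x - y_i|^2 - w_i.
            # With all weights equal this is the ordinary Voronoi assignment;
            # increasing w_i enlarges cell i so that it can capture more mass.
            d2 = ((points[:, None, :] - sites[None, :, :]) ** 2).sum(-1)
            return (d2 - weights).argmin(axis=1)

        rng = np.random.default_rng(0)
        x = rng.uniform(size=(100000, 2))
        sites = rng.uniform(size=(5, 2))
        for w in (np.zeros(5), np.array([0.2, 0.0, 0.0, 0.0, 0.0])):
            cells = laguerre_assign(x, sites, w)
            print(np.bincount(cells, minlength=5) / len(x))   # cell areas; cell 0 grows with w[0]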

    Hyperuniformity of Quasicrystals

    Hyperuniform systems, which include crystals, quasicrystals and special disordered systems, have attracted considerable recent attention, but rigorous analyses of the hyperuniformity of quasicrystals have been lacking because the support of the spectral intensity is dense and discontinuous. We employ the integrated spectral intensity, $Z(k)$, to quantitatively characterize the hyperuniformity of quasicrystalline point sets generated by projection methods. The scaling of $Z(k)$ as $k$ tends to zero is computed for one-dimensional quasicrystals and shown to be consistent with independent calculations of the variance, $\sigma^2(R)$, in the number of points contained in an interval of length $2R$. We find that one-dimensional quasicrystals produced by projection from a two-dimensional lattice onto a line of slope $1/\tau$ fall into distinct classes determined by the width of the projection window. For a countable dense set of widths, $Z(k) \sim k^4$; for all others, $Z(k) \sim k^2$. This distinction suggests that measures of hyperuniformity define new classes of quasicrystals in higher dimensions as well.
    Comment: 12 pages, 14 figures
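
    The one-dimensional construction described above (projection from $\mathbb{Z}^2$ onto a line of slope $1/\tau$, with acceptance in a perpendicular window) is easy to reproduce numerically. The sketch below uses the canonical window width and estimates the number variance $\sigma^2(R)$ directly rather than through $Z(k)$; it only illustrates the setup, not the paper's scaling analysis, and varying the window width from the canonical value is what separates the classes reported above.

        import numpy as np

        tau = (1 + np.sqrt(5)) / 2
        c, s = tau / np.hypot(tau, 1), 1 / np.hypot(tau, 1)   # cos, sin of the slope-1/tau line

        # Cut-and-project: keep (n, m) in Z^2 whose perpendicular coordinate m*c - n*s
        # lies in the canonical window [-s, c) (width s + c), then record the parallel
        # coordinate n*c + m*s.  For each n at most two m values can qualify.
        n = np.arange(200000)
        m0 = np.floor(n * s / c).astype(np.int64)
        pts = []
        for m in (m0, m0 + 1):
            perp = m * c - n * s
            keep = (perp >= -s) & (perp < c)
            pts.append(n[keep] * c + m[keep] * s)
        x = np.sort(np.concatenate(pts))

        def number_variance(x, R, n_samples=20000):
            # Variance of the number of points in a randomly placed interval of length 2R.
            rng = np.random.default_rng(1)
            centers = rng.uniform(x[0] + R, x[-1] - R, size=n_samples)
            return (np.searchsorted(x, centers + R) - np.searchsorted(x, centers - R)).var()

        for R in (1.0, 10.0, 100.0, 1000.0):
            # A Poisson process would give variance growing like 2R; a hyperuniform
            # point set grows more slowly than that.
            print(R, number_variance(x, R))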