The OpenModelica integrated environment for modeling, simulation, and model-based development
OpenModelica is a unique large-scale integrated open-source Modelica- and FMI-based modeling, simulation, optimization, model-based analysis and development environment. Moreover, the OpenModelica environment provides a number of facilities such as debugging; optimization; visualization and 3D animation; web-based model editing and simulation; scripting from Modelica, Python, Julia, and Matlab; efficient simulation and co-simulation of FMI-based models; compilation for embedded systems; Modelica-UML integration; requirement verification; and generation of parallel code for multi-core architectures. The environment is based on the equation-based object-oriented Modelica language and currently uses the MetaModelica extended version of Modelica for its model compiler implementation. This overview paper gives an up-to-date description of the capabilities of the system, short overviews of the open-source symbolic and numeric algorithms used with pointers to published literature, tool integration aspects, some lessons learned, and the main vision behind its development.
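The scripting interfaces listed above include Python. As a rough, hedged illustration (assuming the OMPython package that accompanies OpenModelica and a local omc installation; the example model is taken from the Modelica Standard Library), a session could look like this:

    # Minimal sketch: driving OpenModelica from Python via OMPython.
    # Assumes OMPython and a local OpenModelica (omc) installation.
    from OMPython import OMCSessionZMQ

    omc = OMCSessionZMQ()                      # start an omc compiler session over ZeroMQ
    print(omc.sendExpression("getVersion()"))  # query the compiler version
    omc.sendExpression("loadModel(Modelica)")  # load the Modelica Standard Library
    # simulate a bundled example model and report where the result file was written
    res = omc.sendExpression(
        "simulate(Modelica.Blocks.Examples.PID_Controller, stopTime=4.0)"
    )
    print(res["resultFile"])

Analogous sessions are possible from Julia (OMJulia) and Matlab (OMMatlab), matching the scripting interfaces the abstract names.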
The Quantum Monadology
The modern theory of functional programming languages uses monads for encoding computational side-effects and side-contexts, beyond bare-bone program logic. Even though quantum computing is intrinsically side-effectful (as in quantum measurement) and context-dependent (as on mixed ancillary states), little of this monadic paradigm has previously been brought to bear on quantum programming languages.

Here we systematically analyze the (co)monads on categories of parameterized module spectra which are induced by Grothendieck's "motivic yoga of operations" -- for the present purpose specialized to HC-modules and further to set-indexed complex vector spaces. Interpreting an indexed vector space as a collection of alternative possible quantum state spaces parameterized by quantum measurement results, as familiar from Proto-Quipper semantics, we find that these (co)monads provide a comprehensive natural language for functional quantum programming with classical control and with "dynamic lifting" of quantum measurement results back into classical contexts.

We close by indicating a domain-specific quantum programming language (QS) expressing these monadic quantum effects in transparent do-notation, embeddable into the recently constructed Linear Homotopy Type Theory (LHoTT) which interprets into parameterized module spectra. Once embedded into LHoTT, this should make for formally verifiable universal quantum programming with linear quantum types, classical control, dynamic lifting, and notably also with topological effects.
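The categorical machinery is heavyweight, but the underlying picture of measurement-indexed state spaces with dynamic lifting can be caricatured concretely. The following toy Python sketch (all names hypothetical, and in no way the paper's QS language or its LHoTT semantics) keeps a family of state vectors indexed by classical measurement outcomes; measuring a qubit refines the classical index, a crude analogue of lifting measurement results back into the classical context:

    # Toy sketch: a set-indexed family of quantum state vectors keyed by
    # classical measurement outcomes, with a measurement step that lifts the
    # outcome into the classical index (a caricature of "dynamic lifting").
    import numpy as np

    def project(psi, qubit, bit):
        # Project a state vector onto the given bit value of one qubit
        # (little-endian indexing of basis states), without normalizing.
        out = psi.copy()
        for i in range(len(psi)):
            if (i >> qubit) & 1 != bit:
                out[i] = 0.0
        return out

    def measure_qubit(indexed_states, qubit):
        # Map each classical branch to up to two sub-branches labelled by the
        # measured bit; branches of (near-)zero probability are dropped.
        lifted = {}
        for outcome, psi in indexed_states.items():
            for bit in (0, 1):
                proj = project(psi, qubit, bit)
                p = np.vdot(proj, proj).real
                if p > 1e-12:
                    lifted[outcome + (bit,)] = proj / np.sqrt(p)
        return lifted

    # One branch (empty classical context) holding a Bell state, then measure.
    bell = np.zeros(4, dtype=complex)
    bell[0] = bell[3] = 1 / np.sqrt(2)
    branches = measure_qubit({(): bell}, qubit=0)
    print({k: np.round(v, 3) for k, v in branches.items()})
    # {(0,): |00>, (1,): |11>} -- the classical index now records the outcome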
Hybrid Analog-Digital Co-Processing for Scientific Computation
In the past 10 years computer architecture research has moved toward more heterogeneity and less adherence to conventional abstractions. Scientists and engineers hold an unshakable belief that computing holds keys to unlocking humanity's Grand Challenges. Acting on that belief, they have looked deeper into computer architecture to find specialized support for their applications. Likewise, computer architects have looked deeper into circuits and devices in search of untapped performance and efficiency. The lines between computer architecture layers (applications, algorithms, architectures, microarchitectures, circuits, and devices) have blurred. Against this backdrop, a menagerie of computer architectures is on the horizon, architectures that forgo basic assumptions about computer hardware and require new thinking about how such hardware supports problems and algorithms.
This thesis is about revisiting hybrid analog-digital computing in support of diverse modern workloads. Hybrid computing had extensive applications in early computing history and has been revisited for small-scale applications in embedded systems. But architectural support for using hybrid computing in modern workloads, at scale and with high-accuracy solutions, has been lacking.
I demonstrate solving a variety of scientific computing problems, including stochastic ODEs, partial differential equations, linear algebra, and nonlinear systems of equations, as case studies in hybrid computing. I solve these problems on a system of multiple prototype analog accelerator chips built by a team at Columbia University. On that team I made contributions toward programming the chips, building the digital interface, and validating the chips' functionality. The analog accelerator chip is intended for use in conjunction with a conventional digital host computer.
The appeal and motivation for using an analog accelerator are efficiency and performance, but these come with limitations in accuracy and problem size that we have to work around.
The first problem is how to express problems in this unconventional computation model. Scientific computing phrases problems as differential equations and algebraic equations. Differential equations are a continuous view of the world, while algebraic equations are a discrete one. Prior work in analog computing focused mostly on differential equations; algebraic equations played only a minor role. The secret to using the analog accelerator to support modern workloads on conventional computers is that these two viewpoints are interchangeable: the algebraic equations that underlie most workloads can be solved as differential equations, and differential equations are naturally solvable on the analog accelerator chip. A hybrid analog-digital computer architecture can therefore focus on solving linear and nonlinear algebra problems to support many workloads.
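As a small hedged illustration of that interchangeability (not the thesis's actual hardware flow), a linear algebraic system A x = b can be recast as the ODE dx/dt = b - A x, whose steady state is the algebraic solution whenever the flow is stable; an analog integrator would evolve this in continuous time, while the digital stand-in below uses forward Euler:

    # Sketch: solving the algebraic system A x = b by integrating
    # dx/dt = b - A x to steady state (forward Euler stands in for the
    # continuous-time evolution an analog integrator would perform).
    import numpy as np

    A = np.array([[3.0, 1.0],
                  [1.0, 2.0]])
    b = np.array([9.0, 8.0])

    x = np.zeros(2)              # initial condition
    dt = 0.01                    # small step for the digital stand-in
    for _ in range(5000):
        x = x + dt * (b - A @ x)

    print(x, np.linalg.solve(A, b))   # both approach [2., 3.]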
The second problem is how to get accurate solutions using hybrid analog-digital computing. The reason the analog computation model gives less accurate solutions is that it gives up representing numbers as digital binary values, and instead uses the full range of analog voltage and current to represent real numbers. Prior work has established that encoding data in analog signals gives an energy-efficiency advantage as long as the analog data precision is limited. While the analog accelerator alone may be useful for energy-constrained applications where inputs and outputs are imprecise, we are more interested in using analog in conjunction with digital for precise solutions. This thesis gives the novel insight that the trick to doing so is to solve nonlinear problems, where low-precision guesses are useful to conventional digital algorithms.
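A hedged sketch of that workflow, with a coarsely quantized solution standing in for the analog accelerator's low-precision estimate: the imprecise guess seeds a digital Newton iteration, which restores full precision in a few steps:

    # Sketch: a low-precision guess (here a coarsely rounded solution, standing
    # in for an analog estimate) seeds digital Newton iteration on a small
    # nonlinear system; full precision is recovered in a handful of steps.
    import numpy as np

    def newton(f, jac, x0, tol=1e-12, max_iter=50):
        x = x0.astype(float)
        for i in range(max_iter):
            dx = np.linalg.solve(jac(x), -f(x))
            x = x + dx
            if np.linalg.norm(dx) < tol:
                return x, i + 1
        return x, max_iter

    # Nonlinear system: x^2 + y^2 = 4 and x*y = 1
    f = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
    jac = lambda v: np.array([[2 * v[0], 2 * v[1]], [v[1], v[0]]])

    exact, _ = newton(f, jac, np.array([2.0, 0.5]))
    coarse = np.round(exact, 1)           # roughly 1-2 decimal digits of precision
    refined, iters = newton(f, jac, coarse)
    print(coarse, refined, iters)         # the coarse seed converges in very few steps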
The third problem is how to solve large problems using hybrid analog-digital computing. The reason the analog computation model can't handle large problems is that it gives up step-by-step discrete-time operation, instead allowing variables to evolve smoothly in continuous time. To make that happen, the analog accelerator chains hardware for mathematical operations end-to-end. During computation, analog data flows through the hardware with no overhead from control logic or memory accesses. The downside is that the needed hardware size grows with problem size. While scientific computing researchers have long split large problems into smaller subproblems to fit digital computer constraints, this thesis is a first attempt to consider these divide-and-conquer algorithms as an essential tool in using the analog model of computation.
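A hedged sketch of that divide-and-conquer flavor (sizes and names illustrative): a block-Jacobi iteration solves a larger A x = b by repeatedly solving small diagonal blocks, each small enough to fit a fixed-capacity solver such as an analog accelerator, with a digital outer loop gluing the pieces together:

    # Sketch: block-Jacobi iteration splits a larger linear system into small
    # diagonal-block solves, each of a size that could fit a fixed-capacity
    # (e.g. analog) solver; a digital outer loop iterates to convergence.
    import numpy as np

    rng = np.random.default_rng(0)
    n, blk = 16, 4
    A = rng.standard_normal((n, n)) + n * np.eye(n)   # strongly diagonally dominant
    b = rng.standard_normal(n)

    x = np.zeros(n)
    for _ in range(50):                   # outer (digital) iterations
        x_new = x.copy()
        for s in range(0, n, blk):        # each block is a small subproblem
            sl = slice(s, s + blk)
            rhs = b[sl] - A[sl, :] @ x + A[sl, sl] @ x[sl]
            x_new[sl] = np.linalg.solve(A[sl, sl], rhs)
        x = x_new

    print(np.linalg.norm(A @ x - b))      # residual shrinks toward zero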
As we enter the post-Moore's law era of computing, unconventional architectures will offer specialized models of computation that uniquely support specific problem types. Two prominent examples are deep neural networks and quantum computers. Recent trends in computer science research show these unconventional architectures will soon have broad adoption. In this thesis I show that analog accelerators are another such specialized, unconventional architecture, one that supports problems in scientific computing. Computer architecture researchers will discover other important models of computation in the future. This thesis is an example of the discovery process, implementation, and evaluation of how an unconventional architecture supports specialized workloads.
Improving Model-Based Software Synthesis: A Focus on Mathematical Structures
Computer hardware keeps increasing in complexity. Software design needs to keep up with this. The right models and abstractions empower developers to leverage the novelties of modern hardware. This thesis deals primarily with Models of Computation, as a basis for software design, in a family of methods called software synthesis.
We focus on Kahn Process Networks and dataflow applications as abstractions, both for programming and for deriving an efficient execution on heterogeneous multicores. The latter we accomplish by exploring the design space of possible mappings of computation and data to hardware resources. Mapping algorithms are not at the center of this thesis, however. Instead, we examine the mathematical structure of the mapping space, leveraging its inherent symmetries or geometric properties to improve mapping methods in general.
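For readers less familiar with the abstraction, a toy sketch of Kahn-Process-Network-style execution (illustrative only, not the thesis's tooling): processes communicate exclusively over FIFO channels with blocking reads, which is what makes the observable behaviour independent of scheduling:

    # Toy Kahn-Process-Network-style pipeline: producer -> scale -> consumer.
    # Processes share no state and communicate only via FIFO channels with
    # blocking reads, so the result is deterministic regardless of scheduling.
    import threading, queue

    def producer(out_ch, n):
        for i in range(n):
            out_ch.put(i)              # emit tokens 0..n-1
        out_ch.put(None)               # end-of-stream marker

    def scale(in_ch, out_ch, k):
        while (tok := in_ch.get()) is not None:   # blocking read
            out_ch.put(k * tok)
        out_ch.put(None)

    def consumer(in_ch, result):
        while (tok := in_ch.get()) is not None:
            result.append(tok)

    ch_a, ch_b, result = queue.Queue(), queue.Queue(), []
    threads = [
        threading.Thread(target=producer, args=(ch_a, 5)),
        threading.Thread(target=scale, args=(ch_a, ch_b, 10)),
        threading.Thread(target=consumer, args=(ch_b, result)),
    ]
    for t in threads: t.start()
    for t in threads: t.join()
    print(result)                      # always [0, 10, 20, 30, 40]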
This thesis thoroughly explores the process of model-based design, aiming to go beyond the more established software synthesis on dataflow applications. We start with the problem of assessing these methods through benchmarking, and go on to formally examine the general goals of benchmarks. In this context, we also consider the role modern machine learning methods play in benchmarking.
We explore different established semantics, stretching the limits of Kahn Process Networks. We also discuss novel models, like Reactors, which is designed to be a deterministic, adaptive model with time as a first-class citizen. By investigating abstractions and transformations in the Ohua language for implicit dataflow programming, we also focus on programmability.
The focus of the thesis is on the models and methods, but we evaluate them in diverse use cases, generally centered around Cyber-Physical Systems. These include the 5G telecommunication standard and the automotive and signal processing domains. We even go beyond embedded systems and discuss use cases in GPU programming and microservice-based architectures.
OpenCog Hyperon: A Framework for AGI at the Human Level and Beyond
An introduction to the OpenCog Hyperon framework for Artificial General Intelligence is presented. Hyperon is a new, mostly from-the-ground-up rewrite/redesign of the OpenCog AGI framework, based on similar conceptual and cognitive principles to the previous OpenCog version, but incorporating a variety of new ideas at the mathematical, software architecture and AI-algorithm level. This review lightly summarizes: 1) some of the history behind OpenCog and Hyperon, 2) the core structures and processes underlying Hyperon as a software system, 3) the integration of this software system with the SingularityNET ecosystem's decentralized infrastructure, 4) the cognitive model(s) being experimentally pursued within Hyperon on the hopeful path to advanced AGI, 5) the prospects seen for advanced aspects like reflective self-modification and self-improvement of the codebase, 6) the tentative development roadmap and various challenges expected to be faced, 7) the thinking of the Hyperon team regarding how to guide this sort of work in a beneficial direction ... and gives links and references for readers who wish to delve further into any of these aspects.
Variational geometric modeling with black box constraints and DAGs
CAD modelers enable designers to construct complex 3D shapes with high-level B-Rep operators, which avoids the burden of low-level geometric manipulations. However, a gap still exists between the shape that designers have in mind and the way they have to decompose it into a sequence of modeling steps. To bridge this gap, Variational Modeling enables designers to specify constraints the shape must respect. The constraints are converted into an explicit system of mathematical equations (potentially with some inequalities) which the modeler numerically solves. However, most available programs are 2D sketchers, essentially because in higher dimensions some constraints have complex mathematical expressions. This paper introduces a new approach to sketch constrained 3D shapes. The main idea is to replace explicit systems of mathematical equations with (mainly) Computer Graphics routines considered as Black Box Constraints. The obvious difficulty is that the arguments of all routines must have known numerical values. The paper shows how to solve this issue, i.e., how to solve and optimize without equations. The feasibility and promise of this approach are illustrated with the developed DECO (Deformation by Constraints) prototype.

The authors thank the two French institutes Carnot ARTS and Carnot STAR for their support of this research project. Lincong Fang thanks the National Natural Science Foundation of China (No. 61272300), the Zhejiang Provincial Natural Science Foundation of China (LQ13F020003), and the China Scholarship Council for their support.
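A hedged sketch of the black-box-constraint idea (the geometric query below is a trivial stand-in for an opaque Computer Graphics routine, and the penalty formulation is only one of several ways to enforce such a constraint numerically):

    # Sketch: enforcing a constraint that is only available as an evaluable
    # black box. The "routine" here is a stand-in (distance to the unit sphere);
    # since no analytic expression is assumed, it is handled with a penalty and
    # a derivative-free optimizer.
    import numpy as np
    from scipy.optimize import minimize

    def black_box_distance(p):
        # Stand-in for an opaque geometry routine that can only be evaluated.
        return abs(np.linalg.norm(p) - 1.0)

    target = np.array([2.0, 1.0, 0.5])   # the designer's preferred position

    def objective(p):
        # Stay close to the target while (approximately) satisfying the
        # black-box constraint via a quadratic penalty.
        return np.sum((p - target) ** 2) + 100.0 * black_box_distance(p) ** 2

    res = minimize(objective, x0=target, method="Nelder-Mead")
    print(res.x, np.linalg.norm(res.x))  # the point is pulled (almost) onto the sphere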
What is Robotics: Why Do We Need It and How Can We Get It?
Robotics is an emerging synthetic science concerned with programming work. Robot technologies are quickly advancing beyond the insights of the existing science. More secure intellectual foundations will be required to achieve better, more reliable and safer capabilities as their penetration into society deepens. Presently missing foundations include the identification of fundamental physical limits, the development of new dynamical systems theory and the invention of physically grounded programming languages. The new discipline needs a departmental home in the universities, which it can justify both intellectually and by its capacity to attract new, diverse populations inspired by the age-old human fascination with robots.