An Adynamical, Graphical Approach to Quantum Gravity and Unification
We use graphical field gradients in an adynamical, background independent
fashion to propose a new approach to quantum gravity and unification. Our
proposed reconciliation of general relativity and quantum field theory is based
on a modification of their graphical instantiations, i.e., Regge calculus and
lattice gauge theory, respectively, which we assume are fundamental to their
continuum counterparts. Accordingly, the fundamental structure is a graphical
amalgam of space, time, and sources (in the parlance of quantum field theory)
called a "spacetimesource element." These are fundamental elements of space,
time, and sources, not source elements in space and time. The transition
amplitude for a spacetimesource element is computed using a path integral with
discrete graphical action. The action for a spacetimesource element is
constructed from a difference matrix K and source vector J on the graph, as in
lattice gauge theory. K is constructed from graphical field gradients so that
it contains a non-trivial null space and J is then restricted to the row space
of K, so that it is divergence-free and represents a conserved exchange of
energy-momentum. This construct of K and J represents an adynamical global
constraint between sources, the spacetime metric, and the energy-momentum
content of the element, rather than a dynamical law for time-evolved entities.
We use this approach via modified Regge calculus to correct proper distance in
the Einstein-de Sitter cosmology model, yielding a fit of the Union2 Compilation
supernova data that matches LambdaCDM without having to invoke accelerating
expansion or dark energy. A similar modification to lattice gauge theory
results in an adynamical account of quantum interference.
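As a rough numerical illustration of the K-and-J construction described above, here is a minimal sketch. It is not the paper's actual model: it assumes a toy 4-node graph, takes the graph Laplacian as the difference matrix K (which has the required non-trivial null space), projects a source vector J onto the row space of K, and evaluates a Gaussian transition amplitude with the Moore-Penrose pseudoinverse standing in for the inverse of the singular K.

```python
import numpy as np

# Toy 4-node path graph; K is taken to be its Laplacian, whose null space
# is spanned by the constant vector (an assumed stand-in for the paper's
# difference matrix).
edges = [(0, 1), (1, 2), (2, 3)]
n = 4
K = np.zeros((n, n))
for i, j in edges:
    K[i, i] += 1.0
    K[j, j] += 1.0
    K[i, j] -= 1.0
    K[j, i] -= 1.0

# Project a raw source vector onto the row space of K, i.e. remove its
# component along the null space so that it sums to zero ("divergence-free").
J_raw = np.array([1.0, 0.0, 0.0, 0.0])
null_vec = np.ones(n) / np.sqrt(n)           # normalized null-space basis vector
J = J_raw - null_vec * (null_vec @ J_raw)     # restriction to the row space

# Gaussian "path integral" over the non-null modes: amplitude ~ exp(J^T K^+ J / 2),
# with K^+ the Moore-Penrose pseudoinverse since K is singular.
K_pinv = np.linalg.pinv(K)
amplitude = np.exp(0.5 * J @ K_pinv @ J)
print("projected J:", J, " amplitude:", amplitude)
```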
A flexible model for dynamic linking in Java and C#
Dynamic linking supports flexible code deployment, allowing partially linked code to link further code on the fly, as needed.
Thus, end-users enjoy the advantage of automatically receiving updates, without the need for any explicit action on their side,
such as re-compilation or re-linking. On the down side, two executions of a program may link in different versions of code, which
in some cases causes subtle errors, and may mystify end-users.
Dynamic linking in Java and C# is similar: the same linking phases are involved, soundness is based on similar ideas, and
executions which do not throw linking errors give the same result. They are, however, not identical: the linking phases are combined
differently, and take place in a different order. Consequently, linking errors may be detected at different times by the Java and C# runtime
systems.
We develop a non-deterministic model, which describes the behaviour of both Java and C# program executions. The non-determinism
allows us to describe the design space, to distill the similarities between the two languages, and to use one proof of
soundness for both. We also prove that all execution strategies are equivalent with respect to terminating executions that do not
throw link errors: they give the same results.
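The equivalence claim can be pictured with a deliberately simplified sketch (an assumed toy model, not the paper's formal one): an eager strategy resolves every reference before execution, a lazy strategy resolves each reference at first use, and the two agree on runs that raise no link errors while detecting a missing reference at different points.

```python
# Toy model: a "program" is a sequence of names resolved against a definition table.
DEFINITIONS = {"inc": lambda x: x + 1, "dbl": lambda x: 2 * x}

class LinkError(Exception):
    pass

def resolve(name):
    if name not in DEFINITIONS:
        raise LinkError(f"unresolved reference: {name}")
    return DEFINITIONS[name]

def run_eager(program, x):
    # Resolve every referenced name up front, then execute.
    table = {name: resolve(name) for name in program}
    for name in program:
        x = table[name](x)
    return x

def run_lazy(program, x):
    # Resolve each name only when it is about to execute.
    for name in program:
        x = resolve(name)(x)
    return x

ok = ["inc", "dbl", "inc"]
assert run_eager(ok, 1) == run_lazy(ok, 1) == 5   # same result when no link errors occur

bad = ["inc", "missing"]
# run_eager(bad, 1) fails before any step runs; run_lazy(bad, 1) fails only after "inc".
```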
A versatile approach to calculus and numerical methods
Traditionally the calculus is the study of the symbolic algorithms for differentiation and
integration, the relationship between them, and their use in solving problems. Only at
the end of the course, when all else fails, are numerical methods introduced, such as the
Newton-Raphson method of solving equations, or Simpson's rule for calculating areas.
The problem with such an approach is that it often produces students who are very well
versed in the algorithms and can solve the most fiendish of symbolic problems, yet
have no understanding of the meaning of what they are doing. Given the arrival of
computer software which can carry out these algorithms mechanically, the question
arises as to what parts of calculus need to be studied in the curriculum of the future. It
is my contention that such a study can use the computer technology to produce a far
more versatile approach to the subject, in which the numerical and graphical
representations may be used from the outset to produce insights into the fundamental
meanings, and in which a wider understanding of the processes of change and growth will
be possible than is afforded by the narrow band of problems that can be solved by traditional symbolic
methods of the calculus.
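For concreteness, here are minimal sketches of the two numerical methods named above. They are standard textbook implementations; the function names, tolerances, and test problems are illustrative and not taken from the paper.

```python
def newton_raphson(f, df, x0, tol=1e-12, max_iter=50):
    """Solve f(x) = 0 by iterating x <- x - f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def simpson(f, a, b, n=100):
    """Approximate the integral of f over [a, b] with n (even) subintervals."""
    if n % 2:
        n += 1
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += f(a + i * h) * (4 if i % 2 else 2)
    return total * h / 3

# Example: a root of x^2 - 2, and the area under x^2 on [0, 1].
print(newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, 1.0))  # ~1.41421356
print(simpson(lambda x: x * x, 0.0, 1.0))                          # ~0.33333333
```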
Issues about the Adoption of Formal Methods for Dependable Composition of Web Services
Web Services provide interoperable mechanisms for describing, locating and
invoking services over the Internet; composition further makes it possible to build
complex services out of simpler ones for complex B2B applications. While
current studies on these topics are mostly focused - from the technical
viewpoint - on standards and protocols, this paper investigates the adoption of
formal methods, especially for composition. We logically classify and analyze
three different (but interconnected) kinds of important issues towards this
goal, namely foundations, verification and extensions. The aim of this work is
to identify the proper questions on the adoption of formal methods for
dependable composition of Web Services, not necessarily to find the optimal
answers. Nevertheless, we still try to propose some tentative answers based on
our proposal for a composition calculus, which we hope can stimulate a proper
discussion.
CapablePtrs: Securely Compiling Partial Programs using the Pointers-as-Capabilities Principle
Capability machines such as CHERI provide memory capabilities that can be
used by compilers to provide security benefits for compiled code (e.g., memory
safety). The C to CHERI compiler, for example, achieves memory safety by
following a principle called "pointers as capabilities" (PAC). Informally, PAC
says that a compiler should represent a source language pointer as a machine
code capability. But the security properties of PAC compilers are not yet well
understood. We show that memory safety is only one aspect, and that PAC
compilers can provide significant additional security guarantees for partial
programs: the compiler can provide guarantees for a compilation unit, even if
that compilation unit is later linked to attacker-controlled machine code. This
paper is the first to study the security of PAC compilers for partial programs
formally. We prove for a model of such a compiler that it is fully abstract.
The proof uses a novel proof technique (dubbed TrICL, read trickle), which is
of broad interest because it reuses and extends the compiler correctness
relation in a natural way, as we demonstrate. We implement our compiler on top
of the CHERI platform and show that it can compile legacy C code with minimal
code changes. We provide performance benchmarks showing that the performance
overhead is proportional to the number of cross-compilation-unit function
calls.
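To make the pointers-as-capabilities idea concrete, here is a small conceptual sketch (assumed names and a toy flat memory, not CHERI's actual API): a capability bundles an address with bounds and permissions, so every dereference it authorizes is checked, unlike a raw integer address that could reach any part of memory.

```python
MEMORY = [0] * 64   # toy flat memory shared by all compilation units

class Capability:
    """A pointer represented as a bounded, permission-carrying capability."""
    def __init__(self, base, length, writable=True):
        self.base, self.length, self.writable = base, length, writable

    def load(self, offset):
        if not (0 <= offset < self.length):
            raise MemoryError("capability bounds violation")
        return MEMORY[self.base + offset]

    def store(self, offset, value):
        if not self.writable or not (0 <= offset < self.length):
            raise MemoryError("capability bounds/permission violation")
        MEMORY[self.base + offset] = value

# A compilation unit receives a capability only to its own buffer; under PAC a
# source-language pointer would be represented this way rather than as a bare address.
buf = Capability(base=8, length=4)
buf.store(0, 42)
print(buf.load(0))       # 42
# buf.store(4, 7)        # out of bounds: raises MemoryError instead of corrupting memory
```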
Taverna Workflows: Syntax and Semantics
This paper presents the formal syntax and the operational semantics of Taverna, a workflow management system with a large user base among the e-Science community. Such a formal foundation, which has so far been lacking, opens the way to the translation between Taverna workflows and other process models. In particular, the ability to automatically compile a simple domain-specific process description into Taverna facilitates its adoption by e-scientists who are not expert workflow developers. We demonstrate this potential through a practical use case.
Beyond Good and Evil: Formalizing the Security Guarantees of Compartmentalizing Compilation
Compartmentalization is good security-engineering practice. By breaking a
large software system into mutually distrustful components that run with
minimal privileges, restricting their interactions to conform to well-defined
interfaces, we can limit the damage caused by low-level attacks such as
control-flow hijacking. When used to defend against such attacks,
compartmentalization is often implemented cooperatively by a compiler and a
low-level compartmentalization mechanism. However, the formal guarantees
provided by such compartmentalizing compilation have seen surprisingly little
investigation.
We propose a new security property, secure compartmentalizing compilation
(SCC), that formally characterizes the guarantees provided by
compartmentalizing compilation and clarifies its attacker model. We reconstruct
our property by starting from the well-established notion of fully abstract
compilation, then identifying and lifting three important limitations that make
standard full abstraction unsuitable for compartmentalization. The connection
to full abstraction allows us to prove SCC by adapting established proof
techniques; we illustrate this with a compiler from a simple unsafe imperative
language with procedures to a compartmentalized abstract machine.
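For reference, the well-established notion of fully abstract compilation mentioned above is usually stated as preservation and reflection of contextual equivalence (a textbook formulation, not necessarily the exact statement used in the paper):

```latex
% A compiler (.)\downarrow is fully abstract iff it preserves and reflects
% contextual equivalence between source programs:
\forall P_1, P_2.\quad
  P_1 \simeq_{\mathrm{ctx}} P_2
  \;\iff\;
  P_1{\downarrow} \simeq_{\mathrm{ctx}} P_2{\downarrow}
```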