On Languages Accepted by P/T Systems Composed of Joins
Recently, some studies linked the computational power of abstract computing
systems based on multiset rewriting to models of Petri nets, and the
computational power of these nets to their topology. In turn, the computational power of
these abstract computing devices can be understood by just looking at their
topology, that is, information flow.
Here we continue this line of research by introducing J languages and proving
that they can be accepted by place/transition systems whose underlying net is
composed only of joins. Moreover, we investigate how J languages relate to
other families of formal languages. In particular, we show that every J
language can be accepted by a log n space-bounded non-deterministic Turing
machine with a one-way read-only input. We also show that every J language has
a semilinear Parikh map and that J languages and context-free languages (CFLs)
are incomparable.
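The Parikh map in the last claim sends a word to its vector of symbol counts; a minimal sketch in Python (the function name and example alphabet are ours, not the paper's):

```python
from collections import Counter

def parikh_map(word, alphabet):
    """Return the Parikh vector of `word`: the count of each symbol,
    in the fixed order given by `alphabet`."""
    counts = Counter(word)
    return tuple(counts.get(sym, 0) for sym in alphabet)

# Over the alphabet {a, b}, the word "aabab" maps to (3, 2): three a's, two b's.
print(parikh_map("aabab", "ab"))  # (3, 2)
```

A language has a semilinear Parikh map when the set of such vectors is a finite union of linear sets, the property the abstract attributes to J languages.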
On Statistical Query Sampling and NMR Quantum Computing
We introduce a ``Statistical Query Sampling'' model, in which the goal of an
algorithm is to produce an element of a hidden set S with
reasonable probability. The algorithm gains information about S through
oracle calls (statistical queries), where the algorithm submits a query
function g and receives an approximation to the expected value of g over S. We
show how this model is related to NMR quantum computing, in which only
statistical properties of an ensemble of quantum systems can be measured, and
in particular to the question of whether one can translate standard quantum
algorithms to the NMR setting without putting all of their classical
post-processing into the quantum system. Using Fourier analysis techniques
developed in the related context of statistical query learning, we prove a
number of lower bounds (both information-theoretic and cryptographic) on the
ability of algorithms to produce an element of S, even when the set is fairly
simple. These lower bounds point out a difficulty in efficiently applying NMR
quantum computing to algorithms such as Shor's and Simon's that
involve significant classical post-processing. We also explicitly relate the
notion of statistical query sampling to that of statistical query learning.
An extended abstract appeared in the 18th Annual IEEE Conference on
Computational Complexity (CCC 2003).
Keywords: statistical query, NMR quantum computing, lower bound
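The statistical-query oracle described above can be sketched as follows; the hidden set, query function, and tolerance handling here are illustrative assumptions, not the paper's exact formalization:

```python
import random

def stat_query(hidden_set, g, tolerance, rng=random.Random(0)):
    """Toy statistical-query oracle: returns the average of the query
    function g over the hidden set, perturbed by at most `tolerance`
    (modeling the noisy approximation the model allows)."""
    exact = sum(g(x) for x in hidden_set) / len(hidden_set)
    return exact + rng.uniform(-tolerance, tolerance)

# Hidden set: 4-bit numbers with even parity (even popcount).
S = [x for x in range(16) if bin(x).count("1") % 2 == 0]
# Query: "is the low bit set?" -- true for exactly half of S.
approx = stat_query(S, lambda x: x & 1, 0.05)
print(abs(approx - 0.5) <= 0.05 + 1e-9)  # True
```

The sampling task is then to output some element of S using only answers of this kind, which is exactly what the lower bounds constrain.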
A comparative study of formalisms for programming language definition : a thesis presented in partial fulfilment of the requirements for the degree of Master of Science in Computer Science at Massey University
This study looks at a number of methods for defining the full syntax and semantics of computer programming languages. The syntax, especially the nature of context-dependent conditions in it, is first examined, then some extensions of context-free grammars are compared to see to what extent they can encompass the full context-conditions of typical programming languages. It is found that several syntax extensions are inadequate in this regard, and that the ability to calculate complicated functions and conditions, and to eventually delete the values of such functions, is needed. This ability may be obtained either by allowing unrestricted rules and meta-variables in the phrase-structure, or by associating mathematical functions either with individual production rules or with the whole context-free structure, to transform it into an 'abstract syntax'. Since the form of a definition of a programming language semantics depends critically on how one conceives "meaning", five main types of semantics are considered: these are called 'natural', 'propositional', 'functional', and 'structural' semantics, as well as a semantics based on string rewriting rules. The five types are compared for their success in defining the semantics of computing languages, of the example Algol-like language ALEX in particular. Among other conclusions, it is found that the semantics of structures and computations on structures is the only type sufficiently comprehensive, precise, and readable.
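The kind of context condition the thesis finds beyond pure context-free syntax can be illustrated with a declared-before-use check, computed by a function over the parsed program rather than by the grammar itself (a sketch; the names and input format are ours):

```python
def check_declared_before_use(stmts):
    """Context condition that a context-free grammar alone cannot enforce:
    every used identifier must have been declared earlier.
    `stmts` is a simplified parse: a list of ("decl", name) / ("use", name)."""
    declared = set()
    for kind, name in stmts:
        if kind == "decl":
            declared.add(name)
        elif kind == "use" and name not in declared:
            return False
    return True

print(check_declared_before_use([("decl", "x"), ("use", "x")]))  # True
print(check_declared_before_use([("use", "y")]))                 # False
```

This is the pattern the thesis describes: the phrase structure stays context-free, and a mathematical function attached to it enforces the context-dependent condition.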
Survey of context provisioning middleware
In the scope of ubiquitous computing, one of the key issues is the awareness of context, which spans diverse aspects of the user's situation: activities, physical surroundings, location, emotions and social relations, device and network characteristics, and their interaction with each other. This contextual knowledge is typically acquired from physical, virtual or logical sensors. To overcome problems of heterogeneity and hide complexity, a significant number of middleware approaches have been proposed for systematic and coherent access to manifold context parameters. These frameworks deal particularly with context representation, context management and reasoning, i.e. deriving abstract knowledge from raw sensor data. This article surveys not only related work in these three categories but also the required evaluation principles.
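The "reasoning" layer such middleware provides, deriving abstract knowledge from raw sensor data, can be sketched as follows; the thresholds and labels are illustrative assumptions, not taken from any surveyed system:

```python
def derive_activity(readings):
    """Toy context reasoning: map raw sensor readings (a dict) to an
    abstract activity label, the kind of inference context middleware
    performs. Thresholds are illustrative only."""
    if readings.get("speed_kmh", 0) > 20:
        return "driving"
    if readings.get("speed_kmh", 0) > 3:
        return "walking"
    if readings.get("ambient_noise_db", 0) > 70:
        return "in a crowd"
    return "stationary"

print(derive_activity({"speed_kmh": 25}))                          # driving
print(derive_activity({"speed_kmh": 0, "ambient_noise_db": 40}))   # stationary
```

Real frameworks add a representation model and management layer around rules like these, which is precisely the heterogeneity the survey categorizes.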
Composable security of delegated quantum computation
Delegating difficult computations to remote large computation facilities,
with appropriate security guarantees, is a possible solution for the
ever-growing needs of personal computing power. For delegated computation
protocols to be usable in a larger context---or simply to securely run two
protocols in parallel---the security definitions need to be composable. Here,
we define composable security for delegated quantum computation. We distinguish
between protocols which provide only blindness---the computation is hidden from
the server---and those that are also verifiable---the client can check that it
has received the correct result. We show that the composable security
definition capturing both these notions can be reduced to a combination of
several distinct "trace-distance-type" criteria---which are, individually,
non-composable security definitions.
Additionally, we study the security of some known delegated quantum
computation protocols, including Broadbent, Fitzsimons and Kashefi's Universal
Blind Quantum Computation protocol. Even though these protocols were originally
proposed with insufficient security criteria, they turn out to still be secure
given the stronger composable definitions.
Comment: 37+9 pages, 13 figures. v3: minor changes, new references. v2:
extended the reduction between composable and local security to include
entangled inputs; substantially rewrote the introduction to the Abstract
Cryptography (AC) framework.
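For the diagonal (classical) special case, the "trace-distance-type" criteria mentioned above reduce to total variation distance, which is easy to sketch; this is a simplification of the quantum definition, which uses the trace norm of the difference of density matrices:

```python
def trace_distance_diagonal(p, q):
    """Trace distance between two density matrices diagonal in the same
    basis, i.e. between classical distributions: D = (1/2) * sum |p_i - q_i|.
    The general quantum case replaces the sum with a trace norm."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

# Distinguishing two biased coins: distance 0.5 means they can be told
# apart with probability 1/2 + 0.5/2 = 0.75 from a single sample.
print(trace_distance_diagonal([0.75, 0.25], [0.25, 0.75]))  # 0.5
```

Criteria of this shape bound how distinguishable the real protocol is from an ideal one, but on their own, without the simulator-based composable framing, they do not compose.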
WCET Computation of Safety-Critical Avionics Programs: Challenges, Achievements and Perspectives
Time-critical avionics software products must compute their output in due time. If it is not the case, the safety of the avionics systems to which they belong might be affected. Consequently, the Worst Case Execution Time (WCET) of the tasks of such programs must be computed safely, i.e., it must not be under-estimated. Since computing the exact WCET of a real-size software task is not possible (undecidability), "safe WCET" means over-estimated WCET. Here we have an industrial issue, in the sense that over-estimating the WCET too much leads to a waste of CPU power. Hence, the computation of a safe and precise WCET is the big challenge. Solutions to that problem cannot rely only on the technique for computing the WCET. Indeed, both hardware and software must be designed to be as deterministic as possible. For its Flight controls software products, Airbus has always been applying these principles but, since the A380, the use of more complex processors required a move from a technique based on measurements to a new one based on static analysis by Abstract Interpretation. Another kind of avionics application is the so-called High-performance avionics software products, which are significantly less affected by - rare - delays in the computation of their outputs. In this case, the need for a "safe WCET" is less strong, hence opening the door to other ways of computing it. In this context, the aim of the talk is to present the challenge of computing WCET in Airbus's industrial context, the achievements in this field, and some trends and perspectives.
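At its simplest, static WCET estimation over-approximates execution time by taking the longest path through a loop-free control-flow graph with per-block cycle costs; a toy sketch (real analyses, including those based on Abstract Interpretation, also model caches, pipelines, and loop bounds):

```python
from functools import lru_cache

def wcet_bound(block_cost, edges, entry, exit_):
    """Toy static WCET estimate: the longest path (in cycles) through a
    loop-free control-flow graph. This safely over-approximates every
    actual execution, since any real run follows one path through the CFG."""
    succs = {}
    for a, b in edges:
        succs.setdefault(a, []).append(b)

    @lru_cache(maxsize=None)
    def longest(node):
        if node == exit_:
            return block_cost[node]
        return block_cost[node] + max(longest(s) for s in succs[node])

    return longest(entry)

# Diamond CFG: entry -> (then | else) -> exit; the slower branch dominates.
costs = {"entry": 2, "then": 10, "else": 4, "exit": 1}
edges = [("entry", "then"), ("entry", "else"), ("then", "exit"), ("else", "exit")]
print(wcet_bound(costs, edges, "entry", "exit"))  # 13 (2 + 10 + 1)
```

The gap between this bound (13 cycles) and the fast path (7 cycles) is exactly the precision-versus-safety trade-off the talk describes.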
An incremental points-to analysis with CFL-reachability
Developing scalable and precise points-to analyses is increasingly important for analysing and optimising object-oriented programs where pointers are used pervasively. An incremental analysis for a program updates the existing analysis information after program changes to avoid reanalysing it from scratch. This can be efficiently deployed in software development environments where code changes are often small and frequent. This paper presents an incremental approach for demand-driven context-sensitive points-to analyses based on Context-Free Language (CFL) reachability. By tracing the CFL-reachable paths traversed in computing points-to sets, we can precisely identify and recompute on demand only the points-to sets affected by the program changes made. Combined with a flexible policy for controlling the granularity of traces, our analysis achieves significant speedups with little space overhead over reanalysis from scratch when evaluated with a null dereferencing client using 14 Java benchmarks.
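For contrast with the paper's demand-driven CFL-reachability formulation, a minimal flow-insensitive (Andersen-style) points-to analysis can be sketched as a fixed-point iteration; the names and input format are ours, and this whole-program style is precisely what incremental analyses avoid rerunning:

```python
def andersen(allocs, copies):
    """Minimal Andersen-style points-to analysis (flow-insensitive):
    `allocs` maps a variable to the abstract objects it allocates;
    `copies` is a list of assignments (dst, src) meaning dst = src.
    Propagates points-to sets along copies until a fixed point."""
    pts = {v: set(objs) for v, objs in allocs.items()}
    changed = True
    while changed:
        changed = False
        for dst, src in copies:
            before = len(pts.setdefault(dst, set()))
            pts[dst] |= pts.get(src, set())
            if len(pts[dst]) != before:
                changed = True
    return pts

# p = new O1; q = p; r = q  =>  all three may point to O1.
result = andersen({"p": {"O1"}}, [("q", "p"), ("r", "q")])
print(sorted(result["r"]))  # ['O1']
```

After a small edit (say, deleting `r = q`), only the sets reachable through that assignment change; recording which propagation paths produced each set is the idea behind the paper's trace-based recomputation.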
A middleware service for coordinated adaptation of communication services in groups of devices
Recent research in pervasive computing has shown that context-awareness and dynamic adaptation are fundamental requirements of mobile distributed applications. However, most approaches that focus on context-aware dynamic adaptation use only the context information available at the mobile device to trigger a local adaptation. For distributed collaborative applications this is clearly insufficient, since the same adaptation has to be done, in synch, at all mobile devices of the group, and hence should also be based on a commonly agreed context. Therefore, such kinds of applications require mechanisms and protocols to exchange context information among the devices and to coordinate the adaptation operations across a group of mobile devices. In this paper we present a middleware service for coordinated adaptation of communication services in groups of devices. At each device this adaptation is achieved with minimal disruption of the application's remote interactions. This middleware service is based on the notion of global context and a generic protocol for global context election and synchronization of the adaptation steps, which we called Moratus. Our middleware service was implemented using JGroups and evaluated for groups of up to 30 devices, showing acceptable latency for groups of such size.
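One simple "global context election" rule is to adapt the whole group to its weakest member; the abstract does not specify Moratus's actual election rules, so the following is purely an illustrative sketch:

```python
def elect_global_context(device_contexts):
    """Toy global-context election: the lowest bandwidth reported by any
    group member becomes the commonly agreed context, and the shared
    adaptation (codec choice) follows from it. Illustrative only; not
    the Moratus protocol's actual rules."""
    bandwidth = min(ctx["bandwidth_kbps"] for ctx in device_contexts.values())
    codec = "high-quality" if bandwidth >= 512 else "low-bitrate"
    return {"bandwidth_kbps": bandwidth, "codec": codec}

group = {"phone": {"bandwidth_kbps": 1200}, "tablet": {"bandwidth_kbps": 300}}
print(elect_global_context(group))  # {'bandwidth_kbps': 300, 'codec': 'low-bitrate'}
```

The hard part the middleware solves is not the rule itself but exchanging the contexts and switching every device at the same protocol step, which is where a group-communication layer such as JGroups comes in.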
Recent Advances on GPU Computing in Operations Research
In the last decade, Graphics Processing Units (GPUs) have gained increasing popularity as accelerators for High Performance Computing (HPC) applications. Recent GPUs are not only powerful graphics engines but also highly threaded parallel computing processors that can achieve sustainable speedups compared with CPUs. In this context, researchers try to exploit the capability of this architecture to solve difficult problems in many domains in science and engineering. In this article, we present recent advances on GPU Computing in Operations Research. We focus in particular on Integer Programming and Linear Programming.
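The pattern GPUs accelerate in much of this work is embarrassingly parallel evaluation of many candidate solutions followed by a reduction; shown here sequentially in plain Python as a sketch (on a GPU, each candidate would map to one thread):

```python
def best_solution(candidates, objective):
    """Evaluate an objective over many candidates independently (the
    data-parallel step a GPU accelerates), then reduce to the best one
    (a parallel reduction on real hardware). Sequential sketch only."""
    scored = [(objective(c), c) for c in candidates]  # map: one thread per candidate
    return max(scored)                                # reduce: pick the maximum

# Tiny integer program: maximize 3x + 5y subject to x + 2y <= 4, x, y >= 0.
feasible = [(x, y) for x in range(5) for y in range(3) if x + 2 * y <= 4]
value, sol = best_solution(feasible, lambda s: 3 * s[0] + 5 * s[1])
print(value, sol)  # 12 (4, 0)
```

Real GPU work in Integer and Linear Programming applies this map-reduce shape to heavier kernels, such as simplex pivots or branch-and-bound node evaluations, rather than to toy enumerations like this one.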