Two-Way Automata Making Choices Only at the Endmarkers
The question of the state-size cost for simulation of two-way
nondeterministic automata (2NFAs) by two-way deterministic automata (2DFAs) was
raised in 1978 and, despite many attempts, it is still open. Subsequently, the
problem was attacked by restricting the power of 2DFAs (e.g., using a
restricted input head movement) to the degree for which it was already possible
to derive some exponential gaps between the weaker model and the standard
2NFAs. Here we use an opposite approach, increasing the power of 2DFAs to the
degree for which it is still possible to obtain a subexponential conversion
from the stronger model to the standard 2DFAs. In particular, it turns out that
subexponential conversion is possible for two-way automata that make
nondeterministic choices only when the input head scans one of the input tape
endmarkers. However, there is no restriction on the input head movement. This
implies that an exponential gap between 2NFAs and 2DFAs can be obtained only
for unrestricted 2NFAs using capabilities beyond the proposed new model. As an
additional bonus, conversion into a machine for the complement of the original
language is polynomial in this model. The same holds for making such machines
self-verifying, halting, or unambiguous. Finally, any superpolynomial lower
bound for the simulation of such machines by standard 2DFAs would imply L ≠ NL.
In the same way, the alternating version of these machines is related to the
classical computational complexity questions L =? NL =? P.
Comment: 23 pages
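The abstract concerns determinization of two-way automata; the classic one-way analogue, the subset construction, is the simplest illustration of the exponential state blowup at issue. The sketch below is that standard textbook construction, not the paper's two-way technique:

```python
def determinize(alphabet, delta, start, accepting):
    """Classic subset construction: convert a one-way NFA to a DFA.

    delta: dict mapping (state, symbol) -> set of successor states.
    Returns the reachable DFA states (frozensets of NFA states), the DFA
    transition map, and the accepting DFA states. In the worst case the
    number of DFA states is 2^n, the blowup whose two-way counterpart
    the paper studies.
    """
    start_set = frozenset([start])
    dfa_states = {start_set}
    worklist = [start_set]
    dfa_delta = {}
    while worklist:
        current = worklist.pop()
        for sym in alphabet:
            succ = frozenset(s for q in current
                             for s in delta.get((q, sym), ()))
            dfa_delta[(current, sym)] = succ
            if succ not in dfa_states:
                dfa_states.add(succ)
                worklist.append(succ)
    dfa_accepting = {S for S in dfa_states if S & accepting}
    return dfa_states, dfa_delta, dfa_accepting
```

For an NFA recognising "strings ending in a" only two subsets are reachable, but for the language "the n-th symbol from the end is a" the construction necessarily produces about 2^n subsets.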
SAGE: Software-based Attestation for GPU Execution
With the application of machine learning to security-critical and sensitive
domains, there is a growing need for integrity and privacy in computation using
accelerators, such as GPUs. Unfortunately, the support for trusted execution on
GPUs is currently very limited - trusted execution on accelerators is
particularly challenging since the attestation mechanism should not reduce
performance. Although hardware support for trusted execution on GPUs is
emerging, we study purely software-based approaches for trusted GPU execution.
A software-only approach offers distinct advantages: (1) it complements
hardware-based approaches, enhancing security when vulnerabilities in the
hardware implementation degrade it; (2) it operates on GPUs without hardware
support for trusted execution; and (3) it achieves security without relying on
secrets embedded in the hardware, which, as history has shown, can be
extracted. In this work, we present SAGE, a software-based attestation
mechanism for GPU execution. SAGE enables secure code execution on NVIDIA GPUs
of the Ampere architecture (A100), providing properties of code integrity and
secrecy, computation integrity, as well as data integrity and secrecy - all in
the presence of malicious code running on the GPU and CPU. Our evaluation
demonstrates that SAGE is already practical today for executing code in a
trustworthy way on GPUs without specific hardware support.
Comment: 14 pages, 2 reference pages, 6 figures
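The general shape of software-based attestation is a nonce-keyed challenge-response over the code image. The toy sketch below shows only that protocol skeleton; it is not SAGE's actual checksum routine, which must additionally be self-checksumming and strictly time-bounded to resist a malicious GPU/CPU:

```python
import hashlib
import os

def attest(code_bytes: bytes, nonce: bytes) -> bytes:
    """Prover side: compute a nonce-keyed checksum over the code region.

    A plain hash stands in here for the carefully engineered checksum
    primitive a real software-based attestation scheme requires.
    """
    return hashlib.sha256(nonce + code_bytes).digest()

def verify(expected_code: bytes, nonce: bytes, response: bytes) -> bool:
    """Verifier side: recompute the checksum over the known-good code
    image and compare. A real verifier also enforces a tight time bound,
    since a compromised prover could otherwise simulate the computation."""
    return attest(expected_code, nonce) == response
```

The fresh nonce per challenge prevents replaying a previously recorded response over a since-modified code image.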
Life of occam-Pi
This paper considers some questions prompted by a brief review of the history of computing. Why is programming so hard? Why is concurrency considered an “advanced” subject? What’s the matter with Objects? Where did all the Maths go? In searching for answers, the paper looks at some concerns over fundamental ideas within object orientation (as represented by modern programming languages), before focussing on the concurrency model of communicating processes and its particular expression in the occam family of languages. In that focus, it looks at the history of occam, its underlying philosophy (Ockham’s Razor), its semantic foundation on Hoare’s CSP, its principles of process-oriented design and its development over almost three decades into occam-π (which blends in the concurrency dynamics of Milner’s π-calculus). Also presented is an urgent need for rationalisation: occam-π is an experiment that has demonstrated significant results, but now needs time to be spent on careful review and on implementing the conclusions of that review. Finally, the future is considered. In particular, is there a future?
Reasoning and Self-Knowledge
What is the relation between reasoning and self-knowledge? According to Shoemaker (1988), a certain kind of reasoning requires self-knowledge: we cannot rationally revise our beliefs without knowing that we have them, in part because we cannot see that there is a problem with an inconsistent set of propositions unless we are aware of believing them. In this paper, I argue that this view is mistaken. A second account, versions of which can be found in Shoemaker (1988 and 2009) and Byrne (2005), claims that we can reason our way from belief about the world to self-knowledge about such belief. While Shoemaker’s “zany argument” fails to show how such reasoning can issue in self-knowledge, Byrne’s account, which centres on the epistemic rule “If p, believe that you believe that p”, is more successful. Two interesting objections are that the epistemic rule embodies a mad inference (Boyle 2011) and that it makes us form first-order beliefs, rather than revealing them (Gertler 2011). I sketch responses to both objections.
The financial auditing of distributed ledgers, blockchain and cryptocurrencies
The internet and the digital transfer of money are set to fundamentally change the way financial audits are conducted. This paper critically assesses the way that such assets are currently audited when stored in distributed ledgers, transmitted via a blockchain, or held in crypto rather than sovereign currency form. In it, we identify the self-verifying nature of such financial data, which negates the need for traditional audit methods. Despite the promise of such methods, we highlight the many weaknesses that still exist in the blockchain and how these present issues for verification. We address distributed transaction and custody records and how these present auditing challenges. We suggest how auditors can use smart contracts to address these challenges and, at the same time, provide arbitration and oversight. Our contribution is to propose a protocol to audit the movement of blockchain-transmitted funds in order to make them more robust going forward.
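The "self-verifying" property mentioned above comes from each block committing to the hash of its predecessor, so an auditor can check record integrity mechanically rather than by sampling. A minimal illustration of that hash-chain check (not the paper's proposed audit protocol):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic fingerprint of a block's contents
    (sort_keys makes the JSON serialisation canonical)."""
    return hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()

def audit_chain(chain: list) -> bool:
    """Walk the ledger and confirm every block commits to its
    predecessor; any retroactive edit breaks a link downstream."""
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != block_hash(prev):
            return False
    return True
```

Because tampering with any historical record invalidates every later link, the auditor's integrity check reduces to one linear pass over the chain.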
Interaction Histories and Short-Term Memory: Enactive Development of Turn-Taking Behaviours in a Childlike Humanoid Robot
In this article, an enactive architecture is described that allows a humanoid robot to learn to compose simple actions into turn-taking behaviours while playing interaction games with a human partner. The robot’s action choices are reinforced by social feedback from the human in the form of visual attention and measures of behavioural synchronisation. We demonstrate that the system can acquire and switch between behaviours learned through interaction based on social feedback from the human partner. The role of reinforcement based on a short-term memory of the interaction was experimentally investigated. Results indicate that feedback based only on the immediate experience was insufficient to learn longer, more complex turn-taking behaviours. Therefore, some history of the interaction must be considered in the acquisition of turn-taking, which can be efficiently handled through the use of short-term memory.
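The core idea, conditioning reinforcement on a short window of recent events rather than on the immediate step alone, can be sketched with a tabular learner whose state is a bounded memory of recent actions. This is a hypothetical toy analogue, not the paper's enactive architecture:

```python
import random
from collections import defaultdict, deque

class HistoryQLearner:
    """Tabular Q-learner whose state is a short-term memory window of
    the last few interaction events, so longer behaviour patterns can
    be credited, unlike a purely reactive (memoryless) learner."""

    def __init__(self, actions, window=3, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.actions = actions
        self.memory = deque(maxlen=window)   # short-term memory of events
        self.q = defaultdict(float)          # Q[(history, action)]
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def state(self):
        return tuple(self.memory)

    def choose(self):
        """Epsilon-greedy action choice over the current history state."""
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(self.state(), a)])

    def learn(self, action, reward):
        """Standard Q-update; the taken action then enters the memory,
        shifting the learner to the successor history state."""
        s = self.state()
        self.memory.append(action)
        s2 = self.state()
        best_next = max(self.q[(s2, a)] for a in self.actions)
        self.q[(s, action)] += self.alpha * (
            reward + self.gamma * best_next - self.q[(s, action)])
```

With `window=0` the learner collapses to the memoryless case the article found insufficient for longer turn-taking behaviours.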
A Unique approach to Solve the System of Linear Equations
A system of linear equations is a set of linear equations over the same variables. Beyond mathematics, systems of linear equations are used in information theory, communication theory, and related fields. This study analyses the available methods for solving a system of linear equations and develops a new solution that does not involve direct matrix inversion. Comparing the test results with the well-known Gaussian elimination method shows that, in terms of numerical accuracy and computing time, the proposed approach achieves improved results. Furthermore, even very large systems can be solved by the proposed algorithm, given a cluster with sufficient resources.
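Solving Ax = b without forming an inverse is a well-established idea; the simplest example is Jacobi iteration, shown below as a baseline sketch. This is a standard method, not the paper's proposed algorithm (which the abstract does not specify), though it shares the property of avoiding direct matrix inversion and parallelising naturally across a cluster:

```python
def jacobi_solve(A, b, iterations=100):
    """Solve Ax = b by Jacobi iteration: each sweep updates every x_i
    from the previous iterate, so no matrix inverse is ever formed.
    Converges when A is strictly diagonally dominant. Each component
    update is independent, which is what makes the sweep easy to
    distribute over many workers."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        x_new = [0.0] * n
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i][i]
        x = x_new
    return x
```

By contrast, Gaussian elimination (the paper's comparison baseline) is a direct method whose row operations are inherently sequential, which is one reason iterative schemes are preferred at cluster scale.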
Enhancing efficiency of Byzantine-tolerant coordination protocols via hash functions
Distributed protocols resilient to Byzantine failures are notoriously costly from the computational and communication point of view. In this paper we discuss the role that collision-resistant hash functions can play in enhancing the efficiency of Byzantine-tolerant coordination protocols. In particular, we show two settings in which their use leads to a remarkable improvement in system performance in the case of large data or large populations. More precisely, we show how they can be applied to the implementation of atomic shared objects, and propose a technique that combines randomization and hash functions. We also discuss the benefits of these approaches and compute their complexity.
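The communication saving comes from exchanging short, collision-resistant digests of a large value instead of the value itself, falling back to full retransmission only on a mismatch. A minimal sketch of that optimisation (the function names are illustrative, not from the paper):

```python
import hashlib

def digest(value: bytes) -> bytes:
    """Collision-resistant fingerprint (32 bytes) exchanged in place of
    the full value; collision resistance is what lets replicas trust
    digest equality as value equality."""
    return hashlib.sha256(value).digest()

def digests_match(local_value: bytes, peer_digests: list) -> bool:
    """Fast path of the coordination round: compare peers' digests
    against the local value. Only on a mismatch would the protocol
    fall back to transferring the full value and running the costly
    Byzantine-tolerant reconciliation path."""
    expected = digest(local_value)
    return all(d == expected for d in peer_digests)
```

For a value of megabytes replicated across n peers, each round then moves O(n) 32-byte digests instead of O(n) full copies in the common, failure-free case.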