KLAIM: A Kernel Language for Agents Interaction and Mobility
We investigate the issue of designing a kernel programming language for mobile computing and describe KLAIM, a language that supports a programming paradigm where processes, like data, can be moved from one computing environment to another. The language consists of a core Linda with multiple tuple spaces and of a set of operators for building processes. KLAIM naturally supports programming with explicit localities. Localities are first-class data (they can be manipulated like any other data), but the language provides coordination mechanisms to control the interaction protocols among located processes. The formal operational semantics is useful for discussing the design of the language and provides guidelines for implementations. KLAIM is equipped with a type system that statically checks access rights violations of mobile agents. Types are used to describe the intentions (read, write, execute, etc.) of processes in relation to the various localities. The type system is used to determine the operations that processes want to perform at each locality, and to check whether they comply with the declared intentions and whether they have the necessary rights to perform the intended operations at the specific localities. Via a series of examples, we show that many mobile code programming paradigms can be naturally implemented in our kernel language. We also present a prototype implementation of KLAIM in Java.
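The core Linda primitives with multiple tuple spaces that the abstract mentions can be sketched as follows. This is only an illustrative model, not KLAIM itself (KLAIM is a process calculus with its own syntax and semantics); the class and method names here are hypothetical, chosen to mirror the classic Linda operations.

```python
# Illustrative sketch of Linda-style tuple-space primitives, the core that
# KLAIM builds on. KLAIM itself is a process calculus, not a Python API;
# names below (TupleSpace, out, read, in_) are hypothetical.

class TupleSpace:
    """One tuple space; KLAIM attaches such spaces to named localities."""

    def __init__(self):
        self.tuples = []

    def out(self, *tup):
        # out(t)@l: add a tuple to the space.
        self.tuples.append(tup)

    def _match(self, pattern, tup):
        # None acts as a formal field (wildcard); values must match exactly.
        return len(pattern) == len(tup) and all(
            p is None or p == v for p, v in zip(pattern, tup))

    def read(self, *pattern):
        # read(t)@l: non-destructive match; returns a matching tuple or None.
        for tup in self.tuples:
            if self._match(pattern, tup):
                return tup
        return None

    def in_(self, *pattern):
        # in(t)@l: destructive match; removes and returns the matched tuple.
        tup = self.read(*pattern)
        if tup is not None:
            self.tuples.remove(tup)
        return tup

# Multiple tuple spaces indexed by locality name; in KLAIM, localities
# are first-class data that processes can pass around.
localities = {"alice": TupleSpace(), "bob": TupleSpace()}
localities["alice"].out("temp", 21)
assert localities["alice"].read("temp", None) == ("temp", 21)
```

Keeping one space per locality is what lets a process direct an operation "at" a locality, which is the hook KLAIM's type system uses to check per-locality access rights.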
Encrypted Shared Data Spaces
The deployment of Shared Data Spaces in open, possibly hostile, environments raises the need to protect the confidentiality of the data space content. Existing approaches focus on access control mechanisms that protect the data space from untrusted agents. The basic assumption is that the hosts (and their administrators) where the data space is deployed have to be trusted. Encryption schemes can be used to protect the data space content from malicious hosts. However, these schemes do not allow searching on encrypted data. In this paper we present a novel encryption scheme that allows tuple matching on completely encrypted tuples. Since the data space does not need to decrypt tuples to perform the search, tuple confidentiality can be guaranteed even when the data space is deployed on malicious hosts (or an adversary gains access to the host). Our scheme does not require authorised agents to share keys for inserting and retrieving tuples. Each authorised agent can encrypt, decrypt, and search encrypted tuples without having to know other agents' keys. This is beneficial inasmuch as it simplifies the task of key management. An implementation of an encrypted data space based on this scheme is described and some preliminary performance results are given.
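The idea of matching on encrypted tuples without decryption can be illustrated with a much simpler, standard technique: deterministic HMAC tags, under which equal plaintext fields produce equal ciphertext tags. This sketch is NOT the authors' scheme (their construction notably avoids a shared key among agents); it assumes a single shared key purely to show why equality matching needs no decryption.

```python
import hashlib
import hmac

# Simplified illustration only: deterministic HMAC tags allow equality
# matching on encrypted fields without decryption. The paper's scheme is
# more sophisticated (no shared keys among agents); this sketch assumes
# one shared key, an assumption not made by the paper.

KEY = b"shared-demo-key"  # hypothetical key for the sketch

def seal(field):
    # Deterministic tag: equal plaintexts yield equal tags, so the
    # (untrusted) data space can compare tags without seeing plaintext.
    return hmac.new(KEY, repr(field).encode(), hashlib.sha256).hexdigest()

def encrypt_tuple(tup):
    return tuple(seal(f) for f in tup)

# The host stores only sealed tuples and never sees plaintext.
space = [encrypt_tuple(("temp", 21)), encrypt_tuple(("humidity", 40))]

def match(space, pattern):
    # None is a wildcard; concrete fields are sealed and compared tag-to-tag.
    sealed = [None if f is None else seal(f) for f in pattern]
    for enc in space:
        if all(s is None or s == e for s, e in zip(sealed, enc)):
            return enc
    return None

assert match(space, ("temp", None)) is not None
assert match(space, ("pressure", None)) is None
```

The trade-off visible even in this toy version is that determinism leaks equality patterns to the host; stronger schemes, like the one the paper proposes, aim to limit what the host learns while preserving matchability.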
Approximately Truthful Multi-Agent Optimization Using Cloud-Enforced Joint Differential Privacy
Multi-agent coordination problems often require agents to exchange state information in order to reach some collective goal, such as agreement on a final state value. In some cases, opportunistic agents may deceptively report false state values for their own benefit, e.g., to claim a larger portion of shared resources. Motivated by such cases, this paper presents a multi-agent coordination framework which disincentivizes opportunistic misreporting of state information. This paper focuses on multi-agent coordination problems that can be stated as nonlinear programs, with non-separable constraints coupling the agents. In this setting, an opportunistic agent may be tempted to skew the problem's constraints in its favor to reduce its local cost, and this is exactly the behavior we seek to disincentivize. The framework presented uses a primal-dual approach wherein the agents compute primal updates and a centralized cloud computer computes dual updates. All computations performed by the cloud are carried out in a way that enforces joint differential privacy, which adds noise in order to dilute any agent's influence upon the value of its cost function in the problem. We show that this dilution deters agents from intentionally misreporting their states to the cloud, and present bounds on the possible cost reduction an agent can attain through misreporting its state. This work extends our earlier work on incorporating ordinary differential privacy into multi-agent optimization, and we show that this work can be modified to provide a disincentive for misreporting states to the cloud. Numerical results are presented to demonstrate convergence of the optimization algorithm under joint differential privacy.
Comment: 17 pages, 3 figures
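The primal-dual architecture described in the abstract can be sketched on a toy problem: agents take primal gradient steps locally while a "cloud" performs the dual update with noise added. The problem, step sizes, and noise distribution below are illustrative choices, not the paper's construction or its differential-privacy calibration.

```python
import random

# Toy primal-dual sketch loosely illustrating the cloud-enforced scheme:
# agents do primal updates, the cloud does noisy dual updates. All numbers
# here are illustrative assumptions, not the paper's parameters.

random.seed(0)

# Two agents each minimize (x_i - a_i)^2, coupled by x_1 + x_2 <= c.
a, c = [3.0, 4.0], 5.0
x = [0.0, 0.0]
lam = 0.0                       # dual variable, held by the cloud
alpha, beta, b = 0.1, 0.05, 0.01  # primal step, dual step, noise scale

def laplace(scale):
    # Two-sided exponential sample (Laplace noise), the kind of noise
    # typically added by differential-privacy mechanisms.
    return random.expovariate(1.0 / scale) * random.choice([-1.0, 1.0])

for _ in range(500):
    # Agents: gradient steps on their local Lagrangians, given the
    # dual variable broadcast by the cloud.
    for i in range(2):
        grad = 2.0 * (x[i] - a[i]) + lam
        x[i] -= alpha * grad
    # Cloud: projected dual ascent on the coupling constraint, with noise
    # added to dilute any single agent's influence on the update.
    violation = sum(x) - c + laplace(b)
    lam = max(0.0, lam + beta * violation)

# The noisy iterates hover near the constrained optimum (x1 + x2 close to c).
assert abs(sum(x) - c) < 0.2
```

The point of the noise is visible in the dual step: because the cloud's update is randomized, no single agent's reported state deterministically controls the dual variable, which is what blunts the payoff from misreporting.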
- …