A Data Protection Architecture for Derived Data Control in Partially Disconnected Networks
Every organisation needs to exchange and disseminate data constantly amongst its employees, members, customers
and partners. Disseminated data is often sensitive or confidential and access to it should be restricted to
authorised recipients. Several enterprise rights management (ERM) systems and data protection solutions have
been proposed by both academia and industry to enable usage control on disseminated data, i.e. to allow data
originators to retain control over who accesses their information, under which circumstances, and how it is
used. This is often obtained by means of cryptographic techniques and thus by disseminating encrypted data
that only trustworthy recipients can decrypt. Most of these solutions assume data recipients are connected to
the network and able to contact remote policy evaluation authorities that can evaluate usage control policies and
issue decryption keys. This assumption oversimplifies the problem by neglecting situations where connectivity
is not available, as often happens in crisis management scenarios. In such situations, recipients may not be
able to access the information they have received. Also, while using data, recipients and their applications can
create new derived information, either by aggregating data from several sources or transforming the original
data's content or format. Existing solutions mostly neglect this problem and do not allow originators to retain
control over this derived data despite the fact that it may be more sensitive or valuable than the data originally
disseminated.
In this thesis we propose an ERM architecture that caters for both derived data control and usage control in
partially disconnected networks. We propose the use of a novel policy lattice model based on information flow
and mandatory access control. Sets of policies controlling the usage of data can be specified and ordered in a
lattice according to the level of protection they provide. At the same time, their association with specific data
objects is mandated by rules (content verification procedures) defined in a data sharing agreement (DSA) stipulated
amongst the organisations sharing information. When data is transformed, the new policies associated
with it are automatically determined depending on the transformation used and the policies currently associated
with the input data. The solution we propose takes into account transformations that can either increase or reduce
the sensitivity of information, thus giving originators a flexible means to control their data and its derivations.
When data must be disseminated in disconnected environments, the movement of users and the ad hoc connections they establish can be exploited to distribute information. To allow users to decrypt disseminated data
without contacting remote evaluation authorities, we integrate our architecture with a mechanism for authority
devolution, so that users moving in the disconnected area can be granted the right to evaluate policies and issue
decryption keys. This allows recipients to contact any nearby user that is also a policy evaluation authority to
obtain decryption keys. The mechanism has been shown to be efficient enough that timely access to data is possible
despite the lack of connectivity. Prototypes of the proposed solutions that protect XML documents have been
developed. A realistic crisis management scenario has been used to show both the flexibility of the presented
approach for derived data control and the efficiency of the authority devolution solution when handling data
dissemination in simulated partially disconnected networks.
While existing systems offer no means to control derived data and only partial solutions to the problem of
lack of connectivity (e.g. by caching decryption keys), we have defined a set of solutions that let data
originators overcome the shortcomings of current proposals and control their data in innovative,
problem-oriented ways.
Formally Verified Bundling and Appraisal of Evidence for Layered Attestations
Remote attestation is a technology for establishing trust in a remote computing system. Core to the integrity of the attestation mechanisms themselves are components that orchestrate, cryptographically bundle, and appraise measurements of the target system. Copland is a domain-specific language for specifying attestation protocols that operate in diverse, layered measurement topologies. In this work we formally define and verify the Copland Virtual Machine alongside a dual generalized appraisal procedure. Together these components provide a principled pipeline to execute and bundle arbitrary Copland-based attestations, then unbundle and evaluate the resulting evidence for measurement content and cryptographic integrity. All artifacts are implemented as monadic, functional programs in the Coq proof assistant and verified with respect to a Copland reference semantics that characterizes attestation-relevant event traces and cryptographic evidence structure. Appraisal soundness is positioned within a novel end-to-end workflow that leverages formal properties of the attestation components to discharge assumptions about honest Copland participants. These assumptions inform an existing model-finder tool that analyzes a Copland scenario in the context of an active adversary attempting to subvert attestation. An initial case study exercises this workflow through the iterative design and analysis of a Copland protocol and accompanying security architecture for an Unpiloted Air Vehicle demonstration platform. We conclude by instantiating a more diverse benchmark of attestation patterns called the "Flexible Mechanisms for Remote Attestation", leveraging Coq's built-in code synthesis to integrate the formal artifacts within an executable attestation environment.
Recent trends in applying TPM to cloud computing
Trusted platform modules (TPM) have become important safeguards against
a variety of software-based attacks. By providing a limited set of
cryptographic services through a well-defined interface, separated from
the software itself, the TPM can serve as a root of trust and as a building
block for higher-level security measures. This article surveys the
literature for applications of TPM in the cloud-computing environment,
with publication dates between 2013 and 2018. It identifies
the current trends and objectives of this technology in the cloud, and
the types of threats that it mitigates. Toward the end, the main research
gaps are pinpointed and discussed. Since integrity measurement is one
of the main uses of TPM, special attention is paid to the run-time
phases and software layers to which it is applied.