CamFlow: Managed Data-sharing for Cloud Services
A model of cloud services is emerging whereby a few trusted providers manage
the underlying hardware and communications whereas many companies build on this
infrastructure to offer higher level, cloud-hosted PaaS services and/or SaaS
applications. From the start, strong isolation between cloud tenants was seen
to be of paramount importance, provided first by virtual machines (VM) and
later by containers, which share the operating system (OS) kernel. Increasingly,
applications themselves require facilities to isolate and protect the data they
manage, as well as to share data flexibly with other applications, often across
traditional cloud-isolation boundaries; for example, when a government provides
many related services for its citizens on a common platform. Similar
considerations apply to the end-users of applications. In particular, the
incorporation of cloud services within 'Internet of Things' architectures is
driving the requirements for both protection and cross-application data sharing.
These concerns relate to the management of data. Traditional access control is
application- and principal/role-specific and is applied at policy enforcement
points, after which there is no subsequent control over where data flows; this
is a crucial issue once data has left its owner's control, for example when it
is processed by cloud-hosted applications or passed between cloud services.
Information Flow Control (IFC), in addition, offers system-wide, end-to-end
flow control based on the properties of the data. We discuss the potential of
cloud-deployed IFC for enforcing owners' dataflow policy with regard to
protection and sharing, as well as for safeguarding against malicious or buggy
software. In addition, the audit log associated with IFC provides transparency,
giving configurable system-wide visibility over data flows. [...]
Comment: 14 pages, 8 figures
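The core IFC rule the abstract alludes to can be sketched in a few lines. The
following is a minimal, illustrative model of label-based flow checking, with
invented entity and tag names; it shows the classic secrecy/integrity lattice
check, not CamFlow's actual kernel implementation or API.

```python
# Minimal sketch of Information Flow Control (IFC) label checking.
# Hypothetical model: each entity carries a secrecy label and an
# integrity label (sets of tags); a flow src -> dst is permitted iff
# secrecy(src) <= secrecy(dst) and integrity(dst) <= integrity(src).
# All names below are illustrative, not CamFlow's API.

from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    secrecy: frozenset = frozenset()
    integrity: frozenset = frozenset()

def flow_allowed(src: Entity, dst: Entity) -> bool:
    """A flow is safe if it neither leaks secrets nor launders integrity."""
    return src.secrecy <= dst.secrecy and dst.integrity <= src.integrity

medical_db = Entity("medical-db", secrecy=frozenset({"medical"}))
analytics  = Entity("analytics",  secrecy=frozenset({"medical", "research"}))
public_log = Entity("public-log")

print(flow_allowed(medical_db, analytics))   # label is preserved: allowed
print(flow_allowed(medical_db, public_log))  # would leak 'medical': denied
```

An audit log, as described above, would simply record every `flow_allowed`
decision together with the labels involved.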
Checking and Enforcing Security through Opacity in Healthcare Applications
The Internet of Things (IoT) is a paradigm that can tremendously revolutionize
health care, benefiting hospitals, doctors, and patients alike. In this
context, protecting the IoT in health care against interference, including
service attacks and malware, is challenging. Opacity is a confidentiality
property capturing a system's ability to keep a subset of its behavior hidden
from passive observers. In this work, we introduce an IoT-based heart attack
detection system that could be life-saving for patients without compromising
their privacy, through the verification and enforcement of opacity. Our main
contribution is the use of a tool to verify opacity in three of its forms, so
as to detect privacy leaks in our system. Furthermore, we develop an efficient,
Symbolic Observation Graph (SOG)-based algorithm for enforcing opacity.
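To make the opacity property concrete, here is a toy, brute-force check of
current-state opacity on a small labeled transition system. The transition
system, its events, and the "secret" state are all invented for illustration;
the paper's SOG-based algorithm works symbolically rather than by enumerating
runs as this sketch does.

```python
# Toy check of current-state opacity on a small labeled transition
# system. Events starting with 'u' are unobservable to a passive
# attacker. Illustrative only: states, events, and the secret are
# hypothetical, and real verifiers work symbolically, not by brute force.

TRANS = {
    0: [("u_measure", 1), ("ping", 0)],
    1: [("alert", 2), ("u_idle", 0)],
    2: [("ping", 2)],
}
SECRET_STATES = {1}   # hypothetical secret: 'patient is being measured'

def runs(state, depth):
    """Yield (state_sequence, event_sequence) for all runs of length <= depth."""
    yield (state,), ()
    if depth == 0:
        return
    for ev, nxt in TRANS.get(state, []):
        for states, events in runs(nxt, depth - 1):
            yield (state,) + states, (ev,) + events

def observe(events):
    """Project a run onto its observable events."""
    return tuple(e for e in events if not e.startswith("u"))

def current_state_opaque(depth=4):
    secret_obs, cover_obs = set(), set()
    for states, events in runs(0, depth):
        obs = observe(events)
        (secret_obs if states[-1] in SECRET_STATES else cover_obs).add(obs)
    # Opaque iff every observation that can end in a secret state is also
    # produced by some run ending in a non-secret state.
    return secret_obs <= cover_obs
```

Here the system is opaque because the only transition into the secret state is
unobservable, so every secret-revealing observation is also explained by a run
that stays in non-secret states.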
Oblivion: Mitigating Privacy Leaks by Controlling the Discoverability of Online Information
Search engines are the prevalently used tools to collect information about
individuals on the Internet. Search results typically comprise a variety of
sources that contain personal information -- either intentionally released by
the person herself, or unintentionally leaked or published by third parties,
often with detrimental effects on the individual's privacy. To grant
individuals the ability to regain control over their disseminated personal
information, the European Court of Justice recently ruled that EU citizens have
a right to be forgotten, in the sense that indexing systems must offer them
technical means to request removal of links from search results that point to
sources violating their data protection rights. As of now, these technical
means consist of a web form that requires a user to manually identify all
relevant links upfront and insert them into the web form, followed by a manual
evaluation by employees of the indexing system to assess whether the request is
eligible and lawful.
We propose a universal framework Oblivion to support the automation of the
right to be forgotten in a scalable, provable and privacy-preserving manner.
First, Oblivion enables a user to automatically find and tag her disseminated
personal information using natural language processing and image recognition
techniques and file a request in a privacy-preserving manner. Second, Oblivion
provides indexing systems with an automated and provable eligibility mechanism,
asserting that the author of a request is indeed affected by an online
resource. The automated eligibility proof ensures censorship-resistance, so
that only legitimately affected individuals can request the removal of the
corresponding links from search results. We have conducted comprehensive
evaluations, showing that Oblivion is capable of handling 278 removal requests
per second and is hence suitable for large-scale deployment.
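The "find and tag" step described above can be illustrated with a deliberately
crude sketch: scan candidate pages for a user's identifying attributes and
collect only the links that actually expose them. Oblivion itself uses natural
language processing and image recognition for this; the regex matcher, user
attributes, and URLs below are all invented stand-ins for the workflow.

```python
# Crude sketch of the 'find and tag' step: scan candidate pages for a
# user's identifying attributes and build a removal request from the
# links that expose them. All names, patterns, and URLs are invented;
# Oblivion uses NLP and image recognition rather than regexes.

import re

USER_ATTRIBUTES = {
    "name": r"\bJane\s+Doe\b",
    "email": r"\bjane\.doe@example\.org\b",
}

def tag_page(text: str) -> set:
    """Return which of the user's attributes appear on the page."""
    return {attr for attr, pat in USER_ATTRIBUTES.items()
            if re.search(pat, text, flags=re.IGNORECASE)}

pages = {
    "https://example.org/forum/42": "Contact Jane Doe at jane.doe@example.org",
    "https://example.org/news/weather": "Sunny tomorrow.",
}

# A removal request lists only the links that actually expose the user.
request = {url for url, text in pages.items() if tag_page(text)}
```

The eligibility mechanism would then prove, to the indexing system, that the
requester really is the person these attributes identify before any link is
removed.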
Expressing and enforcing user-defined constraints of AADL models
The Architecture Analysis and Design Language (AADL) allows one to model complete systems, but also to define specific extensions through property sets and libraries of models. Yet it does not define an explicit mechanism to enforce semantics or consistency checks ensuring that property sets are correctly used. In this paper, we present REAL (Requirements and Enforcements Analysis Language) as an integrated solution to this issue. REAL is defined as an AADL annex language. It adds the possibility to express constraints as theorems based on set theory, to enforce the implicit semantics of property sets or AADL models. We illustrate the use of the language on case studies developed with industrial partners.
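The flavor of a REAL-style set-theoretic theorem can be conveyed with a small
sketch: quantify over sets of model elements and check a property binding. The
model dictionary, element names, and the theorem below are hypothetical
illustrations rephrased in plain Python, not REAL's actual annex syntax.

```python
# Sketch of a REAL-style consistency theorem over an (invented) AADL-like
# model, rephrased with plain Python sets. REAL theorems quantify over
# sets of model elements; element and property names here are illustrative.

model = {
    "threads":    {"sensor_thread", "control_thread"},
    "processors": {"cpu0"},
    # property association: which processor each thread is bound to
    "Actual_Processor_Binding": {"sensor_thread": "cpu0",
                                 "control_thread": "cpu0"},
}

def check_all_threads_bound(m) -> bool:
    """Theorem: for all t in threads, t is bound to a declared processor."""
    binding = m["Actual_Processor_Binding"]
    return all(t in binding and binding[t] in m["processors"]
               for t in m["threads"])
```

A model that declares a thread without a processor binding would fail this
theorem, which is exactly the kind of implicit semantics the abstract says AADL
alone cannot enforce.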
Calculation of internal and scattered fields of axisymmetric nanoparticles at any point in space
We present a method of simultaneously calculating both the internal and external fields of arbitrarily shaped dielectric and metallic axisymmetric nanoparticles. By using a set of distributed spherical vector wavefunctions that are exact solutions to Maxwell's equations and that form a complete, linearly independent set on the particle surface, we approximate the surface Green functions of the particles. In this way we can enforce the boundary conditions at the interface and represent the electromagnetic fields at the surface to arbitrary precision. With the boundary conditions at the particle surface satisfied, the electromagnetic fields are uniquely determined at any point in space, whether internal or external to the particle. Furthermore, the residual field error at the particle surface can be shown to give an upper bound on the error of the field solutions at any point in space. We demonstrate the accuracy of this method in two important areas studied widely in the literature: photonic nanojets and the internal field structure of nanoparticles.
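The boundary-matching step described above is, at its core, a linear
least-squares fit: expand the surface field in a finite basis and minimize the
residual on the surface, which then bounds the error everywhere. The sketch
below uses a toy cosine basis and invented boundary data in place of true
vector spherical wavefunctions, purely to show the structure of the fit.

```python
# Schematic of the boundary-matching step: expand the surface field in a
# finite basis and solve a least-squares system; the surface residual
# bounds the field error. Toy cosine basis and invented boundary data
# stand in for the paper's distributed spherical vector wavefunctions.

import numpy as np

theta = np.linspace(0.0, np.pi, 200)              # surface sample points
target = np.cos(theta) + 0.3 * np.cos(3 * theta)  # 'boundary condition' data

# Columns = basis functions evaluated on the surface samples.
A = np.stack([np.cos(n * theta) for n in range(6)], axis=1)

coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)
residual = np.max(np.abs(A @ coeffs - target))    # surface error bound
```

Because the toy boundary data lies in the span of the basis, the residual here
is essentially zero; for real particle shapes, enlarging the basis drives the
surface residual, and hence the field error bound, down to any desired level.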
Gradient-based Inference for Networks with Output Constraints
Practitioners apply neural networks to increasingly complex problems in
natural language processing, such as syntactic parsing and semantic role
labeling that have rich output structures. Many such structured-prediction
problems require deterministic constraints on the output values; for example,
in sequence-to-sequence syntactic parsing, we require that the sequential
outputs encode valid trees. While hidden units might capture such properties,
the network is not always able to learn such constraints from the training data
alone, and practitioners must then resort to post-processing. In this paper, we
present an inference method for neural networks that enforces deterministic
constraints on outputs without performing rule-based post-processing or
expensive discrete search. Instead, in the spirit of gradient-based training,
we enforce constraints with gradient-based inference (GBI): for each input at
test-time, we nudge continuous model weights until the network's unconstrained
inference procedure generates an output that satisfies the constraints. We
study the efficacy of GBI on three tasks with hard constraints: semantic role
labeling, syntactic parsing, and sequence transduction. In each case, the
algorithm not only satisfies constraints but improves accuracy, even when the
underlying network is state-of-the-art.
Comment: AAAI 201
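The test-time "nudging" that GBI performs can be shown with a deliberately tiny
example. The network below is a single random linear layer and the hard
constraint is simply that the outputs sum to one; both are invented stand-ins
for the structured NLP outputs and tree-validity constraints the paper targets.

```python
# Toy gradient-based inference (GBI): at test time, nudge the weights of
# a tiny linear 'network' until its output satisfies a hard constraint
# (here: outputs must sum to 1). Network, input, and constraint are
# invented; the paper applies this idea to structured NLP outputs.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))               # 'pretrained' weights (random stand-in)
x = np.array([1.0, -0.5, 0.2, 0.7])       # one test input

def violation(W):
    return np.sum(W @ x) - 1.0            # constraint: outputs sum to 1

lr = 0.01
for _ in range(500):
    v = violation(W)
    if abs(v) < 1e-6:
        break                             # constraint satisfied: stop nudging
    # Gradient of v**2 w.r.t. W is 2*v*outer(ones, x); descend on it.
    W -= lr * 2.0 * v * np.outer(np.ones(3), x)

y = W @ x                                 # constrained output for this input
```

The weight update is local to this one input and is discarded afterwards, which
is why the abstract describes GBI as an inference method rather than training.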