A Mediated Definite Delegation Model allowing for Certified Grid Job Submission
Grid computing infrastructures need to provide traceability and accounting of
their users' activity and protection against misuse and privilege escalation. A
central aspect of multi-user Grid job environments is the necessary delegation
of privileges in the course of a job submission. With respect to these generic
requirements this document describes an improved handling of multi-user Grid
jobs in the ALICE ("A Large Ion Collider Experiment") Grid Services. A security
analysis of the ALICE Grid job model is presented with derived security
objectives, followed by a discussion of existing approaches to unrestricted
delegation based on X.509 proxy certificates and the Grid middleware gLExec.
Unrestricted delegation has severe security consequences and limitations, most
importantly allowing for identity theft and forgery of delegated assignments.
These limitations are discussed and formulated, both in general and with
respect to an adoption in line with multi-user Grid jobs. Based on the
architecture of the ALICE Grid Services, a new general model of mediated
definite delegation is developed and formulated, allowing a broker to assign
context-sensitive user privileges to agents. The model provides strong
accountability and long-term traceability. A prototype implementation allowing
for certified Grid jobs is presented including a potential interaction with
gLExec. The achieved improvements regarding system security, malicious job
exploitation, identity protection, and accountability are emphasized, followed
by a discussion of non-repudiation in the face of malicious Grid jobs.
A Comparison of Big Data Frameworks on a Layered Dataflow Model
In the world of Big Data analytics, there is a series of tools aiming at
simplifying programming applications to be executed on clusters. Although each
tool claims to provide better programming, data and execution models, for which
only informal (and often confusing) semantics is generally provided, all share
a common underlying model, namely, the Dataflow model. The Dataflow model we
propose shows how various tools share the same expressiveness at different
levels of abstraction. The contribution of this work is twofold: first, we show
that the proposed model is (at least) as general as existing batch and
streaming frameworks (e.g., Spark, Flink, Storm), thus making it easier to
understand high-level data-processing applications written in such frameworks.
Second, we provide a layered model that can represent tools and applications
following the Dataflow paradigm and we show how the analyzed tools fit in each
level.
Comment: 19 pages, 6 figures, 2 tables. In Proc. of the 9th Intl Symposium on
High-Level Parallel Programming and Applications (HLPP), July 4-5 2016,
Muenster, Germany
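The abstract's central claim is that frameworks such as Spark, Flink, and Storm all reduce to one Dataflow model: a program is a graph of operators connected by data channels, independent of the engine that executes it. A minimal sketch of that idea, using word count as the classic example; all class and function names here are illustrative assumptions, not APIs from the paper or from any framework.

```python
from functools import reduce

class Op:
    """A dataflow operator: a function applied to the data on its input channel.
    (Illustrative name, not from the paper.)"""
    def __init__(self, fn):
        self.fn = fn

    def __call__(self, data):
        return self.fn(data)

def pipeline(*ops):
    """Compose operators into a linear dataflow graph and return its evaluator."""
    def run(data):
        for op in ops:
            data = op(data)
        return data
    return run

# Word count expressed as a dataflow graph: flat_map -> map -> reduce_by_key.
wordcount = pipeline(
    Op(lambda lines: (w for line in lines for w in line.split())),  # flat_map
    Op(lambda words: ((w, 1) for w in words)),                      # map
    Op(lambda pairs: reduce(                                        # reduce_by_key
        lambda acc, kv: {**acc, kv[0]: acc.get(kv[0], 0) + kv[1]},
        pairs, {})),
)

print(wordcount(["a b a", "b a"]))  # {'a': 3, 'b': 2}
```

The same three-operator graph could be handed to a batch engine (processing the whole input) or a streaming engine (processing elements as they arrive), which is the expressiveness-sharing point the abstract makes.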
Scalable Reliable SD Erlang Design
This technical report presents the design of Scalable Distributed (SD) Erlang: a set of language-level changes that aims to enable Distributed Erlang to scale for server applications on commodity hardware with up to 100,000 cores. We cover a number of aspects, specifically anticipated architecture, anticipated failures, scalable data structures, and scalable computation. Two other components that guided the design of SD Erlang are design principles and typical Erlang applications. The design principles summarise the kinds of modifications through which we aim to make Erlang scalable. Erlang exemplars help us to identify the main Erlang scalability issues and to hypothetically validate the SD Erlang design.
Contour: A Practical System for Binary Transparency
Transparency is crucial in security-critical applications that rely on
authoritative information, as it provides a robust mechanism for holding these
authorities accountable for their actions. A number of solutions have emerged
in recent years that provide transparency in the setting of certificate
issuance, and Bitcoin provides an example of how to enforce transparency in a
financial setting. In this work we shift to a new setting, the distribution of
software package binaries, and present a system for so-called "binary
transparency." Our solution, Contour, uses proactive methods for providing
transparency, privacy, and availability, even in the face of persistent
man-in-the-middle attacks. We also demonstrate, via benchmarks and a test
deployment for the Debian software repository, that Contour is the only system
for binary transparency that satisfies the efficiency and coordination
requirements that would make it possible to deploy today.
Comment: International Workshop on Cryptocurrencies and Blockchain Technology
(CBT), 201
Load-Balanced Fractional Repetition Codes
We introduce load-balanced fractional repetition (LBFR) codes, which are a
strengthening of fractional repetition (FR) codes. LBFR codes have the
additional property that multiple node failures can be sequentially repaired by
downloading no more than one block from any other node. This allows for better
use of the network, and can additionally reduce the number of disk reads
necessary to repair multiple nodes. We characterize LBFR codes in terms of
their adjacency graphs, and use this characterization to present explicit
constructions of LBFR codes with storage capacity comparable to that of existing FR codes.
Surprisingly, in some parameter regimes, our constructions of LBFR codes match
the parameters of the best constructions of FR codes.
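In an FR code with replication ρ = 2, blocks can be identified with the edges of a regular graph and nodes with its vertices, each node storing its incident edges; repairing a node then means fetching each lost block from the other endpoint of its edge. A toy sketch of this setup on the complete graph K4, with per-helper load tracked; this only illustrates the single-failure case, whereas the LBFR property in the abstract concerns sequential repair of multiple failures, and all names here are illustrative assumptions.

```python
from itertools import combinations

# FR code from K4 with replication 2: 6 blocks (edges), 4 nodes (vertices),
# each node stores its 3 incident edges, so each block lives on exactly 2 nodes.
nodes = range(4)
blocks = list(combinations(nodes, 2))
storage = {v: {b for b in blocks if v in b} for v in nodes}

def repair(failed, storage):
    """Rebuild the failed node's blocks, tracking how many blocks each helper sends."""
    load = {v: 0 for v in storage if v != failed}
    recovered = set()
    for block in storage[failed]:
        # The only other replica of an edge-block is its other endpoint.
        helper = next(v for v in storage if v != failed and block in storage[v])
        load[helper] += 1
        recovered.add(block)
    return recovered, load

recovered, load = repair(0, storage)
print(recovered == storage[0], max(load.values()))  # True 1
```

Every helper sends exactly one block, which is the load-balance condition the paper strengthens to hold across sequences of failures rather than a single one.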