How to bring together fault tolerance and data consistency to enable Grid data sharing
One of the predominant themes in the criminal justice literature is that prosecutors dominate the justice system. Over seventy-five years ago, Attorney General Robert Jackson famously proclaimed that the “prosecutor has more control over life, liberty, and reputation than any other person in America.” In one of the most cited law review articles of all time, Bill Stuntz added that prosecutors—not legislators, judges, or police—“are the criminal justice system’s real lawmakers.” And an unchallenged modern consensus holds that prosecutors “rule the criminal justice system.”
This Article applies a critical lens to longstanding claims of prosecutorial preeminence. It reveals a curious echo chamber enabled by a puzzling lack of dissent. With few voices challenging ever-more-strident prosecutor-dominance rhetoric, academic claims became uncritical, imprecise, and ultimately incorrect.
An unchallenged consensus that “prosecutors are the criminal justice system” and that the “institution of the prosecutor has more power than any other in the criminal justice system” has real consequences for criminal justice discourse. Portraying prosecutors as the system’s iron-fisted rulers obscures the complex interplay that actually determines criminal justice outcomes. The overheated rhetoric of prosecutorial preeminence fosters a superficial understanding of the criminal justice system, overlooks the powerful forces that can and do constrain prosecutors, and diverts attention from the most promising sources of reform (legislators, judges, and police) to the least (prosecutors).
Distributed Management of Massive Data: an Efficient Fine-Grain Data Access Scheme
This paper addresses the problem of efficiently storing and accessing massive
data blocks in a large-scale distributed environment, while providing efficient
fine-grain access to data subsets. This issue is crucial in the context of
applications in the field of databases, data mining and multimedia. We propose
a data sharing service based on distributed, RAM-based storage of data, while
leveraging a DHT-based, natively parallel metadata management scheme. As
opposed to the most commonly used grid storage infrastructures that provide
mechanisms for explicit data localization and transfer, we provide a
transparent access model, where data are accessed through global identifiers.
Our proposal has been validated through a prototype implementation whose
preliminary evaluation provides promising results.
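The transparent access model described above can be illustrated with a toy, single-process sketch: data is split into fixed-size chunks, chunks are hashed onto RAM-based "nodes", and a metadata map (standing in for the DHT) resolves a global identifier to chunk locations so reads can fetch only the subset they need. All names, the chunk size, and the placement rule are illustrative assumptions, not the paper's implementation:

```python
import hashlib

CHUNK = 4  # tiny chunk size so the example stays readable

class ToyDataService:
    def __init__(self, n_nodes=3):
        # "nodes" stand in for distributed RAM-based storage providers
        self.nodes = [dict() for _ in range(n_nodes)]
        self.meta = {}  # DHT stand-in: global id -> list of (node, key)

    def _place(self, key):
        # hash-based placement of a chunk key onto a node
        h = int(hashlib.sha1(key.encode()).hexdigest(), 16)
        return h % len(self.nodes)

    def put(self, gid, data):
        # split data into fixed-size chunks, scatter them, record metadata
        locs = []
        for i in range(0, len(data), CHUNK):
            key = f"{gid}:{i // CHUNK}"
            node = self._place(key)
            self.nodes[node][key] = data[i:i + CHUNK]
            locs.append((node, key))
        self.meta[gid] = locs

    def read(self, gid, offset, length):
        # fine-grain read: fetch only the chunks covering the requested range,
        # using the global identifier alone -- no explicit localization step
        locs = self.meta[gid]
        first, last = offset // CHUNK, (offset + length - 1) // CHUNK
        buf = b"".join(self.nodes[n][k] for n, k in locs[first:last + 1])
        start = offset - first * CHUNK
        return buf[start:start + length]

svc = ToyDataService()
svc.put("blob-1", b"abcdefghijklmnop")
print(svc.read("blob-1", 5, 6))  # b'fghijk'
```

The point of the sketch is the access path: clients name data by global identifier and byte range, and the metadata layer, not the client, decides which chunks and which nodes are involved.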
09191 Abstracts Collection -- Fault Tolerance in High-Performance Computing and Grids
From June 4-8, 2009, the Dagstuhl Seminar 09191 “Fault Tolerance in High-Performance Computing and Grids” was held
in Schloss Dagstuhl - Leibniz Center for Informatics.
During the seminar, several participants presented their current
research, and ongoing work and open problems were discussed. Abstracts of
the presentations given during the seminar as well as abstracts of
seminar results and ideas are put together in this paper. The first section
describes the seminar topics and goals in general.
Links to extended abstracts or full papers are provided, if available.
Slides of
the talks and abstracts are available online at http://www.dagstuhl.de/Materials/index.en.phtml?09191
A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing
Data Grids have been adopted as the platform for scientific communities that
need to share, access, transport, process and manage large data collections
distributed worldwide. They combine high-end computing technologies with
high-performance networking and wide-area storage management techniques. In
this paper, we discuss the key concepts behind Data Grids and compare them with
other data sharing and distribution paradigms such as content delivery
networks, peer-to-peer networks and distributed databases. We then provide
comprehensive taxonomies that cover various aspects of architecture, data
transportation, data replication and resource allocation and scheduling.
Finally, we map the proposed taxonomy to various Data Grid systems not only to
validate the taxonomy but also to identify areas for future exploration.
Through this taxonomy, we aim to categorise existing systems to better
understand their goals and their methodology. This would help evaluate their
applicability for solving similar problems. This taxonomy also provides a "gap
analysis" of this area through which researchers can potentially identify new
issues for investigation. Finally, we hope that the proposed taxonomy and
mapping also helps to provide an easy way for new practitioners to understand
this complex area of research. Comment: 46 pages, 16 figures, Technical Report
Developing a distributed electronic health-record store for India
The DIGHT project is addressing the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of the over one billion citizens of India.
DataWarp: Building Applications which Make Progress in an Inconsistent World
The usual approach to dealing with imperfections in data is to attempt to eliminate them. However, the nature of modern systems means this is often futile. This paper describes an approach which permits applications to operate notwithstanding inconsistent data. Instead of attempting to extract a single, correct view of the world from its data, a DataWarp application constructs a collection of interpretations. It adopts one of these and continues work. Since it acts on assumptions, the DataWarp application considers its recent work to be provisional, expecting that eventually most of these actions will become definitive. Should the application decide to adopt an alternative data view, it may then need to void provisional actions before resuming work. We describe the DataWarp architecture, discuss its implementation, and describe an experiment in which a DataWarp application in an environment containing inconsistent data achieves better results than its conventional counterpart.
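The provisional-action idea in the abstract above can be sketched minimally: the application acts under one interpretation of the data, keeps an undo for each provisional action, and either confirms the work or voids it when it switches to an alternative view. The class and method names here are illustrative assumptions, not DataWarp's real API:

```python
class ProvisionalApp:
    def __init__(self, interpretations):
        # each interpretation is one self-consistent view of the data
        self.interpretations = interpretations
        self.current = 0           # index of the adopted interpretation
        self.provisional = []      # (action, undo) pairs awaiting confirmation

    def act(self, action, undo):
        # perform an action under the current view; remember how to void it
        action()
        self.provisional.append((action, undo))

    def confirm_all(self):
        # the common case: the assumptions held, so provisional work
        # becomes definitive and the undo log is discarded
        self.provisional.clear()

    def switch_view(self, new_index):
        # the adopted interpretation proved wrong: void provisional actions
        # in reverse order, then continue under the alternative view
        for _, undo in reversed(self.provisional):
            undo()
        self.provisional.clear()
        self.current = new_index

ledger = []
app = ProvisionalApp(["stock=5", "stock=3"])
app.act(lambda: ledger.append("ship 4 units"),
        lambda: ledger.append("cancel shipment"))
app.switch_view(1)   # the data view changes; the shipment is voided
print(ledger)        # ['ship 4 units', 'cancel shipment']
```

The design choice this illustrates is that progress is never blocked waiting for consistency: work proceeds immediately, and the cost of a wrong interpretation is the (hopefully rare) compensation pass rather than up-front reconciliation.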