Derandomized Construction of Combinatorial Batch Codes
Combinatorial Batch Codes (CBCs), a replication-based variant of batch codes
introduced by Ishai et al. in STOC 2004, abstract the following data
distribution problem: n data items are to be replicated among m servers in
such a way that any k of the data items can be retrieved by reading at
most one item from each server, with the total amount of storage over the
m servers restricted to N. Given parameters m, c, and k, where c and k
are constants, one of the challenging problems is to construct c-uniform CBCs
(CBCs in which each data item is replicated among exactly c servers) that
maximize the value of n. In this work, we present an explicit construction of
c-uniform CBCs. The
construction has the property that the servers are almost regular, i.e., the
number of data items stored in each server lies in a narrow range around the
average load. The construction is obtained through a sharper analysis and
derandomization of the randomized construction presented by Ishai et al. The
analysis reveals the almost-regularity of the servers, an aspect that has so
far not been addressed in the literature. The derandomization leads to an
explicit construction for a wide range of values of n (for given m and c)
where no other explicit construction with similar parameters is known.
Finally, we discuss the possibility of a parallel derandomization of the
construction.
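The defining retrieval property can be checked directly on small instances: a placement is a batch code exactly when every k-subset of items admits a perfect matching to distinct servers in the bipartite item-server graph. A minimal sketch (the function names and the toy placement are illustrative assumptions, not the paper's construction):

```python
from itertools import combinations

def can_retrieve(items, placement):
    """Check that every item in `items` can be read from a distinct
    server, i.e. a perfect matching of items to servers exists in the
    bipartite graph given by `placement` (item -> set of servers)."""
    def augment(item, visited, match):
        # Standard augmenting-path search for bipartite matching.
        for server in placement[item]:
            if server in visited:
                continue
            visited.add(server)
            if server not in match or augment(match[server], visited, match):
                match[server] = item
                return True
        return False

    match = {}
    return all(augment(item, set(), match) for item in items)

def is_cbc(n, k, placement):
    """Brute-force check of the batch-code property: any k of the
    n items are retrievable reading at most one item per server."""
    return all(can_retrieve(batch, placement)
               for batch in combinations(range(n), k))

# A 2-uniform placement of n = 3 items on m = 3 servers: any k = 2
# items can be served by distinct servers.
placement = {0: {0, 1}, 1: {1, 2}, 2: {0, 2}}
print(is_cbc(3, 2, placement))   # True
```

The brute-force check is exponential in k, which is exactly why explicit constructions with provable guarantees, rather than verification, are the object of interest.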
Extended Combinatorial Constructions for Peer-to-peer User-Private Information Retrieval
We consider user-private information retrieval (UPIR), an interesting
alternative to private information retrieval (PIR) introduced by Domingo-Ferrer
et al. In UPIR, the database knows which records have been retrieved, but does
not know the identity of the query issuer. The goal of UPIR is to disguise user
profiles from the database. Domingo-Ferrer et al. focus on using a
peer-to-peer community to construct a UPIR scheme, which we term P2P UPIR. In
this paper, we establish a strengthened model for P2P UPIR and clarify the
privacy goals of such schemes using standard terminology from the field of
privacy research. In particular, we argue that any solution providing privacy
against the database should attempt to minimize any corresponding loss of
privacy against other users. We give an analysis of existing schemes, including
a new attack by the database. Finally, we introduce and analyze two new
protocols. Whereas previous work focuses on a special type of combinatorial
design known as a configuration, our protocols make use of more general
designs. This allows for flexibility in protocol set-up, allowing for a choice
between having a dynamic scheme (in which users are permitted to enter and
leave the system), or providing increased privacy against other users.

Comment: Updated version, which reflects reviewer comments and includes
expanded explanations throughout. Paper is accepted for publication by
Advances in Mathematics of Communications.
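The core P2P UPIR idea, routing a query through a peer that shares a communication space with the issuer so that the database cannot link the query to its real source, can be sketched in a few lines. This is a toy illustration of the general mechanism, not of the paper's protocols; the block structure and function names are assumptions:

```python
import random

# Users 0..6 grouped into "communication spaces" (blocks of a
# combinatorial design); peers in a common space share a key and
# can forward queries for one another.
BLOCKS = [{0, 1, 2}, {0, 3, 4}, {0, 5, 6},
          {1, 3, 5}, {1, 4, 6}, {2, 3, 6}, {2, 4, 5}]

def choose_proxy(user, blocks):
    """Pick a random peer from a random space containing `user`;
    the peer submits the query, so the database sees the proxy,
    not the real issuer."""
    space = random.choice([b for b in blocks if user in b])
    return random.choice([peer for peer in space if peer != user])

proxy = choose_proxy(0, BLOCKS)
print(proxy)  # some peer sharing a space with user 0, never 0 itself
```

The choice of design governs the trade-off the abstract describes: who can proxy for whom determines both the privacy against the database and how much each peer learns about the others' query profiles.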
A unified approach to combinatorial key predistribution schemes for sensor networks
There have been numerous recent proposals for key predistribution schemes for wireless sensor networks based on various types of combinatorial structures such as designs and codes. Many of these schemes have very similar properties and are analysed in a similar manner. We seek to provide a unified framework to study these kinds of schemes. To do so, we define a new, general class of designs, termed “partially balanced t-designs”, that is sufficiently general that it encompasses almost all of the designs that have been proposed for combinatorial key predistribution schemes. However, this new class of designs still has sufficient structure that we are able to derive general formulas for the metrics of the resulting key predistribution schemes. These metrics can be evaluated for a particular scheme simply by substituting appropriate parameters of the underlying combinatorial structure into our general formulas. We also compare various classes of schemes based on different designs, and point out that some existing proposed schemes are in fact identical, even though their descriptions may seem different. We believe that our general framework should facilitate the analysis of proposals for combinatorial key predistribution schemes and their comparison with existing schemes, and also allow researchers to easily evaluate which scheme or schemes present the best combination of performance metrics for a given application scenario.
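The kind of metric such formulas capture can be illustrated on a toy design. A sketch, assuming the usual setup in which each node is preloaded with the keys of one block of a design (the Fano-plane example and function names are mine, not the paper's):

```python
from itertools import combinations

# Blocks of the Fano plane (projective plane of order 2): each node
# is preloaded with the three keys of one block.
FANO = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7},
        {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]

def connectivity(blocks):
    """Fraction of node pairs sharing at least one key, i.e. able
    to establish a direct secure link."""
    pairs = list(combinations(blocks, 2))
    return sum(1 for a, b in pairs if a & b) / len(pairs)

def fail1(blocks):
    """Resilience metric fail(1): probability that the shared key
    of a random link is also held by one further captured node."""
    total = hits = 0
    for a, b in combinations(blocks, 2):
        shared = a & b
        for c in blocks:
            if c is a or c is b:
                continue
            total += 1
            if shared & c:
                hits += 1
    return hits / total

print(connectivity(FANO))  # 1.0: any two lines of a projective plane meet
```

For the Fano plane, every pair of blocks meets in exactly one point, so connectivity is 1; the unifying framework in the abstract replaces such case-by-case counting with general formulas parameterised by the design.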
First Evaluation of the CPU, GPGPU and MIC Architectures for Real Time Particle Tracking based on Hough Transform at the LHC
Recent innovations focused around {\em parallel} processing, either through
systems containing multiple processors or processors containing multiple cores,
hold great promise for enhancing the performance of the trigger at the LHC and
extending its physics program. The flexibility of the CMS/ATLAS trigger system
allows for easy integration of computational accelerators, such as NVIDIA's
Tesla Graphics Processing Unit (GPU) or Intel's Xeon Phi, in the High Level
Trigger. These accelerators have the potential to provide faster or more energy
efficient event selection, thus opening up possibilities for new complex
triggers that were not previously feasible. At the same time, it is crucial to
explore the performance limits achievable on the latest generation multicore
CPUs with the use of the best software optimization methods. In this article, a
new tracking algorithm based on the Hough transform will be evaluated for the
first time on a multi-core Intel Xeon E5-2697v2 CPU, an NVIDIA Tesla K20c GPU,
and an Intel Xeon Phi 7120 coprocessor. Preliminary time performance will be
presented.

Comment: 13 pages, 4 figures, Accepted to JINST
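The principle behind Hough-transform tracking, voting each hit into an accumulator over the track parameters and reading off peaks, can be sketched with a generic line-finding toy (this is an illustration of the transform itself, not the article's tracking algorithm; the binning parameters are arbitrary):

```python
import math

def hough_lines(points, n_theta=180, rho_res=1.0, rho_max=100.0):
    """Vote each hit (x, y) into an accumulator over the line
    parameters (theta, rho), with rho = x*cos(theta) + y*sin(theta),
    and return the strongest bin as (votes, theta, rho)."""
    n_rho = int(2 * rho_max / rho_res)
    acc = [[0] * n_rho for _ in range(n_theta)]
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            r = int((rho + rho_max) / rho_res)
            if 0 <= r < n_rho:
                acc[t][r] += 1
    votes, t, r = max((acc[t][r], t, r)
                      for t in range(n_theta) for r in range(n_rho))
    return votes, math.pi * t / n_theta, r * rho_res - rho_max

# Ten collinear hits on the line y = x all vote into one bin.
hits = [(i, i) for i in range(10)]
votes, theta, rho = hough_lines(hits)
print(votes)  # 10
```

The per-hit, per-bin voting loop is embarrassingly parallel, which is why the accumulator fill maps naturally onto the GPU and many-core architectures the article benchmarks.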