SoK: Cryptographically Protected Database Search
Protected database search systems cryptographically isolate the roles of
reading from, writing to, and administering the database. This separation
limits unnecessary administrator access and protects data in the case of system
breaches. Since protected search was introduced in 2000, the area has grown
rapidly; systems are offered by academia, start-ups, and established companies.
However, there is no best protected search system or set of techniques.
Design of such systems is a balancing act between security, functionality,
performance, and usability. This challenge is made more difficult by ongoing
database specialization, as some users will want the functionality of SQL,
NoSQL, or NewSQL databases. This database evolution will continue, and the
protected search community should be able to quickly provide functionality
consistent with newly invented databases.
At the same time, the community must accurately and clearly characterize the
tradeoffs between different approaches. To address these challenges, we provide
the following contributions:
1) An identification of the important primitive operations across database
paradigms. We find there are a small number of base operations that can be used
and combined to support a large number of database paradigms.
2) An evaluation of the current state of protected search systems in
implementing these base operations. This evaluation describes the main
approaches and tradeoffs for each base operation. Furthermore, it puts
protected search in the context of unprotected search, identifying key gaps in
functionality.
3) An analysis of attacks against protected search for different base
queries.
4) A roadmap and tools for transforming a protected search system into a
protected database, including an open-source performance evaluation platform
and initial user opinions of protected search. Comment: 20 pages, to appear in IEEE Security and Privacy.
Investigation into Indexing XML Data Techniques
The rapid development of XML technology has improved the World Wide Web, since XML data has many advantages and has become a common format for transferring data across the internet. The objective of this research is therefore to investigate and study XML indexing techniques in terms of their structures. The main goal of this investigation is to identify the principal limitations of these techniques and any other open issues.
Furthermore, this research considers the most common XML indexing techniques, compares them, and on that basis identifies their limitations. To conclude, the main problem shared by all XML indexing techniques is the trade-off between the
size and the efficiency of the indexes: the indexes must grow large in order to perform well, and none of them is suitable for all users' requirements. However, each of these techniques has its own advantages in certain respects.
Clumping towards a UK National catalogue?
This article presents a clumps-oriented perspective on the idea of a UK national catalogue for HE, arguing that a distributed approach based on Z39.50 has a number of attractive features when compared with the alternative physical union catalogue model, but also noting that the many difficulties currently associated with the distributed approach must be resolved before it can itself be regarded as a practical proposition. It is suggested that the distributed model is sufficiently attractive compared to the physical union model to make the expenditure of additional time, effort and resource worthwhile. 'Dynamic clumping' based on collection level description and other appropriate metadata is seen as the key to user navigation in a distributed national catalogue. Large physical union catalogues like COPAC are assumed to have a role, although updating difficulties and the lack of circulation information may limit their scope.
A secure data outsourcing scheme based on Asmuth–Bloom secret sharing
Data outsourcing is an emerging paradigm for data management in which a database is provided as a service by third-party service providers. One of the major benefits of offering database as a service is to provide organisations, which are unable to purchase expensive hardware and software to host their databases, with efficient data storage accessible online at a cheap rate. Nevertheless, several issues of data confidentiality, integrity, availability and efficient indexing of users' queries at the server side have to be addressed in the data outsourcing paradigm. Service providers have to guarantee that their clients' data are secured against internal (insider) and external attacks. This paper briefly analyses the existing indexing schemes in data outsourcing and highlights their advantages and disadvantages. Then, this paper proposes a secure data outsourcing scheme based on Asmuth–Bloom secret sharing which tries to address the issues in data outsourcing such as data confidentiality, availability and order preservation for efficient indexing.
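The abstract above does not give the scheme's details, but the underlying Asmuth–Bloom construction is standard: the secret is lifted by a random multiple of a small modulus, shares are residues modulo pairwise-coprime moduli, and any k shares reconstruct via the Chinese Remainder Theorem. The following is a minimal illustrative sketch (tiny toy moduli, not the paper's protocol or parameter sizes):

```python
# Sketch of (k, n) Asmuth–Bloom threshold secret sharing (illustrative, k >= 2).
# Any k shares pin down the lifted value y by CRT; fewer than k leave the
# secret hidden (up to the statistical quality of the chosen moduli).
from math import prod
import secrets

def make_shares(secret, k, moduli):
    """moduli = [m0, m1, ..., mn], pairwise coprime and increasing."""
    m0, ms = moduli[0], moduli[1:]
    assert 0 <= secret < m0
    M = prod(ms[:k])  # product of the k smallest share moduli
    # Asmuth–Bloom condition: prod of k smallest share moduli must exceed
    # m0 times the product of the k-1 largest share moduli.
    assert M > m0 * prod(ms[len(ms) - (k - 1):])
    y = secret + secrets.randbelow((M - secret) // m0) * m0  # lift: y < M
    return [(m, y % m) for m in ms]

def reconstruct(shares, m0):
    """CRT over any k (modulus, residue) pairs, then reduce modulo m0."""
    M = prod(m for m, _ in shares)
    y = sum(r * (M // m) * pow(M // m, -1, m) for m, r in shares) % M
    return y % m0

# Toy (2, 4) example: m0 = 7; condition holds since 11*13 = 143 > 7*19 = 133.
shares = make_shares(5, 2, [7, 11, 13, 17, 19])
assert reconstruct(shares[:2], 7) == 5          # any two shares suffice
assert reconstruct([shares[1], shares[3]], 7) == 5
```

Real deployments would use large moduli and address the order-preservation and indexing aspects the paper targets; this sketch only shows the share/reconstruct mechanics.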
When the Hammer Meets the Nail: Multi-Server PIR for Database-Driven CRN with Location Privacy Assurance
We show that it is possible to achieve information theoretic location privacy
for secondary users (SUs) in database-driven cognitive radio networks (CRNs)
with an end-to-end delay of less than a second, which is significantly better
than existing alternatives, which offer only computational privacy. This
is achieved based on the key observation that, per the requirements of the
Federal Communications Commission (FCC), all certified spectrum databases synchronize
their records. Hence, the same copy of spectrum database is available through
multiple (distinct) providers. We harness the synergy between multi-server
private information retrieval (PIR) and the database-driven CRN architecture to
offer an optimal level of privacy with high efficiency by exploiting this
observation. We demonstrate, analytically and experimentally with deployments
on actual cloud systems, that our adaptations of multi-server PIR outperform
the (currently) fastest single-server PIR by orders of magnitude, with
information theoretic security, collusion resiliency, and fault-tolerance
features. Our analysis indicates that multi-server PIR is an ideal
cryptographic tool to provide location privacy in database-driven CRNs, in
which the requirement of replicated databases is a natural part of the system
architecture, and therefore SUs can enjoy all advantages of multi-server PIR
without any additional architectural and deployment costs. Comment: 10 pages, double column.
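The abstract does not spell out which multi-server PIR variant is adapted, but the principle it relies on is captured by the classic two-server XOR scheme of Chor et al.: each replica answers a uniformly random subset query, so neither server alone learns the queried index. A minimal sketch (illustrative only, not the paper's optimized protocol):

```python
# Two-server XOR-based PIR over a replicated database (Chor et al. style).
# Each server sees a uniformly random bit vector, so as long as the two
# servers do not collude, neither learns the queried index i.
import secrets

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def server_answer(db, selection):
    """XOR of all records whose selection bit is 1 (one pass over db)."""
    acc = bytes(len(db[0]))
    for record, bit in zip(db, selection):
        if bit:
            acc = xor_bytes(acc, record)
    return acc

def pir_fetch(db_copy1, db_copy2, i):
    n = len(db_copy1)
    q1 = [secrets.randbelow(2) for _ in range(n)]  # random subset -> server 1
    q2 = q1.copy()
    q2[i] ^= 1              # same subset with bit i flipped -> server 2
    # The two answers differ in exactly record i, so XOR recovers it.
    return xor_bytes(server_answer(db_copy1, q1), server_answer(db_copy2, q2))

# Replicated, fixed-width toy records standing in for spectrum-database rows.
db = [b"chan:01 free", b"chan:02 busy", b"chan:03 free"]
assert pir_fetch(db, db, 1) == b"chan:02 busy"
```

This matches the abstract's point: the FCC-mandated replication already provides the multiple identical copies that multi-server PIR needs, so the information-theoretic privacy comes at no extra architectural cost.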