77 research outputs found

    SoK: Cryptographically Protected Database Search

    Protected database search systems cryptographically isolate the roles of reading from, writing to, and administering the database. This separation limits unnecessary administrator access and protects data in the case of system breaches. Since protected search was introduced in 2000, the area has grown rapidly; systems are offered by academia, start-ups, and established companies. However, there is no best protected search system or set of techniques. Design of such systems is a balancing act between security, functionality, performance, and usability. This challenge is made more difficult by ongoing database specialization, as some users will want the functionality of SQL, NoSQL, or NewSQL databases. This database evolution will continue, and the protected search community should be able to quickly provide functionality consistent with newly invented databases. At the same time, the community must accurately and clearly characterize the tradeoffs between different approaches. To address these challenges, we provide the following contributions: 1) An identification of the important primitive operations across database paradigms. We find there are a small number of base operations that can be used and combined to support a large number of database paradigms. 2) An evaluation of the current state of protected search systems in implementing these base operations. This evaluation describes the main approaches and tradeoffs for each base operation. Furthermore, it puts protected search in the context of unprotected search, identifying key gaps in functionality. 3) An analysis of attacks against protected search for different base queries. 4) A roadmap and tools for transforming a protected search system into a protected database, including an open-source performance evaluation platform and initial user opinions of protected search. Comment: 20 pages, to appear in IEEE Security and Privacy
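One of the base operations the SoK identifies is keyword equality search. A common building block for it is a server-side index keyed by deterministic, keyed tokens rather than plaintext keywords. The sketch below is a minimal, generic illustration of that idea (not the construction of any particular surveyed system); all function names are ours:

```python
import hashlib
import hmac


def token(key: bytes, word: str) -> bytes:
    # Deterministic search token: HMAC of the keyword under the client's key.
    # The server only ever sees these opaque tokens, never the keyword.
    return hmac.new(key, word.encode(), hashlib.sha256).digest()


def build_index(key: bytes, docs: dict) -> dict:
    # Client-side: map each keyword token to the ids of documents containing it,
    # then hand the resulting index to the server.
    index = {}
    for doc_id, words in docs.items():
        for w in words:
            index.setdefault(token(key, w), set()).add(doc_id)
    return index


def search(index: dict, trapdoor: bytes) -> set:
    # Server-side: match a client-supplied trapdoor against the index.
    return index.get(trapdoor, set())
```

Note that this scheme leaks access and search patterns (which documents match, and when the same query repeats) — exactly the kind of leakage the SoK's attack analysis (contribution 3) is concerned with.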

    A Novel Framework for Big Data Security Infrastructure Components

    Big data encompasses the management of enormous volumes of data collected from various sources such as online social media content, log files, sensor records, surveys, and online transactions. It is essential to provide new security models, concerns, and efficient security designs and approaches for confronting the security and privacy aspects of the same. This paper intends to provide an initial analysis of the security challenges in Big Data. The paper introduces the basic concepts of Big Data and its enormous growth rate, measured in peta- and zettabytes. A model framework for Big Data Infrastructure Security Components (BDAF) is proposed that includes components such as a Security Life Cycle, fine-grained data-centric access control policies, and the Dynamic Infrastructure Trust Bootstrap Protocol (DITBP). The framework allows deploying a trusted remote virtualised data processing environment with federated access control and identity management.
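The abstract names the Dynamic Infrastructure Trust Bootstrap Protocol (DITBP) but gives no protocol details. As a purely illustrative sketch of the general idea behind trust bootstrapping — a verifier challenging a remote virtualised environment to prove possession of a provisioning secret — the following uses a generic nonce/HMAC exchange; it is our assumption, not the actual DITBP design:

```python
import hashlib
import hmac
import os


def make_challenge() -> bytes:
    # Verifier sends a fresh random nonce so responses cannot be replayed.
    return os.urandom(16)


def prove(bootstrap_key: bytes, nonce: bytes) -> bytes:
    # Remote environment proves possession of the shared bootstrap key
    # by MACing the verifier's nonce.
    return hmac.new(bootstrap_key, nonce, hashlib.sha256).digest()


def verify(bootstrap_key: bytes, nonce: bytes, response: bytes) -> bool:
    # Constant-time comparison avoids leaking where the MACs diverge.
    expected = hmac.new(bootstrap_key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```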

    Assessing the vulnerabilities and securing MongoDB and Cassandra databases

    Due to the increasing amounts and the different kinds of data that need to be stored in databases, companies and organizations are rapidly adopting NoSQL databases to compete. These databases were not designed with security as a priority. NoSQL open-source software was primarily developed to handle unstructured data for the purposes of business intelligence and decision support. Over the years, security features have been added to these databases, but they are not as robust as they should be, and there is scope for improvement as the sophistication of attackers keeps increasing. Moreover, the schema-less design of these databases makes it more difficult to implement traditional RDBMS-like security features in them. Two popular NoSQL databases are MongoDB and Apache Cassandra. Although there is a lot of research related to security vulnerabilities and suggestions to improve the security of NoSQL databases, this research focuses specifically on the MongoDB and Cassandra databases. This study aims to identify and analyze all the security vulnerabilities specific to MongoDB and Cassandra and to come up with a step-by-step guide that can help organizations secure their data stored in these databases. This is important because the design and vulnerabilities of each NoSQL database differ from one another and hence require security recommendations specific to them.
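One consequence of the schema-less design noted above is that field-level protection often has to be enforced in the application rather than by the database. A minimal, hypothetical sketch of such application-level redaction — role names, policy table, and function are all our invention, not from the study's guide:

```python
# Hypothetical policy: role -> fields that role may read.
# Anything not listed for the caller's role is stripped from the document.
FIELD_POLICY = {
    "analyst": {"name", "country"},
    "admin": {"name", "country", "ssn", "email"},
}


def redact(doc: dict, role: str) -> dict:
    # Return a copy of the (schema-less) document containing only the
    # fields the caller's role is permitted to see; unknown roles see nothing.
    allowed = FIELD_POLICY.get(role, set())
    return {k: v for k, v in doc.items() if k in allowed}
```

Because documents in MongoDB or Cassandra need not share a schema, the filter works per-document over whatever fields are actually present.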

    Access control technologies for Big Data management systems: literature review and future trends

    Data security and privacy issues are magnified by the volume, the variety, and the velocity of Big Data, and by the lack, up to now, of a reference data model and related data manipulation languages. In this paper, we focus on one of the key data security services, that is, access control, by highlighting the differences with traditional data management systems and describing a set of requirements that any access control solution for Big Data platforms should fulfill. We then describe the state of the art and discuss open research issues.
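A recurring requirement in this literature is fine-grained, attribute-based access control, where decisions depend on attributes of the subject and the resource rather than on identities alone. As a generic illustration (not a construction from this survey), a minimal ABAC policy evaluator with default-deny semantics might look like:

```python
def permits(policy: list, subject: dict, action: str, resource: dict) -> bool:
    # Evaluate rules in order; the first rule whose action, subject
    # attributes, and resource attributes all match decides the outcome.
    # If no rule matches, deny by default.
    for rule in policy:
        if (action in rule["actions"]
                and all(subject.get(k) == v for k, v in rule["subject"].items())
                and all(resource.get(k) == v for k, v in rule["resource"].items())):
            return rule["effect"] == "allow"
    return False
```

Example: a rule allowing the research department to read low-sensitivity data permits exactly that combination and nothing else.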

    Big Data Security (Volume 3)

    After a short description of the key concepts of big data, the book explores the secrecy and security threats posed especially by cloud-based data storage. It delivers conceptual frameworks and models along with case studies of recent technology.

    Droplet: Decentralized Authorization for IoT Data Streams

    This paper presents Droplet, a decentralized data access control service, which operates without intermediate trust entities. Droplet enables data owners to securely and selectively share their encrypted data while guaranteeing data confidentiality against unauthorized parties. Droplet's contribution lies in coupling two key ideas: (i) a new cryptographically-enforced access control scheme for encrypted data streams that enables users to define fine-grained stream-specific access policies, and (ii) a decentralized authorization service that handles user-defined access policies. In this paper, we present Droplet's design, the reference implementation of Droplet, and experimental results of three case-study apps atop Droplet: Fitbit activity tracker, Ava health tracker, and ECOviz smart meter dashboard.
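Cryptographically-enforced access control for data streams is often built by deriving a separate key per time epoch, so that sharing a key grants access to a bounded slice of the stream. Droplet's actual construction is more sophisticated (it supports fine-grained, interval-bounded policies); the hash-chain sketch below only illustrates the simplest one-directional variant of the idea, and all names are ours:

```python
import hashlib


def epoch_key(seed: bytes, epoch: int) -> bytes:
    # Owner-side: the key for epoch e is the e-th SHA-256 iterate of the
    # owner's secret seed. Each stream record is encrypted under its
    # epoch's key (encryption itself omitted here).
    k = seed
    for _ in range(epoch):
        k = hashlib.sha256(k).digest()
    return k


def derive_forward(key_i: bytes, i: int, j: int) -> bytes:
    # Consumer-side: holding the key for epoch i lets a consumer derive
    # the key for any later epoch j >= i by hashing forward, but the
    # one-wayness of the hash prevents walking back to epochs before i.
    assert j >= i, "can only derive forward in the chain"
    k = key_i
    for _ in range(j - i):
        k = hashlib.sha256(k).digest()
    return k
```

Handing a consumer `epoch_key(seed, i)` thus grants access from epoch i onward without revealing any earlier epoch's data.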

    A Solution for Privacy-Preserving and Security in Cloud for Document Oriented Data (By Using NoSQL Database)

    Cloud computing delivers massively scalable computing resources as a service over Internet-based technologies, allowing resources to be shared among cloud users. The cloud offers various types of services, chiefly infrastructure as a service, platform as a service, software as a service, and security as a service, along with several deployment models. The foremost issues in cloud data security include data security and user privacy, data protection, data availability, data location, and secure transmission. Today, preserving the privacy of data and users, and answering queries over big data, are among the most challenging problems in the cloud. Much research has been conducted on privacy-preserving techniques for data sharing and access control, secure searching over encrypted data, and verification of data integrity. This work addresses the privacy of document-oriented data and of users in three phases — data at rest, in process, and in transit — by using a full homomorphic encryption and decryption scheme to achieve the aforementioned goals. The work is implemented on document-oriented data only, using a NoSQL database and encryption/decryption algorithms such as RSA and Paillier's cryptosystem, in a Java package with MongoDB, Apache Tomcat Server 9.1, Python, and Amazon Web Services mLab as a remote MongoDB server. Keywords: Privacy-Preserving, NoSQL, MongoDB, Cloud computing, Homomorphic encryption/decryption, public key, private key, RSA Algorithm, Paillier's cryptosystem. DOI: 10.7176/CEIS/11-3-02. Publication date: May 31st 202
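The Paillier cryptosystem mentioned above is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, which is what lets a cloud server aggregate encrypted values without decrypting them. A textbook sketch with toy primes (insecure key sizes, illustration only — not the paper's implementation):

```python
import math
import random


def keygen(p: int = 1117, q: int = 1123):
    # Textbook Paillier with g = n + 1. Real deployments use primes of
    # 1024+ bits; these toy primes are only for demonstration.
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    n2 = n * n
    # With g = n + 1, g^lam mod n^2 = 1 + lam*n, so L(g^lam) = lam mod n.
    mu = pow((pow(n + 1, lam, n2) - 1) // n, -1, n)
    return (n,), (lam, mu, n)


def encrypt(pub, m: int) -> int:
    (n,) = pub
    n2 = n * n
    while True:
        r = random.randrange(1, n)      # random blinding factor
        if math.gcd(r, n) == 1:
            break
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2


def decrypt(priv, c: int) -> int:
    lam, mu, n = priv
    n2 = n * n
    # m = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) / n.
    return ((pow(c, lam, n2) - 1) // n) * mu % n


def add_cipher(pub, c1: int, c2: int) -> int:
    # Homomorphic addition: ciphertext multiplication adds plaintexts mod n.
    (n,) = pub
    return (c1 * c2) % (n * n)
```

Note Paillier on its own is additively (partially) homomorphic; fully homomorphic schemes, which also support multiplication on ciphertexts, are a stronger primitive.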

    Review on Big Data Promises for Information Security

    Big data relates to technologies for collecting, processing, analyzing, and extracting useful information from very large volumes of structured and unstructured data generated by different sources at high speed. Big data creates critical information security and privacy problems; at the same time, big data analytics promises significant opportunities for the prevention and detection of advanced cyber-attacks using correlated internal and external security data. We address several challenges to realizing the true potential of big data for information security. The paper analyzes big data applications for information security problems, and defines research directions on big data analytics for security intelligence.

    DECOUPLING CONSISTENCY DETERMINATION AND TRUST FROM THE UNDERLYING DISTRIBUTED DATA STORES

    Building applications on cloud services is cost-effective and allows for rapid development and release cycles. However, relying on cloud services can severely limit applications’ ability to control their own consistency policies, and their ability to control data visibility during replication. To understand the tension between strong consistency and security guarantees on one hand and high availability, flexible replication, and performance on the other, it helps to consider two questions. First, is it possible for an application to achieve stricter consistency guarantees than what the cloud provider offers? If we solely rely on the provider service interface, the answer is no. However, if we allow the applications to determine the implementation and the execution of the consistency protocols, then we can achieve much more. The second question is, can an application relay updates over untrusted replicas without revealing sensitive information while maintaining the desired consistency guarantees? Simply encrypting the data is not enough. Encryption does not eliminate information leakage that comes from the meta-data needed for the execution of any consistency protocol. The alternative to encryption — allowing the flow of updates only through trusted replicas — leads to predefined communication patterns. This approach is prone to failures that can cause partitioning in the system. One way to answer “yes” to this question is to allow trust relationships, defined at the application level, to guide the synchronization protocol. My goal in this thesis is to build systems that take advantage of the performance, scalability, and availability of the cloud storage services while, at the same time, bypassing the limitations imposed by cloud service providers’ design choices. The key to achieving this is pushing application-specific decisions where they belong: the application.
I defend the following thesis statement: By decoupling consistency determination and trust from the underlying distributed data store, it is possible to (1) support application-specific consistency guarantees; (2) allow for topology-independent replication protocols that do not compromise application privacy. First, I design and implement Shell, a system architecture for supporting strict consistency guarantees over eventually consistent data stores. Shell is a software layer designed to isolate consistency implementations and cloud-provider APIs from the application code. Shell consists of four internal modules and an application store, which together abstract consistency-related operations and encapsulate communication with the underlying storage layers. Apart from consistency protocols tailored to application needs, Shell provides application-aware conflict resolution without relying on generic heuristics such as “last write wins.” Shell does not require the application to maintain dependency-tracking information for the execution of the consistency protocols, as other existing approaches do. I experimentally evaluate Shell over two different data stores using real-application traces. I found that using Shell can reduce inconsistent updates by 10%. I also measure and show the overheads that come from introducing the Shell layer. Second, I design and implement T.Rex, a system for supporting topology-independent replication without the assumption of trust between all the participating replicas. T.Rex uses role-based access control to enable flexible and secure sharing among users with widely varying collaboration types: both users and data items are assigned roles, and a user can access a data item only if they share at least one role. Building on top of this abstraction, T.Rex includes several novel mechanisms: I introduce role proofs to prove role membership to others in the role without leaking information to those not in the role.
Additionally, I introduce role coherence to prevent updates from leaking across roles. Finally, I use Bloom filters as opaque digests to enable querying of remote cache state without being able to enumerate it. I combine these mechanisms to develop a novel, cryptographically secure, and efficient anti-entropy protocol, T.Rex-Sync. I evaluate T.Rex on a local test-bed, and I show that it achieves security with modest computational and storage overheads.
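The Bloom-filter-as-opaque-digest idea above works because a Bloom filter answers membership probes without supporting enumeration: a peer can check whether items it already knows are present, but cannot list the set's contents from the bit vector alone. A minimal generic sketch (not T.Rex's actual implementation):

```python
import hashlib


class Bloom:
    def __init__(self, m: int = 1024, k: int = 4):
        # m bits, k hash functions; stored as one big integer bitmask.
        self.m, self.k = m, k
        self.bits = 0

    def _positions(self, item: bytes):
        # Derive k positions by prefixing the item with a per-hash index.
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(2, "big") + item).digest()
            yield int.from_bytes(h, "big") % self.m

    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def probably_contains(self, item: bytes) -> bool:
        # No false negatives; a small false-positive rate depending on m, k,
        # and the number of inserted items. The digest reveals membership
        # only for items the querier already knows to probe.
        return all(self.bits >> pos & 1 for pos in self._positions(item))
```

In an anti-entropy exchange, replicas can compare such digests to decide which updates to request, without either side enumerating the other's cache.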