
    Advance of the Access Methods

    The goal of this paper is to outline the advances in access methods over the last ten years and to review all methods available in the accessible bibliography.

    Multidimensional Range Queries on Modern Hardware

    Range queries over multidimensional data are an important part of database workloads in many applications. Their execution may be accelerated by using multidimensional index structures (MDIS), such as kd-trees or R-trees. As for most index structures, the usefulness of this approach depends on the selectivity of the queries, and common wisdom holds that a simple scan beats MDIS for queries accessing more than 15%-20% of a dataset. However, this wisdom is largely based on evaluations that are almost two decades old, performed on data held on disk, applying IO-optimized data structures, and using single-core systems. The question is whether this rule of thumb still holds when multidimensional range queries (MDRQ) are performed on modern architectures with large main memories holding all data, multi-core CPUs, and data-parallel instruction sets. In this paper, we study whether, and by how much, modern hardware influences the performance ratio between index structures and scans for MDRQ. To this end, we conservatively adapted three popular MDIS, namely the R*-tree, the kd-tree, and the VA-file, to exploit features of modern servers and compared their performance to different flavors of parallel scans using multiple (synthetic and real-world) analytical workloads over multiple (synthetic and real-world) datasets of varying size, dimensionality, and skew. We find that all approaches benefit considerably from using main memory and parallelization, yet to varying degrees. Our evaluation indicates that, on current machines, scanning should be favored over parallel versions of classical MDIS even for very selective queries.
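
    To make the trade-off concrete, the following minimal Python sketch (not code from the paper; the synthetic dataset, query box, and use of NumPy are illustrative assumptions) shows the kind of memory-resident, data-parallel scan that the evaluation favors: every point is tested against the query box with vectorized predicates, so the cost does not depend on the query's selectivity.

        import numpy as np

        # Synthetic in-memory dataset: one million points in five dimensions (illustrative sizes).
        rng = np.random.default_rng(42)
        data = rng.random((1_000_000, 5), dtype=np.float32)

        def range_scan(points, lower, upper):
            # Multidimensional range query as a full scan: a boolean mask is built with
            # vectorized comparisons, so the amount of work done is the same for
            # selective and non-selective query boxes.
            mask = np.all((points >= lower) & (points <= upper), axis=1)
            return points[mask]

        # A query box covering a small slice of each dimension's domain.
        lower = np.full(5, 0.45, dtype=np.float32)
        upper = np.full(5, 0.55, dtype=np.float32)
        result = range_scan(data, lower, upper)
        print(f"{len(result)} of {len(data)} points match")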

    Implementation and Evaluation of Balanced and Nested Grid (Bang) File Structures

    Computing and Information Science

    Analysis and Design of Authentication and Encryption Algorithms for Secure Cloud Systems

    Along with the fast growth of networks and mobile devices, cloud computing has become one of the most attractive and effective technologies and business solutions nowadays. Increasing numbers of organizations and customers are migrating their businesses and data to the cloud due to the flexibility and cost-efficiency of cloud systems. Preventing unauthorized access to sensitive data in the cloud has been one of the biggest challenges when designing a secure cloud system, and it strongly relies on the chosen authentication and encryption algorithms for providing authenticity and confidentiality, respectively. This thesis investigates various aspects of authentication and encryption algorithms for securing cloud systems, including authenticated encryption modes of operation, block ciphers, password hashing algorithms, and password-less/two-factor authentication mechanisms.
    Improving Authenticated Encryption Modes. The Galois/Counter Mode (GCM) is an authenticated encryption mode of operation for block ciphers. It has been widely adopted by many network standards and protocols that protect the security of cloud communications, such as TLS v1.2, IEEE 802.1AE and IPsec. Iwata et al. recently found a flaw in GCM's original proofs for non-96-bit nonce cases, and then presented new security bounds for GCM. The new bounds imply that the success probabilities of adversaries attacking GCM are much larger than originally expected. We propose a simple change to repair GCM. When applied, it will improve the security bounds by a factor of about 2^20 while maintaining most of the original proofs.
    Analyzing Polynomial-Based Message Authentication Codes. We investigate attacks on polynomial-based message authentication code (MAC) schemes, including the one adopted in GCM. We demonstrate that constructing successful forgeries of these MAC schemes does not necessarily require hash collisions. This discovery removes certain restrictions in the attacks previously proposed by Procter and Cid. Moreover, utilizing a special design of GCM for processing non-96-bit nonces, we turn these forgery attacks into birthday attacks, which significantly increases their success probabilities. Therefore, considering the birthday attacks and the security proof flaw found by Iwata et al., cloud system designers should avoid using GCM with non-96-bit nonces if they do not revise the design of GCM.
    Analyzing Block Ciphers. We propose a new framework for analyzing symmetric-key ciphers by guessing intermediate states to divide ciphers into small components. This framework is suitable for lightweight ciphers with simple key schedules and block sizes smaller than key lengths. Using this framework, we design new attacks on the block cipher family KATAN. These attacks can recover the master keys of 175-round KATAN32, 130-round KATAN48 and 112-round KATAN64 faster than exhaustive search, and thus reach many more rounds than the existing attacks. We also provide new attacks on 115-round KATAN32 and 100-round KATAN48 in order to demonstrate that this new kind of attack can be more time-efficient and memory-efficient than the existing ones.
    Designing Password Hashing Algorithms. Securely storing passwords and deriving cryptographic keys from passwords are also crucial for most secure cloud system designs. However, choices of well-studied password hashing algorithms are extremely limited, as their security requirements and design principles differ from those of common cryptographic primitives. We propose two practical password hashing algorithms, Pleco and Plectron. They are built upon well-understood cryptographic algorithms, and combine the advantages of symmetric-key and asymmetric-key primitives. By employing the Rabin cryptosystem, we prove that the one-wayness of Pleco is at least as strong as the hard problem of integer factorization. In addition, both password hashing algorithms are designed to be sequential memory-hard, in order to thwart large-scale password searching using parallel hardware, such as GPUs, FPGAs, and ASICs.
    Designing Password-less/Two-Factor Authentication Mechanisms. Motivated by a number of recent industry initiatives, we propose Loxin, an innovative solution for password-less authentication for cloud systems and web applications. Loxin aims to improve on passwords with respect to both usability and security. It utilizes push message services for mobile devices to initiate authentication transactions based on asymmetric-key cryptography, and enables users to access multiple services by using pre-owned identities, such as email addresses. In particular, the Loxin server cannot generate users' authentication credentials, thereby eliminating the potential risk of credential leakage if the Loxin server gets compromised. Furthermore, Loxin is fully compatible with existing password-based authentication systems, and thus can serve as a two-factor authentication mechanism.
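
    The advice above about nonce lengths is easy to follow in practice. Below is a minimal sketch, assuming the third-party Python cryptography package (an illustrative choice, not part of the thesis): AES-GCM is used with a randomly generated 96-bit (12-byte) nonce, the case whose original security proofs are unaffected by the flaw found by Iwata et al.

        import os
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        # 256-bit AES key; in a real system this would come from a key-management service.
        key = AESGCM.generate_key(bit_length=256)
        aead = AESGCM(key)

        # 96-bit (12-byte) nonce; never reuse a nonce under the same key.
        nonce = os.urandom(12)
        associated_data = b"record-header"        # authenticated but not encrypted
        ciphertext = aead.encrypt(nonce, b"secret payload", associated_data)

        # Decryption verifies the authentication tag and raises InvalidTag on failure.
        plaintext = aead.decrypt(nonce, ciphertext, associated_data)
        assert plaintext == b"secret payload"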

    An Architecture for distributed multimedia database systems

    In the past few years, considerable demand for user-oriented multimedia information systems has developed. These systems must provide a rich set of functionality so that new, complex, and interesting applications can be addressed. This places considerable importance on the management of diverse data types, including text, images, audio, and video. These requirements generate the need for a new generation of distributed heterogeneous multimedia database systems. In this paper, we identify a set of functional requirements for a multimedia server considering database management, object synchronization and integration, and multimedia query processing. A generalization of the requirements to a distributed system is presented, and some of our current research and development activities are discussed.

    Secure Remote Storage of Logs with Search Capabilities

    Master's dissertation in Informatics Engineering.
    Alongside the use of cloud-based services, infrastructure, and storage, the use of application logs in business-critical applications is standard practice nowadays. Such application logs must be stored in an accessible manner so that they can be used whenever needed. Debugging these applications is a common situation in which such access is required. Frequently, part of the information contained in log records is sensitive. This work proposes a new approach to storing critical logs in cloud-based storage, resorting to searchable encryption, inverted indexing, and hash-chaining techniques to achieve, in a unified way, the needed privacy, integrity, and authenticity while maintaining server-side search capabilities for the log owner. The designed search algorithm enables conjunctive keyword queries plus a fine-grained search supported by field searching and nested queries, which are essential in the referred use case. To the best of our knowledge, the proposed solution is also the first to introduce a query language that enables complex conjunctive keyword queries and a fine-grained search backed by field searching and subqueries.
    The generation of logs in applications and their subsequent querying are central to the operation of any business or company. These logs can be used for auditing, since they establish a baseline of the operations performed. They equally serve the purpose of identifying errors, facilitating debugging, and diagnosing performance bottlenecks. Typically, most of the information contained in these logs is considered sensitive. When logs are stored in-house, considerations related to anonymization, confidentiality, and integrity are usually set aside. However, with the advent of cloud platforms and the transition of both applications and their logs to these ecosystems, remote, secure, and confidential logging processes emerge as a new challenge. Additionally, regulation such as the GDPR requires institutions and companies to guarantee the secure storage of data. The most common way to guarantee confidentiality is to use cryptographic techniques to encrypt all the data before transferring it to the remote server. If search capabilities are required, the simplest approach is to transfer all the encrypted data to the client side, which then decrypts it and searches over the decrypted data. Although this approach guarantees the confidentiality and privacy of the data, it quickly becomes impractical as log records grow, and it does not take advantage of the full potential the cloud has to offer. Against this background, this thesis proposes the development of a solution for storing operational logs in a confidential, integral, and authentic manner, making use of the storage and computation capabilities of cloud platforms. Additionally, the ability to search over the data is maintained; that search is performed server-side, directly over the encrypted data, without the server ever having access to unencrypted data.
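
    One of the building blocks named above, hash chaining, can be sketched in a few lines. The example below is a minimal illustration with hypothetical record fields and key handling, not the dissertation's actual scheme: each log record's HMAC covers the previous record's tag, so deleting, reordering, or altering records on the remote store breaks the chain.

        import hmac
        import hashlib

        def chain_logs(records, key):
            # Return (record, tag) pairs where each HMAC also covers the previous tag.
            # Verification replays the chain with the same key; any removed, reordered,
            # or modified record changes every subsequent tag.
            tags = []
            prev_tag = b"\x00" * 32                  # fixed genesis value
            for record in records:
                mac = hmac.new(key, prev_tag + record.encode(), hashlib.sha256)
                prev_tag = mac.digest()
                tags.append((record, prev_tag.hex()))
            return tags

        key = b"demo-key-from-a-kms"                 # placeholder secret
        entries = ["user=alice action=login", "user=alice action=read doc=42"]
        for record, tag in chain_logs(entries, key):
            print(tag[:16], record)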

    Harnessing the Power of Distributed Computing: Advancements in Scientific Applications, Homomorphic Encryption, and Federated Learning Security

    Data explosion poses many challenges to state-of-the-art systems, applications, and methodologies. It has been reported that 181 zettabytes of data are expected to be generated in 2025, which is over a 150% increase compared to the data expected to be generated in 2023. However, while system manufacturers are consistently developing devices with larger storage spaces and providing alternative storage capacities in the cloud at affordable rates, another key challenge is how to effectively process even a fraction of this large-scale stored data in time-critical conventional systems. One transformative paradigm revolutionizing the processing and management of such large data is distributed computing, whose application requires deep understanding. This dissertation focuses on exploring the potential impact of applying efficient distributed computing concepts to long-standing challenges in (i) a widely used data-intensive scientific application, (ii) applying homomorphic encryption to data-intensive workloads found in outsourced databases, and (iii) the security of tokenized incentive mechanisms for federated learning (FL) systems.
    The first part of the dissertation tackles the microelectrode array (MEA) parameterization problem from an orthogonal viewpoint enlightened by algebraic topology, which allows us to algebraically parametrize MEAs whose structure and intrinsic parallelism are hard to identify otherwise. We implement a new paradigm, namely Parma, to demonstrate the effectiveness of the proposed approach and report how it outperforms the state-of-the-practice in time, scalability, and memory usage.
    The second part discusses our work on introducing the concept of parallel caching of secure aggregation to mitigate the performance overhead incurred by the HE module in outsourced databases. The key idea of this optimization approach is caching selected radix-ciphertexts in parallel without violating the existing security guarantees of the primitive/base HE scheme. A new radix HE algorithm was designed and applied to both batch and incremental HE schemes, and experiments carried out on six workloads show that the proposed caching boosts state-of-the-art HE schemes by orders of magnitude.
    In the third part, I discuss our work on leveraging the security benefits of blockchains to enhance and protect the fairness and reliability of tokenized incentive mechanisms for FL systems. We designed a blockchain-based auditing protocol to mitigate Gaussian attacks and carried out experiments with multiple FL aggregation algorithms, popular datasets, and a variety of scales to validate its effectiveness.
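
    As a rough illustration of the additively homomorphic aggregation that such outsourced-database workloads build on, here is a toy Paillier sketch; the tiny hard-coded primes and the aggregation scenario are illustrative assumptions, not the dissertation's radix-caching scheme. The server multiplies ciphertexts it cannot read, and the product decrypts to the sum of the underlying values.

        import math
        import secrets

        # Toy Paillier parameters (tiny primes for illustration only;
        # real deployments use moduli of 2048 bits or more).
        p, q = 293, 433
        n = p * q
        n_sq = n * n
        g = n + 1
        lam = math.lcm(p - 1, q - 1)                     # private key
        mu = pow((pow(g, lam, n_sq) - 1) // n, -1, n)    # inverse of L(g^lam mod n^2) mod n

        def encrypt(m):
            # c = g^m * r^n mod n^2, with r coprime to n.
            while True:
                r = secrets.randbelow(n - 1) + 1
                if math.gcd(r, n) == 1:
                    break
            return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

        def decrypt(c):
            # m = L(c^lam mod n^2) * mu mod n, where L(u) = (u - 1) // n.
            return ((pow(c, lam, n_sq) - 1) // n) * mu % n

        # The untrusted server multiplies ciphertexts; the product decrypts to the sum.
        values = [17, 25, 8]
        ciphertexts = [encrypt(v) for v in values]
        aggregate = math.prod(ciphertexts) % n_sq
        assert decrypt(aggregate) == sum(values)
        print("homomorphic sum:", decrypt(aggregate))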

    Energy efficient security and privacy management in sensor clouds

    Sensor Cloud is a new model of computing for Wireless Sensor Networks, which facilitates resource sharing and enables large-scale sensor networks. A multi-user distributed system in which resources are shared, however, has inherent challenges in security and privacy. The data generated by the wireless sensors in a sensor cloud need to be protected against adversaries, which may be outsiders as well as insiders. Similarly, the code disseminated to the sensors by the sensor cloud needs to be protected against inside and outside adversaries. Moreover, since the wireless sensors cannot support complex, energy-intensive measures, the security and privacy of the data and the code have to be attained by way of lightweight algorithms. In this work, we first present two data aggregation algorithms, one based on an Elliptic Curve Cryptosystem (ECC) and the other based on a symmetric-key system, which provide confidentiality and integrity of data against an outside adversary and privacy against an in-network adversary. A fine-grained access control scheme that works on the securely aggregated data is presented next. This scheme uses Attribute-Based Encryption (ABE) to achieve this objective. Finally, to securely and efficiently disseminate code in the sensor cloud, we present a code dissemination algorithm which first reduces the amount of code to be transmitted from the base station. It then uses symmetric proxy re-encryption along with Bloom filters and HMACs to protect the code against eavesdropping and false code injection attacks. --Abstract, page iv
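
    Two of the lightweight primitives named above, Bloom filters and HMACs, can be sketched as follows (hypothetical parameters and keys, not the dissertation's protocol): a sensor first performs a cheap Bloom-filter membership check on a received code block against the advertised update, and only then verifies the block's HMAC, rejecting anything that fails either test.

        import hashlib
        import hmac

        class BloomFilter:
            # Minimal Bloom filter: k bit positions per item derived from SHA-256.
            def __init__(self, size_bits=1024, num_hashes=4):
                self.size = size_bits
                self.k = num_hashes
                self.bits = bytearray(size_bits // 8)

            def _positions(self, item):
                for i in range(self.k):
                    digest = hashlib.sha256(i.to_bytes(2, "big") + item).digest()
                    yield int.from_bytes(digest[:4], "big") % self.size

            def add(self, item):
                for pos in self._positions(item):
                    self.bits[pos // 8] |= 1 << (pos % 8)

            def maybe_contains(self, item):
                return all(self.bits[pos // 8] & (1 << (pos % 8))
                           for pos in self._positions(item))

        # Base station: advertise the update's block hashes and authenticate each block.
        key = b"shared-sensor-key"                       # placeholder network key
        blocks = [b"code page 0", b"code page 1"]
        advert = BloomFilter()
        for blk in blocks:
            advert.add(hashlib.sha256(blk).digest())
        tags = [hmac.new(key, blk, hashlib.sha256).digest() for blk in blocks]

        # Sensor: cheap membership check first, then full HMAC verification.
        received, received_tag = blocks[1], tags[1]
        ok = (advert.maybe_contains(hashlib.sha256(received).digest())
              and hmac.compare_digest(received_tag,
                                      hmac.new(key, received, hashlib.sha256).digest()))
        print("accept block:", ok)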

    Scalable Architecture for Integrated Batch and Streaming Analysis of Big Data

    Thesis (Ph.D.) - Indiana University, Computer Sciences, 2015
    As Big Data processing problems evolve, many modern applications demonstrate special characteristics. Data exists in the form of both large historical datasets and high-speed real-time streams, and many analysis pipelines require integrated parallel batch processing and stream processing. Despite the large size of the whole dataset, most analyses focus on specific subsets according to certain criteria. Correspondingly, integrated support for efficient queries and post-query analysis is required. To address the system-level requirements brought by such characteristics, this dissertation proposes a scalable architecture for integrated queries, batch analysis, and streaming analysis of Big Data in the cloud. We verify its effectiveness using a representative application domain - social media data analysis - and tackle related research challenges emerging from each module of the architecture by integrating and extending multiple state-of-the-art Big Data storage and processing systems. In the storage layer, we reveal that existing text indexing techniques do not work well for the unique queries of social data, which put constraints on both textual content and social context. To address this issue, we propose a flexible indexing framework over NoSQL databases to support fully customizable index structures, which can embed the necessary social context information for efficient queries. The batch analysis module demonstrates that analysis workflows consist of multiple algorithms with different computation and communication patterns, which are suited to different processing frameworks. To achieve efficient workflows, we build an integrated analysis stack based on YARN and make novel use of customized indices in developing sophisticated analysis algorithms. In the streaming analysis module, the high-dimensional data representation of social media streams poses special challenges to the problem of parallel stream clustering. Due to the sparsity of the high-dimensional data, traditional synchronization methods become expensive and severely impact the scalability of the algorithm. Therefore, we design a novel strategy that broadcasts the incremental changes rather than the whole centroids of the clusters to achieve scalable parallel stream clustering algorithms. Performance tests using real applications show that our solutions for parallel data loading/indexing, queries, analysis tasks, and stream clustering all significantly outperform implementations using current state-of-the-art technologies.
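
    The broadcast-the-deltas idea from the streaming module can be illustrated with a simplified, single-process simulation (made-up data and a plain mini-batch k-means style update; the real system is distributed). Each worker assigns its mini-batch of points to the nearest centroid and emits only the per-centroid sums and counts it touched, so the synchronization step exchanges small sparse deltas instead of full high-dimensional centroids.

        import numpy as np

        def local_deltas(points, centroids):
            # One worker's pass over a mini-batch: return only the touched deltas.
            deltas = {}                                   # centroid id -> (sum, count)
            for x in points:
                cid = int(np.argmin(np.linalg.norm(centroids - x, axis=1)))
                s, c = deltas.get(cid, (np.zeros_like(x), 0))
                deltas[cid] = (s + x, c + 1)
            return deltas

        def merge_deltas(centroids, counts, all_deltas):
            # Synchronization step: fold every worker's sparse deltas into the model.
            for deltas in all_deltas:
                for cid, (s, c) in deltas.items():
                    counts[cid] += c
                    centroids[cid] += (s - c * centroids[cid]) / counts[cid]
            return centroids, counts

        rng = np.random.default_rng(0)
        dim, k, workers = 50, 4, 3
        centroids = rng.random((k, dim))
        counts = np.zeros(k)
        for _ in range(10):                               # ten synchronization rounds
            batches = [rng.random((100, dim)) for _ in range(workers)]
            all_deltas = [local_deltas(b, centroids) for b in batches]
            centroids, counts = merge_deltas(centroids, counts, all_deltas)
        print("cluster sizes:", counts)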