
    Chameleon: A Secure Cloud-Enabled and Queryable System with Elastic Properties

    Two themes have become increasingly important in our technological society. First, the recurrent use of cloud-based solutions that provide infrastructure, computation platforms and storage as services. Second, the use of large application logs for analytics and operational monitoring in critical systems. Auditing activities, debugging of applications and inspection of events generated by errors or potentially unexpected operations - including those raised as alerts by intrusion detection systems - are common situations in which extensive logs must be analysed and easy access is required. More often than not, part of the generated logs can be deemed sensitive, requiring a privacy-enhancing and queryable solution. In this dissertation, our main goal is to propose a novel approach to storing encrypted critical data in an elastic and scalable cloud-based storage, focusing on handling JSON-based ciphered documents. To this end, we make use of Searchable and Homomorphic Encryption methods to allow operations on the ciphered documents. Additionally, our solution allows the user to remain nearly oblivious to the system's internals, providing transparency while in use. The end goal is a unified middleware system capable of providing improved usability, privacy, and rich querying over the data. This objective is met while maintaining server-side auditable logs, searchable by the log owner or authorized users, with integrity and authenticity proofs. Our proposed solution, named Chameleon, provides rich querying facilities on ciphered data - including conjunctive keyword, ordering correlation and boolean queries - while supporting field searching and nested aggregations. These operations allow our solution to provide data analytics over ciphered JSON documents, using Elasticsearch as the storage and search engine.
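    The following is a minimal, hypothetical sketch of the deterministic-token pattern that searchable-encryption schemes commonly build on for equality queries over encrypted fields; the field names, key handling and token scheme are assumptions for illustration only and do not reproduce Chameleon's actual construction (which combines Searchable and Homomorphic Encryption with integrity and authenticity proofs). Equal plaintexts map to equal tokens, so an Elasticsearch-style term query can match documents without the server ever seeing the plaintext.

        import hmac, hashlib, json

        # Hypothetical key material; a real deployment would derive and protect this properly.
        SEARCH_KEY = b"per-user-secret-key"

        def search_token(value: str) -> str:
            """Deterministic HMAC token: equal plaintexts yield equal tokens."""
            return hmac.new(SEARCH_KEY, value.encode(), hashlib.sha256).hexdigest()

        # A log entry whose sensitive field is replaced by a searchable token;
        # the actual payload would be stored separately under semantically secure encryption.
        log_entry = {
            "timestamp": "2021-05-04T10:22:31Z",
            "severity": "alert",
            "user_token": search_token("alice@example.org"),
        }

        # Term query over the token: the server matches ciphertext tokens,
        # never the plaintext value.
        query = {"query": {"term": {"user_token": search_token("alice@example.org")}}}

        print(json.dumps(log_entry, indent=2))
        print(json.dumps(query, indent=2))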

    Quality of Service Aware Data Stream Processing for Highly Dynamic and Scalable Applications

    Huge amounts of georeferenced data streams arrive daily at data stream management systems deployed to serve highly scalable and dynamic applications. There are innumerable ways in which those streams can be exploited to gain deep insights in various domains. Decision makers require interactive visualization of such data in the form of maps and dashboards for decision making and strategic planning. Data streams normally exhibit fluctuation and oscillation in arrival rates and skewness; these are the two predominant factors that greatly impact the overall quality of service. Data stream management systems therefore need to be attuned to those factors, in addition to the spatial shape of the data, which may exaggerate their negative impact. Current systems do not natively support services with quality guarantees for dynamic scenarios, leaving the handling of those logistics to the user, which is challenging and cumbersome. Three workloads are predominant for any data stream: batch processing, scalable storage and stream processing. In this thesis, we have designed a quality of service aware system, SpatialDSMS, that comprises several subsystems covering those workloads and any mixed load that results from intermixing them. Most importantly, we have natively incorporated quality of service optimizations for processing avalanches of geo-referenced data streams in highly dynamic application scenarios. This has been achieved transparently on top of the codebases of emerging de facto standard, best-in-class representatives, relieving users in the presentation layer from having to reason about those services. Instead, users express their queries with quality goals, and our system optimizer compiles them down into query plans with an embedded quality guarantee, leaving logistic handling to the underlying layers. We have developed standards-compliant prototypes for all the subsystems that constitute SpatialDSMS.
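    As a hypothetical illustration of compiling a user-facing quality goal into a plan choice, the sketch below picks the plan whose estimated latency and accuracy still satisfy the goal under the observed arrival rate, degrading gracefully when no plan fits. The plan names, cost model and sampling fallback are assumptions for illustration, not SpatialDSMS internals.

        from dataclasses import dataclass

        @dataclass
        class QualityGoal:
            max_latency_ms: float   # user-facing latency goal attached to the query
            min_accuracy: float     # acceptable accuracy lower bound, 0..1

        @dataclass
        class Plan:
            name: str
            est_latency_ms: float
            est_accuracy: float

        def choose_plan(goal: QualityGoal, arrival_rate: float) -> Plan:
            """Pick the most accurate plan whose estimates still satisfy the quality goal."""
            # Crude cost model: estimated latency grows with arrival rate (tuples/sec).
            exact = Plan("exact-grid-aggregation", est_latency_ms=0.05 * arrival_rate, est_accuracy=1.0)
            sampled = Plan("sampled-grid-aggregation", est_latency_ms=0.01 * arrival_rate, est_accuracy=0.93)
            for plan in (exact, sampled):  # prefer higher accuracy first
                if plan.est_latency_ms <= goal.max_latency_ms and plan.est_accuracy >= goal.min_accuracy:
                    return plan
            return sampled  # degrade gracefully when no plan meets the goal

        goal = QualityGoal(max_latency_ms=200.0, min_accuracy=0.9)
        print(choose_plan(goal, arrival_rate=2_000).name)    # low load  -> exact plan
        print(choose_plan(goal, arrival_rate=15_000).name)   # high load -> sampled plan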

    Big Data in the construction industry: A review of present status, opportunities, and future trends

    © 2016 Elsevier Ltd. The ability to process large amounts of data and to extract useful insights from data has revolutionised society. This phenomenon - dubbed Big Data - has applications for a wide assortment of industries, including the construction industry. The construction industry already deals with large volumes of heterogeneous data, which is expected to increase exponentially as technologies such as sensor networks and the Internet of Things are commoditised. In this paper, we present a detailed survey of the literature, investigating the application of Big Data techniques in the construction industry. We reviewed related works published in the databases of the American Society of Civil Engineers (ASCE), the Institute of Electrical and Electronics Engineers (IEEE), the Association for Computing Machinery (ACM), and the Elsevier Science Direct Digital Library. While the application of data analytics in the construction industry is not new, the adoption of Big Data technologies in this industry remains at a nascent stage and lags the broad uptake of these technologies in other fields. To the best of our knowledge, there is currently no comprehensive survey of Big Data techniques in the context of the construction industry. This paper fills that void and presents a wide-ranging interdisciplinary review of the literature of fields such as statistics, data mining and warehousing, machine learning, and Big Data analytics in the context of the construction industry. We discuss the current state of adoption of Big Data in the construction industry and the future potential of such technologies across the multiple domain-specific sub-areas of the construction industry. We also propose open issues and directions for future work, along with potential pitfalls associated with Big Data adoption in the industry.

    Data modeling with NoSQL : how, when and why

    Integrated Master's thesis in Engenharia Informática e Computação. Faculdade de Engenharia, Universidade do Porto. 201