Making Data Storage Efficient in the Era of Cloud Computing
We entered the era of cloud computing in the last decade, as many paradigm shifts happened in how people write and deploy applications. Despite the advancement of cloud computing, data storage abstractions have not evolved much, causing inefficiencies in performance, cost, and security.
This dissertation proposes a novel approach to making data storage efficient in the era of cloud computing by building new storage abstractions and systems that bridge the gap between cloud computing and data storage and simplify development. We build four systems to address four data inefficiencies in cloud computing.
The first system, Grandet, solves the data storage inefficiency caused by the paradigm shift from upfront provisioning to a variety of pay-as-you-go cloud services. Grandet is an extensible storage system that significantly reduces storage costs for web applications deployed in the cloud. Under the hood, it supports multiple heterogeneous stores and unifies them by placing each data object at the store deemed most economical. Our results show that Grandet reduces the storage costs of the evaluated applications by an average of 42.4%, and that it is fast, scalable, and easy to use.
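As a rough illustration of the placement policy described above, the sketch below picks the cheapest store for an object from per-store pricing; the store names and prices are invented, not Grandet's actual backends.

```python
# A minimal sketch of cost-based object placement across heterogeneous
# stores; all names and prices here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Store:
    name: str
    usd_per_gb_month: float   # storage price
    usd_per_request: float    # price per GET/PUT

def monthly_cost(store: Store, size_gb: float, requests: int) -> float:
    # Estimated monthly bill for keeping one object at this store.
    return store.usd_per_gb_month * size_gb + store.usd_per_request * requests

def place(size_gb: float, requests: int, stores: list) -> Store:
    # Put the object wherever the estimated monthly cost is lowest.
    return min(stores, key=lambda s: monthly_cost(s, size_gb, requests))

stores = [Store("blob-store", 0.023, 4e-7),   # cheap bytes, per-request fees
          Store("kv-store", 0.25, 0.0)]       # pricier bytes, free requests

print(place(0.001, 500_000, stores).name)     # hot, tiny object -> kv-store
print(place(50.0, 10, stores).name)           # cold, large object -> blob-store
```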
The second system, Unic, solves the data inefficiency caused by the paradigm shift from single-tenancy to multi-tenancy. Unic securely deduplicates general computations. It exports a cache service that allows cloud applications running on behalf of mutually distrusting users to memoize and reuse computation results, thereby improving performance. Unic achieves both integrity and secrecy through a novel use of code attestation, and it provides a simple yet expressive API that enables applications to deduplicate their own rich computations. Our results show that Unic is easy to use and speeds up applications by an average of 7.58x with little storage overhead.
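A toy sketch of the memoization idea follows, assuming results are keyed by the computation's code identity plus its inputs; in Unic the code identity would come from attestation, and the API names here are invented for illustration.

```python
# A toy computation cache keyed by code identity plus inputs.
import hashlib
import pickle

class ComputationCache:
    def __init__(self):
        self._store = {}

    def _key(self, code_id: bytes, inputs) -> bytes:
        # Results are only reachable under the same code identity, which is
        # what lets mutually distrusting tenants share the cache safely.
        return hashlib.sha256(code_id + pickle.dumps(inputs)).digest()

    def memoize(self, code_id: bytes, inputs: tuple, compute):
        k = self._key(code_id, inputs)
        if k not in self._store:            # miss: run once, then reuse
            self._store[k] = compute(*inputs)
        return self._store[k]

cache = ComputationCache()
print(cache.memoize(b"hash-of-binary", (2, 10), lambda a, b: a ** b))  # 1024
print(cache.memoize(b"hash-of-binary", (2, 10), lambda a, b: a ** b))  # cached
```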
The third system, Lambdata, solves the data inefficiency caused by the paradigm shift to serverless computing, where developers write only core business logic and cloud service providers maintain all the infrastructure. Lambdata is a novel serverless computing system that enables developers to declare a cloud function's data intents, covering both the data it reads and the data it writes. Once data intents are made explicit, Lambdata performs a variety of optimizations to improve speed, including caching data locally and scheduling functions based on code and data locality. Our results show that Lambdata achieves an average speedup of 1.51x on the turnaround time of practical workloads and reduces monetary cost by 16.5%.
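A hypothetical rendering of what declaring data intents could look like, paired with a locality-aware scheduler that prefers the worker already caching the most inputs; the decorator, worker records, and object paths are invented, not Lambdata's actual API.

```python
# Hypothetical data-intent declaration plus locality-aware scheduling.
def data_intents(reads=(), writes=()):
    def wrap(fn):
        fn.reads, fn.writes = frozenset(reads), frozenset(writes)
        return fn
    return wrap

@data_intents(reads={"s3://bucket/raw.csv"}, writes={"s3://bucket/clean.csv"})
def clean(event):
    ...  # core business logic only

def pick_worker(fn, workers):
    # Prefer the worker whose local cache already holds the most inputs.
    return max(workers, key=lambda w: len(w["cached"] & fn.reads))

workers = [{"id": "w1", "cached": {"s3://bucket/raw.csv"}},
           {"id": "w2", "cached": set()}]
print(pick_worker(clean, workers)["id"])   # -> w1 (code follows the data)
```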
The fourth system, CleanOS, solves the data inefficiency caused by the paradigm shift from desktop computers to smartphones always connected to the cloud. CleanOS is a new Android-based operating system that manages sensitive data rigorously and maintains a clean environment at all times. It identifies and tracks sensitive data, encrypts it with a key, and evicts that key to the cloud when the data is not in active use on the device. Our results show that CleanOS limits sensitive-data exposure drastically while incurring acceptable overheads on mobile networks.
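A toy model of the eviction idea, assuming a simulated cloud key service and an idle timeout; real CleanOS does this inside Android with taint tracking, which this sketch does not attempt.

```python
# Toy key eviction: idle data keeps only ciphertext on the device.
import time
from cryptography.fernet import Fernet

CLOUD_KEYS = {}                     # stand-in for the cloud key service

class SensitiveObject:
    IDLE_SECS = 60.0

    def __init__(self, oid: str, plaintext: bytes):
        key = Fernet.generate_key()
        self.oid = oid
        self.ciphertext = Fernet(key).encrypt(plaintext)
        self.key, self.last_use = key, time.time()

    def maybe_evict(self):
        # After the idle timeout, the key leaves the device.
        if self.key and time.time() - self.last_use > self.IDLE_SECS:
            CLOUD_KEYS[self.oid] = self.key
            self.key = None

    def read(self) -> bytes:
        if self.key is None:        # on access, fetch the key back
            self.key = CLOUD_KEYS.pop(self.oid)
        self.last_use = time.time()
        return Fernet(self.key).decrypt(self.ciphertext)

obj = SensitiveObject("sms:42", b"one-time code 991204")
print(obj.read())
```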
Prochlo: Strong Privacy for Analytics in the Crowd
The large-scale monitoring of computer users' software activities has become commonplace, e.g., for application telemetry, error reporting, or demographic profiling. This paper describes a principled systems architecture---Encode, Shuffle, Analyze (ESA)---for performing such monitoring with high utility while also protecting user privacy. The ESA design, and its Prochlo implementation, are informed by our practical experiences with an existing, large deployment of privacy-preserving software monitoring.
(cont.; see the paper.)
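A minimal sketch of the ESA split under simplifying assumptions: encoders strip identity, the shuffler batches and permutes records to break linkability, and only the analyzer aggregates. The threshold and record shapes are invented for illustration.

```python
# Toy Encode-Shuffle-Analyze pipeline; purely illustrative.
import random
from collections import Counter

def encode(user_id: int, value: str) -> dict:
    return {"value": value}            # identifying metadata never leaves

def shuffle(records: list, min_batch: int = 1000):
    if len(records) < min_batch:       # hide in the crowd: no small batches
        return None
    batch = list(records)
    random.shuffle(batch)              # break ordering/timing links
    return batch

def analyze(batch: list) -> Counter:
    return Counter(r["value"] for r in batch)

records = [encode(uid, f"app-{uid % 3}") for uid in range(1500)]
print(analyze(shuffle(records)))       # counts only, no user identities
```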
An Architecture for Managing Data Privacy in Healthcare with Blockchain
With the fast development of blockchain technology in recent years, its application in scenarios that require privacy, such as healthcare, has become encouraged and widely discussed. This paper presents an architecture to ensure the privacy of health-related data, which are stored and shared within a blockchain network in a decentralized manner, through the use of encryption with the RSA, ECC, and AES algorithms. Evaluation tests were performed to verify the impact of the cryptography on the proposed architecture in terms of computational effort, memory usage, and execution time. The results demonstrate an impact mainly on the execution time and on the computational effort for sending data to the blockchain, which is justifiable considering the privacy and security provided by the architecture and encryption.
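The sketch below shows one plausible reading of the AES-plus-RSA combination: AES encrypts the record and RSA wraps the AES key so only the intended reader can unwrap it (ECC could play the same wrapping role); the resulting payload is what would go on-chain. This is an assumption about the architecture, not its published code.

```python
# Hypothetical hybrid encryption for a health record (assumed design).
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

reader_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
reader_pub = reader_priv.public_key()

record = b'{"patient": "p01", "glucose": 95}'
aes_key = Fernet.generate_key()
payload = Fernet(aes_key).encrypt(record)          # symmetric bulk encryption
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped = reader_pub.encrypt(aes_key, oaep)        # RSA-wrap the AES key

# Reader side: unwrap the key, then decrypt the record.
key = reader_priv.decrypt(wrapped, oaep)
assert Fernet(key).decrypt(payload) == record
```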
FORTE: an extensible framework for robustness and efficiency in data transfer pipelines
In the age of big data and growing product complexity, it is common to monitor many aspects of a product or system in order to extract well-founded intelligence and draw conclusions that continue driving innovation. Automating and scaling processes in data pipelines becomes essential to keep pace with the increasing rates of data generated by such practices, while meeting security, governance, scalability, and resource-efficiency demands. We present FORTE, an extensible framework for robustness and transfer-efficiency in data pipelines. We identify sources of potential bottlenecks and explore the design space of approaches to deal with the challenges they pose. We study and evaluate the synergetic effects of data compression, in-memory processing, and task scheduling on pipeline performance. A prototype implementation of FORTE is studied in a use case at Volvo Trucks on high-volume production-level data sets, on the order of hundreds of gigabytes to terabytes per burst. Various general-purpose lossless data compression algorithms are evaluated in order to balance compression effectiveness against time in the pipeline. All in all, FORTE makes it possible to manage these trade-offs and achieves benefits in latency and sustainable rate (up to 1.8 times better) and effective resource utilisation, while also enabling additional features such as integrity verification, logging, monitoring, traceability, and cataloguing of transferred data. We also note that the resource-efficiency improvements achievable with FORTE, and its extensibility, can imply further benefits for scheduling, orchestration, and energy efficiency in such pipelines.
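As a small illustration of the compression trade-off FORTE navigates, the sketch below profiles compression ratio versus time for Python's stock lossless codecs on synthetic pipeline-like data; the codecs and the sample are stand-ins for those evaluated in the paper.

```python
# Profile ratio vs. time for general-purpose lossless codecs (illustrative).
import bz2
import lzma
import time
import zlib

def profile(codecs: dict, sample: bytes):
    for name, compress in codecs.items():
        t0 = time.perf_counter()
        out = compress(sample)
        dt = time.perf_counter() - t0
        print(f"{name}: ratio={len(sample) / len(out):.2f} "
              f"time={dt * 1e3:.1f} ms")

# Synthetic, highly regular telemetry-style sample.
sample = b"sensor_id,ts,value\n" + b"42,1700000000,3.14\n" * 50_000
profile({"zlib": zlib.compress,
         "bz2": bz2.compress,
         "lzma": lzma.compress}, sample)
```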
A gearbox model for processing large volumes of data by using pipeline systems encapsulated into virtual containers
Software pipelines enable organizations to chain applications to add value to contents (e.g., confidentiality, reliability, and integrity) before either sharing them with partners or sending them to the cloud. However, the pipeline components add overhead when processing large volumes of data, which can become critical in real-world scenarios. This paper presents a gearbox model for processing large volumes of data by using pipeline systems encapsulated into virtual containers. In this model, the gears represent applications, whereas gearboxes represent software pipelines. The model was implemented as a collaborative system that automatically performs gear-up (by using parallel patterns) and/or gear-down (by using in-memory storage) until all gears produce uniform data-processing velocities, reducing the delays and bottlenecks produced by the heterogeneous performance of the applications included in a software pipeline. A new container tool encapsulates both the collaborative system and the software pipelines into a virtual container and deploys it on IT infrastructures. We conducted case studies to evaluate the performance of this model when processing medical images and PDF repositories; the incorporation of a capsule into a cloud storage service for pre-processing medical imagery was also studied. The experimental evaluation revealed the feasibility of applying the gearbox model to the deployment of software pipelines in real-world scenarios, as it can significantly improve the end-user service experience when pre-processing large-scale data in comparison with state-of-the-art solutions such as Sacbe and Parsl. This work has been partially supported by the Spanish Ministerio de Economia y Competitividad under project grant TIN2016-79637-P, "Towards Unification of HPC and Big Data Paradigms".
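A back-of-the-envelope sketch of the gear-up idea: give each stage enough parallel workers that every stage reaches a common target velocity; the per-stage timings and the target rate below are invented.

```python
# Gear up slow stages until all stages match a target velocity (illustrative).
import math

stage_secs_per_item = {"ingest": 0.02, "encrypt": 0.35, "compress": 0.11}
target_items_per_sec = 20

for stage, secs in stage_secs_per_item.items():
    workers = math.ceil(target_items_per_sec * secs)   # gears for this stage
    print(f"{stage}: {workers} worker(s) -> "
          f"{workers / secs:.0f} items/s capacity")
```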
Efficient, Dependable Storage of Human Genome Sequencing Data
Understanding the human genome impacts many areas of life. The data derived from the human genome are enormous: millions of samples await sequencing, and each sequenced human genome can occupy hundreds of gigabytes of storage space. Human genomes are critical because they are extremely valuable for research and because they can provide delicate information about the health status of individuals, identify their donors, or even reveal information about the donors' relatives. The size and criticality of these genomes, in addition to the amount of data produced by medical and life-science institutions, demand computing systems that are scalable while also being secure, dependable, auditable, and affordable. Existing storage infrastructures are so expensive that cost-efficiency in storing human genomes cannot be ignored, and in general they lack the appropriate knowledge and mechanisms to protect the privacy of biological-sample donors. This thesis proposes an efficient, secure, and auditable human-genome storage system for medical and life-science institutions. It enhances traditional storage ecosystems with privacy techniques, data-size reduction, and auditability in order to enable the efficient and dependable use of public cloud-computing infrastructures to store human genomes. The contributions of this thesis include (1) a study of the privacy sensitivity of human genomes; (2) a method for systematically detecting the privacy-sensitive portions of genomes; (3) data-size reduction algorithms specialized for sequenced genome data; (4) an independent auditing scheme for secure, dispersed data storage; and (5) a complete storage flow that attains reasonable privacy, security, and dependability guarantees at modest costs (e.g., less than 1/Genome/Year) by integrating the proposed mechanisms with appropriate storage configurations.
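A toy sketch combining two of the listed contributions, sensitive-region detection and dispersed storage, under heavy assumptions: the sensitive-region table, the per-read granularity, and the hash-based dispersal are all invented for illustration.

```python
# Toy flow: encrypt privacy-sensitive reads, then disperse across clouds.
from cryptography.fernet import Fernet

# Assumed sensitive-region table; real detection is a thesis contribution.
SENSITIVE_REGIONS = {("chr6", 29_900_000, 33_100_000)}

def is_sensitive(chrom: str, pos: int) -> bool:
    return any(c == chrom and lo <= pos < hi
               for c, lo, hi in SENSITIVE_REGIONS)

def store_read(chrom, pos, bases: bytes, key: bytes, clouds: list):
    blob = Fernet(key).encrypt(bases) if is_sensitive(chrom, pos) else bases
    clouds[hash((chrom, pos)) % len(clouds)].append(blob)   # disperse

clouds = [[], [], []]
key = Fernet.generate_key()
store_read("chr6", 30_000_000, b"ACGTACGT", key, clouds)    # encrypted
store_read("chr1", 1_000, b"TTGACA", key, clouds)           # stored plain
```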
BEEBS: Open Benchmarks for Energy Measurements on Embedded Platforms
This paper presents and justifies an open benchmark suite named BEEBS, targeted at evaluating the energy consumption of embedded processors. We explore the possible sources of energy consumption, then select individual benchmarks from contemporary suites to cover these areas. Version one of BEEBS is presented here and contains 10 benchmarks that cover a wide range of typical embedded applications. The benchmark suite is portable across diverse architectures and is freely available.
The benchmark suite is extensively evaluated, and the properties of its constituent programs are analysed. Using real hardware platforms, we show case examples which illustrate the difference in power dissipation between three processor architectures and their related ISAs. We observe significant differences in the average instruction dissipation between the architectures of 4.4x, specifically 170 µW/MHz (ARM Cortex-M0), 65 µW/MHz (Adapteva Epiphany) and 88 µW/MHz (XMOS XS1-L1).
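One way to read the µW/MHz figures: a scaling of k µW/MHz is exactly k picojoules per clock cycle, since (k·f) µW divided by f million cycles per second leaves k pJ per cycle. Per-instruction energy additionally depends on cycles per instruction, which presumably is why the reported per-instruction spread (4.4x) exceeds the 170/65 ≈ 2.6x per-cycle ratio.

```python
# Unit check: k uW/MHz == k pJ per clock cycle,
# because (k * f) uW / (f * 1e6 cycles/s) = k * 1e-12 J/cycle.
for core, k in [("ARM Cortex-M0", 170),
                ("Adapteva Epiphany", 65),
                ("XMOS XS1-L1", 88)]:
    print(f"{core}: {k} uW/MHz = {k} pJ/cycle")
```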
Compression models and tools for omics data (Modelos de compressão e ferramentas para dados ómicos)
The ever-increasing development of high-throughput sequencing technologies, and the huge volume of data they generate as a consequence, have revolutionized biological research and discovery. Motivated by this, we investigate in this thesis methods capable of providing an efficient representation of omics data in compressed or encrypted form, and then employ them to analyze omics data.
First and foremost, we describe a number of measures for quantifying the information in and between omics sequences. Then, we present finite-context models (FCMs), substitution-tolerant Markov models (STMMs), and a combination of the two, which are specialized in modeling biological data, for data compression and analysis.
To ease the storage of the aforementioned data deluge, we design two lossless data compressors for genomic data and one for proteomic data. The methods work on the basis of (a) a combination of FCMs and STMMs or (b) that combination along with repeat models and a competitive prediction model. Tests on various synthetic and real data show that they outperform previously proposed methods in terms of compression ratio.
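A minimal order-k finite-context model of the kind described above, reduced to Laplace-smoothed context counts; a real compressor would feed these probabilities to an arithmetic coder, which this sketch omits.

```python
# Order-k finite-context model (FCM) over a DNA alphabet (illustrative).
from collections import defaultdict

class FCM:
    def __init__(self, k: int, alphabet: str = "ACGT", alpha: float = 1.0):
        self.k, self.alphabet, self.alpha = k, alphabet, alpha
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, seq: str):
        # Count each symbol conditioned on its k preceding symbols.
        for i in range(self.k, len(seq)):
            self.counts[seq[i - self.k:i]][seq[i]] += 1

    def prob(self, context: str, symbol: str) -> float:
        c = self.counts[context]
        total = sum(c.values()) + self.alpha * len(self.alphabet)
        return (c[symbol] + self.alpha) / total   # smoothed estimate

m = FCM(k=2)
m.train("ACGTACGTACGA")
print(m.prob("AC", "G"))   # highest for G: "ACG" dominates this context
```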
Privacy of genomic data is a topic that has recently come into focus with developments in the field of personalized medicine. We propose a tool that is able to represent genomic data in a securely encrypted fashion and, at the same time, to compact FASTA and FASTQ sequences by a factor of three. It employs AES encryption accompanied by a shuffling mechanism that improves data security. The results show it is faster than general-purpose and special-purpose algorithms.
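One plausible toy rendering of the shuffle-then-encrypt idea, assuming a key-seeded permutation of fixed-size blocks before AES; the block layout and the use of Fernet (AES-CBC plus HMAC) are illustrative, not the thesis tool's actual format.

```python
# Keyed block shuffle followed by AES encryption (assumed design).
import random
from cryptography.fernet import Fernet

def keyed_shuffle(blocks: list, key: bytes) -> list:
    order = list(range(len(blocks)))
    random.Random(key).shuffle(order)   # same key -> same permutation
    return [blocks[i] for i in order]

key = Fernet.generate_key()
blocks = [b"ACGTACGTACGTACGT", b"TTGACATTGACATTGA", b"CCGACCGACCGACCGA"]
ciphertext = Fernet(key).encrypt(b"".join(keyed_shuffle(blocks, key)))
print(len(ciphertext), "bytes of shuffled, AES-encrypted payload")
```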
Compression techniques can also be employed for the analysis of omics data. With this in mind, we investigate the identification of regions that are unique to a species with respect to close species, which can give us insight into evolutionary traits. For this purpose, we design two alignment-free tools that can accurately find and visualize distinct regions between two collections of DNA or protein sequences. Applying them to modern humans with respect to Neanderthals, we found a number of regions absent in Neanderthals that may express new functionalities associated with the evolution of modern humans.
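A toy alignment-free sketch in the same spirit: flag windows of one sequence whose k-mers never occur in the other. The window size, k, and test sequences are arbitrary.

```python
# Alignment-free detection of regions of `a` absent from `b` (illustrative).
def kmers(s: str, k: int) -> set:
    return {s[i:i + k] for i in range(len(s) - k + 1)}

def absent_regions(a: str, b: str, k: int = 8, win: int = 32) -> list:
    b_kmers = kmers(b, k)
    return [(start, start + win)
            for start in range(0, len(a) - win + 1, win)
            if kmers(a[start:start + win], k).isdisjoint(b_kmers)]

a = "A" * 40 + "ACGTTGCA" * 8
b = "A" * 100
print(absent_regions(a, b))   # only the window inside the ACGTTGCA repeat
```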
Finally, we investigate the identification of genomic rearrangements, which play important roles in genetic disorders and cancer, by employing a compression technique. For this purpose, we design a tool that can accurately localize and visualize small- and large-scale rearrangements between two genomic sequences. The results of applying the proposed tool on several synthetic and real data sets conform to results partially reported by wet-laboratory approaches, e.g., FISH analysis.
Doctoral Programme in Informatics Engineering.