15 research outputs found

    Storage and aggregation for fast analytics systems

    Computing in the last decade has been characterized by the rise of data-intensive scalable computing (DISC) systems. In particular, recent years have witnessed a rapid growth in the popularity of fast analytics systems. These systems exemplify a trend where queries that previously involved batch processing (e.g., running a MapReduce job) over massive amounts of data are increasingly expected to be answered in near real-time with low latency. This dissertation addresses the problem that existing designs for various components in the DISC software stack do not meet the requirements of fast analytics applications. In this work, we focus specifically on two components:
    1. Key-value storage: Recent work has focused primarily on supporting reads with high throughput and low latency. However, fast analytics applications require that new data entering the system (e.g., newly crawled web pages, currently trending topics) be made available to queries and analysis code quickly. Along with supporting reads efficiently, these systems must therefore also support writes with high throughput, which current systems fail to do. In the first part of this work, we solve this problem by proposing a new key-value storage system, called the WriteBuffer (WB) Tree, that provides up to 30× higher write performance and similar read performance compared to current high-performance systems.
    2. GroupBy-Aggregate: Fast analytics systems require fast, incremental aggregation of data with low-latency access to results. Existing techniques are memory-inefficient and do not support incremental aggregation efficiently when aggregate data overflows to disk. In the second part of this dissertation, we propose a new data structure called the Compressed Buffer Tree (CBT) to implement memory-efficient in-memory aggregation. We also show how the WB Tree can be modified to support efficient disk-based aggregation.
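The core ideas behind both components can be illustrated together: absorb writes in memory, merge duplicate keys with an aggregation function as they arrive, and flush to the slower store in batches. The sketch below is a deliberately simplified, hypothetical illustration of that buffering/incremental-aggregation pattern; it is not the WB Tree or CBT themselves, and all names (`BufferedStore`, `flush_threshold`) are invented for the example.

```python
# Minimal sketch (hypothetical, not the WB Tree/CBT): a key-value store that
# absorbs writes in an in-memory buffer, merges duplicate keys with a
# user-supplied aggregation function, and flushes to a slower store in batches.

class BufferedStore:
    def __init__(self, flush_threshold=2, aggregate=lambda old, new: new):
        self.buffer = {}                  # recent writes, absorbed in memory
        self.disk = {}                    # stand-in for the on-disk structure
        self.flush_threshold = flush_threshold
        self.aggregate = aggregate        # e.g. addition for GroupBy-Aggregate

    def put(self, key, value):
        # Duplicate keys are merged immediately, so the buffer stays compact
        # and aggregation happens incrementally as data arrives.
        if key in self.buffer:
            self.buffer[key] = self.aggregate(self.buffer[key], value)
        else:
            self.buffer[key] = value
        if len(self.buffer) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # One large batch write instead of many small random writes.
        for key, value in self.buffer.items():
            if key in self.disk:
                self.disk[key] = self.aggregate(self.disk[key], value)
            else:
                self.disk[key] = value
        self.buffer.clear()

    def get(self, key):
        # Reads consult the buffer first, so fresh data is visible immediately.
        if key in self.buffer and key in self.disk:
            return self.aggregate(self.disk[key], self.buffer[key])
        return self.buffer.get(key, self.disk.get(key))


# Count occurrences of trending topics incrementally, GroupBy-Aggregate style.
store = BufferedStore(flush_threshold=2, aggregate=lambda a, b: a + b)
for word in ["web", "topic", "web", "web", "topic"]:
    store.put(word, 1)
```

The design choice worth noting is that merging on insert keeps the buffer size proportional to the number of distinct keys rather than the number of writes, which is what makes the approach memory-efficient for skewed workloads.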

    Analyzing epigenomic data in a large-scale context

    While large amounts of epigenomic data are publicly available, their retrieval in a form suitable for downstream analysis is a bottleneck in current research. In a typical analysis, users are required to download huge files that span the entire genome, even if they are only interested in a small subset (e.g., promoter regions) or an aggregation thereof. Moreover, complex operations on genome-level data are not always feasible on a local computer due to resource limitations. The DeepBlue Epigenomic Data Server mitigates this issue by providing a robust server that affords a powerful API for searching, filtering, transforming, aggregating, enriching, and downloading data from several epigenomic consortia. Furthermore, its main component implements data storage and manipulation methods that scale with the increasing amount of epigenetic data, making it an ideal resource for researchers who seek to integrate epigenomic data into their analysis workflows. This work also presents companion tools that utilize the DeepBlue API to enable users not proficient in scripting or programming languages to analyze epigenomic data in a user-friendly way: (i) an R/Bioconductor package that integrates DeepBlue into the R analysis workflow. The extracted data are automatically converted into suitable R data structures for downstream analysis and visualization within the Bioconductor framework; (ii) a web portal that enables users to search, select, filter, and download the epigenomic data available in the DeepBlue Server. This interface provides elements such as data tables, grids, and data selections, developed to empower users to find the required epigenomic data in a straightforward interface; (iii) DIVE, a web data analysis tool that allows researchers to perform large-scale epigenomic data analysis in a programming-free environment.
DIVE enables users to compare their datasets to the datasets available in the DeepBlue Server in an intuitive interface that summarizes the comparison of hundreds of datasets in a simple chart. Given the large amount of data available in DIVE, methods are provided that suggest the most similar datasets for a comparative analysis. Furthermore, these tools are integrated and capable of sharing results among themselves, creating a powerful large-scale epigenomic data analysis environment. The DeepBlue Epigenomic Data Server and its ecosystem were well received by the International Human Epigenome Consortium and have already attracted much attention from the epigenomic research community, with currently 160 registered users and more than three million anonymous workflow processing requests since their release.
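The key idea, selecting and aggregating regions server-side so that only the requested subset (or a single summary value) is transferred rather than whole-genome files, can be sketched as follows. This is an illustrative toy, not the actual DeepBlue API; the function names, the region tuples, and the signal values are all invented for the example.

```python
# Hypothetical sketch of server-side selection and aggregation: the server
# keeps the genome-scale data and returns only the regions, or an aggregate
# over them, that the query asks for. Not the real DeepBlue API.

# Tiny stand-in dataset of (chromosome, start, end, signal) regions.
REGIONS = [
    ("chr1", 100, 200, 0.8),
    ("chr1", 500, 900, 0.2),
    ("chr2", 150, 400, 0.6),
    ("chr1", 120, 180, 0.9),
]

def select_regions(chromosome, start, end):
    """Return only the regions overlapping the requested interval."""
    return [r for r in REGIONS
            if r[0] == chromosome and r[1] < end and r[2] > start]

def aggregate_signal(regions):
    """Aggregate a selection server-side, so one number crosses the wire."""
    if not regions:
        return 0.0
    return sum(r[3] for r in regions) / len(regions)

# A client interested only in a promoter-sized window transfers two regions
# (or one mean), not a whole-genome file.
promoters = select_regions("chr1", 90, 250)
mean_signal = aggregate_signal(promoters)
```

In the real server, the same principle is what makes the R/Bioconductor package and the web portal practical: clients formulate a query, and the heavy filtering and aggregation happen where the data already lives.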

    Development and clinical translation of optical and software methods for endomicroscopic imaging

    Endomicroscopy is an emerging technology that aims to improve clinical diagnostics by allowing for in vivo microscopy in difficult-to-reach areas of the body. This is most commonly achieved by using coherent fibre bundles to relay light for illumination and imaging to and from the area under investigation. Endomicroscopy's attraction for researchers and clinicians is two-fold: on the one hand, it can reduce the invasiveness of a diagnostic procedure by removing the need for biopsies; on the other hand, it allows for structural and functional in vivo imaging. Endomicroscopic images acquired through optical fibre bundles exhibit artefacts that deteriorate image quality and contrast. This thesis aims to improve an existing endomicroscopy imaging system by exploring two methods that mitigate these artefacts. Because image quality was found to be inadequate without further processing, the first, software-based method takes several processing steps from the literature and implements them in an existing endomicroscopy device, with a focus on real-time application to enable clinical use. A contribution to the field is that two different approaches are implemented and compared quantitatively and qualitatively in a manner not directly undertaken before. This first attempt at improving endomicroscopy image quality relies solely on digital image processing methods and is developed with a strong focus on real-time applicability in clinical use. Both approaches are compared on pre-clinical and clinical human imaging data. The second method targets the effect of inter-core coupling, which reduces contrast in fibre images. A parallelised confocal imaging method is developed in which a sequence of images is acquired while selectively illuminating groups of fibre cores through the use of a spatial light modulator. A bespoke algorithm then creates a composite image in a final processing step, detecting unwanted light and removing it from the final image.
    This method is shown to reduce the negative impact of inter-core coupling on image contrast for small imaging targets, while no benefit was found in large, scattering samples.
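One widely used processing step of the kind referenced above is removing the fibre bundle's honeycomb pattern by interpolating between the core centres, where the only valid signal lives. The sketch below shows one simple variant, a Gaussian-weighted average of sparse core samples onto a dense grid; the specific weighting, the core positions, and the function name are illustrative assumptions, not the thesis implementation.

```python
# Illustrative sketch (assumed details, not the thesis implementation):
# a fibre-bundle image carries valid signal only at the core centres, so a
# common artefact-removal step interpolates between cores. Here each output
# pixel is a Gaussian-weighted average of the nearby core samples.

import numpy as np

def reconstruct(core_xy, core_values, shape, sigma=2.0):
    """Interpolate sparse core samples (x, y) onto a dense image grid."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    num = np.zeros(shape, dtype=float)
    den = np.zeros(shape, dtype=float)
    for (cx, cy), v in zip(core_xy, core_values):
        # Weight of this core at every pixel, falling off with distance.
        w = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
        num += w * v
        den += w
    # Normalise; the epsilon guards pixels far from every core.
    return num / np.maximum(den, 1e-12)

# Three cores on a small grid; the output varies smoothly between them,
# with no honeycomb structure.
cores = [(2, 2), (7, 2), (4, 7)]
values = [1.0, 0.5, 0.0]
image = reconstruct(cores, values, shape=(10, 10))
```

A design point worth noting: interpolation of this kind removes the sampling artefact but cannot undo inter-core coupling, which is why the second, hardware-based method of selective core illumination is needed to recover contrast.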