
    Capturing Data Uncertainty in High-Volume Stream Processing

We present the design and development of a data stream system that captures data uncertainty from data collection through query processing to final result generation. Our system focuses on data that is naturally modeled as continuous random variables. For such data, our system employs an approach grounded in probability and statistical theory to capture data uncertainty and integrates this approach into high-volume stream processing. The first component of our system captures the uncertainty of raw data streams from sensing devices. Since such raw streams can be highly noisy and may not carry sufficient information for query processing, our system employs probabilistic models of the data generation process and stream-speed inference to transform raw data into a desired format with an uncertainty metric. The second component captures uncertainty as data propagates through query operators. To efficiently quantify the result uncertainty of a query operator, we explore a variety of techniques based on probability and statistical theory to compute the result distribution at stream speed. We are currently working with a group of scientists to evaluate our system using traces collected from the domains of (and eventually in the real systems for) hazardous weather monitoring and object tracking and monitoring.
Comment: CIDR 200
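
To make the idea of computing a result distribution at stream speed concrete, here is a minimal Python sketch under a strong simplifying assumption: each reading is modeled as an independent Gaussian, so a SUM operator's result distribution can be maintained in constant time per tuple. The class and attribute names (GaussianReading, SumOperator) are illustrative, not the system's actual API.

```python
import math
from dataclasses import dataclass

@dataclass
class GaussianReading:
    """A sensor value modeled as a continuous random variable N(mean, var)."""
    mean: float
    var: float

class SumOperator:
    """Streaming SUM whose result is itself Gaussian: for independent
    inputs, means and variances simply add, so the full result
    distribution is maintained in O(1) work per incoming tuple."""
    def __init__(self):
        self.mean = 0.0
        self.var = 0.0

    def consume(self, r: GaussianReading) -> GaussianReading:
        self.mean += r.mean
        self.var += r.var
        return GaussianReading(self.mean, self.var)

op = SumOperator()
for raw in [(20.1, 0.25), (19.8, 0.16), (20.5, 0.25)]:
    result = op.consume(GaussianReading(*raw))
print(f"SUM ~ N({result.mean:.2f}, {result.var:.2f}), "
      f"std = {math.sqrt(result.var):.2f}")
```

Operators whose results are not closed under the input distribution family would need the approximation techniques the abstract alludes to; the closed-form case above is the simplest instance.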

    A Platform for Scalable One-Pass Analytics using MapReduce

Today’s one-pass analytics applications tend to be data-intensive in nature and require the ability to process high volumes of data efficiently. MapReduce is a popular programming model for processing large datasets using a cluster of machines. However, the traditional MapReduce model is not well suited for one-pass analytics, since it is geared towards batch processing and requires the data set to be fully loaded into the cluster before running analytical queries. This paper examines, from a systems standpoint, what architectural design changes are necessary to bring the benefits of the MapReduce model to incremental one-pass analytics. Our empirical and theoretical analyses of Hadoop-based MapReduce systems show that the widely used sort-merge implementation for partitioning and parallel processing poses a fundamental barrier to incremental one-pass analytics, despite various optimizations. To address these limitations, we propose a new data analysis platform that employs hash techniques to enable fast in-memory processing, and a new frequent-key-based technique to extend such processing to workloads that require a large key-state space. Evaluation of our Hadoop-based prototype using real-world workloads shows that our new platform significantly improves the progress of map tasks, allows the reduce progress to keep up with the map progress with up to three orders of magnitude fewer internal data spills, and enables results to be returned continuously during the job.
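
As a rough illustration of hash-based aggregation with a frequent-key policy (a sketch in the spirit of the platform, not its actual implementation), the code below keeps running aggregates for the hottest keys in memory, so results are available continuously, and spills cold keys instead of sort-merging the whole input. The class name, the frequency-based eviction heuristic, and the dict standing in for on-disk spill files are all assumptions.

```python
from collections import Counter

class HashAggregator:
    """Incremental hash aggregation: hot keys' running sums stay in
    memory; cold keys are spilled rather than buffered for a sort-merge."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.in_memory = {}     # key -> running sum (hot keys)
        self.freq = Counter()   # observed key frequencies
        self.spilled = {}       # stand-in for on-disk spill files

    def add(self, key, value):
        self.freq[key] += 1
        if key in self.in_memory:
            self.in_memory[key] += value
        elif len(self.in_memory) < self.capacity:
            self.in_memory[key] = self.spilled.pop(key, 0) + value
        else:
            coldest = min(self.in_memory, key=self.freq.__getitem__)
            if self.freq[key] > self.freq[coldest]:
                # Evict the coldest resident key and admit the hotter one.
                self.spilled[coldest] = (self.spilled.get(coldest, 0)
                                         + self.in_memory.pop(coldest))
                self.in_memory[key] = self.spilled.pop(key, 0) + value
            else:
                self.spilled[key] = self.spilled.get(key, 0) + value

agg = HashAggregator(capacity=2)
for k, v in [("a", 1), ("b", 2), ("a", 3), ("c", 4), ("a", 5)]:
    agg.add(k, v)
print(agg.in_memory, agg.spilled)   # {'a': 9, 'b': 2} {'c': 4}
```

The contrast with sort-merge is the point: partial answers for frequent keys exist at every moment, and only the infrequent tail pays the spill cost.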

    Exploiting the Interplay between Memory and Flash Storage in Embedded Sensor Devices

Although memory is an important constraint in embedded sensor nodes, existing embedded applications and systems are typically designed to work under the memory constraints of a single platform and do not consider the interplay between memory and flash storage. In this paper, we present the design of a memory-adaptive, flash-based embedded sensor system that allows an application to exploit the presence of flash and adapt to different amounts of RAM on the embedded device. We describe how such a system can be exploited by data-centric sensor applications. Our design involves several novel features: flash- and memory-efficient storage and indexing, techniques for efficient storage reclamation, and intelligent buffer management to maximize write coalescing. Our results show that our system is highly energy-efficient under different workloads, and can be configured for embedded sensor platforms with memory constraints ranging from a few kilobytes to hundreds of kilobytes.
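
A minimal sketch of the write-coalescing idea, assuming page-granularity flash writes: small records accumulate in a RAM buffer and are written out a full page at a time, so the flash write count drops from one per record to roughly one per page. PAGE_SIZE, the class name, and the flush policy are illustrative assumptions, not the paper's actual interface.

```python
PAGE_SIZE = 256  # bytes per flash page (typical for NOR flash on motes)

class FlashBuffer:
    """Coalesces small record writes into full-page flash writes.
    A smaller RAM budget simply means more frequent flushes, which is
    how the design adapts to platforms with very little memory."""
    def __init__(self, ram_budget: int):
        self.buf = bytearray()
        self.ram_budget = ram_budget
        self.page_writes = 0

    def append(self, record: bytes):
        self.buf.extend(record)
        # Flush whole pages once the RAM budget is reached.
        if len(self.buf) >= self.ram_budget:
            self.flush()

    def flush(self):
        while len(self.buf) >= PAGE_SIZE:
            page, self.buf = self.buf[:PAGE_SIZE], self.buf[PAGE_SIZE:]
            self.page_writes += 1   # one coalesced page write to flash
        # Any sub-page remainder stays buffered until the next flush.

buf = FlashBuffer(ram_budget=1024)
for _ in range(100):
    buf.append(b"x" * 16)   # 100 small 16-byte records
buf.flush()
print(buf.page_writes, "page writes instead of 100")
```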