    Constructing fading histograms from data streams

    The ability to collect data is changing drastically. Nowadays, data are gathered in the form of transient, potentially unbounded data streams, and memory restrictions preclude keeping all received data in memory. When dealing with massive data streams, it is mandatory to create compact representations of the data, also known as synopsis structures or summaries; reducing memory occupancy is of utmost importance when handling huge amounts of data. This paper addresses the problem of constructing histograms from data streams under error constraints. When constructing online histograms from data streams, there are two main requirements to satisfy: efficient updating and a controlled histogram error. Moreover, in dynamic environments, besides the need for compact summaries that capture the most important properties of the data, it is also essential to forget old data. Therefore, this paper presents sliding histograms and fading histograms, an abrupt and a smooth strategy, respectively, for forgetting outdated data.
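    A minimal sketch of the two forgetting strategies the abstract contrasts, assuming an equal-width bin layout and illustrative values for the decay factor and window size (none of these are the paper's parameters):

        from collections import deque

        class FadingHistogram:
            """Counts decay exponentially on every arrival: smooth forgetting."""

            def __init__(self, lo, hi, bins=10, alpha=0.997):
                self.lo, self.hi, self.bins, self.alpha = lo, hi, bins, alpha
                self.counts = [0.0] * bins

            def _bin(self, x):
                # Clamp x into [lo, hi) and map it to an equal-width bin index.
                x = min(max(x, self.lo), self.hi - 1e-12)
                return int((x - self.lo) / (self.hi - self.lo) * self.bins)

            def update(self, x):
                self.counts = [c * self.alpha for c in self.counts]  # fade old data
                self.counts[self._bin(x)] += 1.0

        class SlidingHistogram:
            """Only the most recent `window` items count: abrupt forgetting."""

            def __init__(self, lo, hi, bins=10, window=1000):
                self.lo, self.hi, self.bins = lo, hi, bins
                self.window, self.buffer = window, deque()
                self.counts = [0] * bins

            def _bin(self, x):
                x = min(max(x, self.lo), self.hi - 1e-12)
                return int((x - self.lo) / (self.hi - self.lo) * self.bins)

            def update(self, x):
                self.buffer.append(x)
                self.counts[self._bin(x)] += 1
                if len(self.buffer) > self.window:  # expire the oldest item
                    self.counts[self._bin(self.buffer.popleft())] -= 1

    The fading variant keeps no buffer and geometrically down-weights history, while the sliding variant spends O(window) memory to forget each item exactly once.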

    Monitoring Network Data Streams

    Ph.D. (Doctor of Philosophy)

    Mining complex data in highly streaming environments

    Data is growing at a rapid rate because of advanced hardware and software technologies and platforms such as e-health systems, sensor networks, and social media. One of the challenging problems is storing, processing, and transferring this big data in an efficient and effective way. One solution to tackle these challenges is to construct synopses by means of data summarization techniques. Motivated by the fact that without summarization, processing, analyzing, and communicating this vast amount of data is inefficient, this thesis introduces new summarization frameworks with the main goals of reducing communication costs and accelerating data mining processes in different application scenarios. Specifically, we study the following big data summarization techniques: (i) dimensionality reduction, (ii) clustering, and (iii) histograms, considering their importance and wide use in various areas and domains. In our work, we propose three different frameworks using these summarization techniques to cover three different aspects of big data, "Volume", "Velocity", and "Variety", in centralized and decentralized platforms. We use dimensionality reduction techniques for summarizing large 2D arrays, and clustering and histograms for processing multiple data streams. Given the importance and rapid growth of emerging e-health applications such as tele-radiology and tele-medicine, which require fast, low-cost, and often lossless access to massive amounts of medical images and data over band-limited channels, our first framework attempts to summarize streams of large-volume medical images (e.g. X-rays) for the purpose of compression. Significant amounts of correlation and redundancy exist across different medical images. These can be extracted and used as a data summary to achieve better compression, and consequently lower storage and communication overheads on the network. We propose a novel memory-assisted compression framework as a learning-based universal coding scheme, which can complement any existing algorithm to further eliminate redundancies and similarities across images. This approach is motivated by the fact that, often in medical applications, massive amounts of correlated images from the same family are available as training data for learning the dependencies and deriving appropriate reference or synopsis models. The models can then be used for compression of any new image from the same family. In particular, dimensionality reduction techniques such as Principal Component Analysis (PCA) and Non-negative Matrix Factorization (NMF) are applied to a set of images from the training data to form the required reference models. The proposed memory-assisted compression allows each image to be processed independently of other images, and hence allows individual image access and transmission. In the second part of our work, we investigate the problem of summarizing distributed multidimensional data streams using clustering. We devise a distributed clustering framework, DistClusTree, that extends the centralized ClusTree approach. The main difficulty in distributed clustering is balancing communication costs and clustering quality. We tackle this in DistClusTree by combining spatial index summaries and online tracking for efficient local and global incremental clustering. We demonstrate through extensive experiments the efficacy of the framework in terms of communication costs and approximate clustering quality.
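    A hedged sketch of the memory-assisted idea described above: learn a low-dimensional reference model from training images of the same family, then represent each new image by its projection coefficients plus a residual that a conventional coder can compress further. The use of scikit-learn's PCA, the image size, and the component count are illustrative assumptions, not the thesis implementation.

        import numpy as np
        from sklearn.decomposition import PCA

        # Stand-in training set: images of the same family, flattened to vectors.
        rng = np.random.default_rng(0)
        train = rng.random((200, 64 * 64))        # e.g. 64x64 image patches
        model = PCA(n_components=32).fit(train)   # reference/synopsis model

        def encode(image_vec):
            # Coefficients in the learned basis, plus the low-energy residual;
            # the residual is what a standard compressor would then encode.
            coeffs = model.transform(image_vec[None, :])
            residual = image_vec - model.inverse_transform(coeffs)[0]
            return coeffs, residual

        def decode(coeffs, residual):
            # Each image is reconstructed independently of all other images.
            return model.inverse_transform(coeffs)[0] + residual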
In the last part, we use a multidimensional index structure to merge distributed summaries into a centralized histogram, another widely used summarization technique, with application to approximate range-query answering. We propose the index-based Distributed Mergeable Summaries (iDMS) framework, based on kd-trees, which addresses this problem with generative data models: Gaussian mixture models (GMMs) and a Generative Adversarial Network (GAN). iDMS maintains a global approximate kd-tree at a central site via GMMs or GANs upon new arrivals of streaming data at local sites. Experimental results validate the effectiveness and efficiency of iDMS against baseline distributed settings in terms of approximation error and communication costs.
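    A minimal sketch of the generative-model idea behind iDMS as summarized above: each local site fits a GMM to its current window and ships only the model parameters, from which the central site regenerates approximate points to refresh the global index. The component count, sample sizes, and use of scikit-learn are assumptions for illustration, not the framework's actual protocol.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def local_summary(window, k=5):
            # Summarize a site's multidimensional window as compact GMM parameters.
            gmm = GaussianMixture(n_components=k, random_state=0).fit(window)
            return gmm.weights_, gmm.means_, gmm.covariances_

        def central_refresh(summaries, per_site=1000, seed=0):
            # Draw approximate points from each site's shipped model; these
            # would feed the central kd-tree answering approximate range queries.
            rng = np.random.default_rng(seed)
            points = []
            for weights, means, covs in summaries:
                counts = rng.multinomial(per_site, weights)
                for n, mu, cov in zip(counts, means, covs):
                    if n:
                        points.append(rng.multivariate_normal(mu, cov, n))
            return np.vstack(points)

    Shipping a handful of GMM parameters instead of raw points is what keeps the communication cost low relative to the window size.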

    Change Detection in Streaming Data

    Change detection is the process of identifying differences in the state of an object or phenomenon by observing it at different times or different locations in space. In the streaming context, it is the process of segmenting a data stream into different segments by identifying the points where the stream dynamics change. The ability to detect, react to, and adapt to changes in streaming data plays an important role in many application areas, such as activity monitoring, data stream mining and machine learning, and data management with respect to data volume and quality. Decentralized change detection can be used in many interesting and important applications, such as environmental observation systems and medical monitoring systems. Although there is a great deal of work on distributed detection and data fusion, most of it focuses on one-time change detection solutions. A one-time change detection method processes the data once in response to an occurring change. The trade-offs of continuous distributed change detection include detection accuracy, space efficiency, detection delay, and communication efficiency. To address these trade-offs, a wildfire warning system is used as a motivating scenario. From the challenges and requirements of the wildfire warning system, change detection algorithms for streaming data are proposed as part of the solution to the wildfire warning system. By selecting various models of local change detection, different schemes for distributed change detection, and different data exchange protocols, different designs can be achieved. Based on this approach, the contributions of this dissertation are as follows. A general two-window framework for detecting changes in a single data stream is presented. A general synopsis-based change detection framework is proposed; theoretical and empirical analysis shows that the detection performance of a synopsis-based detector is similar to that of a non-synopsis change detector if a distance function quantifying the changes is preserved under the process of constructing the synopsis. A clustering-based change detection and cluster maintenance method over a sliding window is presented; the clustering-based detector can automatically detect changes in multivariate streaming data. A framework for decentralized change detection in wireless sensor networks is proposed. Finally, a distributed framework for clustering streaming data is proposed by extending the two-phase stream clustering approach that is widely used to cluster a single data stream.
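    A hedged sketch of the two-window idea mentioned above: keep a reference window and a current window over the stream, and flag a change when a distance between their empirical distributions exceeds a threshold. The Kolmogorov-Smirnov statistic and the window and threshold values here are illustrative choices, not the dissertation's.

        from collections import deque
        import numpy as np

        def ks_distance(a, b):
            # Two-sample Kolmogorov-Smirnov statistic: the largest gap
            # between the two empirical CDFs.
            grid = np.sort(np.concatenate([a, b]))
            cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
            cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
            return float(np.max(np.abs(cdf_a - cdf_b)))

        class TwoWindowDetector:
            def __init__(self, window=200, threshold=0.3):
                self.reference = deque(maxlen=window)
                self.current = deque(maxlen=window)
                self.threshold = threshold

            def update(self, x):
                if len(self.reference) < self.reference.maxlen:
                    self.reference.append(x)   # still filling the reference
                    return False
                self.current.append(x)
                if len(self.current) < self.current.maxlen:
                    return False
                d = ks_distance(np.array(self.reference), np.array(self.current))
                if d > self.threshold:
                    # Change detected: the current window becomes the reference.
                    self.reference = deque(self.current, maxlen=self.current.maxlen)
                    self.current.clear()
                    return True
                return False

    Swapping in a different distance function is exactly the degree of freedom the synopsis-based framework exploits: detection quality is preserved as long as the distance survives summarization.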

    Efficiently Processing Complex Queries in Sensor Networks

    Progressive Query Processing

    Ph.D. (Doctor of Philosophy)

    Efficient Algorithms to Compute Hierarchical Summaries from Big Data Streams

    Many data stream applications have hierarchical data, containing time, geographic locations, product information, clickstreams, server logs, or IP addresses. A hierarchical summary of such voluminous data offers multiple advantages, including compactness, quick understanding, and abstraction. The goal of this thesis is to design algorithmic approaches for summarizing hierarchical data streams. First, this thesis provides a theoretical analysis of the benchmark hierarchical heavy hitter algorithms and uncovers their shortcomings, such as high theoretical memory and update costs, and a coverage problem. To address these shortcomings, this thesis proposes efficient algorithms that offer deterministic estimation accuracy using O(η/ε) worst-case memory and O(η) worst-case time complexity per item, where ε ∈ [0,1] is a user-defined parameter and η is a small constant derived from the data. The proposed hierarchical heavy hitter algorithms are shown to improve significantly over existing algorithms, both theoretically and empirically. Next, this thesis introduces a new concept called hierarchically correlated heavy hitters, which differs from existing hierarchical summarization techniques. The thesis provides a formal definition of the proposed concept and compares it with existing hierarchical summarization approaches, both at the definition level and empirically. It also proposes an efficient hierarchy-aware algorithm for computing hierarchically correlated heavy hitters. The proposed algorithm offers deterministic estimation accuracy using O(η/(ε_p · ε_s)) worst-case memory and O(η) worst-case time complexity per item, where η is as defined previously and ε_p ∈ [0,1], ε_s ∈ [0,1] are further user-defined parameters. Finally, the thesis proposes a special hierarchical data structure and algorithm to summarize spatiotemporal data. It can be used to extract interesting and useful patterns from high-speed spatiotemporal data streams at multiple spatial and temporal granularities. Theoretical and empirical analyses are provided, which show that the proposed data structure is very efficient in terms of data storage and query response: it updates a single item in O(1) time and answers a point query in O(1) time. Importantly, the memory requirement of the proposed data structure is independent of the size of the data and depends only on the user-supplied parameter vectors ψ and φ. In summary, this thesis provides a general framework consisting of a set of algorithms and data structures to compute hierarchical summaries of big data streams. All of the proposed algorithms exploit a lattice structure built from the hierarchical attributes of the data to compute different hierarchical summaries, which can be used to address various data analytic issues in many emerging applications.
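    A hedged, deliberately simplified sketch of the hierarchical heavy hitter notion the abstract analyzes: every item (here an IPv4 address, one of the hierarchical attributes listed above) is counted at each ancestor in its prefix hierarchy, and a prefix is reported if its count, after discounting already-reported heavy descendants, exceeds a φ·N threshold. Real streaming algorithms bound memory with counter-based sketches; this exact-counting version is for illustration only.

        from collections import Counter

        def prefixes(ip):
            # Ancestors of an IPv4 address in the octet hierarchy:
            # "10.1.2.3" -> "10", "10.1", "10.1.2", "10.1.2.3"
            parts = ip.split(".")
            return [".".join(parts[:i]) for i in range(1, len(parts) + 1)]

        def hierarchical_heavy_hitters(stream, phi=0.1):
            counts, n = Counter(), 0
            for ip in stream:
                n += 1
                counts.update(prefixes(ip))
            hhh, discount = {}, Counter()
            # Deepest prefixes first, so descendants settle before ancestors.
            for node in sorted(counts, key=lambda p: -p.count(".")):
                parent = node.rsplit(".", 1)[0] if "." in node else None
                residual = counts[node] - discount[node]
                if residual >= phi * n:
                    hhh[node] = residual       # heavy after discounting
                    if parent is not None:
                        discount[parent] += counts[node]
                elif parent is not None:
                    discount[parent] += discount[node]
            return hhh

    For example, hierarchical_heavy_hitters(["10.1.2.3", "10.1.2.3", "10.1.9.9"], phi=0.5) reports the heavy leaf but not its prefixes, since their residual counts fall below the threshold once the leaf is discounted.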

    Quality-of-Service-Aware Data Stream Processing

    Data stream processing has gained more and more importance during the last years, in industry as well as in academia. Consider the monitoring of industrial processes as an example: sensors are mounted to gather large amounts of data within a short time range. Storing and post-processing these data may occasionally be useless or even impossible. On the one hand, only a small part of the monitored data is relevant, so to use the storage capacity efficiently, only a preselection of the data should be considered. On the other hand, the volume of incoming data may simply be too high to be stored in time, or, in other words, the technical effort for storing the data in time would be out of scale. Processing data streams in the context of this thesis means applying database operations to the stream in an on-the-fly manner, without explicitly storing the data. The challenges of this task lie in the limited amount of resources, while data streams are potentially infinite. Furthermore, data stream processing must be fast, and the results have to be disseminated as soon as possible. This thesis focuses on the latter issue. The goal is to provide a so-called Quality-of-Service (QoS) for the data stream processing task. To this end, adequate QoS metrics such as maximum output delay and minimum result data rate are defined. Thereafter, a cost model for deriving the required processing resources from the specified QoS is presented. On that basis, the stream processing operations are scheduled. Depending on the required QoS and on the available resources, the emphasis can be shifted among the individual resources and QoS metrics. Calculating and scheduling resources requires a lot of expert knowledge regarding the characteristics of the stream operations and of the incoming data streams. Often, this knowledge is based on experience, so a revision of the resource calculation and reservation becomes necessary from time to time. This leads to occasional interruptions of the continuous data stream processing, of the delivery of results, and thus of the negotiated Quality-of-Service. The proposed robustness concept supports the user and facilitates a decrease in the number of interruptions by providing more resources.
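    To make the two QoS metrics named above concrete, here is a hedged sketch of a monitor that an operator could consult per emitted result: it tracks output delay and result rate against negotiated bounds and reports violations that would trigger rescheduling or resource renegotiation. The class, metric names, and thresholds are illustrative assumptions, not the thesis's cost model.

        import time

        class QosMonitor:
            def __init__(self, max_delay_s=0.5, min_rate_hz=100.0):
                # Negotiated bounds: maximum output delay, minimum result rate.
                self.max_delay_s, self.min_rate_hz = max_delay_s, min_rate_hz
                self.emitted, self.started = 0, time.monotonic()

            def observe(self, item_arrival_ts):
                # Output delay: time from item arrival to result emission.
                now = time.monotonic()
                delay = now - item_arrival_ts
                self.emitted += 1
                rate = self.emitted / (now - self.started + 1e-9)
                violations = []
                if delay > self.max_delay_s:
                    violations.append(f"delay {delay:.3f}s > {self.max_delay_s}s")
                if rate < self.min_rate_hz:
                    violations.append(f"rate {rate:.1f}/s < {self.min_rate_hz}/s")
                return violations  # non-empty list signals a QoS violation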

    Low-latency, query-driven analytics over voluminous multidimensional, spatiotemporal datasets

    Ubiquitous data collection from sources such as remote sensing equipment, networked observational devices, location-based services, and sales tracking has led to the accumulation of voluminous datasets; IDC projects that by 2020 we will generate 40 zettabytes of data per year, while Gartner and ABI estimate that 20-35 billion new devices will be connected to the Internet in the same time frame. The storage and processing requirements of these datasets far exceed the capabilities of modern computing hardware, which has led to the development of distributed storage frameworks that can scale out by assimilating more computing resources as necessary. While challenging in its own right, storing and managing voluminous datasets is only the precursor to a broader field of study: extracting knowledge, insights, and relationships from the underlying datasets. The basic building block of this knowledge discovery process is the analytic query, encompassing both query instrumentation and evaluation. This dissertation is centered on query-driven exploratory and predictive analytics over voluminous, multidimensional datasets. Both of these types of analysis represent a higher-level abstraction over classical query models; rather than indexing every discrete value for subsequent retrieval, our framework autonomously learns the relationships and interactions between dimensions in the dataset (including time-series and geospatial aspects) and makes the information readily available to users. This functionality includes statistical synopses, correlation analysis, hypothesis testing, probabilistic structures, and predictive models that not only enable the discovery of nuanced relationships between dimensions, but also allow future events and trends to be predicted. This requires specialized data structures and partitioning algorithms, along with adaptive reductions in the search space and management of the inherent trade-off between timeliness and accuracy. The algorithms presented in this dissertation were evaluated empirically on real-world geospatial time-series datasets in a production environment, and are broadly applicable across other storage frameworks.
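    One way to picture the "statistical synopses" and "correlation analysis" mentioned above is a single-pass accumulator kept per partition, so means, variances, and correlations between two dimensions are answerable without touching raw data. A minimal Welford-style sketch; the two-variable layout is an assumption, not the dissertation's framework.

        class OnlineStats:
            """Single-pass mean/variance/correlation for a pair of dimensions,
            usable as a per-partition statistical synopsis."""

            def __init__(self):
                self.n = 0
                self.mx = self.my = 0.0
                self.sxx = self.syy = self.sxy = 0.0

            def update(self, x, y):
                self.n += 1
                dx = x - self.mx          # deviation from the old mean of x
                self.mx += dx / self.n
                dy = y - self.my          # deviation from the old mean of y
                self.my += dy / self.n
                # Standard co-moment trick: old deviation times new deviation.
                self.sxx += dx * (x - self.mx)
                self.syy += dy * (y - self.my)
                self.sxy += dx * (y - self.my)

            def correlation(self):
                # Assumes n >= 2 and non-constant inputs.
                return self.sxy / (self.sxx * self.syy) ** 0.5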

    Behaviour Profiling using Wearable Sensors for Pervasive Healthcare

    In recent years, sensor technology has advanced in terms of hardware sophistication and miniaturisation. This has led to the incorporation of unobtrusive, low-power sensors into networks centred on human participants, called Body Sensor Networks. Amongst the most important applications of these networks is their use in healthcare and healthy living. The technology has the potential to decrease the burden on healthcare systems by providing care at home, enabling early detection of symptoms, monitoring recovery remotely, and avoiding serious chronic illnesses by promoting healthy living through objective feedback. In this thesis, machine learning and data mining techniques are developed to estimate medically relevant parameters from a participant's activity and behaviour parameters, derived from simple, body-worn sensors. The first abstraction from raw sensor data is the recognition and analysis of activity. Machine learning analysis is applied to a study of activity profiling to detect impaired limb and torso mobility. One of the advances this thesis makes to activity recognition research is the application of machine learning to the analysis of 'transitional activities': the transient activity that occurs as people change from one activity to another. A framework is proposed for the detection and analysis of transitional activities. To demonstrate the utility of transition analysis, we apply the algorithms to a study of participants undergoing and recovering from surgery, and demonstrate that it is possible to see meaningful changes in transitional activity as the participants recover. Assuming long-term monitoring, we expect a large historical database of activity to accumulate quickly. We develop algorithms to mine temporal associations between activity patterns, which gives an outline of the user's routine. Methods for visual and quantitative analysis of routine using this summary data structure are proposed and validated. The activity and routine mining methodologies developed for specialised sensors are adapted to a smartphone application, enabling large-scale use. Validation of the algorithms is performed using datasets collected in laboratory settings and free-living scenarios. Finally, future research directions and potential improvements to the techniques developed in this thesis are outlined.
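    For a concrete flavour of the first abstraction step (raw sensor data to activity labels), a hedged sketch: sliding-window features over a tri-axial accelerometer stream fed to an off-the-shelf classifier. The window length, feature set, and model are generic choices for illustration, not the thesis pipeline.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def window_features(acc, win=128, step=64):
            # acc: (n_samples, 3) tri-axial accelerometer signal.
            feats = []
            for start in range(0, len(acc) - win + 1, step):
                w = acc[start:start + win]
                mag = np.linalg.norm(w, axis=1)          # acceleration magnitude
                feats.append(np.concatenate([
                    w.mean(axis=0), w.std(axis=0),       # per-axis statistics
                    [mag.mean(), mag.std(), np.ptp(mag)] # magnitude statistics
                ]))
            return np.array(feats)

        # Hypothetical labelled recordings: train_acc is (n, 3), train_labels
        # holds one activity label per extracted window.
        # clf = RandomForestClassifier().fit(window_features(train_acc), train_labels)
        # predicted = clf.predict(window_features(new_acc))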