4,214 research outputs found

    Coresets-Methods and History: A Theoreticians Design Pattern for Approximation and Streaming Algorithms

    We present a technical survey of the state-of-the-art approaches in data reduction and the coreset framework. These include geometric decompositions, gradient methods, random sampling, sketching and random projections. We further outline their importance for the design of streaming algorithms and give a brief overview of lower-bounding techniques.
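    To make the coreset idea concrete, below is a minimal Python sketch of sensitivity-style importance sampling for a 1-means (mean-cost) summary; the function name, the crude sensitivity bound and all parameters are illustrative assumptions, not a construction taken from the survey.

```python
import numpy as np

def kmeans_coreset(X, m, rng=None):
    """Sensitivity-style sampling sketch for a 1-means coreset.

    Points are sampled with probability proportional to a crude upper bound
    on their sensitivity (a uniform share plus their share of the total cost)
    and weighted by the inverse sampling probability, so the weighted
    clustering cost approximates the cost on the full data.
    """
    rng = np.random.default_rng(rng)
    n = len(X)
    mu = X.mean(axis=0)
    dist = np.linalg.norm(X - mu, axis=1) ** 2
    s = 1.0 / n + dist / (dist.sum() + 1e-12)   # crude sensitivity bound
    p = s / s.sum()
    idx = rng.choice(n, size=m, replace=True, p=p)
    weights = 1.0 / (m * p[idx])
    return X[idx], weights

# usage: a 200-point weighted summary of 100,000 points
X = np.random.default_rng(0).normal(size=(100_000, 5))
C, w = kmeans_coreset(X, 200, rng=1)
print(C.shape, w.sum())   # weights roughly sum to n in expectation
```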

    Strategies and algorithms for clustering large datasets: a review

    The exploratory nature of data analysis and data mining makes clustering one of the most common tasks in these kinds of projects. Increasingly, these projects come from many different application areas, such as biology, text analysis and signal analysis, and involve larger and larger datasets in both the number of examples and the number of attributes. Classical clustering methods such as K-means or hierarchical clustering are beginning to reach their maximum capability to cope with this increase in dataset size. The limitation of these algorithms comes either from the need to store all the data in memory or from their computational time complexity. These problems have opened an area of research into algorithms able to reduce this data overload. Some solutions come from data preprocessing, either by transforming the data to a lower-dimensional manifold that represents the structure of the data or by summarizing the dataset with a smaller subset of examples that carries equivalent information. A different perspective is to modify the classical clustering algorithms, or to derive new ones, able to cluster larger datasets. This perspective relies on many different strategies. Techniques such as sampling, on-line processing, summarization, data distribution and efficient data structures have been applied to the problem of scaling clustering algorithms; a small sketch of one such strategy follows below. This paper presents a review of different strategies and clustering algorithms that apply these techniques. The aim is to cover the range of methodologies applied to clustering data and how they can be scaled.
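    As an illustration of the sampling and online-processing strategies mentioned above, here is a hedged Python sketch of a mini-batch k-means update, in which each step touches only a small random batch rather than the whole dataset; the names and parameter choices are assumptions for the example, not an algorithm taken from the review.

```python
import numpy as np

def minibatch_kmeans(X, k, batch=256, iters=100, rng=0):
    """Toy mini-batch k-means: each iteration refines the centers using only a
    small random sample, so the update pattern is the streaming one even though
    this toy version keeps X in memory."""
    rng = np.random.default_rng(rng)
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    counts = np.zeros(k)
    for _ in range(iters):
        B = X[rng.choice(len(X), batch, replace=False)]
        assign = np.argmin(((B[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j, x in zip(assign, B):
            counts[j] += 1
            centers[j] += (x - centers[j]) / counts[j]   # per-center learning rate
    return centers

X = np.random.default_rng(0).normal(size=(50_000, 2))
print(minibatch_kmeans(X, k=3).shape)   # (3, 2)
```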

    The GNAT method for nonlinear model reduction: effective implementation and application to computational fluid dynamics and turbulent flows

    The Gauss-Newton with approximated tensors (GNAT) method is a nonlinear model reduction method that operates on fully discretized computational models. It achieves dimension reduction by a Petrov-Galerkin projection associated with residual minimization; it delivers computational efficiency by a hyper-reduction procedure based on the 'gappy POD' technique. Originally presented in Ref. [1], where it was applied to implicit nonlinear structural-dynamics models, this method is further developed here and applied to the solution of a benchmark turbulent viscous flow problem. To begin, this paper develops global state-space error bounds that justify the method's design and highlight its advantages in terms of minimizing components of these error bounds. Next, the paper introduces a 'sample mesh' concept that enables a distributed, computationally efficient implementation of the GNAT method in finite-volume-based computational-fluid-dynamics (CFD) codes. The suitability of GNAT for parameterized problems is highlighted with the solution of an academic problem featuring moving discontinuities. Finally, the capability of this method to reduce by orders of magnitude the core-hours required for large-scale CFD computations, while preserving accuracy, is demonstrated with the simulation of turbulent flow over the Ahmed body. For an instance of this benchmark problem with over 17 million degrees of freedom, GNAT outperforms several other nonlinear model-reduction methods, reduces the required computational resources by more than two orders of magnitude, and delivers a solution that differs by less than 1% from its high-dimensional counterpart.
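    In generic reduced-order-modeling notation (the symbols below are assumptions for illustration, not necessarily those of the paper), the residual-minimization step that GNAT-type methods perform at each time step can be sketched as:

```latex
% Approximate the state as x \approx x_{\mathrm{ref}} + \Phi y with a reduced basis
% \Phi \in \mathbb{R}^{N \times n}, n \ll N, and minimize the discrete residual r
% with Gauss-Newton iterations:
\min_{y \in \mathbb{R}^{n}} \tfrac{1}{2}\,\bigl\| r\!\left(x_{\mathrm{ref}} + \Phi y\right) \bigr\|_2^{2},
\qquad
y^{(k+1)} = y^{(k)} - \bigl(J_k^{\top} J_k\bigr)^{-1} J_k^{\top} r^{(k)},
\quad
J_k = \left.\frac{\partial r}{\partial x}\right|_{x^{(k)}} \Phi .
% Hyper-reduction in the gappy-POD spirit evaluates r only at sampled entries
% P^{\top} r and reconstructs it as r \approx \Phi_r (P^{\top}\Phi_r)^{+} P^{\top} r.
```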

    Progressive construction of a parametric reduced-order model for PDE-constrained optimization

    An adaptive approach to using reduced-order models as surrogates in PDE-constrained optimization is introduced that breaks the traditional offline-online framework of model order reduction. A sequence of optimization problems constrained by a given Reduced-Order Model (ROM) is defined with the goal of converging to the solution of a given PDE-constrained optimization problem. For each reduced optimization problem, the constraining ROM is trained by sampling the High-Dimensional Model (HDM) at the solutions of some of the previous problems in the sequence. The reduced optimization problems are equipped with a nonlinear trust region based on a residual error indicator to keep the optimization trajectory in a region of the parameter space where the ROM is accurate. A technique for incorporating sensitivities into a Reduced-Order Basis (ROB) is also presented, along with a methodology for computing sensitivities of the reduced-order model that minimize the distance, in a suitable norm, to the corresponding HDM sensitivities. The proposed reduced optimization framework is applied to subsonic aerodynamic shape optimization and shown to reduce the number of queries to the HDM by a factor of 4-5 compared to the optimization problem solved using only the HDM, with errors in the optimal solution far below 0.1%.
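    A schematic form of one reduced trust-region subproblem in such a progressive framework might look as follows; the symbols (reduced objective f_r, reduced constraints c_r, error indicator \varphi, radius \Delta_k) are generic assumptions for illustration rather than the paper's exact formulation:

```latex
% Solve the optimization problem with the current ROM as surrogate, but only
% inside the region where a residual-based error indicator certifies accuracy:
\min_{\mu}\; f_r(\mu)
\quad\text{s.t.}\quad
c_r(\mu) \le 0,
\qquad
\varphi(\mu) \le \Delta_k .
% After each subproblem, the HDM is sampled at the new iterate: if the ROM was
% accurate the step is accepted, the ROM is retrained with the new sample and the
% radius may grow; otherwise the radius is shrunk and the subproblem re-solved.
```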

    Rule-based preprocessing for data stream mining using complex event processing

    Data preprocessing is known to be essential to produce accurate data from which mining methods can extract valuable knowledge. When data constantly arrive from one or more sources, preprocessing techniques need to be adapted to handle these data streams efficiently. To help domain experts define and execute preprocessing tasks for data streams, this paper proposes the use of active rule-based systems and, more specifically, complex event processing (CEP) languages and engines. The main contribution of our approach is the formulation of preprocessing procedures as event detection rules, expressed in an SQL-like language, which give domain experts a simple way to manipulate temporal data. This idea is materialized into a publicly available solution that integrates a CEP engine with a library for online data mining. To evaluate our approach, we present three practical scenarios in which CEP rules preprocess data streams with the aim of adding temporal information, transforming features and handling missing values. Experiments show how CEP rules provide an effective language for expressing preprocessing tasks in a modular and high-level manner, without significant time and memory overheads. The resulting data streams not only help improve the predictive accuracy of classification algorithms, but also reduce the complexity of the decision models and, in some cases, the time needed for learning.
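    The paper expresses such rules in an SQL-like CEP language; as a language-neutral illustration of the same kind of rule (sliding-window imputation plus a simple temporal feature), here is a minimal Python sketch with hypothetical names, not the syntax of any particular CEP engine.

```python
from collections import deque

def impute_with_window(stream, window=10):
    """Stream preprocessing 'rule': replace missing readings (None) with the
    mean of the last `window` observed values, and attach the event's position
    in the stream as a simple temporal feature."""
    recent = deque(maxlen=window)
    for t, value in enumerate(stream):
        if value is None:
            value = sum(recent) / len(recent) if recent else 0.0
        recent.append(value)
        yield {"t": t, "value": value}

# usage on a toy stream with gaps
print(list(impute_with_window([1.0, None, 3.0, None, None, 6.0], window=3)))
```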

    Scalable mining for classification rules in relational databases

    doi:10.1214/lnms/1196285404
    Data mining is a process of discovering useful patterns (knowledge) hidden in extremely large datasets. Classification is a fundamental data mining function, and some other functions can be reduced to it. In this paper we propose a novel classification algorithm (classifier) called MIND (MINing in Databases). MIND can be phrased in such a way that its implementation is very easy using the extended relational calculus SQL, which in turn allows the classifier to be built into a relational database system directly. MIND is truly scalable with respect to I/O efficiency, which is important since scalability is a key requirement for any data mining algorithm. We have built a prototype of MIND in the relational database management system DB2 and have benchmarked its performance. We describe the working prototype and report the measured performance with respect to the previous method of choice. MIND scales not only with the size of datasets but also with the number of processors on an IBM SP2 computer system. Even on uniprocessors, MIND scales well beyond dataset sizes previously published for classifiers. We also give some insights that may have an impact on the evolution of the extended relational calculus SQL.
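    To illustrate the general idea of pushing the classifier's counting work into the database, here is a hedged sketch using an in-memory SQLite table and a hypothetical toy schema: one GROUP BY per candidate attribute yields the class histograms needed to score a decision-tree split. This illustrates the principle only and is not MIND's actual SQL formulation.

```python
import sqlite3
from collections import defaultdict

# hypothetical toy table; the real formulation operates on far larger relations
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE train (outlook TEXT, label TEXT)")
con.executemany("INSERT INTO train VALUES (?, ?)",
                [("sunny", "no"), ("sunny", "yes"), ("rain", "yes"),
                 ("rain", "yes"), ("overcast", "yes"), ("overcast", "no")])

# one GROUP BY per candidate split attribute: the aggregation runs in the database
rows = con.execute(
    "SELECT outlook, label, COUNT(*) FROM train GROUP BY outlook, label"
).fetchall()

hist = defaultdict(dict)
for attr_val, label, cnt in rows:
    hist[attr_val][label] = cnt

def gini(counts):
    n = sum(counts.values())
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

total = sum(sum(c.values()) for c in hist.values())
split_gini = sum(sum(c.values()) / total * gini(c) for c in hist.values())
print(f"weighted gini for split on 'outlook': {split_gini:.3f}")
```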

    A MapReduce solution for associative classification of big data

    Associative classifiers have proven to be very effective in classification problems. Unfortunately, the algorithms used for learning these classifiers are not able to adequately manage big data because of time complexity and memory constraints. To overcome such drawbacks, we propose a distributed association rule-based classification scheme shaped according to the MapReduce programming model. The scheme mines classification association rules (CARs) using a properly enhanced, distributed version of the well-known FP-Growth algorithm. Once CARs have been mined, the proposed scheme performs a distributed rule pruning. The set of surviving CARs is used to classify unlabeled patterns. The memory usage and time complexity for each phase of the learning process are discussed, and the scheme is evaluated on seven real-world big datasets on the Hadoop framework, characterizing its scalability and achievable speedup on small computer clusters. The proposed solution for associative classifiers turns out to be suitable for practically addressing big datasets even with modest hardware support. Comparisons with two state-of-the-art distributed learning algorithms are also discussed in terms of accuracy, model complexity, and computation time.
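    As a toy illustration of the MapReduce pattern that distributed FP-Growth variants typically start from (the item-counting phase), here is a minimal Python sketch; the shard layout, the min-support threshold and all names are assumptions for the example, not the paper's scheme.

```python
from collections import Counter
from itertools import chain

def map_phase(transactions):
    # each "mapper" emits (item, 1) pairs from its share of the transactions
    for t in transactions:
        for item in t:
            yield item, 1

def reduce_phase(pairs):
    # the "reducer" sums the emitted counts per item
    counts = Counter()
    for item, c in pairs:
        counts[item] += c
    return counts

shards = [
    [{"bread", "milk"}, {"bread", "beer"}],           # shard handled by mapper 1
    [{"milk", "beer", "bread"}, {"milk", "diapers"}],  # shard handled by mapper 2
]
pairs = chain.from_iterable(map_phase(s) for s in shards)
freq = {i: c for i, c in reduce_phase(pairs).items() if c >= 2}  # min support = 2
print(freq)   # frequent single items that seed the FP-Growth step
```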

    Monitoring data streams

    Stream monitoring is concerned with analyzing data that is represented in the form of infinite streams. This field has gained prominence in recent years, as streaming data is generated in increasing volume and dimension in a variety of areas. It finds application in monitoring industrial sensors, 'smart' technology such as smart houses and smart cars, and wearable devices used for medical and physiological monitoring, as well as in environmental surveillance and finance. However, stream monitoring is a challenging task due to the diverse and changing nature of the streaming data, its high volume and its high dimensionality, with thousands of sensors producing streams with millions of measurements over short time spans. Automated, scalable and efficient analysis of these streams can help to keep track of important events, highlight relevant aspects and provide better insights into the monitored system. In this thesis, we propose techniques adapted to these tasks in supervised and unsupervised settings, in particular Stream Classification and Stream Dependency Monitoring. After a motivating introduction, we introduce concepts related to streaming data and discuss technological frameworks that have emerged to deal with streaming data in the second chapter of this thesis. We introduce the notion of information-theoretic entropy as a useful basis for data monitoring in the third chapter. In the second part of the thesis, we present Probabilistic Hoeffding Trees, a novel approach to stream classification. We show how probabilistic learning greatly improves the flexibility of decision trees and their ability to adapt to changes in data streams. The general technique is applicable to a variety of classification models and is fast to compute without significantly greater memory cost compared to regular Hoeffding Trees. We show that our technique achieves results that are better than or on par with current state-of-the-art tree classification models on a variety of large synthetic and real-life data sets. In the third part of the thesis, we concentrate on unsupervised monitoring of data streams. We use mutual information as an entropic measure to identify the most important relationships in a monitored system. By using the powerful concept of mutual information we can, first, capture relevant aspects in a great variety of data sources with different underlying concepts and possible relationships and, second, analyze theoretical and computational complexity. We present the MID and DIMID algorithms. They perform extremely efficiently on high-dimensional data streams and provide accurate results, outperforming state-of-the-art algorithms for dependency monitoring. In the fourth part of this thesis, we introduce delayed relationships as a further feature of the dependency analysis. In reality, the phenomenon monitored by one type of sensor might depend on another, but the measurable effects can be delayed. This delay might be due to technical reasons, e.g. different stream-processing speeds, or because the effects actually appear delayed over time. We present Loglag, the first algorithm that monitors dependency with respect to an optimal delay. It utilizes several approximation techniques to achieve competitive resource requirements. We demonstrate its scalability and accuracy on real-world data, and also give theoretical guarantees on its accuracy.
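    As a small concrete anchor for the Hoeffding Tree component mentioned above, the sketch below computes the standard Hoeffding bound that such trees use to decide when enough examples have been seen to commit to a split; the numbers and names are illustrative assumptions, not results from the thesis.

```python
import math

def hoeffding_bound(value_range, delta, n):
    """Hoeffding epsilon: with probability 1 - delta, the observed mean of n
    i.i.d. samples of a quantity with range `value_range` lies within epsilon
    of its true mean. A Hoeffding tree splits when the gain difference between
    the two best attributes exceeds this epsilon."""
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))

# information gain lies in [0, 1] bits for binary labels
eps = hoeffding_bound(1.0, delta=1e-6, n=5000)
best, second = 0.12, 0.05          # hypothetical gain estimates
print(eps, best - second > eps)    # split now if the gap beats the bound
```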