Feature selection in high-dimensional dataset using MapReduce
This paper describes a distributed MapReduce implementation of the minimum
Redundancy Maximum Relevance algorithm, a popular feature selection method in
bioinformatics and network inference problems. The proposed approach handles
both tall/narrow and wide/short datasets. We further provide an open source
implementation based on Hadoop/Spark, and illustrate its scalability on
datasets involving millions of observations or features.
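The distributed implementation is the paper's contribution; the mRMR criterion itself is standard, and a minimal single-machine sketch of its greedy selection loop is given below. The use of scikit-learn's mutual_info_score and the assumption of discrete features are ours, not the paper's, and the MapReduce layer that distributes these mutual-information computations is omitted.

```python
# Minimal single-machine sketch of mRMR greedy selection (discrete features).
import numpy as np
from sklearn.metrics import mutual_info_score

def mrmr(X, y, k):
    """Greedily select k columns of X: maximize relevance to y,
    penalize average redundancy with already-selected features."""
    n_features = X.shape[1]
    # relevance: mutual information between each feature and the class
    relevance = [mutual_info_score(X[:, j], y) for j in range(n_features)]
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best_j, best_score = -1, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            # redundancy: average MI between candidate j and selected features
            redundancy = np.mean([mutual_info_score(X[:, j], X[:, s])
                                  for s in selected])
            if relevance[j] - redundancy > best_score:
                best_j, best_score = j, relevance[j] - redundancy
        selected.append(best_j)
    return selected
```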
An overview of recent distributed algorithms for learning fuzzy models in Big Data classification
Nowadays, a huge amount of data is generated, often in very short time intervals and in various formats, by a number of heterogeneous sources such as social networks and media, mobile devices, internet transactions, and networked devices and sensors. These data, identified as Big Data in the literature, are characterized by the popular Vs: Value, Veracity, Variety, Velocity and Volume. In particular, Value focuses on the useful knowledge that may be mined from data. Thus, in recent years, a number of data mining and machine learning algorithms have been proposed to extract knowledge from Big Data. These algorithms have generally been implemented using ad hoc programming paradigms, such as MapReduce, on specific distributed computing frameworks, such as Apache Hadoop and Apache Spark. In the context of Big Data, fuzzy models currently play a significant role, thanks to their capability of handling vague and imprecise data and their innate interpretability. In this work, we give an overview of the most recent distributed learning algorithms for generating fuzzy classification models for Big Data. In particular, we first show some design and implementation details of these learning algorithms. Thereafter, we compare them in terms of accuracy and interpretability. Finally, we discuss their scalability.
Automatically Leveraging MapReduce Frameworks for Data-Intensive Applications
MapReduce is a popular programming paradigm for developing large-scale,
data-intensive computation. Many frameworks that implement this paradigm have
recently been developed. To leverage these frameworks, however, developers must
become familiar with their APIs and rewrite existing code. Casper is a new tool
that automatically translates sequential Java programs into the MapReduce
paradigm. Casper identifies potential code fragments to rewrite and translates
them in two steps: (1) Casper uses program synthesis to search for a program
summary (i.e., a functional specification) of each code fragment. The summary
is expressed using a high-level intermediate language resembling the MapReduce
paradigm and verified to be semantically equivalent to the original using a
theorem prover. (2) Casper generates executable code from the summary, using
either the Hadoop, Spark, or Flink API. We evaluated Casper by automatically
converting real-world, sequential Java benchmarks to MapReduce. The resulting
benchmarks perform up to 48.2x faster than the originals.
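To make the rewrite concrete, here is an illustrative analogue in Python/PySpark rather than Casper's actual Java output: a sequential accumulation loop of the kind Casper analyzes, next to the map/reduce form that its program summary corresponds to. The example program and names are ours.

```python
# Illustrative before/after of a Casper-style loop-to-MapReduce rewrite.
from pyspark import SparkContext

def sum_of_squares_sequential(values):
    total = 0
    for v in values:                # sequential loop a tool like Casper analyzes
        total += v * v
    return total

def sum_of_squares_spark(sc, values):
    # the loop's summary is a map (square) followed by a reduce (sum),
    # which translates directly to the Spark API
    return sc.parallelize(values).map(lambda v: v * v).reduce(lambda a, b: a + b)

if __name__ == "__main__":
    sc = SparkContext("local[*]", "casper-sketch")
    data = list(range(1000))
    assert sum_of_squares_sequential(data) == sum_of_squares_spark(sc, data)
```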
Classification algorithms for Big Data with applications in the urban security domain
A classification algorithm is a versatile tool that can serve as a predictor of the
future or as an analytical tool to understand the past. Several obstacles prevent
classification from scaling to a large Volume, Velocity, Variety or Value. The aim
of this thesis is to scale distributed classification algorithms beyond current limits,
assess the state-of-practice of Big Data machine learning frameworks and validate
the effectiveness of a data science process in improving urban safety.
We found that massive datasets with many large-domain categorical features pose
a difficult challenge for existing classification algorithms. We propose associative
classification as a possible answer, and develop several novel techniques to distribute
the training of an associative classifier among parallel workers and improve the final
quality of the model. The experiments, run on a real large-scale dataset with more
than 4 billion records, confirmed the quality of the approach.
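As a rough illustration of the distribution strategy, the sketch below has each worker mine candidate class-association rules on its own data partition while a driver merges the counts and applies a global support threshold. The size-two itemset cap, the threshold value, and multiprocessing standing in for a cluster are simplifying assumptions, not the thesis's actual techniques.

```python
# Sketch: distribute associative-classifier rule mining across workers.
from collections import Counter
from itertools import combinations
from multiprocessing import Pool

MIN_SUPPORT = 0.01  # illustrative global support threshold

def mine_partition(partition):
    """Count (itemset, class) co-occurrences on one worker's partition."""
    counts = Counter()
    for items, label in partition:
        for size in (1, 2):  # itemsets capped at size 2 for brevity
            for subset in combinations(sorted(items), size):
                counts[(subset, label)] += 1
    return counts

def train(partitions):
    with Pool() as pool:                       # stand-in for cluster workers
        local_counts = pool.map(mine_partition, partitions)
    merged = sum(local_counts, Counter())      # reduce step: add local counts
    n = sum(len(p) for p in partitions)
    # keep candidate rules (itemset -> class) that clear the global threshold
    return {rule: c for rule, c in merged.items() if c / n >= MIN_SUPPORT}
```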
To assess the state-of-practice of Big Data machine learning frameworks and
streamline the process of integration and fine-tuning of the building blocks, we
developed a generic, self-tuning tool to extract knowledge from network traffic
measurements. The result is a system that offers human-readable models of the data
with minimal user intervention, validated by experiments on large collections of
real-world passive network measurements.
A good portion of this dissertation is dedicated to the study of a data science
process to improve urban safety. First, we shed some light on the feasibility of a
system to monitor social messages from a city for emergency relief. We then propose
a methodology to mine temporal patterns in social issues, like crimes. Finally,
we propose a system to integrate the findings of Data Science on the citizenry's
perception of safety and communicate its results to decision makers in a timely
manner. We applied and tested the system in a real Smart City scenario, set in Turin,
Italy.
Garbage collection auto-tuning for Java MapReduce on Multi-Cores
MapReduce has been widely accepted as a simple programming pattern that can form the basis for efficient, large-scale, distributed data processing. The success of the MapReduce pattern has led to a variety of implementations for different computational scenarios. In this paper we present MRJ, a MapReduce Java framework for multi-core architectures. We evaluate its scalability on a four-core, hyperthreaded Intel Core i7 processor, using a set of standard MapReduce benchmarks. We investigate the significant impact that Java runtime garbage collection has on the performance and scalability of MRJ. We propose the use of memory management auto-tuning techniques based on machine learning. With our auto-tuning approach, we are able to achieve MRJ performance within 10% of optimal on 75% of our benchmark tests.
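A minimal sketch of the measurement loop behind any such tuner is shown below: run the same Java workload under different heap and collector settings and keep the fastest. MRJ itself predicts good settings with machine learning rather than searching exhaustively, and Benchmark.jar plus the small flag grid are illustrative assumptions.

```python
# Sketch: brute-force baseline for JVM GC tuning (MRJ replaces the
# exhaustive search with a learned model).
import itertools, subprocess, time

HEAPS = ["-Xmx512m", "-Xmx1g", "-Xmx2g"]               # standard JVM heap flags
COLLECTORS = ["-XX:+UseParallelGC", "-XX:+UseG1GC"]    # standard collector flags

def run_once(flags):
    """Time one run of the benchmark under the given JVM flags."""
    start = time.perf_counter()
    subprocess.run(["java", *flags, "-jar", "Benchmark.jar"], check=True)
    return time.perf_counter() - start

best = min(itertools.product(HEAPS, COLLECTORS), key=lambda f: run_once(list(f)))
print("fastest configuration:", best)
```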
Tupleware: Redefining Modern Analytics
There is a fundamental discrepancy between the targeted and actual users of
current analytics frameworks. Most systems are designed for the data and
infrastructure of the Googles and Facebooks of the world: petabytes of data
distributed across large cloud deployments consisting of thousands of cheap
commodity machines. Yet, the vast majority of users operate clusters ranging
from a few to a few dozen nodes, analyze relatively small datasets of up to a
few terabytes, and perform primarily compute-intensive operations. Targeting
these users fundamentally changes the way we should build analytics systems.
This paper describes the design of Tupleware, a new system specifically aimed
at the challenges faced by the typical user. Tupleware's architecture brings
together ideas from the database, compiler, and programming languages
communities to create a powerful end-to-end solution for data analysis. We
propose novel techniques that consider the data, computations, and hardware
together to achieve maximum performance on a case-by-case basis. Our
experimental evaluation quantifies the impact of our novel techniques and shows
orders of magnitude performance improvement over alternative systems.
Hadoop neural network for parallel and distributed feature selection
In this paper, we introduce a theoretical basis for a Hadoop-based neural network for parallel and distributed feature selection in Big Data sets. It is underpinned by a binary associative-memory neural network which is highly amenable to parallel and distributed processing and fits with the Hadoop paradigm. There are many feature selectors described in the literature, all with various strengths and weaknesses. We present the implementation details of five feature selection algorithms constructed using our artificial neural network framework embedded in Hadoop YARN. Hadoop allows parallel and distributed processing. Each feature selector can be divided into subtasks, and the subtasks can then be processed in parallel. Multiple feature selectors can also be processed simultaneously (in parallel), allowing multiple feature selectors to be compared. We identify commonalities among the five feature selectors. All can be processed in the framework using a single representation, and the overall processing can be greatly reduced by processing the common aspects of the feature selectors only once and propagating these aspects across all five feature selectors as necessary. This allows both the best feature selector and the features to select to be identified for large, high-dimensional data sets by exploiting the efficiency and flexibility of embedding the binary associative-memory neural network in Hadoop.
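As a rough sketch of the building block, the code below implements a counting variant of a binary correlation-matrix memory and one illustrative way to read feature-relevance scores out of its weight matrix; the scoring rule is our assumption and is not one of the paper's five selectors.

```python
# Sketch: counting variant of a binary correlation-matrix memory (CMM).
import numpy as np

class BinaryCMM:
    def __init__(self, n_inputs, n_classes):
        self.W = np.zeros((n_classes, n_inputs))

    def train(self, x, y):
        # x: binary feature vector, y: one-hot class vector; training
        # superimposes their outer product onto the weight matrix
        self.W += np.outer(y, x)

    def feature_scores(self):
        # a feature whose counts concentrate in a single class row
        # discriminates well between the classes
        totals = self.W.sum(axis=0) + 1e-12   # avoid division by zero
        return self.W.max(axis=0) / totals

cmm = BinaryCMM(n_inputs=4, n_classes=2)
cmm.train(np.array([1, 0, 1, 0]), np.array([1, 0]))
cmm.train(np.array([0, 1, 1, 0]), np.array([0, 1]))
print(cmm.feature_scores())  # feature 2 co-occurs with both classes: lower score
```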
An Improved Associative Classification Algorithm based on Incremental Rules
In associative classification (AC), the rule generation step is necessarily exhaustive because of the search problems inherited from association rule mining. In addition, the entire rule set must be induced prior to constructing the classifier. This article proposes a new AC algorithm called Dynamic Covering Associative Classification (DCAC) that learns each rule from a training dataset, removes its classified instances, and then learns the next rule from the remaining unclassified data rather than from the original training dataset. This ensures that the exhaustive steps of rule evaluation and candidate generation are no longer needed, thereby maintaining a real-time rule generation process. The proposed algorithm constantly amends the support and confidence of each rule rather than restricting itself to the support and confidence computed from the original dataset. Experiments on 20 datasets from different domains showed that the proposed algorithm generates higher-quality and more accurate classifiers than other AC rule induction approaches.
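A minimal sketch of the covering loop just described follows; single-item rules and a fixed confidence threshold stand in for the paper's full rule-generation machinery.

```python
# Sketch of the DCAC-style covering loop: learn a rule, remove the
# instances it covers, relearn on what remains.
from collections import Counter

def best_rule(data):
    """Pick the single-item rule (item -> label) with the highest confidence,
    computed on the *current* data rather than the original dataset."""
    item_counts, pair_counts = Counter(), Counter()
    for items, label in data:
        for item in sorted(items):
            item_counts[item] += 1
            pair_counts[(item, label)] += 1
    if not pair_counts:
        return None
    (item, label), hits = max(pair_counts.items(),
                              key=lambda kv: kv[1] / item_counts[kv[0][0]])
    return item, label, hits / item_counts[item]

def dcac(data, min_confidence=0.6):
    rules = []
    while data:
        rule = best_rule(data)
        if rule is None or rule[2] < min_confidence:
            break
        rules.append(rule)
        item = rule[0]
        # drop the instances the new rule classifies before learning the next
        data = [(items, y) for items, y in data if item not in items]
    return rules

examples = [({"a", "b"}, "pos"), ({"a"}, "pos"), ({"b", "c"}, "neg")]
print(dcac(examples))  # [('a', 'pos', 1.0), ('b', 'neg', 1.0)]
```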
- …