1,446 research outputs found

    Join Processing Using Filtering Techniques in MapReduce Clusters

    Doctoral dissertation -- Graduate School of Seoul National University, Department of Electrical and Computer Engineering, February 2014. Hyoung-Joo Kim.
    The join operation is one of the essential operations for data analysis, because large datasets must be joined to analyze heterogeneous data collected from different sources. MapReduce is a very useful framework for large-scale data analysis, but it is not well suited to joining multiple datasets, because it may produce a large number of redundant intermediate results irrespective of the size of the joined records. Several existing approaches improve join performance, but they apply only in specific circumstances or require multiple MapReduce jobs. To alleviate this problem, this dissertation proposes MFR-Join, a general join framework for processing equi-joins with filtering techniques in MapReduce. MFR-Join filters out redundant intermediate records within a single MapReduce job by applying filters in the map phase. To achieve this, the MapReduce framework is modified in two ways: first, map tasks are scheduled according to the processing order of the input datasets; second, filters are created dynamically from the join keys of the datasets in a distributed manner. Various filtering techniques that support the desired operations can be plugged into MFR-Join. Adaptive join processing methods are also proposed for cases in which join processing with filters performs worse than without them; the filters are applied according to their estimated performance, measured in terms of the false positive rate. Furthermore, two map task scheduling policies are provided: synchronous and asynchronous scheduling. The filtering techniques are then extended to multi-way joins, with filter application methods proposed for the two types of multi-way joins: common attribute joins and distinct attribute joins. The experimental results showed that the proposed approach outperformed existing join algorithms and reduced the size of the intermediate results when small portions of the input datasets were joined.
    Table of contents (front matter: Abstract; Contents; List of Figures; List of Tables):
    1 Introduction: 1.1 Research Background and Motivation; 1.2 Contributions (1.2.1 Join Processing with Filtering Techniques in MapReduce; 1.2.2 Adaptive Join Processing with Filtering Techniques in MFR-Join; 1.2.3 Multi-way Join Processing in MFR-Join); 1.3 Dissertation Overview
    2 Preliminaries and Related Work: 2.1 MapReduce; 2.2 Parallel and Distributed Join Algorithms in DBMS; 2.3 Join Algorithms in MapReduce (2.3.1 Map-side Joins; 2.3.2 Reduce-side Joins); 2.4 Multi-way Joins in MapReduce; 2.5 Filtering Techniques for Join Processing
    3 MFR-Join: A General Join Framework with Filtering Techniques in MapReduce: 3.1 MFR-Join Framework (3.1.1 Execution Overview; 3.1.2 Map Task Scheduling; 3.1.3 Filter Construction; 3.1.4 Filtering Techniques Applicable to MFR-Join; 3.1.5 API and Parameters); 3.2 Cost Analysis (3.2.1 Cost Model; 3.2.2 Effects of the Filters); 3.3 Evaluation (3.3.1 Experimental Setup; 3.3.2 Experimental Results)
    4 Adaptive Join Processing with Filtering Techniques in MFR-Join: 4.1 Adaptive Join Processing in MFR-Join (4.1.1 Execution Overview; 4.1.2 Additional Filter Operations for Adaptive Joins; 4.1.3 Early Detection of FPR Threshold Being Exceeded; 4.1.4 Map Task Scheduling Policies; 4.1.5 Additional Parameters for Adaptive Joins); 4.2 Join Cost and FPR Threshold Analysis (4.2.1 Cost of Adaptive Join; 4.2.2 Effects of FPR Threshold; 4.2.3 Effects of Map Task Scheduling Policy); 4.3 Evaluation (4.3.1 Experimental Setup; 4.3.2 Experimental Results)
    5 Multi-way Join Processing in MFR-Join: 5.1 Applying Filters to Multi-way Joins (5.1.1 Common Attribute Joins; 5.1.2 Distinct Attribute Joins; 5.1.3 General Multi-way Joins; 5.1.4 Cost Analysis); 5.2 Implementation Details (5.2.1 Partition Assignment; 5.2.2 MapReduce Functions); 5.3 Evaluation (5.3.1 Common Attribute Joins; 5.3.2 Distinct Attribute Joins)
    6 Conclusions and Future Work: 6.1 Conclusions; 6.2 Future Work (6.2.1 Integration with Data Warehouse Systems; 6.2.2 Join-based Applications; 6.2.3 Improving Scalability)
    References
    Summary (in Korean)
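    A minimal sketch of the map-phase filtering idea described in the abstract above, assuming a toy in-process driver rather than the dissertation's modified Hadoop framework: map over the first input R to build a Bloom filter from its join keys, apply that filter while mapping over the second input S so records that cannot join never become intermediate results, then join the survivors per key in the reduce step. The BloomFilter class, dataset names R and S, and filter sizing are illustrative assumptions.

```python
# Hypothetical illustration of filter-based equi-join in a single MapReduce-style pass.
import hashlib
from collections import defaultdict


class BloomFilter:
    """Tiny Bloom filter: false positives possible, false negatives impossible."""

    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits)

    def _positions(self, key):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos] = 1

    def might_contain(self, key):
        return all(self.bits[pos] for pos in self._positions(key))


def filtered_reduce_side_join(R, S):
    """Equi-join R and S on their first field, filtering S in the 'map' phase."""
    # Map over R first (mirroring scheduling by input-processing order) and
    # build the filter from R's join keys while collecting its records.
    bloom = BloomFilter()
    intermediate = defaultdict(lambda: ([], []))  # join key -> (records from R, records from S)
    for key, value in R:
        bloom.add(key)
        intermediate[key][0].append(value)

    # Map over S, discarding records whose keys cannot appear in R; only the
    # survivors become intermediate results shipped to reducers.
    for key, value in S:
        if bloom.might_contain(key):
            intermediate[key][1].append(value)

    # Reduce phase: emit the joined pairs per key.
    for key, (r_values, s_values) in intermediate.items():
        for r in r_values:
            for s in s_values:
                yield (key, r, s)


if __name__ == "__main__":
    R = [("k1", "r1"), ("k2", "r2")]
    S = [("k1", "s1"), ("k3", "s3"), ("k1", "s4")]
    print(list(filtered_reduce_side_join(R, S)))  # only k1 records survive the filter
```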

    An ontology enhanced parallel SVM for scalable spam filter training

    This is the post-print version of the final paper published in Neurocomputing. The published article is available from the link below. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. Copyright @ 2013 Elsevier B.V.
    Spam, under a variety of shapes and forms, continues to inflict increasing damage. Various approaches, including Support Vector Machine (SVM) techniques, have been proposed for spam filter training and classification. However, SVM training is a computationally intensive process. This paper presents a MapReduce-based parallel SVM algorithm for scalable spam filter training. By distributing, processing and optimizing subsets of the training data across multiple participating computer nodes, the parallel SVM reduces the training time significantly. Ontology semantics are employed to minimize the impact of accuracy degradation when distributing the training data among a number of SVM classifiers. Experimental results show that ontology-based augmentation improves the accuracy of the parallel SVM beyond that of the original sequential counterpart.
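    The partition-and-combine idea behind this kind of parallel SVM training can be sketched briefly: train independent SVMs on subsets of the corpus ("map"), then merge their support vectors and retrain a small final model ("reduce"). The sketch below is a generic cascade-style combination using scikit-learn, not the paper's MapReduce implementation, and it omits the ontology-based semantic augmentation; the partition count and synthetic data are stand-ins for a vectorized spam/ham corpus.

```python
# Hedged sketch: per-partition SVM training followed by support-vector combination.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC


def train_partition(X_part, y_part):
    """'Map' step: train a local SVM and return only its support vectors."""
    clf = SVC(kernel="linear").fit(X_part, y_part)
    return X_part[clf.support_], y_part[clf.support_]


def parallel_svm(X, y, num_partitions=4):
    """Split the data, train per-partition SVMs, and combine their support vectors."""
    sv_X, sv_y = [], []
    for X_part, y_part in zip(np.array_split(X, num_partitions),
                              np.array_split(y, num_partitions)):
        xs, ys = train_partition(X_part, y_part)  # would run on separate nodes in MapReduce
        sv_X.append(xs)
        sv_y.append(ys)

    # 'Reduce' step: the union of support vectors is usually far smaller than the full
    # corpus, so retraining on it is cheap while approximating the global decision boundary.
    return SVC(kernel="linear").fit(np.vstack(sv_X), np.concatenate(sv_y))


if __name__ == "__main__":
    # Synthetic stand-in for a vectorized spam/ham corpus.
    X, y = make_classification(n_samples=2000, n_features=50, random_state=0)
    model = parallel_svm(X, y)
    print("training accuracy:", model.score(X, y))
```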

    Streaming Similarity Self-Join

    We introduce and study the problem of computing the similarity self-join in a streaming context (SSSJ), where the input is an unbounded stream of items arriving continuously. The goal is to find all pairs of items in the stream whose similarity is greater than a given threshold. The simplest formulation of the problem requires unbounded memory and is thus intractable. To make the problem feasible, we introduce the notion of time-dependent similarity: the similarity of two items decreases with the difference in their arrival times. By leveraging the properties of this time-dependent similarity function, we design two algorithmic frameworks to solve the SSSJ problem. The first one, MiniBatch (MB), uses existing index-based filtering techniques for the static version of the problem and combines them in a pipeline. The second framework, Streaming (STR), adds time filtering to the existing indexes and integrates new time-based bounds deeply into the workings of the algorithms. We also introduce a new indexing technique (L2), which is based on an existing state-of-the-art indexing technique (L2AP) but is optimized for the streaming case. Extensive experiments show that the STR algorithm, when instantiated with the L2 index, is the most scalable option across a wide array of datasets and parameters.
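    The time-dependent similarity is what bounds memory: once decay alone pushes even a perfect similarity below the threshold, older items can never join with new arrivals and can be evicted. Below is a brute-force sketch of that idea, assuming cosine similarity damped by an exponential decay on the arrival-time gap; the exact decay function, thresholds, and the MB/STR/L2 index machinery from the paper are not reproduced here.

```python
# Hedged sketch: streaming similarity self-join with a time-decayed similarity and eviction.
import math
from collections import deque


def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0


def streaming_self_join(stream, threshold=0.7, decay=0.1):
    """Yield (t_old, t_new, similarity) for pairs whose decayed similarity >= threshold."""
    # Even a perfect cosine of 1.0 falls below the threshold after `horizon` time units,
    # so anything older than that can never join with a new arrival and is dropped.
    horizon = math.log(1.0 / threshold) / decay
    window = deque()  # (arrival_time, vector) of still-joinable items

    for t, vector in stream:
        while window and t - window[0][0] > horizon:
            window.popleft()  # evict items outside the time horizon
        for t_old, old_vector in window:
            sim = cosine(old_vector, vector) * math.exp(-decay * (t - t_old))
            if sim >= threshold:
                yield (t_old, t, sim)
        window.append((t, vector))


if __name__ == "__main__":
    items = [(0, [1.0, 0.0]), (1, [0.9, 0.1]), (30, [1.0, 0.0]), (31, [0.0, 1.0])]
    for pair in streaming_self_join(items):
        print(pair)  # only the (t=0, t=1) pair survives the threshold and the horizon
```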

    The Family of MapReduce and Large Scale Data Processing Systems

    In the last two decades, the continuous increase of computational power has produced an overwhelming flow of data, which has called for a paradigm shift in computing architectures and large-scale data processing mechanisms. MapReduce is a simple and powerful programming model that enables easy development of scalable parallel applications to process vast amounts of data on large clusters of commodity machines. It isolates the application from the details of running a distributed program, such as data distribution, scheduling and fault tolerance. However, the original implementation of the MapReduce framework had some limitations that have been tackled by many research efforts in several follow-up works after its introduction. This article provides a comprehensive survey of a family of approaches and mechanisms for large-scale data processing that have been implemented based on the original idea of the MapReduce framework and are currently gaining a lot of momentum in both the research and industrial communities. We also cover a set of systems that have been introduced to provide declarative programming interfaces on top of the MapReduce framework. In addition, we review several large-scale data processing systems that resemble some of the ideas of the MapReduce framework for different purposes and application scenarios. Finally, we discuss some of the future research directions for implementing the next generation of MapReduce-like solutions.
    Comment: arXiv admin note: text overlap with arXiv:1105.4252 by other authors
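    For readers unfamiliar with the model the survey builds on, the programming interface amounts to two user-supplied functions; splitting the input, shuffling intermediate pairs by key, scheduling, and fault tolerance are the framework's job. A toy in-process word-count illustration, not tied to any particular MapReduce implementation:

```python
# Minimal illustration of the map/reduce programming model with an in-process driver.
from collections import defaultdict


def map_fn(document):
    """Emit (word, 1) for every word in one input record."""
    for word in document.split():
        yield (word.lower(), 1)


def reduce_fn(word, counts):
    """Sum all partial counts for one word."""
    return (word, sum(counts))


def run_mapreduce(inputs, map_fn, reduce_fn):
    groups = defaultdict(list)
    for record in inputs:              # map phase
        for key, value in map_fn(record):
            groups[key].append(value)  # shuffle: group intermediate values by key
    return [reduce_fn(key, values) for key, values in groups.items()]  # reduce phase


if __name__ == "__main__":
    docs = ["map reduce", "map map shuffle reduce"]
    print(run_mapreduce(docs, map_fn, reduce_fn))
    # [('map', 3), ('reduce', 2), ('shuffle', 1)]
```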