
    Improving Big Data Processing Time

    Storing and processing massive amounts of data (big data) in a traditional database is expensive and time-consuming. This project was implemented to solve these problems for an organization by deploying the Hadoop framework, which stores huge data sets on distributed clusters and processes them in parallel to produce results quickly. Hadoop stores the data on commodity hardware, making it cost effective, and protects the data by replicating the data sets. The main goals of the project were to improve the performance of processing huge data sets, reduce long-term data storage costs, and provide a platform that supports ad hoc analysis and real-time insights. The project followed the agile model of software development, and data was collected and analyzed after the project's execution. The analysis of that data supported the conclusion that the stated goals were achieved.
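    As a minimal sketch of the kind of ingestion step such a project would start with, the following loads a local data set into HDFS with block replication enabled, which is how Hadoop provides the storage-level fault tolerance mentioned above. The file paths, replication factor, and class name are illustrative assumptions, not details taken from the abstract.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    /** Illustrative sketch: load a local data set into HDFS with 3-way replication. */
    public class HdfsLoadSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Replicate each block on three commodity nodes (assumed factor);
            // replication is what protects the stored data sets against node failure.
            conf.set("dfs.replication", "3");
            FileSystem fs = FileSystem.get(conf);

            // Placeholder paths; the project's real data sets are not named in the abstract.
            Path local = new Path("/tmp/records.csv");
            Path remote = new Path("/data/raw/records.csv");
            fs.copyFromLocalFile(local, remote);

            System.out.println("Stored " + remote + " with replication " + conf.get("dfs.replication"));
            fs.close();
        }
    }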

    Error-Tolerant Big Data Processing

    Real-world data contains various kinds of errors. Before analyzing data, one usually needs to process the raw data; however, traditional data processing based on exact matching often misses a lot of valid information. To obtain high-quality analysis results and fit the big data era, this thesis studies error-tolerant big data processing. As most real-world data can be represented as a sequence or a set, the thesis uses the widely adopted sequence-based and set-based similarity functions to tolerate errors in data processing, and studies the approximate entity extraction, similarity join, and similarity search problems. The main contributions of this thesis are:
    1. It proposes a unified framework that supports approximate entity extraction with both sequence-based and set-based similarity functions simultaneously. Experiments show that the unified framework improves on the state-of-the-art methods by 1 to 2 orders of magnitude.
    2. It designs two methods for sequence and set similarity joins, respectively. For the sequence similarity join, the thesis proposes to evenly partition the sequences into segments; it is guaranteed that two sequences are similar only if one sequence has a subsequence identical to a segment of the other sequence. For the set similarity join, the thesis proposes to partition all the sets into segments based on the universe. Both partition-based methods are further extended to the large-scale data processing frameworks Map-Reduce and Spark. The partition-based method won the string similarity join competition held by EDBT, outperforming the second-place entry by a factor of 10.
    3. It proposes a pivotal prefix filter technique for the sequence similarity search problem, and shows that the pivotal prefix filter has stronger pruning power and lower filtering cost than the state-of-the-art filters.
    Comment: PhD thesis, Tsinghua University, 201
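    What follows is a minimal, self-contained sketch of the pigeonhole intuition behind the partition-based sequence join described above: if two strings are within edit distance tau, then one of them must contain, as a substring, at least one segment of an even (tau+1)-way partition of the other. The class and method names are illustrative, and the actual method in the thesis additionally restricts matching positions and scales out on Map-Reduce and Spark.

    import java.util.ArrayList;
    import java.util.List;

    /** Illustrative sketch of the segment-based filter for edit-distance similarity joins. */
    public class SegmentFilterSketch {

        // Split s into tau+1 roughly even, disjoint segments.
        static List<String> partition(String s, int tau) {
            List<String> segments = new ArrayList<>();
            int parts = tau + 1;
            int base = s.length() / parts, extra = s.length() % parts, start = 0;
            for (int i = 0; i < parts; i++) {
                int len = base + (i < extra ? 1 : 0);
                segments.add(s.substring(start, start + len));
                start += len;
            }
            return segments;
        }

        // Necessary condition (pigeonhole): if editDistance(s, t) <= tau, then t contains
        // at least one of the tau+1 segments of s, so non-candidates can be pruned early.
        static boolean isCandidate(String s, String t, int tau) {
            for (String seg : partition(s, tau)) {
                if (!seg.isEmpty() && t.contains(seg)) return true;
            }
            return false;
        }

        // Standard dynamic-programming edit distance, used to verify surviving candidates.
        static int editDistance(String a, String b) {
            int[][] d = new int[a.length() + 1][b.length() + 1];
            for (int i = 0; i <= a.length(); i++) d[i][0] = i;
            for (int j = 0; j <= b.length(); j++) d[0][j] = j;
            for (int i = 1; i <= a.length(); i++)
                for (int j = 1; j <= b.length(); j++)
                    d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                            d[i - 1][j - 1] + (a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1));
            return d[a.length()][b.length()];
        }

        public static void main(String[] args) {
            String s = "similarity", t = "similarly";
            int tau = 2;
            if (isCandidate(s, t, tau)) {
                System.out.println("candidate pair, edit distance = " + editDistance(s, t));
            }
        }
    }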

    Metocean Big Data Processing Using Hadoop

    This report discusses MapReduce and how it handles big data. Metocean (meteorology and oceanography) data is used because it consists of large data sets. As the number and types of data acquisition devices grow annually, the sheer size and rate of data being collected are rapidly expanding. These big data sets can contain gigabytes or terabytes of data, and can grow on the order of megabytes or gigabytes per day. While collecting this information presents opportunities for insight, it also presents many challenges, because most algorithms are not designed to process big data sets in a reasonable amount of time or with a reasonable amount of memory. MapReduce allows us to meet many of these challenges and gain important insights from large data sets. The objective of this project is to use MapReduce to handle big data; MapReduce is a programming technique for analysing data sets that do not fit in memory. The problem statement chapter discusses how MapReduce is an advantage when dealing with large data. The literature review explains the definitions of NoSQL and RDBMS, Hadoop MapReduce and big data, considerations when selecting a database, NoSQL database deployments, scenarios for using Hadoop, and a real-world Hadoop example. The methodology chapter explains the waterfall method used in developing this project. The results and discussion chapter presents and discusses the project's results in detail. The last chapter of this report is the conclusion and recommendations.
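    As a hedged illustration of the kind of MapReduce job the report describes, the sketch below computes the maximum wave height per station from CSV metocean records. The record layout, field positions, choice of measurement, and class names are assumptions made for the example, not the report's actual schema.

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.DoubleWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    /** Illustrative job: maximum wave height per station from CSV metocean records. */
    public class MaxWaveHeight {

        // Map: emit (station, waveHeight) for each record.
        // Assumed record layout: station,timestamp,waveHeight,...
        public static class WaveMapper extends Mapper<LongWritable, Text, Text, DoubleWritable> {
            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                String[] fields = value.toString().split(",");
                if (fields.length < 3) return; // skip malformed lines
                try {
                    context.write(new Text(fields[0]), new DoubleWritable(Double.parseDouble(fields[2])));
                } catch (NumberFormatException e) {
                    // ignore records with a non-numeric measurement
                }
            }
        }

        // Reduce (also usable as a combiner): keep the maximum height seen per station.
        public static class MaxReducer extends Reducer<Text, DoubleWritable, Text, DoubleWritable> {
            @Override
            protected void reduce(Text station, Iterable<DoubleWritable> heights, Context context)
                    throws IOException, InterruptedException {
                double max = Double.NEGATIVE_INFINITY;
                for (DoubleWritable h : heights) max = Math.max(max, h.get());
                context.write(station, new DoubleWritable(max));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "max wave height");
            job.setJarByClass(MaxWaveHeight.class);
            job.setMapperClass(WaveMapper.class);
            job.setCombinerClass(MaxReducer.class);
            job.setReducerClass(MaxReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(DoubleWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }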