
    Sorting Technique- An Efficient Approach for Data Mining

    Because new data and updates arrive constantly, handling data efficiently becomes difficult. Moreover, data that is not refreshed quickly loses its value, so it should be updated regularly to keep it from becoming obsolete. Prior work has used approaches such as page ranking and i2MapReduce (an extension of MapReduce) to improve performance, computation speed, and run-time processing, but the resulting performance still falls short of what current environments demand. To overcome these drawbacks, this paper proposes a sorting technique that improves the mean value and overall performance.

    Approximation with Error Bounds in Spark

    We introduce a sampling framework to support approximate computing with estimated error bounds in Spark. Our framework allows sampling to be performed at the beginning of a sequence of multiple transformations ending in an aggregation operation. The framework constructs a data provenance tree as the computation proceeds, then combines the tree with multi-stage sampling and population estimation theories to compute error bounds for the aggregation. When information about output keys is available early, the framework can also use adaptive stratified reservoir sampling to avoid (or reduce) key losses in the final output and to achieve more consistent error bounds across popular and rare keys. Finally, the framework includes an algorithm that dynamically chooses sampling rates to meet user-specified constraints on the CDF of error bounds in the outputs. We have implemented a prototype of our framework, called ApproxSpark, and used it to implement five approximate applications from different domains. Evaluation results show that ApproxSpark can (a) significantly reduce execution time if users can tolerate small amounts of uncertainty and, in many cases, the loss of rare keys, and (b) automatically find sampling rates that meet user-specified constraints on error bounds. We also extensively explore and discuss the trade-offs between sampling rate, execution time, accuracy, and key loss.
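
    A minimal sketch of the core idea in Spark (Scala), assuming a simple random sample: sample once before the aggregation, aggregate on the sample, then scale the result and attach a CLT-based confidence interval. The object name, the 1% sampling rate, and the synthetic data are illustrative assumptions; the paper's actual framework additionally builds a provenance tree and applies multi-stage sampling theory, which this sketch omits.

    import org.apache.spark.sql.SparkSession

    object ApproxSumSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("ApproxSumSketch").master("local[*]").getOrCreate()
        val sc = spark.sparkContext

        val data = sc.parallelize(1L to 1000000L)  // stand-in dataset
        val fraction = 0.01                        // assumed sampling rate

        // Sample at the start of the pipeline, before the aggregation.
        val sample = data
          .sample(withReplacement = false, fraction = fraction, seed = 42L)
          .map(_.toDouble)
          .cache()

        val m = sample.count()   // sample size
        val n = data.count()     // population size (assumed known here)
        val mean = sample.sum() / m
        // Unbiased sample variance, used to bound the estimation error.
        val variance = sample.map(x => (x - mean) * (x - mean)).sum() / (m - 1)

        val estimate = n * mean                              // estimated total
        val halfWidth = 1.96 * n * math.sqrt(variance / m)   // ~95% CI half-width

        println(f"sum ~= $estimate%.0f +/- $halfWidth%.0f")
        spark.stop()
      }
    }

    Stratifying the sample by output key, as the adaptive stratified reservoir sampling described in the abstract does, would replace the single global sample here with one reservoir per key, trading memory for more uniform error bounds across popular and rare keys.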