2,466 research outputs found

    A Framework for Developing Real-Time OLAP algorithm using Multi-core processing and GPU: Heterogeneous Computing

    Full text link
    The ever-growing volume of stored data has spurred researchers to seek methods for exploiting it optimally, and most of these methods face a response-time problem caused by the sheer size of the data. Most solutions favour materialization, but materialization alone cannot attain Real-Time answers. In this paper we propose a framework that illustrates the barriers on the way to achieving Real-Time OLAP answers, which are widely used in decision support systems and data warehouses, and the solutions suggested for overcoming them.
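    As an aside on why materialization speeds up OLAP yet struggles with freshness: below is a minimal, illustrative Python sketch (not the paper's framework; all data and names are hypothetical) of a fully materialized aggregate cube. It turns a group-by query into a single lookup, but the cube must be recomputed as new facts arrive, which is exactly the obstacle to Real-Time answers.

```python
from collections import defaultdict
from itertools import product

# Toy fact table: (region, product, year, sales). Hypothetical data,
# used only to illustrate materialization of OLAP aggregates.
facts = [
    ("EU", "widget", 2023, 120.0),
    ("EU", "gadget", 2023, 80.0),
    ("US", "widget", 2024, 200.0),
]

def materialize(facts):
    """Precompute SUM(sales) for every group-by combination of the
    three dimensions (a full cube). '*' means the dimension is rolled up."""
    cube = defaultdict(float)
    for region, product_, year, sales in facts:
        # Each fact contributes to 2^3 = 8 aggregate cells.
        for keys in product((region, "*"), (product_, "*"), (year, "*")):
            cube[keys] += sales
    return cube

cube = materialize(facts)
# An OLAP query becomes a single dictionary lookup instead of a table scan:
print(cube[("EU", "*", 2023)])      # total EU sales in 2023 -> 200.0
print(cube[("*", "widget", "*")])   # total widget sales     -> 320.0
```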

    Astronomy in the Cloud: Using MapReduce for Image Coaddition

    Full text link
    In the coming decade, astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. The study of these sources will involve computational challenges such as anomaly detection and classification, and moving object tracking. Since such studies benefit from the highest quality data, methods such as image coaddition (stacking) will be a critical preprocessing step prior to scientific investigation. With a requirement that these images be analyzed on a nightly basis to identify moving sources or transient objects, these data streams present many computational challenges. Given the quantity of data involved, the computational load of these problems can only be addressed by distributing the workload over a large number of nodes. However, the high data throughput demanded by these applications may present scalability challenges for certain storage architectures. One scalable data-processing method that has emerged in recent years is MapReduce, and in this paper we focus on its popular open-source implementation called Hadoop. In the Hadoop framework, the data is partitioned among storage attached directly to worker nodes, and the processing workload is scheduled in parallel on the nodes that contain the required input data. A further motivation for using Hadoop is that it allows us to exploit cloud computing resources, e.g., Amazon's EC2. We report on our experience implementing a scalable image-processing pipeline for the SDSS imaging database using Hadoop. This multi-terabyte imaging dataset provides a good testbed for algorithm development since its scope and structure approximate future surveys. First, we describe MapReduce and how we adapted image coaddition to the MapReduce framework. Then we describe a number of optimizations to our basic approach and report experimental results comparing their performance.
    Comment: 31 pages, 11 figures, 2 tables
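    The following is a minimal single-process Python sketch of the map/reduce decomposition the abstract describes: the map phase keys image cutouts by the sky tile they cover, and the reduce phase stacks and averages each tile's cutouts. It is not the authors' Hadoop pipeline, and it omits WCS alignment, masking, and weighting; tile names and data are made up.

```python
import numpy as np
from collections import defaultdict

def map_phase(images):
    """Map: emit (tile_id, pixels) for each registered cutout.
    A real pipeline would also handle WCS alignment and bad-pixel masks."""
    for tile_id, pixels in images:          # pixels: 2-D ndarray
        yield tile_id, pixels

def reduce_phase(mapped):
    """Reduce: group cutouts by tile, stack them, and average, which is
    the simplest form of coaddition (it raises signal-to-noise)."""
    groups = defaultdict(list)
    for tile_id, pixels in mapped:
        groups[tile_id].append(pixels)
    return {tile_id: np.mean(np.stack(stack), axis=0)
            for tile_id, stack in groups.items()}

# Three noisy 2x2 "exposures" of the same sky tile:
rng = np.random.default_rng(0)
exposures = [("tile_042", np.ones((2, 2)) + rng.normal(0, 0.1, (2, 2)))
             for _ in range(3)]
coadds = reduce_phase(map_phase(exposures))
print(coadds["tile_042"])  # averaged stack; noise reduced by ~sqrt(3)
```

    In a real Hadoop run the reduce phase executes per key on whichever node holds that tile's data, which is what keeps the computation close to the storage.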

    An Efficient Transport Protocol for delivery of Multimedia Content in Wireless Grids

    Get PDF
    A grid computing system is designed to solve complicated scientific and commercial problems effectively, whereas mobile computing is a traditional distributed system offering computing capability with mobility and wireless communications. Media and entertainment fields can take advantage of both paradigms in gaming applications and multimedia data management. Multimedia data has to be stored and retrieved in an efficient and effective manner to put it to use. In this paper, we propose an application-layer protocol for delivery of multimedia data in wireless grids, the multimedia grid protocol (MMGP). To make streaming efficient, a new video compression algorithm called dWave is designed and embedded in the proposed protocol. The protocol provides faster, more reliable access, renders QoS degradation imperceptible when delivering multimedia in a wireless grid environment, and tackles challenging issues such as i) intermittent connectivity, ii) device heterogeneity, iii) weak security and iv) device mobility.
    Comment: 20 pages, 15 figures, Peer Reviewed Journal
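    Since neither the MMGP wire format nor the dWave codec is specified in the abstract, the Python sketch below only illustrates, under assumed names and parameters, one generic ingredient any multimedia transport for intermittent wireless links needs: numbered, checksummed application-layer segments that let a receiver detect loss and request retransmission of individual pieces rather than the whole stream.

```python
import hashlib

CHUNK = 4096  # bytes per application-layer segment (illustrative choice)

def segment(payload: bytes):
    """Split a media payload into numbered, checksummed segments so a
    receiver on an intermittent wireless link can detect corruption or
    loss and ask for individual pieces instead of the whole file."""
    for offset in range(0, len(payload), CHUNK):
        body = payload[offset:offset + CHUNK]
        digest = hashlib.sha256(body).hexdigest()[:8]
        yield {"seq": offset // CHUNK, "digest": digest, "body": body}

def reassemble(segments, total):
    """Reassemble in order; return the missing sequence numbers so the
    client can request only those (supports resume after a link drop)."""
    received = {s["seq"]: s["body"] for s in segments
                if hashlib.sha256(s["body"]).hexdigest()[:8] == s["digest"]}
    missing = [i for i in range(total) if i not in received]
    data = b"".join(received.get(i, b"") for i in range(total))
    return data, missing

media = b"\x00" * 10_000                 # stand-in for compressed video
segs = list(segment(media))
data, missing = reassemble(segs, len(segs))
assert data == media and not missing
```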

    Visualizing Historical Book Trade Data: An Iterative Design Study with Close Collaboration with Domain Experts

    Full text link
    The circulation of historical books has always been an area of interest for historians. However, the data used to represent the journey of a book across different places and times can be difficult for domain experts to digest due to buried geographical and chronological features within text-based presentations. This situation provides an opportunity for collaboration between visualization researchers and historians. This paper describes a design study where a variant of the Nine-Stage Framework was employed to develop a Visual Analytics (VA) tool called DanteExploreVis. This tool was designed to aid domain experts in exploring, explaining, and presenting book trade data from multiple perspectives. We discuss the design choices made and how each panel in the interface meets the domain requirements. We also present the results of a qualitative evaluation conducted with domain experts. The main contributions of this paper include: 1) the development of a VA tool to support domain experts in exploring, explaining, and presenting book trade data; 2) comprehensive documentation of the iterative design, development, and evaluation process following the variant Nine-Stage Framework; 3) a summary of the insights gained and lessons learned from this design study in the context of the humanities field; and 4) reflections on how our approach could be applied in a more generalizable way.