125 research outputs found

    TICAL - a web-tool for multivariate image clustering and data topology preserving visualization

    Get PDF
    In life science research, bioimaging is often used to study two kinds of features in a sample simultaneously: morphology and the co-location of molecular components. While bioimaging technology rapidly introduces and improves new multidimensional imaging platforms, bioimage informatics has to keep pace in order to develop algorithmic approaches that support biology experts in the complex task of data analysis. One particular problem is the availability and applicability of sophisticated image analysis algorithms via the web, so that different users can apply the same algorithms to their data (sometimes even to the same data, to reproduce the same results), regardless of their location and the technical features of their computers. In this paper we describe TICAL, a visual data mining approach to multivariate microscopy analysis that can be applied fully through the web. We describe the algorithmic approach and the software concept, and present results obtained for different example images.
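
    The abstract does not spell out TICAL's actual algorithms, so the following is only a minimal sketch of the clustering half of such a pipeline: grouping the per-pixel multichannel feature vectors of a microscopy image with a plain k-means loop. The function name and parameters are ours, and a topology-preserving method such as a self-organizing map (which a TICAL-like tool would pair with clustering for visualization) would replace plain k-means in practice.

```python
import numpy as np

def cluster_image(image, k=4, iters=20, seed=0):
    """Cluster per-pixel feature vectors of a multichannel image.

    image: (H, W, C) array, one feature vector per pixel.
    Returns an (H, W) label image assigning each pixel to a cluster.
    """
    h, w, c = image.shape
    pixels = image.reshape(-1, c).astype(float)
    rng = np.random.default_rng(seed)
    # Initialize centroids from randomly chosen pixels.
    centroids = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest centroid (Euclidean distance).
        dists = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned pixels.
        for j in range(k):
            members = pixels[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return labels.reshape(h, w)

# Example: a random 3-channel "image" clustered into 4 pixel classes.
labels = cluster_image(np.random.rand(64, 64, 3), k=4)
```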

    Energy-Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements, and Open Challenges

    Full text link
    Cloud computing is offering utility-oriented IT services to users worldwide. Based on a pay-as-you-go model, it enables hosting of pervasive applications from consumer, scientific, and business domains. However, data centers hosting Cloud applications consume huge amounts of energy, contributing to high operational costs and a large carbon footprint. Therefore, we need Green Cloud computing solutions that not only save energy but also reduce operational costs. This paper presents the vision, challenges, and architectural elements for energy-efficient management of Cloud computing environments. We focus on the development of dynamic resource provisioning and allocation algorithms that consider the synergy between the various data center infrastructures (i.e., hardware, power units, cooling, and software) and work holistically to boost data center energy efficiency and performance. In particular, this paper proposes (a) architectural principles for energy-efficient management of Clouds; (b) energy-efficient resource allocation policies and scheduling algorithms that consider quality-of-service expectations and device power usage characteristics; and (c) a novel software technology for energy-efficient management of Clouds. We have validated our approach by conducting a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the Cloud computing model has immense potential, offering significant gains in response time and cost savings under dynamic workload scenarios.
    Comment: 12 pages, 5 figures. Proceedings of the 2010 International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA 2010), Las Vegas, USA, July 12-15, 2010.
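
    The paper's own evaluation uses the CloudSim toolkit (a Java simulator); as a language-neutral illustration of one widely used energy-aware allocation heuristic, the sketch below places each VM on the host whose power draw would increase the least, under a linear idle-to-peak power model. The class names, power constants, and single-resource CPU abstraction are our simplifications, not the paper's actual policies.

```python
from dataclasses import dataclass

@dataclass
class Host:
    cpu_capacity: float  # total CPU capacity (e.g., MIPS)
    p_idle: float        # power draw when idle (watts)
    p_max: float         # power draw at full utilization (watts)
    used: float = 0.0    # CPU already allocated to VMs

    def power(self, extra=0.0):
        # Linear power model: idle draw plus a utilization-proportional
        # share of the idle-to-peak range.
        u = (self.used + extra) / self.cpu_capacity
        return self.p_idle + (self.p_max - self.p_idle) * u

def place_vm(hosts, vm_cpu):
    """Place a VM on the feasible host whose power draw grows the least."""
    best, best_delta = None, float("inf")
    for h in hosts:
        if h.used + vm_cpu > h.cpu_capacity:
            continue  # not enough spare capacity on this host
        delta = h.power(vm_cpu) - h.power()
        if delta < best_delta:
            best, best_delta = h, delta
    if best is not None:
        best.used += vm_cpu  # commit the allocation
    return best

hosts = [Host(1000, 160, 250), Host(2000, 100, 300)]
for vm_cpu in (300, 500, 800):
    place_vm(hosts, vm_cpu)
```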

    PFS: A Productivity Forecasting System for Desktop Computers to Improve Grid Applications Performance in Enterprise Desktop Grid

    Get PDF
    An Enterprise Desktop Grid (EDG) is a low-cost platform that gathers desktop computers spread over different institutions. This platform uses desktop computers' idle time to run Grid applications. We argue that computers in these environments have a predictable productivity that affects a Grid application's execution time. In this paper, we propose a system called PFS for computer productivity forecasting that improves Grid application performance. We simulated 157,500 applications and compared the performance achieved by our proposal against two recent strategies. Our experiments show that a Grid scheduler based on PFS runs applications faster than schedulers based on the other selection strategies.
    Fil: Salinas, Sergio Ariel. Universidad Nacional de Cuyo; Argentina.
    Fil: Garcia Garino, Carlos Gabriel. Universidad Nacional de Cuyo; Argentina.
    Fil: Zunino Suarez, Alejandro Octavio. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Tandil. Instituto Superior de Ingenieria del Software; Argentina.
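
    The abstract does not describe PFS's forecasting model, so the sketch below is only an illustrative stand-in: an exponentially weighted moving average of each desktop's observed productivity, plus a scheduler step that picks the machine with the highest forecast. The class, the alpha parameter, and the toy observations are all assumptions.

```python
class ProductivityForecaster:
    """Exponentially weighted moving average of one desktop's observed
    productivity (useful work delivered per unit of idle time)."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha    # weight given to the newest observation
        self.forecast = None  # no forecast until the first observation

    def observe(self, productivity):
        # Blend the new measurement into the running forecast.
        if self.forecast is None:
            self.forecast = productivity
        else:
            self.forecast = (self.alpha * productivity
                             + (1 - self.alpha) * self.forecast)
        return self.forecast

def pick_host(forecasters):
    """Scheduler step: send the next task to the machine whose
    forecast productivity is highest."""
    return max(forecasters, key=lambda m: forecasters[m].forecast or 0.0)

machines = {"pc-a": ProductivityForecaster(), "pc-b": ProductivityForecaster()}
machines["pc-a"].observe(0.8); machines["pc-a"].observe(0.2)
machines["pc-b"].observe(0.6); machines["pc-b"].observe(0.7)
print(pick_host(machines))  # "pc-b": steadier recent productivity
```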

    CATNETS Final Activity Report

    Get PDF

    Build-and-Test Workloads for Grid Middleware: Problem, Analysis, and Applications

    Full text link

    A Cloud-Computing-Based Data Placement Strategy in High-Speed Railway

    Get PDF
    As an important component of China's transportation data sharing system, high-speed railway data sharing is a typical application of data-intensive computing. Currently, most high-speed railway data is shared in a cloud computing environment, so there is an urgent need for an effective cloud-computing-based data placement strategy for high-speed railway. In this paper, a new data placement strategy, named the hierarchical structure data placement strategy, is proposed. The proposed method combines a semidefinite programming algorithm with a dynamic interval mapping algorithm. The semidefinite programming algorithm is suitable for the placement of files with multiple replicas, ensuring that different replicas of a file are placed on different storage devices, while the dynamic interval mapping algorithm ensures better self-adaptability of the data storage system. A hierarchical data placement strategy is proposed for large-scale networks. The paper provides a new theoretical analysis, compared against several previous data placement approaches, and demonstrates the efficacy of the proposed strategy in several experiments.
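
    The abstract names two building blocks; the sketch below illustrates only the dynamic-interval-mapping half in simplified form: devices own sub-intervals of [0, 1) proportional to their capacity, file keys hash into the unit interval, and replicas are re-salted until they land on distinct devices (a cheap probabilistic stand-in for the semidefinite-programming placement the paper actually uses for replica separation). The device names and the salting scheme are our assumptions.

```python
import hashlib
from bisect import bisect_right

def build_intervals(devices):
    """Split [0, 1) into sub-intervals proportional to device capacity.
    Growing the system only re-partitions the interval boundaries, which
    is the self-adaptability property interval mapping targets."""
    total = sum(cap for _, cap in devices)
    bounds, names, acc = [], [], 0.0
    for name, cap in devices:
        acc += cap / total
        bounds.append(acc)
        names.append(name)
    return bounds, names

def key_to_unit(key, salt=""):
    # Hash the file key (plus a per-replica salt) into [0, 1).
    digest = hashlib.sha256((salt + key).encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def place_replicas(key, devices, replicas=3):
    """Map each replica of a file to a device; re-salt the hash until
    all replicas land on distinct devices."""
    bounds, names = build_intervals(devices)
    chosen, salt = [], 0
    while len(chosen) < min(replicas, len(names)):
        idx = bisect_right(bounds, key_to_unit(key, str(salt)))
        idx = min(idx, len(names) - 1)  # guard against rounding at 1.0
        if names[idx] not in chosen:
            chosen.append(names[idx])
        salt += 1
    return chosen

devices = [("disk-1", 4), ("disk-2", 2), ("disk-3", 2), ("disk-4", 1)]
print(place_replicas("train-telemetry-2024.log", devices))
```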