12 research outputs found

    Simulation of consistency models for replicated files on Grid systems

    The aim of this thesis is to analyze the consistency problems of replicated files on Grid systems and to design, simulate, and compare some possible solutions. One of the characteristics of Grid systems is to provide simple, secure, and coordinated access to an enormous quantity of data (on the order of petabytes) distributed across the various nodes of the system, and to make adequate computing power available to process it. To improve data access, replication techniques are used; while these bring great benefits, they also increase the amount of data to be managed and introduce new management problems, such as consistency. If we allow a user to modify one replica of a file, we also need mechanisms to synchronize the other replicas. After an in-depth study of the problem, we present some possible solutions and run simulations to evaluate their impact on the system.
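    The synchronization requirement described above can be made concrete with a minimal sketch. The model below assumes single-master, push-based propagation, just one of the strategies such a study might compare; the Replica and ReplicaCatalog names and the site list are hypothetical, not taken from the thesis.

        # replica_sync.py -- minimal single-master replica synchronization sketch
        from dataclasses import dataclass

        @dataclass
        class Replica:
            site: str
            version: int = 0
            data: bytes = b""

        class ReplicaCatalog:
            """Tracks all replicas of one logical file; the first registered replica is the master."""
            def __init__(self):
                self.replicas: list[Replica] = []

            def register(self, site: str) -> Replica:
                replica = Replica(site)
                self.replicas.append(replica)
                return replica

            def write(self, data: bytes) -> None:
                # Writes go to the master, which bumps its version and pushes
                # the new content to every secondary replica.
                master = self.replicas[0]
                master.data = data
                master.version += 1
                for replica in self.replicas[1:]:
                    replica.data, replica.version = master.data, master.version

            def is_consistent(self) -> bool:
                # All replicas agree when they share a single version number.
                return len({replica.version for replica in self.replicas}) == 1

        catalog = ReplicaCatalog()
        for site in ("site-A", "site-B", "site-C"):
            catalog.register(site)
        catalog.write(b"new file contents")
        assert catalog.is_consistent()

    A push-on-write master keeps replicas strongly consistent at the cost of write latency; lazy or epidemic propagation would trade consistency windows for cheaper writes, which is exactly the kind of trade-off a simulation can quantify.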

    Collaborative Environment for Grid-based Flood Prediction

    This paper presents the design, architecture, and main implementation features of the flood prediction application of Task 1.2 of the EU IST CROSSGRID project. The paper begins with a description of the virtual organization of hydrometeorological experts, users, data providers, and customers supported by the application. The architecture of the application is then described, followed by the simulation models used and the modules of the collaborative environment. The paper ends with a vision of the future development of the application.

    Grid Resources, Services and Data – Towards a Semantic Grid System

    Howard Temin, who died on February 9, 1994, was driven by the genetic preoccupations of the “phage group” to an insight that was fundamental to the development of contemporary cellular biology. Howard went to Caltech in 1955 to begin graduate studies, and there developed a unique scientific style, blending the influences of Max Delbrück and Renato Dulbecco. His work was marked by a devotion to understanding the genetic issues posed by cancer-inducing viruses. This focus on genetics put him firmly in the traditions of American science dating back to the beginning of the century, and his concern with virus-induced cancer also built on a rich past. But Howard's fierce belief in himself, his deep scholarship, and his remarkable insight allowed him to realize a synthesis that made him one of the most creative scientists of the twentieth century.

    A simulator for a consistency service on Grid architectures

    Integration of CONStanza and OptorSim in order to obtain a simulator for the consistency service for data replication.
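    To give an idea of what such an integrated simulator does, here is a generic discrete-event sketch: a replication write at one site triggers delayed synchronization events at the others, and the loop reports how many replicas are stale at each step. This is an illustration only; it uses none of the real CONStanza or OptorSim interfaces, and the delay, sites, and event names are assumptions.

        # consistency_sim.py -- generic discrete-event sketch of simulating a
        # consistency service for replicated data (hypothetical, not the actual
        # CONStanza/OptorSim integration)
        import heapq

        events = []                                         # (time, action, payload) min-heap
        replicas = {"site-A": 1, "site-B": 1, "site-C": 1}  # site -> file version
        PROPAGATION_DELAY = 5.0                             # assumed network lag (time units)

        def schedule(time, action, payload):
            heapq.heappush(events, (time, action, payload))

        def run(until=100.0):
            while events and events[0][0] <= until:
                time, action, payload = heapq.heappop(events)
                if action == "write":                       # a user updates one replica...
                    site = payload
                    replicas[site] += 1
                    for other in replicas:
                        if other != site:                   # ...and the service pushes it onward
                            schedule(time + PROPAGATION_DELAY, "sync", (other, replicas[site]))
                elif action == "sync":
                    site, version = payload
                    replicas[site] = max(replicas[site], version)
                newest = max(replicas.values())
                stale = sum(version != newest for version in replicas.values())
                print(f"t={time:6.1f} {action:5s} stale replicas: {stale}")

        schedule(1.0, "write", "site-A")
        schedule(3.0, "write", "site-B")
        run()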

    The Office of Science Data-Management Challenge


    High-performance and fault-tolerant techniques for massive data distribution in online communities

    The amount of digital information produced and consumed increases every day. This rapid growth is driven by advances in computing power and hardware technologies and by the popularization of user-generated content networks. New hardware is able to process larger quantities of data, which permits obtaining finer results, and as a consequence more data is generated. Scientific applications have evolved to benefit from the new hardware capabilities; this type of application is characterized by requiring large amounts of information as input and by generating significant amounts of intermediate data, resulting in large files. Since this increase appears not only in the volume of data but also in the size of individual files, we need to provide efficient and reliable data access mechanisms. Producing such a mechanism is a challenging task due to the number of aspects involved. However, we can leverage the knowledge found in social networks to improve the distribution process: the advent of Web 2.0 has popularized the concept of a social network, which provides valuable knowledge about the relationships among users, and between users and data. Extracting this knowledge and defining ways to actively use it to increase the performance of a system remains an open research direction.

    We must also take other existing limitations into account; in particular, the interconnection between the different elements of the system is one of the key aspects. The availability of new technologies, such as mass-produced multicore chips, large storage media, and better sensors, has contributed to the increase in the data being produced, but the underlying interconnection technologies have not improved at the same speed. This leads to a situation where vast amounts of data are produced and need to be consumed by a large number of geographically distributed users, while the interconnection between the two ends does not match the required needs.

    In this thesis, we address the problem of efficient and reliable data distribution in geographically distributed systems. We focus on providing a solution that 1) optimizes the use of existing resources, 2) does not require changes to the underlying interconnection, and 3) provides fault-tolerance capabilities. To achieve these objectives, we define a generic data distribution architecture composed of three main components: a community detection module, a transfer scheduling module, and a distribution controller. The community detection module leverages the information found in the social network formed by the users requesting files and produces a set of virtual communities grouping entities with similar interests. The transfer scheduling module produces a plan to distribute all requested files efficiently, improving resource utilization; for this purpose, we model the distribution problem using linear programming and offer a method that permits solving the problem in a distributed fashion. Finally, the distribution controller manages the distribution process using the aforementioned schedule, controls the available server infrastructure, and launches new on-demand resources when necessary.
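    The linear-programming formulation mentioned above can be illustrated with a toy instance. This is a minimal sketch under assumed semantics, fractional assignment of per-file demand to capacity-limited servers at minimum network cost; the server names, demands, capacities, and costs below are hypothetical, and the thesis' actual model may differ.

        # transfer_schedule.py -- toy LP for the transfer-scheduling step
        import numpy as np
        from scipy.optimize import linprog

        servers = ["server-EU", "server-US"]      # available replica servers
        files = ["f1", "f2", "f3"]                # files requested by one community
        demand = np.array([10.0, 4.0, 6.0])       # GB that must be delivered per file
        capacity = np.array([12.0, 10.0])         # outbound GB each server can push
        # cost[s, f]: relative cost of serving file f from server s (e.g. network distance)
        cost = np.array([[1.0, 2.0, 1.5],
                         [2.0, 1.0, 1.0]])

        S, F = cost.shape
        c = cost.ravel()                          # variable x[s*F + f] = GB of file f from server s

        # Each file's demand must be fully covered: sum over servers of x[s, f] == demand[f]
        A_eq = np.zeros((F, S * F))
        for f in range(F):
            A_eq[f, f::F] = 1.0
        # Each server must respect its capacity: sum over files of x[s, f] <= capacity[s]
        A_ub = np.zeros((S, S * F))
        for s in range(S):
            A_ub[s, s * F:(s + 1) * F] = 1.0

        result = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=demand,
                         bounds=(0, None), method="highs")
        print(result.x.reshape(S, F))             # GB of each file served by each server

    A fractional solution maps naturally to splitting a file's delivery across servers; a schedule that forbids splitting would require integer programming instead, and decomposing such a model per community is one way to solve it in a distributed fashion, as the abstract suggests.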

    Applications Development for the Computational Grid
