26 research outputs found

    FriendComputing: Organic application centric distributed computing

    Proceedings of: Second International Workshop on Sustainable Ultrascale Computing Systems (NESUS 2015), Krakow (Poland), September 10-11, 2015. Building ultrascale computer systems is a hard problem that is not yet solved or fully explored. Combining the computing resources of multiple organizations, often in different administrative domains with heterogeneous hardware and diverse demands on the system, requires new tools and frameworks. In previous work we developed POP-Java, a Java programming language extension that makes it easy to develop distributed applications in a heterogeneous environment. We now present an extension to the POP-Java language that allows the creation of application-centered networks in which any member can benefit from the computing power and storage capacity of the other members. An integrated accounting system allows the members of the network to bill the usage of their resources to other members, if so desired. The network grows through a process similar to that of social networks, making it possible to use the resources of friends and friends of friends. Parts of the proposed system have been implemented as a prototype inside the POP-Java programming language.
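
    The abstract describes the mechanism only in prose. As a rough illustration of the friend-of-friend expansion and per-use accounting it mentions, the sketch below models a node that can enumerate the resources of friends and friends of friends and bill requesters for work; all names (FriendNode, reachable, runFor) are hypothetical and this is not the POP-Java API.

```java
import java.util.*;

/**
 * Minimal sketch of friend-of-friend resource discovery with simple usage
 * accounting. Illustration only: it does not use the actual POP-Java API,
 * and all names (FriendNode, reachable, runFor) are hypothetical.
 */
public class FriendNode {
    private final String name;
    private final List<FriendNode> friends = new ArrayList<>();
    private final Map<String, Double> ledger = new HashMap<>(); // what each requester owes this node
    private final double pricePerUnit;                          // cost billed per unit of work

    public FriendNode(String name, double pricePerUnit) {
        this.name = name;
        this.pricePerUnit = pricePerUnit;
    }

    public void addFriend(FriendNode f) { friends.add(f); }

    /** Collect nodes reachable within 'depth' hops (friends, friends of friends, ...). */
    public Set<FriendNode> reachable(int depth) {
        Set<FriendNode> seen = new LinkedHashSet<>();
        collect(this, depth, seen);
        seen.remove(this);
        return seen;
    }

    private static void collect(FriendNode n, int depth, Set<FriendNode> seen) {
        if (!seen.add(n) || depth == 0) return;
        for (FriendNode f : n.friends) collect(f, depth - 1, seen);
    }

    /** Run 'units' of work on this node on behalf of 'requester' and bill it. */
    public void runFor(FriendNode requester, double units) {
        ledger.merge(requester.name, units * pricePerUnit, Double::sum);
    }

    public Map<String, Double> ledger() { return ledger; }

    public String toString() { return name; }
}
```

    A requester would pick any node returned by reachable(2) and call runFor(...), after which that node's ledger records the amount the requester owes.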

    Hybrid ant colony system algorithm for static and dynamic job scheduling in grid computing

    Grid computing is a distributed system with heterogeneous infrastructures. The resource management system (RMS) is one of its most important components and has a great influence on grid computing performance. The main part of the RMS is the scheduling algorithm, which is responsible for mapping submitted tasks to the available resources. The scheduling problem is NP-complete, and therefore an intelligent algorithm is required to achieve better scheduling solutions. One prominent intelligent algorithm is the ant colony system (ACS), which is widely implemented to solve various types of scheduling problems. However, ACS suffers from stagnation in medium- and large-size grid computing systems. ACS is based on exploitation and exploration mechanisms; its exploitation is sufficient, but its exploration is deficient because it relies on a random approach without any strategy. This study proposes four hybrid algorithms combining ACS with the genetic algorithm (GA) and tabu search (TS) to enhance ACS performance: ACS(GA), ACS+GA, ACS(TS), and ACS+TS. These hybrids enhance ACS in terms of the exploration mechanism and solution refinement by implementing low- and high-level hybridization of ACS, GA, and TS. The proposed algorithms were evaluated against twelve metaheuristic algorithms in static (expected time to compute model) and dynamic (distribution pattern) grid computing environments. A simulator called ExSim was developed to mimic the static and dynamic nature of grid computing. Experimental results show that the proposed algorithms outperform ACS in terms of best makespan values. In the static environment, ACS(GA), ACS+GA, ACS(TS), and ACS+TS perform better than ACS by 0.35%, 2.03%, 4.65%, and 6.99%, respectively. In the dynamic environment, ACS(GA), ACS+GA, ACS+TS, and ACS(TS) perform better than ACS by 0.01%, 0.56%, 1.16%, and 1.26%, respectively. The proposed algorithms can be used to schedule tasks in grid computing with better makespan.
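
    Since the evaluation is stated in terms of makespan under the expected-time-to-compute (ETC) model, the following minimal sketch shows how that objective is computed for a given task-to-machine assignment. It is not the authors' ExSim simulator, only an illustration of the quantity the hybrid algorithms minimize.

```java
/**
 * Minimal sketch of makespan evaluation under the expected-time-to-compute
 * (ETC) model mentioned in the abstract. Not the authors' ExSim simulator;
 * it only illustrates the objective the scheduling algorithms minimize.
 */
public class Makespan {

    /**
     * etc[t][m] = expected time of task t on machine m;
     * assignment[t] = machine chosen for task t.
     * The makespan is the completion time of the busiest machine.
     */
    public static double makespan(double[][] etc, int[] assignment) {
        int machines = etc[0].length;
        double[] load = new double[machines];           // accumulated load per machine
        for (int t = 0; t < etc.length; t++) {
            load[assignment[t]] += etc[t][assignment[t]];
        }
        double max = 0.0;
        for (double l : load) max = Math.max(max, l);
        return max;
    }

    public static void main(String[] args) {
        double[][] etc = { {3, 5}, {2, 4}, {6, 1} };    // 3 tasks, 2 machines
        int[] assignment = { 0, 0, 1 };                 // tasks 0,1 -> machine 0; task 2 -> machine 1
        System.out.println(makespan(etc, assignment));  // prints 5.0
    }
}
```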

    MOSE': A grid-enabled software platform to solve geoprocessing problems

    Grid computing has emerged as an important new field in the distributed computing arena. It focuses on intensive resource sharing, innovative applications, and, in some cases, high-performance orientation. This paper describes how grid technologies can be used to develop an infrastructure for building geoprocessing applications. We present the MOSE' system, a grid-enabled problem solving environment (PSE) that supports the activities concerning the modelling and simulation of spatio-temporal phenomena for analyzing and managing the identification and mitigation of natural disasters such as floods, wildfires, and landslides. MOSE' takes advantage of the standardized resource access and the workflow support for loosely coupled software components provided by web/grid services technologies.
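
    As a loose illustration of the "loosely coupled components composed into a workflow" idea (not the MOSE' API; the GeoService interface and step names below are hypothetical), a geoprocessing workflow could be sketched as follows.

```java
import java.util.*;
import java.util.function.UnaryOperator;

/**
 * Rough sketch of composing loosely coupled geoprocessing steps into a
 * workflow, in the spirit of a PSE built on service technologies. The
 * GeoService interface and step names are hypothetical, not the MOSE' API.
 */
public class GeoWorkflow {

    /** A geoprocessing step: consumes a dataset reference, produces a new one. */
    interface GeoService extends UnaryOperator<String> {}

    private final List<GeoService> steps = new ArrayList<>();

    public GeoWorkflow then(GeoService step) { steps.add(step); return this; }

    /** Run the steps in order, threading the intermediate dataset reference through. */
    public String run(String inputDataset) {
        String current = inputDataset;
        for (GeoService s : steps) current = s.apply(current);
        return current;
    }

    public static void main(String[] args) {
        String result = new GeoWorkflow()
            .then(in -> in + "->interpolated")   // e.g. terrain interpolation
            .then(in -> in + "->floodModel")     // e.g. flood simulation
            .run("dem.tif");
        System.out.println(result);              // dem.tif->interpolated->floodModel
    }
}
```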

    UBIDEV: a homogeneous service framework for pervasive computing environments

    This dissertation studies the heterogeneity problem of pervasive computing systems from the viewpoint of an infrastructure aiming to provide a service-oriented application model. Evolving from distributed systems through mobile computing, pervasive computing is presented as a step forward towards the ubiquitous availability of services and the proliferation of interacting autonomous entities. To better understand the problems related to the heterogeneous and dynamic nature of pervasive computing environments, we need to analyze the structure of a pervasive computing system along its physical and service dimensions. The physical dimension describes the physical environment together with the technology infrastructure that characterizes the interactions and relations within the environment; the service dimension represents the services (whether software or not) that the environment is able to provide [Nor99]. To better separate the constraints and functionalities of a pervasive computing system, this dissertation classifies it in terms of resources, context, classification, services, coordination and application. UBIDEV, as the key result of this dissertation, introduces a unified model that supports the design and implementation of applications for heterogeneous and dynamic environments. This model is composed of the following concepts:
    • Resource: all elements of the environment that are manipulated by the application; they are the atomic abstraction unit of the model.
    • Context: all information coming from the environment that is used by the application to adapt its behavior. Context contains resources and services and defines their role in the application.
    • Classification: the environment is classified according to the application ontology in order to ground the generic conceptual model of the application to the specific environment. It defines the basic semantic level of interoperability.
    • Service: the functionalities supported by the system; each service manipulates one or more resources. Applications are defined as a coordination and adaptation of services.
    • Coordination: all aspects related to service composition and execution, as well as the use of contextual information, are captured by the coordination concept.
    • Application Ontology: represents the viewpoint of the application on the specific context; it defines the high-level semantics of resources, services and context.
    Applying the design paradigm proposed by UBIDEV allows applications to be described according to a Service-Oriented Architecture [Bie02] and to focus on application functionalities rather than on their relations with the physical devices. Keywords: pervasive computing, homogeneous environment, service-oriented, heterogeneity problem, coordination model, context model, resource management, service management, application interfaces, ontology, semantic services, interaction logic, description logic.
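
    To make the concept model above more concrete, the sketch below maps the listed concepts onto Java types; the interfaces and names are hypothetical illustrations, not the UBIDEV framework itself.

```java
import java.util.*;

/**
 * Illustrative sketch of how the UBIDEV concepts described in the abstract
 * (resource, context, classification, service, coordination) could map onto
 * Java types. The interfaces below are hypothetical and not the UBIDEV API.
 */
public class UbidevSketch {

    /** Resource: atomic abstraction unit manipulated by the application. */
    interface Resource { String id(); }

    /** Service: a functionality that manipulates one or more resources. */
    interface Service { void execute(List<Resource> resources); }

    /** Context: environment information (resources, services) used to adapt behavior. */
    static class Context {
        final List<Resource> resources = new ArrayList<>();
        final List<Service> services = new ArrayList<>();
        /** Classification: tag resources with application-ontology concepts. */
        final Map<Resource, String> classification = new HashMap<>();
    }

    /** Coordination: compose and execute services over the current context. */
    static void coordinate(Context ctx) {
        for (Service s : ctx.services) s.execute(ctx.resources);
    }

    public static void main(String[] args) {
        Context ctx = new Context();
        Resource display = () -> "display-1";
        ctx.resources.add(display);
        ctx.classification.put(display, "OutputDevice");   // ontology concept for this resource
        ctx.services.add(rs -> rs.forEach(r -> System.out.println("render on " + r.id())));
        coordinate(ctx);
    }
}
```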

    Automated, Parallel Optimization Algorithms for Stochastic Functions

    Optimization algorithms for stochastic functions are needed for real-world and simulation applications where results are obtained by sampling and therefore contain experimental error or random noise. We have developed a series of stochastic optimization algorithms based on the well-known classical downhill simplex algorithm. Our parallel implementation of these optimization algorithms, using a framework called MW, is based on a master-worker architecture in which each worker runs a massively parallel program. This parallel implementation allows the sampling to proceed independently on many processors, as demonstrated by scaling up to more than 100 vertices and 300 cores. The framework is highly suitable for clusters with an ever-increasing number of cores per node. The new algorithms have been successfully applied to the reparameterization of a model for liquid water, achieving thermodynamic and structural results that are better than those of a standard model used in molecular simulations, with the advantage of a fully automated parameterization process.
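
    The core parallelization idea, farming out independent evaluations of a noisy objective at the simplex vertices, can be illustrated with plain Java threads. This sketch is not the MW framework or the authors' water-model objective; it only shows the master-worker pattern the abstract describes, with a hypothetical noisyObjective standing in for a simulation.

```java
import java.util.*;
import java.util.concurrent.*;

/**
 * Minimal master-worker sketch: evaluate a noisy objective at many simplex
 * vertices in parallel, then pick the best vertex. Plain ExecutorService
 * illustration, not the MW framework used by the authors.
 */
public class ParallelSimplexEval {

    /** Noisy objective: a quadratic plus Gaussian noise, standing in for a simulation. */
    static double noisyObjective(double[] x) {
        double f = 0;
        for (double xi : x) f += xi * xi;
        return f + ThreadLocalRandom.current().nextGaussian() * 0.01;
    }

    public static void main(String[] args) throws Exception {
        int dim = 5;
        double[][] vertices = new double[dim + 1][dim];        // a simplex has dim+1 vertices
        Random rng = new Random(42);
        for (double[] v : vertices)
            for (int i = 0; i < dim; i++) v[i] = rng.nextDouble();

        ExecutorService workers = Executors.newFixedThreadPool(4);
        List<Future<Double>> results = new ArrayList<>();
        for (double[] v : vertices)
            results.add(workers.submit(() -> noisyObjective(v)));   // master farms out evaluations

        int best = 0;
        for (int i = 0; i < results.size(); i++)
            if (results.get(i).get() < results.get(best).get()) best = i;

        System.out.println("best vertex index: " + best);
        workers.shutdown();
    }
}
```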

    Volunteer computing

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001. Includes bibliographical references (p. 205-216). This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. This thesis presents the idea of volunteer computing, which allows high-performance parallel computing networks to be formed easily, quickly, and inexpensively by enabling ordinary Internet users to share their computers' idle processing power without needing expert help. In recent years, projects such as SETI@home have demonstrated the great potential power of volunteer computing. In this thesis, we identify volunteer computing's further potentials and show how these can be achieved. We present the Bayanihan system for web-based volunteer computing. Using Java applets, Bayanihan enables users to volunteer their computers by simply visiting a web page. This makes it possible to set up parallel computing networks in a matter of minutes, compared to the hours, days, or weeks required by traditional NOW and metacomputing systems. At the same time, Bayanihan provides a flexible object-oriented software framework that makes it easy for programmers to write various applications, and for researchers to address issues such as adaptive parallelism, fault-tolerance, and scalability. Using Bayanihan, we develop a general-purpose runtime system and APIs, and show how volunteer computing's usefulness extends beyond solving esoteric mathematical problems to other, more practical, master-worker applications such as image rendering, distributed web-crawling, genetic algorithms, parametric analysis, and Monte Carlo simulations. By presenting a new API using the bulk synchronous parallel (BSP) model, we further show that, contrary to popular belief and practice, volunteer computing need not be limited to master-worker applications but can be used for coarse-grain message-passing programs as well. Finally, we address the new problem of maintaining reliability in the presence of malicious volunteers. We present and analyze traditional techniques such as voting, and new ones such as spot-checking, encrypted computation, and periodic obfuscation. Then, we show how these can be integrated in a new idea called credibility-based fault-tolerance, which uses probability estimates to limit and direct the use of redundancy. We validate this new idea with parallel Monte Carlo simulations, and show how it can achieve error rates several orders of magnitude smaller than traditional voting for the same slowdown. by Luis F.G. Sarmenta. Ph.D.
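
    Two of the reliability techniques named in the abstract, majority voting and spot-checking, can be illustrated with a toy sketch. This is a simplification for illustration only, not the Bayanihan implementation of credibility-based fault-tolerance.

```java
import java.util.*;

/**
 * Toy sketch of two fault-tolerance ideas from the abstract: majority voting
 * over redundant results, and spot-checking workers with tasks whose answers
 * are already known. A simplification, not the Bayanihan implementation.
 */
public class VotingAndSpotChecks {

    /** Majority voting: accept the result returned by the most workers. */
    static String vote(List<String> results) {
        Map<String, Integer> counts = new HashMap<>();
        for (String r : results) counts.merge(r, 1, Integer::sum);
        return Collections.max(counts.entrySet(), Map.Entry.comparingByValue()).getKey();
    }

    /** Spot-check: send a task with a known answer; a wrong reply exposes a bad worker. */
    static boolean passesSpotCheck(String workerAnswer, String knownAnswer) {
        return knownAnswer.equals(workerAnswer);
    }

    public static void main(String[] args) {
        System.out.println(vote(List.of("42", "42", "13")));   // "42" wins the vote
        System.out.println(passesSpotCheck("13", "42"));       // false -> distrust this worker
    }
}
```

    Credibility-based fault-tolerance, as described in the abstract, would go further and use each worker's spot-check history to decide how much redundancy its results still need.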

    Economic-based Distributed Resource Management and Scheduling for Grid Computing

    Computational Grids, emerging as an infrastructure for next-generation computing, enable the sharing, selection, and aggregation of geographically distributed resources for solving large-scale problems in science, engineering, and commerce. The resources in the Grid are heterogeneous and geographically distributed, with varying availability, a variety of usage and cost policies for diverse users at different times, and priorities and goals that vary with time. The management of resources and application scheduling in such a large and distributed environment is therefore a complex task. This thesis proposes a distributed computational economy as an effective metaphor for the management of resources and application scheduling. It proposes an architectural framework that supports resource trading and quality-of-service based scheduling. It enables the regulation of supply and demand for resources, provides an incentive for resource owners to participate in the Grid, and motivates users to trade off the deadline, budget, and required level of quality of service. The thesis demonstrates the capability of economic-based systems for peer-to-peer distributed computing by developing scheduling strategies and algorithms driven by users' quality-of-service requirements. It demonstrates their effectiveness by performing scheduling experiments on the World-Wide Grid for solving parameter sweep applications.
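
    The deadline/budget trade-off described above can be illustrated with a small sketch that, for a job, picks the cheapest resource whose estimated completion time still meets the deadline and whose cost fits the budget. The Resource record and the numbers are hypothetical; this is not the thesis's broker or its actual scheduling algorithms.

```java
import java.util.*;

/**
 * Small sketch of a deadline-and-budget constrained choice: among candidate
 * resources, pick the cheapest one whose estimated completion time meets the
 * deadline and whose total cost fits the budget. Illustrative only.
 */
public class DeadlineBudgetChoice {

    record Resource(String name, double costPerHour, double hoursToFinish) {}

    /** Return the cheapest acceptable resource, if any. */
    static Optional<Resource> choose(List<Resource> candidates, double deadlineHours, double budget) {
        return candidates.stream()
                .filter(r -> r.hoursToFinish() <= deadlineHours)
                .filter(r -> r.costPerHour() * r.hoursToFinish() <= budget)
                .min(Comparator.comparingDouble(r -> r.costPerHour() * r.hoursToFinish()));
    }

    public static void main(String[] args) {
        List<Resource> grid = List.of(
                new Resource("cheap-slow", 1.0, 10.0),
                new Resource("fast-expensive", 5.0, 2.0));
        // Only "fast-expensive" meets the 4-hour deadline within the 20-unit budget.
        System.out.println(choose(grid, 4.0, 20.0));
    }
}
```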