
    A Pluggable Framework for Lightweight Task Offloading in Parallel and Distributed Computing

    Multicore processors have quickly become ubiquitous in supercomputing, cluster computing, datacenter computing, and even personal computing. Software advances, however, continue to lag behind. In the past, software designers could simply rely on clock-speed increases to improve the performance of their software. With clock speeds now stagnant, software designers need to tap into the increased horsepower of multiple cores in a processor by creating software artifacts that support parallelism. Rather than forcing designers to write such software artifacts from scratch, we propose a pluggable framework that designers can reuse for lightweight task offloading in a parallel computing environment of multiple cores, whether those cores are colocated on a processor within a compute node, spread across compute nodes in a tightly coupled system such as a supercomputer, or spread across compute nodes in a loosely coupled system such as a cloud. To demonstrate the efficacy of our framework, we use it to implement lightweight task offloading (or software acceleration) for a popular parallel sequence-search application called mpiBLAST. Our experimental results on a 9-node, 36-core AMD Opteron cluster show that using mpiBLAST with our pluggable framework results in a 205% speed-up.
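    The offloading idea in this abstract can be pictured as a small plugin registry: applications hand batches of tasks to the framework, and interchangeable backends decide where the work runs. The sketch below is an illustration only, not the framework from the paper; the names TaskOffloader, multicore_backend, and count_matches are hypothetical, and a single-node process pool stands in for offloading across cores, nodes, or a cloud.

    # Minimal sketch of a pluggable task-offloading registry (illustrative only;
    # all names are hypothetical and not taken from the paper).
    from concurrent.futures import ProcessPoolExecutor
    from typing import Callable, Dict, Iterable, List


    class TaskOffloader:
        """Registry of pluggable backends that execute batches of tasks."""

        def __init__(self) -> None:
            self._backends: Dict[str, Callable[[Callable, Iterable], List]] = {}

        def register(self, name: str, backend: Callable[[Callable, Iterable], List]) -> None:
            """Plug in a backend under a name, e.g. 'multicore' or 'cluster'."""
            self._backends[name] = backend

        def offload(self, name: str, func: Callable, items: Iterable) -> List:
            """Run func over items using the named backend."""
            return self._backends[name](func, items)


    def multicore_backend(func: Callable, items: Iterable) -> List:
        """Offload tasks to the idle cores of the local node via a process pool."""
        with ProcessPoolExecutor() as pool:
            return list(pool.map(func, items))


    def count_matches(args):
        """Toy stand-in for a sequence-search task: count hits of a pattern."""
        pattern, text = args
        return text.count(pattern)


    if __name__ == "__main__":
        offloader = TaskOffloader()
        offloader.register("multicore", multicore_backend)
        work = [("AC", "ACGTACGTAC"), ("GT", "GGTTGTGT")]
        print(offloader.offload("multicore", count_matches, work))  # prints [3, 3]

    A backend for a tightly or loosely coupled cluster would register under a different name but expose the same offload interface, which is the "pluggable" property the abstract emphasizes.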

    Asymmetric Distributed Lock Management in Cloud Computing

    Cloud computing has become part of our daily lives. It offers a dynamic environment in which customers can store and access their data at any time and from any location. The growth of social networks has made it necessary to build solutions that are easily accessible and available whenever they are required. Cloud computing provides a location-independent solution that can offer a wide range of services while remaining free from failures and errors. Although the usage of cloud storage services keeps increasing, a significant number of issues, such as sudden server failures, network partitioning, and natural disasters, still need to be carefully addressed. Another point that is vital for a sustainable cloud is an algorithm that coordinates concurrent access and keeps shared files free from errors. One of the main approaches to these problems is to provide a set of servers that act as a gateway between clients and storage nodes. In this thesis we propose a new approach that provides an alternative solution to the main problems associated with cloud storage. The approach is based on multiple strategies for eliminating node failure and network partitioning while providing a fully distributed environment. In our approach, every server acts as a master server for its own requests and can serve its clients without interacting with other master servers. Concurrent access is maintained in an asymmetric way through our lock-manager algorithm with the least possible communication among master servers. Depending on the state of a specific file, a master server can execute a received request without communicating with the other master servers; further communication occurs only when additional information is required. In our approach, network partitioning or the failure of one or more master servers has no effect on the rest of the cloud. To improve availability, we associate every master server with a failover server that takes over the duties of the master when the master server fails or goes out of service. To measure the performance of our approach we have performed various tests, and the results are presented in detailed graphs.
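    The asymmetric scheme described above, in which each master decides locally for the files it is responsible for and contacts its peers only when it must, can be pictured with the following sketch. It is an assumption-laden illustration, not the thesis's algorithm: MasterServer, acquire, release, and the hash-based ownership rule are hypothetical, and in-process method calls stand in for the network messages a real deployment would use.

    # Minimal sketch of asymmetric lock management (illustrative only; names and
    # the ownership rule are hypothetical, not taken from the thesis).
    from typing import Dict


    class MasterServer:
        """Each master serves its own clients and owns a subset of files."""

        def __init__(self, name: str, peers: Dict[str, "MasterServer"]) -> None:
            self.name = name
            self.peers = peers               # other masters, keyed by name
            self.locks: Dict[str, str] = {}  # file -> holder, for files this master owns

        def owns(self, path: str) -> bool:
            """Toy ownership rule: hash the path onto the set of masters."""
            masters = sorted([self.name, *self.peers])
            return masters[hash(path) % len(masters)] == self.name

        def acquire(self, client: str, path: str) -> bool:
            # Asymmetry: a request for a file this master owns is decided
            # locally, with no traffic to the other masters.
            if self.owns(path):
                if path in self.locks:
                    return False
                self.locks[path] = client
                return True
            # Only when the file belongs to another master is a message needed.
            return self._owner_of(path).acquire(client, path)

        def release(self, client: str, path: str) -> None:
            if self.owns(path):
                if self.locks.get(path) == client:
                    del self.locks[path]
            else:
                self._owner_of(path).release(client, path)

        def _owner_of(self, path: str) -> "MasterServer":
            masters = sorted([self.name, *self.peers])
            return self.peers[masters[hash(path) % len(masters)]]


    if __name__ == "__main__":
        peers_a: Dict[str, MasterServer] = {}
        peers_b: Dict[str, MasterServer] = {}
        a = MasterServer("A", peers_a)
        b = MasterServer("B", peers_b)
        peers_a["B"] = b
        peers_b["A"] = a
        print(a.acquire("client-1", "/data/report.txt"))  # True: lock granted
        print(b.acquire("client-2", "/data/report.txt"))  # False: already held

    The failover server mentioned in the abstract could be modeled as a second MasterServer instance that adopts a failed master's lock table; that part is omitted here for brevity.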