
    On Utilization of Contributory Storage in Desktop Grids

    The availability of desktop grids and shared computing platforms has popularized the use of contributory resources, such as desktops, as computing substrates for a variety of applications. However, addressing the exponentially growing storage demands of applications, especially in a contributory environment, remains a challenging research problem. In this report, we propose a transparent distributed storage system that harnesses the storage contributed by grid participants arranged in a peer-to-peer network to yield a scalable, robust, and self-organizing system. The novelty of our work lies in (i) design simplicity to facilitate actual use; (ii) support for easy integration with grid platforms; (iii) ingenious use of striping and error coding techniques to support very large data files; and (iv) the use of multicast techniques for data replication. Experimental results from simulations and an actual implementation show that our system can provide reliable and efficient storage with large file support for desktop grid applications.
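
    A minimal sketch of the striping-plus-parity idea in Python, assuming one XOR parity unit per stripe group; the stripe width K, the unit size, and the XOR code itself are invented placeholders, since the report's actual error-coding scheme is not detailed in the abstract.

        from functools import reduce
        from operator import xor

        K = 4                  # data stripe units per parity group (assumed)
        UNIT = 64 * 1024       # stripe unit size in bytes (assumed)

        def stripe_with_parity(data):
            """Split data into groups of K units plus one XOR parity unit."""
            groups = []
            for off in range(0, len(data), K * UNIT):
                units = [data[off + i * UNIT : off + (i + 1) * UNIT].ljust(UNIT, b"\x00")
                         for i in range(K)]
                parity = bytes(reduce(xor, col) for col in zip(*units))
                groups.append(units + [parity])  # each unit would go to a different peer
            return groups

        def recover_unit(group, lost_index):
            """Rebuild any one lost unit by XOR-ing the K surviving units."""
            survivors = [u for i, u in enumerate(group) if i != lost_index]
            return bytes(reduce(xor, col) for col in zip(*survivors))

    Spreading each group across distinct peers makes any single peer loss recoverable, which is the property the striping and coding machinery needs for very large files.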

    Understanding collaboration in volunteer computing systems

    Volunteer computing is a paradigm in which devices participating in a distributed environment share part of their resources to help others perform their activities. The effectiveness of this computing paradigm depends on the collaboration attitude adopted by the participating devices. Unfortunately for software designers, it is not clear how to contribute local resources to the shared environment without compromising resources that could later be required by the contributors. Therefore, many designers adopt a conservative position when defining the collaboration strategy to be embedded in volunteer computing applications. This position produces an underutilization of the devices’ local resources and reduces the effectiveness of these solutions. This article presents a study that helps designers understand the impact of adopting a particular collaboration attitude when contributing local resources to the shared distributed environment. The study considers five collaboration strategies, which are analyzed in computing environments with both abundance and scarcity of resources. The obtained results indicate that collaboration strategies based on effort-based incentives work better than those using contribution-based incentives. These results also show that the use of effort-based incentives does not jeopardize the availability of local resources for local needs.
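
    A toy scoring function makes the contrast between the two incentive families concrete. The Peer fields and both rules below are assumptions for illustration, not the article's model; the point is that an effort-based score treats a modest device donating most of its idle capacity on par with a powerful one.

        from dataclasses import dataclass

        @dataclass
        class Peer:
            donated_cpu_hours: float   # work actually delivered to others
            capacity_cpu_hours: float  # what the device could have delivered

        def contribution_score(p):
            """Contribution-based: rewards absolute output, favouring strong devices."""
            return p.donated_cpu_hours

        def effort_score(p):
            """Effort-based: rewards the donated share of capacity."""
            return p.donated_cpu_hours / p.capacity_cpu_hours if p.capacity_cpu_hours else 0.0

        weak = Peer(donated_cpu_hours=8, capacity_cpu_hours=10)
        strong = Peer(donated_cpu_hours=20, capacity_cpu_hours=100)
        assert contribution_score(strong) > contribution_score(weak)
        assert effort_score(weak) > effort_score(strong)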

    Virtual Organization Clusters: Self-Provisioned Clouds on the Grid

    Virtual Organization Clusters (VOCs) provide a novel architecture for overlaying dedicated cluster systems on existing grid infrastructures. VOCs provide customized, homogeneous execution environments on a per-Virtual Organization basis, without the cost of physical cluster construction or the overhead of per-job containers. Administrative access and overlay network capabilities are granted to Virtual Organizations (VOs) that choose to implement VOC technology, while the system remains completely transparent to end users and non-participating VOs. Unlike alternative systems that require explicit leases, VOCs are autonomically self-provisioned according to configurable usage policies. As a grid computing architecture, VOCs are designed to be technology agnostic and are implementable by any combination of software and services that follows the Virtual Organization Cluster Model. As demonstrated through simulation testing and evaluation of an implemented prototype, VOCs are a viable mechanism for increasing end-user job compatibility on grid sites. On existing production grids, where jobs are frequently submitted to a small subset of sites and thus experience high queuing delays relative to average job length, the grid-wide addition of VOCs does not adversely affect mean job sojourn time. By load-balancing jobs among grid sites, VOCs can reduce the total amount of queuing on a grid to a level sufficient to counteract the performance overhead introduced by virtualization.
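
    The self-provisioning behaviour might be sketched as a small watchdog loop. Everything here is hypothetical: the policy knobs and the poll_queue/boot_vm/kill_vm hooks stand in for whatever scheduler and hypervisor interface an actual VOC deployment would use.

        import time

        MAX_NODES = 32        # per-VO cap from the (assumed) usage policy
        JOBS_PER_NODE = 2     # target queued jobs per running VM (assumed)

        def desired_nodes(queued_jobs):
            """Map queue depth to a VM count, clamped by the usage policy."""
            return min(MAX_NODES, -(-queued_jobs // JOBS_PER_NODE))  # ceiling division

        def watchdog(poll_queue, boot_vm, kill_vm, running, interval=30):
            """Grow or shrink the overlay cluster; end users never see this loop."""
            while True:
                target = desired_nodes(poll_queue())
                while len(running) < target:
                    running.append(boot_vm())     # lease-free, policy-driven growth
                while len(running) > target:
                    kill_vm(running.pop())        # shrink when the queue drains
                time.sleep(interval)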

    Forecasting Nigeria's Electricity Demand and Energy Efficiency Potential Under Climate Uncertainty

    The increasing population and socio-economic growth of Nigeria, coupled with currently unmet electricity demand, require the expansion of power supply facilities. Among Nigeria’s electricity-consuming sectors, the residential sector is the largest and is growing at a very fast rate. Meeting this growing demand requires an accurate estimate of future demand that can guide policy makers in planning the expansion of electricity supply and distribution and in setting energy efficiency standards and labeling. To achieve this, a residential electricity demand forecast model is needed that can correctly predict future demand and guide the construction of power plants, including cost optimization of building these power infrastructures. Modelling electricity demand in developing countries is problematic because of the scarcity of data and of methodologies that adequately consider detailed disaggregation of household appliances, energy efficiency improvements, and stock uptakes. This dissertation addresses these gaps and presents methodologies for a detailed disaggregation of household appliances, a more accurate electricity demand projection, and estimation of the peak load reduction, energy savings, and economic and environmental benefits of energy efficiency in the residential sector of Nigeria. The study adopts a hybrid bottom-up and top-down approach, supplemented with hourly end-use demand profiles, to model residential electricity consumption and to project efficiency improvements from the introduction of energy efficiency standards and labelling (EE S&L) under two scenarios (Business As Usual and Best Available Technology). A consumer life-cycle cost analysis was also conducted to determine the cost-effectiveness of introducing EE S&L to consumers. The results show significant savings in energy and carbon emissions, increased cooling demand due to climate uncertainty, and a negative return on investment with increased life-cycle costs for consumers who purchase more efficient appliances. These results are subject to uncertainties that stem mainly from the input data and were analyzed with a Monte Carlo simulation; the uncertainties considered, including the distributions applied to them, are outlined, and the resulting outputs are presented.
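
    A toy version of the hybrid model's bottom-up half, with Monte Carlo treatment of input uncertainty, might look as follows. The appliance inventory, the +/-10% uniform perturbations, and all numbers are invented placeholders rather than the dissertation's inputs.

        import random

        # (stock in households, rated watts, hours of use per day) -- all assumed
        appliances = {
            "fan":    (2_000_000,  60,  8.0),
            "fridge": (1_500_000, 150, 24.0),
            "tv":     (1_800_000,  90,  5.0),
        }

        def annual_gwh(stock_scale, use_scale):
            """Bottom-up sum: stock x watts x hours x 365 days, converted to GWh."""
            wh = sum(n * stock_scale * w * h * use_scale * 365
                     for n, w, h in appliances.values())
            return wh / 1e9

        def monte_carlo(runs=10_000):
            """Perturb stock and usage with an assumed +/-10% uniform uncertainty."""
            return sorted(annual_gwh(random.uniform(0.9, 1.1),
                                     random.uniform(0.9, 1.1))
                          for _ in range(runs))

        draws = monte_carlo()
        lo, hi = draws[int(0.05 * len(draws))], draws[int(0.95 * len(draws))]
        print(f"median {draws[len(draws) // 2]:.0f} GWh, 90% interval [{lo:.0f}, {hi:.0f}] GWh")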

    Security in Distributed, Grid, Mobile, and Pervasive Computing

    This book addresses the increasing demand to guarantee privacy, integrity, and availability of resources in networks and distributed systems. It first reviews security issues and challenges in content distribution networks, describes key agreement protocols based on the Diffie-Hellman key exchange and key management protocols for complex distributed systems like the Internet, and discusses securing design patterns for distributed systems. The next section focuses on security in mobile computing and wireless networks. After a section on grid computing security, the book presents an overview of security solutions for pervasive healthcare systems and surveys wireless sensor network security.
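
    As a concrete anchor for the key-agreement material, here is textbook unauthenticated Diffie-Hellman in Python. The tiny prime and generator are toy assumptions for the demo; real deployments use vetted groups of 2048 bits or more, and the protocols the book describes add authentication precisely because this bare exchange is open to man-in-the-middle attacks.

        import secrets

        p = 0xFFFFFFFB  # toy 32-bit prime (2**32 - 5); far too small for real use
        g = 5           # toy generator, assumed for the demo

        a = secrets.randbelow(p - 3) + 2   # Alice's private exponent
        b = secrets.randbelow(p - 3) + 2   # Bob's private exponent
        A = pow(g, a, p)                   # Alice sends A to Bob in the clear
        B = pow(g, b, p)                   # Bob sends B to Alice in the clear

        assert pow(B, a, p) == pow(A, b, p)  # both sides derive g**(a*b) mod p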

    An investigation into higher education users’ perception of virtual services at the University Campus Oldham

    Virtualization is a method of using computer technology in such a way that any one individual user is able to access many other devices remotely. In other words, it allows one computer to control many others, or one operating system to access multiple computers in order to analyse their databases. It can also remove the need for multiple servers, thus reducing the number needed by any one organisation, and it allows many different applications and software packages to run on less hardware. Applications are able to run faster, and availability increases, with more users able to share each piece of hardware as well as software licenses, which under normal circumstances can be expensive. The main attraction of virtualization is that access to a virtual desktop can be global as well as local through remote access, with the ever-expanding use of broadband services giving high-speed internet access. There is also evidence that virtualization is growing in all areas, most importantly education, business, and commerce. Sharing services via the Cloud is increasingly popular for both communication and data storage, and its capabilities may expand even further (Cafaro et al., 2010). As virtualization increases, should education not evolve in the same way? Should the education system now be using virtualization to provide Knowledge as a Service? These questions are addressed by this thesis, which also shows that virtualization would be more cost-effective to run as well as more flexible and faster, ensuring that both students and academic staff work with up-to-date technology and resources. Finally, using theoretical models of program development for virtualization, this thesis demonstrates that introducing a virtual desktop would be accepted by users, though some issues would need addressing prior to any significant deployment in a higher education setting.

    Reliability of clustered vs. declustered replica placement in data storage systems

    The placement of replicas across storage nodes in a replication-based storage system is known to affect rebuild times and therefore system reliability. Earlier work has shown that, for a replication factor of two, the reliability is essentially unaffected by the replica placement scheme because all placement schemes have mean times to data loss (MTTDLs) within a factor of two for practical values of the failure rate, storage capacity, and rebuild bandwidth of a storage node. However, for higher replication factors, simulation results reveal that this no longer holds. Moreover, an analytical derivation of the MTTDL becomes intractable for general placement schemes. In this paper, we develop a theoretical model that is applicable for any replication factor and provides a good approximation of the MTTDL for small failure rates. This model characterizes the system behavior by using an analytically tractable measure of reliability: the probability of the shortest path to data loss following the first node failure. It is shown that, for highly reliable systems, this measure approximates well the probability of all paths to data loss after the first node failure and prior to the completion of rebuild, and leads to a rough estimation of the MTTDL. The results obtained are of theoretical and practical importance and are confirmed by means of simulations. As our results show, and contrary to intuition, the declustered placement scheme offers, for replication factors greater than two, a reliability that does not decrease as the number of nodes in the system increases.
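
    The rough MTTDL estimation the abstract alludes to can be phrased as a back-of-envelope formula: if each of the n nodes fails at rate lambda, and a first node failure cascades to data loss before rebuild completes with probability P, then first failures arrive at rate n*lambda and on average 1/P of them are survived, giving MTTDL ~ 1 / (n * lambda * P). This is a standard small-failure-rate approximation, offered here as a hedged reading rather than the paper's exact derivation.

        def mttdl_estimate(n_nodes, fail_rate_per_node, p_loss):
            """Rough MTTDL ~ 1 / (n * lambda * P), valid for small failure rates."""
            return 1.0 / (n_nodes * fail_rate_per_node * p_loss)

        # Illustrative numbers only: 100 nodes, one failure per node-year, and a
        # one-in-a-million chance that a first failure cascades to data loss
        # give an MTTDL on the order of 10,000 years.
        print(f"{mttdl_estimate(100, 1.0, 1e-6):,.0f} years")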

    Effect of replica placement on the reliability of large scale data storage systems

    Replication is a widely used method to protect large-scale data storage systems from data loss when storage nodes fail. It is well known that the placement of replicas of the different data blocks across the nodes affects the time to rebuild. Several systems described in the literature are designed on the premise that minimizing rebuild times maximizes system reliability. Our results, however, indicate that the reliability is essentially unaffected by the replica placement scheme. We show that, for a replication factor of two, all possible placement schemes have mean times to data loss (MTTDLs) within a factor of two for practical values of the failure rate, storage capacity, and rebuild bandwidth of a storage node. The theoretical results are confirmed by means of event-driven simulation. For higher replication factors, an analytical derivation of the MTTDL becomes intractable for a general placement scheme. We therefore use one of the alternative measures of reliability proposed in the literature, namely, the probability of data loss during rebuild in the critical mode of the system. Whereas for a replication factor of two this measure can be directly translated into the MTTDL, it is only suggestive of the MTTDL behavior for higher replication factors. This measure of reliability is shown to lie within a factor of two for all possible placement schemes and any replication factor. We also show that, for any replication factor, the clustered placement scheme has the lowest probability of data loss during rebuild in critical mode among all possible placement schemes, whereas the declustered placement scheme has the highest. Simulation results reveal, however, that these properties do not hold for the corresponding MTTDLs when the replication factor is greater than two. This indicates that some alternative measures of reliability may not be appropriate for comparing the MTTDL of different placement schemes.
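
    The clustered-versus-declustered trade-off can be exercised with a crude Monte Carlo experiment. The model below is a deliberate simplification under stated assumptions (exponential failures, rebuild time inversely proportional to the number of rebuilding nodes, replication factor two); it shows the direction of the effect without claiming to reproduce the paper's numbers.

        import random

        N = 100            # storage nodes (assumed)
        LAM = 0.001        # per-node failure rate, per hour (assumed)
        C_OVER_B = 10.0    # hours to rebuild a full node from a single source (assumed)

        def p_loss_during_rebuild(scheme, trials=200_000):
            """Estimate P(a further failure during rebuild hits vulnerable data)."""
            losses = 0
            for _ in range(trials):
                if scheme == "clustered":
                    rebuild = C_OVER_B            # the single mirror node rebuilds alone
                    vulnerable = 1                # only that mirror's failure loses data
                else:                             # declustered
                    rebuild = C_OVER_B / (N - 1)  # all survivors rebuild in parallel
                    vulnerable = N - 1            # any survivor's failure loses some data
                t = random.expovariate((N - 1) * LAM)  # next failure among survivors
                if t < rebuild and random.random() < vulnerable / (N - 1):
                    losses += 1
            return losses / trials

        for scheme in ("clustered", "declustered"):
            print(scheme, round(p_loss_during_rebuild(scheme), 5))

    With these toy numbers the clustered run should come out with the lower loss probability, matching the ordering the paper proves for this measure of reliability.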