Personal Volunteer Computing
We propose personal volunteer computing, a novel paradigm to encourage
technical solutions that leverage personal devices, such as smartphones and
laptops, for personal applications that require significant computations, such
as animation rendering and image processing. The paradigm requires no
investment in additional hardware, relying instead on devices that are already
owned by users and their community, and favours simple tools that can be
implemented part-time by a single developer. We show that a sample of today's
personal devices is competitive with a top-of-the-line laptop from two years
ago. We also propose new directions for extending the paradigm.
Pando: Personal Volunteer Computing in Browsers
The wide penetration and continued growth in ownership of personal
electronic devices represent a freely available and largely untapped source of
computing power. To leverage this power, we present Pando, a new volunteer computing
tool based on a declarative concurrent programming model and implemented using
JavaScript, WebRTC, and WebSockets. This tool enables a dynamically varying
number of failure-prone personal devices contributed by volunteers to
parallelize the application of a function on a stream of values, by using the
devices' browsers. We show that Pando can provide throughput improvements
compared to a single personal device, on a variety of compute-bound
applications including animation rendering and image processing. We also show
the flexibility of our approach by deploying Pando on personal devices
connected over a local network, on Grid5000, a French-wide computing grid in a
virtual private network, and seven PlanetLab nodes distributed in a wide area
network over Europe.
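Pando itself is implemented in JavaScript, but its core abstraction, applying a function to a stream of values on failure-prone workers and reissuing work lost to disconnections, can be sketched in a few lines of Python. The thread pool and retry counter below are illustrative stand-ins for volunteer browsers and reconnection handling, not Pando's actual API:

```python
import concurrent.futures

def pando_map(fn, values, max_workers=4, retries=3):
    """Apply fn to each value in parallel, reissuing values whose worker failed.

    A simplified model of Pando's stream processing: the thread pool stands in
    for the dynamically varying set of volunteer browsers, and the retry
    counter models reissuing values lost to disconnected devices.
    """
    results = {}
    pending = {i: retries for i in range(len(values))}
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
        while pending:
            futures = {pool.submit(fn, values[i]): i for i in pending}
            next_pending = {}
            for fut in concurrent.futures.as_completed(futures):
                i = futures[fut]
                try:
                    results[i] = fut.result()
                except Exception:
                    if pending[i] > 1:
                        next_pending[i] = pending[i] - 1  # reissue the value
            pending = next_pending
    # values that exhausted their retries come back as None
    return [results.get(i) for i in range(len(values))]
```

The declarative-concurrency flavour is that callers only supply a pure function and a stream of values; scheduling, failure, and reissue are hidden behind the map.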
Genet: A Quickly Scalable Fat-Tree Overlay for Personal Volunteer Computing using WebRTC
WebRTC enables browsers to exchange data directly, but the number of possible
concurrent connections to a single source is limited. We overcome the
limitation by organizing participants in a fat-tree overlay: when the maximum
number of connections of a tree node is reached, the new participants connect
to the node's children. Our design quickly scales when a large number of
participants join in a short amount of time, by relying on a novel scheme that
only requires local information to route connection messages: the destination
is derived from the hash value of the combined identifiers of the message's
source and of the node that is holding the message. The scheme provides
deterministic routing of a sequence of connection messages from a single source
and probabilistic balancing of newer connections among the leaves. We show that
this design puts at least 83% of nodes at the same depth as a deterministic
algorithm, can connect a thousand browser windows in 21-55 seconds in a local
network, and can be deployed for volunteer computing to tap into 320 cores in
less than 30 seconds on a local network to increase the total throughput on the
Collatz application by two orders of magnitude compared to a single core.
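The routing scheme described above can be sketched in a few lines of Python (the identifiers and fanout below are hypothetical; Genet's actual implementation runs over WebRTC in the browser). Hashing the combined identifiers of the node holding the message and the message's source yields a child index that is deterministic for a given (holder, source) pair, yet spreads distinct sources across the children:

```python
import hashlib

def route_child(holder_id: str, source_id: str, fanout: int) -> int:
    """Pick which child of the node `holder_id` should receive a connection
    message originating from `source_id`.

    Only local information is needed: the holder's own identifier and the
    message's source identifier. The same pair always routes to the same
    child (deterministic routing of a sequence of messages from one source),
    while different sources land on children roughly uniformly
    (probabilistic balancing among the leaves).
    """
    digest = hashlib.sha256((holder_id + source_id).encode()).digest()
    return int.from_bytes(digest[:8], "big") % fanout
```

When a node's connection limit is reached, it forwards the message to the child at this index, and the process repeats one level down the fat tree.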
Mechanisms for Outsourcing Computation via a Decentralized Market
As the number of personal computing and IoT devices grows rapidly, so does
the amount of computational power that is available at the edge. Since many of
these devices are often idle, there is a vast amount of computational power
that is currently untapped, and which could be used for outsourcing
computation. Existing solutions for harnessing this power, such as volunteer
computing (e.g., BOINC), are centralized platforms in which a single
organization or company can control participation and pricing. By contrast, an
open market of computational resources, where resource owners and resource
users trade directly with each other, could lead to greater participation and
more competitive pricing. To provide an open market, we introduce MODiCuM, a
decentralized system for outsourcing computation. MODiCuM deters participants
from misbehaving, a key problem in decentralized systems, by resolving
disputes via dedicated mediators and by imposing enforceable fines. However,
unlike other decentralized outsourcing solutions, MODiCuM minimizes
computational overhead since it does not require global trust in mediation
results. We provide analytical results proving that MODiCuM can deter
misbehavior, and we evaluate the overhead of MODiCuM using experimental results
based on an implementation of our platform.
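At its core, deterrence of this kind reduces to an expected-utility condition: misbehaving is irrational when the expected fine outweighs the expected gain. A minimal sketch (a generic deterrence model, not MODiCuM's actual analysis):

```python
def cheating_deterred(gain: float, fine: float, detection_prob: float) -> bool:
    """Return True when rational participants are deterred from cheating.

    A cheater's expected payoff is gain - detection_prob * fine; when
    mediators catch misbehavior with probability detection_prob and
    enforceable fines are large enough, that payoff is non-positive and
    honest behavior dominates. (Illustrative model only.)
    """
    return gain - detection_prob * fine <= 0
```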
Public grid computing participation: An exploratory study of determinants
Using the Internet, “public” computing grids can be assembled from “volunteered” PCs. To achieve this, volunteers download and install a software application capable of sensing periods of low local processor activity. During such times, this program on the local PC downloads and processes a subset of the project's data. At the completion of processing, the results are uploaded to the project and the cycle repeats.
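The client cycle described above can be sketched as follows (all callables are hypothetical stand-ins for the real client's idleness sensing, download, and upload layers):

```python
def volunteer_client(fetch_work, process, upload, is_idle, rounds):
    """One BOINC-style volunteer loop, in outline.

    Each round: if the local processor is idle, download a work unit,
    process it locally, and upload the result to the project; otherwise
    skip the round. Returns the number of completed work units.
    (The callables are illustrative placeholders, not a real client API.)
    """
    completed = 0
    for _ in range(rounds):
        if is_idle():
            result = process(fetch_work())  # download, then compute locally
            upload(result)                  # return the result to the project
            completed += 1
    return completed
```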
Public grids are being used for a wide range of endeavors, from searching for signals suggesting extraterrestrial life to finding a cure for cancer. Despite the potential benefits, however, participation has been relatively low. The work reported here, drawing on the technology acceptance and volunteering literature, suggests that the grid operator's reputation, the project's perceived need, and the level of volunteering activity of the PC owner are significant determinants of participation in grid projects. Attitude, in addition to personal innovativeness and level of volunteering activity, predicted intentions to join the project. Thus, methods traditionally used for motivating volunteer behavior may be effective in promoting the use of grid computing.
Learning by volunteer computing, thinking and gaming: What and how are volunteers learning by participating in Virtual Citizen Science?
Citizen Science (CS) refers to a form of research collaboration that engages volunteers without formal scientific training in contributing to empirical scientific projects. Virtual Citizen Science (VCS) projects engage participants in online tasks. VCS has demonstrated its usefulness for research; however, little is known about its learning potential for volunteers. This paper reports on research exploring the learning outcomes and processes in VCS. To identify different kinds of learning, 32 exploratory interviews with volunteers were conducted across three different VCS projects. We found six main learning outcomes related to participants' different activities in the projects. Volunteers learn on four dimensions directly related to the scope of the VCS project: they learn at the task/game level, acquire pattern-recognition skills and on-topic content knowledge, and improve their scientific literacy. Through the indirect opportunities of VCS projects, volunteers learn on two additional dimensions: off-topic knowledge and skills, and personal development. The activities through which volunteers learn fall into two levels: a micro (task/game) level, i.e. direct participation in the task, and a macro level, i.e. use of project documentation, personal research on the Internet, and practicing specific roles in project communities. Both types are influenced by interactions with others in chats or forums. Most learning is informal, unstructured, and social. Volunteers learn not only from others, by interacting with scientists and their peers, but also by working for others: they gain knowledge, new status, and skills by acting as active participants, moderators, editors, translators, community managers, etc. in a project community.
This research highlights these informal and social aspects of adult learning and science education, and also stresses the importance of the indirect learning opportunities provided by the project, the main one being the opportunity to participate and progress in a project community according to one's tastes and skills.
A collaborative citizen science platform for real-time volunteer computing and games
Volunteer computing (VC) or distributed computing projects are common in the
citizen cyberscience (CCS) community and present extensive opportunities for
scientists to make use of computing power donated by volunteers to undertake
large-scale scientific computing tasks. Volunteer computing is generally a
non-interactive process for those contributing computing resources to a project
whereas volunteer thinking (VT), or distributed thinking, allows
volunteers to participate interactively in citizen cyberscience projects to
solve human computation tasks. In this paper we describe the integration of
three tools, the Virtual Atom Smasher (VAS) game developed by CERN, LiveQ, a
job distribution middleware, and CitizenGrid, an online platform for hosting
and providing computation to CCS projects. This integration demonstrates the
combining of volunteer computing and volunteer thinking to help address the
scientific and educational goals of games like VAS. The paper introduces the
three tools and provides details of the integration process along with further
potential usage scenarios for the resulting platform.
MOON: MapReduce On Opportunistic eNvironments
MapReduce offers a flexible programming model for processing and generating large data sets on dedicated resources, where only a small fraction of such resources are ever unavailable at any given time. In contrast, when MapReduce is run on volunteer computing systems, which opportunistically harness idle desktop computers via frameworks like Condor, performance suffers due to the volatility of the resources, in particular the high rate of node unavailability. Specifically, the data and task replication scheme adopted by existing MapReduce implementations is woefully inadequate for resources with high unavailability. To address this, we propose MOON, short for MapReduce On Opportunistic eNvironments. MOON extends Hadoop, an open-source implementation of MapReduce, with adaptive task and data scheduling algorithms in order to offer reliable MapReduce services on a hybrid resource architecture, where volunteer computing systems are supplemented by a small set of dedicated nodes. The adaptive task and data scheduling algorithms in MOON distinguish between (1) different types of MapReduce data and (2) different types of node outages in order to strategically place tasks and data on both volatile and dedicated nodes. Our tests demonstrate that MOON can deliver a 3-fold performance improvement over Hadoop in volatile volunteer computing environments.
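The intuition behind replicating data more aggressively on volatile nodes can be illustrated with a simple availability calculation. This is an illustrative model assuming independent node outages, not MOON's actual placement algorithm:

```python
import math

def replicas_needed(p_unavailable: float, target_availability: float) -> int:
    """How many copies of a data block are needed on volatile nodes so that
    at least one copy is reachable with probability >= target_availability.

    Assuming independent outages with per-node unavailability p (0 < p < 1),
    P(all r copies unavailable) = p ** r, so we need the smallest r with
    1 - p ** r >= target_availability. Highly volatile nodes therefore need
    far more copies than near-dedicated ones, which is the kind of
    distinction MOON's adaptive scheduling exploits.
    """
    r = math.ceil(math.log(1 - target_availability) / math.log(p_unavailable))
    return max(r, 1)
```

For example, at 40% per-node unavailability a block needs 6 copies to reach 99% availability, while at 5% unavailability (closer to a dedicated node) 2 copies suffice, which is why supplementing volunteers with a few dedicated nodes is so effective.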