Volunteer computing has the potential to provide significant additional computing capacity for the LHC experiments.
Initiatives such as the CMS@home project are aiming to integrate volunteer computing resources into the experiment’s
computational frameworks to support their scientific workloads. This is especially important, as over the next few years
the demands on computing capacity will increase beyond what can be supported by general technology trends. This paper
describes how a volunteer computing project that uses virtualization to run high energy physics simulations can integrate
those resources into its computing infrastructure. The concept of the volunteer cloud is introduced, and it is shown how this
model can simplify that integration. An architecture for implementing the volunteer cloud model is presented along with
an implementation for the CMS@home project. Finally, the submission of real CMS workloads to this volunteer cloud is
compared with identical workloads submitted to the grid.