The generation of high-fidelity imagery is computationally expensive, and parallel computing has traditionally been employed to alleviate this cost. However, traditional parallel rendering has been restricted to expensive shared-memory or dedicated distributed processors. In contrast, parallel computing on shared resources, such as a computational or desktop grid, offers a low-cost alternative. The prevalent rendering systems, however, are currently incapable of seamlessly handling such shared resources, which suffer from high latencies, restricted bandwidth and volatility. The conventional approach of rescheduling failed jobs in a volatile environment degrades performance through redundant computation. Instead, careful task subdivision combined with image reconstruction techniques provides an unrestrictive fault-tolerance mechanism that is highly suitable for high-fidelity rendering. This thesis presents novel fault-tolerant parallel rendering algorithms for effectively tapping the enormous, inexpensive computational power provided by shared resources.

A first-of-its-kind system for fully dynamic, high-fidelity, interactive rendering on idle resources is presented, which is key to providing immediate feedback on the changes made by a user. The system achieves interactivity by monitoring and adapting computations according to run-time variations in the available computational power, and employs a spatio-temporal image reconstruction technique to enhance visual fidelity. Furthermore, the algorithms described for time-constrained offline rendering of still images and animation sequences make it possible to deliver results within a user-defined time limit. These novel methods enable the employment of variable resources in deadline-driven environments.
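The fault-tolerance idea summarised above, replacing the rescheduling of failed tasks with image reconstruction from completed neighbours, can be sketched as follows. This is a minimal illustration only, not the thesis's actual algorithm: the one-dimensional tile layout, the simulated failure rate, and the neighbour-averaging reconstruction are all assumptions made for the example.

```python
import random

def render_tile(tile_id):
    """Simulated render task on a volatile worker; may fail and return None."""
    if random.random() < 0.2:  # assumed 20% worker drop-out rate
        return None
    return tile_id * 10  # placeholder pixel value for this tile

def reconstruct(tiles, i):
    """Fill a missing tile from completed neighbours instead of rescheduling it."""
    neighbours = [tiles[j] for j in (i - 1, i + 1)
                  if 0 <= j < len(tiles) and tiles[j] is not None]
    return sum(neighbours) / len(neighbours) if neighbours else 0

random.seed(1)
tiles = [render_tile(i) for i in range(8)]          # one pass, no re-runs
image = [t if t is not None else reconstruct(tiles, i)
         for i, t in enumerate(tiles)]              # every tile is now filled
```

The key design point is that the render pass never blocks on or repeats a failed task; missing results are approximated from data already computed, trading a small amount of fidelity for predictable completion time.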