6 research outputs found

    Private Cloud Deployment on Shared Computer Labs

    A computer laboratory in a school or college is often shared among multiple class and lab sessions, yet its computers are frequently left idle for extended periods. Those idle machines are potential resources to be harvested for cloud services. This manuscript details the deployment of a private cloud on shared computer labs. Fundamental services, namely an operation manager, a configuration manager, a cloud manager, and a schedule manager, were set up to power computers on and off remotely, specify each computer's OS configuration, manage cloud services (i.e., provision and retire virtual machines), and schedule OS-switching tasks, respectively. OpenStack was employed to manage the computer resources offered as cloud services. Deploying a private cloud in this way can improve computer utilization in shared computer labs.
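
    As a concrete illustration of the operation manager's remote power-on duty, the sketch below sends a Wake-on-LAN magic packet from Java. The manuscript does not name the wake-up mechanism, so Wake-on-LAN itself, the MAC address, and the broadcast address used here are assumptions for illustration only.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// Minimal Wake-on-LAN sender: a magic packet is 6 bytes of 0xFF followed by
// the target MAC address repeated 16 times, sent as a UDP broadcast (port 9).
public class WakeLabPc {
    public static void wake(String macAddress, String broadcastIp) throws Exception {
        String[] hex = macAddress.split("[:-]");
        byte[] mac = new byte[6];
        for (int i = 0; i < 6; i++) {
            mac[i] = (byte) Integer.parseInt(hex[i], 16);
        }

        byte[] packet = new byte[6 + 16 * 6];
        for (int i = 0; i < 6; i++) {
            packet[i] = (byte) 0xFF;
        }
        for (int i = 6; i < packet.length; i += 6) {
            System.arraycopy(mac, 0, packet, i, 6);
        }

        DatagramSocket socket = new DatagramSocket();
        socket.setBroadcast(true);
        socket.send(new DatagramPacket(packet, packet.length,
                InetAddress.getByName(broadcastIp), 9));
        socket.close();
    }

    public static void main(String[] args) throws Exception {
        wake("00:11:22:33:44:55", "192.168.1.255"); // placeholder MAC and broadcast address
    }
}
```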

    Deploying an Ad-Hoc Computing Cluster Overlaid on Top of Public Desktops

    A computer laboratory is often a homogeneous environment in which the computers share the same hardware and software settings. Conducting system tests in such an environment is challenging, since the laboratory must also be shared with regular classes. This manuscript details the use of desktop virtualization to dynamically deploy a virtual cluster for testing and ad-hoc purposes. The virtual cluster can support an environment completely different from the physical one and provides the application isolation essential for separating the testing environment from regular class activities. Windows 7 was running on the host desktops, and VMware Workstation was employed as the desktop virtualization manager. The deployed virtual cluster comprised virtual desktops installed with Ubuntu Desktop Linux. Lightweight applications built on the VMware VIX library, together with shell scripts, were developed and employed to manage job submission to the virtual cluster. Evaluations of the virtual cluster's deployment show that desktop virtualization can be leveraged to quickly and dynamically deploy a testing environment while exploiting underutilized compute resources.
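
    The job-submission tooling described above is built on the VMware VIX library and shell scripts; a rough equivalent can be sketched from Java by driving vmrun, the command-line front end to VIX that ships with VMware Workstation. The .vmx path, guest credentials, and job script below are placeholders, not details taken from the manuscript.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical job-submission helper: powers on a Workstation VM and runs a
// script inside the Ubuntu guest via vmrun (the command-line layer over VIX).
public class VmJobRunner {
    private static void run(List<String> command) throws Exception {
        Process p = new ProcessBuilder(command).inheritIO().start();
        if (p.waitFor() != 0) {
            throw new RuntimeException("vmrun failed: " + command);
        }
    }

    public static void main(String[] args) throws Exception {
        String vmx = "C:\\VMs\\node01\\node01.vmx";   // placeholder VM path on the Windows 7 host

        // Start the virtual desktop without opening the Workstation GUI.
        run(Arrays.asList("vmrun", "-T", "ws", "start", vmx, "nogui"));

        // Submit a job by executing a shell script inside the Ubuntu guest.
        run(Arrays.asList("vmrun", "-T", "ws",
                "-gu", "clusteruser", "-gp", "password",   // placeholder guest credentials
                "runProgramInGuest", vmx,
                "/bin/bash", "/home/clusteruser/jobs/run_job.sh"));

        // Power the node back down once the job completes.
        run(Arrays.asList("vmrun", "-T", "ws", "stop", vmx, "soft"));
    }
}
```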

    Planificación de aplicaciones best-effort y soft real-time en NOWs [Scheduling of best-effort and soft real-time applications on NOWs]

    New types of applications have emerged, such as video on demand, virtual reality, and videoconferencing, that are characterized by the need to meet their deadlines. In the literature, these have been termed periodic soft real-time (SRT) applications. This work focuses on the problem of scheduling this new type of application on non-dedicated clusters.

    High-fidelity rendering on shared computational resources

    The generation of high-fidelity imagery is a computationally expensive process, and parallel computing has traditionally been employed to alleviate this cost. However, traditional parallel rendering has been restricted to expensive shared-memory machines or dedicated distributed processors. In contrast, parallel computing on shared resources, such as a computational or desktop grid, offers a low-cost alternative. Prevalent rendering systems, however, are currently incapable of seamlessly handling such shared resources, as these resources suffer from high latencies, restricted bandwidth, and volatility. The conventional approach of rescheduling failed jobs in a volatile environment inhibits performance through redundant computation. Instead, clever task subdivision along with image reconstruction techniques provides an unrestrictive fault-tolerance mechanism that is highly suitable for high-fidelity rendering. This thesis presents novel fault-tolerant parallel rendering algorithms for effectively tapping the enormous, inexpensive computational power provided by shared resources. A first-of-its-kind system for fully dynamic, high-fidelity interactive rendering on idle resources is presented, which is key to providing immediate feedback on the changes made by a user. The system achieves interactivity by monitoring and adapting computations according to run-time variations in the available computational power, and it employs a spatio-temporal image reconstruction technique to enhance visual fidelity. Furthermore, the algorithms described for time-constrained offline rendering of still images and animation sequences make it possible to deliver results within a user-defined time limit. These novel methods enable the use of variable resources in deadline-driven environments.
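
    To make the fault-tolerance principle concrete, the toy sketch below splits a frame into independently rendered tiles and fills in tiles lost to volatile workers by reconstructing them from completed neighbours instead of rescheduling them. It is a simplified illustration of the general idea only, not the spatio-temporal reconstruction technique developed in the thesis.

```java
// Toy illustration of task subdivision with reconstruction-based fault tolerance:
// each tile of a frame is an independent rendering task; tiles lost to volatile
// workers are approximated from neighbouring tiles rather than re-rendered.
public class TileReconstruction {
    public static void main(String[] args) {
        // A 4x4 grid of per-tile results (average luminance); null = worker failed.
        Double[][] tiles = {
            {0.20, 0.25, 0.30, 0.35},
            {0.22, null, 0.32, null},
            {0.24, 0.29, null, 0.39},
            {0.26, 0.31, 0.36, 0.41},
        };

        for (int y = 0; y < tiles.length; y++) {
            for (int x = 0; x < tiles[y].length; x++) {
                if (tiles[y][x] == null) {
                    tiles[y][x] = reconstruct(tiles, x, y);
                }
                System.out.printf("%.3f ", tiles[y][x]);
            }
            System.out.println();
        }
    }

    // Estimate a missing tile as the mean of its completed 4-neighbours.
    static double reconstruct(Double[][] tiles, int x, int y) {
        double sum = 0;
        int n = 0;
        int[][] offsets = {{0, -1}, {0, 1}, {-1, 0}, {1, 0}};
        for (int[] o : offsets) {
            int nx = x + o[0], ny = y + o[1];
            if (ny >= 0 && ny < tiles.length && nx >= 0 && nx < tiles[ny].length
                    && tiles[ny][nx] != null) {
                sum += tiles[ny][nx];
                n++;
            }
        }
        return n > 0 ? sum / n : 0.0;
    }
}
```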

    Enabling rapid iterative model design within the laboratory environment

    This thesis presents a proof-of-concept study for better integration of the electrophysiological and modelling aspects of neuroscience. Members of these two sub-disciplines collaborate regularly, but due to differing resource requirements and largely incompatible spheres of knowledge, cooperation is often impeded by miscommunication and delays. To reduce model design time and provide a platform for more efficient experimental analysis, a rapid iterative model design method is proposed. The main achievement of this work is the development of a rapid model evaluation method based on parameter estimation, utilising a combination of evolutionary algorithms (EAs) and graphics processing unit (GPU) hardware acceleration. This method is the primary force behind the better integration of modelling and laboratory-based electrophysiology, as it provides a generic model evaluation method that does not require prior knowledge of model structure, or expertise in modelling, mathematics, or computer science. Combined with a suitably intuitive, user-targeted graphical user interface, the ideas presented in this thesis could be developed into a suite of tools enabling new forms of experimentation. The latter part of this thesis investigates the use of excitability-based models as the basis of an iterative design method. They were found to be computationally and structurally simple, easily extensible, and able to reproduce a wide range of neural behaviours while still faithfully representing the underlying cellular mechanisms. A case study was performed to assess the iterative design process through the implementation of an excitability-based model. The model was extended iteratively, using the rapid model evaluation method, to represent a vasopressin-releasing neuron. Not only was the model implemented successfully, it also suggested the existence of other, more subtle cell mechanisms and highlighted potential failings in previous implementations of this class of neuron.
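
    The core idea, treating the model as a black box whose parameters an evolutionary algorithm tunes against recorded data, can be sketched briefly. The example below fits two parameters of a trivial surrogate model with a (1+lambda) evolution strategy on the CPU; the surrogate model, mutation settings, and fitness function are illustrative assumptions, and the GPU acceleration and excitability-based models of the thesis are omitted.

```java
import java.util.Random;

// Minimal (1+lambda) evolutionary search for two model parameters, treating the
// model purely as a black box scored against target data (no gradients, no
// knowledge of model structure). Stand-in for the EA/GPU method in the thesis.
public class BlackBoxFit {
    static final double[] TARGET = simulate(2.0, 0.5);   // pretend "recorded" trace
    static final Random RNG = new Random(42);

    // Surrogate model: an exponentially decaying response controlled by an
    // amplitude and a decay-rate parameter (placeholder for a neuron model).
    static double[] simulate(double amplitude, double decay) {
        double[] trace = new double[50];
        for (int t = 0; t < trace.length; t++) {
            trace[t] = amplitude * Math.exp(-decay * t * 0.1);
        }
        return trace;
    }

    // Fitness = mean squared error between simulated and target traces.
    static double error(double[] params) {
        double[] trace = simulate(params[0], params[1]);
        double mse = 0;
        for (int t = 0; t < trace.length; t++) {
            double d = trace[t] - TARGET[t];
            mse += d * d;
        }
        return mse / trace.length;
    }

    public static void main(String[] args) {
        double[] best = {RNG.nextDouble() * 5, RNG.nextDouble()};   // random start
        double bestErr = error(best);

        for (int gen = 0; gen < 200; gen++) {
            for (int k = 0; k < 10; k++) {                // lambda = 10 offspring
                double[] child = {
                    best[0] + RNG.nextGaussian() * 0.1,   // Gaussian mutation
                    best[1] + RNG.nextGaussian() * 0.05
                };
                double err = error(child);
                if (err < bestErr) {                      // keep the best candidate
                    best = child;
                    bestErr = err;
                }
            }
        }
        System.out.printf("amplitude=%.3f decay=%.3f mse=%.6f%n",
                best[0], best[1], bestErr);
    }
}
```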

    Gestion efficiente et écoresponsable des données sauvegardées dans l'infonuagique : bilan énergétique des opérations CRUD (créer, lire, modifier et effacer) de MySQL-Java stockées dans un nuage privé [Efficient and environmentally responsible management of data stored in the cloud: an energy assessment of MySQL-Java CRUD operations (create, read, update, delete) stored in a private cloud]

    Information and communication technologies (ICT) make it possible to generate ever more data and retain ever more records. Their unchecked growth in data centres and on computer hard drives creates capacity problems. The CRUD matrix (create, read, update, delete) is a conceptual tool that illustrates the interactions of various computing processes. It is used to measure the life cycle of the contents of a database: inserting ("C"), reading ("R"), updating ("U"), and deleting ("D") the data it holds. These elementary operations correspond to the INSERT, SELECT, UPDATE, and DELETE requests of the MySQL database management system. In the Java programming language, the System.nanoTime() call measures the total execution time of each operation processed on a computer and saved in a local data centre, so that it can be compared with the same operation stored in a private cloud. The total execution time, the power draw of the central processing unit (CPU), and the CPU utilisation percentage are used to compute the total energy, in joules, consumed by the SQL requests executed synchronously and asynchronously, both individually and in sequences. The objective is to characterise the energy profile of data stored in the cloud in order to determine whether the cloud gives the computer an energy reduction as significant as the prevailing view in the scientific community suggests. The results show that, depending on the type, rate, and sequence of CRUD activity processed on the computer, cloud storage is not always the most environmentally responsible option. With this analysis, a company can compare the different options for processing and storing its data and adapt its management and use of CRUD operations in the cloud in a more environmentally friendly way.
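
    The measurement loop described above can be sketched with plain JDBC: each CRUD statement is executed against MySQL and its wall-clock duration captured with System.nanoTime(). The connection details, table, and data below are placeholders; converting the measured time into joules (elapsed seconds multiplied by CPU power draw and utilisation fraction) is noted in a comment rather than implemented.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Times each CRUD request with System.nanoTime(); the measured duration can then
// be combined with CPU power draw and utilisation to estimate energy in joules
// (energy ~ elapsed seconds * CPU watts * utilisation fraction).
public class CrudTimer {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://localhost:3306/testdb";   // local or private-cloud endpoint
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            timed(conn, "INSERT INTO items (id, name) VALUES (1, 'sample')");   // C
            timed(conn, "SELECT * FROM items WHERE id = 1");                    // R
            timed(conn, "UPDATE items SET name = 'updated' WHERE id = 1");      // U
            timed(conn, "DELETE FROM items WHERE id = 1");                      // D
        }
    }

    static void timed(Connection conn, String sql) throws Exception {
        long start = System.nanoTime();
        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            if (sql.startsWith("SELECT")) {
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) { /* consume the result set so the read completes */ }
                }
            } else {
                stmt.executeUpdate();
            }
        }
        long elapsed = System.nanoTime() - start;
        System.out.printf("%-6s %d ns%n", sql.split(" ")[0], elapsed);
    }
}
```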