6 research outputs found

    Data-Dependency Formalism for Developing Peer-to-Peer Applications

    Developing peer-to-peer (P2P) applications has become increasingly important in software development. Nowadays, a large number of organizations of different sectors and sizes depend more and more on collaboration between actors to perform their tasks. These P2P applications usually exhibit recursive behavior that many modeling approaches (e.g. finite-state approaches) cannot describe and analyze. In this paper, we present an approach that combines component-based development with well-understood methods and techniques from the fields of Attribute Grammars and Data-Flow Analysis in order to construct an abstract representation (i.e. a Data-Dependency Graph) of P2P applications and then perform data-flow analyses on it. The approach embodies a formalism called DDF (Data-Dependency Formalism) to capture the behavior of P2P applications and construct their Data-Dependency Graphs. Various properties can be inferred and computed at the proposed level of data abstraction, including some properties that model checking cannot compute when the system exhibits recursive behavior. As examples, we present two algorithms: one to resolve the deadlock problem and another for dominance analysis.
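
    The paper's own deadlock and dominance algorithms are not reproduced in this listing; as a loose illustration of the kind of analysis a data-dependency graph supports, the Java sketch below detects a circular wait (a classic deadlock symptom) by looking for cycles in a small directed dependency graph. The class and method names, and the example peers, are hypothetical and not taken from the DDF formalism itself.

        import java.util.*;

        // Minimal directed data-dependency graph: an edge a -> b means "a depends on b".
        // Hypothetical illustration only; not the DDF formalism from the paper.
        class DependencyGraphSketch {
            private final Map<String, List<String>> edges = new HashMap<>();

            void addDependency(String from, String to) {
                edges.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
                edges.computeIfAbsent(to, k -> new ArrayList<>());
            }

            // A cycle in the dependency graph corresponds to a circular wait,
            // i.e. a potential deadlock among the actors involved.
            boolean hasCycle() {
                Set<String> visited = new HashSet<>();
                Set<String> onStack = new HashSet<>();
                for (String node : edges.keySet()) {
                    if (dfs(node, visited, onStack)) return true;
                }
                return false;
            }

            private boolean dfs(String node, Set<String> visited, Set<String> onStack) {
                if (onStack.contains(node)) return true;   // back edge: cycle found
                if (visited.contains(node)) return false;  // already explored, no cycle here
                visited.add(node);
                onStack.add(node);
                for (String next : edges.get(node)) {
                    if (dfs(next, visited, onStack)) return true;
                }
                onStack.remove(node);
                return false;
            }

            public static void main(String[] args) {
                DependencyGraphSketch g = new DependencyGraphSketch();
                g.addDependency("peerA", "peerB");
                g.addDependency("peerB", "peerC");
                g.addDependency("peerC", "peerA");  // closes a circular wait
                System.out.println("Potential deadlock: " + g.hasCycle());  // prints true
            }
        }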

    Exploring Strategies that IT Leaders Use to Adopt Cloud Computing

    Information Technology (IT) leaders must leverage cloud computing to maintain competitive advantage. Evidence suggests that IT leaders who have leveraged cloud computing in small and medium-sized organizations have saved an average of $1 million in IT services for their organizations. The purpose of this qualitative single case study was to explore strategies that IT leaders use to adopt cloud computing for their organizations. The target population consisted of 15 IT leaders who had experience with designing and deploying cloud computing solutions at their organization in Long Island, New York, within the past 2 years. The conceptual framework of this research project was disruptive innovation theory. Semi-structured interviews were conducted and company documents were gathered. Data were inductively analyzed for emergent themes, then subjected to member checking to ensure the trustworthiness of findings. Four main themes emerged from the data: the essential elements for strategies to adopt cloud computing; the most effective strategies; leadership essentials; and the barriers, critical factors, and ineffective strategies affecting adoption of cloud computing. These findings may contribute to social change by providing insights that help IT leaders in small and medium-sized organizations save money while gaining competitive advantage and ensuring sustainable business growth, which could enhance community standards of living.

    Theses in Computer Science. Winter 2016

    Website Performance Evaluation and Estimation in an E-business Environment

    This thesis introduces a new Predictus model for performance evaluation and estimation in a multi-layer website environment. The model is based on soft-computing ideas, i.e. simulation and statistical analysis. The aim is to improve the energy consumption of the website's hardware and the efficiency of investments, and to avoid loss of availability. Optimised exploitation reduces energy and maintenance costs on the one hand and increases end-user satisfaction, thanks to robust and stable web services, on the other. A method based on simulating user requests is described. Instead of an ordinary static parameter set, the request distribution is extracted dynamically from previous log files, so the generated load reflects actual usage. Because the server system is loaded with valid and well-known requests, its behaviour remains natural, and a feedback loop on workload generation keeps the workload valid over the long term. A method for identifying the actual performance of the website is also described: replaying the well-known load through a large number of virtual users and observing the utilisation rate of server resources gives the best available view of the internal state of the system. Disturbing the live website can be avoided by using mathematical extrapolation to estimate the saturation point of an individual server resource.
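
    As a loose illustration of two ideas mentioned in the abstract, and not the Predictus model itself, the Java sketch below samples request types according to their observed frequency in a log and linearly extrapolates two measured utilisation points to an estimated saturation point. All class names, request paths and figures are hypothetical.

        import java.util.*;

        // Rough sketch of log-driven load generation and saturation extrapolation.
        // Request paths and counts would normally come from parsed access logs;
        // here they are hard-coded hypothetical values.
        public class LoadSketch {

            // Sample a request type with probability proportional to its frequency in the log.
            static String sampleRequest(Map<String, Integer> logCounts, Random rnd) {
                int total = logCounts.values().stream().mapToInt(Integer::intValue).sum();
                int pick = rnd.nextInt(total);
                for (Map.Entry<String, Integer> e : logCounts.entrySet()) {
                    pick -= e.getValue();
                    if (pick < 0) return e.getKey();
                }
                throw new IllegalStateException("empty log");
            }

            // Linear extrapolation of CPU utilisation versus request rate to the point
            // where utilisation would reach 100 % (a crude stand-in for the saturation point).
            static double extrapolateSaturation(double rate1, double util1,
                                                double rate2, double util2) {
                double slope = (util2 - util1) / (rate2 - rate1);
                return rate1 + (1.0 - util1) / slope;
            }

            public static void main(String[] args) {
                Map<String, Integer> logCounts = Map.of("/front", 700, "/search", 250, "/checkout", 50);
                Random rnd = new Random(42);
                for (int i = 0; i < 5; i++) {
                    System.out.println("simulated request: " + sampleRequest(logCounts, rnd));
                }
                // Two measured operating points: (200 req/s, 35 % CPU) and (400 req/s, 65 % CPU).
                System.out.printf("estimated saturation: %.0f req/s%n",
                        extrapolateSaturation(200, 0.35, 400, 0.65));
            }
        }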

    Efficient and eco-responsible management of data stored in the cloud: energy balance of MySQL-Java CRUD operations (create, read, update, delete) stored in a private cloud

    Information and communication technologies (ICT) make it possible to generate ever more data and to retain ever more information. The uncontrolled growth of this data in data centres and on computer hard drives creates capacity problems. The CRUD matrix (create, read, update, delete) is a conceptual tool that illustrates the interactions of various processes in computing. It is used to measure the life cycle of a database's content, namely the insertion "C", reading "R", updating "U" and deletion "D" of the data it contains. These elementary operations correspond to the INSERT, SELECT, UPDATE and DELETE statements of the MySQL database management system. In the Java programming language, the call System.nanoTime() measures the total execution time of each activity processed on a computer and saved in a local data centre, so that it can be compared with the same activity stored in a private cloud. The total execution time, the power draw of the central processing unit (CPU) and the processor utilisation percentage are used to compute the total energy, in joules, consumed by SQL queries executed synchronously and asynchronously, individually and in sequences. The objective is to characterise the energy profile of data stored in the cloud in order to determine whether the cloud reduces the computer's energy consumption as markedly as the prevailing view in the scientific community suggests. The results show that, depending on the type, rate and sequence of CRUD activity processed on the computer, cloud storage is not always the most eco-responsible option. With this analysis, a company can compare the different options for processing and storing its data and adapt its management and use of CRUD operations in the cloud in a more environmentally friendly way.
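
    As a loose illustration of the measurement idea, and not the thesis's actual harness, the Java sketch below times a single MySQL INSERT with System.nanoTime() and converts the elapsed time into joules from an assumed CPU power draw and utilisation. The JDBC URL, credentials, table and power figures are hypothetical; in practice the power and utilisation values would come from monitoring tools.

        import java.sql.*;

        // Minimal sketch of timing one CRUD operation and estimating its energy cost.
        // Requires a reachable MySQL instance and the MySQL JDBC driver on the classpath.
        public class CrudEnergySketch {

            // Energy (joules) = power (watts) * utilisation fraction * elapsed time (seconds).
            static double energyJoules(double cpuWatts, double cpuUtilisation, long elapsedNanos) {
                return cpuWatts * cpuUtilisation * (elapsedNanos / 1e9);
            }

            public static void main(String[] args) throws SQLException {
                String url = "jdbc:mysql://localhost:3306/testdb";  // hypothetical local database
                try (Connection conn = DriverManager.getConnection(url, "user", "password");
                     PreparedStatement insert = conn.prepareStatement(
                             "INSERT INTO items (name) VALUES (?)")) {

                    insert.setString(1, "example");

                    long start = System.nanoTime();   // start of the timed CRUD operation
                    insert.executeUpdate();           // the "C" of CRUD: an INSERT statement
                    long elapsed = System.nanoTime() - start;

                    // Example figures: 35 W CPU package power at 20 % utilisation during the query.
                    double joules = energyJoules(35.0, 0.20, elapsed);
                    System.out.printf("INSERT took %d ns, approx. %.6f J%n", elapsed, joules);
                }
            }
        }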