20 research outputs found

    Containerization in Cloud Computing: performance analysis of virtualization architectures

    Get PDF
    The growing adoption of the cloud is strongly influenced by the emergence of technologies that aim to improve the development and deployment processes of enterprise-grade applications. The goal of this thesis is to analyse one of these solutions, called “containerization”, and to assess in detail how this technology can be adopted in cloud infrastructures as an alternative to complementary solutions such as virtual machines. To date, the traditional “virtual machine” model has been the predominant solution in the market. The significant architectural difference that containers offer has driven the rapid adoption of this technology, since it greatly improves resource management and sharing, and guarantees significant improvements in the provisioning time of individual instances. The thesis examines containerization from both an infrastructural and an application perspective. For the first aspect, performance is analysed by comparing LXD, Docker and KVM as hypervisors for the OpenStack cloud infrastructure; the second concerns the development of enterprise-grade applications that must be deployed on a set of distributed servers. In that case, high-level services such as orchestration are needed. Therefore, the performance of the following solutions is compared: Kubernetes, Docker Swarm, Apache Mesos and Cattle

    Energy-efficient cloud computing application solutions and architectures

    Get PDF
    Environmental issues are receiving unprecedented attention from businesses and governments around the world. As concern for greenhouse gases, climate change and sustainability continues to grow, businesses are grappling with reducing their environmental impact while remaining profitable. Many businesses have discovered that Green IT initiatives and strategies can reform the organization, ensure compliance with laws and regulations, enhance the organization's public image, save energy costs, and reduce environmental impact. One of these Green IT initiatives is migrating or building business applications in the cloud. Cloud computing is a highly scalable and cost-effective infrastructure for running enterprise and web applications. As a result, building enterprise systems on cloud computing platforms is increasing significantly today. However, cloud computing does not inherently provide energy-efficiency solutions for these businesses. In this thesis, a concept has been developed to support organizations in choosing a suitable energy-efficient cloud architecture when moving their applications to the cloud or building new cloud applications. Thus, the concept focuses on how to employ cloud computing technology as an energy-efficient solution from the application perspective. The main idea applied in the concept is to identify architectures for cloud applications, based on inherent properties of cloud computing such as virtualization and elasticity, that give them green potential, and to identify correlations between these architectures and business process patterns already used in green business process design. Alongside these correlations, the application has been decomposed into basic technical and business attributes that describe it, and the relations between these attributes and the cloud architectures have been defined.
The relations between these components (the application attributes, the application architectures, and the green patterns) lead not only to an energy-efficient cloud architecture for the business application, but also to architectures that satisfy the organization's technical and business requirements. Prototypically, a recommender system has been implemented that supports the identification of suitable energy-efficient cloud application architectures in addition to the cloud migration decision
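    The abstract above describes a recommender that matches application attributes to candidate cloud architectures. A minimal sketch of how such an attribute-to-architecture match could be scored (the attribute names, architectures and weights below are invented for illustration and are not taken from the thesis):

    ```python
    # Illustrative sketch of an attribute-based architecture recommender.
    # Attribute names, architectures and weights are hypothetical examples.

    # Each candidate architecture maps the application attributes it benefits
    # from to a weight expressing how strongly it exploits that attribute.
    ARCHITECTURES = {
        "auto-scaling multi-tier": {"bursty_load": 3, "stateless_web_tier": 2},
        "batch on spot capacity":  {"delay_tolerant": 3, "bursty_load": 1},
        "static single-tier":      {"steady_load": 2},
    }

    def recommend(app_attributes: set[str]) -> list[tuple[str, int]]:
        """Rank architectures by the summed weights of matching attributes."""
        scored = [
            (name, sum(w for attr, w in weights.items() if attr in app_attributes))
            for name, weights in ARCHITECTURES.items()
        ]
        return sorted(scored, key=lambda pair: pair[1], reverse=True)
    ```

    A bursty, stateless application would rank the auto-scaling architecture first; the actual prototype would additionally fold in the green business process patterns the thesis correlates with each architecture.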

    Smart Wireless Sensor Networks

    Get PDF
    The recent development of communication and sensor technology has resulted in the growth of a new attractive and challenging area: wireless sensor networks (WSNs). A wireless sensor network, consisting of a large number of sensor nodes, is deployed in environmental fields to serve various applications. Equipped with wireless communication and intelligent computation capabilities, these nodes become smart sensors that not only perceive ambient physical parameters but are also able to process information, cooperate with each other and self-organize into a network. These new features help the sensor nodes, and the network as a whole, operate more efficiently in terms of both data acquisition and energy consumption. The special purposes of their applications require WSNs to be designed and operated differently from conventional networks such as the Internet. The network design must take into account the objectives of specific applications and the nature of the deployment environment. The limited resources of sensor nodes, such as memory, computational ability, communication bandwidth and energy supply, are key challenges in network design. A smart wireless sensor network must be able to deal with these constraints while guaranteeing the connectivity, coverage, reliability and security of the network's operation for a maximized lifetime. This book discusses various aspects of designing such smart wireless sensor networks. Main topics include: design methodologies, network protocols and algorithms, quality of service management, coverage optimization, time synchronization and security techniques for sensor networks

    Bayesian Prognostic Framework for High-Availability Clusters

    Get PDF
    Critical services from domains as diverse as finance, manufacturing and healthcare are often delivered by complex enterprise applications (EAs). High-availability clusters (HACs) are software-managed IT infrastructures that enable these EAs to operate with minimum downtime. To that end, HACs monitor the health of EA layers (e.g., application servers and databases) and resources (i.e., components), and attempt to reinitialise or restart failed resources swiftly. When this is unsuccessful, HACs try to failover (i.e., relocate) the resource group to which the failed resource belongs to another server. If the resource group failover is also unsuccessful, or when a system-wide critical failure occurs, HACs initiate a complete system failover. Despite the availability of multiple commercial and open-source HAC solutions, these HACs (i) disregard important sources of historical and runtime information, and (ii) have limited reasoning capabilities. Therefore, they may conservatively perform unnecessary resource group or system failovers or delay justified failovers for longer than necessary. This thesis introduces the first HAC taxonomy, uses it to carry out an extensive survey of current HAC solutions, and develops a novel Bayesian prognostic (BP) framework that addresses the significant HAC limitations that are mentioned above and are identified by the survey. The BP framework comprises four modules. The first module is a technique for modelling high availability using a combination of established and new HAC characteristics. The second is a suite of methods for obtaining and maintaining the information required by the other modules. The third is a HAC-independent Bayesian decision network (BDN) that predicts whether resource failures can be managed locally (i.e., without failovers). The fourth is a method for constructing a HAC-specific Bayesian network for the fast prediction of resource group and system failures. 
Used together, these modules reduce the downtime of HAC-protected EAs significantly. The experiments presented in this thesis show that the BP framework can deliver downtimes between 5.5 and 7.9 times smaller than those obtained with an established open-source HAC
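    The core decision in the abstract above, predicting whether a resource failure can be managed locally rather than by a failover, is an instance of Bayesian inference over observed evidence. As an illustrative sketch only (the evidence variable, node names and probabilities below are hypothetical, not taken from the thesis), the simplest two-node version of such a network reduces to Bayes' rule:

    ```python
    # Illustrative sketch: tiny Bayesian inference deciding whether a failed
    # resource is likely recoverable locally (restart) or needs a failover.
    # All probabilities and the evidence variable are hypothetical.

    # Prior: how often a failure is locally recoverable (e.g. a transient crash).
    P_LOCAL = 0.7

    # Likelihoods: probability of observing "repeated restarts in the last hour"
    # under each failure class.
    P_REPEATS_GIVEN_LOCAL = 0.1      # transient faults rarely repeat
    P_REPEATS_GIVEN_NOT_LOCAL = 0.8  # persistent faults usually repeat

    def p_local_given_repeats(repeats_observed: bool) -> float:
        """Posterior P(locally recoverable | evidence) via Bayes' rule."""
        like_local = P_REPEATS_GIVEN_LOCAL if repeats_observed else 1 - P_REPEATS_GIVEN_LOCAL
        like_other = P_REPEATS_GIVEN_NOT_LOCAL if repeats_observed else 1 - P_REPEATS_GIVEN_NOT_LOCAL
        joint_local = like_local * P_LOCAL
        joint_other = like_other * (1 - P_LOCAL)
        return joint_local / (joint_local + joint_other)

    def decide(repeats_observed: bool, threshold: float = 0.5) -> str:
        """Restart locally if the posterior clears the threshold, else fail over."""
        return "restart" if p_local_given_repeats(repeats_observed) >= threshold else "failover"
    ```

    A real BDN, as described in the thesis, would combine many such evidence sources and weigh the decision by the expected downtime cost of each action rather than a fixed threshold.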

    Applications Development for the Computational Grid

    Get PDF

    Combining SOA and BPM Technologies for Cross-System Process Automation

    Get PDF
    This paper summarizes the results of an industry case study that introduced a cross-system business process automation solution based on a combination of SOA and BPM standard technologies (i.e., BPMN, BPEL, WSDL). Besides discussing major weaknesses of the existing, custom-built, solution and comparing them against experiences with the developed prototype, the paper presents a course of action for transforming the current solution into the proposed solution. This includes a general approach, consisting of four distinct steps, as well as specific action items that are to be performed for every step. The discussion also covers language and tool support and challenges arising from the transformation

    A Fault-Tolerant Strategy of Redeploying the Lost Replicas in Cloud

    No full text

    An analysis of selected "cyberpunk" works by William Gibson, placed in a cultural and socio-political context

    Get PDF
    This thesis studies William Gibson's "cyberspace trilogy" (Neuromancer, Count Zero and Mona Lisa Overdrive). This was an extremely interesting and significant development in 1980s science fiction. It was used to codify and promote the "cyberpunk" movement in science fiction at that time, which this thesis also briefly studies. Such a study (at such a relatively late date, given the rapid pace of change in popular culture) seems valuable because a great deal of self-serving and mystifying comment and analysis has served to confuse critical understanding about this movement. It seems clear that cyberpunk was indeed a new development in science fiction (like other developments earlier in the twentieth century) but that the roots of this development were broader than the genre itself. However, much of the real novelty of Gibson's work is only evident through close analysis of the texts and how their apparent ideological message shifts focus with time. This message is inextricably entwined with Gibson's and cyberpunk's technological fantasias. Admittedly, these three texts appear to have been, broadly speaking, representations of a liberal U.S. world-view reflecting Gibson's own apparent beliefs. However, they were also expressions of a kind of technophilia which, while similar to that of much earlier science fiction, possessed its own special dynamic. In many ways this technophilia contradicted or undermined the classical liberalism nominally practiced in the United States. However, the combination of this framework and this dynamic, which appears both apocalyptic and conservative, appears in some ways to have been a reasonably accurate prediction of the future trajectory of the U.S. body politic -- towards exaggerated dependency on machines to resolve the consequences of an ever increasingly paranoid fantasy of the entire world as a threat. (It seems likely that this was also true, if sometimes to a lesser degree, of the cyberpunk movement as a whole.) 
While Gibson's work was enormously popular (both commercially and critically) in the 1980s and early 1990s, very little of this aspect of his work was taken seriously (except, to a limited degree, by a few Marxist and crypto-Marxist commentators like Darko Suvin). This seems ironic, given the avowedly futurological context of science fiction at this time