357 research outputs found

    A Model for Accomplishing and Managing Dynamic Cloud Federations

    Cloud computing is not just a promising approach to service provisioning: nowadays it is the reference model in the field. Several cloud service providers have emerged as de facto standards, and an increasing number of companies are choosing to migrate their business into the Cloud "ecosystem". Nevertheless, each provider adopts a particular interface to manage its services and uses proprietary technology. In this paper we present a cloud federation model that provides scalability and flexibility to small clouds. The idea is to benefit from seamlessly renting resources according to federation agreements among operators. The challenge is to overcome the problems that arise when trying to merge small clouds with heterogeneous administrative domains.

    Creating and Managing Dynamic Cloud Federations

    Cloud computing has evolved from a promising approach to service provisioning into the reference model for new data centres. Additionally, an increasing number of companies are choosing to migrate their business into the cloud "ecosystem", adopting the solutions developed by the biggest public Cloud Service Providers (CSPs). Smaller CSPs build their infrastructure on the technologies available to them; to better support user activities and provide enough resources to their users, federation is a possible solution. In this work we present different federation models, showing their strengths and weaknesses together with our considerations. Besides the existing federations highlighted, we show the design of a new implementation under development at INFN that aims at maximising the scalability and flexibility of small and/or hybrid clouds through the introduction of a federation manager. This new component will support seamless resource renting based on the acceptance of federation agreements among operators. Additionally, we discuss how the implementation of this model inside research institutes could help in the field of High Energy Physics, with explicit reference to the LHC experiments, digital humanities, life sciences and others.
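    As a purely illustrative sketch of the kind of component described above, the following Python snippet models a hypothetical federation manager that serves a resource request from local capacity first and rents the remainder from partner CSPs under pre-agreed federation agreements. The class and method names (FederationManager, FederationAgreement, request_resources) and the cheapest-partner-first policy are assumptions made for this example, not the INFN design.

```python
"""Minimal sketch of a federation manager that rents resources from
partner CSPs under pre-agreed federation agreements.  All names and the
selection policy (cheapest partner first) are illustrative assumptions,
not the INFN implementation described in the abstract."""

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class FederationAgreement:
    """Terms accepted by two operators joining the federation."""
    partner: str                # name of the partner CSP
    max_cores: int              # cores the partner agrees to lease
    price_per_core_hour: float  # agreed price


class FederationManager:
    """Serves resource requests locally when possible, otherwise rents
    the missing capacity from federated partners, cheapest first."""

    def __init__(self, local_free_cores: int, agreements: List[FederationAgreement]):
        self.local_free_cores = local_free_cores
        self.agreements = sorted(agreements, key=lambda a: a.price_per_core_hour)

    def request_resources(self, cores_needed: int) -> Dict[str, int]:
        """Return an allocation plan mapping provider name -> cores."""
        plan: Dict[str, int] = {}
        # 1) use local capacity first
        local = min(self.local_free_cores, cores_needed)
        if local:
            plan["local"] = local
            cores_needed -= local
        # 2) rent the remainder from partners, cheapest agreement first
        for agreement in self.agreements:
            if cores_needed == 0:
                break
            rented = min(agreement.max_cores, cores_needed)
            if rented:
                plan[agreement.partner] = rented
                cores_needed -= rented
        if cores_needed:
            raise RuntimeError("federation cannot satisfy the request")
        return plan


if __name__ == "__main__":
    manager = FederationManager(
        local_free_cores=16,
        agreements=[FederationAgreement("csp-a", 64, 0.05),
                    FederationAgreement("csp-b", 128, 0.04)],
    )
    print(manager.request_resources(100))  # e.g. {'local': 16, 'csp-b': 84}
```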

    Efficient multilevel scheduling in grids and clouds with dynamic provisioning

    Thesis, Universidad Complutense de Madrid, Facultad de Informática, Departamento de Arquitectura de Computadores y Automática, defended 12-01-2016. The consolidation of large Distributed Computing infrastructures has resulted in a High-Throughput Computing platform that is ready for high loads, whose best proponents are the current grid federations. On the other hand, Cloud Computing promises to be more flexible, usable, available and simple than Grid Computing, covering many more computational needs than those required to carry out distributed calculations. In any case, because of the dynamism and heterogeneity present in grids and clouds, calculating the best match between computational tasks and resources in an effectively characterised infrastructure is, by definition, an NP-complete problem, and only sub-optimal solutions (schedules) can be found for these environments. Nevertheless, the characterisation of the resources of both kinds of infrastructures is far from being achieved. The available information systems do not provide accurate data about the status of the resources that would allow the advanced scheduling required by the different needs of distributed applications. The issue was not solved for grids during the last decade, and the recently established cloud infrastructures have the same problem.
In this framework, brokers can only improve the throughput of very long calculations, but do not provide estimations of their duration. Complex scheduling was traditionally tackled by other tools such as workflow managers, self-schedulers and the production management systems of certain research communities. Nevertheless, the low performance achieved by these early-binding methods is noticeable. Moreover, the diversity of cloud providers and, mainly, their lack of standardised programming interfaces and brokering tools to distribute the workload hinder the massive portability of legacy applications to cloud environments...
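    Since only sub-optimal schedules can be found for these environments, a simple heuristic can illustrate what such a schedule looks like. The sketch below applies a greedy minimum-completion-time rule with longest-task-first ordering; the per-task runtime estimates and per-resource speed factors it assumes are exactly the kind of accurate characterisation data the thesis notes real information systems rarely provide. It is not the scheduler developed in the thesis.

```python
"""Illustrative greedy heuristic for the NP-complete task/resource matching
problem discussed above: a minimum-completion-time rule with longest-task-first
ordering, which yields a sub-optimal but feasible schedule.  The runtime
estimates and speed factors are assumed inputs."""

from typing import Dict, List, Tuple


def greedy_schedule(task_runtimes: Dict[str, float],
                    resource_speeds: Dict[str, float]) -> List[Tuple[str, str, float]]:
    """Assign each task to the resource on which it would finish earliest.

    task_runtimes   -- estimated runtime of each task on a reference core
    resource_speeds -- relative speed of each resource (1.0 = reference)
    Returns (task, resource, finish_time) tuples in scheduling order.
    """
    free_at = {name: 0.0 for name in resource_speeds}  # when each resource frees up
    schedule: List[Tuple[str, str, float]] = []

    # Longest tasks first: a classic ordering that tends to reduce makespan.
    for task in sorted(task_runtimes, key=task_runtimes.get, reverse=True):
        # Pick the resource with the earliest completion time for this task.
        resource = min(resource_speeds,
                       key=lambda r: free_at[r] + task_runtimes[task] / resource_speeds[r])
        finish = free_at[resource] + task_runtimes[task] / resource_speeds[resource]
        free_at[resource] = finish
        schedule.append((task, resource, finish))
    return schedule


if __name__ == "__main__":
    plan = greedy_schedule({"t1": 60, "t2": 30, "t3": 45, "t4": 10},
                           {"site-a": 1.0, "site-b": 2.0})
    for task, resource, finish in plan:
        print(f"{task} -> {resource}, finishes at t={finish:.1f}")
```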

    Limitations Of Micro And Macro Solutions To The Simulation Interoperability Challenge: An EASE Case Study

    This thesis explored the history of military simulations and linked it to the current challenges of interoperability. The research illustrated the challenge of interoperability in integrating different networks, databases, standards, and interfaces, and how it results in U.S. Army organizations constantly spending time and money to create and implement irreproducible Live, Virtual, and Constructive (LVC) integrating architectures to accomplish comparable tasks. Although the U.S. Army has made advancements in interoperability, it has struggled with this challenge since the early 1990s. These improvements have been inadequate due to the evolving and growing needs of the user, coupled with the technical complexities of interoperating legacy systems with emergent systems arising from advances in technology. To better understand the impact of the continued evolution of simulations, this paper mapped Maslow's Hierarchy of Needs to Tolk's Levels of Conceptual Interoperability Model (LCIM). This mapping illustrated a common relationship between the Hierarchy of Needs and the LCIM model: each level increases in complexity, and the preceding lower level must first be achieved before reaching the next. Understanding the continuum of complexity of interoperability, as requirements or needs, helped to determine why previous funding and technical efforts have been inadequate in mitigating the interoperability challenges within U.S. Army simulations. As the U.S. Army's simulation programs continue to evolve while the military and contractor personnel turnover rate remains near constant, a method of capturing and passing on tacit knowledge from one personnel staffing life cycle to the next must be developed in order to reproduce complex simulation events economically and quickly. This thesis explored a potential solution to this challenge, the Executable Architecture Systems Engineering (EASE) research project managed by the U.S. Army's Simulation and Training Technology Center in the Army Research Laboratory within the Research, Development and Engineering Command. However, there are two main drawbacks to EASE: it is still in the prototype stage, and it has not been fully tested and evaluated as a simulation tool within the community of practice. In order to determine whether EASE has the potential to reduce micro as well as macro interoperability challenges, an EASE experiment was conducted as part of this thesis. The following three alternative hypotheses were developed, tested, and accepted as a result of the research: Ha1 = Expert stakeholders believe the EASE prototype does have potential as a U.S. Army technical solution to help mitigate the M&S interoperability challenge. Ha2 = Expert stakeholders believe the EASE prototype does have potential as a U.S. Army managerial solution to help mitigate the M&S interoperability challenge. Ha3 = Expert stakeholders believe the EASE prototype does have potential as a U.S. Army knowledge management solution to help mitigate the M&S interoperability challenge. To conduct this experiment, eleven participants representing ten different organizations across the three M&S domains were selected to test EASE using a modified Technology Acceptance Model (TAM) approach developed by Davis. Indexes were created from the participants' responses, covering both the quality of the participants and the research questions. The Cronbach Alpha test was used to assess the reliability of the adapted TAM.
The Wilcoxon Signed Rank test provided the statistical analysis that formed the basis of the research, and determined that the EASE project has the potential to help mitigate the interoperability challenges in the U.S. Army's M&S domains.
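    For readers unfamiliar with the two statistics named above, the sketch below shows how Cronbach's alpha and the Wilcoxon signed-rank test can be computed with NumPy/SciPy. The Likert responses and the neutral-midpoint comparison are invented for illustration; they are not the thesis data or its exact test setup.

```python
"""Sketch of the two statistics named in the abstract: Cronbach's alpha for
the reliability of the adapted TAM indexes and the Wilcoxon signed-rank test
for the hypothesis tests.  The survey data below is invented purely for
illustration; it is not the thesis data."""

import numpy as np
from scipy.stats import wilcoxon


def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x questions matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_variances.sum() / total_variance)


# Hypothetical Likert responses (11 participants x 4 questions of one index).
responses = np.array([
    [4, 5, 4, 4], [5, 5, 4, 5], [3, 4, 3, 4], [4, 4, 4, 5],
    [5, 4, 5, 5], [2, 3, 3, 3], [4, 4, 5, 4], [5, 5, 5, 4],
    [3, 3, 4, 3], [4, 5, 4, 4], [4, 4, 4, 4],
])
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")

# One-sample style Wilcoxon test: do the index scores differ from the
# neutral midpoint (3) of the Likert scale?  scipy's wilcoxon tests whether
# the differences are symmetrically distributed about zero.
index_scores = responses.mean(axis=1)
stat, p_value = wilcoxon(index_scores - 3)
print(f"Wilcoxon statistic={stat:.1f}, p-value={p_value:.4f}")
```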

    Collaborative Intrusion Detection in Federated Cloud Environments using Dempster-Shafer Theory of Evidence

    Moving services to the Cloud environment is a trend that has been increasing in recent years, with a constant increase in the sophistication and complexity of such services. Today, even critical infrastructure operators are considering moving their services and data to the Cloud. As Cloud computing grows in popularity, new models are deployed to further the associated benefits. Federated Clouds are one such concept, and are an alternative for companies reluctant to move their data out of house to a Cloud Service Provider (CSP) due to security and confidentiality concerns. The lack of collaboration among different components within a Cloud federation, or among CSPs, for the detection or prevention of attacks is an issue. Because Cloud environments and Cloud federations are large scale, it is essential that any potential solution for protecting these services and data scales alongside the environment and adapts to the underlying infrastructure without issues or performance implications. This thesis presents a novel architecture for collaborative intrusion detection specifically for CSPs within a Cloud federation. Our approach offers a proactive model for Cloud intrusion detection based on the distribution of responsibilities, whereby the responsibility for managing the elements of the Cloud is distributed among several monitoring nodes and a brokering node, utilising our service-based collaborative intrusion detection, "Security as a Service", methodology. For collaborative intrusion detection, the Dempster-Shafer (D-S) theory of evidence is applied, executing as a fusion node that collects and fuses the information provided by the monitoring entities and takes the final decision regarding a possible attack. This type of detection and prevention helps increase resilience to attacks in the Cloud. The main novel contribution of this project is that it provides the means by which DDoS attacks are detected within a Cloud federation, so as to enable an early propagated response to block the attack. This inter-domain cooperation offers holistic security and adds to the defence in depth. However, while the utilisation of D-S seems promising, there is an issue regarding conflicting evidence, which is addressed with an extended two-stage D-S fusion process. The evidence from the research strongly suggests that fusion algorithms can play a key role in autonomous decision-making schemes; however, our experimentation highlights areas where improvements are needed before fully applying them to federated environments.
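    A minimal sketch of Dempster's rule of combination over the two-element frame {attack, normal} illustrates the kind of fusion such a fusion node performs. The mass assignments from the two monitoring nodes below are invented, and the sketch implements only the classical single-stage rule, not the extended two-stage process developed in the thesis.

```python
"""Minimal sketch of Dempster's rule of combination over the frame
{attack, normal}, in the spirit of the fusion node described in the
abstract.  The mass assignments are invented examples; the thesis uses an
extended two-stage fusion process to handle conflicting evidence, which
this sketch does not reproduce."""

from itertools import product
from typing import Dict, FrozenSet

# A basic probability assignment maps subsets of the frame to masses.
Frame = frozenset(["attack", "normal"])
Bpa = Dict[FrozenSet[str], float]


def combine(m1: Bpa, m2: Bpa) -> Bpa:
    """Dempster's rule: combine two bodies of evidence, renormalising by
    the conflict mass assigned to empty intersections."""
    combined: Bpa = {}
    conflict = 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}


if __name__ == "__main__":
    # Hypothetical evidence from two monitoring nodes about a possible DDoS.
    node_a = {frozenset(["attack"]): 0.7, Frame: 0.3}  # 0.3 uncommitted
    node_b = {frozenset(["attack"]): 0.6, frozenset(["normal"]): 0.1, Frame: 0.3}
    fused = combine(node_a, node_b)
    for hypothesis, mass in fused.items():
        print(sorted(hypothesis), round(mass, 3))
```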

    Hybrid Cloud Model Checking Using the Interaction Layer of HARMS for Ambient Intelligent Systems

    Soon, humans will be co-living with and taking advantage of the help of multi-agent systems in a broader way than at present. Such systems will involve machines or devices of any variety, including robots. These kinds of solutions will adapt to the special needs of each individual. However, of concern to this research effort, systems like the ones mentioned above might encounter situations that were not foreseen before execution time. It is understood that there are two possible outcomes that could materialize: either the system keeps working without corrective measures, which could lead to an entirely different end, or it completely stops working. Both results should be avoided, especially in cases where the end user depends on high-level guidance provided by the system, such as in ambient intelligence applications. This dissertation worked towards two specific goals. First, to assure that the system will always work, independently of which of the agents performs the different tasks needed to accomplish a bigger objective. Second, to provide initial steps towards autonomous survivable systems which can change their future actions in order to achieve the original final goals. Therefore, the use of the third layer of the HARMS model was proposed to ensure the indistinguishability of the actors accomplishing each task and sub-task, regardless of the intrinsic complexity of the activity. Additionally, a framework was proposed that uses model checking methodology during run-time to provide possible solutions to issues encountered at execution time, as part of the survivability feature of the system's final goals.

    Seamless connectivity: investigating implementation challenges of multi-broker MQTT platform for smart environmental monitoring

    Abstract. This thesis explores the performance and efficiency of MQTT-based Internet of Things (IoT) sensor network infrastructure for smart environments. The study focuses on the impact of network latency and broker switching in distributed multi-broker MQTT platforms. The research involves three case studies: a cloud-based multi-broker deployment, a Local Area Network (LAN)-based multi-broker deployment, and a multi-layer LAN network-based multi-broker deployment. The research is guided by three objectives: quantifying and analyzing the latency of multi-broker MQTT platforms; investigating the benefits of distributed brokers for edge users; and assessing the impact of switching latency on applications. This thesis ultimately seeks to answer three key questions related to network and switching latency, the merits of distributed brokers, and the influence of switching latency on the reliability of end-user applications.
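    A sketch of a single-broker publish-to-receive latency probe, in the spirit of the measurements described above, is shown below using the paho-mqtt client (assuming the 1.x callback API; the broker address, port and topic are placeholders). The thesis's distributed multi-broker and broker-switching setup is considerably more involved than this.

```python
"""Sketch of a publish-to-receive latency probe against a single MQTT broker.
Broker address, port and topic are placeholders.  Assumes the paho-mqtt
client library with the 1.x callback API; paho-mqtt 2.x additionally
requires a CallbackAPIVersion argument to Client()."""

import time
import paho.mqtt.client as mqtt

BROKER_HOST = "broker.example.org"   # placeholder broker address
BROKER_PORT = 1883
TOPIC = "latency/probe"
N_SAMPLES = 20

latencies = []


def on_message(client, userdata, msg):
    """Compute one round-trip sample from the timestamp in the payload."""
    sent_at = float(msg.payload.decode())
    latencies.append(time.time() - sent_at)


client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER_HOST, BROKER_PORT, keepalive=60)
client.subscribe(TOPIC)
client.loop_start()                   # handle network traffic in the background

for _ in range(N_SAMPLES):
    # The payload is simply the send timestamp; the subscriber side
    # subtracts it from the receive time to get publish-to-receive latency.
    client.publish(TOPIC, str(time.time()))
    time.sleep(0.5)

time.sleep(1.0)                       # let the last messages arrive
client.loop_stop()
client.disconnect()

if latencies:
    print(f"samples={len(latencies)} "
          f"mean={1000 * sum(latencies) / len(latencies):.1f} ms "
          f"max={1000 * max(latencies):.1f} ms")
```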

    High Energy Physics Forum for Computational Excellence: Working Group Reports (I. Applications Software II. Software Libraries and Tools III. Systems)

    Computing plays an essential role in all aspects of high energy physics. As computational technology evolves rapidly in new directions, and data throughput and volume continue to follow a steep trend-line, it is important for the HEP community to develop an effective response to a series of expected challenges. In order to help shape the desired response, the HEP Forum for Computational Excellence (HEP-FCE) initiated a roadmap planning activity with two key overlapping drivers -- 1) software effectiveness, and 2) infrastructure and expertise advancement. The HEP-FCE formed three working groups, 1) Applications Software, 2) Software Libraries and Tools, and 3) Systems (including systems software), to provide an overview of the current status of HEP computing and to present findings and opportunities for the desired HEP computational roadmap. The final versions of the reports are combined in this document and presented along with introductory material.