
    Deliverable JRA1.1: Evaluation of current network control and management planes for multi-domain network infrastructure

    This deliverable includes a compilation and evaluation of the control and management architectures and protocols currently available for a multilayer infrastructure in a multi-domain Virtual Network environment. Its scope is mainly the virtualisation of resources within a network and at processing nodes. Virtualisation of the FEDERICA infrastructure allows its available resources to be provisioned to users as FEDERICA slices. A slice is seen by the user as a real physical network under his or her domain; however, it maps to a logical partition (a virtual instance) of the physical FEDERICA resources. A slice is built to exhibit, to the highest degree possible, all the principles applicable to a physical network (isolation, reproducibility, manageability, ...).

    Currently there are no standard definitions for network virtualisation or its associated architectures. This deliverable therefore proposes the Virtual Network layer architecture and evaluates a set of management and control planes that can be used for partitioning and virtualising the FEDERICA network resources. The evaluation was performed against an initial set of FEDERICA requirements; possible extensions of the selected tools will be evaluated in future deliverables.

    The studies described in this deliverable define the virtual architecture of the FEDERICA infrastructure. During this activity, the need was recognised to establish a new set of basic definitions (a taxonomy) for the building blocks that compose the so-called slice, i.e. the virtual network instantiation (virtual with regard to the abstracted view of the building blocks of the FEDERICA infrastructure) and its architectural plane representation. These definitions will be established as a common nomenclature for the FEDERICA project.

    Another important aspect when defining a new architecture is user requirements: it is crucial that the resulting architecture fits the demands users may have. Since this deliverable was produced in parallel with the user contact process carried out by the project activities on Use Case definitions, JRA1 has proposed a set of basic Use Cases as a starting point for its internal studies.

    When researchers want to experiment with their developments, they need not only network resources on their slices but also a slice of the processing resources. These processing slice resources are virtual machine instances that users can make behave as software routers or end nodes, onto which they can load the software protocols or applications they have produced and want to assess in a realistic environment. Hence, this deliverable also studies the APIs of several virtual machine management software products to identify which best suits FEDERICA's needs.
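    The slice taxonomy sketched above can be made concrete with a small data model. The following Python sketch is purely illustrative (the class and field names are assumptions, not FEDERICA's nomenclature); it shows a slice as a logical partition that maps virtual resources onto physical ones while enforcing an isolation bound.

        # Illustrative sketch only -- names are hypothetical, not FEDERICA code.
        from dataclasses import dataclass, field
        from typing import List, Optional

        @dataclass
        class PhysicalResource:
            name: str
            kind: str            # e.g. "switch-port", "vm-host", "link"
            capacity: float      # remaining capacity, in abstract units

        @dataclass
        class VirtualResource:
            name: str
            demand: float
            mapped_to: Optional[PhysicalResource] = None

        @dataclass
        class Slice:
            """A virtual network instantiation: isolated, reproducible, manageable."""
            owner: str
            resources: List[VirtualResource] = field(default_factory=list)

            def map_resource(self, virt: VirtualResource, phys: PhysicalResource) -> None:
                # Isolation: a slice may only consume a bounded share of the substrate.
                if virt.demand > phys.capacity:
                    raise ValueError(f"{phys.name} cannot host {virt.name}")
                phys.capacity -= virt.demand
                virt.mapped_to = phys
                self.resources.append(virt)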

    The role of the host in a cooperating mainframe and workstation environment, volumes 1 and 2

    In recent years, advances in computer systems have prompted a move from centralized computing, based on timesharing a large mainframe computer, to distributed computing, based on a connected set of engineering workstations. A major factor in this shift is the increased performance and lower cost of engineering workstations. The move from centralized to distributed computing has raised challenges associated with the residency of application programs within the system. In a combined system of multiple engineering workstations attached to a mainframe host, the question arises of how a system designer should assign applications between the larger mainframe host and the smaller, yet powerful, workstations. The concepts related to real-time data processing are analyzed, and systems are presented that use a host mainframe and a number of engineering workstations interconnected by a local area network. In most cases, distributed systems can be classified as having a single function or multiple functions and as executing programs in real time or non-real time. In a system of multiple computers, the degree of autonomy of the computers is important: a system with one master control computer generally differs in reliability, performance, and complexity from a system in which all computers share control. This research is concerned with generating general criteria for software residency decisions (host or workstation) for a diverse yet coupled group of users (the clustered workstations) which may need a shared resource (the mainframe) to perform their functions.
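    As a toy illustration of the residency question, the Python sketch below scores an application profile against a few plausible criteria (shared-data access, batch CPU load, real-time response, local graphics). The criteria and weights are assumptions for illustration only, not the report's actual decision rules.

        # Toy example -- criteria and weights are hypothetical, not from the report.
        def residency(app: dict) -> str:
            """Return 'mainframe' or 'workstation' for an application profile."""
            score = 0
            if app.get("shared_data"):   # needs the shared resource
                score += 2
            if app.get("cpu_bound"):     # heavy batch computation
                score += 1
            if app.get("real_time"):     # interactive, latency-sensitive work
                score -= 2               # favours the local workstation
            if app.get("graphics"):      # local display work
                score -= 1
            return "mainframe" if score > 0 else "workstation"

        # A real-time visualization tool stays local; shared batch work goes to the host.
        print(residency({"real_time": True, "graphics": True}))     # workstation
        print(residency({"shared_data": True, "cpu_bound": True}))  # mainframe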

    CERN openlab Whitepaper on Future IT Challenges in Scientific Research

    This whitepaper describes the major IT challenges in scientific research at CERN and at several other European and international research laboratories and projects. Each challenge is illustrated by a set of concrete use cases drawn from the requirements of large-scale scientific programmes. The paper is based on contributions from many researchers and IT experts at the participating laboratories, as well as input from the existing CERN openlab industrial sponsors. The views expressed in this document are those of the individual contributors and do not necessarily reflect the views of their organisations and/or affiliates.

    Scaling Virtualized Smartphone Images in the Cloud

    One goal of this Bachelor's thesis was to deploy the Android-x86 smartphone platform in a cloud environment and to determine whether the chosen instance type is sufficient for running a virtualized smartphone platform and how much load it can bear. The work used Amazon's M1 Small instance type, which was sufficient to deploy the virtualized Android platform but performed worse than the mobile phone on which the tests were run. The M1 Medium instance type was more suitable and showed better results than the phone. Load tests were carried out with the Tsung tool to see how many concurrent users an instance can handle; for these tests we installed a Tomcat server on the Dalvik instance. After the single-instance tests, we added Elastic Load Balancing and Amazon's Auto Scaling tool: the former distributed load across the instances, and the latter applied horizontal scaling to our Android-x86 instances. Whenever CPU utilization stayed above 60% for more than one minute, an instance identical to the previous one was launched and subsequent load was directed to it, repeating as needed up to a maximum of ten instances. Our implementation had setbacks: the Elastic Load Balancer timed out after 60 seconds, so we did not receive responses to all of the requests sent. Writing and compiling the file sent to the server were expensive operations and did not always finish within 60 seconds. The tests run through the Load Balancer therefore did not yield enough data to conclude whether the virtualized Android smartphone platform scales well or poorly.

    In this thesis we deployed a smartphone image in an Amazon EC2 instance and ran stress tests on it to find out how many users one instance can bear and how scalable it is. We measured how long a method takes to run on a physical Android device and in a cloud instance. We deployed CyanogenMod and Dalvik on a single instance and used Tsung for stress testing. For those tests we also set up a Tomcat server on the Dalvik instance that would accept an incoming file, compile it with Java, and wrap the resulting class file into dex, a Dalvik executable, which was then run with Dalvik. Three instances formed a Tsung cluster that sent load to a Dalvik Virtual Machine instance. For scaling we used the Amazon Auto Scaling tool and an Elastic Load Balancer that divided incoming load between the instances.
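    The scaling rule described above (clone an identical instance when CPU stays over 60% for a minute, up to ten instances) can be expressed with today's AWS SDK. The sketch below uses Python and boto3, which postdate the thesis; all resource names are hypothetical, and the launch configuration is assumed to already exist.

        # Sketch under stated assumptions: boto3 installed, credentials configured,
        # and a launch configuration "android-x86-launch" already created.
        import boto3

        autoscaling = boto3.client("autoscaling", region_name="us-east-1")
        cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

        autoscaling.create_auto_scaling_group(
            AutoScalingGroupName="android-x86-group",      # hypothetical name
            LaunchConfigurationName="android-x86-launch",  # assumed to exist
            MinSize=1,
            MaxSize=10,                                    # the ten-instance ceiling
            AvailabilityZones=["us-east-1a"],
        )

        policy = autoscaling.put_scaling_policy(
            AutoScalingGroupName="android-x86-group",
            PolicyName="scale-out-on-cpu",
            AdjustmentType="ChangeInCapacity",
            ScalingAdjustment=1,                           # add one identical instance
        )

        cloudwatch.put_metric_alarm(
            AlarmName="android-cpu-high",
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Statistic="Average",
            Period=60,                                     # one-minute window
            EvaluationPeriods=1,                           # sustained for one period
            Threshold=60.0,                                # the 60% CPU trigger
            ComparisonOperator="GreaterThanThreshold",
            Dimensions=[{"Name": "AutoScalingGroupName",
                         "Value": "android-x86-group"}],
            AlarmActions=[policy["PolicyARN"]],
        )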

    Functional requirements document for the Earth Observing System Data and Information System (EOSDIS) Scientific Computing Facilities (SCF) of the NASA/MSFC Earth Science and Applications Division, 1992

    Five scientists at MSFC/ESAD have EOS SCF investigator status. Each SCF has unique tasks which require the establishment of a computing facility dedicated to accomplishing those tasks. An SCF Working Group was established at ESAD with the charter of defining the computing requirements of the individual SCFs and recommending options for meeting these requirements. The primary goal of the working group was to determine which computing needs can be satisfied using either shared resources or separate but compatible resources, and which needs require unique individual resources. The requirements investigated included CPU-intensive vector and scalar processing, visualization, data storage, connectivity, and I/O peripherals. A review of computer industry directions and a market survey of computing hardware provided information regarding important industry standards and candidate computing platforms. It was determined that the total SCF computing requirements might be most effectively met using a hierarchy consisting of shared and individual resources. This hierarchy is composed of five major system types: (1) a supercomputer-class vector processor; (2) a high-end scalar multiprocessor workstation; (3) a file server; (4) a few medium- to high-end visualization workstations; and (5) several low- to medium-range personal graphics workstations. Specific recommendations for meeting the needs of each of these types are presented.