
    Virtual network security: threats, countermeasures, and challenges

    Network virtualization has become increasingly prominent in recent years. It enables the creation of network infrastructures that are specifically tailored to the needs of distinct network applications and supports the instantiation of favorable environments for the development and evaluation of new architectures and protocols. Despite the wide applicability of network virtualization, the shared use of routing devices and communication channels leads to a series of security-related concerns. It is necessary to provide protection to virtual network infrastructures in order to enable their use in real, large scale environments. In this paper, we present an overview of the state of the art concerning virtual network security. We discuss the main challenges related to this kind of environment, some of the major threats, as well as solutions proposed in the literature that aim to deal with different security aspects.

    Time delay and its effect in a virtual lab created using cloud computing

    The emergence of Cloud Computing, as a model of virtualized physical resources and virtualized infrastructure, offers the opportunity to outsource the implementation of a Virtual Lab Manager. Virtual Lab Management has come to be considered the Holy Grail in the deployment and administration of labs created in a virtual environment, and with the advent of Cloud Computing new opportunities are emerging that promise to cover much of the future of Virtual Labs. Building networking and information labs with real equipment does not make sense from a cost-benefit standpoint, as hardware becomes obsolete quickly; replacing real labs with labs in a virtual environment is therefore now a must for teaching information, security and networking classes. Choosing an adequate Virtual Lab environment creates an academic setting in which teachers can serve as effective guides for students, who gain considerable freedom and first-hand experience in the subject under study. A Virtual Lab Manager in a Cloud Computing environment reduces cost even further, but raises doubts about the time delays inherent in such a technology. Having chosen the manager created by VMLogix for Amazon AWS EC2, this paper sets out to answer one question: since Virtual Labs are a real-time application, how are they affected by time delays and bandwidth when accessed from remote places? The same criteria used in networks for video on demand, voice-over-IP or online business systems are applied in the presented work, although the high interactivity of a Virtual Lab of any kind
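
    As a rough illustration of the kind of delay assessment this abstract describes (and not code from the paper itself), the sketch below estimates round-trip time to a remote lab host by timing TCP connections and classifies it against interactivity thresholds of the sort used for voice-over-IP (roughly 150 ms one way, 300 ms round trip). The endpoint, port and thresholds are placeholder assumptions.

        # delay_probe.py - rough sketch: estimate RTT to a remote virtual-lab host
        # and classify it against thresholds typically quoted for interactive traffic.
        # Host, port and thresholds are illustrative assumptions, not from the paper.
        import socket
        import time

        def measure_rtt(host: str, port: int = 443, samples: int = 5) -> float:
            """Average TCP connect time in milliseconds over several samples."""
            total = 0.0
            for _ in range(samples):
                start = time.perf_counter()
                with socket.create_connection((host, port), timeout=5):
                    pass
                total += (time.perf_counter() - start) * 1000.0
            return total / samples

        if __name__ == "__main__":
            rtt = measure_rtt("ec2.amazonaws.com")   # placeholder endpoint
            if rtt < 300:        # ~150 ms one-way, a common interactivity target
                verdict = "acceptable for interactive lab sessions"
            elif rtt < 600:
                verdict = "noticeable lag, still usable"
            else:
                verdict = "likely too slow for interactive use"
            print(f"average RTT {rtt:.1f} ms: {verdict}")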

    Virtual Organization Clusters: Self-Provisioned Clouds on the Grid

    Virtual Organization Clusters (VOCs) provide a novel architecture for overlaying dedicated cluster systems on existing grid infrastructures. VOCs provide customized, homogeneous execution environments on a per-Virtual Organization basis, without the cost of physical cluster construction or the overhead of per-job containers. Administrative access and overlay network capabilities are granted to Virtual Organizations (VOs) that choose to implement VOC technology, while the system remains completely transparent to end users and non-participating VOs. Unlike alternative systems that require explicit leases, VOCs are autonomically self-provisioned according to configurable usage policies. As a grid computing architecture, VOCs are designed to be technology agnostic and are implementable by any combination of software and services that follows the Virtual Organization Cluster Model. As demonstrated through simulation testing and evaluation of an implemented prototype, VOCs are a viable mechanism for increasing end-user job compatibility on grid sites. On existing production grids, where jobs are frequently submitted to a small subset of sites and thus experience high queuing delays relative to average job length, the grid-wide addition of VOCs does not adversely affect mean job sojourn time. By load-balancing jobs among grid sites, VOCs can reduce the total amount of queuing on a grid to a level sufficient to counteract the performance overhead introduced by virtualization
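
    The autonomic, policy-driven provisioning described in this abstract can be pictured with a small toy sketch (not the VOC implementation): a controller grows or shrinks a per-VO pool of virtual machines so that capacity tracks the job queue, subject to a configurable policy. The class names, thresholds and the stand-in "boot a VM" step are all invented for illustration.

        # voc_policy.py - toy sketch of policy-driven self-provisioning in the spirit
        # of a Virtual Organization Cluster.  Names and numbers are illustrative.
        from dataclasses import dataclass, field

        @dataclass
        class VOCPolicy:
            jobs_per_vm: int = 4      # target queued jobs per running VM
            min_vms: int = 0
            max_vms: int = 32

        @dataclass
        class VOCluster:
            policy: VOCPolicy
            vms: list = field(default_factory=list)

            def reconcile(self, queued_jobs: int) -> None:
                """Grow or shrink the VM pool so capacity tracks the job queue."""
                wanted = max(self.policy.min_vms,
                             min(self.policy.max_vms,
                                 -(-queued_jobs // self.policy.jobs_per_vm)))  # ceiling division
                while len(self.vms) < wanted:
                    self.vms.append(f"vm-{len(self.vms)}")   # stand-in for booting a VM
                while len(self.vms) > wanted:
                    self.vms.pop()                           # stand-in for shutting one down

        if __name__ == "__main__":
            cluster = VOCluster(VOCPolicy())
            for queue in (0, 10, 50, 3):
                cluster.reconcile(queue)
                print(f"{queue:3d} queued jobs -> {len(cluster.vms)} VMs")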

    Practical Implementation of the Virtual Organization Cluster Model

    Virtualization has great potential in the realm of scientific computing because of its inherent advantages with regard to environment customization and isolation. Virtualization technology is not without its downsides, however, most notably increased computational overhead. This thesis introduces the operating mechanisms of grid technologies in general, and the Open Science Grid in particular, including a discussion of general organization and specific software implementation. A model for utilizing virtualization resources with separate administrative domains for the virtual machines (VMs) and the physical resources is then presented. Two well-known virtual machine monitors, Xen and the Kernel-based Virtual Machine (KVM), are introduced and a performance analysis is conducted. The High-Performance Computing Challenge (HPCC) benchmark suite is used in conjunction with independent High-Performance Linpack (HPL) trials in order to analyze specific performance issues. Xen was found to introduce much lower performance overhead than KVM; however, KVM retains advantages with regard to ease of deployment, both of the VMM itself and of the VM images. KVM's snapshot mode is of special interest, as it allows multiple VMs to be instantiated from a single image located on a network store. With virtualization overhead shown to be acceptable for high-throughput computing tasks, the Virtual Organization Cluster (VOC) Model was implemented as a prototype. Dynamic scaling and multi-site scheduling extensions were also successfully implemented using this prototype. It is also shown that traditional overlay networks have scaling issues and that a new approach to wide-area scheduling is needed. The use of XMPP messaging and the Google App Engine service to implement a virtual machine monitoring system is presented, along with detailed discussions of the relevant sections of the XMPP protocol and libraries. XMPP is found to be a good choice for sending status information due to its inherent advantages in a bandwidth-limited NAT environment. Thus, it is concluded that the VOC Model is a practical way to implement virtualization of high-throughput computing tasks. Smaller VOCs may take advantage of traditional overlay networks, whereas larger VOCs need an alternative approach to scheduling.
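
    The snapshot mode highlighted in this abstract can be exercised roughly as in the sketch below (not taken from the thesis): with QEMU/KVM's -snapshot option, guest writes go to temporary files, so several VMs can boot from one read-only image held on a network store. The image path, memory and CPU sizes are placeholders.

        # kvm_snapshot_launch.py - sketch of booting several KVM guests from a single
        # shared image using QEMU's -snapshot mode (writes are kept in temporary
        # files, so the base image on the network store stays untouched).
        # Image path and resource sizes below are placeholders.
        import subprocess

        BASE_IMAGE = "/nfs/images/compute-node.qcow2"   # shared, read-only base image

        def launch_snapshot_vm(index: int) -> subprocess.Popen:
            cmd = [
                "qemu-system-x86_64",
                "-enable-kvm",                 # hardware virtualization via KVM
                "-snapshot",                   # discard writes instead of modifying the image
                "-m", "2048", "-smp", "2",
                "-drive", f"file={BASE_IMAGE},format=qcow2",
                "-display", "none",
                "-name", f"voc-node-{index}",
            ]
            return subprocess.Popen(cmd)

        if __name__ == "__main__":
            guests = [launch_snapshot_vm(i) for i in range(4)]
            print(f"started {len(guests)} snapshot-mode guests from {BASE_IMAGE}")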

    LHCb distributed data analysis on the computing grid

    LHCb is one of the four Large Hadron Collider (LHC) experiments based at CERN, the European Organisation for Nuclear Research. The LHC experiments will start taking an unprecedented amount of data when they come online in 2007. Since no single institute has the compute resources to handle this data, resources must be pooled to form the Grid. Where the Internet has made it possible to share information stored on computers across the world, Grid computing aims to provide access to computing power and storage capacity on geographically distributed systems. LHCb software applications must work seamlessly on the Grid allowing users to efficiently access distributed compute resources. It is essential to the success of the LHCb experiment that physicists can access data from the detector, stored in many heterogeneous systems, to perform distributed data analysis. This thesis describes the work performed to enable distributed data analysis for the LHCb experiment on the LHC Computing Grid

    Measuring the Semantic Integrity of a Process Self

    The focus of the thesis is the definition of a framework to protect a process from attacks against the process self, i.e. attacks that alter the expected behavior of the process, by integrating static analysis and run-time monitoring. The static analysis of the program returns a description of the process self that consists of a context-free grammar, which defines the legal system call traces, and a set of invariants on process variables that hold when a system call is issued. Run-time monitoring assures the semantic integrity of the process by checking that its behavior is coherent with the process self returned by the static analysis. The proposed framework also covers kernel integrity, protecting the process from kernel-level attacks. The implementation of the run-time monitoring is based upon introspection, a technique that analyzes the state of a computer to rebuild and check the consistency of kernel or user-level data structures. The ability to observe the run-time values of variables reduces the complexity of the static analysis and increases the amount of information that can be extracted about the run-time behavior of the process. To achieve transparency of the controls for the process while avoiding the introduction of special-purpose hardware units that access the memory, the architecture of the run-time monitoring adopts virtualization technology and introduces two virtual machines, the monitored and the introspection virtual machines. This approach increases the overall robustness because a distinct virtual machine, the introspection virtual machine, applies introspection in a transparent way both to verify the kernel integrity and to retrieve the status of the process to check the process self. After presenting the framework and its implementation, the thesis discusses some of its applications to increase the security of a computer network. The first application of the proposed framework is the remote attestation of the semantic integrity of a process. Then, the thesis describes a set of extensions to the framework to protect a process from physical attacks by running an obfuscated version of the process code. Finally, the thesis generalizes the framework to support the efficient sharing of an information infrastructure among users and applications with distinct security and reliability requirements by introducing highly parallel overlays.
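
    A greatly simplified picture of the run-time check described in this abstract is sketched below (it is not the thesis' framework): a small finite automaton stands in for the context-free grammar of legal system call traces, and a trace is rejected as soon as it leaves the model. The states, transitions and example traces are invented for illustration.

        # self_monitor.py - toy illustration of checking a process's system call trace
        # against a behavioural model.  A real implementation would use the
        # context-free grammar and invariants produced by static analysis; here a
        # small finite automaton and hand-written traces stand in for both.
        ALLOWED = {
            ("start", "open"):    "reading",
            ("reading", "read"):  "reading",
            ("reading", "close"): "start",
            ("start", "exit"):    "done",
        }

        def trace_is_coherent(trace: list[str]) -> bool:
            state = "start"
            for syscall in trace:
                nxt = ALLOWED.get((state, syscall))
                if nxt is None:                      # behaviour outside the model
                    print(f"violation: '{syscall}' not allowed in state '{state}'")
                    return False
                state = nxt
            return state == "done"

        if __name__ == "__main__":
            good = ["open", "read", "read", "close", "exit"]
            bad  = ["open", "read", "execve", "exit"]      # execve is not in the model
            print("good trace accepted:", trace_is_coherent(good))
            print("bad trace accepted: ", trace_is_coherent(bad))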

    Virtual environment manager for executing high-performance tasks

    As technology evolves, computational power increases, and problems that were unsolvable in the past become tractable with current resources. Most applications that tackle such problems are complex: to achieve high performance they must use as many resources as possible, which imposes an inherently distributed architecture. Following the trend in the research community, this work proposes an architecture for grid environments based on resource virtualization that enables efficient management of these resources. The experiments carried out confirm the viability of this architecture and the improvement in resource management that the use of virtual machines provides.

    Service-Oriented Ad Hoc Grid Computing

    The subject of this thesis is the design and implementation of an ad hoc Grid infrastructure. The vision of an ad hoc Grid further evolves conventional service-oriented Grid systems into a more robust, more flexible and more usable environment that is still standards-compliant and interoperable with other Grid systems. Much work in current Grid middleware systems focuses on providing transparent access to high-performance computing (HPC) resources (e.g. clusters) in virtual organizations spanning multiple institutions. The ad hoc Grid vision presented in this thesis goes beyond this view by combining classical Grid components with more flexible components and usage models, making it possible to form an environment that combines dedicated HPC resources with a large number of personal computers forming a "Desktop Grid". Three examples from medical research, media research and mechanical engineering are presented as application scenarios for a service-oriented ad hoc Grid infrastructure. These sample applications are also used to derive requirements for the runtime environment as well as for development tools for such an ad hoc Grid environment. These requirements form the basis for the design and implementation of the Marburg ad hoc Grid Environment (MAGE) and the Grid Development Tools for Eclipse (GDT). MAGE is an implementation of a WSRF-compliant Grid middleware that satisfies the criteria for an ad hoc Grid middleware presented in the introduction to this thesis. GDT extends the popular Eclipse integrated development environment with components that support application development both for traditional service-oriented Grid middleware systems and for ad hoc Grid infrastructures such as MAGE. These development tools represent the first fully model-driven approach to Grid service development integrated with infrastructure management components in service-oriented Grid computing. The thesis concludes with a quantitative discussion of the performance overhead imposed by the presented extensions to a service-oriented Grid middleware, a discussion of the qualitative improvements gained by the overall solution, and an outlook on future developments and areas for further research. One of these qualitative improvements is "hot deployment": the ability to install and remove Grid services on a running node without interrupting other active services on the same node. Hot deployment has been introduced as a novelty in service-oriented Grid systems as a result of the research conducted for this thesis. It extends service-oriented Grid computing with a new paradigm, making the installation of individual application components a functional aspect of the application. This thesis further explores the idea of using peer-to-peer (P2P) networking for Grid computing by combining a general-purpose P2P framework with a standards-compliant Grid middleware. In previous work the application of P2P systems has been limited to replica location and the use of P2P index structures for discovery purposes. The work presented in this thesis also uses P2P networking to realize seamless communication across network barriers. Even though the web service standards were designed for the Internet, the two-way communication requirement introduced by the WSRF standards, and particularly the notification pattern, is not well supported by them. This deficiency can be addressed by mechanisms that are part of such general-purpose P2P communication frameworks.
Existing security infrastructures for Grid systems focus on protecting data during transmission and on access control to individual resources or to the overall Grid environment. This thesis focuses on security issues within a single node of a dynamically changing service-oriented Grid environment. To counter the security threats arising from the new capabilities of an ad hoc Grid, a number of novel isolation solutions are presented. These solutions address security and isolation at a fine-grained level, providing a range of applicable basic mechanisms ranging from lightweight system call interposition to complete para-virtualization of the operating system.
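
    The "hot deployment" capability described in the abstract above, installing and removing services on a running node without disturbing its other services, can be pictured with the toy sketch below. The registry class and the sample services are illustrative stand-ins and not MAGE's actual API.

        # hot_deploy.py - rough sketch of hot deployment: services are installed and
        # removed while the node keeps serving requests for everything else.
        # The registry and the sample services are illustrative, not MAGE's real API.
        from typing import Callable, Dict

        class ServiceNode:
            def __init__(self) -> None:
                self._services: Dict[str, Callable[[str], str]] = {}

            def deploy(self, name: str, handler: Callable[[str], str]) -> None:
                self._services[name] = handler          # install without a restart

            def undeploy(self, name: str) -> None:
                self._services.pop(name, None)          # remove without a restart

            def invoke(self, name: str, request: str) -> str:
                handler = self._services.get(name)
                if handler is None:
                    return f"error: no service '{name}' deployed"
                return handler(request)

        if __name__ == "__main__":
            node = ServiceNode()
            node.deploy("echo", lambda req: f"echo: {req}")
            print(node.invoke("echo", "hello"))          # served
            node.deploy("render", lambda req: f"rendered <{req}>")
            node.undeploy("echo")                        # removed while 'render' keeps running
            print(node.invoke("render", "frame-1"))
            print(node.invoke("echo", "hello"))          # now reports the missing service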

    Network environment for testing peer-to-peer streaming applications

    Peer-to-Peer (P2P) streaming applications are an emerging trend in content distribution. A reliable network environment was needed to test their capabilities and performance limits, and building such an environment is the focus of this thesis. In addition, some experimental tests in the environment were performed with an application implemented in the Department of Communications Engineering (DCE) at Tampere University of Technology. For practical reasons, the testing environment was assembled in a teaching laboratory at DCE premises. The environment was built using a centralized architecture, in which a Linux emulation node, the WANemulator, generates realistic packet losses, delays, and jitter in the network. After an extensive literature survey, NetEm, an extension to Iproute2's Tc utility, was chosen to handle the network link emulation at the WANemulator. The peers run inside VirtualBox images, which are used on the Linux computers to keep the laboratory suitable for teaching purposes. In addition to the network emulation, Linux traffic control mechanisms were used both on the WANemulator and in VirtualBox's virtual machines to limit the traffic rates of the peers. Used together, emulation and rate limitation resemble the statistical behaviour of the Internet quite closely. Virtualization overhead limited the maximum number of Virtual Machines (VMs) at each laboratory computer to two, and a peculiar feature in VirtualBox's bridge implementation reduced the network capabilities of the VMs. However, the bottleneck in the environment is the centralized architecture, in which all of the traffic is routed through the WANemulator. The environment was shown to be reliable with the chosen streamed content and 160 peers, but by tuning the parameters of the WANemulator larger overlays might be achievable. Distributed emulation should also be possible with the environment, but it was not tested. The results of the experimental tests performed with the P2P streaming application showed it to be functional under mobile network conditions. The designed network environment works reliably, enables reasonable scalability and provides a better means of emulating the networking characteristics of the Internet than an ordinary local area network environment.
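
    The NetEm-based link emulation and rate limiting described in this abstract come down to a handful of tc invocations; a rough sketch is given below (not the thesis' actual configuration). The interface name and the delay, jitter, loss and rate figures are placeholders, and the commands require root privileges on a Linux host.

        # wan_emulator.py - sketch of shaping a Linux interface the way a WANemulator
        # node does: NetEm adds delay, jitter and random loss, and a token-bucket
        # filter caps the rate.  Interface name and numbers are placeholders.
        import subprocess

        IFACE = "eth1"   # placeholder interface towards the peers

        def run(cmd: str) -> None:
            print("+", cmd)
            subprocess.run(cmd.split(), check=True)

        def emulate_wan(delay_ms: int = 100, jitter_ms: int = 20,
                        loss_pct: float = 1.0, rate: str = "2mbit") -> None:
            # Root qdisc: NetEm for delay, jitter and random packet loss.
            run(f"tc qdisc add dev {IFACE} root handle 1: "
                f"netem delay {delay_ms}ms {jitter_ms}ms loss {loss_pct}%")
            # Child qdisc: token bucket filter to cap the peer's bandwidth.
            run(f"tc qdisc add dev {IFACE} parent 1: handle 10: "
                f"tbf rate {rate} burst 32kbit latency 400ms")

        def reset() -> None:
            subprocess.run(f"tc qdisc del dev {IFACE} root".split(), check=False)

        if __name__ == "__main__":
            reset()
            emulate_wan()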