    An Architecture Model for a Distributed Virtualization System

    Virtualization technologies are massively adopted to cover requirements in which Operating Systems (OS) have shown weakness, such as fault and security isolation. They also add features such as resource partitioning, server consolidation, legacy application support, and management tools, which are attractive to Cloud service providers. Hardware virtualization, paravirtualization, and OS-level virtualization are the most widely used technologies for these tasks, although each of them offers different levels of server consolidation, performance, scalability, high-availability, and isolation. The term “Virtual Machine” (VM) is used in hardware virtualization and paravirtualization to describe an isolated execution environment for an OS and its applications; Containers, Jails, and Zones are the names used in OS-level virtualization to describe the environments for application confinement. Regardless of the virtualization abstraction, its computing power and resource usage are limited to the physical machine where it runs. The proposed virtualization architecture model removes this limitation, distributing processes, services, and resources to provide distributed virtual environments based on OS factoring and OS containers. The outcome is a Distributed Virtualization System (DVS) that allows running several distributed Virtual Operating Systems (VOS) on the same cluster. A DVS also meets the requirements for delivering high-performance cloud services with provider-class features such as high-availability, replication, elasticity, load balancing, resource management, and process migration. Furthermore, a DVS is able to run several instances of different guest VOS concurrently, allocating a subset of nodes to each instance (resource aggregation) and sharing nodes between them (resource partitioning). Each VOS runs isolated within a Distributed Container (DC), which may span multiple nodes of the DVS cluster. The proposed architecture model keeps the appreciated features of current virtualization technologies, such as confinement, consolidation, and security, together with the benefits of a Distributed Operating System (DOS), such as transparency, greater performance, high-availability, elasticity, and scalability. This document is a review of a thesis published in SEDICI (see related document). Facultad de Informática
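
    The abstract describes Distributed Containers (DCs) that may span several cluster nodes, with node subsets allocated to each VOS instance (resource aggregation) and nodes possibly shared between instances (resource partitioning). As a minimal illustration only, the following C sketch models those allocation concepts with hypothetical types (dc_t, a node bitmap); it is not taken from the DVS implementation.

        #include <stdint.h>
        #include <stdio.h>

        #define MAX_NODES 64          /* assumed cluster size for the sketch */

        typedef uint64_t node_mask_t; /* one bit per cluster node */

        /* Hypothetical descriptor of a Distributed Container (DC):
         * a VOS runs confined inside it, possibly across several nodes. */
        typedef struct {
            int         dc_id;        /* container identifier       */
            node_mask_t nodes;        /* nodes allocated to this DC */
        } dc_t;

        /* Resource aggregation: allocate a subset of nodes to one DC. */
        static void dc_allocate(dc_t *dc, int id, node_mask_t nodes) {
            dc->dc_id = id;
            dc->nodes = nodes;
        }

        /* Resource partitioning: a node may be shared by two DCs. */
        static int node_is_shared(const dc_t *a, const dc_t *b, int node) {
            node_mask_t bit = (node_mask_t)1 << node;
            return (a->nodes & bit) && (b->nodes & bit) ? 1 : 0;
        }

        int main(void) {
            dc_t dc0, dc1;
            dc_allocate(&dc0, 0, 0x0F);   /* VOS 0 spans nodes 0-3 */
            dc_allocate(&dc1, 1, 0x0C);   /* VOS 1 spans nodes 2-3 */
            printf("node 2 shared: %d\n", node_is_shared(&dc0, &dc1, 2));
            return 0;
        }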

    MINIX4RT: real-time interprocess communications facilities

    MINIX4RT is an extension of the well-known MINIX Operating System that adds Hard Real-Time services in a new microkernel while keeping backward compatibility with standard MINIX versions. Interprocess Communication provides a mechanism for making an Operating System extensible, but it must be free of Priority Inversion to be used in Real-Time applications. As the MINIX Interprocess Communication primitives do not have this property, new primitives were added to the Real-Time microkernel. This article describes the Real-Time Interprocess Communication facilities available in MINIX4RT: their design, implementation, performance tests, and results. I Workshop de Arquitecturas, Redes y Sistemas Operativos (WARSO). Red de Universidades con Carreras en Informática (RedUNCI)
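
    The abstract states that the new IPC primitives must be free of Priority Inversion; the usual way to achieve this is a priority inheritance protocol, in which a receiver temporarily inherits the priority of the highest-priority sender blocked on it. The C fragment below is only a schematic sketch of that idea with invented structures (rt_proc_t); it does not reproduce the actual MINIX4RT kernel code.

        /* Hypothetical process descriptor for the sketch. */
        typedef struct rt_proc {
            int base_prio;                /* priority assigned at creation          */
            int eff_prio;                 /* effective (possibly inherited) priority */
            struct rt_proc *waiting_for;  /* receiver this sender is blocked on      */
        } rt_proc_t;

        /* Basic priority inheritance: when a sender blocks on a receiver,
         * the receiver inherits the sender's priority if it is higher, so
         * it cannot be preempted by medium-priority processes.
         * Assumption: a larger number means a higher priority. */
        static void rt_block_on(rt_proc_t *sender, rt_proc_t *receiver) {
            sender->waiting_for = receiver;
            if (sender->eff_prio > receiver->eff_prio)
                receiver->eff_prio = sender->eff_prio;
        }

        /* When the receiver replies, it returns to its base priority
         * (a full implementation would recompute over remaining waiters). */
        static void rt_unblock(rt_proc_t *sender, rt_proc_t *receiver) {
            sender->waiting_for = (rt_proc_t *)0;
            receiver->eff_prio = receiver->base_prio;
        }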

    Enhancing MINIX 3 Input/Output performance using a virtual machine approach

    MINIX 3 is an open-source operating system designed to be highly reliable, flexible, and secure. The kernel is extremely small, and user processes, specialized servers, and device drivers run as isolated user-mode processes. These features, the tiny amount of kernel code, and other aspects greatly enhance system reliability. The drawbacks of running device drivers in user mode are the performance penalties on input/output port access, kernel data structure access, indirect interrupt management, memory copy operations, etc. As MINIX 3 is based on the message transfer paradigm, device drivers must request those operations from the System Task (a special kernel representative process), sending request messages and waiting for reply messages, which increases system overhead. This article proposes a direct call mechanism using a Virtual Machine (VM) approach that preserves system reliability by keeping device drivers in user mode while avoiding the message transfer, queuing, de-queuing, and scheduling overhead. Presented at the V Workshop Arquitectura, Redes y Sistemas Operativos (WARSO). Red de Universidades con Carreras en Informática (RedUNCI)
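
    As a rough illustration of the overhead the article targets, the sketch below contrasts a message-based kernel-call path (build a request, block until the System Task replies) with a hypothetical direct-call entry point. All names (msg_t, sendrec, direct_call) are placeholders for the sketch, not the MINIX 3 API or the proposed mechanism.

        /* Placeholder message layout, loosely modelled on MINIX-style IPC. */
        typedef struct {
            int m_type;     /* requested kernel operation, e.g. port output */
            int m_port;     /* I/O port number                              */
            int m_value;    /* value to write                               */
        } msg_t;

        /* Path 1: message-based request to the System Task.  The driver
         * builds a message and is descheduled while the request is queued,
         * handled, and answered: several context switches per operation. */
        int drv_out_msg(int port, int value,
                        int (*sendrec)(int dst, msg_t *m)) {
            msg_t m = { 1, port, value };
            return sendrec(0 /* SYSTASK endpoint */, &m);  /* blocks until reply */
        }

        /* Path 2: hypothetical direct call.  The driver stays in user mode
         * but enters a kernel-provided entry point directly, avoiding the
         * queue/dequeue/schedule cycle of the message path. */
        int drv_out_direct(int port, int value,
                           int (*direct_call)(int op, int port, int value)) {
            return direct_call(1, port, value);
        }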

    Filtering Useless Data at the Source

    There are processing environments in which an application reads remote sequential files with a large number of records only to use some of them. Examples include servers, proxies, firewall and intrusion detection log analysis tools, sensor log analysis, and large scientific dataset processing. To be processed, all file records must be transferred through the network, and all of them must be processed by the application. Some of the transferred records are discarded immediately by the application because it has no interest in them, yet they still consume network bandwidth and operating system cache buffers. This article proposes filtering records at the data source without changing the application. Records of interest are transferred without modification, while only references to the remaining records are sent from the source to the consuming application. At the application side, the sequence of records is rebuilt, keeping the content of the records of interest and filling the others with dummy values that the application will discard. As the number and length of records are preserved (and therefore the file size too), it is not necessary to modify the application. Once a filtering rule is applied to a file, only the useful records and references to the useless ones are transferred to the application side, reducing network usage, transfer time, and cache utilization. A modified (but compatible) version of the NFS protocol was developed as a proof of concept. Red de Universidades con Carreras en Informática (RedUNCI)
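
    The key point is that record count and record length (and hence file size) are preserved, so the client side can rebuild the byte stream by inserting dummy records wherever only a reference arrived. The C sketch below illustrates that reconstruction step under the simplifying assumption of fixed-length records; it is not the modified NFS protocol itself.

        #include <string.h>

        #define REC_LEN 128   /* assumed fixed record length for the sketch */

        /* A filtered transfer unit: either a full record or just a reference
         * (record number) to a record the filter classified as useless. */
        typedef struct {
            long record_no;
            int  has_data;            /* 1 = record content follows    */
            char data[REC_LEN];       /* valid only when has_data == 1 */
        } filtered_rec_t;

        /* Rebuild the original record sequence in 'out' (nrecs * REC_LEN bytes):
         * records of interest keep their content, the rest are filled with a
         * dummy pattern that the application will simply read and discard. */
        void rebuild(const filtered_rec_t *in, int nrecs, char *out) {
            for (int i = 0; i < nrecs; i++) {
                char *dst = out + in[i].record_no * REC_LEN;
                if (in[i].has_data)
                    memcpy(dst, in[i].data, REC_LEN);
                else
                    memset(dst, '#', REC_LEN);   /* dummy filler */
            }
        }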

    MINIX4RT: a real-time operating system based on MINIX

    Tanenbaum's MINIX Operating System was extended with a Real-Time microkernel and services to form MINIX4RT, a Real-Time Operating System for academic use. It includes more flexible Interprocess Communication facilities supporting a basic priority inheritance protocol, a fixed-priority scheduler, timer- and event-driven interrupt management, and statistics and Real-Time metrics gathering, while keeping backward compatibility with standard MINIX versions. Facultad de Informática
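
    Of the listed features, the fixed-priority scheduler is the simplest to sketch: ready processes are kept per priority level and the dispatcher always picks the head of the highest non-empty queue. The fragment below is a generic illustration with invented names (ready_head, pick_next), not the MINIX4RT scheduler code.

        #define NR_PRIOS 16                        /* assumed number of priority levels */

        struct proc;                               /* opaque process descriptor */
        static struct proc *ready_head[NR_PRIOS];  /* one FIFO queue per level; 0 = highest */

        /* Fixed-priority dispatch: scan from the highest priority level down
         * and return the first runnable process found, or NULL when idle. */
        static struct proc *pick_next(void) {
            for (int q = 0; q < NR_PRIOS; q++)
                if (ready_head[q] != (struct proc *)0)
                    return ready_head[q];
            return (struct proc *)0;               /* no runnable process: idle */
        }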

    Un modelo de arquitectura para un sistema de virtualización distribuido

    Although Operating Systems provide security, protection, and resource management features, these appear insufficient to meet the requirements of computing systems that are permanently and globally connected. Current virtualization technologies have been, and continue to be, massively adopted to cover these system and application needs thanks to their resource partitioning, isolation, consolidation capacity, security, legacy application support, administration facilities, etc. One of their restrictions is that the computing power of a Virtual Machine (or a Container) is bounded by the computing power of the physical machine that hosts it. This thesis proposes to overcome this restriction by approaching the problem as a distributed system. To reach higher levels of performance and scalability, programmers of Cloud-native applications must split them into different components, distributing their execution across several Virtual Machines (or Containers). These components communicate through well-defined interfaces such as Web Services interfaces, and the Virtual Machines (or Containers) must be configured, secured, and deployed to run the application. This is due, in part, to the fact that the different components do not share the same Operating System instance and therefore do not share the same abstract resources, such as message queues, mutexes, files, pipes, etc. The drawback of this development approach is that it prevents an integral, generalized view of resources: the programmer must plan the allocation of resources to each component of the application and, therefore, must not only program the application but also manage the distribution of those resources. This work proposes an architecture model for a Distributed Virtualization System (DVS) that expands the limits of an execution domain beyond a single physical machine, exploiting the computing power of a cluster of computers. A DVS combines and integrates Virtualization, Operating System, and Distributed System technologies, each contributing its best features. This architecture, for example, gives the programmer an integrated view of the distributed resources available to the application, relieving them of the responsibility of managing those resources. The proposed DVS model provides the features required by Cloud infrastructure service providers, such as higher performance, availability, scalability, elasticity, replication, process migration, and load balancing, among others. Legacy applications can be migrated more easily, since the same instance of a Virtual Operating System can be available on every node of the virtualization cluster. Applications developed under the new methodologies for Cloud software design and development also benefit, since they run on a system that is inherently distributed. This thesis is reviewed in SEDICI (see related document). Facultad de Informática