Infrastructural Security for Virtualized Grid Computing
The goal of the grid computing paradigm is to make computing power as easy to access as an electrical power grid. Unlike the power grid, a computing grid uses remote resources located at a service provider. Malicious users can abuse the provided resources, which affects not only their own systems but also those of the provider and of other users.
Resources are utilized in an environment where sensitive programs and data from competitors are processed on shared hardware, which again creates the potential for misuse. This is one of the main security issues: in a business environment, competitors distrust each other, and the fear of industrial espionage is always present. Currently, human trust is the strategy used to deal with these threats. The relationship between grid users and resource providers ranges from highly trusted to highly untrusted. This wide range of trust relationships arose because grid computing itself changed from a research topic with few users into a widely deployed product with early commercial adoption. Traditional open research communities have very low security requirements; business customers, in contrast, often operate on sensitive data that represents intellectual property, so their security demands are very high. In traditional grid computing, most users share the same resources concurrently. Consequently, information about other users and their jobs can usually be acquired quite easily; for example, a user can see which processes are running on another user's system. For business users this is unacceptable, since even the metadata of their jobs is classified. As a consequence, most commercial customers are not convinced that their intellectual property, in the form of software and data, is protected in the grid.
This thesis proposes a novel infrastructural security solution that advances the concept of virtualized grid computing. The work started in 2007 and led to the development of the XGE, a virtual grid management software. The XGE uses operating system virtualization to provide a virtualized landscape: users' jobs are no longer executed in a shared manner but within special sandboxed environments. To satisfy the requirements of a traditional grid setup, the solution can be coupled with the scheduler and grid middleware installed on the grid head node. To protect the prominent grid head node, a novel dual-laned demilitarized zone is introduced that makes attacks more difficult. In a traditional grid setup, the head node and the computing nodes reside in the same network, so a successful attack could also endanger the users' software and data. While the zone complicates attacks, it is, like all security solutions, not perfect. Therefore, a network intrusion detection system is enhanced with grid-specific signatures. A novel software called Fence is introduced that supports end-to-end encryption, meaning that all data remains encrypted until it reaches its final destination. It transfers data securely between the user's computer, the head node, and the nodes within the shielded, internal network. A lightweight kernel rootkit detection system ensures that only trusted kernel modules can be loaded; it is no longer possible to load untrusted modules such as kernel rootkits. Furthermore, a malware scanner for virtualized grids scans for signs of malware in all running virtual machines. Using virtual machine introspection, the scanner remains invisible to most types of malware while having full access to all system calls on the monitored system. To speed up detection, the load is distributed across multiple detection engines simultaneously.
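The trusted-module idea behind the rootkit detection component can be sketched as a simple digest check against an allowlist. This is a minimal sketch: the function names and the choice of SHA-256 are illustrative assumptions, not the thesis's actual implementation.

```python
# Hash-based kernel module allowlist sketch (illustrative, not the XGE code).
import hashlib


def module_digest(module_bytes: bytes) -> str:
    """Digest of a kernel module image as it would appear on disk."""
    return hashlib.sha256(module_bytes).hexdigest()


def may_load(module_bytes: bytes, trusted_hashes: set) -> bool:
    """Allow loading only if the module's digest is on the allowlist."""
    return module_digest(module_bytes) in trusted_hashes
```

Under such a scheme, any module not vetted ahead of time, including a kernel rootkit, fails the check and is refused.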
To enable multi-site service-oriented grid applications, the novel concept of public virtual nodes is presented. A public virtual node is a virtualized grid node with a public IP address, shielded by a set of dynamic firewalls. It is possible to create a set of connected public nodes, present either on one grid site or spread across several remote sites. A special web service allows users to modify their own rule set, in both directions, in a controlled manner.
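The user-controlled rule set of such a node might look like the following minimal sketch. The `Rule` and `NodeFirewall` names and the default-deny policy are assumptions for illustration, not the actual web service interface.

```python
# Per-node dynamic firewall rule set sketch (illustrative names and policy).
from dataclasses import dataclass


@dataclass(frozen=True)
class Rule:
    direction: str   # "inbound" or "outbound"
    port: int
    action: str      # "allow" or "deny"


class NodeFirewall:
    def __init__(self):
        self.rules = set()

    def add_rule(self, rule: Rule) -> None:
        if rule.direction not in ("inbound", "outbound"):
            raise ValueError("direction must be inbound or outbound")
        self.rules.add(rule)

    def remove_rule(self, rule: Rule) -> None:
        self.rules.discard(rule)

    def permits(self, direction: str, port: int) -> bool:
        # Default-deny: traffic passes only if an explicit allow rule
        # matches and no deny rule overrides it.
        matching = [r for r in self.rules
                    if r.direction == direction and r.port == port]
        allows = any(r.action == "allow" for r in matching)
        denies = any(r.action == "deny" for r in matching)
        return allows and not denies
```

A controlled web service would validate and apply such rule changes per user, covering both inbound and outbound directions as described above.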
The main contribution of this thesis is a set of solutions that improve the security of grid computing infrastructures. This includes the XGE, a software that transforms a traditional grid into a virtualized grid. Design and implementation details, including experimental evaluations, are given for all approaches. Nearly all parts of the software are available as open source. A summary of the contributions and an outlook on future work conclude this thesis.
Towards an Automatic Microservices Manager for Hybrid Cloud Edge Environments
Cloud computing made computing resources easier to access, enabling faster deployment of applications and services that benefit from the scalability provided by service providers. The volume of data received by the cloud has grown exponentially, because almost every device used in everyday life is connected to the internet and shares information on a global scale (e.g., smartwatches, clocks, cars, industrial equipment). This growth in data volume increases the latency experienced by client applications, degrading their quality of service (QoS).
These problems gave rise to hybrid systems, which integrate cloud resources with the various devices located between the cloud and the edge: Fog/Edge computing. These devices are highly heterogeneous, with different resource capabilities (such as memory and computational power), and are geographically distributed.
Software architectures also evolved, and the microservices architecture emerged to make application development more flexible and to increase scalability. The microservices architecture decomposes monolithic applications into small services, each with a specific functionality, that can be independently developed, deployed, and scaled. Due to their small size, microservices are adequate for deployment on hybrid Cloud/Edge infrastructures. However, the heterogeneity of those deployment locations makes microservices' management and monitoring rather complex. Monitoring, in particular, is essential when considering that microservices may be replicated and migrated across the cloud/edge infrastructure.
The main problem this dissertation addresses is building an automatic management system for microservices that can be deployed on hybrid cloud/fog infrastructures. Such a system will allow edge-enabled applications to adapt their deployment at runtime in response to variations in workloads and available computational resources. Toward this end, this work is a first step in integrating two existing projects that, combined, may support such an automatic system. One project performs automatic management of microservices but relies on a single heavyweight monitor, Prometheus, as a cloud monitor. The second project is a lightweight adaptive monitor. This thesis integrates the lightweight monitor into the automatic microservices manager.
Virtual machine scheduling in dedicated computing clusters
Time-critical applications process a continuous stream of input data and have to meet specific timing constraints. A common approach to ensure that such an application satisfies its constraints is over-provisioning: the application is deployed in a dedicated cluster environment with enough processing power to achieve the target performance for every specified data input rate. This approach comes with a drawback: at times of decreased data input rates, the cluster resources are not fully utilized. A typical use case is the HLT-Chain application, which processes physics data at runtime of the ALICE experiment at CERN. From a cost and efficiency perspective, it is desirable to exploit temporarily unused cluster resources. Existing approaches pursue this goal by running additional applications. These approaches, however, a) lack the flexibility to dynamically grant the time-critical application the resources it needs, b) are insufficient for isolating the time-critical application from harmful side effects introduced by additional applications, or c) are not general because they rely on application-specific interfaces. In this thesis, a software framework is presented that makes it possible to exploit unused resources in a dedicated cluster without harming a time-critical application. Additional applications are hosted in virtual machines (VMs), and unused cluster resources are allocated to these VMs at runtime. To avoid resource bottlenecks, the resource usage of VMs is dynamically modified according to the needs of the time-critical application. For this purpose, a combination of methods not previously used together is employed. On a global level, appropriate VM manipulations such as hot migration, suspend/resume, and start/stop are determined by an informed search heuristic and applied at runtime. Locally, on cluster nodes, a feedback-controlled adaptation of VM resource usage is carried out in a decentralized manner.
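The local, feedback-controlled adaptation can be illustrated with a simple proportional controller that moves a VM's CPU cap toward the headroom left by the time-critical application. The gain and parameter names are illustrative assumptions, not the framework's actual control law.

```python
# Proportional feedback sketch for a guest VM's CPU cap (illustrative only).
def adapt_vm_cap(current_cap: float, tc_cpu_demand: float,
                 node_capacity: float, gain: float = 0.5,
                 min_cap: float = 0.0) -> float:
    """Return a new CPU cap (in cores) for the guest VM.

    headroom: capacity left after the time-critical (tc) application's
    demand. The cap moves a fraction `gain` of the way toward it each
    control cycle, so caps shrink quickly when the tc load rises.
    """
    headroom = max(node_capacity - tc_cpu_demand, min_cap)
    return current_cap + gain * (headroom - current_cap)
```

Run repeatedly per node, such a loop needs no central coordination, matching the decentralized adaptation described above.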
Employing this framework increases a cluster's usage by running additional applications while preventing negative impact on a time-critical application. This capability is shown for the HLT-Chain application: in an empirical evaluation, cluster CPU usage is increased from 49% to 79%, additional results are computed, and no negative effects on the HLT-Chain application are observed.
System Design and Implementation for Hybrid Network Function Virtualization
With the application of virtualization technology in computer networks, many new research areas and techniques have been explored, such as network function virtualization (NFV). A significant benefit of virtualization is that it reduces the cost of a network system and increases its flexibility. Due to the increasing complexity of the network environment and the constantly growing network scale and bandwidth, it is imperative to aim for higher performance, extensibility, and flexibility in future network systems. In this dissertation, hybrid NFV platforms applying virtualization technology are proposed. We further explore techniques used to improve the performance, scalability, and resilience of these systems.
In the first part of this dissertation, we describe a new heterogeneous hardware-software NFV platform that provides scalability and programmability while supporting significant hardware-level parallelism and reconfiguration. Our computing platform takes advantage of both field-programmable gate arrays (FPGAs) and microprocessors to implement numerous virtual network functions (VNFs) that can be dynamically customized to specific network flow needs. Traffic management and hardware reconfiguration functions are performed by a global coordinator, which allows for the rapid sharing of network function states and continuous evaluation of network function needs. With the help of the state-sharing mechanism offered by the coordinator, customer-defined VNF instances can be easily migrated between heterogeneous middleboxes as the network environment changes. A resource allocation algorithm dynamically reassesses resource deployments as network flows and conditions are updated.
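The coordinator's role in state sharing and migration can be sketched as a small registry: a VNF checkpoints its flow state centrally, and a migration hands that state to the new middlebox. All names here are illustrative assumptions, not the platform's actual API.

```python
# Coordinator sketch: central state sharing enables VNF migration between
# heterogeneous middleboxes (illustrative, not the described platform's code).
class Coordinator:
    def __init__(self):
        self.state = {}       # vnf id -> opaque checkpointed state
        self.placement = {}   # vnf id -> middlebox id

    def checkpoint(self, vnf_id: str, state_blob, middlebox_id: str) -> None:
        """Record a VNF's latest state and where it currently runs."""
        self.state[vnf_id] = state_blob
        self.placement[vnf_id] = middlebox_id

    def migrate(self, vnf_id: str, target_middlebox: str):
        """Move a VNF instance; return the state the target restores."""
        if vnf_id not in self.state:
            raise KeyError("no checkpointed state for this VNF")
        self.placement[vnf_id] = target_middlebox
        return self.state[vnf_id]
```

Because the state lives at the coordinator rather than in the middlebox, an FPGA-based instance can hand off to a processor-based one (or vice versa) without losing per-flow context.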
In the second part of this dissertation, we explore a new session-level approach for NFV that implements distributed agents in heterogeneous middleboxes to steer packets belonging to different sessions through session-specific service chains. Our session-level approach supports inter-domain service chaining with both FPGA- and processor-based middleboxes, dynamic reconfiguration of service chains for ongoing sessions, and the application of session-level approaches to UDP-based protocols. To demonstrate our approach, we establish inter-domain service chains for QUIC sessions and reconfigure the service chains across a range of FPGA- and processor-based middleboxes. We show that our session-level approach can successfully reconfigure service chains for individual QUIC sessions. Compared with software implementations, the distributed agents implemented on FPGAs show better performance in various test scenarios.
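Session-level steering amounts to keying traffic by its flow 5-tuple and forwarding it along a per-session ordered chain of middleboxes. The sketch below shows that mapping, including reconfiguration of an ongoing session; the class and method names are assumptions for illustration.

```python
# Per-session service chain table sketch (illustrative, not the actual agents).
class SessionSteering:
    def __init__(self):
        # session 5-tuple -> ordered list of middlebox ids
        self.chains = {}

    def install_chain(self, session: tuple, chain: list) -> None:
        """Bind a new session to an ordered service chain."""
        self.chains[session] = list(chain)

    def reconfigure(self, session: tuple, chain: list) -> None:
        """Change the chain of an ongoing session in place."""
        if session not in self.chains:
            raise KeyError("no chain installed for session")
        self.chains[session] = list(chain)

    def next_hops(self, session: tuple) -> list:
        """Middleboxes a packet of this session must traverse, in order."""
        return self.chains.get(session, [])
```

Because QUIC runs over UDP, the 5-tuple (plus connection ID in practice) is what identifies the session to steer; the sketch uses the 5-tuple alone for brevity.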
Overview of Cloud Computing
This updated book (Version 1.2) fills a void in introductory textbooks on cloud computing. The target audience is readers with some technical background who are also interested in the business aspects of cloud computing. The book intentionally does not focus on technical details and does not include step-by-step instructions, in order to avoid becoming obsolete too quickly. While new tools and concepts are sure to keep appearing at a rapid pace, the bulk of the book should remain true and useful for a number of years. Examples are usually based on the Google Cloud Platform, but the principles covered in the book are equally relevant to users of other cloud platforms.
Blockchain leveraged decentralized IoT eHealth framework
Blockchain technologies, recently emerging for eHealth, can facilitate a secure, decentralized, and patient-driven record management system. However, Blockchain technologies cannot accommodate the storage of data generated from IoT devices in remote patient management (RPM) settings, as this application requires a fast consensus mechanism, careful management of keys, and enhanced protocols for privacy. In this paper, we propose a Blockchain-leveraged decentralized eHealth architecture which comprises three layers: (1) the sensing layer: Body Area Sensor Networks include medical sensors, typically on or in a patient's body, transmitting data to a smartphone; (2) the NEAR processing layer: Edge Networks consist of devices one hop away from the data-sensing IoT devices; (3) the FAR processing layer: Core Networks comprise Cloud or other high-performance computing servers. A Patient Agent (PA) software, replicated on the three layers, processes medical data to ensure reliable, secure, and private communication. The PA executes a lightweight Blockchain consensus mechanism and utilizes a Blockchain-leveraged task-offloading algorithm to ensure the patient's privacy while outsourcing tasks. A performance analysis of the decentralized eHealth architecture has been conducted to demonstrate the feasibility of the system in the processing and storage of RPM data.
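The PA's offloading choice across the three layers can be sketched as picking the nearest layer that meets a task's deadline and has capacity. The thresholds, round-trip times, and function name below are illustrative assumptions, not the paper's actual algorithm.

```python
# Three-layer task placement sketch (illustrative thresholds and names).
def choose_layer(task_cpu: float, deadline_ms: float, edge_free_cpu: float,
                 edge_rtt_ms: float = 10, cloud_rtt_ms: float = 100) -> str:
    """Pick sensing (on-device), edge, or cloud for a PA task."""
    if deadline_ms < edge_rtt_ms:
        return "sensing"        # too tight even for one network hop
    if task_cpu <= edge_free_cpu:
        return "edge"           # one hop away and has capacity
    if deadline_ms >= cloud_rtt_ms:
        return "cloud"          # far away, but the deadline allows it
    return "sensing"            # fall back to running on the device
```

In the paper's setting the decision would additionally weigh privacy, since the task-offloading algorithm is Blockchain-leveraged precisely to keep outsourced tasks private; the sketch covers only the latency/capacity dimension.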
A Message-Passing, Thread-Migrating Operating System for a Non-Cache-Coherent Many-Core Architecture
The difference between emerging many-core architectures and their multi-core predecessors goes beyond just the number of cores incorporated on a chip. Current technologies for maintaining cache coherency are not scalable beyond a few dozen cores, and a lack of coherency presents a new paradigm for software developers to work with. While shared memory multithreading has been a viable and popular programming technique for multi-cores, the distributed nature of many-cores is more amenable to a model of share-nothing, message-passing threads. This model places different demands on a many-core operating system, and this thesis aims to understand and accommodate those demands. We introduce Xipx, a port of the lightweight Embedded Xinu operating system to the many-core Intel Single-chip Cloud Computer (SCC). The SCC is a 48-core x86 architecture that lacks cache coherency. It features a fast mesh network-on-chip (NoC) and on-die message passing buffers to facilitate message-passing communications between cores. Running as a separate instance per core, Xipx takes advantage of this hardware in its implementation of a message-passing device. The device multiplexes the message passing hardware, thereby allowing multiple concurrent threads to share the hardware without interfering with each other. Xipx also features a limited framework for transparent thread migration. This achievement required fundamental modifications to the kernel, including incorporation of a new type of thread. Additionally, a minimalistic framework for bare-metal development on the SCC has been produced as a pragmatic offshoot of the work on Xipx. This thesis discusses the design and implementation of the many-core extensions described above. 
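The multiplexing idea can be sketched as tagging each message with a destination thread id so many threads share one channel without interfering. The class below is an illustrative model, not Xipx's actual interface; on the SCC the `send` path would write to the on-die message passing buffers.

```python
# Message-passing multiplexer sketch: per-thread mailboxes over one shared
# channel (illustrative model of the Xipx device, not its real code).
from collections import defaultdict, deque


class MessageMux:
    def __init__(self):
        # thread id -> FIFO of payloads awaiting delivery
        self.mailboxes = defaultdict(deque)

    def send(self, dest_tid: int, payload) -> None:
        """Deliver a payload to the destination thread's mailbox."""
        self.mailboxes[dest_tid].append(payload)

    def receive(self, tid: int):
        """Non-blocking receive: next payload for this thread, or None."""
        box = self.mailboxes[tid]
        return box.popleft() if box else None
```

Because each thread only ever touches its own mailbox, concurrent threads share the underlying hardware without stepping on each other's messages, which is the property the Xipx device provides.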
While Xipx serves as a foundation for continued research on many-core operating systems, test results show good performance from both message passing and thread migration, suggesting that, as it stands, Xipx is an effective platform for exploring many-core development at the application level as well.
Dynamic computation migration in distributed shared memory systems
Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995. By Wilson Cheng-Yi Hsieh. Includes vita and bibliographical references (p. 123-131).