Integration of the cloud computing paradigm with the network operator's infrastructure
PhD in Informatics Engineering

The proliferation of Internet access allows users to consume services directly over the Internet, changing both the way applications are used and the way we communicate, and thereby popularizing the so-called cloud computing paradigm. Cloud computing brings with it requirements at two different levels: at the cloud level, which usually relies on centralized data centers, where information technology and network resources must be able to meet the demand of such services; and at the access level, i.e., depending on the service being consumed, different levels of quality of service are required in the access network, which is a Network Operator (NO) domain. In summary, there is an obvious network dependency. However, the network has been playing a relatively minor role, mostly as a provider of (best-effort) connectivity within the cloud and in the access network.
The work developed in this Thesis enables the effective integration of the cloud and NO domains, providing the network support the cloud requires. We propose a framework and a set of associated mechanisms for the integrated management and control of cloud computing and NO domains to provide end-to-end services. Moreover, we present a thorough study on the embedding of virtual resources in this integrated environment. The study focuses on maximizing the hosting of virtual resources on the physical infrastructure through optimal embedding strategies (considering both the initial allocation of resources and adaptations over time), while minimizing the costs associated with energy consumption, in single and multiple domains. Furthermore, we explore how the NO can take advantage of the integrated environment to host traditional network functions. In this sense, we study how virtual network Service Functions (SFs) should be modelled and managed in a cloud environment and enhance the framework accordingly.
A thorough evaluation of the proposed solutions was performed in the scope of this Thesis, assessing their benefits. We implemented proofs of concept to demonstrate the added value, feasibility and ease of deployment of the proposed framework. Furthermore, the embedding strategies were evaluated through simulation and Integer Linear Programming (ILP) solving tools, showing that it is possible to reduce the energy consumption of the physical infrastructure without jeopardizing the acceptance of virtual resources. This reduction can be further increased by allowing virtual resources to adapt over time. However, one should keep in mind the costs associated with the adaptation processes: these costs can be minimized, but virtual resource acceptance may be reduced as a result. This tradeoff has also been a subject of the work in this Thesis.
Trust and integrity in distributed systems
In the last decades, we have witnessed explosive growth of the Internet. The massive adoption of distributed systems on the Internet allows users to offload their computing-intensive work to remote servers, e.g. the cloud. In this context, distributed systems are pervasively used in a number of different scenarios, such as web-based services that receive and process data, cloud nodes where company data and processes are handled, and softwarised networks that process packets. In these systems, all the computing entities need to trust each other and cooperate in order to work properly.
While the communication channels can be well protected by protocols like TLS or IPsec, the problem lies in the expected behaviour of the remote computing platforms, because they are not under the direct control of end users and offer no guarantee that they will behave as agreed. For example, the remote party may use non-legitimate services for its own convenience (e.g. illegally storing received data and routed packets), or the remote system may misbehave due to an attack (e.g. changing deployed services). This is especially important because most of these computing entities need to expose interfaces towards the Internet, which makes them easier to attack. Hence, software-based security solutions alone are insufficient to deal with the current scenario of distributed systems. They must be coupled with stronger means such as hardware-assisted protection.
In order to allow the nodes in a distributed system to trust each other, their integrity must be presented and assessed so that their behaviour can be predicted. The remote attestation technique of trusted computing was proposed specifically to deal with the integrity of remote entities, e.g. whether a platform is compromised by bootkit attacks or by a cracked kernel or services. This technique relies on a hardware chip called the Trusted Platform Module (TPM), which is available in most business-class laptops, desktops and servers. The TPM acts as the hardware root of trust, providing a special set of capabilities that allows a physical platform to present its integrity state.
With a TPM on the motherboard, remote attestation is the procedure by which a physical node provides hardware-based proof of the software components loaded on the platform; other entities can evaluate this proof to determine the platform's integrity state. Thanks to the hardware TPM, the remote attestation procedure is resistant to software attacks. However, even though the availability of this chip is high, its actual usage is low.
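The hardware-based proof rests on the TPM's Platform Configuration Registers (PCRs), which accumulate a hash chain of every loaded component via the extend operation: the new PCR value is a hash of the old value concatenated with the component's digest. The sketch below models this mechanism with SHA-256; the component names are invented for illustration.

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style PCR extend: new PCR = H(old PCR || H(component))."""
    digest = hashlib.sha256(measurement).digest()
    return hashlib.sha256(pcr + digest).digest()

# A PCR starts at all zeroes and accumulates each component as it loads.
pcr = bytes(32)
for component in [b"bootloader", b"kernel", b"init"]:
    pcr = pcr_extend(pcr, component)

# The final value depends on every component AND on the load order,
# which is why classic attestation is so strict about ordering.
pcr2 = bytes(32)
for component in [b"kernel", b"bootloader", b"init"]:
    pcr2 = pcr_extend(pcr2, component)
print(pcr != pcr2)  # True: a different order yields a different PCR
```

This order sensitivity is exactly what makes the classic technique a poor fit for application-layer services that load in arbitrary order.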
The major reason is that trusted computing offers very little flexibility, since its goal is to provide strong integrity guarantees. For instance, the remote attestation result is positive if and only if the software components loaded in the platform are expected and loaded in a specific order, which limits its applicability in real-world scenarios. For such reasons, this technique is especially hard to apply to software services running in the application layer, which are loaded in arbitrary order and constantly updated. Because of this, current remote attestation techniques provide an incomplete solution: they focus only on the boot phase of physical platforms and not on the services, let alone services running in virtual instances.
This work first proposes a new remote attestation framework capable of presenting and evaluating the integrity state not only of the boot phase of physical platforms but also of software services at load time (e.g. whether the loaded software is legitimate). The framework allows users to know and understand the integrity state of the whole life cycle of the services they are interacting with, so that they can make an informed decision about whether to send their data or trust the received results.
Second, based on the remote attestation framework, this thesis proposes a method to bind the identity of a secure channel endpoint to a specific physical platform and its integrity state. Secure channels are extensively adopted in distributed systems to protect data transmitted from one platform to another. However, they do not convey any information about the integrity state of the platform or the service that generates and receives this data, which leaves ample space for various attacks. With the binding of the secure channel endpoint to the hardware TPM, users are protected from relay attacks (through the hardware-based identity) and from malicious or cracked platforms and software (through remote attestation).
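One way to realize such a binding, sketched below under assumptions of this illustration rather than as the thesis's exact protocol, is to hash the channel endpoint's public key together with the platform's PCR state and have the TPM sign the result (e.g. as the qualifying data of a quote). A relay attacker with a different endpoint key then cannot reproduce the attested value. All key and PCR values here are placeholders.

```python
import hashlib

def binding_digest(channel_pubkey: bytes, pcr_digest: bytes) -> bytes:
    """Ties a secure-channel endpoint key to the platform's integrity state.
    In a real trusted channel this digest would be covered by a TPM quote."""
    return hashlib.sha256(channel_pubkey + pcr_digest).digest()

# Illustrative placeholder values, not real keys or PCR contents.
server_key = b"-----server TLS public key-----"
pcrs = hashlib.sha256(b"measured boot log").digest()

quote_payload = binding_digest(server_key, pcrs)

# A relay attacker presenting its own endpoint key fails verification:
attacker_key = b"-----attacker TLS public key-----"
print(binding_digest(attacker_key, pcrs) != quote_payload)  # True
```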
Third, with the help of the remote attestation framework, this thesis introduces a new method to include the integrity state of software services running in virtual containers in the evidence generated by the hardware TPM. This solution is especially important for softwarised network environments. Softwarised networking was proposed to provide dynamic and flexible network deployment, an increasingly complex task nowadays. Its main idea is to replace hardware appliances with softwarised network functions running inside virtual instances, which are full-fledged computational systems accessible from the Internet; thus their integrity is at stake. Unfortunately, current remote attestation work is not able to provide hardware-based integrity evidence for software services running inside virtual instances, because the direct link between the internals of virtual instances and the hardware root of trust is missing. With the solution proposed in this thesis, the integrity state of the softwarised network functions running in virtual containers can be presented and evaluated with hardware-based evidence, implying the integrity of the whole softwarised network.
The proposed remote attestation framework, trusted channel and trusted softwarised network are implemented in separate working prototypes. Their performance was evaluated and proved to be excellent, allowing them to be applied in real-world scenarios. Moreover, the implementation also exposes various APIs to simplify future integration with different management platforms, such as OpenStack and OpenMANO.
A secure and scalable communication framework for inter-cloud services
Many contemporary cloud computing platforms offer an Infrastructure-as-a-Service (IaaS) provisioning model, which delivers basic virtualized computing resources like storage, hardware, and networking as on-demand, dynamic services. However, a single cloud service provider does not have limitless resources to offer to its users, and increasingly users are demanding extensibility and interoperability with other cloud service providers. This has increased the complexity of the cloud ecosystem and resulted in the emergence of the concept of an Inter-Cloud environment, where a cloud computing platform can use the infrastructure resources of other cloud computing platforms to offer greater value and flexibility to its users. However, no common models or standards exist that allow the users of cloud service providers to provision even basic services across multiple providers seamlessly, although admittedly this is not due to any inherent incompatibility or proprietary nature of the foundation technologies on which these cloud computing platforms are built. Therefore, there is a justified need to investigate models and frameworks that allow the users of cloud computing technologies to benefit from the added value of the emerging Inter-Cloud environment. In this dissertation, we present a novel security model and protocols that aim to cover one of the most important gaps in a subsection of this field, namely the problem domain of provisioning secure communication within the context of a multi-provider Inter-Cloud environment. Our model offers a secure communication framework that enables a user of multiple cloud service providers to provision a dynamic application-level secure virtual private network on top of the participating cloud service providers.
We accomplish this by leveraging the scalability, robustness, and flexibility of peer-to-peer overlays and distributed hash tables, together with novel use of applied cryptography techniques, to design secure and efficient admission control and resource discovery protocols. The peer-to-peer approach eliminates the problems of manual configuration, key management, and peer churn that are encountered when setting up secure communication channels dynamically, whereas the secure admission control and secure resource discovery protocols plug the security gaps commonly found in peer-to-peer overlays. In addition to the design and architecture of our research contributions, we also present the details of a prototype implementation containing all of the elements of our research, and showcase experimental results detailing the performance, scalability, and overheads of our approach, obtained on actual (as opposed to simulated) commercial and non-commercial cloud computing platforms. These results demonstrate that our architecture incurs minimal latency and throughput overheads (5% and 10%, respectively) for the Inter-Cloud VPN connections among the virtual machines of a service deployed on multiple cloud platforms. Our results also show that our admission control scheme is approximately 82% more efficient, and our secure resource discovery scheme about 72% more efficient, than a standard PKI-based (Public Key Infrastructure) scheme.
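The role of the distributed hash table in decentralized resource discovery can be illustrated with a minimal consistent-hash ring: every peer independently maps a resource key to the overlay node responsible for it, so no manual configuration or central directory is needed. The node names and resource key below are invented, and this sketch omits the admission control and cryptographic protections the dissertation adds on top.

```python
import hashlib
from bisect import bisect_right

def point(data: bytes) -> int:
    """Hash a value to a position on the ring."""
    return int.from_bytes(hashlib.sha256(data).digest()[:8], "big")

class Ring:
    """Minimal consistent-hash ring, as used by DHT-style overlays."""
    def __init__(self, nodes):
        self._ring = sorted((point(n.encode()), n) for n in nodes)

    def lookup(self, key: str) -> str:
        """Return the node responsible for key: first node clockwise."""
        points = [p for p, _ in self._ring]
        i = bisect_right(points, point(key.encode())) % len(self._ring)
        return self._ring[i][1]

ring = Ring(["cloud-a.example", "cloud-b.example", "cloud-c.example"])
owner = ring.lookup("vpn-endpoint/service-42")
# Every peer computes the same owner for the same key.
print(owner == ring.lookup("vpn-endpoint/service-42"))  # True
```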
Internet of Things From Hype to Reality
The Internet of Things (IoT) has gained significant mindshare and attention in academia and industry, especially over the past few years. The reasons behind this interest are the potential capabilities that IoT promises to offer. On the personal level, it paints a picture of a future world where all the things in our ambient environment are connected to the Internet and seamlessly communicate with each other to operate intelligently. The ultimate goal is to enable objects around us to efficiently sense our surroundings, inexpensively communicate, and ultimately create a better environment for us: one where everyday objects act based on what we need and like, without explicit instructions.
A pattern-based framework for the design of secure and dependable SDN/NFV-enabled networks
As the world becomes an interconnected network where objects and humans interact, cyber and physical networks play an important role in smart ecosystems due to their increasing use in critical infrastructure and smart cities. Software Defined Networking (SDN) and Network Function Virtualisation (NFV) are a promising combination for programmable connectivity, rapid service provisioning and service chaining, as they offer the necessary end-to-end optimisations. However, with the current exponential growth of connected devices, future networks such as SDN/NFV-enabled ones require open architectures, facilitated by standards and a strong ecosystem.

In this thesis, a model-based approach is proposed to support the design and verification of secure and dependable SDN/NFV-enabled networks. The approach develops executable patterns (reusable design solutions and object interactions, encoded in a rule-based reasoning system) able to guarantee security and dependability (S&D) properties in SDN/NFV-enabled networks. To execute S&D patterns, a pattern-based framework is implemented that inserts patterns both at design time and at runtime. The framework also highlights the benefit of leveraging the flexibility of SDN/NFV-enabled networks to deploy enhanced reactive security mechanisms for the protection of the industrial network via service function chaining (SFC). To demonstrate the value of this approach and the functionality of the pattern framework, different pattern instances are implemented to guarantee S&D in network infrastructures. The developed design patterns can design network topologies, guarantee network properties, and offer security service provisioning and chaining. Finally, in order to evaluate the developed patterns in the pattern framework, three different use cases are described, in which a number of usage scenarios are deployed and evaluated experimentally.
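The idea of an executable pattern can be sketched as a reusable check plus a reactive remediation applied to a network model. The topology fields, pattern names and chain function below are invented for illustration; the thesis's patterns are encoded in a rule-based reasoning system rather than plain Python.

```python
# A toy network model: two switches, one link, and an (initially empty)
# service function chain protecting the network. Fields are illustrative.
topology = {
    "switches": ["s1", "s2"],
    "links": [("s1", "s2")],
    "sfc": [],
}

def connectivity_pattern(topo):
    """Design-time S&D pattern: every switch must lie on at least one link.
    Returns the list of violating switches (empty = property holds)."""
    linked = {n for link in topo["links"] for n in link}
    return [s for s in topo["switches"] if s not in linked]

def apply_sfc_pattern(topo, function):
    """Reactive pattern: prepend a protection function to the chain,
    e.g. in response to a detected threat."""
    if function not in topo["sfc"]:
        topo["sfc"].insert(0, function)
    return topo

violations = connectivity_pattern(topology)  # [] -> property holds
apply_sfc_pattern(topology, "ids")           # chain becomes ["ids"]
print(violations, topology["sfc"])
```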
Practical Encryption Gateways to Integrate Legacy Industrial Machinery
Future industrial networks will consist of a mixture of old and new components, due to the very long life-cycles of industrial machines on the one hand and the need to change in the face of trends like Industry 4.0 or the industrial Internet of things on the other. These networks will be very heterogeneous and will serve legacy as well as new use cases in parallel. This will result in an increased demand for network security, and it is precisely within this domain that this thesis tries to answer one specific question: how to make it possible for legacy industrial machines to run securely in those future heterogeneous industrial networks.
The need for such a solution arises from the fact that legacy machines are very outdated and hence vulnerable systems when assessed from an IT security standpoint. For various reasons, they cannot be easily replaced or upgraded, and with the opening up of industrial networks to the Internet, they become prime attack targets. The only way to provide security for them is by protecting their network traffic.
The concept of encryption gateways forms the basis of our solution. These are special network devices that sit between the legacy machine and the network. A gateway encrypts data traffic from the machine before it is put on the network and decrypts traffic coming from the network accordingly. This separates the machine from the network, since the gateway only decrypts and passes through traffic from other authenticated gateways. In effect, the gateways protect communication data in transit and shield the legacy machines from potential attackers in the rest of the network, while retaining the machines' functionality. Additionally, through the specific placement of gateways inside the network, fine-grained security policies become possible. This approach can considerably reduce the attack surface of the industrial network as a whole.
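The "only pass through traffic from authenticated gateways" behaviour can be sketched with a frame-authentication step: the sending gateway appends a MAC, and the receiving gateway forwards a frame only if the MAC verifies. Real encryption gateways would also encrypt the payload with a proper cipher and negotiate keys; the pre-shared key and frame content below are invented for illustration.

```python
import hashlib
import hmac

# Illustrative pre-shared key; real gateways would establish keys securely.
KEY = b"pre-shared gateway key (illustrative)"

def protect(frame):
    """Sending gateway: append a SHA-256 HMAC before the frame hits the wire."""
    return frame + hmac.new(KEY, frame, hashlib.sha256).digest()

def unprotect(wire):
    """Receiving gateway: pass the frame through only if its MAC is valid."""
    frame, mac = wire[:-32], wire[-32:]
    expected = hmac.new(KEY, frame, hashlib.sha256).digest()
    return frame if hmac.compare_digest(mac, expected) else None

wire = protect(b"legacy PLC telegram")
print(unprotect(wire))              # b'legacy PLC telegram'
print(unprotect(b"X" + wire[1:]))   # None: tampered frame is dropped
```

Because the legacy machine itself is untouched, it keeps its functionality while every frame crossing the network is checked at the gateway boundary.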
As a concept, this idea is straightforward and not new. Yet the devil is in the details, and no solution specifically tailored to the needs of the industrial environment and its legacy components existed prior to this work.
Therefore, in this thesis we present concrete building blocks towards a generally applicable encryption gateway solution that allows legacy industrial machinery to be integrated securely while respecting industrial requirements. This entails not only work on network security, but also work on guaranteeing the availability of the communication links protected by the gateways, on simplifying the usability of the gateways, and on the management of industrial data flows by the gateways.
Architectures and Standards for Spatial Data Infrastructures and Digital Government: European Union Location Framework Guidelines
This document provides an overview of the architecture(s) and standards for Spatial Data Infrastructures (SDI) and Digital Government. The document describes the different viewpoints according to the Reference Model for Open and Distributed Processing (RM-ODP), which is often used in both the SDI and e-Government worlds: the enterprise viewpoint, the engineering viewpoint, the information viewpoint, the computational viewpoint and the technological viewpoint. The document not only describes these viewpoints with regard to SDI and e-Government implementations, but also how the architecture(s) and standards of SDI and e-Government relate. It indicates which standards and tools can be used and provides examples of implementations in different areas, such as process modelling, metadata, data and services. In addition, the annex provides an overview of the most commonly used standards and technologies for SDI and e-Government. JRC.B.6-Digital Economy
Creation of value with open source software in the telecommunications field
Doctoral thesis. Electrical and Computer Engineering. Faculdade de Engenharia, Universidade do Porto. 200
Preliminary Specification of Services and Protocols
This document describes the preliminary specification of services and protocols for the Crutial Architecture. The Crutial Architecture definition, first addressed in Crutial Project Technical Report D4 (January 2007), intends to address a grand challenge of computer science and control engineering: how to achieve resilience of critical information infrastructures, in particular in the electrical sector. The definitions herein elaborate on the major architectural options and components established in the Preliminary Architecture Specification (D4), with special relevance to the Crutial middleware building blocks, and are based on the fault, synchrony and topological models defined in the same document. In general terms, the document describes the Runtime Support Services and APIs, and the Middleware Services and APIs. It then delves into the protocols, describing the Runtime Support Protocols and the Middleware Services Protocols. The Runtime Support Services and APIs chapter features, as a main component, the Proactive-Reactive Recovery Service, whose aim is to guarantee perpetual execution of any components it protects. The Middleware Services and APIs chapter describes our approach to intrusion-tolerant middleware. The middleware comprises several layers. The Multipoint Network layer is the lowest layer of CRUTIAL's middleware, and features an abstraction of basic communication services, such as those provided by standard protocols like IP, IPsec, UDP, TCP and SSL/TLS. The Communication Support Services feature two important building blocks: the Randomized Intrusion-Tolerant Services (RITAS), and the Overlay Protection Layer (OPL) against DoS attacks. The Activity Support Services currently defined comprise the CIS Protection service, and the Access Control and Authorization service. Protection as described in this report is implemented by mechanisms and protocols residing on a device called the Crutial Information Switch (CIS).
The Access Control and Authorization service is implemented through PolyOrBAC, which defines the rules for information exchange and collaboration between sub-modules of the architecture, corresponding in fact to different facilities of the CII's organizations. The Monitoring and Failure Detection layer contains a preliminary definition of the middleware services devoted to monitoring and failure detection activities. The remaining chapters describe the protocols implementing the above-mentioned services: the Runtime Support Protocols, and the Middleware Services Protocols.
Establishing security and privacy policies for an on-line auction
The current Enterprise Resource Planning (ERP) project is a proposal to use business-to-business electronic commerce to provide a means of developing markets for end-of-life products and their components. The objective is to develop a science and technology base for a scalable and secure hub for reverse logistics e-commerce in which users can buy and sell used or surplus products, components, and materials, as well as provide a service for disposing of them responsibly. A critical part of the project is the design of the security architecture, as well as the security and privacy policies, for the project's on-line electronic marketplace. Security for the auction website should focus on three concerns: prevention, detection, and response. Prevention consists of four basic characteristics of computer security: authentication, confidentiality, integrity, and availability. We will also analyze some of the vulnerabilities of and common attacks against web sites, and ways to defend against them. Detection involves several approaches to monitor traffic on the internal network and log the activities of users; this is important for providing forensic evidence when a site is compromised. Detection, however, is useless without some type of response, whether through patching newly found security holes, contacting vendors to report security weaknesses and new viruses, or contacting local and federal agencies to assist in closing those holes or bringing violators to justice. We will look at these issues, as well as trust in auctions (allowing buyers and sellers to determine whether a user is trustworthy) and automatic schemes for preventing a fraudulent user from exploiting that trust.