Towards Migrating Security Policies along with Virtual Machines in Cloud
Multi-tenancy and elasticity are important characteristics of every cloud. Multi-tenancy can be economical; however, it raises some security concerns. For example, competing companies may have Virtual Machines (VMs) on the same server and have access to the same resources, and there is always the possibility that one of them tries to access the other's data. To address these concerns, each tenant in the cloud should be secured separately, and firewalls are one of the means that can help in that regard. Firewalls also protect virtual machines from outside threats using access control lists and policies. On the other hand, virtual machines migrate frequently in an elastic cloud, which raises a further concern: what happens to the security policies associated with a migrated virtual machine?
In this thesis, we primarily contribute by proposing a novel framework that coordinates the migration of the associated security policies along with the virtual machine in Software-Defined Networks (SDN). We then design and develop a prototype application called Migration Application (MigApp), based on our framework, which moves security policies and coordinates virtual machine and security policy migration. MigApp runs on top of SDN controllers and uses a distributed messaging system to interact with the virtual machine monitor and other MigApp instances. We integrate MigApp with the Floodlight controller and evaluate our work through simulations.
In addition, we prepare a test-bed for security testing in clouds based on traditional networks. We focus on virtual machine migration and equip this test-bed with open-source utilities. We design an architecture based on the GNS3 network emulator to provide a distributed testing environment. We then propose a virtual machine migration framework on Oracle VirtualBox; finally, we enrich the security aspect of the framework by adding firewall rule migration and security verification mechanisms.
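The coordination the abstract describes, moving a VM's firewall rules in step with the VM itself, can be pictured with a minimal sketch. Everything here is illustrative: the event fields, the rule format, and the in-memory queue standing in for the distributed messaging system are assumptions, not MigApp's actual design.

```python
import json
from queue import Queue

# In-memory queue standing in for the distributed messaging system that
# connects MigApp instances (an assumption for illustration; the real
# system uses a network message bus between SDN controllers).
bus = Queue()

def publish_migration_event(vm_id, src_host, dst_host, rules):
    """Source-side MigApp: announce a VM migration together with its firewall rules."""
    bus.put(json.dumps({
        "event": "vm_migration",
        "vm": vm_id,
        "src": src_host,
        "dst": dst_host,
        "rules": rules,  # access-control entries tied to this VM
    }))

def handle_migration_event(firewall_tables):
    """Destination-side MigApp: install the rules before traffic is redirected."""
    msg = json.loads(bus.get())
    firewall_tables.setdefault(msg["dst"], []).extend(msg["rules"])
    return msg["vm"], msg["dst"]

# Hypothetical usage: move web-vm1 from hostA to hostB along with its ACL.
tables = {}
publish_migration_event("web-vm1", "hostA", "hostB",
                        [{"allow": "tcp", "dport": 443}])
vm, dst = handle_migration_event(tables)
```

The point of the sketch is the ordering guarantee: the destination installs the policy before the VM's traffic arrives, so the migrated machine is never unprotected.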
Supervision of the Installation and Assembly of the Voice, Data, CCTV and CATV Communication Systems Using CAT7A Technology at the New Hospital San Juan de Dios in the City of Pisco
In this professional-experience report we describe the communication systems (DATA, VOICE, CCTV and CATV) installed at the Hospital San Juan de Dios in the city of Pisco.
The system consists of a Category 7A structured cabling network and a 50/125 µm multimode optical fibre backbone.
The structured cabling was designed and implemented in accordance with the ANSI/TIA/EIA standards and ISO 11801. These standards define the structure of the cabling system as follows:
Entrance Facilities
Equipment Room
Backbone Cabling
Telecommunications Room
Horizontal Cabling
Work Area
Finally, the certification tests of the fibre-optic structured cabling and the copper network (Category 7A cable) are described.
KEYWORDS
Structured cabling, optical fibre, data, voice, CCTV and CATV
Intelligent Network Infrastructures: New Functional Perspectives on Leveraging Future Internet Services
The Internet experience of the 21st century is very different from that of the early '80s. The Internet has adapted itself to become what it really is today, a very successful business platform of global scale. Like every highly successful technology, the Internet has suffered from a natural process of ossification. Over the last 30 years, the technical solutions adopted to leverage emerging applications can be divided into two categories. First, the addition of new functionalities, either by patching existing protocols or by adding new upper layers. Second, accommodating traffic growth with higher-bandwidth links. Unfortunately, this approach is not suitable to provide the proper ground for a wide range of new applications. To be deployed, these future Internet applications require from the network layer advanced capabilities that the TCP/IP stack and its derived protocols cannot provide by design in a robust, scalable fashion. NGNs (Next Generation Networks) on top of intelligent telecommunication infrastructures are being envisioned to support future Internet services. This thesis contributes with three proposals to achieve this ambitious goal.
The first proposal presents a preliminary architecture that allows NGNs to seamlessly request advanced services, such as QoS-guaranteed point-to-multipoint circuits, from layer 1 transport networks. This architecture is based on virtualization techniques applied to layer 1 networks and hides from NGNs all the complexities of interdomain provisioning. Moreover, the economic aspects involved were also considered, making the architecture attractive to carriers. The second contribution is a framework for developing DiffServ-MPLS capable networks based exclusively on open-source software and commodity PCs. The resulting flexible DiffServ-MPLS software router was designed to allow prototyping of NGNs that use pseudo virtual circuits and assured QoS as a starting point for development. The third proposal presents a state-of-the-art routing and wavelength assignment algorithm for photonic networks. This algorithm considers physical-layer impairments to fully guarantee the requested QoS profile, even in the case of single network failures. A number of novel techniques were applied to achieve a lower blocking probability than recently proposed algorithms, without impacting setup delay.
Toward Automated Network Management and Operations.
Network management plays a fundamental role in the operation and well-being of today's networks. Despite the best efforts of existing support systems and tools, management operations in large service provider and enterprise networks remain mostly manual. Due to the larger scale of modern networks, more complex network functionalities, and higher network dynamics, human operators are increasingly short-handed. As a result, network misconfigurations are frequent and can result in violated service-level agreements and degraded user experience. In this dissertation, we develop various tools and systems to understand, automate, augment, and evaluate network management operations. Our thesis is that by introducing formal abstractions, such as deterministic finite automata, Petri nets and databases, we can build new support systems that systematically capture domain knowledge, automate network management operations, enforce network-wide properties to prevent misconfigurations, and simultaneously reduce manual effort. The theme of our systems is to build a knowledge plane based on the proposed abstractions, allowing network-wide reasoning and guidance for network operations. More importantly, the proposed systems require no modification to the existing Internet infrastructure and network devices, simplifying adoption. We show that our systems improve both timeliness and correctness in performing realistic, large-scale network operations. Finally, to address the current limitations and difficulty of evaluating novel network management systems, we have designed a distributed network testing platform that relies on network and device virtualization to provide realistic environments and isolation from production networks.
Ph.D., Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/78837/1/chenxu_1.pd
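As an illustration of the finite-automaton abstraction mentioned above, the sketch below encodes a hypothetical router-upgrade procedure as a DFA that rejects out-of-order operations. The states and actions are invented for illustration, not taken from the dissertation.

```python
# A toy deterministic finite automaton for a router-upgrade procedure:
# operators must drain traffic before upgrading and restore it afterwards.
# Encoding the procedure as transitions lets a support system catch a
# misordered operation before it causes a misconfiguration.
TRANSITIONS = {
    ("in_service", "drain"): "drained",
    ("drained", "upgrade"): "upgraded",
    ("upgraded", "restore"): "in_service",
}

def run_procedure(actions, state="in_service"):
    """Return the final state, or raise if an action violates the procedure."""
    for action in actions:
        key = (state, action)
        if key not in TRANSITIONS:
            raise ValueError(f"illegal operation {action!r} in state {state!r}")
        state = TRANSITIONS[key]
    return state
```

The same idea scales to richer abstractions (Petri nets for concurrent operations, databases for network-wide state), which is the knowledge-plane theme the abstract describes.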
Cloud Integration with the Operator's Network
Master's in Electronic Engineering and Telecommunications. The use of applications and the ways they communicate have changed greatly with the proliferation of Internet access. With this shift, many applications are now hosted on the provider's equipment rather than the user's. Cloud computing (CC) is the concept that has "sponsored" this change even further. Today the provision of these services relies on the Best Effort service that the Internet offers. This is a viable model for some services, but simply unacceptable for others (for example, video streaming). To fill this gap, there is a strong push towards integrated cloud and network services, a paradigm we call Cloud Networking. This paradigm requires on-demand establishment and automatic control and management of network and cloud resources, in which the virtualization of network and cloud resources plays a key role, not only because it eases the migration of virtual resources between different physical machines, but also because of the flexibility it provides in deploying different applications and services. In this context, the recent concept of software-defined networks (SDN) can help improve the performance of the services made available in the cloud.
This dissertation has two objectives. The first is to work on mechanisms for managing cloud and network resources in an integrated way. Specifically, it proposes a mapping algorithm as well as a link-reconfiguration mechanism to optimize resource allocation and increase request acceptance. The second is the creation of a functional component for mapping decisions and reoptimization that fits into an SDN architecture. This component is responsible for receiving, analyzing and mapping requests for connectivity services over an OpenFlow network. The algorithms used in this component take into account the findings of the first part of the dissertation.
The results lead us to conclude that the proposed mapping algorithm for cloud and network resources, as well as the link-reconfiguration mechanism, achieve performance significantly better than state-of-the-art algorithms, with higher acceptance and gains while using fewer network resources and consuming less energy. The functional component closes the basic control cycle of the SDN architecture for the reception and handling of connectivity services. The global study gives a sense of the overall performance of the complete architecture, and the individual study of the different parts of the functional component shows which parts of the proposed component should be improved in the future.
Multiservice Ethernet Digital Distributed Antenna Systems
Over 90% of wireless communications traffic occurs indoors, and in-building wireless coverage is still one of the biggest obstacles for wireless users. As growing demands on wireless capacity, coverage and connectivity have led to the 4G and 5G standards, it has also become increasingly important to design and implement future-proof indoor wireless services in a cost-effective manner. This thesis introduces a novel multi-service digital distributed antenna system (DDAS) for indoor wireless coverage, which not only is able to transport multiple wireless carriers from different vendors and mobile operators, but also allows a converged architecture that integrates the indoor wireless system with existing Ethernet infrastructures. The Cloud Radio Access Network (C-RAN) has been suggested by major telecom vendors as the main architecture for last-mile coverage in 5G. However, the digital fronthaul interface defined in the Common Public Radio Interface (CPRI), the most widely adopted standard for C-RAN, requires very expensive infrastructure due to the high data rate generated after digitisation. A solution previously introduced at the University of Cambridge removes this digital redundancy using a data compression technique that has shown three times higher transmission efficiency than CPRI. This thesis extends the concept to a more robust architecture that allows multiple wireless services to be transmitted simultaneously and carried over standard Ethernet without degrading the Quality of End-user Experience (QoE) and Quality of Service (QoS) of in-building mobile networks.
A two-channel DDAS with the data compression algorithm is experimentally demonstrated, showing wide RF dynamic range for 4G LTE and 3G WCDMA services carried simultaneously over a single fibre-based infrastructure. This led to the design and implementation of a full-service DDAS allowing 14 channels (all 2/3/4G services from three major mobile operators) to be carried over a single 10 Gbps network. Typically, a system using CPRI would need over 30 Gbps of network capacity for the same coverage.
Another key aspect covered in this thesis is the design and implementation of the multi-service DDAS over Ethernet (Eth-DDAS). Due to the stringent latency requirements of wireless services, mitigating delays and frame-ordering errors is a key challenge for carrying DDAS traffic over Ethernet. To overcome these problems, a special Eth-DDAS frame structure is proposed. After digitisation, the digital signal bearing the RF information is packetised into Ethernet-compatible frames with additional timestamps and sequence numbers before being transported over fibre to the receiver. Three latency scenarios are tested with different payload sizes of the proposed frame structure, and real-time RF performance is measured to demonstrate that such a system can be implemented in practice using commercial off-the-shelf (COTS) ADCs/DACs and FPGAs.
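The frame structure described above, timestamps plus sequence numbers so the receiver can restore sample ordering, can be sketched as follows. The field widths and header layout are assumptions for illustration, not the thesis's actual Eth-DDAS format.

```python
import struct

# Hypothetical Eth-DDAS payload header: a 64-bit timestamp (ns) and a
# 32-bit sequence number prepended to the digitised RF samples, so the
# receiver can reorder frames and measure latency. Network byte order.
HEADER = struct.Struct("!QI")

def packetise(samples, timestamp_ns, seq):
    """Prepend the header to a block of digitised RF samples."""
    return HEADER.pack(timestamp_ns, seq) + samples

def depacketise(frame):
    """Split a frame back into (timestamp_ns, seq, samples)."""
    timestamp_ns, seq = HEADER.unpack_from(frame)
    return timestamp_ns, seq, frame[HEADER.size:]

def reorder(frames):
    """Restore sample order by sequence number despite network reordering."""
    return b"".join(samples for _, _, samples in
                    sorted(map(depacketise, frames), key=lambda f: f[1]))
```

The timestamp supports the latency measurements the abstract mentions, while the sequence number handles the frame-ordering errors that Ethernet transport can introduce.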
A Relevance Model for Threat-Centric Ranking of Cybersecurity Vulnerabilities
The relentless and often haphazard process of tracking and remediating vulnerabilities is a top concern for cybersecurity professionals. The key challenge they face is trying to identify a remediation scheme specific to in-house, organizational objectives. Without a strategy, the result is a patchwork of fixes applied to a tide of vulnerabilities, any one of which could be the single point of failure in an otherwise formidable defense. This means one of the biggest challenges in vulnerability management relates to prioritization. Given that so few vulnerabilities are a focus of real-world attacks, a practical remediation strategy is to identify vulnerabilities likely to be exploited and focus efforts towards remediating those vulnerabilities first. The goal of this research is to demonstrate that aggregating and synthesizing readily accessible, public data sources to provide personalized, automated recommendations that an organization can use to prioritize its vulnerability management strategy will offer significant improvements over what is currently realized using the Common Vulnerability Scoring System (CVSS). We provide a framework for vulnerability management specifically focused on mitigating threats using adversary criteria derived from MITRE ATT&CK. We identify the data mining steps needed to acquire, standardize, and integrate publicly available cyber intelligence data sets into a robust knowledge graph from which stakeholders can infer business logic related to known threats. We tested our approach by identifying vulnerabilities in academic and common software associated with six universities and four government facilities. Ranking policy performance was measured using the Normalized Discounted Cumulative Gain (nDCG). Our results show an average 71.5% to 91.3% improvement towards the identification of vulnerabilities likely to be targeted and exploited by cyber threat actors. 
The ROI of patching using our policies resulted in savings in the range of 23.3% to 25.5% in annualized unit costs. Our results demonstrate the efficiency of creating knowledge graphs to link large data sets, facilitate semantic queries, and create data-driven, flexible ranking policies. Additionally, our framework uses only open standards, making implementation and improvement feasible for cyber practitioners and academia.
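The nDCG metric used above to score ranking policies can be computed with a short sketch; the two example relevance vectors are invented, with 1 marking a vulnerability that was later exploited in the wild.

```python
import math

def dcg(relevances):
    """Discounted Cumulative Gain: rewards placing relevant items early."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    """Normalise by the ideal (best possible) ordering; result lies in [0, 1]."""
    ideal = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal if ideal > 0 else 0.0

# Invented example: each position is a ranked CVE; 1 = later exploited.
policy_a = [1, 1, 0, 0, 1]  # exploited CVEs mostly near the top
policy_b = [0, 0, 1, 1, 1]  # exploited CVEs buried at the bottom
```

A threat-centric policy that pushes soon-to-be-exploited CVEs toward the top of the remediation queue scores a higher nDCG than one (like a raw CVSS sort) that buries them.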
Towards lightweight, low-latency network function virtualisation at the network edge
Communication networks are witnessing a dramatic growth in the number of connected mobile devices, sensors and the Internet of Everything (IoE) equipment, which have been estimated to exceed 50 billion by 2020, generating zettabytes of traffic each year. In addition, networks are stressed to serve the increased capabilities of the mobile devices (e.g., HD cameras) and to fulfil the users' desire for always-on, multimedia-oriented, and low-latency connectivity.
To cope with these challenges, service providers are exploiting softwarised, cost-effective, and flexible service provisioning, known as Network Function Virtualisation (NFV). At the same time, future networks aim to push services to the edge of the network, into close physical proximity to users, which has the potential to reduce end-to-end latency while increasing the flexibility and agility of allocating resources. However, the heavy footprint of today's NFV platforms and their lack of dynamic, latency-optimal orchestration prevent them from being used at the network edge.
In this thesis, the opportunities of bringing NFV to the network edge are identified. As a concrete solution, the thesis presents Glasgow Network Functions (GNF), a container-based NFV framework that allocates and dynamically orchestrates lightweight virtual network functions (vNFs) at the edge of the network, providing low-latency network services (e.g., security functions or content caches) to users. The thesis presents a powerful formalisation for the latency-optimal placement of edge vNFs and provides an exact solution using Integer Linear Programming, along with a placement scheduler that relies on Optimal Stopping Theory to efficiently re-calculate the placement following roaming users and temporal changes in latency characteristics.
The results of this work demonstrate that GNF's real-world vNF examples can be created and hosted on a variety of devices, including VMs from public clouds and low-cost edge devices typically found at the customer's premises. The results also show that GNF can carefully manage the placement of vNFs to provide low-latency guarantees, while minimising the number of vNF migrations required by operators to keep the placement latency-optimal.
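The latency-optimal placement problem described above can be illustrated with a tiny exhaustive search standing in for the ILP formulation (exact, but feasible only for toy instances); the vNF names, latencies and capacities below are invented for illustration.

```python
from itertools import product

def place_vnfs(vnfs, nodes, latency, capacity):
    """Assign each vNF to an edge node, minimising total user-to-vNF latency.

    latency[(vnf, node)] is the latency in ms if that vNF runs on that node;
    capacity[node] limits how many vNFs the node can host. Exhaustive search
    over all assignments, so exact but exponential in len(vnfs).
    """
    best, best_cost = None, float("inf")
    for assignment in product(nodes, repeat=len(vnfs)):
        if any(assignment.count(n) > capacity[n] for n in nodes):
            continue  # infeasible: some node over capacity
        cost = sum(latency[(v, n)] for v, n in zip(vnfs, assignment))
        if cost < best_cost:
            best, best_cost = dict(zip(vnfs, assignment)), cost
    return best, best_cost
```

An ILP solver (as used in the thesis) reaches the same optimum while scaling to realistic instance sizes, and a re-placement scheduler would rerun the optimisation as user latencies change.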
Privacy by (re)design: a comparative study of the protection of personal information in the mobile applications ecosystem under United States, European Union and South African law.
Doctoral Degree. University of KwaZulu-Natal, Durban.
The dissertation presents a comparative desktop study of the application of a Privacy by Design (PbD) approach to the protection of personal information in the mobile applications ecosystem under the Children's Online Privacy Protection Act (COPPA) and the California Consumer Privacy Act (CCPA) in the United States, the General Data Protection Regulation (GDPR) in the European Union, and the Protection of Personal Information Act (POPIA) in South Africa.
The main problem considered in the thesis is whether there is an 'accountability gap' within the legislation selected for comparative study. This is analysed by examining whether the legislation can be enforced against parties other than the app developer in the mobile app ecosystem, as it is theorised that only on this basis will the underlying technologies and architecture of mobile apps be changed to support a privacy by (re)design approach. The key research question is what legal approach is to be adopted to enforce such an approach within the mobile apps ecosystem.
It describes the complexity of the mobile apps ecosystem, identifying the key role players and the processing operations that take place.
It sets out what is encompassed by the conceptual framework of PbD, and why the concept of privacy by (re)design may be more appropriate in the context of mobile apps integrating third-party services and products. It identifies the core data protection principles of data minimisation and accountability, and the nature of informed consent, as being essential to an effective PbD approach.
It concludes that without strengthening the legal obligations pertaining to the sharing of personal information with third parties, neither regulatory guidance, as is preferred in the United States, nor a direct legal obligation, as created by Article 25 of the GDPR, is adequate to enforce a PbD approach within the mobile apps ecosystem. It concludes that although a PbD approach is implied for compliance by a responsible party with POPIA, legislative reforms are necessary. It proposes amendments to POPIA to address inadequacies in the requirements for notice, and to impose obligations on a responsible party in relation to the sharing of personal information with third parties who will process the personal information for further, separate purposes.