
    A novel multipath-transmission supported software defined wireless network architecture

    The inflexible management and operation of today's wireless access networks cannot meet increasingly demanding requirements such as high mobility and throughput, service differentiation, and high-level programmability. In this paper, we put forward a novel multipath-transmission supported software-defined wireless network architecture (MP-SDWN), with the aim of achieving seamless handover, throughput enhancement, and flow-level wireless transmission control, as well as programmable interfaces. In particular, this research addresses the following issues: 1) for high mobility and throughput, a multi-connection virtual access point is proposed to enable multiple simultaneous transmission paths over a set of access points for each user, and 2) wireless flow transmission rules and programmable interfaces are implemented in the mac80211 subsystem to enable service differentiation and flow-level wireless transmission control. Moreover, the efficiency and flexibility of MP-SDWN are demonstrated in performance evaluations conducted on an 802.11-based testbed: compared to regular WiFi, the proposed MP-SDWN architecture achieves seamless handover and multifold throughput improvement, and supports flow-level wireless transmission control for different applications.
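    As a rough illustration of what a flow-level transmission rule might look like, the sketch below models rules that map flow matches to the set of access points used simultaneously for a flow. The field names, AP identifiers and matching logic are hypothetical; the paper's actual implementation lives inside the mac80211 subsystem, not in Python.

```python
# Illustrative sketch (not the paper's mac80211 code): a flow-level rule maps
# a flow match to the access points that form its multi-connection virtual AP.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FlowRule:
    # None acts as a wildcard for a match field.
    proto: Optional[str] = None
    dst_port: Optional[int] = None
    src_ip: Optional[str] = None
    # Access points used simultaneously for this flow.
    aps: List[str] = field(default_factory=list)

    def matches(self, pkt: dict) -> bool:
        checks = [("proto", self.proto), ("dst_port", self.dst_port),
                  ("src_ip", self.src_ip)]
        return all(val is None or pkt.get(key) == val for key, val in checks)

# Example: duplicate latency-sensitive video over two APs, keep web traffic on one.
rules = [
    FlowRule(proto="udp", dst_port=5004, aps=["ap1", "ap2"]),
    FlowRule(proto="tcp", dst_port=443, aps=["ap1"]),
]

def select_aps(pkt: dict) -> List[str]:
    for rule in rules:
        if rule.matches(pkt):
            return rule.aps
    return ["ap1"]  # default: single-path transmission

print(select_aps({"proto": "udp", "dst_port": 5004, "src_ip": "10.0.0.7"}))
# -> ['ap1', 'ap2']
```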

    Towards an Efficient Management and Orchestration Framework for Virtual Network Security Functions

    Recent years have witnessed a growth in the number of users connected to computer networks, due mainly to megatrends such as the Internet of Things (IoT), Industry 4.0, and Smart Grids. Simultaneously, service providers have started offering vertical services tied to specific business cases (e.g., automotive, banking, and e-health), requiring more and more scalability and flexibility from infrastructures and their management. NFV and SDN technologies are a clear way forward to address these challenges, even though they are still in their early stages. Security plays a central role in this scenario, mainly because it must follow the rapid evolution of computer networks and the growing number of devices. The main issue is to protect the end user from increasing threats, and for this reason we propose in this paper a security framework compliant with the Security-as-a-Service paradigm. To implement this framework, we leverage NFV and SDN technologies using a user-centered approach, which allows the security service to be customized according to user preferences. Another goal of our work is to highlight the most relevant challenges encountered in the design and implementation of our solution. In particular, we demonstrate how important it is to choose an efficient way to configure the Virtual Network Security Functions in terms of performance. Furthermore, we address the nontrivial problem of Service Function Chaining in an NFV MANO platform and show the main challenges it raises.
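    A minimal sketch of the user-centered idea described above, assuming a hypothetical catalogue that maps user preferences to virtual network security functions and composes them into a service function chain. The preference names, function names and chaining logic are illustrative, not the framework's actual API.

```python
# Hypothetical mapping from user preferences to an ordered VNSF chain.
from typing import Dict, List

CATALOGUE = {
    "block_malware": ["dpi", "firewall"],
    "filter_adult_content": ["parental_control"],
    "encrypt_dns": ["dns_proxy"],
}

def build_chain(preferences: Dict[str, bool]) -> List[str]:
    """Return an ordered, de-duplicated service function chain."""
    chain: List[str] = []
    for pref, enabled in preferences.items():
        if not enabled:
            continue
        for vnsf in CATALOGUE.get(pref, []):
            if vnsf not in chain:
                chain.append(vnsf)
    return chain

# A user who enables malware blocking and DNS encryption gets:
# ['dpi', 'firewall', 'dns_proxy']
print(build_chain({"block_malware": True,
                   "filter_adult_content": False,
                   "encrypt_dns": True}))
```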

    Privacy-preserving deanonymization of Dark Web Tor Onion services for criminal investigations

    Master's thesis, Informatics Engineering, 2022, Universidade de Lisboa, Faculdade de Ciências. Tor is one of the most popular anonymity networks in the world. Users of this platform range from dissidents to cybercriminals or even ordinary citizens concerned with their privacy. It is based on advanced security mechanisms that provide strong guarantees against traffic correlation attacks that could deanonymize its users and services. Torpedo is the first known traffic correlation attack on Tor that aims at deanonymizing onion services' (OS) sessions. In a federated way, servers belonging to ISPs around the globe can process deanonymization queries for specific IPs, and through an interface an operator can submit these queries to deanonymize OSes and clients. Initial results showed that this attack was able to identify the IP addresses of OS sessions with high confidence (no false positives). However, Torpedo required ISPs to share sensitive network traffic of their clients with each other. In this work, we therefore complement the previously developed research with the introduction and study of privacy-preserving machine learning techniques, aiming to develop and assess a new attack vector on Tor that preserves the privacy of each party's inputs to the computation, allowing ISPs to encrypt their network traffic before correlation. In more detail, we leverage, test and assess an ML-oriented multi-party computation framework (TF Encrypted) on top of Torpedo, and we also develop a preliminary extension for training the model with differential privacy using TF Privacy. Our evaluation concludes that the performance and precision of the system were not significantly affected by the execution of multi-party computation between ISPs, but the same was not true when we additionally introduced a pre-defined amount of random noise to the gradients by training the model with differential privacy.
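    For context, the differentially private training step referred to above follows the usual DP-SGD recipe: clip each per-example gradient and add Gaussian noise before averaging. The NumPy sketch below shows that mechanism in isolation; it is not TF Privacy's API, and the parameter values are illustrative only.

```python
# Minimal sketch of a differentially private gradient step: per-example
# clipping bounds each sample's influence, Gaussian noise masks the rest.
import numpy as np

def dp_average_gradient(per_example_grads, l2_clip=1.0, noise_multiplier=1.1,
                        rng=None):
    """Clip each per-example gradient, add Gaussian noise, and average."""
    rng = rng or np.random.default_rng()
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, l2_clip / np.maximum(norms, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * l2_clip, size=clipped.shape[1])
    return (clipped.sum(axis=0) + noise) / len(clipped)

grads = np.random.randn(32, 10)          # 32 per-example gradients of size 10
print(dp_average_gradient(grads).shape)  # (10,)
```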

    Progressive introduction of network softwarization in operational telecom networks: advances at architectural, service and transport levels

    Technological paradigms such as Software Defined Networking, Network Function Virtualization and Network Slicing are altogether offering new ways of providing services. This process is widely known as Network Softwarization, whereby traditional operational networks adopt capabilities and mechanisms inherited from the computing world, such as programmability, virtualization and multi-tenancy. This adoption brings a number of challenges, from both the technological and operational perspectives. On the other hand, these paradigms provide unprecedented flexibility, opening opportunities to develop new services and new ways of exploiting and consuming telecom networks. This thesis first overviews the implications of the progressive introduction of network softwarization in operational networks and then details advances at different levels, namely the architectural, service and transport levels. This is done through specific exemplary use cases and evolution scenarios, with the goal of illustrating both new possibilities and existing gaps in the ongoing transition towards an advanced future mode of operation. The perspective taken is that of a telecom operator, paying special attention to how all these paradigms can be integrated into operational networks to support their evolution towards new, more sophisticated service demands. Doctoral Programme in Telematic Engineering, Universidad Carlos III de Madrid. Committee: President: Eduardo Juan Jacob Taquet; Secretary: Francisco Valera Pintor; Member: Jorge López Vizcaín.

    Energy and performance-optimized scheduling of tasks in distributed cloud and edge computing systems

    Infrastructure resources in distributed cloud data centers (CDCs) are shared by heterogeneous applications in a high-performance and cost-effective way. Edge computing has emerged as a new paradigm to provide access to computing capacity in end devices, yet it suffers from such problems as load imbalance, long scheduling times, and the limited power of its edge nodes. Therefore, intelligent task scheduling in CDCs and edge nodes is critically important for constructing energy-efficient cloud and edge computing systems. Current approaches cannot smartly minimize the total cost of CDCs, maximize their profit, and improve the quality of service (QoS) of tasks because of the aperiodic arrival and heterogeneity of tasks. This dissertation proposes a class of energy and performance-optimized scheduling algorithms built on top of several intelligent optimization algorithms. It consists of two parts: background work (Chapters 3–6) and new contributions (Chapters 7–11). 1) Background work. Chapter 3 proposes a spatial task scheduling and resource optimization method to minimize the total cost of CDCs, where bandwidth prices of Internet service providers, power grid prices, and renewable energy all vary with location. Chapter 4 presents a geography-aware task scheduling approach that considers spatial variations among CDCs to maximize the profit of their providers by intelligently scheduling tasks. Chapter 5 presents a spatio-temporal task scheduling algorithm to minimize energy cost by scheduling heterogeneous tasks among CDCs while meeting their delay constraints. Chapter 6 gives a temporal scheduling algorithm that considers temporal variations of revenue, electricity prices, green energy, and prices of public clouds. 2) Contributions. Chapter 7 proposes a multi-objective optimization method for CDCs to maximize their profit and minimize the average loss possibility of tasks by determining task allocation among Internet service providers and the task service rate of each CDC. A simulated annealing-based bi-objective differential evolution algorithm is proposed to obtain an approximate Pareto-optimal set, from which a knee solution is selected to schedule tasks in a high-profit and high-QoS way. Chapter 8 formulates a bi-objective constrained optimization problem and designs a novel optimization method for energy cost reduction and QoS improvement: it jointly minimizes the energy cost of CDCs and the average response time of all tasks by intelligently allocating tasks among CDCs and changing the task service rate of each CDC. Chapter 9 formulates a constrained bi-objective optimization problem for the joint optimization of revenue and energy cost of CDCs; it is solved with an improved multi-objective evolutionary algorithm based on decomposition, which determines a high-quality trade-off between revenue maximization and energy cost minimization by considering CDCs' spatial differences in energy cost while meeting tasks' delay constraints. Chapter 10 designs a fine-grained spatial task scheduling algorithm that minimizes the energy cost of CDCs by allocating tasks among multiple green clouds and specifying the running speeds of their servers, and proposes a simulated annealing-based bees algorithm to find a close-to-optimal solution. Chapter 11 proposes a profit-maximized collaborative computation offloading and resource allocation algorithm to maximize system profit and guarantee that the response time limits of tasks are met in cloud-edge computing systems; the resulting single-objective constrained optimization problem is solved by a proposed simulated annealing-based migrating birds optimization. This dissertation evaluates these algorithms, models and software with real-life data and shows that they improve the scheduling precision and cost-effectiveness of distributed cloud and edge computing systems.
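    Several of the chapters above rely on simulated annealing-style search. The sketch below shows the core acceptance rule on a placeholder scalarized cost that trades off energy cost against a stand-in for response time; the cost model, neighbourhood move and parameters are assumptions for illustration, not the dissertation's formulation.

```python
# Simulated-annealing skeleton for allocating task load across CDCs.
import math
import random

def cost(alloc, prices, w=0.5):
    """Placeholder scalarized objective: weighted energy cost + load imbalance."""
    energy = sum(a * p for a, p in zip(alloc, prices))
    imbalance = max(alloc) - min(alloc)   # crude stand-in for response time
    return w * energy + (1 - w) * imbalance

def neighbor(alloc):
    """Move one unit of load between two randomly chosen CDCs."""
    alloc = list(alloc)
    i, j = random.sample(range(len(alloc)), 2)
    if alloc[i] > 0:
        alloc[i] -= 1
        alloc[j] += 1
    return alloc

def anneal(alloc, prices, t0=10.0, cooling=0.95, steps=500):
    best = current = alloc
    t = t0
    for _ in range(steps):
        cand = neighbor(current)
        delta = cost(cand, prices) - cost(current, prices)
        # Always accept improvements; accept worse moves with prob. exp(-delta/t).
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = cand
        if cost(current, prices) < cost(best, prices):
            best = current
        t *= cooling
    return best

# 100 task units split across 3 CDCs with different energy prices.
print(anneal([100, 0, 0], prices=[0.9, 0.5, 0.7]))
```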

    Software Defined Application Delivery Networking

    In this thesis we present the architecture, design, and prototype implementation details of AppFabric. AppFabric is a next-generation application delivery platform for easily creating, managing and controlling massively distributed and very dynamic application deployments that may span multiple datacenters. Over the last few years, the need for more flexibility, finer control, and automatic management of large (and messy) datacenters has stimulated technologies for virtualizing infrastructure components and placing them under software-based management and control, generically called Software-Defined Infrastructure (SDI). However, current applications are not designed to leverage the dynamism and flexibility offered by SDI; they mostly depend on a mix of techniques including manual configuration, specialized appliances (middleboxes), and (mostly) proprietary middleware solutions, together with a team of extremely conscientious and talented system engineers, to get deployed and running. AppFabric 1) automates the whole control and management stack of application deployment and delivery, 2) allows application architects to define logical workflows consisting of application servers, message-level middleboxes, packet-level middleboxes and network services (both local and wide-area) composed over application-level routing policies, and 3) provides the abstraction of an application cloud that allows the application to dynamically (and automatically) expand and shrink its distributed footprint across multiple geographically distributed datacenters operated by different cloud providers. The architecture consists of a hierarchical control plane system called Lighthouse and a fully distributed data plane design (with no special hardware components such as service orchestrators, load balancers, message brokers, etc.) called OpenADN. The current implementation (under active development) consists of roughly 10,000 lines of Python and C code.
    AppFabric will allow applications to fully leverage the opportunities provided by modern virtualized Software-Defined Infrastructures. It will serve as the platform for deploying massively distributed and extremely dynamic next-generation application use cases, including the following.
    Internet-of-Things/Cyber-Physical Systems: through support for managing the distributed gather-aggregate topologies common to most Internet-of-Things (IoT) and Cyber-Physical Systems (CPS) use cases. By their very nature, IoT and CPS use cases are massively distributed, have different levels of computation and storage requirements at different locations, and have variable latency requirements across their distributed sites. Some services in an IoT/CPS application workflow, such as device controllers, may need to gather, process and forward data under near-real-time constraints and hence need to be as close to the device as possible; other services may need more computation to process aggregated data and drive long-term business intelligence functions. AppFabric has been designed to support such very dynamic, highly diversified and massively distributed application use cases.
    Network Function Virtualization: through support for heterogeneous workflows, application-aware networking, and network-aware application deployments, AppFabric will enable new partnerships between Application Service Providers (ASPs) and Network Service Providers (NSPs). An application workflow in AppFabric may comprise application services, packet- and message-level middleboxes, and network transport services chained together over an application-level routing substrate. This substrate allows policy-based service chaining, where the application may specify policies for routing its traffic over different services based on application-level content or context.
    Virtual worlds/multiplayer games: through support for creating, managing and controlling the dynamic and distributed application clouds needed by these applications. AppFabric allows the application to easily specify policies to dynamically grow and shrink its footprint over different geographical sites, on demand.
    Mobile apps: through support for the extremely diversified and very dynamic application contexts typical of such applications. AppFabric also provides support for automatically managing massively distributed service deployments and controlling application traffic based on application-level policies, allowing mobile applications to provide the best Quality-of-Experience to their users.
    This thesis is the first to handle and provide a complete solution for such a complex and relevant architectural problem, one that is expected to touch each of our lives by enabling exciting new application use cases that are not possible today. AppFabric is a non-proprietary platform that is expected to spawn many innovations, both in the design of the platform itself and in the features it provides to applications. AppFabric still needs many iterations, both in terms of design and implementation maturity; this thesis is not the end of the journey for AppFabric but rather just the beginning.
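    To make the policy-based service chaining idea concrete, here is a hypothetical sketch in which a routing policy inspects application-level content and context and returns the ordered chain of services and middleboxes a message should traverse. The service names and message fields are invented for illustration and do not reflect OpenADN's actual interfaces.

```python
# Hypothetical application-level routing policy for service chaining.
from typing import Dict, List

def routing_policy(msg: Dict) -> List[str]:
    chain = ["ingress_proxy"]                 # entry point of the workflow
    if msg.get("content_type") == "image":
        chain.append("image_compressor")      # content-driven message-level middlebox
    if msg.get("user_region") == "eu":
        chain.append("compliance_filter")     # context-driven policy
    chain.append("app_server")                # terminating application service
    return chain

print(routing_policy({"content_type": "image", "user_region": "eu"}))
# -> ['ingress_proxy', 'image_compressor', 'compliance_filter', 'app_server']
```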

    Survey and Analysis of Production Distributed Computing Infrastructures

    This report has two objectives. First, we describe a set of the production distributed infrastructures currently available, so that the reader has a basic understanding of them. This includes explaining why each infrastructure was created and made available and how it has succeeded and failed. The set is not complete, but we believe it is representative. Second, we describe the infrastructures in terms of their use, which is a combination of how they were designed to be used and how users have found ways to use them. Applications are often designed and created with specific infrastructures in mind, with both an appreciation of the existing capabilities provided by those infrastructures and an anticipation of their future capabilities. Here, the infrastructures we discuss were often designed and created with specific applications in mind, or at least specific types of applications. The reader should understand how the interplay between the infrastructure providers and the users leads to such usages, which we call usage modalities. These usage modalities are really abstractions that exist between the infrastructures and the applications; they influence the infrastructures by representing the applications, and they influence the applications by representing the infrastructures.