
    On service optimization in community network micro-clouds

    Cotutela: Universitat Politècnica de Catalunya and KTH Royal Institute of Technology.
    Internet coverage in the world is still weak, and local communities are required to come together and build their own network infrastructures. People collaborate toward the common goal of accessing the Internet and cloud services by building community networks (CNs). The use of Internet cloud services has grown over the last decade, and community network cloud infrastructures (i.e., micro-clouds) have been introduced to run services inside the network, without the need to consume them from the Internet. CN micro-clouds aim not only at improved service performance, but also at providing an entry point for an alternative to Internet cloud services within CNs. However, adapting services to run in CN micro-clouds has its own challenges, since low-capacity devices and wireless connections without central management predominate in CNs. Furthermore, the large and irregular topology of the network, high software and hardware diversity, and differing service requirements make CN micro-clouds a challenging environment in which to run local services and to achieve service performance and quality comparable to Internet cloud services.
    In this thesis, our main objective is the optimization of services (performance, quality) in CN micro-clouds, facilitating the entrance of other services and motivating members to use CN micro-cloud services as an alternative to Internet services. We present an approach to handling services in CN micro-cloud environments that improves service performance and quality to a level approximating that of Internet services, while also giving the community motivation to use CN micro-cloud services. We break the problem into three levels (resource, service, and middleware) and propose a model that provides improvements at each level while contributing information that supports improvements, in terms of service performance and quality, at the other levels. At the resource level, we facilitate the use of community devices through virtualization techniques that isolate and manage CN micro-cloud services, yielding a multi-purpose environment that fosters services in the CN micro-cloud. At the service level, we build a monitoring tool tailored to CN micro-clouds that helps us analyze service behavior and performance; the information gathered then enables adapting the services to the environment to improve their quality and performance. At the middleware level, we build overlay networks as the main communication system, using social information to improve the paths and routes between nodes and to improve data transmission across the network by exploiting relationships already established in the social network or communities of practice related to the CN. As a result, service performance in CN micro-clouds becomes more stable with respect to resource usage, performance, and user-perceived quality.
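    As a rough illustration of the middleware-level idea, overlay links can be weighted by the strength of the social ties behind them, so that routing prefers paths backed by established community relationships. The following is a minimal sketch, not the thesis implementation; the node names and tie strengths are hypothetical, and networkx stands in for the overlay's routing logic.

        # Minimal sketch of socially informed overlay routing (illustrative
        # only). Assumes social-tie strengths between node owners are known;
        # all node names and weights here are hypothetical.
        import networkx as nx

        # Candidate overlay links annotated with social-tie strength in (0, 1].
        ties = {
            ("alice", "bob"): 0.9,
            ("bob", "carol"): 0.7,
            ("alice", "dave"): 0.2,
            ("dave", "carol"): 0.8,
        }

        G = nx.Graph()
        for (u, v), strength in ties.items():
            # Stronger social ties -> lower routing cost, so shortest paths
            # prefer links backed by established community relationships.
            G.add_edge(u, v, weight=1.0 / strength)

        path = nx.shortest_path(G, source="alice", target="carol", weight="weight")
        print(path)  # ['alice', 'bob', 'carol']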

    De-ossifying the Internet Transport Layer: A Survey and Future Perspectives


    Implementation of an Adaptive Stream Scaling Media Proxy with the data path in the Linux Kernel

    Streaming media is becoming increasingly common, and bandwidth demands are rising. Centralized distribution is becoming prohibitively hard to achieve, and geographically aware hierarchical distribution has emerged as a viable alternative, bringing with it an increased demand for CPU processing power. There has also been an influx of different devices on which to receive media, from HD plasma screens to 2" cell phone displays, and the need for media to adapt to devices of differing capabilities is becoming obvious. For this thesis we have implemented a media proxy using RTSP/RTP. It is implemented partially in kernel space, which has drastically reduced the CPU cost of relaying data from a server to multiple clients. This has been combined with a proof-of-concept implementation for adaptively scaling an SVC media stream according to available bandwidth.
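    To illustrate the scaling decision, a proxy can forward only the SVC layers whose cumulative bitrate fits the bandwidth measured toward a client. The sketch below is illustrative rather than the thesis implementation; the layer bitrates and the headroom factor are hypothetical.

        # Minimal sketch of SVC layer selection: drop enhancement layers when
        # measured bandwidth cannot sustain them. Bitrates are hypothetical.
        LAYER_KBPS = [400, 800, 1600]  # cumulative bitrate up to each layer

        def select_layer(available_kbps: float, headroom: float = 0.9) -> int:
            """Return the highest layer index whose cumulative bitrate fits
            within the available bandwidth, keeping some headroom. The base
            layer (index 0) is always forwarded."""
            budget = available_kbps * headroom
            layer = 0
            for i, kbps in enumerate(LAYER_KBPS):
                if kbps <= budget:
                    layer = i
            return layer

        # e.g., 1 Mbit/s available -> base layer + first enhancement layer
        print(select_layer(1000))  # 1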

    Real time stream processing for Internet of things and sensing environments

    Improvements in miniaturization and networking capabilities of sensors have contributed to the proliferation of Internet of Things (IoT) and continuous sensing environments. Data streams generated in such settings must keep pace with generation rates and be processed in real time. Challenges in accomplishing this include high data arrival rates, buffer overflows, context switches during processing, and object creation overheads. We propose a holistic framework that addresses the CPU, memory, network, and kernel issues involved in stream processing. Our prototype, Neptune, builds on the Granules cloud runtime and leverages its support for scheduling packets and communications based on publish/subscribe, peer-to-peer, and point-to-point interactions. The framework maximizes bandwidth utilization in the presence of small messages through buffering and dynamic compaction of packets based on their entropy. Our use of thread pools and batched processing reduces context switches and improves effective CPU utilization. The framework alleviates memory pressure that can lead to swapping, page faults, and thrashing through efficient reuse of objects. To cope with buffer overflows we rely on flow control and on throttling the preceding stages of a processing pipeline. Our correctness criteria include deadlock/livelock avoidance and ordered, exactly-once processing. Our benchmarks demonstrate the suitability of the Granules/Neptune combination, and we contrast our performance with Apache Storm, the dominant stream-processing framework developed at Twitter. At a single node, we achieve a processing rate of ~2 million stream packets per second; in a distributed cluster setup, we achieve ~100 million stream packets per second with near-optimal bandwidth utilization.
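    The small-message compaction idea can be illustrated as follows: buffer incoming packets, flush once a size threshold is reached, and compress a batch only when its byte entropy suggests it is compressible. This is a minimal sketch, not Neptune's code; the flush threshold, entropy cutoff, and transport hook are hypothetical.

        # Minimal sketch of small-message batching with entropy-gated
        # compression. Thresholds and the transport.send() hook are
        # hypothetical.
        import math
        import zlib
        from collections import Counter

        FLUSH_BYTES = 64 * 1024  # flush once the buffer holds ~64 KB

        def byte_entropy(data: bytes) -> float:
            """Shannon entropy of the byte distribution, in bits per byte."""
            counts = Counter(data)
            n = len(data)
            return -sum(c / n * math.log2(c / n) for c in counts.values())

        class PacketBatcher:
            def __init__(self, transport):
                self.transport = transport  # anything with a send(bytes) method
                self.buffer = bytearray()

            def submit(self, packet: bytes) -> None:
                self.buffer += packet
                if len(self.buffer) >= FLUSH_BYTES:
                    self.flush()

            def flush(self) -> None:
                if not self.buffer:
                    return
                payload = bytes(self.buffer)
                # Compact only when the batch looks compressible; near-random
                # payloads (entropy close to 8 bits/byte) are sent as-is.
                if byte_entropy(payload) < 6.0:
                    payload = zlib.compress(payload)
                self.transport.send(payload)
                self.buffer.clear()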

    Data Movement Challenges and Solutions with Software Defined Networking

    With the recent rise of cloud computing, applications routinely access and interact with data on remote resources. Interaction with such remote resources for the operation of media-rich applications in mobile environments is also on the rise. As a result, the performance of the underlying network infrastructure can have a significant impact on the quality of service experienced by the user. Despite receiving significant attention from both academia and industry, computer networks still face a number of challenges. Users often report poor experiences with their devices and applications, which can frequently be attributed to network performance when downloading or uploading application data. This dissertation investigates problems that arise with data movement across computer networks and proposes novel solutions to address them through software defined networking (SDN).
    SDN is widely regarded as the paradigm of choice for next-generation networks. While academia explores use cases in various contexts, industry has focused on data center and wide area networks. A significant range of complex, application-specific network services could benefit from SDN, but the introduction and adoption of such solutions remain slow in production networks. One impeding factor is the lack of a simple yet sufficiently expressive framework applicable to all SDN services across production network domains; without a uniform framework, SDN developers create disjoint solutions, resulting in untenable management and maintenance overhead. The SDN-based solutions developed in this dissertation share a common agent-based approach: the architecture facilitates application-oriented SDN design with an abstraction composed of software agents on top of the underlying network.
    Modern and future networks require three key components to deliver exceptional data transfer performance to the end user: (1) user and application mobility, (2) high-throughput data transfer, and (3) efficient and scalable content distribution. Meeting these requirements ensures not only that the network provides robust and reliable end-to-end connectivity, but also that network resources are used efficiently.
    First, mobility support is critical for user applications to maintain connectivity to remote, cloud-based resources. Today's network users frequently access such resources while on the go, transitioning from network to network with the expectation that their applications will continue to operate seamlessly. As users perform handovers between heterogeneous networks or between networks across administrative domains, the application becomes responsible for maintaining or establishing new connections to remote resources. Although application developers often account for such handovers, the result is frequently visible to the user through diminished quality of service (e.g., rebuffering in video streaming applications). Many intra-domain handover solutions exist for WiFi and cellular networks, such as Mobile IP, but they are architecturally complex and have not been integrated to form a scalable, inter-domain solution. A scalable framework is proposed that leverages SDN features to implement both horizontal and vertical handovers for heterogeneous wireless networks within and across administrative domains. User devices select an appropriate network using an on-board virtual SDN implementation that manages the available network interfaces, while an SDN-based counterpart operates in the network core and edge to handle user migrations as they transition from one edge attachment point to another. The framework was developed and deployed as an extension to the Global Environment for Network Innovations (GENI) testbed, but it can be deployed on any OpenFlow-enabled network. Evaluation revealed that users can maintain existing application connections without breaking sockets and requiring the application to recover.
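    The core of such a handover is re-steering flows when a host appears at a new attachment point. The sketch below illustrates that single step with a Ryu OpenFlow 1.3 application; it is not the dissertation's framework, and it assumes a table-miss rule already sends unmatched packets to the controller.

        # Illustrative sketch (not the dissertation's actual framework) of the
        # core handover step: when a mobile host shows up at a new attachment
        # port, overwrite the flow rule that delivers its downstream traffic
        # so existing connections keep working.
        from ryu.base import app_manager
        from ryu.controller import ofp_event
        from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
        from ryu.lib.packet import ethernet, packet
        from ryu.ofproto import ofproto_v1_3

        class HandoverSteering(app_manager.RyuApp):
            OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

            def __init__(self, *args, **kwargs):
                super().__init__(*args, **kwargs)
                self.host_port = {}  # host MAC -> last known attachment port

            @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
            def packet_in(self, ev):
                msg = ev.msg
                dp = msg.datapath
                ofp, parser = dp.ofproto, dp.ofproto_parser
                in_port = msg.match['in_port']
                eth = packet.Packet(msg.data).get_protocol(ethernet.ethernet)

                if eth and self.host_port.get(eth.src) != in_port:
                    # Host moved (or is new): re-point its delivery rule. An
                    # OFPFC_ADD with the same match and priority overwrites
                    # the old entry, so in-flight connections survive.
                    self.host_port[eth.src] = in_port
                    match = parser.OFPMatch(eth_dst=eth.src)
                    actions = [parser.OFPActionOutput(in_port)]
                    inst = [parser.OFPInstructionActions(
                        ofp.OFPIT_APPLY_ACTIONS, actions)]
                    dp.send_msg(parser.OFPFlowMod(
                        datapath=dp, priority=10,
                        match=match, instructions=inst))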
    Second, high-throughput data transfer is essential for user applications to acquire large remote data sets. As data sizes become increasingly large, and their locations increasingly far from the applications, the well-known impact of reduced Transmission Control Protocol (TCP) throughput over large delay-bandwidth-product paths becomes more significant. While myriad solutions exist to alleviate the problem, they require specialized software and/or network stacks at both the application host and the remote data server, making them hard to scale up to a large range of applications and execution environments; high-throughput data transfer is thus available only to the select subset of network users with access to such specialized software. An SDN-based solution called Steroid OpenFlow Service (SOS) is proposed as a network service that transparently increases the throughput of TCP-based data transfers across large networks. SOS shifts the complexity of high-performance data transfer from the end user to the network; users do not need to configure anything on the client and server machines participating in the transfer. The SOS architecture supports seamless high-performance data transfer at scale for multiple users and for high-bandwidth connections. Emphasis is placed on the use of SOS as part of a larger, richer data transfer ecosystem, complementing and compounding the efforts of existing data transfer solutions. Non-TCP-based solutions, such as Aspera, can operate seamlessly alongside an SOS deployment, while those based on TCP, such as wget, curl, and GridFTP, can leverage SOS for throughput improvement beyond what a single TCP connection can provide. Through extensive evaluation in real-world environments, the SOS architecture is shown to be flexibly deployable on a variety of network architectures, from cloud-based, to production networks, to scaled-up, high-performance data center environments. Evaluation showed that the SOS architecture scales linearly through the addition of SOS "agents" to the deployment, providing data transfer performance improvement to multiple users simultaneously, and an individual data transfer enhanced by SOS achieved nearly forty times the throughput of the same transfer without SOS assistance.
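    The throughput gain in such an agent-based design comes from spreading one client-server byte stream over several parallel TCP connections between agents, each with its own congestion window, and restoring order at the far side. The following is an illustrative sketch of the reordering logic only, not the actual SOS implementation; the framing format is invented for the example.

        # Illustrative sketch of the agent idea behind SOS-style transfer
        # (not the actual SOS implementation): a client-side agent splits one
        # TCP byte stream into sequence-numbered chunks spread over parallel
        # connections; the server-side agent restores the original order.
        import struct

        HEADER = struct.Struct("!QI")  # (sequence number, chunk length)

        def frame(seq: int, chunk: bytes) -> bytes:
            """Prefix a chunk with its sequence number and length."""
            return HEADER.pack(seq, len(chunk)) + chunk

        class Reassembler:
            """Server-side agent logic: emit chunks in sequence order even
            when the parallel connections deliver them out of order."""

            def __init__(self):
                self.next_seq = 0
                self.pending = {}

            def receive(self, data: bytes) -> bytes:
                seq, length = HEADER.unpack_from(data)
                self.pending[seq] = data[HEADER.size:HEADER.size + length]
                out = bytearray()
                while self.next_seq in self.pending:
                    out += self.pending.pop(self.next_seq)
                    self.next_seq += 1
                return bytes(out)

        # Chunks arriving out of order over two parallel connections:
        r = Reassembler()
        assert r.receive(frame(1, b"world")) == b""          # held until 0 arrives
        assert r.receive(frame(0, b"hello ")) == b"hello world"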
    Third, efficient and scalable video content distribution is imperative as the demand for multimedia content over the Internet increases. Current state-of-the-art solutions consist of vast content distribution networks (CDNs), where content is often hosted in duplicate at various geographically distributed locations. Although CDNs are useful for the dissemination of static content, they do not provide a clear and scalable model for the on-demand production and distribution of live, streaming content. IP multicast is a popular solution for scalable video distribution; however, it is seldom used due to its deployment and operational complexity. Inspired by the distributed design of today's CDNs and the distribution trees used by IP multicast, an SDN-based framework called GENI Cinema (GC) is proposed to manage and distribute live video content at scale without the added architectural complexity and inefficiencies inherent to contemporary solutions such as IP multicast. GC has been deployed as an experimental, nationwide live video distribution service on the GENI network, broadcasting live and prerecorded video streams from conferences for remote attendees, from the classroom for distance education, and for live sporting events. GC clients can easily and efficiently switch back and forth between video streams, with improved switching latency over cable, satellite, and other live video providers.
    The real-world deployments and evaluation of the proposed solutions show how SDN can be used in novel ways to solve current data transfer problems across computer networks. In addition, this dissertation is expected to provide guidance for designing, deploying, and debugging SDN-based applications across a variety of network topologies.

    A CASE STUDY OF VARIOUS WIRELESS NETWORK SIMULATION TOOLS

    4G is the fastest developing system in the history of mobile communication networks. Network connectivity is paramount for all kinds of big enterprises. 4G not only provides super-fast connectivity to millions of users, but can also act as an enterprise network connectivity enabler, with inherent advantages such as higher bandwidth, low latency, and higher spectrum efficiency, along with backward compatibility and future-proofing. The design of the 4G-based Long Term Evolution (LTE) physical network provides the required flexibility for optimization during the development phase. In this paper, simulation tools supporting LTE networks are presented to demonstrate the need for hardware co-simulation of the LTE system. After the feasibility analysis, the importance of porting the model to a Field Programmable Gate Array platform is examined in detail, with supporting inferences, along with a comparison of different wireless network simulators suitable for LTE.