
    Enabling the web of things: facilitating deployment, discovery and resource access to IoT objects using embedded web services

    Today, the IETF Constrained Application Protocol (CoAP) is being standardised. CoAP takes the internet of things to the next level: it enables the implementation of RESTful web services on embedded devices, thus enabling the construction of an easily accessible web of things. However, before tiny objects can make themselves available through embedded web services, several manual configuration steps are still needed to integrate a sensor network within an existing networking environment. In this paper, we describe a novel self-organisation solution to facilitate the deployment of constrained networks and enable the discovery, end-to-end connectivity and service usage of these newly deployed sensor nodes. By using embedded web service technology, the need for other protocols on these resource-constrained devices is avoided. It allows automatic hierarchical discovery of CoAP servers, resulting in a browsable hierarchy of CoAP servers, which can be accessed both over CoAP and the Hypertext Transfer Protocol. The research leading to these results has received funding from the European Union’s Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 258885 (SPITFIRE project), from the iMinds ICON project O’CareCloudS, from a VLIR PhD grant to Isam Ishaq, and through an FWO postdoc research grant for Eli De Poorter.
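
    As a rough illustration of the embedded web service technology this abstract builds on, the sketch below performs standard CoRE resource discovery (RFC 6690) against a single CoAP server using the Python aiocoap library. It does not reproduce the paper's hierarchical, self-organising discovery mechanism, and the node address "sensor.local" is a hypothetical placeholder.

        import asyncio
        from aiocoap import Context, Message, GET

        async def discover(host):
            # Ask the node to list its resources via the well-known CoRE interface
            ctx = await Context.create_client_context()
            request = Message(code=GET, uri=f"coap://{host}/.well-known/core")
            response = await ctx.request(request).response
            # The payload is a CoRE Link Format document, e.g. "</sensors/temp>;rt=temperature"
            return response.payload.decode("utf-8")

        if __name__ == "__main__":
            links = asyncio.run(discover("sensor.local"))  # hypothetical node address
            for link in links.split(","):
                print(link.strip())

    A gateway could in principle re-expose the same link-format payload over HTTP, which is the kind of dual CoAP/HTTP access the abstract refers to.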

    Performance Optimization in Wireless Local Area Networks

    Wireless Local Area Networks (WLAN) are becoming more and more important for providing wireless broadband access. Applications and networking scenarios evolve continuously and in an unpredictable way, attracting the attention of academic institutions, research centers and industry. Designing an efficient WLAN requires careful coverage planning and optimization of the network design parameters, such as AP locations, channel assignment, power allocation, MAC protocol, routing algorithm, etc. In this thesis we approach performance optimization in WLAN at different layers of the OSI model. Our first approach is at the Network layer. Starting from a Hybrid System modeling the flow of traffic in the network, we propose a Hybrid Linear Varying Parameter algorithm for identifying the link quality, which could be used as a metric in routing algorithms. Moving down to the Data Link layer, it is well known that CSMA (Carrier Sense Multiple Access) protocols exhibit very poor performance in the case of multi-hop transmissions, because of inter-link interference due to imperfect carrier sensing. We propose two novel algorithms that combine Time Division Multiple Access, for grouping contending nodes into non-interfering sets, with Carrier Sense Multiple Access, for managing the channel access within a set. In the first solution, a game-theoretical study of intra-slot contention is introduced; in the second solution we apply an optimization algorithm to find the optimal degree between contention and scheduling. Both of the presented solutions improve the network performance with respect to CSMA and TDMA algorithms. Finally, we analyze the network performance at the Physical layer. In the case of WLAN, we can only use three orthogonal channels in an unlicensed spectrum, so the frequency assignments should be subject to frequent adjustments, according to the time-varying amount of interference, which is not under the control of the provider. This problem makes it necessary to introduce an automatic network planning solution, since a network administrator cannot continuously monitor and correct the interference conditions suffered in the network. We propose a novel protocol based on a distributed machine learning mechanism in which the nodes choose, automatically and autonomously in each time slot, the optimal channel for transmitting through a weighted combination of protocols.
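
    The last contribution, nodes autonomously picking a channel each time slot from a weighted combination of strategies, resembles a multiplicative-weights (EXP3-style) learning rule. The Python sketch below is a minimal illustration under that assumption; the channel list, exploration rate and reward definition are illustrative and not taken from the thesis.

        import math
        import random

        CHANNELS = [1, 6, 11]   # the three orthogonal 2.4 GHz channels
        GAMMA = 0.1             # exploration rate (illustrative)

        weights = [1.0] * len(CHANNELS)

        def choose_channel():
            # Sample a channel from the current weight distribution (EXP3-style)
            total = sum(weights)
            probs = [(1 - GAMMA) * w / total + GAMMA / len(CHANNELS) for w in weights]
            r, acc = random.random(), 0.0
            for i, p in enumerate(probs):
                acc += p
                if r <= acc:
                    return i, p
            return len(CHANNELS) - 1, probs[-1]

        def update(index, prob, reward):
            # Reward in [0, 1], e.g. the fraction of the slot free of interference
            estimated = reward / prob                       # importance-weighted reward
            weights[index] *= math.exp(GAMMA * estimated / len(CHANNELS))

        # One time slot: pick a channel, observe interference, update the weights
        idx, p = choose_channel()
        observed_reward = 0.8                               # placeholder measurement
        update(idx, p, observed_reward)
        print("next slot prefers channel", CHANNELS[idx])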

    Optimal and probabilistic resource and capability analysis for network slice as a service

    Network Slice as a Service is one of the key concepts of the fifth generation of mobile networks (5G). 5G supports new use cases, like the Internet of Things (IoT), massive Machine Type Communication (mMTC) and Ultra-Reliable and Low Latency Communication (URLLC), as well as significant improvements of the conventional Mobile Broadband (MBB) use case. In addition, safety- and security-critical use cases move into focus. These use cases involve diverging requirements, e.g. network reliability, latency and throughput. Network virtualization and end-to-end mobile network slicing are seen as key enablers to handle those differing requirements and to provide mobile network services for the various 5G use cases and between different tenants. Network slices are isolated, virtualized, end-to-end networks optimized for specific use cases, yet they share a common physical network infrastructure. Through logical separation of the network slices on a common end-to-end mobile network infrastructure, an efficient usage of the underlying physical network infrastructure provided by multiple Mobile Service Providers (MSPs) is enabled. Due to the dynamic lifecycle of network slices, there is a strong demand for efficient algorithms for the so-called Network Slice Embedding (NSE) problem. Efficient and reliable resource provisioning for Network Slice as a Service requires resource allocation based on a mapping of virtual network slice elements onto the serving physical mobile network infrastructure. In this thesis, first of all, a formal Network Slice Instance Admission (NSIA) process is presented, based on the 3GPP standardization. This process allows fast feedback to be given to a network operator or tenant on the feasibility of embedding incoming Network Slice Instance Requests (NSI-Rs). In addition, corresponding services for NSIA and feasibility checking are defined in the context of the ETSI ZSM Reference Architecture Framework. In the main part of this work, a mathematical model for solving the NSE problem, formalized as a standard Linear Program (LP), is presented. The presented solution provides a nearly optimal embedding. This includes the optimal subset of Network Slice Instances (NSIs) to be selected for embedding, in terms of network slice revenue and costs, and the optimal allocation of associated network slice applications, functions, services and communication links on the 5G end-to-end mobile network infrastructure. It can be used to solve the online as well as the offline NSIA problem automatically in different variants. In particular, low-latency network slices require deployment of their services and applications, including Network Functions (NFs), close to the user, i.e. at the edge of the mobile network. Since the users of those services might be widely distributed and mobile, multiple instances of the same application are required to be available on numerous distributed edge clouds. A holistic approach for tackling the problem of NSE with edge computing is provided by our so-called Multiple Application Instantiation (MAI) variant of the NSE LP solution. It is capable of determining the optimal number of application instances and their optimal deployment locations on the edge clouds, even for multiple User Equipment (UE) connectivity scenarios. In addition, multi-path (also referred to as path-splitting) scenarios are included, with a latency-sensitive objective function that guarantees optimal network utilization as well as minimum latency in the network slice communication. Resource uncertainty, as well as reuse and overbooking of resources guaranteed by Service Level Agreements (SLAs), is discussed in this work. There is a consensus that over-provisioning of mobile communication bands is economically infeasible and a certain risk of network overload is accepted for the majority of the 5G use cases. A probabilistic variant of the NSE problem with an uncertainty-aware objective function and a resource availability confidence analysis are presented. The evaluation shows the advantages and the suitability of the different variants of the NSE formalization, as well as its scalability and computational limits in a practical implementation.
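
    To make the LP formulation concrete, the sketch below solves a toy slice-admission instance with the open-source PuLP modeller in Python: binary variables select which slice requests to embed under a single aggregate capacity constraint, maximising revenue. The slice names, resource figures and single-resource model are illustrative assumptions; the thesis's formulation additionally models links, latency, costs and placement.

        import pulp

        # Toy instance: name -> (revenue, cpu_demand); figures are illustrative
        slices = {
            "URLLC-1": (10, 4),
            "mMTC-1": (6, 3),
            "MBB-1": (8, 5),
        }
        CPU_CAPACITY = 8

        prob = pulp.LpProblem("nse_toy", pulp.LpMaximize)
        x = {s: pulp.LpVariable(f"admit_{s}", cat="Binary") for s in slices}

        # Objective: total revenue of admitted slices
        prob += pulp.lpSum(slices[s][0] * x[s] for s in slices)
        # Constraint: admitted slices must fit into the shared CPU capacity
        prob += pulp.lpSum(slices[s][1] * x[s] for s in slices) <= CPU_CAPACITY

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        admitted = [s for s in slices if x[s].value() == 1]
        print("admitted slices:", admitted)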

    An investigation into dynamical bandwidth management and bandwidth redistribution using a pool of cooperating interfacing gateways and a packet sniffer in mobile cloud computing

    Mobile communication devices are increasingly becoming an essential part of almost every aspect of our daily life. However, compared to conventional communication devices such as laptops, notebooks, and personal computers, mobile devices still lack in terms of resources such as processor, storage and network bandwidth. Mobile Cloud Computing is intended to augment the capabilities of mobile devices by moving selected workloads away from resource-limited mobile devices to resource-intensive servers hosted in the cloud. Services hosted in the cloud are accessed by mobile users on demand via the Internet using standard thick or thin applications installed on their devices. Nowadays, users of mobile devices are no longer satisfied with best-effort service and demand QoS when accessing and using applications and services hosted in the cloud. The Internet was originally designed to provide best-effort delivery of data packets, with no guarantee on packet delivery. Quality of Service has been implemented successfully in provider and private networks since the Internet Engineering Task Force introduced the Integrated Services and Differentiated Services models. These models have their legacy but do not adequately address the Quality of Service needs in Mobile Cloud Computing, where users are mobile, traffic differentiation is required per user, device or application, and packets are routed across several network domains which are independently administered. This study investigates QoS and bandwidth management in Mobile Cloud Computing and considers a scenario where a virtual test-bed, made up of the GNS3 network software emulator, a Cisco IOS image, the Wireshark packet sniffer, Solar-PuTTY, and a Firefox web browser appliance, is set up on a laptop virtualized with VMware Workstation 15 Pro. The virtual test-bed is in turn connected to the real-world Internet via the host laptop's Ethernet Network Interface Card. Several virtual Firefox appliances are set up as end-users and generate traffic by launching web applications such as video streaming, file download and Internet browsing. The traffic generated by the end-users and the bandwidth used are measured, monitored, and tracked using a Wireshark packet sniffer installed on all interfacing gateways that connect the end-users to the cloud. Each gateway aggregates the demand of connected hosts and delivers Quality of Service to connected users based on the Quality of Service policies and mechanisms embedded in the gateway. Analysis of the results shows that a packet sniffer deployed at a suitable point in the network can identify, measure and track traffic usage per user, device or application in real time. The study has also demonstrated that, when deployed in the gateway connecting users to the cloud, it provides network-wide monitoring, and the traffic statistics collected can be fed to other functional components of the gateway, where a dynamical bandwidth management scheme can be applied to instantaneously allocate and redistribute bandwidth to target users as they roam around the network from one location to another. This approach is, however, limited: ensuring end-to-end Quality of Service requires mechanisms and policies to be extended across all network layers along the traffic path between the user and the cloud in order to guarantee a consistent treatment of traffic.
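
    The measurement step described above, a sniffer at the gateway attributing traffic to individual users, can be approximated with a short Python sketch using Scapy. The interface name, measurement interval, bandwidth pool and the purely proportional redistribution policy are assumptions for illustration, not the study's gateway mechanism.

        from collections import defaultdict
        from scapy.all import sniff, IP

        POOL_KBPS = 10_000          # total bandwidth pool to share (assumed)
        bytes_per_user = defaultdict(int)

        def account(pkt):
            # Attribute each captured packet's size to its source IP address
            if IP in pkt:
                bytes_per_user[pkt[IP].src] += len(pkt)

        # Measure one 10-second interval on the gateway-facing interface (assumed name)
        sniff(iface="eth0", prn=account, store=False, timeout=10)

        # Redistribute the pool in proportion to observed demand for the next interval
        total = sum(bytes_per_user.values()) or 1
        allocation = {user: POOL_KBPS * b / total for user, b in bytes_per_user.items()}
        for user, kbps in allocation.items():
            print(f"{user}: allocate {kbps:.0f} kbps for the next interval")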

    Network virtualization as an integrated solution for emergency communication

    In this paper the Virtual Private Ad Hoc Networking (VPAN) platform is introduced as an integrated networking solution for many applications that require secure, transparent, continuous connectivity using heterogeneous devices and network technologies. This is done by creating a virtual, logical, self-organizing network on top of existing network technologies, reducing complexity and maintaining session continuity right from the start. One of the most interesting applications lies in the field of emergency communication, whose specific needs are discussed in this paper and matched in detail against the architecture and features of the VPAN platform. The concept and dynamics are demonstrated and evaluated with measurements done on real hardware.

    Mobility architecture for the global internet
