
    Middleware for wireless sensor network virtualization

    Sensor and network virtualization technologies are used in smart homes, smart grids, smart cities and many other Internet of Things (IoT) applications that deploy a Wireless Sensor Network (WSN) to facilitate the transmission of multiple sensor data streams over multiple networks. Existing WSNs are designed for a specific application running on a low-data-rate network. The challenge is how to ensure that sensor data for multiple applications can be transmitted over multiple heterogeneous networks with different transmission rates while ensuring Quality of Service (QoS). This research developed a middleware that provides sensor and network virtualization with guaranteed QoS. The middleware was designed with two layers: an Application Dependent Layer Middleware (ADLM) and a Network Dependent Layer Middleware (NDLM). The ADLM combines multiple sensor data streams to form services based on a Service Oriented Architecture (SOA). It comprises a service handling manager that combines sensor data to form services, a QoS manager that assigns priorities, and a service scheduling manager that forwards the service frames. The NDLM facilitates seamless transmission of service data over multiple heterogeneous networks. It consists of a hypervisor composed of a flowvisor and a powervisor. The flowvisor is made up of transmit and routing managers responsible for routing and transmitting service packets. The powervisor consists of a resource manager that determines and selects the node with the highest battery power. The middleware was implemented and evaluated on a real experimental testbed. The experimental results showed that the middleware increased throughput by 8.7% and reduced the number of packet transmissions from the node by 68.7% compared to a proxy middleware using SOA. In addition, end-to-end transmission delay was reduced by 85.2% when compared to SenShare using SOA. The flowvisor at the gateway decreased the waiting time of packets in the queue by 59.8% when it raised the output rate up to 2.5 times the maximum arrival rate of WSN packets. The powervisor increased the node's lifetime by 17.6% compared to VITRO by limiting the transmission power to the existing battery voltage level. In brief, the middleware provides guaranteed QoS by increasing throughput, reducing end-to-end delay and minimizing energy consumption. It is highly recommended for IoT applications such as smart cities and smart grids.
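    As a concrete illustration of the powervisor's role described above, the sketch below shows a hypothetical resource-manager rule that picks the forwarding node with the highest remaining battery power. The class and field names are assumptions for illustration and are not taken from the authors' implementation.

```python
# Hypothetical sketch: select the forwarding node with the highest remaining
# battery power, in the spirit of the powervisor's resource manager.
from dataclasses import dataclass

@dataclass
class SensorNode:
    node_id: str
    battery_voltage: float  # remaining battery voltage reported by the node

def select_forwarding_node(candidates: list[SensorNode]) -> SensorNode:
    """Pick the candidate node with the highest battery voltage."""
    return max(candidates, key=lambda n: n.battery_voltage)

nodes = [SensorNode("n1", 2.9), SensorNode("n2", 3.2), SensorNode("n3", 2.4)]
print(select_forwarding_node(nodes).node_id)  # -> "n2"
```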

    Exploring the Challenges of a Flexible, Feature Rich IoT Testbed

    IoT is a field of technology of ever-growing importance in our daily lives. From smart cities, health devices, climate observations, appliances, and much more, IoT surrounds us now more than ever. The variety of devices being added to IoT networks keeps growing, and as this variety of hardware and software increases, so does the difficulty of working with them. Ensuring inter-compatibility between devices, testing new communication protocols, and writing software for emerging technologies become complex challenges. IoT testbeds help solve these challenges: they allow developers, researchers, and many other groups to explore and test their IoT solutions in the context of real IoT devices. Such testbeds exist today, but as far as we know, no jack-of-all-trades testbed exists that supports all the features one might want. This thesis introduces a first draft of a new testbed: a system design, architecture, and implementation that theoretically and practically supports all of these features. It also highlights issues with this design and ways to tackle them, ultimately contributing a foundation onto which a powerful system could be built. The challenge the thesis aims to tackle is, in short: what are the features that make up a good testbed, and how can we incorporate them into a simple, flexible, unified system?

    Infrastructure sharing of 5G mobile core networks on an SDN/NFV platform

    Looking towards the deployment of 5G network architectures, mobile network operators will continue to face many challenges. With the number of customers approaching maximum market penetration, the number of devices per customer increasing, and the number of non-human-operated devices estimated to approach tens of billions, network operators have a formidable task ahead of them. The proliferation of cloud computing techniques has created a multitude of applications for network service deployments, and at the forefront is the adoption of Software-Defined Networking (SDN) and Network Functions Virtualisation (NFV). Mobile network operators (MNOs) have the opportunity to leverage these technologies to deliver traditional networking functionality in cloud environments, with the benefit of reduced capital and operational expenditure on network infrastructure. When adopting NFV, how a Virtualised Network Function (VNF) is designed, implemented, and placed over physical infrastructure plays a vital role in the performance achieved by the network function. Not paying careful attention to this aspect can lead to drastically reduced performance, defeating the purpose of virtualisation. The success of mobile network operators in the 5G arena will depend heavily on their ability to shift from their old operational models and embrace new technologies, design principles and innovation in both the business and technical aspects of the environment. The primary goal of this thesis is to design, implement and evaluate the viability of a data centre and cloud network infrastructure sharing use case. More specifically, the core question addressed by this thesis is how virtualisation of network functions in a shared infrastructure environment can be achieved without adverse performance degradation. 5G should be operational with high penetration beyond the year 2020, with data traffic rates increasing exponentially and the number of connected devices expected to surpass tens of billions. Requirements for 5G mobile networks include higher flexibility, scalability, cost effectiveness and energy efficiency. Towards these goals, SDN and NFV have been adopted in recent proposals for future mobile network architectures because they are considered critical technologies for 5G. A Shared Infrastructure Management Framework was designed and implemented for this purpose, and was further enhanced for performance optimisation of network functions and the underlying physical infrastructure. The objective achieved was the identification of requirements for the design and development of an experimental testbed for future 5G mobile networks. This testbed deploys high-performance virtualised network functions (VNFs) while catering for the infrastructure sharing use case of multiple network operators. The management and orchestration of the VNFs allow automation, scalability, fault recovery, and security to be evaluated. The testbed developed is readily re-creatable and based on open-source software.
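    The thesis's core question is how to place VNFs on shared infrastructure without degrading performance. As a minimal, hypothetical sketch of that kind of placement decision (not the thesis's actual framework), the snippet below picks a shared physical host with enough free CPU and memory for a VNF; all names and numbers are invented for illustration.

```python
# Hypothetical VNF placement sketch: choose a shared host that can fit the VNF,
# preferring the host with the most free CPU. Not the thesis's framework.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Host:
    name: str
    cpu_free: int      # free vCPUs
    mem_free_gb: int   # free memory in GB

@dataclass
class VNF:
    name: str
    cpu_req: int
    mem_req_gb: int

def place_vnf(vnf: VNF, hosts: list[Host]) -> Optional[Host]:
    """Return the feasible host with the most free CPU, or None if none fits."""
    feasible = [h for h in hosts
                if h.cpu_free >= vnf.cpu_req and h.mem_free_gb >= vnf.mem_req_gb]
    return max(feasible, key=lambda h: h.cpu_free, default=None)

hosts = [Host("compute-1", 8, 32), Host("compute-2", 2, 8)]
print(place_vnf(VNF("vEPC-MME", 4, 16), hosts))  # -> the "compute-1" host
```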

    Sophisticated Batteryless Sensing

    Wireless embedded sensing systems have revolutionized scientific, industrial, and consumer applications. Sensors have become a fixture in our daily lives, as well as in the scientific and industrial communities, by allowing continuous monitoring of people, wildlife, plants, buildings, roads and highways, pipelines, and countless other objects. Recently a new vision for sensing has emerged, known as the Internet of Things (IoT), in which trillions of devices invisibly sense, coordinate, and communicate to support our life and well-being. However, the sheer scale of the IoT has presented serious problems for current sensing technologies, chiefly the unsustainable maintenance, ecological, and economic costs of recycling or disposing of trillions of batteries. This energy storage bottleneck has prevented massive deployments of tiny sensing devices at the edge of the IoT. This dissertation explores an alternative: leave the batteries behind, and harvest the energy required for sensing tasks from the environment the device is embedded in. These sensors can be made cheaper and smaller, and will last decades longer than their battery-powered counterparts, making them a perfect fit for the requirements of the IoT. They can be deployed where battery-powered sensors cannot: embedded in concrete, shot into space, or even implanted in animals and people. However, these batteryless sensors may lose power at any point, with no warning, for unpredictable lengths of time. Programming, profiling, debugging, and building applications with these devices pose significant challenges. First, batteryless devices operate in unpredictable environments, where voltages vary and power failures can occur at any time; often devices remain in a failure state for hours. Second, a device's behavior affects the amount of energy it can harvest, meaning small changes in tasks can drastically change harvester efficiency. Third, the programming interfaces of batteryless devices are ill-defined and non-intuitive; most developers have trouble anticipating the problems inherent in an intermittent power supply. Finally, the lack of a community and of a standard, usable hardware platform has reduced the resources and prototyping ability available to developers. In this dissertation we present solutions to these challenges in the form of a tool for repeatable and realistic experimentation called Ekho, a reconfigurable hardware platform named Flicker, and a language and runtime for timely execution of intermittent programs called Mayfly.
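    To make the intermittent-execution challenge above concrete, the sketch below shows a generic checkpoint-and-restore pattern: a task saves its progress to non-volatile storage so it can resume after an unexpected power failure. This is only an illustrative pattern, not the Mayfly language or runtime; the file name and task structure are assumptions.

```python
# Generic checkpoint/restore sketch for intermittent execution (illustrative;
# not Mayfly). A JSON file stands in for non-volatile memory such as FRAM.
import json
import os

CHECKPOINT = "checkpoint.json"

def load_state() -> dict:
    """Resume from the last checkpoint, or start fresh."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"next_sample": 0, "samples": []}

def save_state(state: dict) -> None:
    with open(CHECKPOINT, "w") as f:
        json.dump(state, f)

def sense(i: int) -> int:
    return i * i  # placeholder for reading a real sensor

def run(total_samples: int) -> None:
    state = load_state()
    while state["next_sample"] < total_samples:
        state["samples"].append(sense(state["next_sample"]))
        state["next_sample"] += 1
        save_state(state)  # power may fail at any point; progress survives

run(10)
```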

    Contributions to Edge Computing

    Efforts related to the Internet of Things (IoT), Cyber-Physical Systems (CPS), Machine-to-Machine (M2M) technologies, the Industrial Internet, and Smart Cities aim to improve society through the coordination of distributed devices and analysis of the resulting data. By the year 2020 there will be an estimated 50 billion network-connected devices globally and 43 trillion gigabytes of electronic data. Current practices of moving data directly from end devices to remote and potentially distant cloud computing services will not be sufficient to manage future device and data growth. Edge computing is the migration of computational functionality to the sources of data generation. The importance of edge computing increases with the size and complexity of devices and the resulting data. In addition, the coordination of global edge-to-edge communications, shared resources, high-level application scheduling, monitoring, measurement, and Quality of Service (QoS) enforcement will be critical to address the rapid growth of connected devices and associated data. We present a new distributed agent-based framework designed to address the challenges of edge computing. This actor-model framework implementation is designed to manage large numbers of geographically distributed services, composed of heterogeneous resources and communication protocols, in support of low-latency real-time streaming applications. As part of this framework, an application description language was developed and implemented. Using the application description language, a number of high-order management modules were implemented, including solutions for resource and workload comparison, performance observation, scheduling, and provisioning. A number of hypothetical and real-world use cases are described to support the framework implementation.
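    To illustrate the idea of an application description language driving placement, the sketch below invents a small pipeline description and a naive scheduler that maps each stage to an edge or cloud agent with enough free CPU. The schema, field names, and scheduling rule are assumptions for illustration, not the framework's actual description language.

```python
# Invented application description plus a naive greedy placement pass
# (illustrative only; not the framework's description language).
pipeline = {
    "name": "camera-analytics",
    "stages": [
        {"name": "ingest",  "cpu": 1, "location_hint": "edge"},
        {"name": "detect",  "cpu": 4, "location_hint": "edge"},
        {"name": "archive", "cpu": 2, "location_hint": "cloud"},
    ],
}

agents = [
    {"name": "edge-gw-01", "tier": "edge",  "cpu_free": 6},
    {"name": "dc-node-07", "tier": "cloud", "cpu_free": 32},
]

def schedule(pipeline: dict, agents: list) -> dict:
    """Greedily assign each stage to a matching agent with enough free CPU."""
    placement = {}
    for stage in pipeline["stages"]:
        for agent in agents:
            if agent["tier"] == stage["location_hint"] and agent["cpu_free"] >= stage["cpu"]:
                agent["cpu_free"] -= stage["cpu"]
                placement[stage["name"]] = agent["name"]
                break
    return placement

print(schedule(pipeline, agents))
# -> {'ingest': 'edge-gw-01', 'detect': 'edge-gw-01', 'archive': 'dc-node-07'}
```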

    Designing Data Spaces

    This open access book provides a comprehensive view of data ecosystems and platform economics, from methodical and technological foundations up to reports from practical implementations and applications in various industries. To this end, the book is structured in four parts: Part I, “Foundations and Contexts”, provides a general overview of building, running, and governing data spaces and an introduction to the IDS and GAIA-X projects. Part II, “Data Space Technologies”, subsequently details various implementation aspects of IDS and GAIA-X, including, e.g., data usage control, the use of blockchain technologies, and semantic data integration and interoperability. Next, Part III describes various “Use Cases and Data Ecosystems” from application areas such as agriculture, healthcare, industry, energy, and mobility. Part IV offers an overview of several “Solutions and Applications”, including, e.g., products and experiences from companies like Google, SAP, Huawei, T-Systems, Innopay and many more. Overall, the book provides professionals in industry with an encompassing overview of the technological and economic aspects of data spaces, based on the International Data Spaces and Gaia-X initiatives. It presents implementations and business cases and gives an outlook on future developments. In doing so, it aims at proliferating the vision of a social data market economy based on data spaces which embrace trust and data sovereignty.

    A Survey on Data Plane Programming with P4: Fundamentals, Advances, and Applied Research

    With traditional networking, users can configure control plane protocols to match a specific network configuration, but without the ability to fundamentally change the underlying algorithms. With SDN, users may provide their own control plane, which can control network devices through their data plane APIs. Programmable data planes allow users to define their own data plane algorithms for network devices, including appropriate data plane APIs that may be leveraged by user-defined SDN control. Thus, programmable data planes and SDN offer great flexibility for network customization, be it for specialized, commercial appliances, e.g., in 5G or data center networks, or for rapid prototyping in industrial and academic research. Programming Protocol-independent Packet Processors (P4) has emerged as the currently most widespread abstraction, programming language, and concept for data plane programming. It is developed and standardized by an open community and is supported by various software and hardware platforms. In this paper, we survey the literature from 2015 to 2020 on data plane programming with P4. Our survey covers 497 references, of which 367 are scientific publications. We organize our work into two parts. In the first part, we give an overview of data plane programming models, the programming language, architectures, compilers, targets, and data plane APIs. We also consider research efforts to advance P4 technology. In the second part, we analyze a large body of literature considering P4-based applied research. We categorize 241 research papers into different application domains, summarize their contributions, and extract prototypes, target platforms, and source code availability.
    Comment: Submitted to IEEE Communications Surveys and Tutorials (COMS) on 2021-01-2
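    For readers unfamiliar with data plane programming, the sketch below emulates, in plain Python rather than P4, the match-action abstraction that P4 programs express: a table matches on a header field and applies an action such as forwarding or dropping. The table entries and actions are invented for illustration.

```python
# Conceptual match-action sketch in Python (not P4): an exact-match table on
# the destination IPv4 address selects a forward or drop action.
from typing import Callable, Dict

def forward(port: int) -> Callable[[dict], dict]:
    def action(pkt: dict) -> dict:
        pkt["egress_port"] = port
        return pkt
    return action

def drop(pkt: dict) -> dict:
    pkt["egress_port"] = None  # no egress port means the packet is dropped
    return pkt

# Exact-match table keyed on the destination IP header field.
ipv4_exact: Dict[str, Callable[[dict], dict]] = {
    "10.0.0.1": forward(1),
    "10.0.0.2": forward(2),
}

def apply_table(pkt: dict) -> dict:
    action = ipv4_exact.get(pkt["ipv4_dst"], drop)  # default action: drop
    return action(pkt)

print(apply_table({"ipv4_dst": "10.0.0.2"}))
# -> {'ipv4_dst': '10.0.0.2', 'egress_port': 2}
```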