
    Distributed scheduling algorithms for LoRa-based wide area cyber-physical systems

    Low Power Wide Area Networks (LPWAN) are a class of wireless communication protocols that work over long distances, consume little power and support low data rates. LPWANs have been designed for monitoring applications, with sparse communication from nodes to servers and even sparser communication from servers to nodes. Despite this initial design, LPWANs have the potential to target applications with higher and stricter requirements, such as those of Cyber-Physical Systems (CPS). Due to their long-range capabilities, LPWANs can specifically target CPS applications distributed over a wide area, referred to as Wide-Area CPS (WA-CPS). Augmenting WA-CPSs with wireless communication would allow for more flexible, low-cost and easily maintainable deployments. However, wireless communication comes with problems such as reduced reliability and unpredictable latencies, making it harder to use for CPSs. Against this background, this thesis explores the use of LPWANs, specifically LoRa, to meet the communication and control requirements of WA-CPSs. The thesis focuses on LoRa due to its high resilience to noise, the several communication parameters it offers, and its freely modifiable communication stack and servers, which make it ideal for research and deployment. However, LoRaWAN suffers from low reliability due to its ALOHA channel access method. The thesis posits that "Distributed algorithms would increase the protocol's reliability allowing it to meet the requirements of WA-CPSs". Three application scenarios that leverage unexplored aspects of LoRa to meet their requirements are explored: delay-tolerant vehicular networks, multi-stakeholder WA-CPS deployments and water distribution networks. The systems use novel algorithms to coordinate communication between nodes and gateways and ensure a highly reliable system. The results outperform state-of-the-art techniques, showing that LoRa is currently under-utilised and can be used for CPS applications.
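    As an illustration of the general idea behind replacing ALOHA access with distributed scheduling, the following Python sketch shows how nodes could derive transmit slots and channels deterministically from their own identifiers, assuming a synchronised slot grid; the constants and function names are hypothetical and do not come from the thesis.

import hashlib

# Hypothetical illustration: a node derives a transmit slot and channel
# deterministically from its own ID, so two nodes only collide if their
# hashes fall on the same (slot, channel) pair. This is a sketch of the
# general idea of distributed scheduling, not the thesis's algorithm.

SLOTS_PER_FRAME = 32      # assumed slot grid within one uplink frame
NUM_CHANNELS = 8          # assumed number of LoRa uplink channels

def assigned_slot_and_channel(node_id: str, frame_number: int) -> tuple[int, int]:
    """Return a (slot, channel) pair for this node in the given frame."""
    digest = hashlib.sha256(f"{node_id}:{frame_number}".encode()).digest()
    slot = digest[0] % SLOTS_PER_FRAME
    channel = digest[1] % NUM_CHANNELS
    return slot, channel

if __name__ == "__main__":
    for node in ("sensor-01", "sensor-02", "sensor-03"):
        print(node, assigned_slot_and_channel(node, frame_number=7))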

    CASPaR: Congestion Avoidance Shortest Path Routing for Delay Tolerant Networks

    Unlike traditional TCP/IP-based networks, Delay and Disruption Tolerant Networks (DTNs) may experience connectivity disruptions and cannot guarantee end-to-end connectivity between source and destination. As the popularity of DTNs continues to rise, so does the need for a robust, low-latency routing protocol capable of connecting not only DTNs but also densely populated, dynamic hybrid DTN-MANETs. Here we describe a novel DTN routing algorithm referred to as Congestion Avoidance Shortest Path Routing (CASPaR), which seeks to maximize packet delivery probability while minimizing latency. CASPaR attempts this without any direct knowledge of node connectivity outside of its own neighborhood. Our simulation results show that CASPaR outperforms well-known protocols in terms of packet delivery probability and latency while limiting network overhead.
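    To illustrate the flavour of congestion-avoidance shortest-path routing, the sketch below runs Dijkstra over a locally known topology with an edge cost that adds a congestion penalty based on neighbour queue occupancy; the metric, the alpha weight and the toy topology are assumptions for illustration and are not CASPaR's actual cost function.

import heapq

# Hypothetical sketch: edge cost = 1 hop + a penalty proportional to the
# neighbour's queue occupancy, so shortest paths bend around congestion.

def cheapest_next_hop(graph, queue_occupancy, source, destination, alpha=2.0):
    """graph: {node: {neighbour: 1}}, queue_occupancy in [0, 1] per node."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == destination:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v in graph.get(u, {}):
            cost = 1.0 + alpha * queue_occupancy.get(v, 0.0)
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # walk back from the destination to find the first hop out of the source
    hop = destination
    while prev.get(hop) not in (None, source):
        hop = prev[hop]
    return hop if prev.get(hop) == source else None

graph = {"S": {"A": 1, "B": 1}, "A": {"D": 1}, "B": {"D": 1}, "D": {}}
occupancy = {"A": 0.9, "B": 0.1, "D": 0.0}
print(cheapest_next_hop(graph, occupancy, "S", "D"))  # prefers the less congested "B"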

    Integrating wireless technologies into intra-vehicular communication

    With the emergence of connected and autonomous vehicles, sensors are increasingly deployed within cars. The traffic generated by these sensors congests traditional intra-vehicular networks, such as CAN buses. Furthermore, the large number of wires needed to connect sensors makes it hard to design cars in a modular way. These limitations have created an impetus to use wireless technologies to support intra-vehicular communication. In this dissertation, we tackle the challenge of designing and evaluating data collection protocols for intra-car networks that can operate reliably and efficiently under dynamic channel conditions. First, we evaluate the feasibility of deploying an intra-car wireless network based on the Backpressure Collection Protocol (BCP), which is theoretically proven to be throughput-optimal. We uncover a surprising behavior in which, under certain dynamic channel conditions, the average packet delay of BCP decreases as the traffic load increases. We propose and analyze a queueing-theoretic model to shed light on the observed phenomenon. As a solution, we propose a new protocol, called replication-based LIFO-backpressure (RBL). Analytical and simulation results indicate that RBL dramatically reduces the delay of BCP at low load while maintaining its high throughput performance. Next, we propose and implement a hybrid wired/wireless architecture in which each node is connected to a wired interface, a wireless interface, or both. We propose a new protocol, called Hybrid-Backpressure Collection Protocol (Hybrid-BCP), for intra-car hybrid networks. Our testbed implementation, based on CAN and ZigBee transceivers, demonstrates the load balancing and routing functionalities of Hybrid-BCP and its resilience to DoS attacks. We further provide simulation results, obtained from real intra-car RSSI traces, showing that Hybrid-BCP can achieve the same performance as a tree-based protocol while reducing the radio transmission power by a factor of 10. Finally, we present TeaCP, a prototype Toolkit for the evaluation and analysis of Collection Protocols in both simulation and experimental environments. TeaCP evaluates a wide range of standard performance metrics, such as reliability, throughput, and latency, and allows visualization of routes and network topology evolution. Through simulation of an intra-car WSN and real lab experiments, we demonstrate the functionality of TeaCP for comparing different collection protocols.
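    The core of backpressure routing of the kind BCP builds on can be sketched in a few lines: each node forwards to the neighbour that maximises the queue differential weighted by the estimated link rate, and stays silent when no neighbour offers positive pressure. The following minimal Python example uses made-up names and numbers and is not the dissertation's implementation.

# Minimal sketch of a backpressure forwarding decision: pick the neighbour
# with the largest (own backlog - neighbour backlog) * link rate, and do not
# transmit if no neighbour yields a positive weight.

def backpressure_next_hop(own_backlog, neighbour_backlogs, link_rates):
    """Return (neighbour, weight), or (None, 0.0) if no positive-weight link."""
    best, best_weight = None, 0.0
    for nbr, backlog in neighbour_backlogs.items():
        weight = (own_backlog - backlog) * link_rates.get(nbr, 0.0)
        if weight > best_weight:
            best, best_weight = nbr, weight
    return best, best_weight

# Example: neighbour "b" has a much shorter queue and a decent link, so it wins.
print(backpressure_next_hop(
    own_backlog=10,
    neighbour_backlogs={"a": 9, "b": 4, "c": 12},
    link_rates={"a": 0.8, "b": 0.6, "c": 0.9},
))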

    Quality of Service Aware Data Stream Processing for Highly Dynamic and Scalable Applications

    Huge amounts of georeferenced data streams arrive daily at data stream management systems deployed to serve highly scalable and dynamic applications. There are innumerable ways in which those streams can be exploited to gain deep insights in various domains. Decision makers require an interactive visualization of such data in the form of maps and dashboards for decision making and strategic planning. Data streams normally exhibit fluctuation and oscillation in arrival rates as well as skewness; these are the two predominant factors that greatly impact the overall quality of service. Data stream management systems therefore need to be attuned to those factors, in addition to the spatial shape of the data, which may exaggerate their negative impact. Current systems do not natively support services with quality guarantees for dynamic scenarios, leaving the handling of those logistics to the user, which is challenging and cumbersome. Three workloads are predominant for any data stream: batch processing, scalable storage and stream processing. In this thesis, we have designed a quality-of-service-aware system, SpatialDSMS, that comprises several subsystems covering those workloads and any mixed load that results from intermixing them. Most importantly, we have natively incorporated quality of service optimizations for processing avalanches of geo-referenced data streams in highly dynamic application scenarios. This has been achieved transparently on top of the codebases of emerging de facto standard, best-in-class representatives, thus relieving users in the presentation layer from having to reason about those services. Instead, users express their queries with quality goals, and our system optimizer compiles them into query plans with an embedded quality guarantee, leaving logistics handling to the underlying layers. We have developed standards-compliant prototypes for all the subsystems that constitute SpatialDSMS.
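    A small sketch can illustrate what compiling a quality goal into a plan parameter might look like: given an observed arrival rate and per-tuple service time, pick the operator parallelism that keeps utilisation below a target. The function and constants below are hypothetical and are not SpatialDSMS code.

import math

# Hypothetical planning step: choose the number of parallel operator
# instances so that per-instance utilisation stays below a target, given the
# observed tuple arrival rate and per-tuple service time.

def required_parallelism(arrival_rate_tps, service_time_s, target_utilisation=0.7):
    """Smallest number of parallel operator instances keeping utilisation <= target."""
    offered_load = arrival_rate_tps * service_time_s       # in Erlangs
    return max(1, math.ceil(offered_load / target_utilisation))

# Example: 5,000 tuples/s at 0.4 ms each -> 2 Erlangs -> 3 instances at a 70% cap.
print(required_parallelism(arrival_rate_tps=5000, service_time_s=0.0004))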

    Congestion control in wireless sensor and 6LoWPAN networks: toward the Internet of Things

    The Internet of Things (IoT) is the next big challenge for the research community, and the IPv6 over Low-Power Wireless Personal Area Network (6LoWPAN) protocol stack is a key part of the IoT. Recently, the IETF ROLL and 6LoWPAN working groups have developed new IP-based protocols for 6LoWPAN networks to alleviate the challenges of connecting sensor nodes with low memory, limited processing capability, and constrained power supply to the Internet. In 6LoWPAN networks, heavy network traffic causes congestion, which significantly degrades network performance and impacts quality of service aspects such as throughput, latency, energy consumption, reliability, and packet delivery. In this paper, we give an overview of the protocol stack of 6LoWPAN networks and summarize a set of its protocols and standards. We also review and compare a number of popular congestion control mechanisms in wireless sensor networks (WSNs) and classify them into traffic control, resource control, and hybrid algorithms based on the congestion control strategy used. We present a comparative review of all existing congestion control approaches in 6LoWPAN networks. The paper highlights and discusses the differences between congestion control mechanisms for WSNs and 6LoWPAN networks and explains the suitability and validity of WSN congestion control schemes for 6LoWPAN networks. Finally, we give some potential directions for designing, as future work, a novel congestion control protocol that supports IoT application requirements.
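    As a concrete illustration of the traffic-control class of mechanisms discussed above, the sketch below adjusts a node's sending rate in AIMD fashion based on buffer occupancy; the thresholds and constants are assumptions rather than values from any particular 6LoWPAN scheme.

# Illustrative traffic-control reaction: increase the sending rate additively
# while the buffer is healthy, and cut it multiplicatively once occupancy
# crosses a congestion threshold (AIMD).

def adjust_rate(rate_pps, buffer_occupancy, threshold=0.8,
                additive_step=1.0, multiplicative_factor=0.5, max_rate=50.0):
    """Return the new sending rate in packets per second."""
    if buffer_occupancy > threshold:          # congestion signal from the buffer
        return max(1.0, rate_pps * multiplicative_factor)
    return min(max_rate, rate_pps + additive_step)

rate = 10.0
for occupancy in (0.2, 0.5, 0.9, 0.3):
    rate = adjust_rate(rate, occupancy)
    print(f"occupancy={occupancy:.1f} -> rate={rate:.1f} pps")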

    Sensor data-based decision making

    Increasing globalization and growing industrial system complexity have amplified the interest in using the information provided by sensors as a means of improving overall manufacturing system performance and maintainability. However, the use of sensors can only be effective if the real-time data can be integrated into the necessary business processes, such as production planning, scheduling and execution systems. This integration requires the development of intelligent decision making models that can effectively process the sensor data into information and suggest appropriate actions. To be able to improve the performance of a system, the health of the system also needs to be maintained. In many cases a single sensor type cannot provide sufficient information for complex decision making, including diagnostics and prognostics of a system. Therefore, a combination of sensors should be used in an integrated manner in order to achieve the desired performance levels. Sensor-generated data need to be processed into information through appropriate decision making models in order to improve overall performance. In this dissertation, which is presented as a collection of five journal papers, several reactive and proactive decision making models that utilize data from single- and multi-sensor environments are developed. The first paper presents a testbed architecture for Auto-ID systems. An adaptive inventory management model that utilizes real-time RFID data is developed in the second paper. In the third paper, a complete hardware and inventory management solution, which involves the integration of RFID sensors into an extremely low temperature industrial freezer, is presented. The last two papers deal with diagnostic and prognostic decision making models that assure the healthy operation of a manufacturing system and its components. In the fourth paper, a Mahalanobis-Taguchi System (MTS) based prognostics tool is developed and used to estimate the remaining useful life of rolling element bearings from data acquired by vibration sensors. In the final paper, an MTS-based prognostics tool is developed for a centrifugal water pump; it fuses information from multiple types of sensors in order to make diagnostic and prognostic decisions for the pump and its components.
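    The health indicator at the core of an MTS-style prognostics tool is the Mahalanobis distance of a multi-sensor observation from a healthy reference set. The following sketch computes that distance with NumPy on made-up vibration-feature data; it illustrates the distance calculation only, not the full MTS threshold and feature-selection procedure.

import numpy as np

# Sketch of the Mahalanobis-distance computation underlying an MTS-style
# health indicator: distances are measured against the mean and covariance
# of a "healthy" reference window of multi-sensor readings.

def mahalanobis_distance(sample, healthy_data):
    """Distance of one multi-sensor sample from the healthy reference set."""
    mean = healthy_data.mean(axis=0)
    cov = np.cov(healthy_data, rowvar=False)
    diff = sample - mean
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

rng = np.random.default_rng(0)
healthy = rng.normal(loc=[1.0, 0.5], scale=[0.1, 0.05], size=(200, 2))  # two vibration features
normal_sample = np.array([1.02, 0.51])
degraded_sample = np.array([1.6, 0.9])       # drifted features suggest wear
print(mahalanobis_distance(normal_sample, healthy))
print(mahalanobis_distance(degraded_sample, healthy))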

    Predictable and Runtime-Adaptable On-Chip Network for Mixed-Criticality Real-Time Systems

    The industry of safety-critical and dependable embedded systems calls for ever cheaper, high-performance platforms that allow flexibility and an efficient verification of safety and real-time requirements. To cope with the increasing complexity of interconnected functions and to reduce the cost and power consumption of the system, multicore systems are used to efficiently integrate different processing units on the same chip. Networks-on-chip (NoCs), as a modular interconnect, are a promising solution for such multiprocessor systems-on-chip (MPSoCs) due to their scalability and performance. For safety-critical systems, a major goal is the avoidance of hazards. To this end, safety-critical systems are qualified or even certified to prove the correctness of their functioning in all possible cases. A predictable behaviour of the NoC can help to ease the qualification process of the system. To achieve the required predictability, designers have two classes of solutions: quality of service mechanisms and (formal) analysis. For mixed-criticality systems, isolation and analysis approaches must be combined to efficiently achieve the desired predictability. Traditional NoC analysis and architecture concepts tackle only a subset of these challenges: they focus on either performance or predictability. Existing predictable NoCs are deemed too expensive and inflexible to host a variety of applications with opposing constraints, and state-of-the-art analyses neglect certain platform properties when verifying the behaviour. Together this leads to a high over-provisioning of hardware resources as well as adverse impacts on system performance and flexibility. In this work we tackle these challenges and develop a predictable and runtime-adaptable NoC architecture that efficiently integrates mixed-critical applications with opposing constraints. Additionally, we present a modelling and analysis framework for NoCs that accounts for backpressure. This framework enables performance and reliability to be evaluated early at design time, so the designer can assess multiple design decisions using abstract models and formal approaches.
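    The effect that a backpressure-aware analysis has to capture can be shown with a toy discrete-time model of two routers in series: the upstream stage may only forward a flit when the downstream buffer has a free slot, so a slow downstream link stalls the upstream stage. Buffer sizes and rates in the sketch below are arbitrary assumptions, and the code is an illustration rather than the thesis's modelling framework.

from collections import deque

# Toy discrete-time model of backpressure between two on-chip routers: the
# upstream router forwards a flit only when the downstream buffer has a free
# slot, so a slowly draining downstream stage stalls the upstream stage and,
# eventually, the source.

UPSTREAM_CAP, DOWNSTREAM_CAP = 4, 2
DOWNSTREAM_DRAIN_PERIOD = 3          # downstream ejects one flit every 3 cycles

upstream, downstream = deque(), deque()
stalled_at_source = 0

for cycle in range(1, 31):
    # one new flit tries to enter the upstream buffer every cycle
    if len(upstream) < UPSTREAM_CAP:
        upstream.append(cycle)
    else:
        stalled_at_source += 1       # backpressure has propagated to the source
    # downstream drains slowly
    if cycle % DOWNSTREAM_DRAIN_PERIOD == 0 and downstream:
        downstream.popleft()
    # upstream forwards only if downstream signals a free slot (backpressure)
    if upstream and len(downstream) < DOWNSTREAM_CAP:
        downstream.append(upstream.popleft())

print(f"upstream occupancy={len(upstream)}, downstream occupancy={len(downstream)}, "
      f"injections stalled at source={stalled_at_source}")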