14 research outputs found

    Feasibility Analysis of a LoRa-Based WSN Using Public Transport

    LoRa (Long Range) is a proprietary radio communication technology exploiting license-free frequency bands, allowing low-rate information exchange over long distances with very low power consumption. Conventional environmental monitoring sensors have the disadvantage of being in fixed positions distributed over wide areas, thus providing measurements with a spatially insufficient level of detail. Since public transport vehicles travel continuously within cities, they are ideal hosts for portable systems that monitor environmental pollution and meteorological parameters. The paper presents a feasibility analysis of a Wireless Sensor Network (WSN) that collects this information from the vehicles and conveys it to a central node for processing. The communication system is realized by deploying a layer-structured, fault-resistant, multi-hop Low Power Wide Area Network (LPWAN) based on the LoRa technology. Both a theoretical study of electromagnetic propagation and the network architecture are addressed, with consideration of a potential practical network realization.
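The propagation side of such a feasibility study typically combines a link budget with a log-distance path-loss model. A minimal sketch follows; the transmit power, receiver sensitivity, fade margin and path-loss exponent are illustrative assumptions, not the paper's values:

```python
import math

def path_loss_db(d_m, f_mhz=868.0, n=2.7, d0_m=1.0):
    """Log-distance path loss: free-space loss at reference distance d0
    plus 10*n*log10(d/d0), with path-loss exponent n (urban ~2.7-3.5)."""
    fspl_d0 = 20 * math.log10(d0_m / 1000.0) + 20 * math.log10(f_mhz) + 32.44
    return fspl_d0 + 10 * n * math.log10(d_m / d0_m)

def max_range_m(tx_dbm=14.0, sensitivity_dbm=-137.0, margin_db=10.0, **kw):
    """Largest distance whose predicted loss still fits the link budget."""
    budget = tx_dbm - sensitivity_dbm - margin_db
    d = 1.0
    while path_loss_db(d * 2, **kw) <= budget:  # bracket the solution
        d *= 2
    lo, hi = d, d * 2
    for _ in range(40):                         # binary search within bracket
        mid = (lo + hi) / 2
        if path_loss_db(mid, **kw) <= budget:
            lo = mid
        else:
            hi = mid
    return lo
```

With these assumed numbers the model predicts a usable range on the order of ten kilometres in open conditions; real urban ranges are shorter, which is exactly why mobile, multi-hop collection is attractive.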

    A cross-layer approach for optimizing the efficiency of wireless sensor and actor networks

    Recent developments have led to the emergence of distributed Wireless Sensor and Actor Networks (WSANs), which are capable of observing the physical environment, processing the data, making decisions based on the observations and performing appropriate actions. WSANs represent an important extension of Wireless Sensor Networks (WSNs) and may comprise a large number of sensor nodes and a smaller number of actor nodes. The sensor nodes are low-cost, low-energy, battery-powered devices with restricted sensing, computational and wireless communication capabilities. Actor nodes are resource-richer, with superior processing capabilities, higher transmission powers and a longer battery life. A basic operational scenario of a typical WSAN application proceeds as follows. The physical environment is periodically sensed and evaluated by the sensor nodes. The sensed data is then routed towards an actor node. Upon receiving sensed data, an actor node performs an action upon the physical environment if necessary, i.e. if the occurrence of a disturbance or critical event has been detected. The specific characteristics of sensor and actor nodes, combined with some stringent application constraints, impose unique requirements on WSANs. The fundamental challenges for WSANs are to achieve low latency, high energy efficiency and high reliability. The latency and energy efficiency requirements are in a trade-off relationship. The communication and coordination inside WSANs is managed via a Communication Protocol Stack (CPS) situated on every node. The requirements of low latency and energy efficiency have to be addressed at every layer of the CPS to ensure overall feasibility of the WSAN. Therefore, careful design of the protocol layers in the CPS is crucial for meeting the unique requirements and handling the abovementioned trade-off relationship in WSANs.
The traditional CPS, comprising the application, network, medium access control and physical layers, is a layered protocol stack in which every layer is a predefined functional entity. However, it has been found that for similar types of networks with similarly stringent requirements, the strictly layered protocol stack performs sub-optimally with regard to network efficiency. A modern cross-layer paradigm, which proposes the employment of interactions between layers in the CPS, has recently attracted a lot of attention. The cross-layer approach promotes network efficiency optimization and promises considerable performance gains. The literature shows that the adoption of this cross-layer paradigm had not yet been considered for WSANs. In this dissertation, a complete cross-layer enabled WSAN CPS is developed that adopts the cross-layer paradigm to promote optimization of the network efficiency. The newly proposed cross-layer enabled CPS entails protocols that incorporate information from other layers into their local decisions. Every protocol layer provides information identified as beneficial to other layers in the CPS via a newly proposed Simple Cross-Layer Framework (SCLF) for WSANs. The proposed complete cross-layer enabled WSAN CPS comprises a Cross-Layer enabled Network-Centric Actuation Control with Data Prioritization (CL-NCAC-DP) application layer (APPL) protocol, a Cross-Layer enabled Cluster-based Hierarchical Energy/Latency-Aware Geographic Routing (CL-CHELAGR) network layer (NETL) protocol and a Cross-Layer enabled Carrier Sense Multiple Access with Minimum Preamble Sampling and Duty Cycle Doubling (CL-CSMA-MPS-DCD) medium access control layer (MACL) protocol. Each of these protocols builds on an existing simple layered protocol that was chosen as a basis for the development of the cross-layer enabled protocols.
It was found that existing protocols focus primarily on energy efficiency to ensure maximum network lifetime. However, most WSAN applications require latency minimization to be considered with the same importance. The cross-layer paradigm provides a means of facilitating the optimization of both latency and energy efficiency. Specifically, a solution to the latency-versus-energy trade-off is given in this dissertation. The data generated by sensor nodes is prioritized by the APPL and, depending on its delay-sensitivity, handled in a specialized manner by every layer of the CPS. Delay-sensitive data packets are handled so as to achieve minimum latency. On the other hand, delay-insensitive, non-critical data packets are handled so as to achieve the highest energy efficiency. In effect, either latency minimization or energy efficiency receives elevated precedence according to the type of data being handled. Specifically, the cross-layer enabled APPL protocol provides information on the delay-sensitivity of sensed data packets to the other layers. Consequently, when a data packet is detected as highly delay-sensitive, the cross-layer enabled NETL protocol changes its approach from energy-efficient routing along the maximum-residual-energy path to routing along the fastest path towards the cluster-head actor node, minimizing the latency of that specific packet. This is done by considering information (contained in the SCLF neighbourhood table) from the MACL comprising wakeup schedules and channel utilization at neighbour nodes. Among the added criteria, the next-hop node is primarily chosen based on the shortest time to wakeup. The cross-layer enabled MACL in turn employs a priority queue and a temporary duty-cycle-doubling feature to enable rapid relaying of delay-sensitive data. Duty cycle doubling is employed whenever a sensor node's APPL state indicates that it is part of a critical event reporting route.
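The routing-mode switch described above can be sketched as follows; the neighbourhood-table fields and function names are illustrative assumptions, not the dissertation's actual data structures:

```python
from dataclasses import dataclass

@dataclass
class Neighbor:
    """One entry of an SCLF-style neighbourhood table (illustrative fields)."""
    node_id: str
    residual_energy: float      # joules remaining
    next_wakeup: float          # absolute time of next scheduled wakeup (s)
    channel_utilization: float  # 0..1, reported by the MAC layer

def select_next_hop(neighbors, delay_sensitive, now):
    """Normal traffic follows the maximum-residual-energy neighbour;
    delay-sensitive packets go to the neighbour that wakes up soonest,
    with ties broken by the least-utilized channel."""
    if delay_sensitive:
        return min(neighbors,
                   key=lambda n: (n.next_wakeup - now, n.channel_utilization))
    return max(neighbors, key=lambda n: n.residual_energy)
```

The point of the sketch is that the network layer needs MAC-layer facts (wakeup times, channel utilization) to make this choice, which is exactly the cross-layer information flow the SCLF provides.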
When the APPL protocol state (found in the SCLF information pool) indicates that the node is no longer part of the critical event reporting route, the MACL reverts to promoting energy efficiency by disengaging duty cycle doubling and re-employing a combination of a very low duty cycle and preamble sampling. The APPL protocol conversely considers the current queue size of the MACL and temporarily halts the creation of data packets (only if the sensed value is non-critical) to prevent a queue overflow and ease congestion at the MACL. Simulations showed that the cross-layer enabled WSAN CPS consistently outperforms the layered CPS under various network conditions. The average end-to-end latency of delay-sensitive critical data packets is decreased substantially. Furthermore, the average end-to-end latency of delay-insensitive data packets is also decreased. Finally, energy efficiency decreases only by a tolerable, minor margin, as expected. This trivial increase in energy consumption is overshadowed by the large improvement in latency for delay-sensitive critical data packets. The newly proposed cross-layer CPS thus achieves a substantial latency improvement for WSANs while maintaining excellent energy efficiency. It has hence been shown that the adoption of the cross-layer paradigm by the WSAN CPS is highly beneficial to network efficiency. This increases the feasibility of WSANs and promotes their application in more areas. Dissertation (MEng), University of Pretoria, 2009. Electrical, Electronic and Computer Engineering.
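The MACL behaviour, duty cycle doubling on critical routes plus a priority queue, can be modelled with a toy class; all names and the 1% base duty cycle are assumptions for illustration, not the dissertation's parameters:

```python
class MacDutyCycle:
    """Toy model: while the node's application state marks it as part of
    a critical reporting route, the wakeup duty cycle is doubled;
    otherwise the low base duty cycle applies. Delay-sensitive packets
    jump to the head of the transmit queue."""
    BASE_DUTY = 0.01  # assumed 1% base duty cycle

    def __init__(self):
        self.on_critical_route = False
        self.queue = []

    def duty_cycle(self):
        return self.BASE_DUTY * (2 if self.on_critical_route else 1)

    def enqueue(self, packet, delay_sensitive):
        # priority queue: delay-sensitive packets are served first
        if delay_sensitive:
            self.queue.insert(0, packet)
        else:
            self.queue.append(packet)
```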

    Towards Practical and Secure Channel Impulse Response-based Physical Layer Key Generation

    The current trend toward "smart" devices brings with it a multitude of Internet-enabled, connected devices. The communication of these devices must necessarily be secured by suitable measures in order to meet the privacy and security requirements placed on the transmitted information. However, the large number of security-critical incidents in the context of "smart" devices and the Internet of Things shows that this protection of communication is currently implemented only inadequately. The causes are manifold: essential security measures are sometimes not considered in the design process, or are not realized due to price pressure. In addition, the nature of the devices used complicates the application of classical security methods. In this context, solutions are primarily tailored to specific use cases and, owing to the hardware used, usually have only limited computing and energy resources available. Here, the approaches and solutions of physical layer security (PLS) can offer an alternative to classical cryptography. In wireless communication, the properties of the transmission channel between two legitimate communication partners can be exploited to implement security primitives and thereby realize security goals. Concretely, reciprocal channel properties can be used to generate a trust anchor in the form of a shared symmetric secret. This method is called channel reciprocity based key generation (CRKG). Due to its wide availability, this method is usually realized using the channel property of the received signal strength indicator (RSSI).
However, this has the disadvantage that all physical channel properties are reduced to a single value, so that a large part of the available information is neglected. The alternative is to use the full channel state information (CSI). Recent technical developments increasingly make this information available in everyday devices, so that it can be reused for PLS. In this work we analyze the questions that arise from a shift toward CSI as the key material. Concretely, we examine CSI in the form of ultra-wideband channel impulse responses (CIR). For these investigations we initially performed extensive measurements and analyzed to what extent the fundamental assumptions of PLS and CRKG are fulfilled and whether CIRs are fundamentally suitable for key generation. We show that the CIRs of the legitimate communication partners exhibit a higher similarity than those of an attacker, so that an advantage over the attacker exists on the physical layer which can be exploited for key generation. Based on the results of the initial investigation, we then present basic methods that are necessary to improve the similarity of the legitimate measurements and thus enable key generation. Concretely, we present methods that remove the temporal offset between reciprocal measurements and thereby increase their similarity, as well as methods that remove the noise inevitably present in the measurements. At the same time, we investigate to what extent the fundamental security assumptions hold from an attacker's perspective. For this purpose we present, implement, and analyze several practical attack methods.
These methods include approaches that attempt to predict the legitimate CIRs with the help of deterministic channel models or ray tracing. Furthermore, we investigate machine learning approaches that aim to infer the legitimate CIRs directly from an attacker's observations. Especially with the latter method, we show that large parts of the CIRs are deterministically predictable. From this we conclude that CIRs should not be used as input for security primitives without adequate preprocessing. Based on these findings, we finally design and implement methods that are resistant to the presented attacks. The first solution builds on the insight that the attacks are possible because of predictable parts within the CIRs. We therefore propose a classical preprocessing approach that removes these deterministically predictable parts and thus secures the input material. We implement and analyze this solution and demonstrate its effectiveness as well as its resistance against the proposed attacks. In a second solution, we leverage the capabilities of machine learning by incorporating it into the system design itself. Building on its strong performance in pattern recognition, we develop, implement, and analyze a solution that learns to extract from the raw CIRs the random parts by which channel reciprocity is defined, discarding all other, deterministic parts. This not only secures the key material but also simultaneously facilitates reconciliation of the key material, since differences between the legitimate observations are efficiently removed by the feature extraction.
All presented solutions completely avoid the exchange of information between the legitimate communication partners, inherently preventing the associated information leakage as well as energy consumption.
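A common way to turn reciprocal channel measurements into key material is threshold quantization. The sketch below is a textbook scheme with illustrative names, not the dissertation's exact pipeline; taps near the mean are dropped so that measurement noise does not flip bits:

```python
import numpy as np

def cir_to_bits(cir, guard=0.5):
    """Quantize CIR tap magnitudes into key bits: taps well above the
    mean become 1, taps well below become 0, and taps inside the guard
    band around the mean are dropped (their indices would be agreed
    between the partners during reconciliation)."""
    mag = np.abs(np.asarray(cir, dtype=float))
    mu, sigma = mag.mean(), mag.std()
    upper, lower = mu + guard * sigma, mu - guard * sigma
    bits, kept = [], []
    for i, m in enumerate(mag):
        if m > upper:
            bits.append(1); kept.append(i)
        elif m < lower:
            bits.append(0); kept.append(i)
    return bits, kept

def agreement(bits_a, bits_b):
    """Fraction of positions where the two bit strings agree; legitimate
    partners should score far higher than an attacker."""
    n = min(len(bits_a), len(bits_b))
    return sum(a == b for a, b in zip(bits_a, bits_b)) / n
```

The attacks summarized above matter precisely because such schemes assume the quantized taps are unpredictable; removing the deterministic CIR components before quantization is what restores that assumption.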

    Bioinspired metaheuristic algorithms for global optimization

    This paper presents a concise comparative study of newly developed bioinspired algorithms for global optimization problems. Three different metaheuristic techniques, namely Accelerated Particle Swarm Optimization (APSO), the Firefly Algorithm (FA), and the Grey Wolf Optimizer (GWO), are investigated and implemented in the Matlab environment. These methods are compared on four unimodal and multimodal nonlinear functions in order to find global optimum values. Computational results indicate that GWO outperforms the other intelligent techniques, and that all of the aforementioned algorithms can be successfully used for the optimization of continuous functions.
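Of the three compared methods, GWO has the simplest update rule: every wolf moves toward a stochastic average of the three current best solutions. A minimal Python sketch (illustrative parameters, not the paper's Matlab implementation) is:

```python
import random

def gwo(f, dim, bounds, n_wolves=20, iters=200, seed=1):
    """Minimal Grey Wolf Optimizer: the pack moves toward the three best
    wolves (alpha, beta, delta); the exploration parameter a decays
    linearly from 2 to 0, shifting from exploration to exploitation."""
    rng = random.Random(seed)
    lo, hi = bounds
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)]
              for _ in range(n_wolves)]
    for t in range(iters):
        wolves.sort(key=f)                       # best three lead the pack
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2.0 - 2.0 * t / iters
        new_pack = []
        for w in wolves:
            pos = []
            for d in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    A = 2.0 * a * rng.random() - a   # coefficient in [-a, a]
                    C = 2.0 * rng.random()
                    x += leader[d] - A * abs(C * leader[d] - w[d])
                pos.append(min(hi, max(lo, x / 3.0)))  # clamp to bounds
            new_pack.append(pos)
        wolves = new_pack
    return min(wolves, key=f)
```

On a simple unimodal function such as the sphere, the pack collapses onto the optimum as `a` decays, which illustrates why GWO does well on the benchmark functions mentioned above.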

    Experimental Evaluation of Growing and Pruning Hyper Basis Function Neural Networks Trained with Extended Information Filter

    In this paper we test the Extended Information Filter (EIF) for sequential training of Hyper Basis Function neural networks with growing and pruning ability (HBF-GP). The HBF neuron allows different scaling of the input dimensions, providing a better generalization property when dealing with complex nonlinear problems in engineering practice. The main intuition behind HBF is a generalization of the Gaussian type of neuron that applies a Mahalanobis-like distance as the distance metric between an input training sample and the prototype vector. We exploit the concept of a neuron's significance and allow growing and pruning of HBF neurons during the sequential learning process. From an engineer's perspective, EIF is attractive for training neural networks because it allows a designer to start with scarce initial knowledge of the system/problem. An extensive experimental study shows that an HBF neural network trained with EIF achieves the same prediction error and compactness of network topology as one trained with the EKF, but without the need to know the initial state uncertainty, which is its main advantage over the EKF.
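The Mahalanobis-based HBF activation described above can be sketched as a small function; the notation is assumed for illustration and the paper's exact parameterization may differ:

```python
import numpy as np

def hbf_activation(x, center, inv_cov):
    """Hyper Basis Function neuron: a Gaussian of the Mahalanobis-like
    distance between input x and the prototype (center). A non-identity
    inverse covariance scales each input dimension differently, which
    is what distinguishes HBF from an ordinary isotropic RBF neuron."""
    d = np.asarray(x, float) - np.asarray(center, float)
    return float(np.exp(-d @ np.asarray(inv_cov, float) @ d))
```

With the identity matrix this reduces to a standard Gaussian RBF; with an anisotropic matrix, the neuron is more sensitive to deviations along the heavily weighted dimensions.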

    Applied Metaheuristic Computing

    For decades, Applied Metaheuristic Computing (AMC) has been a prevailing optimization technique for tackling perplexing engineering and business problems, such as scheduling, routing, ordering, bin packing, assignment and facility layout planning, among others. This is partly because the classic exact methods are constrained by prior assumptions, and partly because heuristics are problem-dependent and lack generalization. AMC, on the contrary, guides the course of low-level heuristics to search beyond the local optimality that impairs traditional computation methods. This topic series has collected quality papers proposing cutting-edge methodologies and innovative applications that drive the advances of AMC.

    Building the Future Internet through FIRE

    The Internet as we know it today is the result of continuous activity to improve network communications, end-user services, computational processes and information technology infrastructures. The Internet has become a critical infrastructure for humanity, offering complex networking services and end-user applications that together have transformed all aspects, mainly economic, of our lives. Recently, with the advent of new paradigms, progress in wireless technology, sensor networks and information systems, and the inexorable shift towards the everything-connected paradigm, first known as the Internet of Things and lately evolving into the Internet of Everything, a data-driven society has been created. In a data-driven society, productivity, knowledge, and experience are dependent on increasingly open, dynamic, interdependent and complex Internet services. The challenge for the design of the Future Internet is to build robust enabling technologies, and to implement and deploy adaptive systems, in order to create business opportunities under increasing uncertainties and emergent systemic behaviors where humans and machines seamlessly cooperate.

    Using MapReduce Streaming for Distributed Life Simulation on the Cloud

    Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome the complexity challenge by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway’s life according to a general MR streaming pattern. We chose life because it is simple enough as a testbed for MR’s applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms’ performance on Amazon’s Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique called strip partitioning can reduce the execution time of continuous life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
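The general MR streaming pattern for lattice models can be illustrated with a minimal map/reduce pair for discrete life; this is a sketch of the idea (live cells scatter neighbour counts, the reducer applies the birth/survival rule), not the authors' optimized streaming algorithms:

```python
from collections import defaultdict

def life_map(lines):
    """Map step: for every live cell "x,y" emit a count of 1 to each of
    its 8 neighbours, plus a liveness marker for the cell itself."""
    for line in lines:
        x, y = map(int, line.split(","))
        yield (x, y), "alive"
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if (dx, dy) != (0, 0):
                    yield (x + dx, y + dy), 1

def life_reduce(pairs):
    """Reduce step: a cell lives in the next generation if it has exactly
    3 live neighbours, or 2 live neighbours and was already alive."""
    counts, alive = defaultdict(int), set()
    for key, val in pairs:
        if val == "alive":
            alive.add(key)
        else:
            counts[key] += val
    return sorted(k for k, c in counts.items()
                  if c == 3 or (c == 2 and k in alive))
```

In an actual streaming deployment the map output would be written as text to stdout and regrouped by the framework's shuffle phase; strip partitioning then assigns contiguous row bands to reducers to cut cross-node traffic.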
