
    A Survey and Future Directions on Clustering: From WSNs to IoT and Modern Networking Paradigms

    Many Internet of Things (IoT) networks are created as overlays on traditional ad-hoc networks such as Zigbee. IoT networks can also take the form of ad-hoc networks over technologies that support device-to-device (D2D) communication, e.g., D2D-enabled cellular networks and WiFi-Direct. In these ad-hoc types of IoT networks, efficient topology management is a crucial requirement, particularly in massive-scale deployments. Traditionally, clustering has been recognized as a common approach to topology management in ad-hoc networks, e.g., in Wireless Sensor Networks (WSNs). Topology management has many design commonalities in WSNs and ad-hoc IoT networks, as both types of network need to transfer data to the destination hop by hop. Thus, WSN clustering techniques can presumably be applied to topology management in ad-hoc IoT networks. This requires a comprehensive study of WSN clustering techniques and an investigation of their applicability to ad-hoc IoT networks. In this article, we conduct a survey of this field based on the objectives of clustering, such as reducing energy consumption and load balancing, as well as the network properties relevant for efficient clustering in IoT, such as network heterogeneity and mobility. Beyond that, we investigate the advantages and challenges of clustering when IoT is integrated with modern computing and communication technologies such as Blockchain, Fog/Edge computing, and 5G. This survey provides useful insights into research on IoT clustering, allows a broader understanding of its design challenges for IoT networks, and sheds light on its future applications in modern technologies integrated with IoT.
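
    The survey treats clustering at a conceptual level; as one concrete reference point, the sketch below shows LEACH-style randomized cluster-head rotation, a classic WSN clustering scheme of the kind surveyed (the function names and data layout are illustrative assumptions, not taken from the article):

        import random

        def leach_elect_heads(nodes, p, r):
            """One round of LEACH-style cluster-head election (illustrative sketch).
            nodes: dict node_id -> {'was_head': bool}; the flag is assumed to be
            reset by the caller once per epoch of round(1/p) rounds.
            p: desired fraction of cluster heads; r: current round number."""
            epoch = round(1 / p)
            threshold = p / (1 - p * (r % epoch))  # rises toward 1.0 within an epoch
            heads = []
            for nid, state in nodes.items():
                # nodes that already served as head this epoch sit out
                if not state['was_head'] and random.random() < threshold:
                    state['was_head'] = True
                    heads.append(nid)
            return heads

    Non-head nodes would then join the nearest elected head, rotating the energy-hungry head role across the network.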

    Fault-tolerant wireless sensor networks using evolutionary games

    This dissertation proposes an approach to creating robust communication systems in wireless sensor networks, inspired by biological and ecological systems, particularly by evolutionary game theory. In this approach, a virtual community of agents lives inside the network nodes and carries out network functions. The agents use different strategies to execute their functions, and these strategies are tested and selected by playing evolutionary games. Over time, agents with the best strategies survive, while others die. The strategies and the game rules give the network an adaptive behavior that allows it to react to changing environmental conditions by adapting and improving its behavior. To evaluate the viability of this approach, this dissertation also describes a micro-component framework for implementing agent-based wireless sensor network services, an evolutionary data collection protocol (ECP) built using this framework, and experiments evaluating the performance of this protocol in a faulty environment. The framework addresses many of the programming challenges in writing network software for wireless sensor networks, while the protocol built using the framework provides a means of evaluating the general viability of the agent-based approach. The results of this evaluation show that an evolutionary approach to designing wireless sensor networks can improve the performance of wireless sensor network protocols in the presence of node failures. In particular, we compared the performance of ECP with a non-evolutionary rule-based variant of ECP. While the purely evolutionary version of ECP has more routing timeouts than the rule-based approach in failure-free networks, it sends significantly fewer beacon packets and incurs statistically significantly fewer routing timeouts in both simple-fault and periodic-fault scenarios.
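
    The abstract does not spell out ECP's game rules; the following is a generic sketch of the fitness-proportional strategy reselection that evolutionary-game approaches of this kind rely on (all names and the payoff measure are illustrative assumptions):

        import random

        def evolve_strategies(agent_strategies, payoff, mutation_rate=0.01):
            """One generation of fitness-proportional selection (illustrative).
            agent_strategies: list of strategy labels currently in use, one per agent
            payoff: dict strategy -> measured fitness, assumed positive,
            e.g. packets delivered per joule of energy spent."""
            weights = [payoff[s] for s in agent_strategies]
            # strategies reproduce in proportion to their measured payoff
            next_gen = random.choices(agent_strategies, weights=weights,
                                      k=len(agent_strategies))
            pool = list(set(agent_strategies))
            # rare mutation keeps the agent population exploring new strategies
            return [random.choice(pool) if random.random() < mutation_rate else s
                    for s in next_gen]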

    Autonomous Recovery Of Reconfigurable Logic Devices Using Priority Escalation Of Slack

    Field Programmable Gate Array (FPGA) devices offer a suitable platform for survivable hardware architectures in mission-critical systems. In this dissertation, active dynamic redundancy-based fault-handling techniques are proposed which exploit the dynamic partial reconfiguration capability of SRAM-based FPGAs. Self-adaptation is realized by employing reconfiguration in the detection, diagnosis, and recovery phases. To extend these concepts to semiconductor aging and process variation in the deep-submicron era, resilient adaptable processing systems are sought to maintain quality and throughput requirements despite the vulnerabilities of the underlying computational devices. A new approach to autonomous fault-handling which addresses these goals is developed using only a uniplex hardware arrangement. It operates by observing a health metric to achieve Fault Demotion using Reconfigurable Slack (FaDReS). Here, an autonomous fault isolation scheme is employed which neither requires test vectors nor suspends the computational throughput, but instead observes the value of a health metric based on runtime input. The deterministic flow of the fault isolation scheme guarantees success in a bounded number of reconfigurations of the FPGA fabric. FaDReS is then extended to the Priority Using Resource Escalation (PURE) online redundancy scheme, which considers fault-isolation latency and throughput trade-offs under a dynamic spare arrangement. While deep-submicron designs introduce new challenges, the use of adaptive techniques is seen to provide several promising avenues for improving resilience. The scheme developed is demonstrated by hardware design of various signal processing circuits and their implementation on a Xilinx Virtex-4 FPGA device. These include a Discrete Cosine Transform (DCT) core, Motion Estimation (ME) engine, Finite Impulse Response (FIR) filter, Support Vector Machine (SVM), and Advanced Encryption Standard (AES) blocks, in addition to MCNC benchmark circuits. A significant reduction in power consumption is achieved, ranging from 83% for low motion-activity scenes to 12.5% for high motion-activity video scenes in a novel ME engine configuration. For a typical benchmark video sequence, PURE is shown to maintain a PSNR baseline near 32 dB. The diagnosability, reconfiguration latency, and resource overhead of each approach are analyzed. Compared to previous alternatives, PURE maintains a PSNR within 4.02 dB to 6.67 dB of the fault-free baseline by escalating healthy resources to higher-priority signal processing functions. The results indicate the benefits of priority-aware resiliency over conventional redundancy approaches in terms of fault recovery, power consumption, and resource-area requirements. Together, these provide a broad range of strategies to achieve autonomous recovery of reconfigurable logic devices under a variety of constraints, operating conditions, and optimization criteria.
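
    The exact FaDReS isolation flow is not given in the abstract; the sketch below only illustrates how a deterministic divide-and-conquer search over reconfigurable partitions can bound the number of reconfigurations (here to roughly log2 of the partition count), assuming a single faulty partition and a boolean health check (both assumptions mine):

        def isolate_fault(partitions, passes_health_check):
            """Divide-and-conquer fault isolation (illustrative sketch).
            partitions: list of reconfigurable-partition IDs
            passes_health_check(subset): stands in for reconfiguring the fabric so
            that only `subset` carries the computation, then sampling the runtime
            health metric."""
            suspects = list(partitions)
            reconfigs = 0
            while len(suspects) > 1:
                half = suspects[:len(suspects) // 2]
                reconfigs += 1
                # if this half fails the check, the fault is inside it
                suspects = half if not passes_health_check(half) else suspects[len(half):]
            return suspects[0], reconfigs  # faulty partition, reconfigurations used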

    Emerging Communications for Wireless Sensor Networks

    Wireless sensor networks are deployed in a rapidly increasing number of arenas, with uses ranging from healthcare monitoring to industrial and environmental safety, as well as new ubiquitous computing devices that are becoming ever more pervasive in our interconnected society. This book presents a range of exciting developments in software communication technologies, including some novel applications, such as in high-altitude systems, ground heat exchangers and body sensor networks. Authors from leading institutions on four continents present their latest findings in the spirit of exchanging information and stimulating discussion in the WSN community worldwide.

    Unified architecture of mobile ad hoc network security (MANS) system

    In this dissertation, a unified architecture for a Mobile Ad-hoc Network Security (MANS) system is proposed, under which IDS agents, authentication, recovery, and other policies can be defined formally and explicitly and enforced by a uniform architecture. A new authentication model for high-value transactions in cluster-based MANETs is also designed within the MANS system. This model is motivated by previous work but aims to retain its strengths while avoiding its shortcomings: it uses threshold sharing of the certificate signing key within each cluster to distribute the certificate services, and uses certificate chains and a certificate repository to achieve better scalability, lower overhead and better security performance. An Intrusion Detection System is installed in every node; it is responsible for collecting local data from its host node and from neighbor nodes within its communication range, pre-processing the raw data and periodically broadcasting it to its neighborhood, and classifying behavior as normal or abnormal based on the pre-processed data from its host node and neighbor nodes. Security recovery policy in ad-hoc networks is the procedure of making a global decision according to messages received from the distributed IDS agents and restoring the whole system to operational health whenever any user or host conducts inappropriate, incorrect, or anomalous activities that threaten the connectivity or reliability of the network or the authenticity of its data traffic. Finally, a quantitative risk assessment model is proposed to numerically evaluate MANS security.
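
    The abstract names threshold sharing of the certificate signing key; Shamir's (k, n) secret sharing is the standard construction for this, sketched below (a generic illustration, not the dissertation's exact scheme; the field modulus and function names are assumptions):

        import random

        PRIME = 2**127 - 1  # Mersenne prime; shares live in GF(PRIME), secret < PRIME

        def split_key(secret, k, n):
            """Split `secret` into n shares so that any k of them recover it."""
            coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
            poly = lambda x: sum(c * pow(x, i, PRIME)
                                 for i, c in enumerate(coeffs)) % PRIME
            return [(x, poly(x)) for x in range(1, n + 1)]

        def recover_key(shares):
            """Lagrange interpolation at x = 0 over GF(PRIME), given k shares."""
            secret = 0
            for xi, yi in shares:
                num, den = 1, 1
                for xj, _ in shares:
                    if xj != xi:
                        num = (num * -xj) % PRIME
                        den = (den * (xi - xj)) % PRIME
                # pow(den, PRIME - 2, PRIME) is the modular inverse (Fermat)
                secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
            return secret

    With k-of-n sharing, any k cluster members can jointly provide the certificate service, while fewer than k compromised nodes learn nothing about the key.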

    Reliable many-to-many routing in wireless sensor networks using ant colony optimisation

    A Wireless Sensor Network (WSN) consists of many simple sensor nodes gathering information, such as air temperature or pollution. Nodes have limited energy resources and computational power. Generally, a WSN consists of source nodes that sense data and sink nodes that require data to be delivered to them; nodes communicate wirelessly to deliver data between them. Reliability is a concern since, due to energy constraints and adverse environments, it is expected that nodes will become faulty. Thus, it is essential to create fault-tolerant routing protocols that can recover from faults and deliver sensed data efficiently. Networks with a single sink are often studied. However, as applications become increasingly sophisticated, WSNs with multiple sources and multiple sinks become increasingly prevalent, yet this problem is much less studied. Unfortunately, current solutions for such networks are heuristics based on specific network properties, such as the number of sources and sinks. It is beneficial to develop efficient (fault-tolerant) routing protocols that are independent of network architecture. As such, the use of metaheuristics is advocated. Presented is a solution for efficient many-to-many routing using the metaheuristic Ant Colony Optimisation (ACO). The contributions are: (i) a distributed ACO-based many-to-many routing protocol, (ii) a fault-tolerant ACO-based routing protocol for many-to-many WSNs using the novel concept of beacon ants, and (iii) demonstrations of how the same framework can be used to generate a routing protocol based on the minimum Steiner tree. Results show that, generally, few message packets are sent, so nodes deplete energy more slowly, leading to longer network lifetimes. The protocol is scalable, becoming more efficient with increasing node counts as routes are proportionally shorter compared to network size. The fault-tolerant variant is shown to recover from failures while remaining efficient and successful at continuously delivering data. The ACO-based framework is used to create Steiner trees in WSNs, an NP-hard problem with many potential applications. The ACO concept provides the basis for a framework that enables the generation of efficient routing protocols that can solve numerous problems without changing the ACO concept. Results show the protocols are scalable, efficient, and can successfully deliver data in numerous different topologies.
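
    The thesis's protocol details are not in the abstract, but the core ACO mechanic it builds on is pheromone evaporation plus path reinforcement, sketched below (parameter names and the deposit rule are textbook defaults, assumed here for illustration):

        def update_pheromone(pheromone, ant_paths, evaporation=0.1, q=1.0):
            """One iteration of the standard ACO pheromone update.
            pheromone: dict (u, v) -> pheromone level on the link u -> v
            ant_paths: node sequences (each at least two nodes) traversed this
            iteration by the ants."""
            for edge in pheromone:
                pheromone[edge] *= (1.0 - evaporation)   # old trails fade
            for path in ant_paths:
                deposit = q / (len(path) - 1)            # shorter paths deposit more per hop
                for u, v in zip(path, path[1:]):
                    pheromone[(u, v)] = pheromone.get((u, v), 0.0) + deposit

    Forward ants then choose next hops with probability proportional to pheromone (optionally weighted by a heuristic such as remaining node energy), so good routes are reinforced while stale or faulty ones evaporate away.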

    Decompose and Conquer: Addressing Evasive Errors in Systems on Chip

    Modern computer chips comprise many components, including microprocessor cores, memory modules, on-chip networks, and accelerators. Such system-on-chip (SoC) designs are deployed in a variety of computing devices: from internet-of-things, to smartphones, to personal computers, to data centers. In this dissertation, we discuss evasive errors in SoC designs and how these errors can be addressed efficiently. In particular, we focus on two types of errors: design bugs and permanent faults. Design bugs originate from the limited amount of time allowed for design verification and validation. Thus, they are often found in functional features that are rarely activated. Complete functional verification, which can eliminate design bugs, is extremely time-consuming, thus impractical in modern complex SoC designs. Permanent faults are caused by failures of fragile transistors in nano-scale semiconductor manufacturing processes. Indeed, weak transistors may wear out unexpectedly within the lifespan of the design. Hardware structures that reduce the occurrence of permanent faults incur significant silicon area or performance overheads, thus they are infeasible for most cost-sensitive SoC designs. To tackle and overcome these evasive errors efficiently, we propose to leverage the principle of decomposition to lower the complexity of the software analysis or the hardware structures involved. To this end, we present several decomposition techniques, specific to major SoC components. We first focus on microprocessor cores, by presenting a lightweight bug-masking analysis that decomposes a program into individual instructions to identify if a design bug would be masked by the program's execution. We then move to memory subsystems: there, we offer an efficient memory consistency testing framework to detect buggy memory-ordering behaviors, which decomposes the memory-ordering graph into small components based on incremental differences. We also propose a microarchitectural patching solution for memory subsystem bugs, which augments each core node with a small amount of distributed programmable logic, instead of including a global patching module. In the context of on-chip networks, we propose two routing reconfiguration algorithms that bypass faulty network resources. The first computes short-term routes in a distributed fashion, localized to the fault region. The second decomposes application-aware routing computation into simple routing rules so as to quickly find deadlock-free, application-optimized routes in a fault-ridden network. Finally, we consider general accelerator modules in SoC designs. When a system includes many accelerators, there are a variety of interactions among them that must be verified to catch buggy interactions. To this end, we decompose such inter-module communication into basic interaction elements, which can be reassembled into new, interesting tests. Overall, we show that the decomposition of complex software algorithms and hardware structures can significantly reduce overheads: up to three orders of magnitude in the bug-masking analysis and the application-aware routing, approximately 50 times in the routing reconfiguration latency, and 5 times on average in the memory-ordering graph checking. These overhead reductions come with losses in error coverage: 23% undetected bug-masking incidents, 39% non-patchable memory bugs, and occasionally we overlook rare patterns of multiple faults.
    In this dissertation, we discuss these ideas and their trade-offs, and present future research directions.
    PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/147637/1/doowon_1.pd
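
    As a flavor of the decomposition principle, the sketch below splits an ordering graph into connected components so each small piece can be checked independently (a generic illustration only; the dissertation's incremental-difference decomposition is more specialized):

        from collections import defaultdict

        def connected_components(edges):
            """Return the connected components of an undirected graph given as
            (u, v) pairs; each component can then be verified in isolation."""
            adj = defaultdict(set)
            for u, v in edges:
                adj[u].add(v)
                adj[v].add(u)
            seen, components = set(), []
            for start in adj:
                if start in seen:
                    continue
                stack, comp = [start], set()
                while stack:                      # iterative DFS
                    node = stack.pop()
                    if node not in comp:
                        comp.add(node)
                        stack.extend(adj[node] - comp)
                seen |= comp
                components.append(comp)
            return components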

    A practical guide to design and assess a phylogenomic study

    Over the last decade, molecular systematics has undergone a change of paradigm as high-throughput sequencing now makes it possible to reconstruct evolutionary relationships using genome-scale datasets. The advent of 'big data' molecular phylogenetics provided a battery of new tools for biologists but simultaneously brought new methodological challenges. The increase in analytical complexity comes at the price of highly specific training in computational biology and molecular phylogenetics, resulting very often in a polarized accumulation of knowledge (technical on one side and biological on the other). Interpreting the robustness of genome-scale phylogenetic studies is not straightforward, particularly as new methodological developments have consistently shown that the general belief of 'more genes, more robustness' often does not apply, and because there is a range of systematic errors that plague phylogenomic investigations. This is particularly problematic because phylogenomic studies are highly heterogeneous in their methodology, and best practices are often not clearly defined. The main aim of this article is to present what I consider the ten most important points to take into consideration when planning a well-thought-out phylogenomic study and when evaluating the quality of published papers. The goal is to provide a practical step-by-step guide that can be easily followed by nonexperts and phylogenomic novices in order to assess the technical robustness of phylogenomic studies or improve the experimental design of a project.

    Using MapReduce Streaming for Distributed Life Simulation on the Cloud

    Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome the complexity challenge by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway’s Life according to a general MR streaming pattern. We chose Life because it is simple enough to serve as a testbed for MR’s applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms’ performance on Amazon’s Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique called strip partitioning can reduce the execution time of continuous life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
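
    The paper's optimized algorithms are not reproduced in the abstract; the pair of Hadoop-streaming-style scripts below sketches the baseline idea of computing one Game-of-Life generation as a single MR pass (file names and the cell encoding are illustrative assumptions; strip partitioning would further group rows into strips per mapper):

        # mapper.py -- reads one live cell "x y" per line; emits the cell itself
        # plus a count of 1 to each of its eight neighbours, keyed by "x,y"
        import sys

        for line in sys.stdin:
            x, y = map(int, line.split())
            print(f"{x},{y}\tALIVE")
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    if dx or dy:
                        print(f"{x + dx},{y + dy}\t1")

        # reducer.py -- streaming delivers records sorted by key, so all records
        # for one cell arrive together; apply Conway's rules (a dead cell is born
        # with exactly 3 neighbours, a live cell survives with 2 or 3)
        import sys
        from itertools import groupby

        records = (line.rstrip("\n").split("\t") for line in sys.stdin)
        for cell, group in groupby(records, key=lambda kv: kv[0]):
            values = [v for _, v in group]
            alive = "ALIVE" in values
            neighbours = sum(1 for v in values if v == "1")
            if neighbours == 3 or (alive and neighbours == 2):
                print(cell.replace(",", " "))    # live cell for the next generation

    Each generation is one MR job whose output feeds the next; strip partitioning reduces the cross-mapper boundary traffic that this naive cell-per-record layout incurs.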

    Autonomous self-repair systems: A thesis submitted in partial fulfilment of the requirements for the Degree of Doctor of Philosophy at Lincoln University

    Regeneration is an important and remarkable phenomenon in nature and plays a key role in living organisms, which are capable of recovering from trivial to serious injury to reclaim a fully functional state and pattern/anatomical homeostasis (equilibrium). Studying regeneration can help develop hypotheses for understanding regenerative mechanisms, along with advancing synthetic biology for regenerative medicine and the development of cancer and anti-ageing drugs. Further, it can contribute to nature-inspired computing for self-repair in other fields. However, despite decades of study, which mechanisms and algorithms are used in the regeneration process remains an open question. Therefore, the main goal of this thesis is to propose a comprehensive hypothetical conceptual framework with possible mechanisms and algorithms of biological regeneration that mimics the observed features of regeneration in living organisms and achieves body-wide immortality, similar to the planarian flatworm (about 20 mm long and 3 mm wide, living in both saltwater and freshwater). This is a problem of collective decision-making by the cells in an organism to achieve the high-level goal of restoring both anatomical and functional homeostasis. To fulfil this goal, the proposed framework contains three sub-frameworks corresponding to the three main objectives of the thesis: self-regeneration or self-repair (anatomical homeostasis) of a simple in silico tissue, then of a whole organism consisting of such tissues, based on simplified formats of cellular communication, and an extension to more realistic bioelectric communication for restoring both anatomical and bioelectric homeostasis. The first objective is to develop a simple tissue model that regenerates autonomously after damage. Accordingly, we present a computational framework for an autonomous self-repair system that allows for sensing, detecting and regenerating an artificial (in silico) circular tissue containing thousands of cells. This system consists of two sub-models, Global Sensing and Local Sensing, that collaborate to sense and repair diverse damage. It is largely a neural system with a perceptron (binary) network performing tissue computations. The results showed that the system is robust and efficient in damage detection and accurate regeneration. The second objective is to extend the simple circular tissue model to other geometric shapes and assemble them into a small virtual organism that regenerates similarly to the body-wide immortality of the planarian flatworm. Accordingly, we proposed a computational framework extending the tissue repair framework developed in Objective 1 to model whole-organism regeneration, implementing algorithms and mechanisms to achieve accurate and complete regeneration in an (in silico) worm-like organism. The system consists of two levels, tissue and organism, that integrate to recognise and recover from any damage, even extreme damage cases. The tissue level consists of three tissue repair models for the head, body and tail. The organism level connects the tissues together to form the worm. The two levels form an integrated neural feedback control system with perceptron (binary) networks for tissue computing and linear neural networks for organism-level computing. Our simulation results showed that the framework is very robust in returning the system to the normal state after any small- or large-scale damage.
    The last objective is to extend the whole-organism regeneration framework developed in Objective 2 by incorporating bioelectricity as the format of communication between cells, to make the model better resemble living organisms and to restore not only anatomy but also basic functionality, such as the body-wide bioelectric pattern needed for physiological functioning in living systems. We greatly extended the second framework by conceptualising and modelling mechanisms and algorithms that mimic both the pattern and function restoration observed in living organisms, and implemented them on the same artificial (in silico) organism developed in Objective 2 but with greater realism of the anatomical structure. This proposed framework consists of three levels that collaborate to fully regenerate the anatomical pattern and maintain bioelectric homeostasis in the in silico worm-like organism. These three levels comprise the tissue and organism models for regeneration and a body-wide bioelectric model for restoring bioelectric homeostasis. They extend the previous neural feedback control system to integrate a third level, bioelectric homeostasis. Our simulations showed that the system maintains and restores bioelectric homeostasis accurately under random perturbations of bioelectric status in no-damage conditions. It is also very robust and plastic in restoring the system to the normal anatomical pattern and bioelectric homeostasis after any type of damage. Our framework robustly reproduces some observed features of extreme regeneration in planaria, such as body-wide immortality. It could also be helpful in engineering for building self-repairing robots, biobots and artificial self-repair systems.
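
    The thesis describes its tissue models as perceptron (binary) networks; the minimal sketch below shows a binary perceptron of that general kind classifying cell-state readings as normal or damaged (the feature layout and training loop are illustrative assumptions, far simpler than the thesis's models):

        import numpy as np

        def train_perceptron(X, y, epochs=50, lr=1.0):
            """Classic perceptron learning rule.
            X: (samples, features) array of cell-state readings
            y: 0 = normal tissue state, 1 = damaged"""
            Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias input
            w = np.zeros(Xb.shape[1])
            for _ in range(epochs):
                for xi, target in zip(Xb, y):
                    prediction = int(xi @ w > 0)
                    w += lr * (target - prediction) * xi   # nudge toward the target
            return w

        def detect_damage(x, w):
            """1 if the cell-state vector x is classified as damaged."""
            return int(np.append(x, 1.0) @ w > 0)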