Biology Inspired Approach for Communal Behavior in Sensor Networks
Research in wireless sensor network technology has exploded in the last decade. Promises of complex and ubiquitous control of the physical environment by these networks open avenues for new kinds of science and business. Due to the small size and low cost of sensor devices, visionaries promise systems enabled by the deployment of massive numbers of sensors working in concert. Although the reduction in size has been phenomenal, it imposes severe limitations on the computing, communication, and power capabilities of these devices. Under these constraints, research efforts have concentrated on developing techniques for performing relatively simple tasks with minimal energy expense, assuming some form of centralized control. Unfortunately, centralized control does not scale to massive networks, and the execution of simple tasks in sparsely populated networks will not lead to the sophisticated applications predicted. These must be enabled by new techniques that depend on local, autonomous cooperation between sensors to effect global functions. As a step in that direction, in this work we detail a technique whereby a large population of sensors can attain a global goal using only local information and making only local decisions, without any form of centralized control.
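To make the idea concrete, a minimal sketch (an illustration under assumed mechanics, not the paper's actual algorithm) of attaining a global goal from purely local decisions is randomized pairwise gossip averaging: each sensor repeatedly averages its value with one randomly chosen peer, and every node converges to the global mean with no central coordinator.

```python
import random

def gossip_average(values, rounds=200, seed=0):
    """Pairwise gossip: each round, two random nodes average their
    local values. No node has global knowledge, yet every local
    estimate converges to the global mean."""
    rng = random.Random(seed)
    vals = list(values)
    n = len(vals)
    for _ in range(rounds):
        i, j = rng.sample(range(n), 2)        # one local interaction
        mean = (vals[i] + vals[j]) / 2.0
        vals[i] = vals[j] = mean              # mass-conserving update
    return vals

readings = [10.0, 20.0, 30.0, 40.0]
result = gossip_average(readings)
# every node's estimate approaches the true mean, 25.0
```

Because each pairwise update conserves the sum of values, the fixed point all nodes agree on is exactly the global average.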
Communal Cooperation in Sensor Networks for Situation Management
Situation management is a rapidly evolving science in which managed sources are processed as real-time streams of events and fused in a way that maximizes comprehension, thus enabling better decisions for action. Sensor networks provide a new technology that promises ubiquitous input and action throughout an environment, which can substantially improve the information available to the process. Here we describe a NASA program that requires improvements in both sensor networks and situation management. We present an approach for massively deployed sensor networks that does not rely on centralized control but is founded on lessons learned from the way biological ecosystems are organized. In this approach, fully distributed data aggregation and integration can be performed in a scalable fashion, where individual motes operate on local information, making local decisions that achieve globally meaningful results. This exemplifies the robust, fault-tolerant infrastructure required for successful situation management systems.
Robust and cheating-resilient power auctioning on Resource Constrained Smart Micro-Grids
The principle of Continuous Double Auctioning (CDA) is known to provide an efficient way of matching supply and demand among distributed, selfish participants with limited information. However, the literature indicates that the classic CDA algorithms developed for grid-like applications are centralised and insensitive to processing resource capacity, which hinders their application on resource-constrained smart micro-grids (RCSMG). An RCSMG loosely describes a micro-grid with distributed generation and demand controlled by selfish participants who have limited information, limited power storage capacity, and low literacy, and who communicate over an unreliable infrastructure burdened by limited bandwidth and devices with low computational power. In this thesis, we design and evaluate a CDA algorithm for power allocation in an RCSMG. Specifically, we offer the following contributions towards power auctioning on RCSMGs. First, we extend the original CDA scheme to enable decentralised auctioning. We do this by integrating a token-based mutual-exclusion (MUTEX) distributed primitive that ensures the CDA operates at reasonably efficient time and message complexities of O(N) and O(log N), respectively, per critical section invocation (auction market execution). Our CDA algorithm scales better and avoids the single point of failure associated with centralised CDAs (which could be exploited by an adversary to provoke a breakdown of the grid marketing mechanism). In addition, the decentralised approach in our algorithm can help eliminate the privacy and security concerns associated with centralised CDAs. Second, to handle CDA performance issues due to malfunctioning devices on an unreliable network (such as a lossy network), we extend our proposed CDA scheme to ensure robustness to failure. Using node redundancy, we modify the MUTEX protocol supporting our CDA algorithm to handle fail-stop and some Byzantine-type faults of sites.
This yields a time complexity of O(N), where N is the number of cluster-head nodes, and a message complexity of O((log N) + W), where W is the number of check-pointing messages. These results indicate that it is possible to add fault tolerance to a decentralised CDA, guaranteeing continued participation in the auction while retaining reasonable performance overheads. In addition, we propose a decentralised consumption scheduling scheme that complements the auctioning scheme in guaranteeing successful power allocation within the RCSMG. Third, since grid participants are self-interested, we must consider the issue of power theft provoked when participants cheat. We propose threat models centred on cheating attacks aimed at foiling the extended CDA scheme. More specifically, we focus on the Victim Strategy Downgrade, Collusion by Dynamic Strategy Change, Profiling with Market Prediction, and Strategy Manipulation cheating attacks, which are carried out by internal adversaries (auction participants). Internal adversaries are participants who want to gain more benefits but have no interest in provoking a breakdown of the grid; their behaviour is nonetheless dangerous because it could result in such a breakdown. Fourth, to mitigate these cheating attacks, we propose an exception handling (EH) scheme in which sentinel agents use allocative efficiency and message overheads to detect and mitigate cheating. Sentinel agents monitor trading agents to detect cheating and reprimand misbehaving participants. The expected message complexity under light demand is O(n log N), and the detection and resolution algorithm is expected to run in linear time, O(M). Overall, the main aim of our study is achieved by designing a resilient, cheating-resilient CDA algorithm that is scalable and performs well on resource-constrained micro-grids.
With the growing popularity of the CDA and its resource allocation applications, specifically to low-resourced micro-grids, this thesis highlights further avenues for future research. First, we intend to extend the decentralised CDA algorithm to allow participants’ mobile phones to connect (and reconnect) at different shared smart meters. Such mobility should preserve the desired CDA properties, reliability, and adequate security. Second, we seek to develop a simulation of the decentralised CDA based on the formal proofs presented in this thesis. Such a simulation platform can be used for future studies involving decentralised CDAs. Third, we seek to find an optimal and efficient way in which the decentralised CDA and the scheduling algorithm can be integrated and deployed in a low-resourced smart micro-grid. Such an integration is important for system developers interested in exploiting the benefits of the two schemes while maintaining system efficiency. Fourth, we aim to improve the cheating detection and mitigation mechanism by developing an intrusion tolerance protocol. Such a scheme will allow continued auctioning in the presence of cheating attacks while incurring low performance overheads, for applicability in an RCSMG.
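The core CDA matching rule the thesis builds on can be sketched in a few lines. This is a generic illustration of continuous double auctioning (the class names, unit quantities, and midpoint clearing rule are assumptions for the sketch, not the thesis design): orders rest in books, and a trade clears the moment the best bid meets or crosses the best ask.

```python
import heapq

class MiniCDA:
    """Minimal continuous double auction: a trade clears as soon as
    the best bid >= best ask. Single-unit orders; midpoint pricing
    is one common clearing convention, used here for illustration."""

    def __init__(self):
        self.bids = []  # max-heap via negated prices
        self.asks = []  # min-heap

    def submit(self, side, price, trader):
        """Add an order, then return any trades it triggers."""
        if side == "buy":
            heapq.heappush(self.bids, (-price, trader))
        else:
            heapq.heappush(self.asks, (price, trader))
        return self._match()

    def _match(self):
        trades = []
        # match continuously while the books cross
        while self.bids and self.asks and -self.bids[0][0] >= self.asks[0][0]:
            neg_bid, buyer = heapq.heappop(self.bids)
            ask, seller = heapq.heappop(self.asks)
            clearing = (-neg_bid + ask) / 2.0  # midpoint of crossing quotes
            trades.append((buyer, seller, clearing))
        return trades

cda = MiniCDA()
cda.submit("buy", 8.0, "A")           # rests in the bid book
cda.submit("sell", 10.0, "B")         # no cross yet
print(cda.submit("sell", 7.0, "C"))   # → [('A', 'C', 7.5)]
```

The thesis's contribution sits around this loop: a token-based MUTEX primitive decides which node may run the matching step, removing the central auctioneer this sketch still implies.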
An Embryonics Inspired Architecture for Resilient Decentralised Cloud Service Delivery
Data-driven artificial intelligence applications arising from Internet of Things (IoT) technologies can have profound, wide-reaching societal benefits at the cross-section of the cyber and physical domains, and use cases are expanding rapidly. For example, smart homes and smart buildings provide intelligent monitoring, resource optimisation, safety, and security for their inhabitants; smart cities can manage transport, waste, energy, and crime on large scales; and smart manufacturing can autonomously produce goods through the self-management of factories and logistics. As these use cases expand further, the requirement to ensure data is processed accurately and in a timely manner becomes ever more crucial, as many of these applications are safety critical, with loss of life and economic damage likely in the event of system failure. While the typical service delivery paradigm, cloud computing, benefits from economies of scale, its physical distance from these applications creates network latency that is incompatible with safety-critical applications. To complicate matters further, the environments in which these applications operate are becoming increasingly hostile, with resource-constrained, mobile wireless networking commonplace. These issues drive the need for new service delivery architectures that operate closer to, or even upon, the network devices, sensors, and actuators that compose IoT applications at the network edge. Such hostile and resource-constrained environments require adapting traditional cloud service delivery models to decentralised mobile and wireless settings. The resulting architectures need to provide persistent service delivery in the face of a variety of internal and external changes: resilient decentralised cloud service delivery.
While the current state of the art proposes numerous techniques to enhance the resilience of services in this manner, none provides an architecture capable of delivering inherently resilient, cloud-style data processing services. Adopting techniques from autonomic computing, whose characteristics are resilient by nature, this thesis presents a biologically inspired platform modelled on embryonics. Embryonic systems have the ability to self-heal and self-organise whilst showing the capacity to support decentralised data processing. An initial model for embryonics-inspired resilient decentralised cloud service delivery is derived according to both the decentralised cloud and resilience requirements given for this work. Next, this model is simulated using cellular automata, which illustrate the embryonic concept's ability to provide self-healing service delivery under varying degrees of system component loss. The simulation highlights several optimisation techniques, including application complexity bounds, differentiation optimisation, self-healing aggression, and varying system starting conditions, all of which can be adjusted to vary the resilience performance of the system according to resource capabilities and environmental hostility.
Next, a proof-of-concept implementation is developed and validated to illustrate the efficacy of the solution. This proof of concept is evaluated at a larger scale, where batches of tests highlighted the different performance criteria and constraints of the system. One key finding was the considerable quantity of redundant messages produced under successful scenarios; these messages help enable resilience yet can increase network contention, so balancing these attributes is important for each use case. Finally, graph-based resilience algorithms were executed across all tests to understand the structural resilience of the system and whether this enabled suitable measurement or prediction of the application's resilience. Interestingly, this study highlighted that although the system was not considered structurally resilient, applications were still being executed in the face of many continued component failures, showing that the autonomic embryonic functionality developed was succeeding in executing applications resiliently and illustrating that structural and application resilience do not necessarily coincide. Additionally, one graph metric, assortativity, was found to be predictive of application resilience, although not of structural resilience.
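The embryonic principle of self-healing through differentiation can be sketched simply. In this illustration (the genome contents and role names are invented for the example, not taken from the thesis), every cell carries the full "genome" of application roles and derives its own role from its rank among the surviving cells, so the role set is re-covered automatically after a failure while spare capacity remains.

```python
def assign_roles(cells_alive, genome):
    """Embryonics-style differentiation sketch: each cell holds the
    whole genome (all application roles) and takes the role matching
    its rank among live cells, so coverage self-heals after failures."""
    live = [c for c, alive in enumerate(cells_alive) if alive]
    return {cell: (genome[rank] if rank < len(genome) else "spare")
            for rank, cell in enumerate(live)}

genome = ["sense", "aggregate", "report"]   # the application's roles
alive = [True] * 6                          # six cells, three spares

print(assign_roles(alive, genome))          # cells 0-2 active, 3-5 spare
alive[1] = False                            # cell 1 fails
print(assign_roles(alive, genome))          # cell 3 differentiates into "report"
```

The application keeps all three roles covered until fewer cells survive than there are roles, mirroring the graceful degradation observed in the thesis's cellular-automata simulations.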
Performance Evaluation Metrics for Cloud, Fog and Edge Computing: A Review, Taxonomy, Benchmarks and Standards for Future Research
Optimization is an inseparable part of Cloud computing, particularly with the emergence of the Fog and Edge paradigms. Not only do these emerging paradigms demand reevaluating cloud-native optimizations and exploring Fog- and Edge-based solutions, but the objectives also require a significant shift from considering only latency to considering energy, security, reliability, and cost. Hence, optimization objectives have become diverse, and newly emerging Internet of Things (IoT)-specific objectives must come into play. This is critical, as incorrect selection of metrics can mislead the developer about real performance. For instance, a latency-aware auto-scaler must be evaluated through latency-related metrics such as response time or tail latency; otherwise the resource manager is not properly evaluated, even if it reduces cost. Given such challenges, researchers and developers struggle to explore and utilize the right metrics to evaluate the performance of optimization techniques such as task scheduling, resource provisioning, resource allocation, resource scheduling, and resource execution. This is challenging due to (1) the novel, multi-layered computing paradigms, e.g., Cloud, Fog, and Edge; (2) IoT applications with different requirements, e.g., latency or privacy; and (3) the lack of benchmarks and standards for evaluation metrics. In this paper, by exploring the literature, (1) we present a taxonomy of the various real-world metrics used to evaluate the performance of cloud, fog, and edge computing; (2) we survey the literature to recognize common metrics and their applications; and (3) we outline open issues for future research. This comprehensive benchmark study can significantly assist developers and researchers in evaluating performance under realistic metrics and standards, ensuring their objectives will be achieved in production environments.
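The response-time versus tail-latency distinction the survey stresses is easy to demonstrate. A minimal sketch (the index-based p99 convention below is one of several common percentile definitions, chosen for illustration) shows how a healthy-looking mean can hide rare slow requests that only a tail metric exposes:

```python
def latency_metrics(samples_ms):
    """Mean response time and p99 tail latency of a latency sample.
    The mean can look healthy while p99 exposes stragglers."""
    xs = sorted(samples_ms)
    mean = sum(xs) / len(xs)
    # index-based p99 (integer arithmetic; one common convention)
    idx = min(len(xs) - 1, (99 * len(xs)) // 100)
    return {"mean_ms": round(mean, 2), "p99_ms": xs[idx]}

samples = [10] * 99 + [500]      # one straggler in a hundred requests
print(latency_metrics(samples))  # mean stays near 15 ms; p99 reveals 500 ms
```

This is exactly why a latency-aware auto-scaler evaluated only on mean response time (or only on cost) can appear successful while violating its actual objective.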
Biology-Inspired Approach for Communal Behavior in Massively Deployed Sensor Networks
Research in wireless sensor networks has accelerated rapidly in recent years. The promise of ubiquitous control of the physical environment opens the way for new applications that will redefine the way we live and work. Due to the small size and low cost of sensor devices, visionaries promise smart systems enabled by the deployment of massive numbers of sensors working in concert. To date, most research effort has concentrated on forming ad hoc networks under centralized control, which does not scale to massive deployments. This thesis proposes an alternative approach based on models inspired by biological systems and reports significant results based on this new approach. This perspective views sensor devices as autonomous organisms in a community interacting as part of an ecosystem, rather than as nodes in a computing network. The networks that result from this design make local decisions based on local information in order to achieve global goals; thus we must engineer for emergent behavior in wireless sensor networks. First, we implemented a simulator based on cellular automata to be used in algorithm development and assessment. Then we developed efficient algorithms that exploit emergent behavior to find the average of distributed values, synchronize distributed clocks, and conduct distributed binary voting. These algorithms are shown to be convergent and efficient by analysis and simulation. Finally, an extension of this perspective is used to make significant progress on the noise abatement problem for jet aircraft. Using local information and actions, optimal impedance values for an acoustic liner are determined in situ, providing the basis for an adaptive noise abatement system that achieves superior noise reduction compared with current technology and previous research efforts.
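The distributed binary voting task mentioned above can be illustrated with a simple emergent scheme (a generic sketch under assumed ring topology and synchronous updates, not the thesis's algorithm): each node repeatedly averages its 0/1 vote with its two ring neighbours, the values converge to the global vote fraction, and thresholding at 1/2 gives every node the majority outcome using only local information.

```python
def ring_vote(votes, steps=300):
    """Binary voting by local averaging on a ring: each node repeatedly
    averages its 0/1 vote with its two neighbours. All values converge
    to the global vote fraction, so thresholding at 1/2 recovers the
    majority everywhere. (A tied vote falls to 0 under this threshold.)"""
    v = [float(x) for x in votes]
    n = len(v)
    for _ in range(steps):
        # synchronous local update: mean of self and both ring neighbours
        v = [(v[(i - 1) % n] + v[i] + v[(i + 1) % n]) / 3.0
             for i in range(n)]
    return [1 if x > 0.5 else 0 for x in v]

print(ring_vote([1, 1, 0, 1, 0, 1, 0, 1]))  # majority of ones wins everywhere
```

Because the update rule is symmetric and mass-conserving, it is the same convergence-to-average mechanism that supports the distributed averaging and clock synchronization tasks.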