72 research outputs found

    Scheduling And Resource Management For Complex Systems: From Large-scale Distributed Systems To Very Large Sensor Networks

    In this dissertation, we focus on multiple levels of optimized resource management techniques. We first consider a classic resource management problem, namely the scheduling of data-intensive applications. We define the Divisible Load Scheduling (DLS) problem, outline a system model based on the assumption that data staging and all communication with the sites can be done in parallel, and introduce a set of optimal divisible load scheduling algorithms and the related fault-tolerant coordination algorithm. The DLS algorithms introduced in this dissertation exploit parallel communication, consider realistic scenarios regarding the time when heterogeneous computing systems are available, and generate optimal schedules. Performance studies show that these algorithms perform better than divisible load scheduling algorithms based upon sequential communication. We have also developed a self-organization model for resource management in distributed systems consisting of a very large number of sites with excess computing capacity. This self-organization model is inspired by biological metaphors and uses the concept of varying energy levels to express activity and goal satisfaction. The model is applied to Pleiades, a service-oriented architecture based on resource virtualization. The self-organization model for complex computing and communication systems is then applied to Very Large Sensor Networks (VLSNs). An algorithm for the self-organization of anonymous sensor nodes called SFSN (Scale-free Sensor Networks) and an algorithm utilizing the Small-worlds principle called SWAS (Small-worlds of Anonymous Sensors) are introduced. The SFSN algorithm is designed for VLSNs consisting of a fairly large number of inexpensive sensors with limited resources. An important feature of the algorithm is its ability to interconnect sensors lacking the identity, or physical address, used by traditional communication and coordination protocols. During the self-organization phase, collision-free communication channels are established that allow a sensor to synchronously forward information to the members of its proximity set; this communication pattern is then followed during the activity phases. A simulation study shows that SFSN ensures scalability and limits both the amount of communication and the complexity of coordination. The SWAS algorithm improves on SFSN by applying the Small-worlds principle. It is unique in its ability to create a sensor network with a topology approximating small-world networks. Rather than creating shortcuts between pairs of diametrically positioned nodes in a logical ring, we end up with something resembling a double-stranded DNA. By exploiting the Small-worlds principle we combine two desirable features of networks, namely high clustering and small path length.
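
    To make the parallel-communication assumption concrete, the toy calculation below splits a divisible workload among heterogeneous workers that all receive their chunks concurrently; equalizing their finish times yields load fractions proportional to 1/(z_i + w_i). This is only an illustrative sketch of the general idea, not the dissertation's DLS algorithms, and the parameter names z (per-unit communication time) and w (per-unit computation time) are assumptions made for the example.

```python
# Toy divisible-load split under the parallel-communication assumption:
# every worker can be sent its chunk at the same time, so worker i finishes
# at t_i = alpha_i * W * (z_i + w_i), where z_i is its per-unit transfer
# time and w_i its per-unit compute time (hypothetical parameters, not the
# dissertation's notation). Equalizing all t_i gives the split below.

def parallel_divisible_split(z, w):
    """Return load fractions alpha_i that equalize worker finish times."""
    inv = [1.0 / (zi + wi) for zi, wi in zip(z, w)]
    total = sum(inv)
    return [v / total for v in inv]

if __name__ == "__main__":
    z = [0.2, 0.5, 0.1]          # per-unit communication times (invented)
    w = [1.0, 0.8, 1.5]          # per-unit computation times (invented)
    alpha = parallel_divisible_split(z, w)
    W = 1000.0                   # total workload, arbitrary units
    finish = [a * W * (zi + wi) for a, zi, wi in zip(alpha, z, w)]
    print(alpha, finish)         # all finish times are equal by construction
```

    Equal finish times are the usual optimality condition in this simplified setting: if any worker finished earlier than the others, it could profitably take on more load.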

    Achieving Energy Efficiency on Networking Systems with Optimization Algorithms and Compressed Data Structures

    To cope with the increasing quantity, capacity and energy consumption of transmission and routing equipment in the Internet, the energy efficiency of communication networks has attracted more and more attention from researchers around the world. In this dissertation, we propose three methodologies for achieving energy efficiency on networking devices: heuristics for NP-complete optimization problems, compressed data structures, and a combination of the two. We first consider the problem of achieving energy efficiency in Data Center Networks (DCNs). We generalize the energy-efficient networking problem in data centers as an optimal flow assignment problem, which is NP-complete, and then propose a heuristic called CARPO, a correlation-aware power optimization algorithm that dynamically consolidates traffic flows onto a small set of links and switches in a DCN and then shuts down unused network devices for power savings. We then achieve energy efficiency on Internet routers by using a compressed data structure. We propose a novel data structure called the Probabilistic Bloom Filter (PBF), which extends the classical Bloom filter in a probabilistic direction so that it can effectively identify heavy hitters with a small memory footprint, reducing the energy consumption of network measurement. To achieve energy efficiency in Wireless Sensor Networks (WSNs), we develop a data collection protocol called EDAL, which stands for Energy-efficient Delay-aware Lifetime-balancing data collection. Based on the Open Vehicle Routing problem, EDAL exploits the topology requirements of Compressive Sensing (CS) and then implements CS to save more energy on sensor nodes.
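
    As a rough illustration of how a compact probabilistic structure can flag heavy hitters within a small, fixed memory footprint, the sketch below uses a generic count-min sketch rather than the PBF itself, whose internals are not detailed here; the width, depth and threshold values are arbitrary placeholders.

```python
# Not the dissertation's PBF: a generic count-min sketch, shown only to
# illustrate small-memory heavy-hitter detection on a packet stream.
import hashlib

class CountMinSketch:
    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _indexes(self, key):
        # one hashed column per row, derived from a salted digest
        for row in range(self.depth):
            h = hashlib.sha256(f"{row}:{key}".encode()).hexdigest()
            yield row, int(h, 16) % self.width

    def add(self, key, count=1):
        for row, col in self._indexes(key):
            self.table[row][col] += count

    def estimate(self, key):
        # minimum over rows upper-bounds the true count with high probability
        return min(self.table[row][col] for row, col in self._indexes(key))

sketch = CountMinSketch()
for flow in ["10.0.0.1"] * 500 + ["10.0.0.2"] * 3:   # invented flow IDs
    sketch.add(flow)
threshold = 100
print([f for f in ("10.0.0.1", "10.0.0.2") if sketch.estimate(f) >= threshold])
```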

    Topics in Power Usage in Network Services

    The rapid advance of computing technology has created a world powered by millions of computers. Often these computers sit idle, consuming energy unnecessarily in spite of all the efforts of hardware manufacturers. This thesis examines proposals to determine when to power down computers without negatively impacting the service they are used to deliver, compares and contrasts the efficiency of virtualisation with containerisation, and investigates the energy efficiency of the popular cryptocurrency Bitcoin. We begin by examining the current corpus of literature and defining the key terms we need to proceed. Then we propose a technique for improving the energy consumption of servers by moving them into a sleep state and employing a low-powered device to act as a proxy in their place. After this we investigate the energy efficiency of virtualisation and compare the energy efficiency of two of the most common means used to achieve it. Moving on from this, we look at the cryptocurrency Bitcoin. We consider the energy consumption of Bitcoin mining and whether, set against the value of Bitcoin, mining is profitable. Finally we conclude by summarising the results and findings of this thesis. This work increases our understanding of some of the challenges of energy-efficient computation and proposes novel mechanisms to save energy.
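
    The proxying idea can be sketched as follows: a low-powered, always-on device accepts connections on behalf of the sleeping server and wakes it with a standard Wake-on-LAN magic packet. This is a minimal illustration rather than the thesis's actual mechanism; the addresses, ports and MAC address are placeholders.

```python
# Minimal sleep-proxy sketch: listen for an incoming request, then wake the
# sleeping server with a Wake-on-LAN magic packet (6 x 0xFF followed by the
# target MAC repeated 16 times). Addresses and ports are placeholders.
import socket

def wake(mac: str, broadcast: str = "192.168.1.255", port: int = 9) -> None:
    """Send a Wake-on-LAN magic packet to the sleeping host."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

def proxy_loop(listen_port: int = 8080, server_mac: str = "aa:bb:cc:dd:ee:ff"):
    """Accept one connection on behalf of the sleeping server, then wake it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("", listen_port))
        srv.listen(1)
        conn, addr = srv.accept()      # a request arrived for the sleeping host
        conn.close()
        wake(server_mac)               # the real server resumes and takes over
```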

    Energy aware task allocation algorithms for wireless sensor networks

    Complex wireless sensor network (WSN) applications, such as those in the Internet of Things or in-network processing, are drastically pushing the requirements of energy efficiency and long-term operation of the network. Energy-aware task allocation becomes crucial to extend the network lifetime by efficiently distributing the tasks of applications among sensor nodes. Although task allocation has been studied in depth for wired systems, the resulting approaches are insufficient for WSNs due to the limited battery resources and computing capability of WSN nodes, as well as the particular characteristics of wireless communication. This work focuses on designing energy-aware task allocation algorithms to extend the network lifetime of WSNs. More precisely, this work first proposes a centralized static task allocation algorithm (CSTA) for cluster-based WSNs. Since a WSN application can be modeled by a directed acyclic graph (DAG), the task allocation problem is formulated as partitioning the modeled DAG into two subgraphs: one for the slave node and the other for the master node. By using a binary vector variable to represent the partition cut, CSTA formulates the problem of maximizing network lifetime as a binary integer linear programming (BILP) problem. It provides one fixed, time-invariant partition cut (task allocation solution) for each slave node to balance the workload distribution of tasks. Moreover, motivated by the fact that using multiple partition cuts can achieve a more balanced workload distribution, this work extends CSTA to a centralized dynamic task allocation algorithm, CDTA. By using a probability vector variable to stand for partition cuts with different weights, CDTA formulates the dynamic task allocation problem as a linear programming (LP) problem. Due to the high complexity of the centralized algorithms, this work further proposes a very lightweight distributed optimal on-line task allocation algorithm (DOOTA). Through an in-depth analysis, it proves that the optimal task allocation solution consists of at most two partition cuts for each slave node. Based on this analysis, DOOTA enables each slave node to calculate its own optimal task allocation solution by negotiating with the master node within a very short time. These contributions significantly improve application performance not only for WSNs but also for other domains, e.g., mobile edge/fog computing. Furthermore, the proposed task allocation algorithms are extended for different task scenarios and network structures, i.e., applications with conditional tasks, joint local and global applications, and multi-hop mesh networks. A condition-triggered application is modeled by a DAG with conditional branches; this conditional DAG is further decomposed into multiple stationary DAGs without conditional branches according to the satisfaction probability of each condition. Based on this modeling, a static and a dynamic condition-triggered task allocation algorithm (SCTTA and DCTTA) are proposed that consider the multiple stationary DAGs simultaneously. Targeting joint local and global applications, this work designs a static and a dynamic joint task allocation algorithm, SJTA and DJTA, based on BILP and LP, respectively. The modeling of the local task allocation problem does not change, while the global task allocation problem is modeled by dividing the global DAG into different subgraphs mapped to the slave and master nodes. Besides the extensions for different task scenarios, this work also presents a dynamic task allocation algorithm for multi-hop mesh networks (DTA-mhop). The corresponding task allocation problem is modeled by dividing the DAG of each sensor node into multiple subgraphs mapped to itself, the routing nodes and the sink node. By using the summation of tasks assigned to each node, DTA-mhop formulates lifetime maximization as an LP problem. The proposed task allocation algorithms are first evaluated using simulations and real WSN applications, in terms of network lifetime increase and algorithm runtime. In order to investigate the algorithms' performance in realistic scenarios, the CSTA, CDTA and DOOTA algorithms are implemented in a real WSN based on the OpenMote platform. Both the simulation and implementation results show that the network lifetime can be dramatically extended. Remarkably, the network lifetime improvements are more significant for complex applications. The proposed task allocation algorithms are therefore suitable for WSNs, and they can also be easily adapted to other wireless domains.
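
    The flavour of the static partitioning problem can be illustrated with a deliberately tiny brute-force example: pick the precedence-respecting cut of an application DAG that minimizes the slave node's energy drain per round and hence maximizes its lifetime. This is not the BILP formulation used by CSTA, and all task names, data volumes and energy constants below are invented for the example.

```python
# Brute-force illustration of slave/master DAG partitioning for lifetime:
# a cut is valid if the sensing task stays on the slave and every slave task
# has all its predecessors on the slave; crossing edges cost radio energy.
from itertools import combinations

tasks = ["sample", "filter", "fft", "classify"]           # toy application DAG
edges = {("sample", "filter"): 400,                        # bytes exchanged
         ("filter", "fft"): 40,
         ("fft", "classify"): 8}
cpu_cost = {"sample": 1.0, "filter": 2.0, "fft": 5.0, "classify": 4.0}
E_CPU, E_TX, BATTERY = 0.5, 0.02, 10_000.0                 # hypothetical units

def preds(t):
    return [u for (u, v) in edges if v == t]

def valid(slave_set):
    # the sensing task must stay on the slave, and a task may run on the
    # slave only if all of its predecessors do too
    return "sample" in slave_set and all(set(preds(t)) <= slave_set
                                         for t in slave_set)

def lifetime(slave_set):
    compute = sum(cpu_cost[t] for t in slave_set) * E_CPU
    radio = sum(b for (u, v), b in edges.items()
                if u in slave_set and v not in slave_set) * E_TX
    return BATTERY / (compute + radio)

best = max((frozenset(c) for r in range(len(tasks) + 1)
            for c in combinations(tasks, r) if valid(frozenset(c))),
           key=lifetime)
print(sorted(best), lifetime(best))   # e.g. filter locally, ship the rest
```

    With the invented numbers, compressing the raw samples locally ("sample" and "filter" on the slave) beats both shipping raw data and computing everything on the node, which is the kind of trade-off the BILP formulation resolves exactly.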

    Scheduling Tasks on Intermittently-Powered Real-Time Systems

    Batteryless systems go through sporadic power-on and power-off phases due to intermittently available energy; thus, they are called intermittent systems. Unfortunately, this intermittence in the power supply hinders the timely execution of tasks and limits such devices' potential in certain application domains, e.g., healthcare and livestock tracking. Unlike prior work on time-aware intermittent systems that focuses on timekeeping [1, 2, 3] and discarding expired data [4], this dissertation concentrates on finishing task execution on time. I leverage the data processing and control layer of batteryless systems by developing frameworks that (1) integrate energy harvesting and real-time systems, (2) rethink machine learning algorithms for an energy-aware imprecise task scheduling framework, (3) develop scheduling algorithms that, along with deciding what to compute, answer when to compute and when to harvest, and (4) utilize distributed systems that collaboratively emulate a persistently powered system.

    Scheduling Framework for Intermittently Powered Computing Systems. Batteryless systems rely on sporadically available harvestable energy. For example, kinetic-powered motion detector sensors on impalas can only harvest energy when the impalas are moving, which cannot be ascertained in advance. This uncertainty poses a unique real-time scheduling problem where existing real-time algorithms fail due to the interruption in execution time. This dissertation proposes a unified scheduling framework that includes both harvesting and computing.

    Imprecise Deep Neural Network Inference in Deadline-Aware Intermittent Systems. This dissertation proposes Zygarde, an energy-aware and outcome-aware soft-real-time imprecise deep neural network (DNN) task scheduling framework for intermittent systems. Zygarde leverages the semantic diversity of input data and the layer-dependent expressiveness of deep features and infers only the necessary DNN layers based on the available time and energy. Zygarde introduces a novel technique to determine the imprecise boundary at runtime by exploiting clustering classifiers and specialized offline training of the DNNs to minimize the loss of accuracy due to partial execution. It also proposes a single metric, η, to represent a system's predictability, measuring how close a harvester's harvesting pattern is to a constant energy source. In addition, Zygarde includes a scheduling algorithm that takes available time, available energy, impreciseness, and the classifier's performance into account.

    Scheduling Mutually Exclusive Computing and Harvesting Tasks in Deadline-Aware Intermittent Systems. The lack of sufficient ambient energy to directly power intermittent systems introduces mutually exclusive computing and charging cycles of intermittently powered systems. This introduces a challenging real-time scheduling problem where existing real-time algorithms fail due to the interruption in execution time. To address this, this dissertation proposes Celebi, which considers the dynamics of the available energy and schedules when to harvest and when to compute in batteryless systems. Using data-driven simulation and real-world experiments, this dissertation shows that Celebi significantly increases the number of tasks that complete execution before their deadline when power is available only intermittently.

    Persistent System Emulation with Distributed Intermittent Systems. Intermittently powered sensing and computing systems go through sporadic power-on and power-off periods due to the uncertain availability of energy sources. Despite recent efforts to advance time-sensitive intermittent systems, such systems fail to capture important target events when energy is absent for a prolonged time. These missed events limit the potential use of intermittent systems in fault-intolerant and safety-critical applications. To address this problem, this dissertation proposes Falinks, a framework that allows a swarm of distributed, intermittently powered nodes to collaboratively imitate the sensing and computing capabilities of a persistently powered system. This framework provides power-on and power-off schedules for the swarm of intermittent nodes, which have no communication capability with each other.
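
    The mutually exclusive harvest/compute decision can be sketched with a simple greedy, earliest-deadline-first policy: in each slot the node charges unless the stored energy covers the next execution slot of the most urgent task. This is only a toy policy to illustrate the problem setting, not Celebi or any scheduler from the dissertation, and all task parameters are invented.

```python
# Toy harvest-or-compute scheduler for a single intermittent node: in each
# time slot it either charges the capacitor or runs one slot of the most
# urgent ready task, computing only when stored energy covers that slot.

def schedule(tasks, slots, harvest_per_slot=2.0, start_energy=0.0):
    """tasks: dicts with 'name', 'energy' (per slot), 'slots_left', 'deadline'."""
    energy, t, done = start_energy, 0, []
    pending = sorted(tasks, key=lambda x: x["deadline"])    # EDF order
    while t < slots and pending:
        job = pending[0]
        if energy >= job["energy"]:                         # enough charge: compute
            energy -= job["energy"]
            job["slots_left"] -= 1
            if job["slots_left"] == 0:
                done.append((job["name"], t, t <= job["deadline"]))
                pending.pop(0)
        else:                                               # otherwise: harvest
            energy += harvest_per_slot
        t += 1
    return done                                             # (task, finish slot, met deadline?)

print(schedule([{"name": "sense", "energy": 3, "slots_left": 2, "deadline": 10},
                {"name": "infer", "energy": 6, "slots_left": 1, "deadline": 14}],
               slots=20))
```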

    Resource-Efficient Communication in the Presence of Adversaries

    This dissertation presents algorithms for achieving communication in the presence of adversarial attacks in large, decentralized, resource-constrained networks. We consider abstract single-hop communication settings where a set of senders wishes to communicate directly with a set of receivers. These results are then extended to provide resource-efficient, multi-hop communication in wireless sensor networks (WSNs), where energy is critically scarce, and in peer-to-peer (P2P) networks, where bandwidth and computational power are limited. Our algorithms are provably correct in the face of attacks by a computationally bounded adversary who seeks to disrupt communication between correct participants.

    The first major result in this dissertation addresses a general scenario involving single-hop communication in a time-slotted network where a single sender wishes to transmit a message to a single receiver. The two players share a communication channel; however, there exists an adversary who aims to prevent the transmission of the message by periodically blocking this channel. There are costs to send, receive or block on the channel, and we ask: how much do the two players need to spend relative to the adversary in order to guarantee transmission of the message? This problem abstracts many types of conflict in information networks, and the associated costs represent an expenditure of network resources. We show that it is significantly more costly for the adversary to block than for the two players to achieve communication. Specifically, if the costs to send, receive and block in a slot are fixed constants, and the adversary spends a total of B slots trying to block the message, then both the sender and receiver must be active in only O(B^(φ-1) + 1) slots in expectation to transmit the message, where φ = (1 + √5)/2 is the golden ratio. Surprisingly, this result holds even if (1) the value of B is unknown to either player; (2) the adversary knows the algorithms of both players, but not their random bits; and (3) the adversary is able to launch attacks using total knowledge of past actions of both players. Finally, these results are applied to two concrete problems. First, we consider jamming attacks in WSNs and address the fundamental task of propagating a message from a single device to all others in a WSN in the presence of faults; this is the problem of reliable broadcast. Second, we examine how our algorithms can mitigate application-level distributed denial-of-service attacks in wired client-server scenarios.

    The second major result deals with a single-hop communication problem where the sender set now consists of multiple senders and there is still a single receiver who wishes to obtain a message. However, many of the senders (strictly less than half) can be faulty, failing to send or sending incorrect messages. While the majority of the senders possess the message, rather than listening to all of the senders and majority-filtering the received data, we desire an algorithm that allows the single receiver to decide on the message in a more efficient manner. To investigate this scenario, we define and devise algorithms for a new data streaming problem called the Bad Santa problem, which models the selection dilemma faced by the receiver. With our results for the Bad Santa problem, we consider the problem of energy-efficient reliable broadcast. All previous results on reliable broadcast require devices to spend significant time in the energy-expensive receiving state, which is a critical problem in WSNs where devices are typically battery powered. In a popular WSN model, we give a reliable broadcast protocol that achieves optimal fault tolerance (i.e., tolerates the maximum number of faults in this WSN model) and improves over previous results by achieving an expected quadratic decrease in the cost to each device. For the case where the number of faults is within a (1-∊)-factor of the optimal fault tolerance, for any constant ∊ > 0, we give a reliable broadcast protocol that improves further by achieving an expected (roughly) exponential decrease in the cost to each device.

    The third and final major result of this dissertation addresses single-hop communication where the senders and receivers both consist of multiple peers that need to communicate in an attack-resistant P2P network. There are several analytical results on P2P networks that can tolerate an adversary who controls a large number of peers and uses them to disrupt network functionality. Unfortunately, in such systems, operations such as data retrieval and message sending incur significant communication costs. Here, we employ cryptographic techniques to define two protocols, both of which are more efficient than existing solutions. For a network of n peers, our first protocol is deterministic with O(log² n) message complexity and our second protocol is randomized with expected O(log n) message complexity; both improve over all previous results. The hidden constants and setup costs for our protocols are small and no trusted third party is required. Finally, we present an analysis showing that our protocols are practical for deployment under significant churn and adversarial behaviour.
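
    To give a feel for the asymmetry in the first result, the snippet below simply evaluates the stated bound O(B^(φ-1) + 1) for a few blocking budgets B, with constants omitted; it is a numeric illustration of the asymptotic expression only, not a simulation of the protocol.

```python
# Evaluate the player-cost bound B**(phi - 1) + 1 against the adversary's
# blocking budget B; the ratio shrinks as B grows, showing that blocking is
# asymptotically far more expensive than communicating (constants omitted).
from math import sqrt

phi = (1 + sqrt(5)) / 2            # golden ratio, phi - 1 ≈ 0.618

for B in (10, 100, 1_000, 10_000, 100_000):
    player_bound = B ** (phi - 1) + 1
    print(f"B={B:>7}  player bound ≈ {player_bound:10.1f}  ratio {player_bound / B:.4f}")
```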

    Efficient Passive Clustering and Gateway Selection in MANETs

    Passive clustering does not employ control packets to collect topological information in ad hoc networks. In our proposal, we avoid making frequent changes to the cluster architecture due to repeated election and re-election of cluster heads and gateways. Our primary objective has been to make Passive Clustering more practical by employing an optimal number of gateways and reducing the number of rebroadcast packets.
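
    A rough sketch of the gateway-selection idea: given which cluster heads each candidate node can hear, greedily keep just enough gateways to bridge every pair of neighbouring clusters, so that fewer nodes need to rebroadcast. This is an illustrative greedy heuristic over assumed inputs, not the paper's passive clustering mechanism.

```python
# Greedy gateway selection: cover every pair of reachable cluster heads with
# as few gateway nodes as possible, so rebroadcast traffic stays low.
from itertools import combinations

heard = {                                  # hypothetical topology
    "n1": {"CH-A", "CH-B"},
    "n2": {"CH-A", "CH-B", "CH-C"},
    "n3": {"CH-B", "CH-C"},
    "n4": {"CH-A"},                        # hears one head: ordinary node
}

pairs_needed = {p for heads in heard.values()
                for p in combinations(sorted(heads), 2)}
gateways, covered = [], set()
while covered != pairs_needed:
    # pick the candidate bridging the most still-uncovered cluster pairs
    node = max(heard, key=lambda n: len(set(combinations(sorted(heard[n]), 2)) - covered))
    gained = set(combinations(sorted(heard[node]), 2)) - covered
    if not gained:
        break
    gateways.append(node)
    covered |= gained

print(gateways)                            # e.g. ['n2'] bridges A-B, A-C and B-C
```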

    A Fog Computing Approach for Cognitive, Reliable and Trusted Distributed Systems

    In the Internet of Things (IoT) era, a large volume of data is generated and gathered every second from billions of connected devices. The current network paradigm, which relies on centralised data centres (a.k.a. cloud computing), is becoming an impractical solution for IoT data storage and processing due to the long distance between the data source (e.g., sensors) and the designated data centres. It is worth noting that the long distance in this context refers both to the physical path and to the time interval between when data is generated and when it gets processed: by the time the data reaches a distant data centre, its value may already have depreciated. Therefore, network topologies have evolved to permit data processing and storage at the edge of the network, introducing what is called fog computing. The latter leads to improvements in quality of service, since varieties of data processing requests can be handled quickly and efficiently close to the source. Although fog computing is recognised as a promising computing paradigm, it suffers from challenging issues that involve: i) concrete adoption and management of fogs for decentralised data processing; ii) resource allocation in both the cloud and fog layers; iii) sustaining performance, since fogs have limited capacity in comparison with the cloud; and iv) providing a secure and trusted networking environment for fogs to share resources and exchange data securely and efficiently. Hence, this thesis focuses on achieving stable performance for fog nodes by enhancing resource management and allocation, along with safety procedures, to aid IoT service delivery and cloud computing in the ever-growing industry of smart things. The main aspects related to the performance stability of fog computing involve the development of cognitive fog nodes that aim to provide fast and reliable services, efficient resource management, and trusted networking, and hence ensure the best Quality of Experience, Quality of Service and Quality of Protection to end-users. The contribution of this thesis, in brief, is a novel Fog Resource manAgeMEnt Scheme (FRAMES), proposed to crystallise fog distribution and resource management with appropriate service-load distribution and allocation based on Fog-2-Fog coordination, together with a novel COMputIng Trust manageMENT scheme (COMITMENT), a software-based approach responsible for providing a secure and trusted environment in which fog nodes can share their resources and exchange data packets. Both FRAMES and COMITMENT are encapsulated in the proposed Cognitive Fog (CF) computing, which aims to make a fog able not only to act on the data but also to interpret the gathered data in a way that mimics the process of cognition in the human mind. Hence, FRAMES provides CF with elastic resource management for load balancing and congestion resolution, while COMITMENT employs trust and recommendation models to avoid malicious fog nodes in the Fog-2-Fog coordination environment. In experiments conducted to verify their validity and performance, the proposed FRAMES and COMITMENT algorithms outperformed the competing benchmark algorithms, namely Random Walks Offloading (RWO) and Nearest Fog Offloading (NFO). The experiments covered performance (in terms of latency), load balancing among fog nodes, and fog trustworthiness, along with the detection of malicious events and attacks in the Fog-2-Fog environment. The proposed FRAMES offloading algorithms achieved the lowest run-time (i.e., latency) against the benchmark algorithms (RWO and NFO) when processing an equal number of packets, and COMITMENT's algorithms were able to determine whether collaboration requests were secure, malicious or anonymous. The proposed work shows potential in achieving a sustainable fog networking paradigm and highlights significant benefits of fog computing in the computing ecosystem.
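
    The Fog-2-Fog offloading decision can be caricatured as follows: an overloaded fog node offloads to the least-loaded neighbour whose trust score clears a threshold, rather than walking randomly (as in RWO) or always picking the nearest fog (as in NFO). This is a minimal sketch under invented load and trust values, not the FRAMES or COMITMENT algorithms.

```python
# Trust-aware offloading sketch: when local load exceeds a threshold, pick
# the least-loaded neighbour among those trusted enough to collaborate with.

def pick_offload_target(neighbours, trust_threshold=0.6):
    """neighbours: dicts with 'name', 'load' (0..1) and 'trust' (0..1)."""
    trusted = [n for n in neighbours if n["trust"] >= trust_threshold]
    if not trusted:
        return None                       # process locally rather than risk it
    return min(trusted, key=lambda n: n["load"])

local_load = 0.92
neighbours = [
    {"name": "fog-east",  "load": 0.35, "trust": 0.9},
    {"name": "fog-west",  "load": 0.10, "trust": 0.4},   # lightly loaded but untrusted
    {"name": "fog-north", "load": 0.55, "trust": 0.8},
]

if local_load > 0.8:                      # congestion threshold (placeholder)
    target = pick_offload_target(neighbours)
    print("offload to", target["name"] if target else "nobody: keep local")
```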

    Designing and Deploying Internet of Things Applications in the Industry: An Empirical Investigation

    The Internet of Things (IoT) aims to bring connectivity to almost every object found in the physical space. It extends connectivity to everyday things, opening up the possibility to monitor, track, connect, and interact with industrial assets more efficiently. In industry nowadays, connected sensor networks monitor logistics movements and manufacturing machines, and help organizations improve their efficiency and reduce costs. However, designing and implementing an IoT network today is still a very challenging task. We are witnessing a high level of fragmentation in the IoT landscape, and developers regularly complain about the difficulty of integrating the diverse technologies of the various objects found in IoT systems and about the lack of clear guidelines and/or practices for developing and deploying safe and efficient IoT applications. Therefore, analyzing and understanding issues related to the development and deployment of the Internet of Things is critically important to allow the industry to utilize its fullest potential. In this thesis, we examine IoT practitioners' discussions on the popular Q&A websites Stack Overflow and Stack Exchange to understand the challenges and issues that they face when developing and deploying different IoT applications. Next, we examine the lack of interoperability among technologies developed for IoT, study the challenges that their integration poses, and provide guidelines for practitioners interested in connecting IoT networks and devices to develop various services and applications. Since security issues are central to the success of this technology, we also investigate different security threats and challenges across the different layers of the architecture of IoT systems and propose countermeasures. Finally, we conduct a series of experiments to understand the advantages and trade-offs of serverful and serverless deployments of IoT applications in order to provide practitioners with evidence-based guidelines and recommendations on such deployments. The results presented in this thesis represent a first important step towards a deep understanding of these very promising technologies. We believe that our recommendations and suggestions will help practitioners and technology builders improve the quality of IoT software and systems. We also hope that our results can help IoT communities and consortia establish standards and guidelines for the development, maintenance, and evolution of IoT software and systems.