
    Versatile, Scalable, and Accurate Simulation of Distributed Applications and Platforms

    The study of parallel and distributed applications and platforms, whether in the cluster, grid, peer-to-peer, volunteer, or cloud computing domain, often mandates empirical evaluation of proposed algorithmic and system solutions via simulation. Unlike direct experimentation via an application deployment on a real-world testbed, simulation enables fully repeatable and configurable experiments for arbitrary hypothetical scenarios. Two key concerns are accuracy (so that simulation results are scientifically sound) and scalability (so that simulation experiments can be fast and memory-efficient). While the scalability of a simulator is easily measured, the accuracy of many state-of-the-art simulators is largely unknown because they have not been sufficiently validated. In this work we describe recent accuracy and scalability advances made in the context of the SimGrid simulation framework. A design goal of SimGrid is that it should be versatile, i.e., applicable across all aforementioned domains. We present quantitative results that show that SimGrid compares favorably to state-of-the-art domain-specific simulators in terms of scalability, accuracy, or the trade-off between the two. An important implication is that, contrary to popular wisdom, striving for versatility in a simulator is not an impediment but instead is conducive to improving both accuracy and scalability.
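    The repeatable, configurable experiments described above are easy to picture with a toy discrete-event simulator. The sketch below is a generic Python illustration of that property only; it assumes nothing about SimGrid's actual API, and the host count, task-size model, and seed are hypothetical parameters.

        import heapq
        import random

        def simulate(num_hosts=4, num_tasks=20, mean_size=5.0, seed=42):
            """Toy discrete-event simulation: tasks of random size are dispatched
            to the earliest-available host; returns the makespan."""
            rng = random.Random(seed)                       # fixed seed -> repeatable run
            free_at = [(0.0, h) for h in range(num_hosts)]  # (time host becomes free, host id)
            heapq.heapify(free_at)
            for _ in range(num_tasks):
                size = rng.expovariate(1.0 / mean_size)     # hypothetical task-size model
                t_free, host = heapq.heappop(free_at)       # earliest available host
                heapq.heappush(free_at, (t_free + size, host))
            return max(t for t, _ in free_at)               # completion time of the last task

        if __name__ == "__main__":
            # Same seed and configuration give an identical result on every run,
            # which is the repeatability property highlighted above.
            print(simulate(seed=1), simulate(seed=1))

    Scaling such an experiment to millions of tasks while keeping the resource model faithful is precisely the accuracy/scalability trade-off the abstract discusses.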

    PhyNetLab: An IoT-Based Warehouse Testbed

    Future warehouses will be built from modular embedded entities with communication capability and energy-aware operation, attached to traditional materials-handling and warehousing objects. This advancement mainly serves the flexibility and scalability needs of emerging warehouses. However, it adds a new layer of complexity to the development and evaluation of such systems, owing to the multidisciplinary mix of logistics, embedded systems, and wireless communications. Although each discipline provides theoretical approaches and simulations for these tasks, many issues only surface in a real deployment of the full system. In this paper we introduce PhyNetLab, a real-scale warehouse testbed made of cyber-physical objects (PhyNodes) developed for this type of application. The platform makes it possible to check the industrial requirements of an IoT-based warehouse in addition to running typical wireless sensor network tests. We describe the hardware and software components of the nodes as well as the overall structure of the testbed. Finally, we demonstrate the advantages of the testbed by evaluating the performance of the ETSI-compliant radio channel access procedure for an IoT warehouse.
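    The abstract does not detail the evaluated channel access procedure, so the following is only a generic listen-before-talk (LBT) sketch of the kind used on unlicensed sub-GHz bands; the backoff bound, attempt limit, and channel model are illustrative assumptions rather than the paper's parameters.

        import random

        def listen_before_talk(channel_busy, max_attempts=8, rng=None):
            """Generic listen-before-talk sketch: sense the channel, transmit if it
            is idle, otherwise back off for a random interval and retry.
            `channel_busy()` is a hypothetical sensing callback that returns True
            when energy is detected during the listen window."""
            rng = rng or random.Random()
            for attempt in range(1, max_attempts + 1):
                if not channel_busy():             # clear channel assessment
                    return attempt                 # idle: transmit on this attempt
                backoff = rng.uniform(0.0, 0.1)    # illustrative random backoff (seconds)
                # A real node would defer for `backoff` seconds before sensing again.
            return None                            # channel never found idle; give up

        # Illustrative channel that is sensed busy 30% of the time.
        busy_rng = random.Random(1)
        print(listen_before_talk(lambda: busy_rng.random() < 0.3, rng=random.Random(2)))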

    Bio-Inspired Tools for a Distributed Wireless Sensor Network Operating System

    The problem I address in this thesis is how to organise and manage a network of wireless sensor nodes using a minimal amount of communication. To find a solution, I explore the use of bio-inspired protocols that enable WSN management while maintaining a low communication overhead. Wireless Sensor Networks (WSNs) are loosely coupled distributed systems composed of low-resource, battery-powered sensor nodes. The central difficulty of WSN management is that communication is the largest consumer of a sensor node's energy, so management systems need to use as little communication as possible to prolong the network's operational lifetime. This is the Wireless Sensor Network Management Problem. The problem is compounded because current WSN management systems glue together unrelated protocols to provide system services, causing inter-protocol interference. Bio-inspired protocols offer a good solution because they enable nodes to self-organise, use local-area communication, and combine their communication intelligently with minimal additional overhead. I present a combined protocol and MAC scheduler that enables multiple service protocols to function in a WSN at the same time without causing inter-protocol interference. The scheduler is throughput-optimal as long as the communication requirements of all of the protocols remain within the communication capacity of the network. I show that the scheduler improves a dissemination protocol's performance by 35%. A bio-inspired synchronisation service is presented which enables wireless sensor nodes to self-organise and provide a time service. Evaluation of the protocol shows an 80% saving in communication over similar bio-inspired synchronisation approaches. I then add an information dissemination protocol without significantly increasing communication. This is achieved through the ability of the bio-inspired algorithms to combine their communication intelligently, so that they can offer multiple services without requiring a great deal of inter-node communication.
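    The abstract does not name the specific bio-inspired synchronisation algorithm; a classic member of this family is firefly-style pulse-coupled synchronisation, sketched below under simplifying assumptions (every node hears every flash, identical clock rates, and an illustrative coupling strength).

        import random

        def firefly_sync(num_nodes=10, periods=50, dt=0.01, coupling=0.1, seed=3):
            """Simplified pulse-coupled ('firefly') synchronisation sketch.
            Each node's phase ramps linearly from 0 to 1 and the node flashes at 1;
            nodes that observe a flash nudge their own phase forward, so nodes that
            are close together get absorbed into a common flashing cluster."""
            rng = random.Random(seed)
            phase = [rng.random() for _ in range(num_nodes)]
            for _ in range(int(periods / dt)):
                phase = [p + dt for p in phase]                 # free-running clocks
                flashed = [i for i, p in enumerate(phase) if p >= 1.0]
                if flashed:
                    for i in range(num_nodes):
                        if i in flashed:
                            phase[i] = 0.0                      # fire and reset
                        else:
                            phase[i] += coupling * len(flashed) # excitatory nudge
                            if phase[i] >= 1.0:                 # pushed over threshold:
                                phase[i] = 0.0                  # it fires with the group
            return sorted(round(p, 2) for p in phase)           # synced nodes share a phase

        print("final phases:", firefly_sync())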

    Performance metrics and routing in vehicular ad hoc networks

    The aim of this thesis is to propose a method for enhancing the performance of Vehicular Ad hoc Networks (VANETs). The focus is on a routing protocol in which performance metrics are used to inform routing decisions. The thesis begins by analysing routing protocols in a random mobility scenario with a wide range of node densities. A Cellular Automaton algorithm is then applied to create a mobility model of a highway, and a wide range of densities and transmission ranges is tested. Performance metrics are introduced to help predict likely route failure. The Good Link Availability (GLA) and Good Route Availability (GRA) metrics are proposed, which can be used to trigger pre-emptive action with the potential to improve performance. An implementation framework for this method within the AODV routing protocol is also discussed. The main outcomes of this research are the identification and formulation of methods for pre-emptive action, the use of a Cellular Automaton model together with NS-2 to simulate VANETs, and the method of implementation within the AODV routing protocol.
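    The abstract does not state which cellular automaton rules drive the highway mobility model; the Nagel-Schreckenberg model is the standard single-lane choice and is sketched below. The road length, density, maximum speed, and dawdling probability are illustrative values, not the thesis's settings.

        import random

        def nagel_schreckenberg(length=200, density=0.2, v_max=5, p_slow=0.3,
                                steps=100, seed=7):
            """Single-lane Nagel-Schreckenberg cellular automaton on a ring road.
            Cells hold a vehicle's speed or None; each step applies four rules:
            accelerate, brake to the gap ahead, randomly dawdle, then move."""
            rng = random.Random(seed)
            road = [None] * length
            for cell in rng.sample(range(length), int(density * length)):
                road[cell] = rng.randint(0, v_max)               # random initial speeds
            for _ in range(steps):
                new_road = [None] * length
                for i, v in enumerate(road):
                    if v is None:
                        continue
                    gap = 1                                      # count empty cells ahead (capped)
                    while gap <= v_max and road[(i + gap) % length] is None:
                        gap += 1
                    gap -= 1
                    v = min(v + 1, v_max)                        # 1. accelerate
                    v = min(v, gap)                              # 2. brake to avoid collision
                    if v > 0 and rng.random() < p_slow:          # 3. random dawdling
                        v -= 1
                    new_road[(i + v) % length] = v               # 4. move forward
                road = new_road
            return road

        speeds = [v for v in nagel_schreckenberg() if v is not None]
        print("mean speed after 100 steps:", sum(speeds) / len(speeds))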

    A flight software development and simulation framework for advanced space systems

    Thesis (Ph.D.) by John Patrick Enright, Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2002. Includes bibliographical references (p. 293-302). Distributed terrestrial computer systems employ middleware software to provide communications abstractions and reduce software interface complexity. Embedded applications are adopting the same approaches, but must make provisions to ensure that hard real-time temporal performance can be maintained. This thesis presents the development and validation of a middleware system tailored to spacecraft flight software development. Our middleware runs on the Generalized Flight Operations Processing Simulator (GFLOPS) and is called the GFLOPS Rapid Real-time Development Environment (GRRDE). GRRDE provides publish-subscribe communication services between software components. These services help to reduce the complexity of managing software interfaces. The hard real-time performance of these services has been verified with General Timed Automata modelling and extensive run-time testing. Several example applications illustrate the use of GRRDE to support advanced flight software development. Two technology-focused studies examine automatic code generation and autonomous fault protection within the GRRDE framework. A complex simulation of the TechSat 21 distributed space-based radar mission highlights the utility of the approach for large-scale applications.
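    GRRDE's actual interfaces are specific to GFLOPS, so the sketch below is only a minimal, generic illustration of the publish-subscribe pattern the middleware provides; the topic name, message shape, and callback signature are hypothetical.

        from collections import defaultdict
        from typing import Any, Callable

        class Broker:
            """Minimal publish-subscribe broker: components register callbacks for
            named topics and never call each other directly, which is the interface
            decoupling that middleware of this kind provides."""

            def __init__(self) -> None:
                self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

            def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
                self._subscribers[topic].append(callback)

            def publish(self, topic: str, message: Any) -> None:
                for callback in self._subscribers[topic]:
                    callback(message)   # a hard real-time system would also bound this latency

        # Hypothetical flight-software components exchanging attitude data.
        broker = Broker()
        broker.subscribe("attitude", lambda q: print("control task received", q))
        broker.publish("attitude", {"quaternion": [1.0, 0.0, 0.0, 0.0]})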

    An Overview on Application of Machine Learning Techniques in Optical Networks

    Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these data and to make decisions about the proper functioning of the networks. Among these mathematical tools, Machine Learning (ML) is regarded as one of the most promising methodological approaches for performing network-data analysis and enabling automated network self-configuration and fault management. The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth in network complexity that optical networks have faced in the last few years. This increase in complexity is due to the introduction of a large number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes) enabled by coherent transmission/reception technologies, advanced digital signal processing, and compensation of nonlinear effects in optical fiber propagation. In this paper we provide an overview of the application of ML to optical communications and networking. We classify and survey the relevant literature, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy; to stimulate further work in this area, we conclude the paper by proposing possible new research directions.
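    As a concrete instance of the network-data analysis the survey covers, the sketch below fits a regressor that predicts a signal quality indicator from lightpath features; the chosen features, the synthetic data, and the random forest model are illustrative assumptions, not recommendations drawn from the paper.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.metrics import mean_absolute_error
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 2000

        # Hypothetical lightpath features: length (km), number of spans, launch power (dBm).
        length = rng.uniform(50, 2000, n)
        spans = np.ceil(length / 80)
        power = rng.uniform(-2, 3, n)

        # Synthetic stand-in for a measured signal quality indicator (e.g., an OSNR-like value).
        quality = 20 - 0.004 * length - 0.3 * spans + 0.5 * power + rng.normal(0, 0.5, n)

        X = np.column_stack([length, spans, power])
        X_train, X_test, y_train, y_test = train_test_split(X, quality, random_state=0)

        model = RandomForestRegressor(n_estimators=100, random_state=0)
        model.fit(X_train, y_train)
        print("MAE on held-out lightpaths:", mean_absolute_error(y_test, model.predict(X_test)))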

    Hardware-Aware Algorithm Designs for Efficient Parallel and Distributed Processing

    The introduction and widespread adoption of the Internet of Things, together with emerging new industrial applications, bring new requirements in data processing. Specifically, the need for timely processing of data that arrives at high rates creates a challenge for the traditional cloud computing paradigm, where data collected at various sources is sent to the cloud for processing. As an approach to this challenge, processing algorithms and infrastructure are distributed from the cloud to multiple tiers of computing, closer to the sources of data. This creates a wide range of devices for algorithms to be deployed on and for software designs to adapt to. In this thesis, we investigate how hardware-aware algorithm designs on a variety of platforms lead to implementations that efficiently utilize the underlying resources. We design, implement, and evaluate new techniques for representative applications that span the whole spectrum of devices, from resource-constrained sensors in the field to highly parallel servers. At each tier of processing capability, we identify key architectural features that are relevant for applications and propose designs that use these features to achieve high-rate, timely, and energy-efficient processing.
    In the first part of the thesis, we focus on high-end servers and use two main approaches to achieve high-throughput processing: vectorization and thread parallelism. We employ vectorization for pattern matching algorithms used in security applications and show that re-thinking the design of algorithms to better utilize the resources available on their target platforms, such as vector processing units, can bring significant speedups in processing throughput. We then show how thread-aware data distribution and proper inter-thread synchronization enable scalability, especially for the problem of high-rate network traffic monitoring. We design a parallelization scheme for sketch-based algorithms that summarize traffic information, which allows them to handle incoming data at high rates and to answer queries on that data efficiently, without overheads.
    In the second part of the thesis, we target the intermediate tier of computing devices and focus on typical examples of the hardware found there. We show how single-board computers with embedded accelerators can handle the computationally heavy parts of applications, and demonstrate this for pattern matching in security-related processing. We further identify key hardware features that affect the performance of pattern matching algorithms on such devices, present a co-evaluation framework to compare algorithms, and design a new algorithm that efficiently utilizes these hardware features.
    In the last part of the thesis, we shift the focus to the low-power, resource-constrained tier of processing devices. We target wireless sensor networks and study distributed data processing algorithms where the processing happens on the same devices that generate the data. Specifically, we focus on a continuous monitoring algorithm (geometric monitoring) that aims to minimize communication between nodes. By deploying this algorithm under realistic conditions, we demonstrate that the interplay between the network protocol and the application plays an important role in this layer of devices. Based on that observation, we co-design a continuous monitoring application with a modern network stack and augment it further with an in-network aggregation technique, showing that awareness of the underlying network stack is important to realize the full potential of the continuous monitoring algorithm.
    The techniques and solutions presented in this thesis contribute to better utilization of hardware characteristics across a wide spectrum of platforms. We apply them to problems that are representative of current and upcoming applications, and we contribute an outlook on emerging possibilities that can build on the results of the thesis.
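    The traffic-summarising sketches mentioned above are not specified in the abstract; a common choice is the count-min sketch, whose cell-wise mergeability is what makes per-thread parallelisation natural. The code below illustrates that mergeability only and is not the thesis's parallelisation scheme.

        import hashlib

        class CountMinSketch:
            """Count-min sketch: approximate per-key counts in sub-linear space.
            Sketches with identical dimensions and hashing merge by cell-wise
            addition, so each thread can update a private sketch and the results
            can be combined afterwards without synchronising on every packet."""

            def __init__(self, width=1024, depth=4):
                self.width, self.depth = width, depth
                self.table = [[0] * width for _ in range(depth)]

            def _index(self, row, key):
                digest = hashlib.sha256(f"{row}:{key}".encode()).hexdigest()
                return int(digest, 16) % self.width

            def update(self, key, count=1):
                for row in range(self.depth):
                    self.table[row][self._index(row, key)] += count

            def query(self, key):
                return min(self.table[row][self._index(row, key)] for row in range(self.depth))

            def merge(self, other):
                for row in range(self.depth):
                    for col in range(self.width):
                        self.table[row][col] += other.table[row][col]

        # Two per-thread sketches summarise disjoint packet streams, then merge.
        a, b = CountMinSketch(), CountMinSketch()
        for _ in range(300):
            a.update("10.0.0.1")
        for _ in range(200):
            b.update("10.0.0.1")
        a.merge(b)
        print("estimated packet count for 10.0.0.1:", a.query("10.0.0.1"))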

    Towards Dynamic Vehicular Clouds

    Motivated by the success of conventional cloud computing, Vehicular Clouds were introduced as groups of vehicles whose corporate computing, sensing, communication, and physical resources can be coordinated and dynamically allocated to authorized users. One of the attributes that set Vehicular Clouds apart from conventional clouds is resource volatility: as vehicles enter and leave the cloud, new computing resources become available while others depart, creating a volatile environment in which reasoning about fundamental performance metrics becomes very challenging. The goal of this thesis is to design an architecture and model for a dynamic Vehicular Cloud built on top of vehicles moving on highways. We present our envisioned architecture, consisting of vehicles moving on the highway and multiple communication stations installed along it, and investigate the feasibility of such systems. The dynamic Vehicular Cloud is based on two-way communication between vehicles and the stations, and we provide a vehicle-to-infrastructure communication protocol that enables it. We explain the structure of the proposed protocol in detail and then provide analytical predictions and simulation results to investigate the accuracy of our design and predictions. Just as in conventional clouds, job completion time ranks high among the fundamental quantitative performance figures of merit. In general, predicting job completion time requires full knowledge of the probability distributions of the intervening random variables. More often than not, however, the data center manager does not know these distribution functions; instead, using accumulated empirical data, she may be able to estimate their first moments. Yet getting a handle on the expected job completion time remains an important problem. With this in mind, another contribution of this thesis is to offer easy-to-compute approximations of job completion time in a dynamic Vehicular Cloud involving vehicles on a highway. We assume only an estimate of the first moment of the time the job would take to execute without any overhead attributable to the workings of the Vehicular Cloud. A comprehensive set of simulations has shown that our approximations are very accurate. As mentioned, a major difference between the conventional cloud and the Vehicular Cloud is the availability of the computational nodes: the vehicles, which are the Vehicular Cloud's computational resources, arrive and depart at random times, which may cause job execution failures and interruptions of ongoing services. To handle these interruptions, when a vehicle that is running a job is about to leave the Vehicular Cloud, the job and all intermediate data stored by the departing vehicle must be migrated to another available vehicle in the Vehicular Cloud.
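    The thesis's closed-form approximations are not reproduced in the abstract, so the sketch below only sets up a Monte Carlo baseline for the same quantity: the mean completion time of a job whose hosting vehicle departs at random times, forcing a migration before work resumes. The workload, residence-time distribution, and migration overhead are hypothetical.

        import random

        def mean_completion_time(work=60.0, mean_residence=45.0, migration_overhead=5.0,
                                 trials=100_000, seed=11):
            """Monte Carlo estimate of job completion time in a volatile vehicular cloud:
            the job needs `work` minutes of computation, the hosting vehicle stays for an
            exponentially distributed residence time, and every departure costs
            `migration_overhead` minutes of migration before the job resumes elsewhere."""
            rng = random.Random(seed)
            total = 0.0
            for _ in range(trials):
                remaining, elapsed = work, 0.0
                while True:
                    residence = rng.expovariate(1.0 / mean_residence)
                    if residence >= remaining:          # job finishes on this vehicle
                        elapsed += remaining
                        break
                    elapsed += residence + migration_overhead
                    remaining -= residence              # progress preserved by the migration
                total += elapsed
            return total / trials

        print("estimated mean completion time (min):", round(mean_completion_time(), 2))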

    Real-Time Wireless Sensor-Actuator Networks for Cyber-Physical Systems

    A cyber-physical system (CPS) employs tight integration of, and coordination between, computational, networking, and physical elements. Wireless sensor-actuator networks provide a new communication technology for a broad range of CPS applications such as process control, smart manufacturing, and data center management. Sensing and control in these systems must meet stringent real-time requirements on communication latency in challenging environments, yet there have been only limited results on real-time scheduling theory for wireless sensor-actuator networks. Real-time transmission scheduling and analysis for such networks requires new methodologies that account for the unique characteristics of wireless communication. Furthermore, the performance of a wireless control system involves intricate interactions between real-time communication and control. This thesis tackles these challenges and makes a series of contributions to the theory and systems for wireless CPS. (1) We establish a new real-time scheduling theory for wireless sensor-actuator networks. (2) We develop a scheduling-control co-design approach for holistic optimization of control performance in a wireless control system. (3) We design and implement a wireless sensor-actuator network for CPS in data center power management. (4) We expand our research to develop scheduling algorithms and analyses for real-time parallel computing to support computation-intensive CPS.
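    The scheduling theory itself is beyond the abstract; as a toy illustration of the transmission-scheduling problem it addresses, the sketch below greedily places the hops of periodic flows into the slots of a multichannel TDMA superframe so that no two transmissions share the same slot and channel and every flow meets its deadline. The flow set, superframe length, and channel count are illustrative assumptions.

        def schedule_flows(flows, superframe=16, channels=2):
            """Greedy slot assignment for a multichannel TDMA superframe.
            `flows` is a list of (name, hops, deadline_in_slots); each hop needs one
            (slot, channel) pair, a flow's hops must occupy increasing slots, and its
            last hop must land before the deadline."""
            free = {(s, c) for s in range(superframe) for c in range(channels)}
            schedule = {}
            for name, hops, deadline in sorted(flows, key=lambda f: f[2]):  # earliest deadline first
                assigned, next_slot = [], 0
                for _ in range(hops):
                    options = sorted((s, c) for (s, c) in free if s >= next_slot)
                    if not options or options[0][0] >= deadline:
                        return None                      # flow set deemed unschedulable
                    s, c = options[0]                    # earliest free slot on any channel
                    free.remove((s, c))
                    assigned.append((s, c))
                    next_slot = s + 1                    # the next hop must use a later slot
                schedule[name] = assigned
            return schedule

        flows = [("sensor->controller", 3, 8), ("controller->actuator", 2, 10), ("monitoring", 4, 16)]
        print(schedule_flows(flows))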

    Interference-aware adaptive spectrum management for wireless networks using unlicensed frequency bands

    The growing demand for ubiquitous broadband connectivity and the continuously falling prices of hardware operating in the unlicensed bands have put Wi-Fi technology in a position to lead rapid innovation towards high-performance wireless for the future. Wi-Fi's success has contributed to the development of a wide variety of options for unlicensed access (e.g., Bluetooth, Zigbee) and has even prompted regulatory bodies in several countries to permit unlicensed devices to access portions of the spectrum originally licensed to TV services. In this thesis we present novel spectrum management algorithms for networks employing 802.11 and TV white spaces, broadly aimed at efficient use of the spectrum under consideration, lower contention (interference), and high performance.
    One of the target scenarios of this thesis is neighbourhood or citywide wireless access. For this, we propose the use of an IEEE 802.11-based multi-radio wireless mesh network with omnidirectional antennae. We develop a novel scalable protocol termed LCAP for efficient and adaptive distributed multi-radio channel allocation. In LCAP, nodes autonomously learn their channel allocation based on neighbourhood and channel-usage information. This information is obtained via a novel neighbour discovery protocol, which is effective even when nodes do not share a common channel. Extensive simulation-based evaluation of LCAP relative to the state-of-the-art Asynchronous Distributed Colouring (ADC) protocol demonstrates that LCAP achieves its stated objectives, which include efficient channel utilisation across diverse traffic patterns, protocol scalability, and adaptivity to factors such as external interference. Motivated by the non-stationary nature of the network scenario and the resulting difficulty of establishing convergence of LCAP, we also consider a deterministic alternative: a novel distributed priority-based mechanism in which nodes decide on their channel allocations using only local information. A key enabler of this approach is our neighbour discovery mechanism. We show via simulations that this mechanism performs similarly to LCAP.
    Another application scenario considered in this thesis is broadband access in rural areas. For such scenarios, we consider the use of long-distance 802.11 mesh networks and present a novel mechanism to address the channel allocation problem in a traffic-aware manner. The proposed approach employs a multi-radio architecture with directional antennae. Under this architecture, we exploit the capability of 802.11 hardware to use different channel widths and assign widths to links based on their relative traffic volume so that side-lobe interference is mitigated. We show that this problem is NP-complete and propose a polynomial-time greedy channel allocation algorithm that guarantees valid channel allocations for each node. Evaluation of the proposed algorithm via simulations of real network topologies shows that it consistently outperforms fixed-width allocation thanks to its ability to adapt to spatio-temporal variations in traffic demand.
    Finally, we consider the use of TV white spaces to increase throughput for in-home wireless networking and to relieve the already congested unlicensed bands. To the best of our knowledge, our work is the first to develop a scalable micro-auction mechanism for sharing TV white space spectrum through a geolocation database. The goal of our approach is to minimise contention among secondary users while not interfering with primary users of TV white space spectrum (TV receivers and microphone users). It enables interference-free and dynamic sharing of TVWS among home networks with heterogeneous spectrum demands, while generating revenue for the database and broadband providers. Using white space availability maps from the UK, we validate our approach in real rural, urban, and dense-urban residential scenarios. Our results show that our mechanism achieves its stated objectives: attractiveness to both the database provider and spectrum requesters, scalability, and efficient, interference-free dynamic spectrum distribution.
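    LCAP's protocol mechanics are not given in the abstract; the sketch below only illustrates the underlying idea of distributed channel allocation from local information in its simplest form, where each node picks the channel least used in its radio neighbourhood. The topology, visiting order, and channel count are illustrative.

        from collections import Counter

        def local_channel_choice(node, neighbours, current, num_channels=3):
            """Pick the channel least used among `node`'s neighbours, using only
            locally observable information (a simplified stand-in for the kind of
            neighbourhood and channel-usage data a discovery protocol would provide)."""
            usage = Counter(current[n] for n in neighbours if current[n] is not None)
            return min(range(num_channels), key=lambda ch: (usage[ch], ch))

        # Illustrative 5-node mesh given as adjacency lists; no channels assigned yet.
        topology = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 4], 4: [2, 3]}
        allocation = {n: None for n in topology}
        for node in topology:                  # nodes decide one after another
            allocation[node] = local_channel_choice(node, topology[node], allocation)
        print(allocation)                      # in this example, no node shares a channel with a neighbour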