
    Analysis and Design of Communication Policies for Energy-Constrained Machine-Type Devices

    This thesis focuses on the modelling, analysis and design of novel communication strategies for wireless machine-type communication (MTC) systems to realize the Internet of Things (IoT). We consider sensor-based MTC devices which acquire physical information from the environment and transmit it to a base station (BS) while satisfying application-specific quality-of-service (QoS) requirements. Because they operate wirelessly and unattended, these MTC devices are mostly battery-operated and severely energy-constrained. In addition, MTC systems require low latency, perpetual operation and massive access. Motivated by these critical requirements, this thesis proposes optimal data communication policies for four different network scenarios. In the first two scenarios, each MTC device transmits data on a dedicated orthogonal channel and either (i) possesses an initially fully charged battery of finite capacity, or (ii) harvests energy and stores it in a battery of finite capacity. In the other two scenarios, all MTC devices share a single channel and either (iii) are allocated individual non-overlapping transmission times, or (iv) transmit data randomly in predefined time slots. The novel techniques proposed in this thesis, and the insights gained from it, aim to better utilize the limited energy resources of machine-type devices in order to serve future wireless networks effectively. Firstly, we consider a sensor-based MTC device communicating with a BS, and devise optimal data compression and transmission policies with the objective of prolonging the device lifetime. We formulate joint optimization problems that maximize the device lifetime whilst satisfying delay and bit-error-rate constraints. Our results show a significant improvement in device lifetime; importantly, the gain is most profound in the low-latency regime.
Secondly, we consider a sensor-based MTC device served by a hybrid BS which wirelessly transfers power to the device and receives the device's data transmissions. The MTC device employs data compression to reduce the energy cost of data transmission. We therefore propose to jointly optimize the harvesting time, compression and transmission design to minimize the energy cost of the system under a given delay constraint. The proposed scheme reduces energy consumption by up to 19% when data compression is employed. Thirdly, we consider multiple MTC devices transmitting data to a BS using time division multiple access (TDMA). Conventionally, the energy efficiency of TDMA is optimized through multi-user scheduling, i.e., changing the transmission time allocated to different devices; the sequence in which devices transmit (who transmits first, who transmits second, etc.) has no impact on energy efficiency. When data compression is performed before transmission, however, the sequence does matter, and we jointly optimize multi-user sequencing and scheduling along with the compression and transmission rates. Our results show that multi-user sequence optimization improves the energy efficiency at MTC devices by up to 45%. Lastly, we consider contention resolution diversity slotted ALOHA (CRDSA) with transmit power diversity, where each packet copy from a device is transmitted at a randomly selected power level. This creates inter-slot received-power diversity, which is exploited by employing a signal-to-interference-plus-noise-ratio-based successive interference cancellation (SIC) receiver. We propose a message passing algorithm to model the SIC decoding and formulate an optimization problem to determine the optimal transmit power distribution subject to energy constraints. We show that the proposed strategy improves the supported system load by up to 88% for massive-MTC systems.
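The compression-transmission trade-off underlying the first scenarios can be illustrated with a toy model (not the thesis's actual formulation; all parameters and the energy expressions below are illustrative assumptions): transmission energy grows exponentially with rate over an AWGN link, so spending some energy on compression can lower the total energy needed to meet a deadline.

```python
def tx_energy(bits, deadline, bandwidth=1.0, noise=1.0):
    # Energy to deliver `bits` within `deadline` over an AWGN link,
    # from inverting Shannon capacity: E = T * N0 * B * (2^(b/(T*B)) - 1).
    return deadline * noise * bandwidth * (
        2 ** (bits / (deadline * bandwidth)) - 1)

def total_energy(raw_bits, ratio, deadline, comp_energy_per_bit=0.01):
    # Compress raw_bits by `ratio` (0 < ratio <= 1), then transmit.
    # Compression cost is assumed to grow with how hard we compress.
    e_comp = comp_energy_per_bit * raw_bits * (1.0 / ratio - 1.0)
    return e_comp + tx_energy(raw_bits * ratio, deadline)

# Sweep compression ratios and keep the cheapest design.
best_energy, best_ratio = min(
    (total_energy(1000, r / 10, deadline=100.0), r / 10)
    for r in range(1, 11))
```

Under these assumed constants, aggressive compression wins because the exponential transmission cost dominates the linear compression cost; with a looser deadline the balance shifts back toward less compression.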

    PSBS: Practical Size-Based Scheduling

    Size-based schedulers have very desirable performance properties: optimal or near-optimal response time can be coupled with strong fairness guarantees. Despite this, such schedulers are rarely implemented in practical settings, because they require knowing a priori the amount of work needed to complete each job, an assumption that is very difficult to satisfy in concrete systems. It is far more realistic to inform the system with an estimate of the job sizes, but existing studies point to rather pessimistic results when existing scheduling policies are driven by imprecise job size estimates. We set out to design scheduling policies that explicitly cope with inexact job sizes. First, we show that existing size-based schedulers can perform badly with inexact job size information when job sizes are heavily skewed; this issue, and the pessimistic results reported in the literature, stem from problematic behavior when large jobs are underestimated. Once the problem is identified, existing size-based schedulers can be amended to solve it. We generalize FSP -- a fair and efficient size-based scheduling policy -- to solve the problem highlighted above; in addition, our solution handles different job weights (which can be assigned to a job independently of its size). We provide an efficient implementation of the resulting policy, which we call the Practical Size-Based Scheduler (PSBS). Through simulations on synthetic and real workloads, we show that PSBS has near-optimal performance in a large variety of cases with inaccurate size information, that it performs fairly, and that it handles job weights correctly. We believe this work shows that PSBS is indeed practical, and we maintain that it could inspire the design of schedulers in a wide array of real-world use cases.
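The failure mode identified above (an underestimated large job that exhausts its estimate and then starves later arrivals) can be reproduced with a toy time-sliced scheduler that prioritizes by estimated remaining work. This is a deliberate simplification: PSBS itself builds on FSP, not on the plain estimate-based priority rule sketched here, and the job parameters are made up.

```python
def simulate(jobs):
    """Time-sliced scheduler serving the least *estimated* remaining work.

    jobs: list of (arrival, true_size, estimated_size) tuples.
    Returns per-job completion times. Estimates are clamped at zero,
    so a badly underestimated job sits at top priority once its
    estimate runs out and blocks everything that arrives later.
    """
    remaining = [size for _, size, _ in jobs]
    est_rem = [float(est) for _, _, est in jobs]
    done, t = [None] * len(jobs), 0
    while any(d is None for d in done):
        ready = [i for i, (arr, _, _) in enumerate(jobs)
                 if arr <= t and done[i] is None]
        if ready:
            i = min(ready, key=lambda j: est_rem[j])
            remaining[i] -= 1
            est_rem[i] = max(0.0, est_rem[i] - 1)
            if remaining[i] == 0:
                done[i] = t + 1
        t += 1
    return done

small = [(10, 2, 2)] * 5                    # five short jobs, arriving later
bad = simulate([(0, 100, 3)] + small)       # large job badly underestimated
good = simulate([(0, 100, 100)] + small)    # accurate size information
```

With the underestimate, the large job's clamped-to-zero estimate keeps it at top priority, so every short job waits behind its full 100 units of work; with accurate sizes, the short jobs preempt it and finish two orders of magnitude sooner.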

    Ensuring Service Level Agreements for Composite Services by Means of Request Scheduling

    Building distributed systems according to the Service-Oriented Architecture (SOA) simplifies the integration process, reduces development costs and increases scalability, interoperability and openness. SOA endorses reusing existing services and aggregating them into new service layers for future recycling. At the same time, the complexity of large service-oriented systems reflects negatively on their behavior in terms of the exhibited quality of service. To address this problem, this thesis focuses on using request scheduling to meet Service Level Agreements (SLAs). Special focus is given to composite services specified by means of workflow languages. The proposed solution uses two-level scheduling: global and local. The global policies assign response time requirements to component service invocations; the local scheduling policies perform request scheduling so as to meet these requirements. The proposed approach can be deployed without altering the code of the scheduled services, does not require a central point of control, and is platform independent. Simulation experiments were used to study the effectiveness and feasibility of the proposed scheduling schemes with respect to various deployment requirements; the validity of the simulation was confirmed by comparing its results to those obtained in experiments with a real-world service. The proposed approach was shown to work well under different traffic conditions and with different types of SLAs.
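A minimal sketch of the two-level idea (the proportional-split rule and all names here are illustrative assumptions, not the thesis's actual policies): the global level divides the end-to-end SLA across workflow components, and each local scheduler serves its pending requests earliest-deadline-first within the assigned budgets.

```python
import heapq

def split_sla(end_to_end_ms, mean_service_ms):
    # Global level: give each component a response-time budget
    # proportional to its mean service time (one possible rule).
    total = sum(mean_service_ms)
    return [end_to_end_ms * m / total for m in mean_service_ms]

class EDFQueue:
    # Local level: serve pending requests earliest-deadline-first.
    # The sequence counter breaks deadline ties in FIFO order.
    def __init__(self):
        self._heap, self._seq = [], 0

    def submit(self, deadline, request):
        heapq.heappush(self._heap, (deadline, self._seq, request))
        self._seq += 1

    def pop(self):
        return heapq.heappop(self._heap)[2]

budgets = split_sla(100.0, [10.0, 30.0, 10.0])  # per-component budgets
q = EDFQueue()
q.submit(deadline=60.0, request="transform")
q.submit(deadline=20.0, request="auth")      # tightest budget, served first
q.submit(deadline=40.0, request="lookup")
order = [q.pop() for _ in range(3)]
```

Note that neither level needs to touch the scheduled service's code: the split is computed from observed mean service times, and the EDF queue sits in front of the service as an interceptor.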

    Sigmoid: An auto-tuned load balancing algorithm for heterogeneous systems

    A challenge that heterogeneous system programmers face is leveraging the performance of all the devices that make up the system. This paper presents Sigmoid, a new load balancing algorithm that efficiently co-executes a single OpenCL data-parallel kernel on all the devices of a heterogeneous system. Sigmoid splits the workload proportionally to the capabilities of the devices, drastically reducing response time and energy consumption. It is dynamic, adaptive, guided and effortless: it does not require the user to supply any parameter, adapting to the behaviour of each kernel at runtime. To evaluate Sigmoid's performance, it has been implemented in Maat, a system abstraction library. Experimental results with different kernel types show that Sigmoid exhibits excellent performance, reaching a utilization of 90% together with energy savings of up to 20%, always reducing programming effort compared to OpenCL, and facilitating portability to other heterogeneous machines. This work has been supported by the Spanish Science and Technology Commission under contract PID2019-105660RB-C22 and the European HiPEAC Network of Excellence.
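The core proportional split can be sketched as follows. This is a static simplification: Sigmoid itself is dynamic and adaptive, refining shares from runtime measurements, and the throughput figures used here are assumed.

```python
def split_workload(total_items, throughputs):
    # Assign each device a share of a data-parallel workload
    # proportional to its observed throughput (items per second);
    # leftover items from integer rounding go to the fastest device.
    total_rate = sum(throughputs)
    shares = [int(total_items * t / total_rate) for t in throughputs]
    shares[throughputs.index(max(throughputs))] += total_items - sum(shares)
    return shares

# A GPU measured 3x faster than the CPU gets 3/4 of the work-items.
shares = split_workload(1000, [3.0, 1.0])
```

Balancing by throughput rather than a fixed ratio is what lets such a scheme adapt per kernel: a memory-bound kernel may yield very different relative throughputs than a compute-bound one on the same devices.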

    Creating an Agent Based Framework to Maximize Information Utility

    With increased reliance on communications to conduct military operations, information-centric network management becomes vital. A Defense Department study of information management for net-centric operations lists the need for tools for information triage (based on relevance, priority, and quality) to counter information overload, semi-automated mechanisms for assessing the quality and relevance of information, and advances to enhance cognition and information understanding in the context of missions [30]. Maximizing information utility to match mission objectives is a complex problem that requires a comprehensive solution spanning information classification, scheduling, resource allocation, and QoS support. Of these research areas, the resource allocation mechanism provides a framework on which to build the entire solution. Adopting an agent-based mindset, the lessons of robot control architecture are applied to the network domain, and the task of managing information flows is achieved with a hybrid reactive architecture. By demonstration, the reactive agent responds to the observed state of the network through the Unified Behavior Framework (UBF). As information flows relay through the network, agents in the network nodes limit resource contention, improving average utility and creating a network with smarter bandwidth utilization. While this is an important result for information maximization, the agent-based framework may have broader applications for managing communication networks.

    COSMIC: A Model for Multiprocessor Performance Analysis

    COSMIC, the Combined Ordering Scheme Model with Isolated Components, describes the execution of specific algorithms on multiprocessors and facilitates analysis of their performance. Building upon previous modeling efforts such as Petri nets, COSMIC structures the modeling of a system along several issues, including computational and overhead costs due to the sequencing of operations, synchronization between operations, and contention for limited resources. This structuring allows us to isolate the performance impact associated with each issue. Finally, studying the performance of a system while it executes a specific algorithm gives insight into its performance under realistic operating conditions. The model also allows us to study realistically sized algorithms with ease, especially when they are regularly structured. During the analysis of a system modeled by COSMIC, a set of timed Petri nets is produced; these Petri nets are then analyzed to determine measures of the system's performance. To facilitate the specification, manipulation, and analysis of large timed Petri nets, a set of tools has been developed. These tools take advantage of several special properties of the timed Petri nets, greatly reducing the computational resources required to calculate the required measures. From this analysis, the performance measures show not only total performance but also a breakdown of the results into several specific categories.
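Resource contention of the kind COSMIC isolates is the textbook Petri-net pattern: a shared resource is a place holding one token, so only one consumer's transition can fire at a time. A minimal untimed firing rule, with made-up place names (the timed semantics COSMIC's tools analyze are not modeled here):

```python
def fire(marking, inputs, outputs):
    # Fire a transition if every input place holds a token; return the
    # new marking, or None if the transition is not enabled.
    if any(marking.get(p, 0) < 1 for p in inputs):
        return None
    m = dict(marking)
    for p in inputs:
        m[p] -= 1
    for p in outputs:
        m[p] = m.get(p, 0) + 1
    return m

# Two processors ready, one shared bus: once processor 1 acquires the
# bus token, processor 2's acquire transition is disabled (contention).
m0 = {"p1_ready": 1, "p2_ready": 1, "bus": 1}
m1 = fire(m0, ["p1_ready", "bus"], ["p1_using_bus"])
blocked = fire(m1, ["p2_ready", "bus"], ["p2_using_bus"])  # not enabled
```

In a timed net, each transition would additionally carry a firing delay, and the analysis would aggregate time spent blocked on the bus place into the contention category of the performance breakdown.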

    A computer simulation study of work flow in a manufacturing system
