
    Metrics, fundamental trade-offs and control policies for delay-sensitive applications in volatile environments

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 137-142).

    With the explosion of consumer demand, media streaming will soon be the dominant type of Internet traffic. Since such applications are intrinsically delay-sensitive, conventional network control policies and coding algorithms may not be appropriate tools for data dissemination over networks. A major issue in the design and analysis of delay-sensitive applications is the notion of delay itself, which varies significantly across applications and time scales. We present a framework for studying the problem of media streaming in an unreliable environment, focusing on the end-user experience.

    First, we take an analytical approach to study fundamental rate-delay-reliability trade-offs in the context of media streaming for a single-receiver system. We consider the probability of interruption in media playback (buffer underflow) and the number of initially buffered packets (initial waiting time) as the Quality of user Experience (QoE) metrics. We characterize the optimal trade-off between these metrics as a function of system parameters such as the packet arrival rate and the file size, for different channel models. For a memoryless channel, we model the receiver's queue dynamics as an M/D/1 queue. We then show that for arrival rates slightly larger than the play rate, the minimum initial buffering required to achieve a given interruption probability remains bounded as the file size grows; when the arrival rate and the play rate match, the minimum initial buffer size must scale as the square root of the file size. We also study media streaming over channels with memory, modeled using Markovian arrival processes, and characterize the optimal trade-off curves for the infinite-file-size case in such environments.

    Second, we generalize the results to the case of multiple servers or peers streaming to a single receiver. Random linear network coding allows us to simplify the packet selection strategies and alleviate issues such as duplicate packet reception. We show that the multi-server streaming problem over a memoryless channel can be transformed into a single-server streaming problem, for which we have already characterized the QoE trade-offs.

    Third, we study the design of media streaming applications in the presence of multiple heterogeneous wireless access methods with different access costs. Our objective is to analytically characterize the trade-off between usage cost and the QoE metrics. We model each access network as a server that provides packets to the user according to a Poisson process with a given rate and cost. The user must decide how many packets to buffer before playback and which networks to access during playback. We design, analyze, and compare several control policies and, in particular, show that a simple Markov policy with a threshold structure performs best. We formulate the problem of finding the optimal control policy as a Markov Decision Process (MDP) with a probabilistic constraint, present the Hamilton-Jacobi-Bellman (HJB) equation for this problem by expanding the state space, and exploit it as a verification method for the optimality of the proposed control policy.
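    As a rough illustration of the single-receiver trade-off described above (a discrete-slot Monte Carlo approximation, not the thesis's analytical derivation), the sketch below simulates an M/D/1-style receiver queue: packets arrive as a Poisson process, playback consumes one packet per deterministic slot, and playback is interrupted if the buffer underflows before the file ends. All parameter values are illustrative assumptions.

```python
import math
import random

def poisson(lam):
    """Sample a Poisson variate (Knuth's multiplication method)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def interruption_prob(arrival_rate, initial_buffer, file_size,
                      play_rate=1.0, trials=5000):
    """Monte Carlo estimate of P(buffer underflow before playback ends)."""
    mean_per_slot = arrival_rate / play_rate  # mean arrivals per played packet
    failures = 0
    for _ in range(trials):
        buffered = received = initial_buffer  # playback starts pre-buffered
        for _ in range(file_size):
            if received < file_size:          # source stops after file_size
                fresh = min(poisson(mean_per_slot), file_size - received)
                received += fresh
                buffered += fresh
            if buffered == 0:                 # underflow: playback interrupted
                failures += 1
                break
            buffered -= 1                     # play one packet per slot
    return failures / trials

# Larger initial buffers trade waiting time for fewer interruptions; with an
# arrival rate slightly above the play rate, a modest buffer already helps.
for b in (5, 20, 80):
    print(b, interruption_prob(arrival_rate=1.05, initial_buffer=b,
                               file_size=2000))
```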
    We use the tools and techniques developed for media streaming applications in the context of power supply networks. We study the value of storage in securing the reliability of a system with uncertain supply and demand, and with supply friction. We assume that storage, when available, can be used to compensate, fully or partially, for a surge in demand or a loss of supply. We formulate the problem of optimally utilizing storage to maximize system reliability as the minimization of the expected discounted cost of blackouts over an infinite horizon. We show that when the stage cost is linear in the size of the blackout, the optimal policy is myopic, in the sense that all shocks are compensated by storage up to the available level. When the stage cost is strictly convex, however, it may be optimal to curtail some of the demand and allow a small current blackout in the interest of maintaining a higher level of reserve to avoid a large blackout in the future. Finally, we examine the value of storage capacity in improving the system's reliability, as well as the effects of the associated optimal policies under different stage costs on the probability distribution of blackouts.

    by Ali ParandehGheibi.
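    The myopic-versus-reserve distinction can be reproduced in a toy discounted MDP. The sketch below (all dynamics and numbers are illustrative assumptions, not the thesis's model) value-iterates a small storage-dispatch problem and compares the dispatch under a linear and a strictly convex blackout cost.

```python
# Toy infinite-horizon discounted MDP for storage dispatch. Each period a
# random shock D hits; releasing a <= min(s, D) units of storage leaves a
# blackout of size D - a; storage then recharges by RECHARGE units.
S = 10                               # storage capacity
GAMMA = 0.95                         # discount factor
SHOCKS = {0: 0.7, 2: 0.2, 6: 0.1}    # shock size -> probability (assumed)
RECHARGE = 1                         # units recovered per period (assumed)

def solve(stage_cost, iters=400):
    """Value iteration for V(s) = E_D[ min_a c(D - a) + GAMMA * V(s') ]."""
    V = [0.0] * (S + 1)
    for _ in range(iters):
        V = [sum(p * min(stage_cost(D - a)
                         + GAMMA * V[min(S, s - a + RECHARGE)]
                         for a in range(min(s, D) + 1))
                 for D, p in SHOCKS.items())
             for s in range(S + 1)]

    def dispatch(s, D):
        """Storage released in state s when shock D is observed."""
        return min(range(min(s, D) + 1),
                   key=lambda a: stage_cost(D - a)
                   + GAMMA * V[min(S, s - a + RECHARGE)])
    return dispatch

linear = solve(lambda b: b)        # stage cost linear in blackout size
convex = solve(lambda b: b * b)    # strictly convex stage cost

# Linear cost: the policy is myopic and releases everything available.
# Convex cost: it may hold some storage back against future large shocks.
for s in (3, 6, 10):
    print(f"s={s}: linear releases {linear(s, 6)}, "
          f"convex releases {convex(s, 6)}")
```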

    Energy-Sustainable IoT Connectivity: Vision, Technological Enablers, Challenges, and Future Directions

    Technology solutions must effectively balance economic growth, social equity, and environmental integrity to achieve a sustainable society. Notably, although the Internet of Things (IoT) paradigm constitutes a key sustainability enabler, critical issues such as increasing maintenance operations, energy consumption, and the manufacturing/disposal of IoT devices have long-term negative economic, societal, and environmental impacts and must be efficiently addressed. This calls for self-sustainable IoT ecosystems that require minimal external resources and intervention, effectively utilize renewable energy sources, and recycle materials whenever possible, thus encompassing energy sustainability. In this work, we focus on energy-sustainable IoT during the operation phase, although our discussions sometimes extend to other sustainability aspects and IoT lifecycle phases. Specifically, we provide a fresh look at energy-sustainable IoT and identify energy provision, energy transfer, and energy efficiency as the three main energy-related processes whose harmonious coexistence pushes toward realizing self-sustainable IoT systems. Their main related technologies, recent advances, challenges, and research directions are also discussed. Moreover, we overview relevant performance metrics to assess the energy-sustainability potential of a certain technique, technology, device, or network, and list some target values for the next generation of wireless systems. Overall, this paper offers insights that are valuable for advancing sustainability goals for present and future generations.

    Comment: 25 figures, 12 tables, submitted to IEEE Open Journal of the Communications Society
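    As a simple illustration of the kind of metric such a survey catalogs, energy efficiency is commonly reported in bits per joule. A minimal sketch follows; the figures are placeholder assumptions, not values from the paper.

```python
def energy_efficiency(bits_delivered: float, energy_joules: float) -> float:
    """Energy efficiency of a device or link, in bits per joule."""
    return bits_delivered / energy_joules

# A hypothetical IoT node delivering 1 Mbit while drawing 50 mW for 20 s
# consumes 0.05 W * 20 s = 1 J:
print(f"{energy_efficiency(1e6, 0.05 * 20):,.0f} bit/J")  # 1,000,000 bit/J
```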

    5G Multi-access Edge Computing: Security, Dependability, and Performance

    The main innovation of the Fifth Generation (5G) of mobile networks is the ability to provide novel services with new and stricter requirements. One of the technologies that enable the new 5G services is Multi-access Edge Computing (MEC). MEC is a system composed of multiple devices with computing and storage capabilities that are deployed at the edge of the network, i.e., close to the end users. MEC reduces latency and enables contextual information and real-time awareness of the local environment. MEC also allows cloud offloading and reduces traffic congestion. Performance is not, however, the only requirement of the new 5G services: new mission-critical applications also demand high security and dependability. These three aspects (security, dependability, and performance) are rarely addressed together. This survey fills the gap and presents 5G MEC from all three perspectives. First, we review background knowledge on MEC with reference to current standardization efforts. Second, we present each aspect individually, introducing the related taxonomy (useful for readers not expert in that aspect), the state of the art, and the challenges in 5G MEC. Finally, we discuss the challenges of jointly addressing the three aspects.

    Comment: 33 pages, 11 figures, 15 tables. This paper is under review at IEEE Communications Surveys & Tutorials. Copyright IEEE 202
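    As a pointer to what a dependability taxonomy covers, one standard quantitative measure is steady-state availability, computed from the mean time between failures (MTBF) and the mean time to repair (MTTR). A minimal sketch follows; the numbers are illustrative assumptions, not figures from the survey.

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: the fraction of time the service is up."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# A hypothetical MEC host failing every 2000 h on average, repaired in 1 h:
a = availability(2000.0, 1.0)
print(f"{a:.5f}")   # 0.99950, i.e. about 99.95% uptime
```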

    Protecting applications using trusted execution environments

    While cloud computing has been broadly adopted, companies that deal with sensitive data are still reluctant to do so due to privacy concerns or legal restrictions. Vulnerabilities in complex cloud infrastructures, resource sharing among tenants, and malicious insiders pose a real threat to the confidentiality and integrity of sensitive customer data. In recent years, trusted execution environments (TEEs), hardware-enforced isolated regions that can protect code and data from the rest of the system, have become available as part of commodity CPUs. However, designing applications for execution within TEEs requires careful consideration of the elevated threats that come with running in a fully untrusted environment. Interaction with the environment should be minimised, but some cooperation with the untrusted host is required, e.g. for disk and network I/O, via a host interface. Implementing this interface while maintaining the security of sensitive application code and data is a fundamental challenge. This thesis addresses this challenge and discusses how TEEs can be leveraged to secure existing applications efficiently and effectively in untrusted environments. We explore this in the context of three systems that deal with the protection of TEE applications and their host interfaces.

    SGX-LKL is a library operating system that can run full, unmodified applications within TEEs with a minimal general-purpose host interface. By providing broad system support inside the TEE, the reliance on the untrusted host can be reduced to a minimal set of low-level operations that cannot be performed inside the enclave. SGX-LKL transparently protects the host interface, including both disk and network I/O.

    Glamdring is a framework for the semi-automated partitioning of TEE applications into an untrusted and a trusted compartment. Based on source-level annotations, it uses either dynamic or static code analysis to identify the sensitive parts of an application. Taking into account the objectives of a small TCB size and low host-interface complexity, it defines an application-specific host interface and generates partitioned application code.

    EnclaveDB is a secure database using Intel SGX, based on a partitioned in-memory database engine. The core of EnclaveDB is its logging and recovery protocol for transaction durability, which relies on a database log managed and persisted by the untrusted database server. EnclaveDB protects against advanced host-interface attacks and ensures the confidentiality, integrity, and freshness of sensitive data.
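    To give a flavour of the freshness problem EnclaveDB addresses: when an untrusted host stores the log, integrity protection alone is not enough, because the host can replay an old but correctly authenticated prefix (a rollback). A common countermeasure, sketched below in a deliberately simplified form (a general illustration under assumed primitives, not EnclaveDB's actual protocol), is to bind each record to a monotonic counter held by the enclave.

```python
import hashlib
import hmac

class SealedLog:
    """Toy append-only log kept by an untrusted host. The 'enclave' MACs
    each record together with a monotonic counter, so tampering,
    reordering, and rollback (replaying an old prefix) are detectable.
    Illustrative only; not EnclaveDB's protocol."""

    def __init__(self, key: bytes):
        self._key = key        # held inside the TEE in a real system
        self._counter = 0      # monotonic counter (e.g. hardware-backed)

    def seal(self, record: bytes) -> tuple[int, bytes, bytes]:
        """Produce a (counter, record, tag) entry for the host to store."""
        self._counter += 1
        tag = hmac.new(self._key,
                       self._counter.to_bytes(8, "big") + record,
                       hashlib.sha256).digest()
        return (self._counter, record, tag)

    def verify(self, entries) -> bool:
        """Recheck a log returned by the host: MACs valid and counters
        form an unbroken sequence up to the enclave's latest counter."""
        if len(entries) != self._counter:
            return False       # truncation / rollback detected
        for i, (ctr, record, tag) in enumerate(entries, start=1):
            expected = hmac.new(self._key,
                                ctr.to_bytes(8, "big") + record,
                                hashlib.sha256).digest()
            if ctr != i or not hmac.compare_digest(tag, expected):
                return False   # tampering or reordering detected
        return True

log = SealedLog(key=b"\x00" * 32)
entries = [log.seal(b"BEGIN txn 1"), log.seal(b"COMMIT txn 1")]
assert log.verify(entries)
assert not log.verify(entries[:1])   # rollback to an old prefix is caught
```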

    A manifesto for future generation cloud computing: research directions for the next decade

    The Cloud computing paradigm has revolutionised the computer science horizon during the past decade and has enabled the emergence of computing as the fifth utility. It has captured significant attention from academia, industry, and government bodies, and has now emerged as the backbone of the modern economy by offering subscription-based services anytime, anywhere, following a pay-as-you-go model. This has instigated (1) shorter establishment times for start-ups, (2) the creation of scalable global enterprise applications, (3) better cost-to-value associativity for scientific and high-performance computing applications, and (4) different invocation/execution models for pervasive and ubiquitous applications. Recent technological developments and paradigms such as serverless computing, software-defined networking, the Internet of Things, and processing at the network edge are creating new opportunities for Cloud computing. However, they also pose several new challenges and create the need for new approaches and research strategies, as well as the re-evaluation of the models that were developed to address issues such as scalability, elasticity, reliability, security, sustainability, and application models. The proposed manifesto addresses these by identifying the major open challenges in Cloud computing, emerging trends, and impact areas. It then offers research directions for the next decade, thus helping in the realisation of Future Generation Cloud Computing.

    Dynamic Analysis of Healthcare Service Delivery: Application of Lean and Agile Concepts

    Hospitals are looking to industry for proven tools to manage increasingly complex operations and reduce costs while simultaneously improving quality of care. Currently, 'lean' is the preferred system redesign paradigm, which focuses on removing process waste and variation. However, the high level of complexity and uncertainty inherent to healthcare makes it incredibly challenging to remove variability and achieve the stable process rates necessary for lean redesign efforts to be effective. This research explores the use of an alternative redesign paradigm, 'agile', which was developed in manufacturing to optimize product delivery in volatile demand environments with highly variable customer requirements. 'Agile' redesign focuses on increasing system responsiveness to customers through improved resource coordination and flexibility. System dynamics simulation and empirical case study are used to explore the impact of following an agile redesign approach in healthcare on service access, care quality, and cost; to determine the comparative effectiveness of individual agile redesign strategies; and to identify opportunities where lean methods can contribute to the creation of responsive, agile enterprises by analyzing hybrid lean-agile approaches. This dissertation contributes to the emerging literature on applying supply chain management concepts in healthcare, and opens a new path for designing healthcare systems that provide the right care, at the right time, to the right patient, at the lowest price.
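    For readers unfamiliar with the method, the sketch below shows, in deliberately tiny form, the kind of stock-and-flow model that system dynamics simulation builds on: a patient backlog (stock) fed by seasonal arrivals and drained by treatment capacity, where an assumed 'flex' parameter stands in for the agile resource-flexibility lever. The model and all rates are illustrative assumptions, not the dissertation's model.

```python
def simulate(weeks=52.0, base_capacity=100.0, flex=0.0, dt=0.25):
    """Euler-integrate a one-stock system dynamics model of a clinic.
    `flex` is the extra capacity mobilised per waiting patient (an
    'agile' responsiveness lever); flex=0.0 is the rigid baseline."""
    backlog = 50.0                # patients waiting (the stock)
    history = []
    t = 0.0
    while t < weeks:
        # demand swings +/- 30 patients/week around 100 every 13 weeks
        season = 1.0 if (t // 13) % 2 else -1.0
        arrivals = 100.0 + 30.0 * season
        capacity = base_capacity + flex * backlog  # responsive capacity
        treated = min(backlog / dt, capacity)      # outflow limited by stock
        backlog += (arrivals - treated) * dt       # stock update
        history.append(backlog)
        t += dt
    return history

rigid = simulate(flex=0.0)
agile = simulate(flex=0.5)
print(f"peak backlog: rigid {max(rigid):.0f}, agile {max(agile):.0f}")
```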

    Design Space Exploration and Resource Management of Multi/Many-Core Systems

    The increasing demand for processing a greater number of applications and their associated data on computing platforms has led to a reliance on multi-/many-core chips, which facilitate parallel processing. At the same time, these platforms are expected to be energy-efficient and reliable, and to perform computations securely, in the interest of the whole community. This book provides perspectives on these aspects from leading researchers, in terms of state-of-the-art contributions and upcoming trends.