Proactive Resource Allocation: Harnessing the Diversity and Multicast Gains
This paper introduces the novel concept of proactive resource allocation
through which the predictability of user behavior is exploited to balance the
wireless traffic over time, and hence, significantly reduce the bandwidth
required to achieve a given blocking/outage probability. We start with a simple
model in which the smart wireless devices are assumed to predict the arrival of
new requests and submit them to the network T time slots in advance. Using
tools from large deviation theory, we quantify the resulting prediction
diversity gain to establish that the decay rate of the outage event
probabilities increases with the prediction duration T. This model is then
generalized to incorporate the effect of the randomness in the prediction
look-ahead time T. Remarkably, we also show that, in the cognitive networking
scenario, the appropriate use of proactive resource allocation by the primary
users improves the diversity gain of the secondary network at no cost in the
primary network diversity. We also shed light on multicasting with predictable
demands and show that the proactive multicast networks can achieve a
significantly higher diversity gain that scales super-linearly with T. Finally,
we conclude with a discussion of the new research questions posed under the
umbrella of the proposed proactive (non-causal) wireless networking framework
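As an illustrative formalization of the headline claim (the notation below is ours, summarizing the result rather than reproducing the paper's definitions):

```latex
% Illustrative notation (ours): N is the system scale (e.g., arrival
% intensity), T the prediction look-ahead, and P_o(N,T) the outage
% probability. The prediction diversity gain is the decay exponent
\[
  d(T) \;=\; -\lim_{N \to \infty} \frac{\log P_o(N,T)}{\log N},
  \qquad P_o(N,T) \doteq N^{-d(T)},
\]
% and the paper's results say d(T) increases with the look-ahead T,
% super-linearly so in the proactive multicast setting.
```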
Collective Value QoS: A Performance Measure Framework for Distributed Heterogeneous Networks
When users' tasks in a distributed heterogeneous computing environment are allocated resources, and the total demand placed on system resources by the tasks, for a given interval of time, exceeds the resources available, some tasks will receive degraded service, receive no service at all, or be dropped from the system. One part of a measure to quantify the success of a resource management system (RMS) in such an environment is the collective value of the tasks completed during an interval of time, as perceived by the user, the application, or the policy maker. For the case where a task may be a data communication
request, the collective value of data communication requests that are satisfied during an interval of time is measured. The Flexible Integrated System Capability (FISC) measure
defined here is one way of obtaining a multi-dimensional measure for quantifying this collective value. While the FISC measure itself is not sufficient for scheduling purposes, it can be a critical part of a scheduler or a scheduling heuristic. The primary contribution of this work
is providing a way to measure the collective value accrued by an RMS using a broad range of attributes and to construct a flexible framework that can be extended for particular problem domains.
This work was supported by the DARPA/ITO Quorum Program, the DARPA/ISO BADD Program, the Office of Naval Research under ONR grant number N00014-97-1-0804, and the DARPA/ITO AICE program under contract numbers DABT63-99-C-0010 and DABT63-99-C-0012. Approved for public release; distribution is unlimited
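For intuition only, a toy version of a priority- and attribute-weighted collective value (the names and weighting scheme below are our assumptions; FISC's actual definition is considerably richer):

```python
from dataclasses import dataclass

@dataclass
class CompletedTask:
    priority: float          # policy-assigned weight of this task
    attribute_scores: dict   # per-attribute satisfaction in [0, 1],
                             # e.g. {"deadline": 1.0, "precision": 0.8}

def collective_value(tasks, attribute_weights):
    """Toy multi-attribute collective value over one time interval.

    Illustrative only: shows the shape of a priority- and
    attribute-weighted sum over completed tasks, not FISC itself.
    """
    total = 0.0
    for t in tasks:
        task_value = sum(attribute_weights[a] * s
                         for a, s in t.attribute_scores.items())
        total += t.priority * task_value
    return total

tasks = [CompletedTask(2.0, {"deadline": 1.0, "precision": 0.5}),
         CompletedTask(1.0, {"deadline": 0.0, "precision": 1.0})]
print(collective_value(tasks, {"deadline": 0.7, "precision": 0.3}))
```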
Survey of traffic control schemes and error control schemes for ATM networks
Among the techniques proposed for the B-ISDN transfer mode, the ATM concept is considered to be the most promising transfer technique because of its flexibility and efficiency. This paper surveys and reviews a number of topics related to ATM networks. Those topics cover congestion control, provision of multiple classes of traffic, and error control. Due to the nature of ATM networks, those issues are far more challenging than in conventional networks. Some of the more promising solutions to those issues are surveyed, and the corresponding results on performance are summarized. Future research problems in ATM protocol aspects are also presented
Design of Scalable On-Demand Video Streaming Systems Leveraging Video Viewing Patterns
The explosive growth in on-demand access of video across all forms of delivery (Internet, traditional cable, IPTV, wireless) has renewed the interest in scalable delivery methods. Approaches using Content Delivery Networks (CDNs), Peer-to-Peer (P2P) approaches, and their combinations have been proposed as viable options to ease the load on servers and network links. However, there has been little focus on how to take advantage of user viewing patterns to understand their impact on existing mechanisms and to design new solutions that improve the streaming service quality.
In this dissertation, we leverage the observation that users watch only a small portion of videos to understand the limits of existing designs and to optimize two scalable approaches -- content placement and P2P Video-on-Demand (VoD) streaming. Then, we present our novel scalable system, Joint-Family, which enables adaptive bitrate (ABR) streaming in P2P VoD while accommodating user viewing patterns.
We first provide evidence of such user viewing behavior from data collected from a nationally deployed VoD service. In contrast to using a simplistic popularity-based placement and traditionally proposed caching strategies (such as CDNs), we use a Mixed Integer Programming (MIP) formulation to model the placement problem and employ an innovative approach that scales well. We have performed detailed simulations using actual traces of user viewing sessions (including stream control operations such as pause, fast-forward, and rewind). Our results show that the use of a segment-based placement strategy yields substantial savings in both disk storage requirements at origin servers/VHOs and network bandwidth use. For example, compared to a simple caching scheme using full videos, our MIP-based placement using segments can achieve up to 71% reduction in peak link bandwidth usage.
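A minimal sketch of a segment-placement MIP in this spirit (the variables, costs, capacities, and the PuLP formulation below are illustrative assumptions, not the dissertation's exact model):

```python
import pulp

# Toy segment-placement MIP (illustrative assumptions): choose which
# video segments to cache at which video hub offices (VHOs) to minimize
# the network cost of fetching uncached segments from the origin,
# subject to per-site disk capacity.
segments = ["s1", "s2", "s3"]            # video segments
sites = ["vho1", "vho2"]                 # candidate cache locations
demand = {("s1", "vho1"): 90, ("s1", "vho2"): 10,
          ("s2", "vho1"): 40, ("s2", "vho2"): 60,
          ("s3", "vho1"): 5,  ("s3", "vho2"): 5}
size = {"s1": 2.0, "s2": 1.0, "s3": 4.0}    # GB per segment
capacity = {"vho1": 4.0, "vho2": 6.0}       # GB per site
fetch_cost = 1.0                             # cost per unit of remote demand

prob = pulp.LpProblem("segment_placement", pulp.LpMinimize)
x = pulp.LpVariable.dicts("place", (segments, sites), cat="Binary")

# Objective: remote-fetch cost for demand not served from the local cache.
prob += pulp.lpSum(fetch_cost * demand[s, v] * (1 - x[s][v])
                   for s in segments for v in sites)

# Disk capacity constraint at each site.
for v in sites:
    prob += pulp.lpSum(size[s] * x[s][v] for s in segments) <= capacity[v]

prob.solve()
for s in segments:
    for v in sites:
        if x[s][v].value() == 1:
            print(f"cache {s} at {v}")
```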
Secondly, we note that the policies adopted in existing P2P VoD systems have not taken user viewing behavior -- that users abandon videos -- into account. We show that abandonment can result in increased interruptions and wasted resources. As a result, we reconsider the set of policies to use in the presence of abandonment. Our goal is to balance the conflicting needs of delivering videos without interruptions while minimizing wastage. We find that an Earliest-First chunk selection policy in conjunction with the Earliest-Deadline peer selection policy allows us to achieve high download rates. We take advantage of abandonment by converting peers to "partial seeds"; this increases capacity. We minimize wastage by using a playback lookahead window. We use analysis and simulation experiments using real-world traces to show the effectiveness of our approach.
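A rough sketch of the chunk-selection idea (window size, names, and data structures are our assumptions; the dissertation's policy is richer):

```python
def next_chunk_to_request(have, playback_pos, lookahead, total_chunks):
    """Earliest-First chunk selection bounded by a lookahead window.

    Illustrative sketch: request the earliest missing chunk at or after
    the playback position, but never beyond `lookahead` chunks ahead --
    the window limits data wasted when the viewer abandons the video.
    """
    window_end = min(playback_pos + lookahead, total_chunks)
    for i in range(playback_pos, window_end):
        if i not in have:
            return i
    return None  # nothing missing inside the window

# Example: playing chunk 10 with a window of 8 chunks; 12 is missing.
print(next_chunk_to_request({10, 11, 13}, 10, 8, 100))  # -> 12
```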
Finally, we propose Joint-Family, a protocol that combines P2P and adaptive bitrate (ABR) streaming for VoD. While P2P for VoD and ABR have been proposed previously, they have not been studied together because they attempt to tackle problems with seemingly orthogonal goals. We motivate our approach through analysis that overcomes a misconception resulting from prior analytical work, and show that the popularity of a P2P swarm and seed staying time have a significant bearing on the achievable per-receiver download rate. Specifically, our analysis shows that popularity affects swarm efficiency when seeds stay "long enough". We also show that ABR in a P2P setting helps viewers achieve higher playback rates and/or fewer interruptions.
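For intuition, a standard fluid-model approximation from the P2P literature (our illustration, not the dissertation's own analysis): with $x$ concurrent downloaders, $y$ seeds, per-peer upload capacity $\mu$, and sharing efficiency $\eta$, the per-receiver download rate is roughly

```latex
\[
  d \;\approx\; \frac{\mu\,(\eta x + y)}{x},
\]
% so longer seed staying times (larger y) raise the achievable rate,
% and the seed term matters most in small swarms -- consistent with
% the observation that popularity matters when seeds stay
% "long enough".
```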
We develop the Joint-Family protocol based on the observations from our analysis. Peers in Joint-Family simultaneously participate in multiple swarms to exchange chunks of different bitrates. We adopt chunk, bitrate, and peer selection policies that minimize occurrence of interruptions while delivering high quality video and improving the efficiency of the system. Using traces from a large-scale commercial VoD service, we compare Joint-Family with existing approaches for P2P VoD and show that viewers in Joint-Family enjoy higher playback rates with minimal interruption, irrespective of video popularity
Joint Time- and Event-Triggered Scheduling in the Linux Kernel
There is increasing interest in using Linux in the real-time domain due to
the emergence of cloud and edge computing, the need to decrease costs, and the
growing number of complex functional and non-functional requirements of
real-time applications. Linux presents a valuable opportunity as it has rich
hardware support, an open-source development model, a well-established
programming environment, and avoids vendor lock-in. Although Linux was
initially developed as a general-purpose operating system, some real-time
capabilities have been added to the kernel over many years to increase its
predictability and reduce its scheduling latency. Unfortunately, Linux
currently has no support for time-triggered (TT) scheduling, which is widely
used in the safety-critical domain for its determinism, low run-time scheduling
latency, and strong isolation properties. We present an enhancement of the
Linux scheduler as a new low-overhead TT scheduling class to support offline
table-driven scheduling of tasks on multicore Linux nodes. Inspired by the Slot
shifting algorithm, we complement the new scheduling class with a low-overhead
slot shifting manager running on a non-time-triggered core to provide
guaranteed execution time to real-time aperiodic tasks by using the slack of
the time-triggered tasks and avoiding high-overhead table regeneration for
adding new periodic tasks. Furthermore, we evaluate our implementation on
server-grade hardware with an Intel Xeon Scalable Processor.
Comment: to appear in the Operating Systems Platforms for Embedded Real-Time Applications (OSPERT) workshop 2023, co-hosted with the 35th Euromicro Conference on Real-Time Systems
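A condensed sketch of the slot-shifting admission idea described above (a Python simplification with assumed names and data layout; the actual implementation is a C scheduling class inside the kernel):

```python
# Toy slot-shifting sketch (our simplification, not the authors' kernel
# code): an offline table gives each interval a slack budget -- slots
# not needed by the time-triggered (TT) tasks. An aperiodic job is
# granted slots only while spare capacity remains, so the TT guarantees
# are preserved and the table never needs regenerating.

class Interval:
    def __init__(self, start, end, tt_demand):
        self.start, self.end = start, end          # slot indices
        self.slack = (end - start) - tt_demand     # spare slots

def admit_aperiodic(intervals, wcet_slots):
    """Admit an aperiodic job of `wcet_slots` slots if the intervals it
    would borrow from hold enough slack; decrement slack on success."""
    needed = wcet_slots
    borrow = []
    for iv in intervals:
        take = min(iv.slack, needed)
        borrow.append((iv, take))
        needed -= take
        if needed == 0:
            for iv2, t in borrow:
                iv2.slack -= t
            return True
    return False  # not enough slack; reject without touching the table

ivs = [Interval(0, 10, tt_demand=7), Interval(10, 20, tt_demand=9)]
print(admit_aperiodic(ivs, 3))  # True: first interval has 3 spare slots
print(admit_aperiodic(ivs, 2))  # False: only 1 spare slot remains
```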
A survey on the chronological evolution of timestamp schedulers in packet switching networks
Interest in solving the problem of congestion and flow control in networks dates back to the early days of the Internet, around 1967 or earlier. As network deployment and popularity grew, the problem grew with them, and the demand for an optimal, or at least workable, solution became obvious. Since then there has been an intensive effort by scholars and researchers to solve the congestion control problem. The problem was made worse by the arrival of novel traffic with different characteristics from real-time applications such as video and voice, and by the growth in the number of users. Attempts to solve the congestion problem at the network layer were popular in the 1990s. This article demonstrates chronologically how attempts at timestamp-based scheduling in packet-switched networks have evolved. Furthermore, the benefits and drawbacks of each mechanism are presented, along with a brief explanation of its mathematical, conceptual, or implementation issues, and the key successes of schedulers in the market are highlighted. This paper aims to stimulate research into the importance and ability of scheduling in routers to enhance quality of service (QoS) for real-time applications, compared with solutions at other layers, and to help researchers distinguish the key failures of proposed mechanisms that have not been implemented in real routers
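To make the timestamp idea concrete, here is a minimal sketch in the spirit of weighted fair queueing (our illustration, not a scheme taken from the survey): each arriving packet is stamped with a virtual finish time, and the link always transmits the packet with the smallest stamp.

```python
import heapq

class TimestampScheduler:
    """Toy timestamp scheduler: serve packets in virtual-finish-time
    order, with per-flow stamps spaced by size/weight. The virtual
    clock below is deliberately simplified relative to true WFQ."""

    def __init__(self, weights):
        self.weights = weights            # flow -> share of the link
        self.last_finish = {f: 0.0 for f in weights}
        self.queue = []                   # (finish_time, seq, flow, size)
        self.virtual_time = 0.0
        self.seq = 0                      # tie-breaker for heapq

    def enqueue(self, flow, size):
        start = max(self.virtual_time, self.last_finish[flow])
        finish = start + size / self.weights[flow]
        self.last_finish[flow] = finish
        heapq.heappush(self.queue, (finish, self.seq, flow, size))
        self.seq += 1

    def dequeue(self):
        finish, _, flow, size = heapq.heappop(self.queue)
        self.virtual_time = finish        # simplified virtual clock
        return flow, size

s = TimestampScheduler({"voice": 2.0, "bulk": 1.0})
s.enqueue("bulk", 1500); s.enqueue("voice", 500); s.enqueue("voice", 500)
print([s.dequeue()[0] for _ in range(3)])  # voice served ahead of bulk
```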
Energy Awareness and Scheduling in Mobile Devices and High End Computing
As energy demands rise due to growing economies and populations, there will be greater emphasis on sustainable supply, conservation, and efficient usage of this vital resource. Even at a smaller level, the need for minimizing energy consumption continues to be compelling in embedded, mobile, and server systems such as handheld devices, robots, spaceships, laptops, cluster servers, and sensors. This is due to the direct impact of constrained energy sources, such as battery size and weight, as well as the cooling expenses incurred in cluster-based systems to reduce heat dissipation. Energy management therefore plays a paramount role not only in hardware design but also in user-application, middleware, and operating system design. At a higher level, datacenters are sprouting everywhere due to the exponential growth of Big Data in every aspect of human life; the buzzword these days is cloud computing. This dissertation focuses on techniques, specifically algorithmic ones, to scale down energy needs whenever system performance can be relaxed. We examine the significance and relevance of this research and develop a methodology to study this phenomenon.
Specifically, the research studies energy-aware resource reservation algorithms that satisfy both performance needs and energy constraints. Many energy management schemes focus on a single resource that is dedicated to real-time or non-real-time processing. Unfortunately, in many practical systems a combination of hard and soft real-time periodic tasks, aperiodic real-time tasks, interactive tasks, and batch tasks must be supported, and each task may also require access to multiple resources. Therefore, this research tackles the NP-hard problem of providing timely and simultaneous access to multiple resources through practical abstractions and near-optimal heuristics aided by cooperative scheduling. We provide an elegant EAS model that works across this spectrum, using a run-profile based approach to scheduling. We apply this model to significant applications such as BLAT and the assembly of gene sequences in the bioinformatics domain. We also provide a simulation extending this model to cloud computing, answering "what if" scenario questions for consumers and operators of cloud resources regarding deadlines, single vs. distributed cluster use, and impact analysis of energy-index and availability against revenue and ROI
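As a toy illustration of scaling down energy when performance can be relaxed (a standard DVFS-style model; the function names, frequency list, and the f^2 energy approximation are our assumptions, not the dissertation's EAS formulation):

```python
# Toy DVFS-style example: dynamic power scales roughly with f^3 while
# execution time scales with 1/f, so energy per job scales roughly with
# f^2 -- run as slowly as the deadline allows.

def pick_frequency(cycles, deadline_s, freqs_hz):
    """Lowest frequency that finishes `cycles` within the deadline."""
    for f in sorted(freqs_hz):
        if cycles / f <= deadline_s:
            return f
    return None  # deadline infeasible even at the highest frequency

def relative_energy(f, f_max):
    """Energy relative to running at f_max (E ~ f^2 per cycle)."""
    return (f / f_max) ** 2

freqs = [0.8e9, 1.6e9, 2.4e9]
f = pick_frequency(2.0e9, deadline_s=2.0, freqs_hz=freqs)
print(f, relative_energy(f, max(freqs)))  # 1.6 GHz, ~44% of max energy
```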