HLS: a framework for composing soft real-time schedulers
Journal Article
Hierarchical CPU scheduling has emerged as a way to (1) support applications with diverse scheduling requirements in open systems, and (2) provide load isolation between applications, users, and other resource principals. Most existing work on hierarchical scheduling has focused on systems that provide a fixed scheduling model: the schedulers in part or all of the hierarchy are specified in advance. In this paper we describe a system of guarantees that permits a general hierarchy of soft real-time schedulers, one that contains arbitrary scheduling algorithms at all points within the hierarchy, to be analyzed. This analysis results in deterministic guarantees for threads at the leaves of the hierarchy. We also describe the design, implementation, and performance evaluation of a system for supporting such a hierarchy in the Windows 2000 kernel. Finally, we show that complex scheduling behaviors can be created using small schedulers as components, and we describe the HLS programming environment.
Hierarchical schedulers, performance guarantees, and resource management
Manuscript
An attractive approach to scheduling applications with diverse CPU scheduling requirements is to use different schedulers for different applications. For example: real-time schedulers allow applications to perform computations before deadlines, time-sharing schedulers provide high throughput for compute-bound processes and fast response time for interactive applications, and gang schedulers and cluster coschedulers permit tightly-coupled parallel applications to achieve high performance in the presence of multiprogramming. Furthermore, individual members of these broad classes of algorithms make tradeoffs that may or may not be appropriate for a given situation. In order to take advantage of these diverse algorithms, we permit schedulers to be arranged in a hierarchy: a root scheduler gives CPU time to the schedulers below it in the hierarchy, and so on, until an application thread is scheduled by a leaf scheduler. This architecture has a number of advantages.
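The delegation pattern described above can be sketched as a tree of schedulers in which a root scheduler picks a child and the child picks a thread. The class names and the round-robin policies below are illustrative assumptions for exposition, not the HLS implementation or its API.

```python
# Illustrative sketch of a scheduler hierarchy: a root scheduler grants
# CPU time to child schedulers, which in turn schedule application
# threads at the leaves. All names and policies here are hypothetical.

class Thread:
    def __init__(self, name):
        self.name = name

class LeafScheduler:
    """Round-robin over application threads (one possible leaf policy)."""
    def __init__(self, threads):
        self.threads = list(threads)
        self.next = 0

    def pick(self):
        thread = self.threads[self.next]
        self.next = (self.next + 1) % len(self.threads)
        return thread

class RootScheduler:
    """Round-robin over child schedulers; delegates the final choice."""
    def __init__(self, children):
        self.children = list(children)
        self.next = 0

    def pick(self):
        child = self.children[self.next]
        self.next = (self.next + 1) % len(self.children)
        return child.pick()

# Two leaf schedulers (e.g. a real-time class and a time-sharing class)
# under one root; the root alternates between them.
rt = LeafScheduler([Thread("audio"), Thread("video")])
ts = LeafScheduler([Thread("shell")])
root = RootScheduler([rt, ts])
order = [root.pick().name for _ in range(4)]
print(order)  # root alternates between the two leaf schedulers
```

The point of the sketch is the interface: every node exposes the same `pick` operation, so arbitrary scheduling algorithms can be composed at any point in the hierarchy.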
Feedback control of data aggregation in sensor networks
Sensor networks have recently emerged as a new paradigm for distributed sensing and actuation. This paper describes fundamental performance trade-offs in sensor networks and the utility of simple feedback control mechanisms for distributed performance optimization. A data communication and aggregation framework is presented that manipulates the degree of data aggregation to maintain specified acceptable latency bounds on data delivery while attempting to minimize energy consumption. An analytic model is constructed to describe the relationships between timeliness, energy, and the degree of aggregation, as well as to quantify constraints that stem from real-time requirements. Feedback control is used to adapt the degree of data aggregation dynamically in response to network load conditions while meeting application deadlines. The results illustrate the usefulness of feedback control in the sensor network domain.
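The feedback loop the abstract describes can be sketched as a simple proportional controller that raises the aggregation degree while latency is under the bound (saving energy) and lowers it when latency exceeds the bound. The gain, the bounds, and the toy latency model below are illustrative assumptions, not the paper's analytic model.

```python
# Minimal sketch of feedback-controlled data aggregation: a proportional
# controller adjusts the aggregation degree to keep delivery latency near
# a specified bound. Gain, clamp range, and latency model are assumptions.

def latency(degree, load):
    """Toy model: aggregating more packets adds waiting delay per hop."""
    return load * (1.0 + 0.5 * degree)

def control_step(degree, measured, bound, gain=0.1):
    """Move the aggregation degree proportionally to the latency error,
    clamped to a valid range."""
    error = bound - measured
    degree = degree + gain * error
    return max(0.0, min(10.0, degree))

degree = 5.0
bound = 20.0
for _ in range(50):
    measured = latency(degree, load=8.0)
    degree = control_step(degree, measured, bound)

# At equilibrium latency equals the bound: 8*(1 + 0.5*d) = 20, so d = 3.0
print(round(degree, 2))
```

Under this toy model the update is a contraction, so the degree converges geometrically to the value at which measured latency matches the bound; a heavier load would drive the equilibrium degree down.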
Camera-Independent Single Image Depth Estimation from Defocus Blur
Monocular depth estimation is an important step in many downstream machine-vision tasks. We address the problem of estimating monocular depth from defocus blur, which can yield more accurate results than semantic-based depth estimation methods. Existing monocular depth-from-defocus techniques are sensitive to the particular camera the images are taken with. Using optical-physics equations, we show how several camera-related parameters affect the defocus blur and make it depend on those parameters. We propose a simple correction procedure that alleviates this problem and does not require any retraining of the original model. We created a synthetic dataset that can be used to test the camera-independent performance of depth-from-defocus-blur models. We evaluate our model on both synthetic and real datasets (DDFF12 and NYU Depth V2) obtained with different cameras and show that our methods are significantly more robust to changes of camera. Code:
https://github.com/sleekEagle/defocus_camind.gi
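The camera dependence the abstract points at is visible in the standard thin-lens blur relation (a textbook formula, not necessarily the paper's exact model): the diameter of the blur circle depends on focal length, f-number, and focus distance, all of which vary between cameras.

```python
# Standard thin-lens circle-of-confusion relation (textbook optics, not
# the paper's exact model): the blur-circle diameter for a lens of focal
# length f and f-number N, focused at distance s1, imaging a point at s2.

def blur_circle_diameter(f, N, s1, s2):
    """Blur-circle (circle of confusion) diameter, in the same length
    units as f, s1, and s2."""
    A = f / N  # aperture diameter
    return A * abs(s2 - s1) / s2 * f / (s1 - f)

# Two cameras imaging the same scene depth produce different blur:
# this is why a depth-from-defocus model trained on one camera needs
# a correction before it transfers to another.
c_a = blur_circle_diameter(f=0.050, N=2.0, s1=2.0, s2=4.0)  # 50 mm lens
c_b = blur_circle_diameter(f=0.025, N=2.0, s1=2.0, s2=4.0)  # 25 mm lens
print(c_a > c_b)  # longer focal length gives a larger blur circle
```

Since the same scene depth `s2` maps to different blur diameters under different `(f, N, s1)`, any model that reads depth directly off blur magnitude inherits the camera parameters, which motivates a per-camera correction rather than retraining.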
MiddleGAN: Generate Domain Agnostic Samples for Unsupervised Domain Adaptation
In recent years, machine learning has achieved impressive results across different application areas. However, machine learning algorithms do not necessarily perform well on a new domain whose distribution differs from that of their training set. Domain Adaptation (DA) is used to mitigate this problem. One approach taken by existing DA algorithms is to find domain-invariant features whose distributions in the source domain are the same as their distributions in the target domain. In this paper, we propose to let the classifier that performs the final classification task on the target domain learn the invariant features implicitly. This is achieved by feeding the classifier, during training, generated fake samples that are similar to samples from both the source and target domains. We call these generated samples domain-agnostic samples. To accomplish this, we propose a novel variation of generative adversarial networks (GANs), called MiddleGAN, that generates fake samples similar to samples from both the source and target domains, using two discriminators and one generator. We extend the theory of GANs to show that optimal solutions exist for the parameters of the two discriminators and the generator in MiddleGAN, and we empirically show that the samples generated by MiddleGAN are similar to samples from both the source domain and the target domain. We conducted extensive evaluations on 24 benchmarks, comparing MiddleGAN against various state-of-the-art algorithms; MiddleGAN outperforms the state of the art by up to 20.1% on certain benchmarks.
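One plausible way to read the two-discriminator setup is as a generator penalized by two standard adversarial losses at once, one against each domain's discriminator. The loss shapes below are the ordinary GAN cross-entropy and are an assumption for illustration, not MiddleGAN's published objective or code.

```python
# Hedged sketch of a two-discriminator adversarial objective in the
# spirit of MiddleGAN: D_s separates fakes from source samples, D_t
# separates fakes from target samples, and the generator is penalized
# by both, pushing its samples toward both domains at once.

import math

def bce(p, label):
    """Binary cross-entropy for one probability p against label in {0, 1}."""
    eps = 1e-12
    return -(label * math.log(p + eps) + (1 - label) * math.log(1 - p + eps))

def discriminator_loss(p_real, p_fake):
    """Each discriminator wants real samples scored 1 and fakes scored 0."""
    return bce(p_real, 1) + bce(p_fake, 0)

def generator_loss(p_fake_src_disc, p_fake_tgt_disc):
    """The generator wants BOTH discriminators to score its fakes as real."""
    return bce(p_fake_src_disc, 1) + bce(p_fake_tgt_disc, 1)

# A fake that fools both discriminators costs the generator less than
# one that fools only the source discriminator, so the optimum sits
# "in the middle" of the two domains.
good = generator_loss(0.9, 0.9)
lopsided = generator_loss(0.9, 0.1)
print(good < lopsided)  # True
```

The summed penalty is what makes the generated samples domain-agnostic: a sample resembling only one domain is punished by the other domain's discriminator.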
Opportunities and obligations for physical computing systems
The recent confluence of embedded and real-time systems with wireless, sensor, and networking technologies is creating a nascent infrastructure for a technical, economic, and social revolution. Based on the seamless integration of computing with the physical world via sensors and actuators, this revolution will accrue many benefits. Potentially, its impact could be similar to that of the current Internet. We believe developers must focus on the physical, real-time, and embedded aspects of pervasive computing. We refer to this domain as physical computing systems. For pervasive computing to achieve its promise, developers must create not only high-level system software and application solutions, but also low-level embedded systems solutions. To better understand physical computing's advantages, we consider three application areas: assisted living, emergency response systems for natural or man-made disasters, and protecting critical infrastructures at the national level.
Adaptive Fault Tolerance and Graceful Degradation Under Dynamic Hard Real-time Scheduling
Static redundancy allocation is inappropriate in hard real-time systems that operate in variable and dynamic environments (e.g., radar tracking, avionics). Adaptive Fault Tolerance (AFT) can assure adequate reliability of critical modules, under temporal and resource constraints, by allocating just as much redundancy to less critical modules as can be afforded, thus gracefully reducing their resource requirement. In this paper, we propose a mechanism for supporting adaptive fault tolerance in a real-time system. Adaptation is achieved by choosing a suitable redundancy strategy for a dynamically arriving computation to assure required reliability and to maximize the potential for fault tolerance while ensuring that deadlines are met. The proposed approach is evaluated using a real-life workload simulating radar tracking software in AWACS early warning aircraft. The results demonstrate that our technique outperforms static fault tolerance strategies in terms of tasks meeting their timing constraints. Further, we show that the gain in this timing-centric performance metric does not reduce the fault tolerance of the executing tasks below a predefined minimum level. Overall, the evaluation indicates that the proposed ideas result in a system that dynamically provides QoS guarantees along the fault-tolerance dimension.
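The adaptation step the abstract describes can be sketched as a simple admission decision: for each arriving task, choose the most redundant execution strategy whose cost still fits the available slack before the deadline, subject to a reliability floor. The strategy names, cost factors, and reliability figures below are hypothetical, not the paper's workload or algorithm.

```python
# Illustrative sketch of adaptive fault tolerance: for a dynamically
# arriving task, pick the most redundant strategy whose CPU cost fits
# the slack before the deadline. All numbers here are hypothetical.

STRATEGIES = [
    # (name, CPU cost factor, achieved reliability), most redundant first
    ("triple-modular", 3.0, 0.9999),
    ("primary-backup", 2.0, 0.999),
    ("single-copy",    1.0, 0.99),
]

def choose_strategy(base_cost, slack, min_reliability=0.99):
    """Return the first (most redundant) strategy that both meets the
    deadline (cost fits in the slack) and the reliability floor."""
    for name, factor, reliability in STRATEGIES:
        if base_cost * factor <= slack and reliability >= min_reliability:
            return name
    return None  # no feasible strategy: the task must be rejected

print(choose_strategy(base_cost=4.0, slack=13.0))  # triple-modular
print(choose_strategy(base_cost=4.0, slack=9.0))   # primary-backup
print(choose_strategy(base_cost=4.0, slack=3.0))   # None (reject)
```

As load rises and slack shrinks, the same task degrades gracefully from triple-modular redundancy to a single copy, and only when even one copy cannot meet the deadline is it rejected, which mirrors the graceful-degradation behavior the abstract claims.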