Real-time and fault tolerance in distributed control software
Closed-loop control systems typically contain a multitude of spatially distributed sensors and actuators operated simultaneously, so these systems are parallel and distributed in their essence. Mapping this parallelism onto a given distributed hardware architecture, however, brings in additional requirements: safe multithreading, optimal process allocation, and real-time scheduling of bus and network resources. Nowadays, fault tolerance methods and fast, even online, reconfiguration are becoming increasingly important. These often conflicting requirements make the design and implementation of real-time distributed control systems an extremely difficult task that requires substantial knowledge in several areas of control and computer science. Although many design methods have been proposed so far, none of them has succeeded in covering all important aspects of the problem at hand [1]. The continuous growth of the embedded market makes a simple and natural design methodology for real-time systems needed more than ever.
On the periodic behavior of real-time schedulers on identical multiprocessor platforms
This paper proposes a general periodicity result concerning any deterministic and memoryless scheduling algorithm (including non-work-conserving algorithms), for any context, on identical multiprocessor platforms. By context we mean the hardware architecture (uniprocessor, multicore) as well as task constraints such as critical sections, precedence constraints, self-suspension, etc. Since the result is based only on releases and deadlines, it is independent of any other parameter. Note that we do not claim that the given interval is minimal, but it is an upper bound on any cycle of any feasible schedule provided by any deterministic and memoryless scheduler.
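As a small illustration of the kind of periodicity the abstract describes (a uniprocessor sketch only; the paper's result covers multiprocessors and richer task constraints), the following Python snippet simulates preemptive EDF on synchronous periodic tasks and checks that the schedule repeats with the hyperperiod:

```python
from math import lcm

def edf_schedule(tasks, horizon):
    """Simulate preemptive EDF on one processor over [0, horizon).

    tasks: list of (period, wcet) pairs; each task releases a job at
    every multiple of its period, with deadline at the next release.
    Returns the task index executed in each slot (None = idle).
    """
    remaining = {}              # job (task index, release time) -> work left
    deadline = {}               # job -> absolute deadline
    schedule = []
    for t in range(horizon):
        for i, (period, wcet) in enumerate(tasks):
            if t % period == 0:                 # release a new job
                job = (i, t)
                remaining[job] = wcet
                deadline[job] = t + period
        ready = [j for j, r in remaining.items() if r > 0]
        if ready:                               # earliest deadline first
            job = min(ready, key=lambda j: deadline[j])
            remaining[job] -= 1
            schedule.append(job[0])
        else:
            schedule.append(None)
    return schedule

tasks = [(4, 1), (5, 2), (10, 3)]       # (period, WCET); utilization 0.95
H = lcm(*(p for p, _ in tasks))         # hyperperiod = 20
sched = edf_schedule(tasks, 2 * H)
assert sched[:H] == sched[H:2 * H]      # the schedule repeats every H slots
```

With synchronous releases and utilization below one, the backlog is empty at each hyperperiod boundary, so this deterministic, memoryless policy cycles; the paper's contribution is bounding the cycle in far more general settings.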
Heavy-tailed Distributions In Stochastic Dynamical Models
Heavy-tailed distributions are found throughout many naturally occurring phenomena. We review the models of stochastic dynamics that lead to heavy-tailed distributions (and power-law distributions in particular), including multiplicative noise models, models subject to the Degree-Mass-Action principle (the generalized preferential attachment principle), the intermittent behavior occurring in complex physical systems near a bifurcation point, queuing systems, and models of self-organized criticality. Heavy-tailed distributions appear in them as emergent phenomena sensitive to the coupling rules essential for the entire dynamics.
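A minimal sketch of the multiplicative-noise route to heavy tails, assuming a Kesten-type recursion X_{t+1} = A_t X_t + 1 (an illustrative choice, not a specific model from the review): when E[log A] < 0 the process is stationary, yet the multiplicative kicks produce a power-law tail.

```python
import random

def kesten_sample(n, burn_in=1000, seed=1):
    """Sample the stationary state of X_{t+1} = A_t * X_t + 1 with
    A_t ~ Uniform(0.5, 1.5).  Here E[log A] < 0, so X_t converges to a
    stationary distribution, while the multiplicative noise gives it a
    power-law tail (Kesten's theorem)."""
    rng = random.Random(seed)
    x, out = 1.0, []
    for t in range(burn_in + n):
        x = rng.uniform(0.5, 1.5) * x + 1.0
        if t >= burn_in:
            out.append(x)
    return out

xs = sorted(kesten_sample(200_000))
median, top = xs[len(xs) // 2], xs[-1]
# a light-tailed stationary law would keep the maximum within a few
# medians; here rare runs of large A_t drive extreme excursions
tail_ratio = top / median
```

Typical samples stay moderate while the maximum is orders of magnitude larger, the signature of the heavy tail.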
Timely-Throughput Optimal Scheduling with Prediction
Motivated by the increasing importance of providing delay-guaranteed services
in general computing and communication systems, and the recent wide adoption of
learning and prediction in network control, in this work, we consider a general
stochastic single-server multi-user system and investigate the fundamental
benefit of predictive scheduling in improving timely-throughput, being the rate
of packets that are delivered to destinations before their deadlines. By
adopting an error rate-based prediction model, we first derive a Markov
decision process (MDP) solution to optimize the timely-throughput objective
subject to an average resource consumption constraint. Based on a packet-level
decomposition of the MDP, we explicitly characterize the optimal scheduling
policy and rigorously quantify the timely-throughput improvement due to
predictive service, which we express in closed form in terms of the true-positive rate of the prediction, the false-negative rate, the packet deadline, and the prediction window size. We also conduct
extensive simulations to validate our theoretical findings. Our results provide
novel insights into how prediction and system parameters impact performance and
provide useful guidelines for designing predictive low-latency control
algorithms.

Comment: 14 pages, 7 figures
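The error-rate-based MDP analysis does not fit in a snippet, but the qualitative benefit of predictive scheduling can be sketched with an idealized, error-free prediction model (an assumption of this sketch; the paper's model also accounts for prediction errors). A packet foreseen w slots before its arrival can be pre-served, which effectively widens its service window:

```python
import heapq
import random

def timely_throughput(arrivals, deadline, window):
    """Serve unit packets, one per slot, earliest-deadline-first.

    arrivals[r] packets arrive at slot r; each must be delivered in
    slots [r, r + deadline - 1].  With prediction window w > 0 the
    server foresees an arrival w slots early and may pre-serve it.
    Returns the fraction of packets delivered before their deadline.
    """
    jobs = []                                    # (release, abs deadline)
    for r, k in enumerate(arrivals):
        for _ in range(k):
            jobs.append((max(0, r - window), r + deadline))
    jobs.sort()
    heap, served, i = [], 0, 0
    for t in range(len(arrivals) + deadline):
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(heap, jobs[i][1])     # packet becomes servable
            i += 1
        while heap and heap[0] <= t:             # drop expired packets
            heapq.heappop(heap)
        if heap:
            heapq.heappop(heap)                  # serve earliest deadline
            served += 1
    return served / max(1, len(jobs))

rng = random.Random(7)
trace = [rng.choice([0, 0, 1, 3]) for _ in range(5000)]  # bursty, load ~1.0
reactive = timely_throughput(trace, deadline=2, window=0)
predictive = timely_throughput(trace, deadline=2, window=3)
```

Under bursty load the reactive server misses deadlines; pre-serving predicted packets smooths the bursts, so `predictive` is at least `reactive` on the same trace.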
Prioritized Task Scheduling in Fog Computing
Cloud computing is an environment in which virtual resources are shared among many users over a network. A user of Cloud services is billed according to the pay-per-use model associated with this environment. To keep this bill to a minimum, efficient resource allocation is of great importance. To handle the many requests clients send to the Cloud, tasks need to be processed according to the SLAs defined by the client. The daily increase in the usage of Cloud services has introduced delays in the transmission of requests, and these delays can cause clients to wait for task responses beyond the assigned deadline. To overcome these concerns, Fog computing is helpful, as it is physically placed closer to the clients. The Fog layer sits between the client and the Cloud layer, and it greatly reduces the delay in transmitting requests, processing them, and sending responses back to the client. This paper discusses an algorithm that schedules tasks by calculating the priority of each task in the Fog layer. Tasks with higher priority are processed first so that deadlines are met, which makes the algorithm practical and efficient.
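The abstract does not give the paper's priority formula, so the sketch below uses least slack (time to deadline minus service time) as one plausible stand-in; the task names and fields are likewise illustrative. It shows deadline-aware priority scheduling on a single fog node:

```python
import heapq

def priority(task, now):
    """Toy priority: smaller slack means the task is served earlier.
    (An illustrative rule, not the paper's actual formula.)"""
    return task["deadline"] - now - task["length"]

def fog_schedule(tasks, now=0):
    """Process tasks on one fog node in priority order and report
    which ones finish before their deadline."""
    heap = [(priority(t, now), i, t) for i, t in enumerate(tasks)]
    heapq.heapify(heap)                   # index i breaks priority ties
    t, met, missed = now, [], []
    while heap:
        _, _, task = heapq.heappop(heap)
        t += task["length"]               # run the task to completion
        (met if t <= task["deadline"] else missed).append(task["name"])
    return met, missed

tasks = [
    {"name": "sensor-agg",  "length": 2, "deadline": 4},
    {"name": "video-frame", "length": 3, "deadline": 12},
    {"name": "alarm",       "length": 1, "deadline": 2},
]
met, missed = fog_schedule(tasks)         # alarm runs first, all meet deadlines
```

Served in arrival order, `alarm` (deadline 2) would miss; the slack-based priority reorders the queue so that all three tasks complete on time.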