Dynamic Multilevel Graph Visualization
We adapt multilevel, force-directed graph layout techniques to visualizing
dynamic graphs in which vertices and edges are added and removed in an online
fashion (i.e., unpredictably). We maintain multiple levels of coarseness using
a dynamic, randomized coarsening algorithm. To ensure the vertices follow
smooth trajectories, we employ dynamics simulation techniques, treating the
vertices as point particles. We simulate fine and coarse levels of the graph
simultaneously, coupling the dynamics of adjacent levels. Projection from
coarser to finer levels is adaptive, with the projection determined by an
affine transformation that evolves alongside the graph layouts. The result is a
dynamic graph visualizer that quickly and smoothly adapts to changes in a
graph.
Comment: 21 pages
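The particle-dynamics idea can be illustrated compactly. Below is a minimal sketch, not the paper's implementation: vertices are point particles integrated with damped velocities under spring forces along edges and pairwise repulsion. All names, constants, and the quadratic repulsion loop are illustrative assumptions (a real multilevel code approximates repulsion via the coarser levels).

```python
def layout_step(pos, vel, edges, dt=0.02, k_spring=1.0, k_repel=0.5, damping=0.9):
    """One simulation step: springs along edges, repulsion between all pairs.

    pos and vel map each vertex to a mutable [x, y] list; edges is a list of
    (u, v) pairs. Illustrative only; not the paper's actual algorithm.
    """
    force = {v: [0.0, 0.0] for v in pos}
    # Attractive spring force along each edge (rest length 1).
    for u, v in edges:
        dx, dy = pos[v][0] - pos[u][0], pos[v][1] - pos[u][1]
        dist = max((dx * dx + dy * dy) ** 0.5, 1e-9)
        f = k_spring * (dist - 1.0)
        fx, fy = f * dx / dist, f * dy / dist
        force[u][0] += fx; force[u][1] += fy
        force[v][0] -= fx; force[v][1] -= fy
    # Pairwise repulsion (quadratic cost here; multilevel methods avoid this).
    vs = list(pos)
    for i, u in enumerate(vs):
        for v in vs[i + 1:]:
            dx, dy = pos[v][0] - pos[u][0], pos[v][1] - pos[u][1]
            d2 = max(dx * dx + dy * dy, 1e-6)
            f = k_repel / d2
            d = d2 ** 0.5
            fx, fy = f * dx / d, f * dy / d
            force[u][0] -= fx; force[u][1] -= fy
            force[v][0] += fx; force[v][1] += fy
    # Damped velocity integration keeps trajectories smooth as the graph changes.
    for v in pos:
        vel[v][0] = damping * (vel[v][0] + dt * force[v][0])
        vel[v][1] = damping * (vel[v][1] + dt * force[v][1])
        pos[v][0] += dt * vel[v][0]
        pos[v][1] += dt * vel[v][1]
```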
Bulk Scheduling with the DIANA Scheduler
Results from the research and development of a Data Intensive and Network
Aware (DIANA) scheduling engine, to be used primarily for data intensive
sciences such as physics analysis, are described. In Grid analyses, tasks can
involve thousands of computing, data handling, and network resources. The
central problem in the scheduling of these resources is the coordinated
management of computation and data at multiple locations and not just data
replication or movement. However, this can prove to be a rather costly
operation, and efficient scheduling can be a challenge if compute and data resources
are mapped without considering network costs. We have implemented an adaptive
algorithm within the so-called DIANA Scheduler which takes into account data
location and size, network performance and computation capability in order to
enable efficient global scheduling. DIANA is a performance-aware and
economy-guided Meta Scheduler. It iteratively allocates each job to the site
that is most likely to produce the best performance as well as optimizing the
global queue for any remaining jobs. Therefore it is equally suitable whether a
single job is being submitted or bulk scheduling is being performed. Results
indicate that considerable performance improvements can be gained by adopting
the DIANA scheduling approach.Comment: 12 pages, 11 figures. To be published in the IEEE Transactions in
Nuclear Science, IEEE Press. 200
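The iterative allocation step can be sketched as a cost-based site selection: score each candidate site by estimated data transfer time plus computation time, and assign the job to the cheapest site. The cost model, field names, and load term below are illustrative assumptions, not DIANA's actual interface.

```python
# Hypothetical cost model in the spirit of DIANA: data location/size, network
# performance, and compute capability all enter the per-site score.
def estimate_cost(job, site):
    # Data transfer: bytes to move over measured bandwidth, plus latency.
    transfer = job["input_bytes"] / site["bandwidth_Bps"] + site["latency_s"]
    # Computation: job work over site capability, scaled by current queue load.
    compute = job["work_units"] / site["capability"] * (1 + site["queue_len"])
    return transfer + compute

def schedule(jobs, sites):
    """Iteratively allocate each job to the site with the lowest estimated cost."""
    plan = {}
    for job in jobs:
        best = min(sites, key=lambda s: estimate_cost(job, s))
        plan[job["id"]] = best["name"]
        best["queue_len"] += 1  # account for the newly queued job
    return plan
```

Because each assignment updates the chosen site's load, the same loop serves both single-job submission and bulk scheduling, matching the abstract's claim.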
Packet loss optimization in router forwarding tasks based on the particle swarm algorithm
Software-defined networks (SDNs) are computer networks whose parameters and devices are configured by software. Recently, artificial intelligence techniques have been used in SDN programs for various applications, including packet classification and forwarding according to quality of service (QoS) requirements. The main problem is that packets from different applications passing through computer networks have different QoS criteria. To meet the packets' requirements, routers classify these packets, add them to multiple weighted queue systems, and forward them according to their priorities. Multiple queue systems in routers usually use a class-based weighted round-robin (CBWRR) scheduling algorithm with pre-configured fixed weights for each priority queue. The problem is that the intensity of traffic overall, and of each packet class, changes over time. Therefore, in this work we suggest using the particle swarm optimization algorithm to find the optimal weights for the weighted fair round-robin (WFRR) algorithm by considering the variable densities of the traffic. This work presents a framework that simulates router operations by determining the weights, scheduling packets, and forwarding them. The proposed weight-optimization algorithm is compared with the conventional WFRR algorithm, and the results show that particle swarm optimization for the weighted round-robin algorithm is more efficient than WFRR, especially under high-intensity traffic. Moreover, the average packet-loss ratio does not exceed 7%, and the proposed algorithms outperform the conventional CBWRR algorithm and the results of related work.
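A sketch of the optimization loop follows. In the paper's framework the fitness would come from simulating WFRR forwarding with the candidate weights and measuring packet loss; the placeholder fitness below (mismatch between service share and traffic share) is an assumption for illustration.

```python
import random

def fitness(weights, traffic=(0.5, 0.3, 0.2)):
    # Placeholder objective: penalize mismatch between each queue's service
    # share and its traffic share. Replace with simulated packet loss.
    total = sum(weights)
    return sum((w / total - t) ** 2 for w, t in zip(weights, traffic))

def pso(n_queues=3, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Standard particle swarm optimization over per-queue weights in (0, 1]."""
    pos = [[random.uniform(0.1, 1.0) for _ in range(n_queues)]
           for _ in range(n_particles)]
    vel = [[0.0] * n_queues for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    gbest = min(pbest, key=fitness)[:]          # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_queues):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Clamp weights to a valid positive range.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], 0.01), 1.0)
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) < fitness(gbest):
                    gbest = pbest[i][:]
    return gbest

print(pso())  # near-optimal queue weights under the placeholder objective
```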
Maximising microprocessor reliability through game theory and heuristics
PhD Thesis
Embedded Systems are becoming ever more pervasive in our society, with most
routine daily tasks now involving their use in some form and the market predicted
to be worth USD 220 billion, a rise of 300%, by 2018. Consumers expect
more functionality with each design iteration, but for no detriment in perceived
performance. These devices can range from simple low-cost chips to expensive
and complex systems and are a major cost driver in the equipment design
phase. For more than 35 years, designers have kept pace with Moore's Law, but
as device size approaches the atomic limit, layouts are becoming so complicated
that current scheduling techniques are also reaching their limit, meaning that
more resource must be reserved to manage and deliver reliable operation. With
the advent of many-core systems and further sources of unpredictability such as
changeable power supplies and energy harvesting, this reservation of capability
may become so large that systems will not be operating at their peak efficiency.
These complex systems can be controlled through many techniques, with
jobs scheduled either offline prior to execution beginning or online at each time
or event change. Increased processing power and job types mean that current
online scheduling methods that employ exhaustive search techniques will not
be suitable to define schedules for such enigmatic task lists and that new techniques
using statistic-based methods must be investigated to preserve Quality
of Service.
A new paradigm of scheduling through complex heuristics is one way to
administer these next levels of processor effectively and allow the use of more
simple devices in complex systems, thus reducing unit cost while retaining
reliability, a key goal identified by the International Technology Roadmap for
Semiconductors for Embedded Systems in Critical Environments. These changes
would be beneficial in terms of cost reduction and system flexibility within the
next generation of device. This thesis investigates the use of heuristics and
statistical methods in the operation of real-time systems, with the feasibility of
Game Theory and Statistical Process Control for the successful supervision of
high-load and critical jobs investigated. Heuristics are identified as an effective
method of controlling complex real-time issues, with two-person non-cooperative
games delivering Nash-optimal solutions where these exist. The simplified algorithms for creating and solving Game Theory events allow for their use within
small embedded RISC devices and an increase in reliability for systems operating
at the apex of their limits. Within this Thesis, Heuristic and Game Theoretic
algorithms for a variety of real-time scenarios are postulated, investigated, refined, and tested against existing schedule types, initially through MATLAB
simulation before testing on an ARM Cortex M3 architecture functioning as a
simplified automotive Electronic Control Unit.
Doctoral Teaching Account from the EPSRC
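To make the game-theoretic component concrete, here is a minimal sketch of finding pure-strategy Nash equilibria in a two-person non-cooperative game by best-response checking. The payoff matrices are illustrative, not taken from the thesis, where the "players" would encode scheduling decisions.

```python
def pure_nash(payoff_a, payoff_b):
    """Return (row, col) cells where neither player can improve unilaterally."""
    rows, cols = len(payoff_a), len(payoff_a[0])
    equilibria = []
    for i in range(rows):
        for j in range(cols):
            # Row player cannot do better by switching rows at column j,
            # and column player cannot do better by switching columns at row i.
            row_best = all(payoff_a[i][j] >= payoff_a[k][j] for k in range(rows))
            col_best = all(payoff_b[i][j] >= payoff_b[i][k] for k in range(cols))
            if row_best and col_best:
                equilibria.append((i, j))
    return equilibria

# Hypothetical example: each choice is a scheduling policy; payoffs are utilities.
A = [[3, 1], [0, 2]]
B = [[3, 1], [0, 2]]
print(pure_nash(A, B))  # [(0, 0), (1, 1)] for this coordination game
```

The exhaustive check is quadratic in the strategy space, which is what keeps it feasible on small embedded RISC devices when the per-decision game stays small.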
C-MOS array design techniques: SUMC multiprocessor system study
The current capabilities of LSI techniques for speed and reliability, plus the possibilities of assembling large configurations of LSI logic and storage elements, have demanded the study of multiprocessors and multiprocessing techniques, problems, and potentialities. Three previous systems studies for a space ultrareliable modular computer multiprocessing system are evaluated, and a new multiprocessing system is proposed that is flexibly configured with up to four central processors, four I/O processors, and 16 main memory units, plus auxiliary memory and peripheral devices. This multiprocessor system features a multilevel interrupt, qualified S/360 compatibility for ground-based generation of programs, virtual memory management of a storage hierarchy through I/O processors, and multiport access to multiple and shared memory units
A Bulk-Parallel Priority Queue in External Memory with STXXL
We propose the design and an implementation of a bulk-parallel external
memory priority queue to take advantage of both shared-memory parallelism and
high external memory transfer speeds to parallel disks. To achieve higher
performance by decoupling item insertions and extractions, we offer two
parallelization interfaces: one using "bulk" sequences, the other by defining
"limit" items. In the design, we discuss how to parallelize insertions using
multiple heaps, and how to calculate a dynamic prediction sequence to prefetch
blocks and apply parallel multiway merge for extraction. Our experimental
results show that in the selected benchmarks the priority queue reaches 75% of
the full parallel I/O bandwidth of rotational disks and 65% of SSDs, or the
speed of sorting in external memory when bounded by computation.
Comment: extended version of SEA'15 conference paper
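The bulk/limit interface can be sketched as follows. This is a simplified, hypothetical Python analogue: STXXL itself is a C++ library and its actual priority queue differs in structure and API, notably in using external memory and a parallel multiway merge for extraction.

```python
import heapq

class BulkPQ:
    """Toy analogue of a bulk-parallel min-priority queue (not STXXL's API)."""

    def __init__(self, n_heaps=4):
        self.heaps = [[] for _ in range(n_heaps)]  # stand-ins for insertion heaps
        self.merged = []                           # extraction sequence
        self._next = 0

    def bulk_push(self, items):
        # Distribute a bulk of insertions round-robin across insertion heaps;
        # in the real design each worker thread would own its own heap.
        for x in items:
            heapq.heappush(self.heaps[self._next], x)
            self._next = (self._next + 1) % len(self.heaps)

    def limit(self, bound):
        # Make all items <= bound extractable: move them out of the insertion
        # heaps into the extraction sequence (done by parallel multiway merge
        # in the actual design).
        ready = []
        for h in self.heaps:
            while h and h[0] <= bound:
                ready.append(heapq.heappop(h))
        self.merged = list(heapq.merge(self.merged, sorted(ready)))

    def pop(self):
        return self.merged.pop(0) if self.merged else None

pq = BulkPQ()
pq.bulk_push([5, 1, 9, 3])
pq.limit(4)
print(pq.pop(), pq.pop())  # 1 3
```

The point of the "limit" item is exactly this decoupling: extractions below the limit need no synchronization with concurrent insertions above it.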
A study of user level scheduling and software caching in the educational interactive system
DESIGN AND EVALUATION OF RESOURCE ALLOCATION AND JOB SCHEDULING ALGORITHMS ON COMPUTATIONAL GRIDS
Grid, an infrastructure for resource sharing, has shown its importance in
many scientific applications requiring tremendously high computational power. Grid
computing enables sharing, selection and aggregation of resources for solving
complex and large-scale scientific problems. Grid computing, whose resources are
distributed, heterogeneous and dynamic in nature, introduces a number of fascinating
issues in resource management. Grid scheduling is the key issue in a grid
environment: the system must meet the functional requirements of heterogeneous
domains such as users, applications, and the network, which are sometimes
conflicting in nature.
Moreover, the system must satisfy non-functional requirements like reliability,
efficiency, performance, effective resource utilization, and scalability. Thus, the overall
aim of this research is to introduce new grid scheduling algorithms for resource
allocation as well as for job scheduling for enabling a highly efficient and effective
utilization of the resources in executing various applications.
The four prime aspects of this work are: firstly, a model of the grid scheduling
problem for a dynamic grid computing environment; secondly, development of a new
web-based simulator (SyedWSim), enabling grid users to conduct statistical
analysis of grid workload traces and providing a realistic basis for experimentation in
resource allocation and job scheduling algorithms on a grid; thirdly, proposal of a
new grid resource allocation method that achieves optimal computational cost
relative to other allocation methods, evaluated on synthetic and real workload
traces; and finally, proposal of new job scheduling algorithms of optimal
performance with respect to parameters such as waiting time, turnaround time,
response time, bounded slowdown, completion time, and stretch time. The aim is
not only to develop new algorithms, but also to
evaluate them on an experimental computational grid, using synthetic and real
workload traces, along with the other existing job scheduling algorithms.
Experimental evaluation confirmed that the proposed grid scheduling algorithms
possess a high degree of optimality in performance, efficiency, and scalability.
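The evaluation metrics named above can be stated precisely. The sketch below uses common workload-modeling definitions computed from per-job submit, start, and finish times; the dissertation's exact formulas are an assumption here.

```python
def metrics(job, tau=10.0):
    """Per-job scheduling metrics; tau is the bounded-slowdown runtime floor."""
    wait = job["start"] - job["submit"]
    run = job["finish"] - job["start"]
    turnaround = wait + run                       # completion/response time
    stretch = turnaround / max(run, 1e-9)         # turnaround vs. pure runtime
    bounded = max(turnaround / max(run, tau), 1)  # damps the effect of tiny jobs
    return {"waiting": wait, "turnaround": turnaround,
            "stretch": stretch, "bounded_slowdown": bounded}

print(metrics({"submit": 0.0, "start": 5.0, "finish": 8.0}))
# waiting 5.0, turnaround 8.0, stretch ~2.67, bounded_slowdown 1.0 (tau = 10)
```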