A Classification and Survey of Computer System Performance Evaluation Techniques
Magic-State Functional Units: Mapping and Scheduling Multi-Level Distillation Circuits for Fault-Tolerant Quantum Architectures
Quantum computers have recently made great strides and are on a long-term
path towards useful fault-tolerant computation. A dominant overhead in
fault-tolerant quantum computation is the production of high-fidelity encoded
qubits, called magic states, which enable reliable error-corrected computation.
We present the first detailed designs of hardware functional units that
implement space-time optimized magic-state factories for surface code
error-corrected machines. Interactions among distant qubits require surface
code braids (physical pathways on chip) which must be routed. Magic-state
factories are circuits comprised of a complex set of braids that is more
difficult to route than quantum circuits considered in previous work [1]. This
paper explores the impact of scheduling techniques, such as gate reordering and
qubit renaming, and we propose two novel mapping techniques: braid repulsion
and dipole moment braid rotation. We combine these techniques with graph
partitioning and community detection algorithms, and further introduce a
stitching algorithm for mapping subgraphs onto a physical machine. Our results
show a factor of 5.64 reduction in space-time volume compared to the best-known
previous designs for magic-state factories.
Comment: 13 pages, 10 figures
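The mapping pipeline sketched above combines graph partitioning and community detection to place strongly interacting qubits near each other. A toy greedy bisection illustrates the idea; the function and the interaction weights below are hypothetical stand-ins, not the paper's actual algorithms:

```python
def greedy_bisect(nodes, weight):
    """Split `nodes` into two balanced halves, greedily placing each node in
    the half where its total interaction weight is larger.
    `weight` maps frozenset({u, v}) to the interaction strength of u and v."""
    half = (len(nodes) + 1) // 2
    part_a, part_b = set(), set()
    for v in nodes:
        gain_a = sum(weight.get(frozenset((v, u)), 0) for u in part_a)
        gain_b = sum(weight.get(frozenset((v, u)), 0) for u in part_b)
        if len(part_b) >= half:
            part_a.add(v)                 # b is full, a must take v
        elif len(part_a) >= half or gain_b > gain_a:
            part_b.add(v)
        else:
            part_a.add(v)
    return part_a, part_b
```

Each half would then be mapped onto a contiguous chip region, with a stitching step joining the subgraph mappings, in the spirit of the algorithm the abstract describes.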
Taming Numbers and Durations in the Model Checking Integrated Planning System
The Model Checking Integrated Planning System (MIPS) is a temporal least
commitment heuristic search planner based on a flexible object-oriented
workbench architecture. Its design clearly separates explicit and symbolic
directed exploration algorithms from the set of on-line and off-line computed
estimates and associated data structures. MIPS has shown distinguished
performance in the last two international planning competitions. In the last
event the description language was extended from pure propositional planning to
include numerical state variables, action durations, and plan quality objective
functions. Plans were no longer sequences of actions but time-stamped
schedules. As a participant of the fully automated track of the competition,
MIPS has proven to be a general system; in each track and every benchmark
domain it efficiently computed plans of remarkable quality. This article
introduces and analyzes the most important algorithmic novelties that were
necessary to tackle the new layers of expressiveness in the benchmark problems
and to achieve a high level of performance. The extensions include critical
path analysis of sequentially generated plans to generate corresponding optimal
parallel plans. The linear time algorithm to compute the parallel plan bypasses
known NP hardness results for partial ordering by scheduling plans with respect
to the set of actions and the imposed precedence relations. The efficiency of
this algorithm also allows us to improve the exploration guidance: for each
encountered planning state the corresponding approximate sequential plan is
scheduled. One major strength of MIPS is its static analysis phase that grounds
and simplifies parameterized predicates, functions and operators, that infers
knowledge to minimize the state description length, and that detects domain
object symmetries. The latter aspect is analyzed in detail. MIPS has been
developed to serve as a complete and optimal state space planner, with
admissible estimates, exploration engines and branching cuts. In the
competition version, however, certain performance compromises had to be made,
including floating-point arithmetic, weighted heuristic search exploration
according to an inadmissible estimate, and parameterized optimization.
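The critical-path step described above can be sketched as follows. Action names, durations, and the `precedes` relation are illustrative, not MIPS's internal representation. Because the sequential plan order is already a valid topological order of the precedence relation, a single pass suffices, giving the linear-time behaviour the abstract mentions:

```python
def parallelize(plan, duration, precedes):
    """plan: actions in sequential order; precedes: set of (a, b) pairs
    meaning a must finish before b starts.
    Returns each action's earliest start time in the parallel plan."""
    preds = {b: [] for b in plan}
    for a, b in precedes:
        preds[b].append(a)
    start = {}
    for action in plan:  # sequential order is a topological order
        start[action] = max(
            (start[a] + duration[a] for a in preds[action]), default=0
        )
    return start
```

The makespan of the resulting parallel plan is `max(start[a] + duration[a] for a in plan)`, which can be strictly smaller than the sequential plan's length when actions without mutual precedence run concurrently.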
AiiDA: Automated Interactive Infrastructure and Database for Computational Science
Computational science has seen in the last decades a spectacular rise in the
scope, breadth, and depth of its efforts. Notwithstanding this prevalence and
impact, it is often still performed using the renaissance model of individual
artisans gathered in a workshop, under the guidance of an established
practitioner. Great benefits could follow instead from adopting concepts and
tools coming from computer science to manage, preserve, and share these
computational efforts. Here we illustrate our paradigm sustaining this vision,
based on the four pillars of Automation, Data, Environment, and Sharing. We
then discuss its implementation in the open-source AiiDA platform
(http://www.aiida.net), which has been tuned first to the demands of
computational materials science. AiiDA's design is based on directed acyclic
graphs to track the provenance of data and calculations, and ensure
preservation and searchability. Remote computational resources are managed
transparently, and automation is coupled with data storage to ensure
reproducibility. Last, complex sequences of calculations can be encoded into
scientific workflows. We believe that AiiDA's design and its sharing
capabilities will encourage the creation of social ecosystems to disseminate
codes, data, and scientific workflows.
Comment: 30 pages, 7 figures
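The provenance design described above can be sketched with a minimal directed acyclic graph; the node class and function names below are illustrative, not AiiDA's actual API:

```python
class Node:
    """A provenance node: data or calculation. Edges point from the inputs
    that produced this node to the node itself, forming a DAG."""
    def __init__(self, label, inputs=()):
        self.label = label
        self.inputs = list(inputs)

def provenance(node):
    """Return the labels of all ancestors reachable through input edges,
    i.e. the complete history needed to reproduce `node`."""
    seen, stack = set(), [node]
    while stack:
        for parent in stack.pop().inputs:
            if parent.label not in seen:
                seen.add(parent.label)
                stack.append(parent)
    return seen
```

Because every result carries its full ancestor set, the graph supports exactly the preservation and searchability the abstract emphasizes: any output can be traced back to the inputs and calculations that generated it.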
Design of testbed and emulation tools
The research summarized here was concerned with the design of testbed and emulation tools suitable to assist in projecting, with reasonable accuracy, the expected performance of highly concurrent computing systems on large, complete applications. Such testbed and emulation tools are intended for the eventual use of those exploring new concurrent system architectures and organizations, either as users or as designers of such systems. While a range of alternatives was considered, a software-based set of hierarchical tools was chosen to provide maximum flexibility, to ease moving to new computers as technology improves, and to take advantage of the inherent reliability and availability of commercially available computing systems.
Routing on the Channel Dependency Graph: A New Approach to Deadlock-Free, Destination-Based, High-Performance Routing for Lossless Interconnection Networks
In the pursuit of ever-increasing compute power, and with Moore's law slowly coming to an end, high-performance computing started to scale out to larger systems. Alongside the increasing system size, the interconnection network is growing to accommodate and connect tens of thousands of compute nodes. These networks have a large influence on the total cost, application performance, energy consumption, and overall system efficiency of the supercomputer. Unfortunately, state-of-the-art routing algorithms, which define the packet paths through the network, do not utilize this important resource efficiently. Topology-aware routing algorithms become increasingly inapplicable due to irregular topologies, which are either irregular by design or, more often, the result of hardware failures. Exchanging faulty network components potentially requires whole-system downtime, further increasing the cost of the failure. This management approach becomes more and more impractical due to the scale of today's networks and the accompanying steady decrease of the mean time between failures. Alternative methods of operating and maintaining these high-performance interconnects, both in terms of hardware and software management, are necessary to mitigate the negative effects experienced by scientific applications executed on the supercomputer. However, existing topology-agnostic routing algorithms either suffer from poor load balancing or are not bounded in the number of virtual channels needed to resolve deadlocks in the routing tables.
The fail-in-place strategy, a well-established method for storage systems in which only critical component failures are repaired, is a feasible solution for current and future HPC interconnects as well as other large-scale installations such as data center networks. However, an appropriate combination of topology and routing algorithm is required to minimize the throughput degradation for the entire system. This thesis contributes a network simulation toolchain to facilitate the process of finding a suitable combination, either during system design or while the system is in operation. On top of this foundation, a key contribution is a novel scheduling-aware routing, which reduces fault-induced throughput degradation while improving overall network utilization. The scheduling-aware routing performs frequent property-preserving routing updates to optimize the path balancing for simultaneously running batch jobs. The increased deployment of lossless interconnection networks, in conjunction with fail-in-place modes of operation and topology-agnostic, scheduling-aware routing algorithms, necessitates new solutions to the routing-deadlock problem. Therefore, this thesis further advances the state of the art by introducing a novel concept of routing on the channel dependency graph, which allows the design of a universally applicable destination-based routing capable of optimizing the path balancing without exceeding a given number of virtual channels, which are a common hardware limitation. This disruptive innovation enables implicit deadlock avoidance during path calculation, instead of solving both problems separately as all previous solutions do.
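The deadlock criterion underlying the channel dependency graph can be sketched as follows: represent each channel as a vertex, add an edge whenever a packet may hold one channel while requesting another, and require the resulting graph to be acyclic (the classic Dally-Seitz condition for deadlock freedom). The depth-first cycle check below is a generic helper, not the thesis's routing algorithm:

```python
def has_cycle(deps):
    """deps maps a channel to the channels a packet may wait on while
    holding it. A cycle in this graph means the routing can deadlock."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}

    def visit(c):
        color[c] = GRAY                  # on the current DFS path
        for nxt in deps.get(c, ()):
            state = color.get(nxt, WHITE)
            if state == GRAY or (state == WHITE and visit(nxt)):
                return True              # back edge: dependency cycle
        color[c] = BLACK                 # fully explored, no cycle through c
        return False

    return any(color.get(c, WHITE) == WHITE and visit(c) for c in deps)
```

Routing on the channel dependency graph, as the abstract describes it, computes paths while keeping this graph acyclic, so deadlock avoidance falls out of path calculation rather than being patched in afterwards with extra virtual channels.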
Maintenance optimization of high voltage substation model
A real system from practice is selected for the optimization in this paper. We describe the real scheme of a
high voltage (HV) substation in different work states. A model scheme of the 22 kV HV substation is demonstrated within the
paper. The scheme serves as the input model for the maintenance optimization. The input reliability and cost parameters
of all components are given: the preventive and corrective maintenance costs, the actual maintenance period (being
optimized), the failure rate, and the mean time to repair (MTTR).
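As a toy illustration of maintenance-period optimization (a textbook periodic-maintenance cost model, not the paper's substation model), assume a Weibull cumulative hazard `(T/eta)**beta`: the expected cost per unit time is `C(T) = (C_p + C_c * (T/eta)**beta) / T`, which has a finite minimizer when `beta > 1` (wear-out). All parameter values below are invented:

```python
def cost_rate(T, c_p, c_c, eta, beta):
    """Expected maintenance cost per unit time for preventive period T:
    one preventive action per period plus the expected corrective cost."""
    return (c_p + c_c * (T / eta) ** beta) / T

def best_period(c_p, c_c, eta, beta, candidates):
    """Pick the candidate period with the lowest cost rate (grid search)."""
    return min(candidates, key=lambda T: cost_rate(T, c_p, c_c, eta, beta))
```

For `beta = 2` the optimum has the closed form `T* = eta * sqrt(c_p / c_c)`, which a grid search over candidate periods reproduces.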
Managing Communication Latency-Hiding at Runtime for Parallel Programming Languages and Libraries
This work introduces a runtime model for managing communication with support
for latency-hiding. The model enables non-computer science researchers to
exploit communication latency-hiding techniques seamlessly. For compiled
languages, it is often possible to create efficient schedules for
communication, but this is not the case for interpreted languages. By
maintaining data dependencies between scheduled operations, it is possible to
aggressively initiate communication and lazily evaluate tasks to allow maximal
time for the communication to finish before entering a wait state. We implement
a heuristic of this model in DistNumPy, an auto-parallelizing version of
numerical Python that allows sequential NumPy programs to run on distributed
memory architectures. Furthermore, we present performance comparisons for eight
benchmarks with and without automatic latency-hiding. The results show that
our model reduces the time spent waiting for communication by as much as a
factor of 27, from a maximum of 54% to only 2% of the total execution time,
in a stencil application.
Comment: PREPRINT
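The scheduling idea above, initiating communication as early as possible and waiting as late as possible, can be sketched as follows; the `Comm` class and the task tuples are illustrative, not DistNumPy's internals:

```python
log = []

class Comm:
    """A non-blocking communication handle (illustrative)."""
    def __init__(self, name):
        self.name, self.done = name, False
    def start(self):                     # eager, non-blocking initiation
        log.append(("start", self.name))
    def wait(self):                      # blocking completion, deferred
        self.done = True
        log.append(("wait", self.name))

def run(tasks):
    """tasks: list of (comm_or_None, compute_fn). Start every communication
    up front, then evaluate tasks lazily, blocking only when data is needed."""
    for comm, _ in tasks:
        if comm is not None:
            comm.start()                 # all transfers are in flight now
    out = []
    for comm, fn in tasks:
        if comm is not None and not comm.done:
            comm.wait()                  # earlier local work overlapped this
        out.append(fn())
    return out
```

The gap between `start` and `wait` is where latency-hiding happens: any independent computation scheduled in between runs while the transfer is in flight.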
Quantitative risk assessment of accidental events caused by leaks leading to fires in the process industry
Risk to the safety of personnel in the process industries is normally modelled by the application of event trees, where the risk is defined as the product of event frequency and its consequences. This method is steady-state, whilst the actual event is time-dependent. For example, a gas release is an event comprising the size of the gas cloud being released, the probabilities of ignition, fire or explosion, fatality, and escalation to new releases and fire and/or explosion, all varying with time. This paper brings a new perspective on how the risk to the safety of personnel can be evaluated in a dynamic context. A new approach is presented whereby the time-dependent events and the time-dependent probability of fatality are modelled by means of an analytical computation method based on modelling different accident scenarios using a directed acyclic graph (DAG) and the Fault Tree Analysis (FTA) method. Using these methods, the modelled scenarios change, with the relevant probabilities at defined times, to configurations with the appropriate probabilities of fatality. The paper uses a realistic example from the offshore industry, where different sizes of leak have different probability characteristics. Specifically, small, medium, and large leaks are evaluated. Based on the dynamic evolution of the probability of fatality, it is concluded that the most dangerous leak is the large one. The probability of fatality caused by the leak increases very rapidly within the first 5 minutes. At the end of the 5th minute, there is approximately one order of magnitude difference between the probabilities of fatality associated with the respective leak sizes.
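As a simple numeric illustration of a time-dependent fatality probability (a toy constant-hazard model, not the paper's DAG/FTA approach), let P(t) = 1 - exp(-h * t) with a hazard h that depends on leak size; the hazard values below are invented for illustration:

```python
import math

# Hypothetical hazards per minute for each leak size (illustrative only).
HAZARD = {"small": 0.002, "medium": 0.02, "large": 0.2}

def p_fatality(size, t_minutes):
    """Time-dependent probability of fatality under a constant hazard."""
    return 1.0 - math.exp(-HAZARD[size] * t_minutes)
```

Even this toy model separates consecutive leak sizes by close to an order of magnitude at t = 5 minutes, mirroring the qualitative behaviour the abstract reports for the full analysis.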