Can One Trust Quantum Simulators?
Various fundamental phenomena of strongly-correlated quantum systems such as
high-temperature superconductivity, the fractional quantum-Hall effect, and quark
confinement are still awaiting a universally accepted explanation. The main
obstacle is the computational complexity of solving even the most simplified
theoretical models that are designed to capture the relevant quantum
correlations of the many-body system of interest. In his seminal 1982 paper
[Int. J. Theor. Phys. 21, 467], Richard Feynman suggested that such models
might be solved by "simulation" with a new type of computer whose constituent
parts are effectively governed by a desired quantum many-body dynamics.
Measurements on this engineered machine, now known as a "quantum simulator,"
would reveal some unknown or difficult-to-compute properties of a model of
interest. We argue that a useful quantum simulator must satisfy four
conditions: relevance, controllability, reliability, and efficiency. We review
the current state of the art of digital and analog quantum simulators. Whereas
so far the majority of the focus, both theoretically and experimentally, has
been on controllability of relevant models, we emphasize here the need for a
careful analysis of reliability and efficiency in the presence of
imperfections. We discuss how disorder and noise can impact these conditions,
and illustrate our concerns with novel numerical simulations of a paradigmatic
example: a disordered quantum spin chain governed by the Ising model in a
transverse magnetic field. We find that disorder can decrease the reliability
of an analog quantum simulator of this model, although large errors in local
observables are introduced only for strong levels of disorder. We conclude that
the answer to the question "Can we trust quantum simulators?" is... to some
extent.
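The paradigmatic example above can be reproduced in miniature by exact diagonalization. The sketch below builds a small transverse-field Ising chain, adds random disorder to the couplings and fields, and measures how much a local observable drifts from the clean case; the chain length, disorder strength, and choice of observable are illustrative assumptions, not the paper's actual parameters.

```python
# Minimal sketch: exact diagonalization of a disordered transverse-field
# Ising chain, H = -sum_i J_i sz_i sz_{i+1} - sum_i h_i sx_i.
# Chain length and disorder amplitude are illustrative choices.
import numpy as np

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

def site_op(op, i, n):
    """Embed a single-site operator at site i of an n-site chain."""
    mats = [I2] * n
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def ising_hamiltonian(J, h):
    """H = -sum_i J_i sz_i sz_{i+1} - sum_i h_i sx_i, open boundaries."""
    n = len(h)
    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):
        H -= J[i] * site_op(sz, i, n) @ site_op(sz, i + 1, n)
    for i in range(n):
        H -= h[i] * site_op(sx, i, n)
    return H

def local_magnetization(H, n):
    """Ground-state expectation values <sx_i> for each site i."""
    _, vecs = np.linalg.eigh(H)   # eigh returns ascending eigenvalues
    gs = vecs[:, 0]               # ground state
    return np.array([gs @ site_op(sx, i, n) @ gs for i in range(n)])

n = 6
rng = np.random.default_rng(0)
# Clean simulator: uniform couplings and fields at the critical point.
clean = local_magnetization(ising_hamiltonian(np.ones(n - 1), np.ones(n)), n)
# Imperfect simulator: couplings/fields drawn around their clean values.
Jd = 1.0 + 0.5 * rng.uniform(-1, 1, n - 1)
hd = 1.0 + 0.5 * rng.uniform(-1, 1, n)
dirty = local_magnetization(ising_hamiltonian(Jd, hd), n)
error = np.max(np.abs(clean - dirty))  # deviation in a local observable
```

Sweeping the disorder amplitude in such a toy model is one way to see the abstract's point that local observables stay reliable until the disorder becomes strong.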
Towards a Fault-tolerant, Scheduling Methodology for Safety-critical Certified Information Systems
Today, many critical information systems have safety-critical and non-safety-critical functions executed on the same platform in order to reduce design and implementation costs. The safety-critical functionality is subject to certification requirements, while the rest of the functionality does not need to be certified, or is certified to a lower level. The resulting mixed-criticality systems bring challenges in design, especially when the critical tasks are required to complete within a timing constraint. This paper studies the problem of scheduling a mixed-criticality system with fault tolerance. A fault-recovery technique called checkpointing is used, whereby a program can roll back to a recent checkpoint for re-execution when errors occur. A novel schedulability test is derived to ensure that the safety-critical tasks complete before their deadlines, and its theoretical correctness is shown.
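The trade-off behind a checkpointing-based schedulability test can be sketched with a simple cost model: more checkpoints shrink the re-execution window after a fault but add save overhead. The functions and parameter names below (C, D, m, o, r, k) are illustrative assumptions, not the paper's actual test or notation.

```python
# Hypothetical cost model: a task of WCET C is split into m equal
# checkpoint intervals, each checkpoint costing `o` to save; each of up
# to k faults rolls back to the last checkpoint, re-executing at most
# C/m plus a recovery cost r. All names are illustrative.
def wcet_with_checkpointing(C, m, o, r, k):
    """Worst-case execution time with m checkpoints under up to k faults."""
    return C + m * o + k * (C / m + r)

def schedulable(C, D, m, o, r, k):
    """True if the task still meets its deadline D in the worst case."""
    return wcet_with_checkpointing(C, m, o, r, k) <= D

# A simple sweep finds the checkpoint count balancing save overhead
# against re-execution cost for given fault parameters.
best_m = min(range(1, 11),
             key=lambda m: wcet_with_checkpointing(10.0, m, 0.2, 0.1, 2))
```

A real test must additionally account for interference from higher-priority tasks and the criticality-mode behavior of the system; the sketch only captures the single-task fault-recovery arithmetic.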
Mixed-Criticality Systems on Commercial-Off-the-Shelf Multi-Processor Systems-on-Chip
Avionics and space industries are struggling with the adoption of technologies
like multi-processor system-on-chips (MPSoCs) due to strict safety requirements.
This thesis proposes a new reference architecture for MPSoC-based mixed-criticality
systems (MCS) - i.e., systems integrating applications with different levels of criticality - which are a common use case for the aforementioned industries.
This thesis proposes a system architecture capable of guaranteeing partitioning -
in short, the property of fault containment. The architecture is based on the detection
of spatial and temporal interference, and has been named the online detection of
interference (ODIn) architecture.
Spatial partitioning requires that an application is not able to corrupt resources
used by a different application. In the architecture proposed in this thesis, spatial
partitioning is implemented using type-1 hypervisors, which allow definition of
resource partitions. An application running in a partition can only access resources
granted to that partition, therefore it cannot corrupt resources used by applications
running in other partitions.
Temporal partitioning requires that an application is not able to unexpectedly
change the execution time of other applications. In the proposed architecture, temporal partitioning has been solved using a bounded interference approach, composed of
an offline analysis phase and an online safety net.
The offline phase is based on statistical profiling, in nominal conditions, of a
metric sensitive to temporal interference, which allows definition of
a set of three thresholds:
1. the detection threshold TD;
2. the warning threshold TW ;
3. the α threshold.
Two rules of detection are defined using such thresholds:
Alarm rule When the value of the metric is above TD.
Warning rule When the value of the metric is in the warning region [TW, TD] for
more than α consecutive times.
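The two detection rules above can be sketched as a small stateful classifier: one counter tracks consecutive samples in the warning region, and any sample above TD fires an alarm immediately. The class name, threshold values, and the choice to reset the counter on an alarm are illustrative assumptions.

```python
# Sketch of ODIn's two detection rules: an alarm fires when the metric
# exceeds T_D; a warning fires when the metric stays in [T_W, T_D] for
# more than alpha consecutive samples. Thresholds below are illustrative.
class InterferenceDetector:
    def __init__(self, t_warn, t_detect, alpha):
        self.t_warn = t_warn      # warning threshold T_W
        self.t_detect = t_detect  # detection threshold T_D
        self.alpha = alpha        # consecutive-sample limit
        self.consecutive = 0      # samples spent in the warning region

    def sample(self, value):
        """Classify one metric sample as 'alarm', 'warning' or 'ok'."""
        if value > self.t_detect:
            self.consecutive = 0
            return "alarm"                      # alarm rule
        if self.t_warn <= value <= self.t_detect:
            self.consecutive += 1
            if self.consecutive > self.alpha:   # warning rule
                return "warning"
            return "ok"
        self.consecutive = 0                    # left the warning region
        return "ok"

det = InterferenceDetector(t_warn=80, t_detect=100, alpha=3)
trace = [50, 85, 90, 88, 92, 120]
events = [det.sample(v) for v in trace]
# events: ["ok", "ok", "ok", "ok", "warning", "alarm"]
```

In the actual architecture the alarm rule is enforced in hardware by the performance counters' overflow IRQ, and only the warning rule needs the software path sketched here.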
ODIn’s online safety-net exploits performance counters, available in many MPSoC architectures; such counters are configured at bootstrap to monitor the selected
metric(s), and to raise an interrupt request (IRQ) in case the metric value goes above
TD, implementing the alarm rule. The warning rule is implemented in a software detection module, which reads the value of the performance counters when the monitored
task yields control to the scheduler and resets them if there is no detection.
ODIn also uses two additional detection mechanisms:
1. a control flow check technique, based on compile-time-defined block signatures, implemented through a set of watchdog processors (WDPs), each monitoring
one partition;
2. a timeout, implemented through a system watchdog timer (SWDT), which is
able to send an external signal when the timeout is violated.
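The signature-based control flow check in mechanism 1 can be illustrated in miniature: each basic block gets a reference signature at compile time, and the watchdog recomputes the signature from what actually executed and compares. The block contents, the CRC-based signature function, and the function names are all illustrative assumptions, not the thesis's actual scheme.

```python
# Hypothetical illustration of signature-based control-flow checking:
# each basic block gets a compile-time signature; at runtime the watchdog
# recomputes the signature and flags a mismatch as a control-flow error.
# Blocks and the CRC32 signature scheme are illustrative choices.
import zlib

def block_signature(instructions):
    """Derive a signature from a block's instruction stream."""
    return zlib.crc32("".join(instructions).encode())

# "Compile time": blocks with their reference signatures.
blocks = {
    "entry": ["load r1", "add r1, r2"],
    "loop":  ["cmp r1, r3", "jne loop"],
}
expected = {name: block_signature(ins) for name, ins in blocks.items()}

def watchdog_check(name, executed_instructions):
    """Runtime: recompute the signature and compare to the reference."""
    return block_signature(executed_instructions) == expected[name]

ok = watchdog_check("entry", ["load r1", "add r1, r2"])      # intact flow
corrupted = watchdog_check("entry", ["load r1", "jmp 0x0"])  # corrupted flow
```

In hardware the comparison runs on a separate watchdog processor precisely so that a fault corrupting the monitored partition cannot also corrupt its own check.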
The recovery actions implemented in ODIn are:
• graceful degradation, to react to IRQs of WDPs monitoring non-critical applications or to warning rule violations; it temporarily stops non-critical applications
to grant resources to the critical application;
• hard recovery, to react to the SWDT, to the WDP of the critical application, or
to alarm rule violations; it causes a switch to a hot stand-by spare computer.
Experimental validation of ODIn was performed on two hardware platforms: the
dual-core ZedBoard and the quad-core Inventami board.
A space benchmark and an avionic benchmark were implemented on both platforms, composed of different modules as shown in Table 1.
Each version of the final application was evaluated through fault injection (FI)
campaigns, performed using a specifically designed FI system. There were three
types of FI campaigns:
1. HW FI, to emulate single event effects;
2. SW FI, to emulate bugs in non-critical applications;
3. artificial bug FI, to emulate a bug in non-critical applications introducing
unexpected interference on the critical application.
Experimental results show that ODIn is resilient to all considered types of faults.
Multi-core devices for safety-critical systems: a survey
Multi-core devices are envisioned to support the development of next-generation safety-critical systems, enabling the on-chip integration of functions of different criticality. This integration provides multiple potential system-level benefits such as cost, size, power, and weight reduction. However, safety certification becomes a challenge, and several fundamental safety technical requirements must be addressed, such as temporal and spatial independence, reliability, and diagnostic coverage. This survey provides a categorization and overview at different device abstraction levels (nanoscale, component, and device) of selected key research contributions that support compliance with these fundamental safety requirements. This work has been partially supported by the Spanish Ministry of Economy and Competitiveness under grant TIN2015-65316-P, the Basque Government under grant KK-2019-00035, and the HiPEAC Network of Excellence. The Spanish Ministry of Economy and Competitiveness has also partially supported Jaume Abella under a Ramon y Cajal postdoctoral fellowship (RYC-2013-14717).