Smart Power Grid Synchronization With Fault Tolerant Nonlinear Estimation
Effective real-time state estimation is essential for smart grid synchronization as electricity demand continues to grow and renewable energy resources increase their penetration into the grid. To provide a more reliable state estimation technique that addresses the problem of bad data in PMU-based power synchronization, this paper presents a novel nonlinear estimation framework to dynamically track frequency, voltage magnitudes, and phase angles. Instead of analyzing directly in the abc coordinate frame, symmetrical component transformation is employed to separate the positive, negative, and zero sequence networks. Then, Clarke's transformation is used to transform the sequence networks into the αβ stationary coordinate frame, which leads to the system model formulation. A novel fault tolerant extended Kalman filter based real-time estimation framework is proposed for smart grid synchronization in the presence of noisy and bad-data measurements. Computer simulation studies have demonstrated that the proposed fault tolerant extended Kalman filter (FTEKF) provides more accurate voltage synchronization results than the extended Kalman filter (EKF). The proposed approach has been implemented on dSPACE DS1103 and National Instruments CompactRIO hardware platforms. Computer simulation and hardware instrumentation results have shown the potential applications of FTEKF in smart grid synchronization.
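The abstract's transformation into the αβ stationary frame can be illustrated with the standard amplitude-invariant Clarke transform; this is the textbook form, a sketch rather than the paper's exact formulation (which applies it to sequence networks rather than raw phase quantities):

```python
import numpy as np

def clarke_transform(a, b, c):
    """Amplitude-invariant Clarke transform: maps three-phase quantities
    (a, b, c) into the alpha-beta-0 stationary coordinate frame."""
    alpha = (2.0 / 3.0) * (a - 0.5 * b - 0.5 * c)
    beta = (2.0 / 3.0) * (np.sqrt(3.0) / 2.0) * (b - c)
    zero = (1.0 / 3.0) * (a + b + c)
    return alpha, beta, zero
```

For a balanced set such as a = cos(θ), b = cos(θ − 2π/3), c = cos(θ + 2π/3), the result is α = cos(θ), β = sin(θ), and a zero-sequence component of 0, which is what makes the αβ frame convenient for formulating the synchronization state model.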
Source-independent quantum random number generation
Quantum random number generators can provide genuine randomness by appealing to the fundamental principles of quantum mechanics. In general, a physical generator contains two parts: a randomness source and its readout. The source is essential to the quality of the resulting random numbers; hence, it needs to be carefully calibrated and modeled to achieve information-theoretically provable randomness. However, in practice, the source is a complicated physical system, such as a light source or an atomic ensemble, and any deviations of the real-life implementation from the theoretical model may affect the randomness of the output. To close this gap, we propose a source-independent scheme for quantum random number generation in which the output randomness can be certified even when the source is uncharacterized and untrusted. In our randomness analysis, we make no assumptions about the dimension of the source; for instance, multiphoton emissions are allowed in optical implementations. Our analysis takes into account the finite-key effect under the composable security definition. In the limit of large data size, the length of the input random seed is exponentially small compared to that of the output random bits. In addition, by modifying a quantum key distribution system, we experimentally demonstrate our scheme and achieve a randomness generation rate of over bit/s.
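The short input seed mentioned in the abstract is typically consumed by a seeded randomness extractor in the post-processing stage; Toeplitz hashing is the standard choice in such schemes. The following is a minimal sketch of that post-processing idea (the extractor construction is an assumption here, not a detail given in the abstract):

```python
import numpy as np

rng = np.random.default_rng(1)  # stands in for the short uniform seed

def toeplitz_extract(raw_bits, m):
    """Seeded Toeplitz-hashing extractor: compress n partially random raw
    bits down to m nearly uniform bits, using an (n + m - 1)-bit seed that
    defines a binary m x n Toeplitz matrix."""
    n = len(raw_bits)
    seed = rng.integers(0, 2, n + m - 1)
    # T[i, j] = seed[i - j + n - 1] builds the Toeplitz matrix: each
    # diagonal is constant, so the whole matrix is fixed by the seed.
    rows = np.arange(m)[:, None]
    cols = np.arange(n)[None, :]
    T = seed[rows - cols + n - 1]
    return (T @ raw_bits) % 2
```

The output length m would be chosen from the certified min-entropy of the raw data; here it is just a parameter.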
The Art of Fault Injection
Classical Greek philosophers considered the foremost virtues to be temperance, justice, courage, and prudence. In this paper we relate these cardinal virtues to the correct methodological approaches that researchers should follow when setting up a fault injection experiment. With this work we try to understand where the "straightforward pathway" lies, in order to highlight those common methodological errors that deeply influence the coherency and the meaningfulness of fault injection experiments. Fault injection is like an art, where the success of the experiments depends on a very delicate balance between modeling, creativity, statistics, and patience.
A Touch of Evil: High-Assurance Cryptographic Hardware from Untrusted Components
The semiconductor industry is fully globalized, and integrated circuits (ICs) are commonly defined, designed, and fabricated in different premises across the world. This reduces production costs, but also exposes ICs to supply chain attacks, where insiders introduce malicious circuitry into the final products. Additionally, despite extensive post-fabrication testing, it is not uncommon for ICs with subtle fabrication errors to make it into production systems. While many systems may be able to tolerate a few byzantine components, this is not the case for cryptographic hardware, which stores and computes on confidential data. For this reason, many error and backdoor detection techniques have been proposed over the years. So far, all attempts have either been quickly circumvented or come with unrealistically high manufacturing costs and complexity.
This paper proposes Myst, a practical high-assurance architecture that uses commercial off-the-shelf (COTS) hardware and provides strong security guarantees, even in the presence of multiple malicious or faulty components. The key idea is to combine protective redundancy with modern threshold cryptographic techniques to build a system tolerant to hardware Trojans and errors. To evaluate our design, we build a Hardware Security Module that provides the highest level of assurance possible with COTS components. Specifically, we employ more than a hundred COTS secure crypto-coprocessors, verified to FIPS 140-2 Level 4 tamper-resistance standards, and use them to realize high-confidentiality random number generation, key derivation, public key decryption, and signing. Our experiments show a reasonable computational overhead (less than 1% for both decryption and signing) and an exponential increase in backdoor-tolerance as more ICs are added.
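The threshold-cryptographic idea behind such designs is that a secret is never held by any single IC: it is shared so that any t of n components can jointly operate while fewer than t learn nothing. As a minimal illustration of that primitive (not Myst's actual protocol, which performs threshold operations without ever reconstructing the key), here is Shamir's t-of-n secret sharing over a prime field:

```python
import random

P = 2**127 - 1  # a Mersenne prime used as the field modulus (illustrative choice)

def split(secret, n, t):
    """Shamir sharing: embed `secret` as the constant term of a random
    degree-(t-1) polynomial and hand out its evaluations at x = 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for coef in reversed(coeffs):  # Horner evaluation mod P
            y = (y * x + coef) % P
        shares.append((x, y))
    return shares

def combine(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

Any t shares reconstruct the secret exactly, while any t − 1 shares are statistically independent of it; this is what makes a backdoored minority of coprocessors harmless.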
Ultrareliable, fault-tolerant control systems: A conceptual description
An ultrareliable, fault-tolerant control system (UFTCS) concept is described using a systems design philosophy which allows development of system structures containing virtually no common elements. Common elements limit achievable system reliability and can cause catastrophic loss of fault-tolerant system function. The UFTCS concept provides the means for removing common system elements by permitting the elements of the system to operate as independent, uncoupled entities. Multiple versions of the application program are run on dissimilar hardware. Fault tolerance is achieved through the use of static redundancy management.
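The static redundancy management described above reduces, at its core, to accepting the majority value among outputs computed independently by dissimilar channels. A minimal voter sketch (an illustration of the general pattern, not the UFTCS implementation):

```python
from collections import Counter

def vote(outputs):
    """Static redundancy management: accept the strict-majority value
    among independently computed replica outputs; raise if no majority
    exists, signaling that the redundant channels disagree."""
    value, count = Counter(outputs).most_common(1)[0]
    if count > len(outputs) // 2:
        return value
    raise RuntimeError("no majority: redundant channels disagree")
```

For example, `vote([42, 42, 41])` masks the single faulty channel and returns 42, while three mutually disagreeing channels raise an error rather than propagate a possibly wrong value.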
Study of fault tolerant software technology for dynamic systems
The major aim of this study is to investigate the feasibility of using systems-based failure detection, isolation, and compensation (FDIC) techniques in building fault-tolerant software and extending them, whenever possible, to the domain of software fault tolerance. First, it is shown that systems-based FDIC methods can be extended to develop software error detection techniques by using system models for software modules. In particular, it is demonstrated that systems-based FDIC techniques can yield consistency checks that are easier to implement than acceptance tests based on software specifications. Next, it is shown that systems-based failure compensation techniques can be generalized to the domain of software fault tolerance in developing software error recovery procedures. Third, the feasibility of using fault-tolerant software in flight software is investigated. In particular, possible system and version instabilities, and functional performance degradation that may occur in N-version programming applications to flight software, are illustrated. Finally, a comparative analysis of N-version and recovery block techniques in the context of generic blocks in flight software is presented.
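The recovery block technique compared in this study follows a standard pattern: run a primary version, check its result with an acceptance test, and fall back to alternate versions if the check fails. A minimal sketch of that classic pattern (the module and test names below are hypothetical, for illustration only):

```python
def recovery_block(primary, alternates, acceptance_test, *args):
    """Recovery-block pattern: run the primary module; if it raises or its
    result fails the acceptance test, try each alternate version in order."""
    for module in [primary] + list(alternates):
        try:
            result = module(*args)
        except Exception:
            continue  # module failed outright; fall through to the next version
        if acceptance_test(result):
            return result
    raise RuntimeError("all versions failed the acceptance test")

# Usage: a faulty primary is caught by the acceptance test and the
# alternate's result is returned instead.
def faulty_sqrt(x):
    return -1.0            # hypothetical faulty primary version

def library_sqrt(x):
    return x ** 0.5        # independently written alternate

result = recovery_block(faulty_sqrt, [library_sqrt],
                        lambda r: r >= 0, 9.0)   # -> 3.0
```

This also illustrates the study's point about acceptance tests: the check here is a crude specification-based one, whereas a systems-based consistency check (e.g., comparing the result against a model of the module's dynamics) can be easier to state and stronger.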