Public key cryptosystems : theory, application and implementation
The determination of an individual's right to privacy is mainly a non-technical matter, but the pragmatics of providing it is the central concern of the cryptographer. This thesis has sought answers to some of the outstanding issues in cryptography, in particular some of the theoretical, application and implementation problems associated with a Public Key Cryptosystem (PKC).

The Trapdoor Knapsack (TK) PKC is capable of fast throughput, but suffers from serious disadvantages. Chapter two describes a more general approach to the TK-PKC, showing how the public key size can be significantly reduced. To overcome the security limitations, a new trapdoor is described in chapter three, based on transformations between the radix and residue number systems.

Chapter four considers how cryptography can best be applied to multi-addressed packets of information. We show how the security or communication network structure can be used to advantage, and then propose a new broadcast cryptosystem which is more generally applicable.

Copyright is traditionally used to protect the publisher from the pirate. Chapter five shows how to protect information when it is in an easily copyable digital format.

Chapter six describes the potential and pitfalls of VLSI, followed in chapter seven by a model for comparing the cost and performance of VLSI architectures. Chapter eight deals with novel architectures for all the basic arithmetic operations. These architectures provide a basic vocabulary of low-complexity VLSI arithmetic structures for a wide range of applications.

The design of a VLSI device, the Advanced Cipher Processor (ACP), to implement the RSA algorithm is described in chapter nine. Its heart is the modular exponentiation unit, which is a synthesis of the architectures in chapter eight. The ACP is capable of a throughput of 50 000 bits per second.
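The abstract itself gives no code; as a minimal sketch of the classic Merkle-Hellman trapdoor knapsack construction that TK-PKCs build on (all key parameters below are illustrative, not taken from the thesis):

```python
# Minimal Merkle-Hellman trapdoor knapsack sketch (illustrative parameters).
# Private key: a superincreasing sequence w, a modulus m > sum(w), and a
# multiplier r with gcd(r, m) == 1.
w = [2, 7, 11, 21, 42, 89, 180, 354]   # superincreasing: each term > sum of previous
m = 881                                 # modulus, > sum(w) = 706
r = 588                                 # multiplier, coprime to m

# Public key: the "hard" knapsack obtained by modular multiplication.
public = [(r * wi) % m for wi in w]

def encrypt(bits):
    """Ciphertext is the subset sum of public-key elements selected by the bits."""
    return sum(b * p for b, p in zip(bits, public))

def decrypt(c):
    """Undo the modular disguise, then greedily solve the easy superincreasing knapsack."""
    r_inv = pow(r, -1, m)               # modular inverse of r mod m (Python 3.8+)
    s = (c * r_inv) % m
    bits = []
    for wi in reversed(w):              # greedy solve, largest element first
        if s >= wi:
            bits.append(1)
            s -= wi
        else:
            bits.append(0)
    return list(reversed(bits))

msg = [0, 1, 1, 0, 1, 0, 1, 1]
assert decrypt(encrypt(msg)) == msg
```

Note that the public key is one large integer per message bit, which is exactly the key-size problem chapter two addresses; the basic scheme above is also the one later broken by lattice attacks, motivating the new trapdoor of chapter three.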
X-Rel: Energy-Efficient and Low-Overhead Approximate Reliability Framework for Error-Tolerant Applications Deployed in Critical Systems
Triple Modular Redundancy (TMR) is one of the most common techniques in
fault-tolerant systems, in which the output is determined by a majority voter.
However, the design diversity of replicated modules and/or soft errors that are
more likely to happen in the nanoscale era may affect the majority voting
scheme. Besides, the significant overheads of the TMR scheme may limit its
usage in energy consumption and area-constrained critical systems. However, for
most inherently error-resilient applications such as image processing and
vision deployed in critical systems (like autonomous vehicles and robotics),
achieving a given level of reliability has more priority than precise results.
Therefore, these applications can benefit from the approximate computing
paradigm to achieve higher energy efficiency and a lower area. This paper
proposes an energy-efficient approximate reliability (X-Rel) framework to
overcome the aforementioned challenges of the TMR systems and get the full
potential of approximate computing without sacrificing the desired reliability
constraint and output quality. The X-Rel framework relies on relaxing the
precision of the voter based on a systematical error bounding method that
leverages user-defined quality and reliability constraints. Afterward, the size
of the achieved voter is used to approximate the TMR modules such that the
overall area and energy consumption are minimized. The effectiveness of
employing the proposed X-Rel technique in a TMR structure, for different
quality constraints as well as with various reliability bounds are evaluated in
a 15-nm FinFET technology. The results of the X-Rel voter show delay, area, and
energy consumption reductions of up to 86%, 87%, and 98%, respectively, when
compared to those of the state-of-the-art approximate TMR voters.
Comment: This paper has been published in IEEE Transactions on Very Large Scale Integration (VLSI) Systems.
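The X-Rel framework itself is not reproduced here; as a sketch of the baseline bitwise TMR majority voter and of the precision-relaxation idea the abstract describes (the mask width and values are illustrative, not from the paper):

```python
# Baseline TMR bitwise majority voter, plus a sketch of voter-precision
# relaxation (drop_bits is an illustrative parameter, not X-Rel's actual method).
def majority(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 majority: each output bit follows at least two inputs."""
    return (a & b) | (a & c) | (b & c)

def relaxed_majority(a: int, b: int, c: int, drop_bits: int) -> int:
    """Vote only on the high-order bits; low-order bits are passed through
    from one module. This shrinks the voter at the cost of a bounded error."""
    mask = ~((1 << drop_bits) - 1)
    return majority(a & mask, b & mask, c & mask) | (a & ~mask)

# One module (here b) suffers a soft error in a low bit; the voter masks it.
a, b, c = 0b10110100, 0b10110110, 0b10110100
assert majority(a, b, c) == 0b10110100
```

The interesting trade-off, which X-Rel formalizes, is choosing how many low-order bits can be left unvoted while still meeting a user-defined quality and reliability bound; the smaller voter then licenses approximating the replicated modules themselves.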
Spatial-photonic Boltzmann machines: low-rank combinatorial optimization and statistical learning by spatial light modulation
The spatial-photonic Ising machine (SPIM) [D. Pierangeli et al., Phys. Rev.
Lett. 122, 213902 (2019)] is a promising optical architecture utilizing spatial
light modulation for solving large-scale combinatorial optimization problems
efficiently. However, the SPIM can accommodate Ising problems with only
rank-one interaction matrices, which limits its applicability to various
real-world problems. In this Letter, we propose a new computing model for the
SPIM that can accommodate any Ising problem without changing its optical
implementation. The proposed model is particularly efficient for Ising problems
with low-rank interaction matrices, such as knapsack problems. Moreover, the
model acquires learning ability and can thus be termed a spatial-photonic
Boltzmann machine (SPBM). We demonstrate that learning, classification, and
sampling of the MNIST handwritten digit images are achieved efficiently using
SPBMs with low-rank interactions. Thus, the proposed SPBM model exhibits higher
practical applicability to various problems of combinatorial optimization and
statistical learning, without losing the scalability inherent in the SPIM
architecture.
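The rank structure the abstract exploits can be made concrete with a short numerical sketch (sizes and the rank value below are illustrative): for an Ising interaction matrix J that is a sum of a few outer products, the energy of a spin configuration can be evaluated from the low-rank factors directly, which is what a spatial-light-modulation machine effectively computes.

```python
import numpy as np

# Ising energy with a low-rank interaction matrix J = xi^T xi (rank <= 3 here),
# the structure the SPIM/SPBM model exploits. Sizes are illustrative.
rng = np.random.default_rng(0)
n, rank = 64, 3
xi = rng.standard_normal((rank, n))      # a few amplitude patterns
s = rng.choice([-1, 1], size=n)          # spin configuration

# Naive O(n^2) evaluation via the full interaction matrix ...
J = xi.T @ xi                            # n x n matrix of rank <= 3
E_full = -s @ J @ s

# ... versus the O(rank * n) evaluation using the factors alone:
E_lowrank = -np.sum((xi @ s) ** 2)

assert np.isclose(E_full, E_lowrank)
```

The identity used is simply s^T (xi^T xi) s = ||xi s||^2, so a rank-K problem needs only K inner products per energy evaluation rather than a full n-by-n matrix-vector product.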
Intertextuality in scientific publications
The IEEE bibliographic database contains a number of confirmed duplications, with indications of the copied originals. This corpus is used to test an authorship-attribution method. Combining intertextual distance with a sliding window and various classification techniques makes it possible to identify these duplications with a very low risk of error. This experiment also shows that several factors blur the identity of the scientific author, in particular research collectives of variable composition and a strong dose of intertextuality that is accepted, or even sought after.
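The abstract does not specify its distance formula; as a minimal sketch in the spirit of an intertextual (word-frequency) distance, with toy texts standing in for the sliding-window segments:

```python
from collections import Counter

# A simple intertextual-distance sketch: the distance between two texts is the
# summed absolute difference of their relative word frequencies, halved so the
# result lies in [0, 1]. Texts here are toy stand-ins for window segments.
def intertextual_distance(text_a: str, text_b: str) -> float:
    a, b = text_a.lower().split(), text_b.lower().split()
    fa, fb = Counter(a), Counter(b)
    vocab = set(fa) | set(fb)
    # Relative frequencies make texts of different lengths comparable.
    diff = sum(abs(fa[w] / len(a) - fb[w] / len(b)) for w in vocab)
    return diff / 2      # 0 = identical profiles, 1 = disjoint vocabularies

assert intertextual_distance("a b a", "a b a") == 0.0
assert intertextual_distance("a a", "b b") == 1.0
```

In an attribution setting, each window of a suspect text would be compared against candidate-author profiles, and duplications show up as windows whose distance to the copied original falls below those of same-author controls.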