2,259 research outputs found
An On-line BIST RAM Architecture with Self Repair Capabilities
The emerging field of self-repair computing is expected to have a major impact on deployable systems for space missions and defense applications, where high reliability, availability, and serviceability are needed. In this context, RAMs (random-access memories) are among the most critical components. This paper proposes a built-in self-repair (BISR) approach for RAM cores. The proposed design, which introduces minimal and technology-dependent overheads, can detect and repair a wide range of memory faults, including stuck-at, coupling, and address faults. The test and repair capabilities are used on-line and are completely transparent to the external user, who can use the memory without any change in the memory-access protocol. Using a fault-injection environment that can emulate the occurrence of faults inside the module, the effectiveness of the proposed architecture was verified in terms of both fault-detection and repair capability. Memories of various sizes have been considered to evaluate the area overhead introduced by the proposed architecture.
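A minimal sketch of the kind of read-back check a BIST engine performs, assuming a hypothetical 8-word RAM model with one injected stuck-at fault (the paper's actual design and fault model are not reproduced here); a March-style test walks the address space writing and reading back patterns, and a stuck-at cell fails the check:

```python
# Software model of a March-style memory test (illustrative only).
class FaultyRAM:
    def __init__(self, size, stuck_addr=None, stuck_val=0):
        self.cells = [0] * size
        self.stuck_addr, self.stuck_val = stuck_addr, stuck_val

    def write(self, addr, val):
        # A stuck-at cell ignores writes and keeps its stuck value.
        self.cells[addr] = self.stuck_val if addr == self.stuck_addr else val

    def read(self, addr):
        return self.cells[addr]

def march_test(ram, size):
    faults = []
    for a in range(size):            # ascending pass: write 0 everywhere
        ram.write(a, 0)
    for a in range(size):            # ascending pass: read 0, write 1
        if ram.read(a) != 0:
            faults.append(a)
        ram.write(a, 1)
    for a in range(size):            # final pass: read back 1
        if ram.read(a) != 1:
            faults.append(a)
    return faults

ram = FaultyRAM(8, stuck_addr=3, stuck_val=0)   # cell 3 stuck at 0
print(march_test(ram, 8))                       # [3]
```

In the paper's on-line setting the equivalent checks run transparently behind the normal access protocol, and a detected faulty cell is remapped to a spare rather than just reported.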
Quantum Hopfield neural network
Quantum computing allows for the potential of significant advancements in both the speed and the capacity of widely used machine learning techniques. Here we employ quantum algorithms for the Hopfield network, which can be used for pattern recognition, reconstruction, and optimization as a realization of a content-addressable memory system. We show that an exponentially large network can be stored in a polynomial number of quantum bits by encoding the network into the amplitudes of quantum states. By introducing a classical technique for operating the Hopfield network, we can leverage quantum algorithms to obtain a quantum computational complexity that is logarithmic in the dimension of the data. We also present an application of our method as a genetic sequence recognizer.
Comment: 13 pages, 3 figures, final version
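For readers unfamiliar with Hopfield networks, a minimal classical sketch (pure Python, with a hypothetical 8-bit pattern) shows the content-addressable recall the abstract refers to: patterns of +/-1 are stored via the Hebbian rule and a corrupted cue converges back to a stored pattern. The quantum version in the paper instead encodes such a network into quantum-state amplitudes:

```python
# Classical Hopfield network sketch: Hebbian storage + threshold recall.
def train(patterns):
    n = len(patterns[0])
    # Hebbian weights: w[i][j] = sum_p p_i * p_j / n, zero diagonal.
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def recall(w, cue, steps=10):
    s = list(cue)
    n = len(s)
    for _ in range(steps):          # synchronous updates until stable
        s = [1 if sum(w[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

stored = [1, -1, 1, -1, 1, -1, 1, -1]
w = train([stored])
noisy = list(stored)
noisy[0] = -noisy[0]                # corrupt one bit of the cue
print(recall(w, noisy) == stored)   # True: the stored pattern is recovered
```

Classically the weight matrix costs O(n^2) storage; the paper's amplitude encoding is what makes an exponentially large network fit in polynomially many qubits.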
Stream specificity and asymmetries in feature binding and content-addressable access in visual encoding and memory
Human memory is content addressable, i.e., the contents of the memory can be accessed using partial information about the bound features of a stored item. In this study, we used a cross-feature cuing technique to examine how the human visual system encodes, binds, and retains information about multiple stimulus features within a set of moving objects. We sought to characterize the roles of three different features (position, color, and direction of motion, the latter two of which are processed preferentially within the ventral and dorsal visual streams, respectively) in the construction and maintenance of object representations. We investigated the extent to which these features are bound together across the following processing stages: stimulus encoding, sensory (iconic) memory, and visual short-term memory. Whereas all the features examined here can serve as cues for addressing content, their effectiveness shows asymmetries and varies according to cue-report pairings and the stage of information processing and storage. Position-based indexing theories predict that position should be a more effective cue than other features. While we found a privileged role for position as a cue at the stimulus-encoding stage, position was not the privileged cue at the sensory and visual short-term memory stages. Instead, the pattern that emerged from our findings is one that mirrors the parallel processing streams in the visual system. This stream-specific binding and cuing effectiveness manifests itself in all three stages of information processing examined here. Finally, we find that the Leaky Flask model proposed in our previous study is applicable to all three features.
GPUs as Storage System Accelerators
Massively multicore processors, such as Graphics Processing Units (GPUs), provide, at a comparable price, one order of magnitude higher peak performance than traditional CPUs. This drop in the cost of computation, like any order-of-magnitude drop in the cost per unit of performance for a class of system components, creates the opportunity to redesign systems and to explore new ways to engineer them to recalibrate the cost-to-performance relation. This project explores the feasibility of harnessing GPUs' computational power to improve the performance, reliability, or security of distributed storage systems. In this context, we present the design of a storage system prototype that uses GPU offloading to accelerate a number of computationally intensive primitives based on hashing, and we introduce techniques to efficiently leverage the processing power of GPUs. We evaluate the performance of this prototype under two configurations: as a content-addressable storage system that facilitates online similarity detection between successive versions of the same file, and as a traditional system that uses hashing to preserve data integrity. Further, we evaluate the impact of offloading to the GPU on the performance of competing applications. Our results show that this technique can bring tangible performance gains without negatively impacting the performance of concurrently running applications.
Comment: IEEE Transactions on Parallel and Distributed Systems, 201
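A minimal CPU-side sketch of the hashing primitive being accelerated, assuming hypothetical fixed-size chunking and Python's `hashlib` (the prototype's GPU offloading and chunking policy are not modeled): in a content-addressable store, chunks shared between successive file versions hash to the same key and are stored once, which is what makes similarity detection cheap.

```python
# Hash-based content-addressable store with chunk deduplication.
import hashlib

CHUNK = 4          # tiny chunk size, for illustration only

def chunks(data):
    return [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]

class ContentStore:
    def __init__(self):
        self.blocks = {}            # digest -> chunk bytes

    def put(self, data):
        """Store one file version; return its recipe (list of digests)."""
        recipe = []
        for c in chunks(data):
            d = hashlib.sha256(c).hexdigest()
            self.blocks.setdefault(d, c)   # dedup: each chunk stored once
            recipe.append(d)
        return recipe

    def get(self, recipe):
        return b"".join(self.blocks[d] for d in recipe)

store = ContentStore()
v1 = store.put(b"aaaabbbbcccc")
v2 = store.put(b"aaaaXXXXcccc")     # second version changes one chunk
print(len(store.blocks))            # 4 unique chunks stored, not 6
print(store.get(v1) == b"aaaabbbbcccc")   # True
```

The digest computation dominates the cost of `put` for large files, which is why it is an attractive target for GPU offloading.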
Nanoscale content-addressable memory
A combined content-addressable memory device and memory interface is provided. The combined device and interface includes one or more molecular-wire crossbar memories having spaced-apart key nanowires, spaced-apart value nanowires adjacent to the key nanowires, and configurable switches between the key nanowires and the value nanowires. The combination further includes a key microwire-nanowire grid (key MNG) electrically connected to the spaced-apart key nanowires, and a value microwire-nanowire grid (value MNG) electrically connected to the spaced-apart value nanowires. A key or value MNG selects multiple nanowires for a given key or value.
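A small software sketch of the lookup semantics a CAM implements, with hypothetical key/value widths (the patent's nanowire crossbar is hardware; this only illustrates the access model): unlike a RAM, which maps an address to data, a CAM compares an input key against every stored key row at once and returns the associated value(s).

```python
# Content-addressable lookup: match by key, not by address.
rows = [                  # (key bits, value bits) per crossbar row
    ((1, 0, 1, 1), (0, 0, 1)),
    ((0, 1, 1, 0), (0, 1, 0)),
    ((1, 1, 0, 0), (1, 1, 1)),
]

def cam_lookup(key):
    # In hardware every row comparison happens in parallel;
    # matching rows drive their value lines onto the output.
    return [value for stored, value in rows if stored == key]

print(cam_lookup((0, 1, 1, 0)))   # [(0, 1, 0)]
print(cam_lookup((1, 1, 1, 1)))   # [] -> no match
```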
Quantum pattern recognition with liquid-state nuclear magnetic resonance
A novel quantum pattern recognition scheme is presented, which combines the idea of a classic Hopfield neural network with adiabatic quantum computation. Both the input and the memorized patterns are represented by means of the problem Hamiltonian. In contrast to classic neural networks, the algorithm can return a quantum superposition of multiple recognized patterns. A proof of principle for the algorithm for two qubits is provided using a liquid-state NMR quantum computer.
Comment: updated version, Journal-ref added