ToPoliNano: Nanoarchitectures Design Made Real
Many facts about emerging nanotechnologies are yet to be assessed. There are still major concerns, for instance, about the maximum achievable device density, or about which architecture best fits a specific application. Growing complexity requires taking into account many aspects of technology, application, and architecture at the same time. Researchers face problems that are not new per se, but are now subject to very different constraints, which need to be captured by design tools. Among the emerging nanotechnologies, two-dimensional nanowire-based arrays represent promising nanostructures, especially for massively parallel computing architectures. Few attempts have been made to enable the exploration of architectural solutions by deriving information from extensive and reliable nanoarray characterization. Moreover, there is still no clear winner in the nanotechnology arena, so it is important to be able to target different technologies in order not to miss the next big thing. We present a tool, ToPoliNano, that enables such a multi-technological characterization in terms of logic behavior, power and timing performance, area, and layout constraints, on the basis of specific technological and topological descriptions. This tool can aid the design process, besides providing a comprehensive simulation framework for DC and timing simulations and detailed power analysis. Design and simulation results are shown for nanoarray-based circuits. ToPoliNano is the first real design tool that tackles the top-down design of a circuit based on emerging technologies.
Yield Enhancement of Digital Microfluidics-Based Biochips Using Space Redundancy and Local Reconfiguration
As microfluidics-based biochips become more complex, manufacturing yield will have a significant influence on production volume and product cost. We propose an interstitial redundancy approach to enhance the yield of biochips that are based on droplet-based microfluidics. In this design method, spare cells are placed in the interstitial sites within the microfluidic array, and they replace neighboring faulty cells via local reconfiguration. The proposed design method is evaluated using a set of concurrent real-life bioassays.
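A minimal sketch of the local-reconfiguration idea, assuming a square array with spare cells at the four interstitial sites around each primary cell; the names, the neighbourhood rule, and the greedy assignment are illustrative assumptions, not the paper's method:

```python
# Illustrative sketch of interstitial redundancy (assumptions, not the
# paper's code). Primary cells sit on integer grid points; spare cells sit
# at interstitial sites (offset by 0.5, 0.5). A faulty primary cell is
# remapped to a free adjacent spare during local reconfiguration.

def interstitial_spares(row, col):
    """Return the (up to four) interstitial spare sites adjacent to a cell."""
    return [(row - 0.5, col - 0.5), (row - 0.5, col + 0.5),
            (row + 0.5, col - 0.5), (row + 0.5, col + 0.5)]

def reconfigure(faulty_cells, array_rows, array_cols):
    """Greedy local reconfiguration: map each faulty cell to a free spare."""
    used_spares = set()
    mapping = {}
    for cell in faulty_cells:
        for spare in interstitial_spares(*cell):
            in_bounds = (0 <= spare[0] <= array_rows - 1 and
                         0 <= spare[1] <= array_cols - 1)
            if in_bounds and spare not in used_spares:
                mapping[cell] = spare
                used_spares.add(spare)
                break
        else:
            return None  # no free neighbouring spare: reconfiguration fails
    return mapping

print(reconfigure([(1, 1), (1, 2)], 4, 4))
# {(1, 1): (0.5, 0.5), (1, 2): (0.5, 1.5)}
```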
On hardware for generating routes in Kautz digraphs
In this paper we present a hardware implementation of an algorithm for generating node-disjoint routes in a Kautz network. Kautz networks are based on a family of digraphs described by W. H. Kautz [Kautz 68]. A Kautz network with in-degree and out-degree d has N = d^k + d^(k-1) nodes (for any cardinals d, k > 0). The diameter is at most k, and the degree is fixed and independent of the network size. Moreover, the network is fault-tolerant, its connectivity is d, and the mapping of standard computation graphs such as a linear array, a ring, and a tree onto a Kautz network is straightforward.
The network has a simple routing mechanism, even when nodes or links are faulty. Imase et al. [Imase 86] showed the existence of d node-disjoint paths between any pair of vertices. In Smit et al. [Smit 91], an algorithm is described that generates d node-disjoint routes between two arbitrary nodes in the network. In this paper we present a simple and fast hardware implementation of this algorithm. It can be realized with standard components (Field Programmable Gate Arrays).
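For concreteness, a small sketch of the Kautz digraph K(d, k) built from its standard string definition; this illustrates the structure the routing hardware exploits, not the paper's FPGA algorithm:

```python
from itertools import product

# Nodes of K(d, k) are words of length k over an alphabet of d + 1 symbols
# in which no two adjacent symbols are equal; node u has an edge to v when
# v is u shifted left by one symbol. (Standard definition, not paper code.)

def kautz_nodes(d, k):
    """All nodes of K(d, k): words of length k with distinct adjacent symbols."""
    return [w for w in product(range(d + 1), repeat=k)
            if all(a != b for a, b in zip(w, w[1:]))]

def successors(node, d):
    """Out-neighbours: shift left, append any symbol != the new last symbol."""
    return [node[1:] + (s,) for s in range(d + 1) if s != node[-1]]

d, k = 2, 3
print(len(kautz_nodes(d, k)), d**k + d**(k - 1))  # 12 12: N = d^k + d^(k-1)
print(successors((0, 1, 2), d))                   # [(1, 2, 0), (1, 2, 1)]
```

Routing amounts to shifting in the destination's symbols one per hop, which is why route length is bounded by the diameter k.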
Algorithm Based Fault Tolerance in Massively Parallel Systems
A complex computer system consists of billions of transistors, miles of wires, and many interactions with an unpredictable environment. Correct results must be produced despite faults that dynamically occur in some of these components. Many techniques have been developed for fault-tolerant computation. General-purpose methods are independent of the application, yet incur an overhead cost which may be unacceptable for massively parallel systems. Algorithm-specific methods, which can operate at lower cost, are a developing alternative [1, 72]. This paper first reviews the general-purpose approach and then focuses on the algorithm-specific method, with an eye toward massively parallel processors. Algorithm-based fault tolerance has the attraction of low overhead; furthermore, it addresses both the detection and the correction problems. The principle is to build low-cost checking and correcting mechanisms based exclusively on the redundancies inherent in the system.
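As a concrete instance of the principle, the classic checksum encoding of Huang and Abraham for matrix multiplication is sketched below; this is a standard textbook example of algorithm-based fault tolerance, not code from the paper:

```python
import numpy as np

# Checksum-encoded matrix multiply: A gains a row of column sums, B a
# column of row sums; the product then carries checksums that detect and
# locate a single erroneous element. (Illustrative sketch only.)

def column_checksum(A):
    return np.vstack([A, A.sum(axis=0)])                  # append column-sum row

def row_checksum(B):
    return np.hstack([B, B.sum(axis=1, keepdims=True)])   # append row-sum column

rng = np.random.default_rng(0)
A, B = rng.integers(0, 10, (3, 3)), rng.integers(0, 10, (3, 3))

C = column_checksum(A) @ row_checksum(B)  # full checksum product
C[1, 2] += 5                              # inject a transient fault

bad_row = np.flatnonzero(C[:-1, :-1].sum(axis=1) != C[:-1, -1])
bad_col = np.flatnonzero(C[:-1, :-1].sum(axis=0) != C[-1, :-1])
print(bad_row, bad_col)                   # [1] [2]: fault located at (1, 2)
# Correction subtracts the row-checksum discrepancy from the flagged element.
```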
Fault tolerance issues in nanoelectronics
The astonishing success story of microelectronics cannot go on indefinitely. In fact, once devices reach the few-atom scale (nanoelectronics), transient quantum effects are expected to impair their behaviour. Fault-tolerant techniques will then be required. The aim of this thesis is to investigate the problem of transient errors in nanoelectronic devices. Transient error rates are estimated for a selection of nanoelectronic gates based upon quantum cellular automata and single-electron devices, in which the electrostatic interaction between electrons is used to create Boolean circuits. On the basis of these results, various fault-tolerant solutions are proposed for both logic and memory nanochips. For logic chips, traditional techniques are found to be unsuitable. A new technique, in which the voting approach of triple modular redundancy (TMR) is extended by cascading TMR units composed of nanogate clusters, is proposed and generalised to other voting approaches. For memory chips, an error-correcting code approach is found to be suitable. Various codes are considered, and a lookup-table approach is proposed for encoding and decoding. We are then able to estimate the redundancy level to be provided on nanochips so as to make their mean time between failures acceptable. It is found that, for logic chips, space redundancies up to a few tens are required if mean times between failures are to be of the order of a few years; space redundancy can also be traded for time redundancy. For memory chips, mean times between failures of the order of a few years are found to imply both space and time redundancies of the order of ten.
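A toy model of the cascaded-TMR argument, under an assumed independent-faults model with ideal voters; this is a back-of-the-envelope sketch, not the thesis's analysis:

```python
# One TMR stage fails when at least two of its three replicas fail.
# Cascading feeds TMR outputs into further TMR stages, driving the
# effective failure probability down level by level. Voters are assumed
# perfect here; the failure rate p of a nanogate cluster is an assumption.

def tmr_failure(p):
    """P(majority of three independent replicas is wrong), perfect voter."""
    return 3 * p**2 * (1 - p) + p**3

def cascaded_tmr_failure(p, levels):
    """Apply the TMR reduction recursively over `levels` cascade stages."""
    for _ in range(levels):
        p = tmr_failure(p)
    return p

p = 1e-2  # assumed transient error rate of a single nanogate cluster
for levels in range(4):
    print(levels, cascaded_tmr_failure(p, levels))
# 0 -> 1e-2, 1 -> ~3e-4, 2 -> ~2.7e-7, 3 -> ~2.1e-13
```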
Evaluating local indirect addressing in SIMD processors
In the design of parallel computers, there exists a tradeoff between the number and power of individual processors. The single instruction stream, multiple data stream (SIMD) model of parallel computers lies at one extreme of the resulting spectrum: the available hardware resources are devoted to creating the largest possible number of processors, and consequently each individual processor must use the fewest possible resources. Disagreement exists as to whether SIMD processors should be able to generate addresses individually into their local data memory, or whether all processors should access the same address. The tradeoff between the increased capability and the reduced number of processors that arises in the single instruction stream, multiple locally addressed data (SIMLAD) model is examined. Factors affecting this design choice are assembled, and the SIMLAD model is compared with the bare SIMD and MIMD models.
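The distinction at issue can be made concrete with a small sketch (illustrative only; the array names and sizes are assumptions): in the bare SIMD model a single broadcast address indexes every processor's local memory, whereas in the SIMLAD model each processing element supplies its own local address.

```python
import numpy as np

# Model each PE's local memory as one row of `mem`: row i belongs to PE i.
n_pes, mem_size = 8, 16
mem = np.arange(n_pes * mem_size).reshape(n_pes, mem_size)

# Bare SIMD: one broadcast address, every PE reads the same local offset.
addr = 5
simd_read = mem[:, addr]

# SIMLAD: each PE supplies its own address into its own local memory.
local_addr = np.array([3, 7, 0, 5, 9, 1, 12, 4])  # per-PE address registers
simlad_read = mem[np.arange(n_pes), local_addr]

print(simd_read)    # same offset in every local memory
print(simlad_read)  # per-PE gather: more capable PEs, but fewer of them
```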