Design of an Efficient Interconnection Network of Temperature Sensors
Temperature has become a first-class design constraint because high temperatures adversely affect circuit reliability, increase static power, and degrade performance. In this scenario, thermal characterization of ICs and on-chip temperature monitoring represent fundamental tasks in electronic design. In this work, we analyze the features that an interconnection network of temperature sensors must fulfil. Starting from the network topology, we then propose a very lightweight network architecture based on digitization resource sharing. Our proposal yields a 16% improvement in area and power consumption compared to traditional approaches.
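The digitization-resource-sharing idea can be pictured as several analog sensing front-ends time-multiplexed onto a single shared converter, so that one digitizer serves the whole sensor network. The C sketch below is purely illustrative; the sensor count, full-scale range, and function names are assumptions, not details taken from the paper.

```c
/* Illustrative sketch of digitization resource sharing: a single shared
 * ADC is time-multiplexed over several temperature sensing front-ends.
 * Sensor count, full-scale range and interfaces are assumptions. */
#include <stdio.h>

#define NUM_SENSORS 16

/* Placeholder for an analog front-end: returns a temperature in Celsius. */
static double read_sensor_analog(int id)
{
    return 45.0 + (id % 4);   /* dummy data for illustration */
}

/* Placeholder for the single shared ADC: quantizes to a 10-bit code. */
static unsigned adc_convert(double celsius)
{
    double code = celsius / 125.0 * 1023.0;   /* 0..125 C full scale (assumed) */
    if (code < 0.0) code = 0.0;
    if (code > 1023.0) code = 1023.0;
    return (unsigned)code;
}

int main(void)
{
    unsigned samples[NUM_SENSORS];

    /* Round-robin: in each slot one sensor is routed to the shared ADC,
     * so only one digitizer is needed for the whole network. */
    for (int slot = 0; slot < NUM_SENSORS; ++slot) {
        samples[slot] = adc_convert(read_sensor_analog(slot));
        printf("sensor %2d -> code %u\n", slot, samples[slot]);
    }
    return 0;
}
```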
A software controlled voltage tuning system using multi-purpose ring oscillators
This paper presents a novel software-driven voltage tuning method that
utilises multi-purpose ring oscillators (ROs) to provide process- and
environment-sensitive energy reductions. The proposed technique enables voltage
tuning based on the observed frequency of the ROs, taken as a representation of
the device speed and used to estimate a safe minimum operating voltage at a
given core frequency. A conservative linear relationship between RO frequency
and silicon speed is used to approximate the critical path of the processor.
Using a multi-purpose RO not specifically implemented for critical path
characterisation is a unique approach to voltage tuning. The parameters
governing the relationship between RO and silicon speed are obtained through
the testing of a sample of processors from different wafer regions. These
parameters can then be used on all devices of that model. The tuning method and
software control framework are demonstrated on a sample of XMOS XS1-U8A-64
embedded microprocessors, yielding a dynamic power saving of up to 25% with no
performance reduction and no negative impact on the real-time constraints of
the embedded software running on the processor.
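In spirit, the tuning step maps an observed RO frequency to an estimated safe minimum supply voltage for the target core frequency through a conservative linear model. The C sketch below is only a schematic reconstruction under assumed coefficients, guard band, and voltage grid; none of these values or function names come from the paper.

```c
/* Hedged sketch: estimate a safe minimum supply voltage from the measured
 * frequency of a multi-purpose ring oscillator. The linear coefficients,
 * guard band and voltage grid are illustrative assumptions. */
#include <stdio.h>

/* Assumed conservative linear model: achievable core frequency (MHz) as a
 * function of RO frequency and supply voltage. */
static double est_core_fmax_mhz(double ro_freq_mhz, double vdd_mv)
{
    const double A = 0.85;    /* slope: core MHz per RO MHz (assumed)  */
    const double B = 0.35;    /* MHz gained per mV of supply (assumed) */
    const double C = -150.0;  /* offset (assumed)                      */
    return A * ro_freq_mhz + B * vdd_mv + C;
}

/* Lower VDD step by step while the estimated Fmax still covers the target
 * core frequency plus a guard band; return the last safe setting. */
static double tune_vdd_mv(double ro_freq_at_nominal_mhz, double target_core_mhz)
{
    const double vdd_nom_mv = 1000.0, vdd_min_mv = 700.0, step_mv = 10.0;
    const double guard_mhz  = 20.0;   /* conservative margin (assumed) */
    double vdd = vdd_nom_mv;

    while (vdd - step_mv >= vdd_min_mv) {
        /* The RO frequency itself drops with VDD; model that dependence
         * with a simple proportional scaling (again an assumption). */
        double ro_scaled = ro_freq_at_nominal_mhz * (vdd - step_mv) / vdd_nom_mv;
        if (est_core_fmax_mhz(ro_scaled, vdd - step_mv) < target_core_mhz + guard_mhz)
            break;                    /* the next step would be unsafe */
        vdd -= step_mv;
    }
    return vdd;
}

int main(void)
{
    printf("tuned VDD = %.0f mV\n", tune_vdd_mv(600.0, 500.0));
    return 0;
}
```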
AI/ML Algorithms and Applications in VLSI Design and Technology
An evident challenge ahead for the integrated circuit (IC) industry in the
nanometer regime is the investigation and development of methods that can
reduce the design complexity ensuing from growing process variations and
curtail the turnaround time of chip manufacturing. Conventional methodologies
employed for such tasks are largely manual; thus, time-consuming and
resource-intensive. In contrast, the unique learning strategies of artificial
intelligence (AI) provide numerous exciting automated approaches for handling
complex and data-intensive tasks in very-large-scale integration (VLSI) design
and testing. Employing AI and machine learning (ML) algorithms in VLSI design
and manufacturing reduces the time and effort for understanding and processing
the data within and across different abstraction levels via automated learning
algorithms. This, in turn, improves IC yield and reduces manufacturing
turnaround time. This paper thoroughly reviews the automated AI/ML approaches
that have been introduced for VLSI design and manufacturing. Moreover, we
discuss the scope of AI/ML applications in the future at various abstraction
levels to revolutionize the field of VLSI design, aiming for high-speed, highly
intelligent, and efficient implementations.
Design and modelling of variability tolerant on-chip communication structures for future high performance system on chip designs
Relentless technology scaling has enabled the integration of functionally complex System-on-Chip (SoC) designs with a large number of heterogeneous systems on a single chip. The processing elements on these chips are integrated through on-chip communication structures, which provide the infrastructure necessary for the exchange of data and control signals while meeting stringent physical and design constraints. On-chip communication will be central to future designs, in which variability is an inherent characteristic. For this reason, in this thesis we investigate the performance and variability tolerance of typical on-chip communication structures. Understanding the relationship between variability and communication is paramount for designers, enabling them to devise new methods and techniques for designing performance- and power-efficient communication circuits in the face of the challenges presented by deep sub-micron (DSM) technologies.
The initial part of this work investigates the impact of device variability due to Random Dopant Fluctuations (RDF) on the timing characteristics of basic communication elements. The characterization data so obtained can be used to estimate the performance and failure probability of simple links through the methodology proposed in this work. For the Statistical Static Timing Analysis (SSTA) of larger circuits, a method for the accurate estimation of the probability density functions of different circuit parameters is proposed, and its significance for pipelined circuits is highlighted. Power and area are among the most important design metrics for any integrated circuit (IC) design. This thesis emphasises the consideration of communication reliability while optimizing for power and area. A methodology is proposed for the simultaneous optimization of performance, area, power, and delay variability for a repeater-inserted interconnect. Similarly, bandwidth-driven optimizations have been performed for multi-bit parallel links. Power- and area-efficient semi-serial links, less vulnerable to delay variations than the corresponding fully parallel links, are introduced. Furthermore, due to technology scaling, coupling noise between the link lines has become an important issue. With ever-decreasing supply voltages, and the corresponding reduction in noise margins, severe challenges are introduced for performing timing verification in the presence of variability. For this reason, an accurate model for crosstalk noise in an interconnect, as a function of time and skew, is introduced in this work. This model can be used to identify the skew condition that gives the maximum delay noise, and also for efficient design verification.
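The skew-dependent crosstalk analysis can be illustrated with a toy lumped model: a victim and an aggressor line, each reduced to one RC node and coupled through a capacitor, are driven by ramps offset by a skew, and the victim's 50% crossing time is recorded for each skew to locate the worst case. The C sketch below is an assumed illustration; the element values, ramp times, and brute-force Euler integration are not the closed-form model proposed in the thesis.

```c
/* Toy experiment in the spirit of the skew-dependent crosstalk analysis:
 * victim and aggressor are each one lumped RC node, coupled by Cc, driven
 * by opposite-going ramps offset by a skew. All element values are assumed. */
#include <stdio.h>

#define DT   1e-12   /* 1 ps time step        */
#define TEND 2e-9    /* 2 ns simulated window */

static double ramp(double t, double t0, double trise, double v0, double v1)
{
    if (t <= t0) return v0;
    if (t >= t0 + trise) return v1;
    return v0 + (v1 - v0) * (t - t0) / trise;
}

/* Returns the victim's 50%-VDD crossing time for a given aggressor skew. */
static double victim_delay(double skew)
{
    const double R = 500.0, Cg = 50e-15, Cc = 100e-15, VDD = 1.0, TR = 100e-12;
    double vv = 0.0, va = VDD;              /* victim rises, aggressor falls */

    for (double t = 0.0; t < TEND; t += DT) {
        double vin_v = ramp(t, 0.0,  TR, 0.0, VDD);   /* victim input      */
        double vin_a = ramp(t, skew, TR, VDD, 0.0);   /* aggressor input   */
        /* Node equations: (Cg+Cc)*dV/dt - Cc*dVother/dt = (Vin - V)/R     */
        double iv = (vin_v - vv) / R;
        double ia = (vin_a - va) / R;
        /* Invert the 2x2 capacitance matrix [Cg+Cc, -Cc; -Cc, Cg+Cc].     */
        double det = (Cg + Cc) * (Cg + Cc) - Cc * Cc;
        vv += ((Cg + Cc) * iv + Cc * ia) / det * DT;
        va += (Cc * iv + (Cg + Cc) * ia) / det * DT;
        if (vv >= VDD / 2.0) return t;
    }
    return TEND;
}

int main(void)
{
    double worst_skew = 0.0, worst_delay = 0.0;
    for (double s = -200e-12; s <= 200e-12; s += 10e-12) {
        double d = victim_delay(s);
        if (d > worst_delay) { worst_delay = d; worst_skew = s; }
    }
    printf("worst-case skew %.0f ps, victim delay %.1f ps\n",
           worst_skew * 1e12, worst_delay * 1e12);
    return 0;
}
```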
Scaling silicon-based quantum computing using CMOS technology: State-of-the-art, Challenges and Perspectives
Complementary metal-oxide semiconductor (CMOS) technology has radically
reshaped the world by taking humanity to the digital age. Cramming more
transistors into the same physical space has enabled an exponential increase in
computational performance, a strategy that has been recently hampered by the
increasing complexity and cost of miniaturization. To continue achieving
significant gains in computing performance, new computing paradigms, such as
quantum computing, must be developed. However, finding the optimal physical
system to process quantum information, and scale it up to the large number of
qubits necessary to build a general-purpose quantum computer, remains a
significant challenge. Recent breakthroughs in nanodevice engineering have
shown that qubits can now be manufactured in a similar fashion to silicon
field-effect transistors, opening an opportunity to leverage the know-how of
the CMOS industry to address the scaling challenge. In this article, we focus
on the analysis of the scaling prospects of quantum computing systems based on
CMOS technology.
Dynamic Power Management for Neuromorphic Many-Core Systems
This work presents a dynamic power management architecture for neuromorphic
many-core systems such as SpiNNaker. A fast dynamic voltage and frequency
scaling (DVFS) technique is presented which allows the processing elements (PE)
to change their supply voltage and clock frequency individually and
autonomously within less than 100 ns. This is employed by the neuromorphic
simulation software flow, which defines the performance level (PL) of the PE
based on the actual workload within each simulation cycle. A test chip in 28 nm
SLP CMOS technology has been implemented. It includes 4 PEs which can be scaled
from 0.7 V to 1.0 V with frequencies from 125 MHz to 500 MHz at three distinct
PLs. By measurement of three neuromorphic benchmarks it is shown that the total
PE power consumption can be reduced by 75%, with 80% baseline power reduction
and a 50% reduction of energy per neuron and synapse computation, all while
maintaining temporary peak system performance to achieve biological real-time
operation of the system. A numerical model of this power management scheme is
derived, which allows DVFS architecture exploration for neuromorphic systems.
The proposed technique is to be used for the second-generation SpiNNaker
neuromorphic many-core system.
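The per-cycle performance-level selection can be summarized as: estimate the workload of the coming simulation cycle, then pick the lowest PL whose frequency still completes that work within the real-time tick. The C sketch below reuses the 0.7 V/125 MHz and 1.0 V/500 MHz corner points quoted above, but the intermediate PL, the power proxy, and the function names are assumptions.

```c
/* Hedged sketch of per-PE performance-level selection each simulation cycle.
 * The PL table endpoints (0.7 V/125 MHz, 1.0 V/500 MHz) come from the
 * abstract; the middle PL and the power model are illustrative assumptions. */
#include <stdio.h>

typedef struct { double vdd_v; double freq_mhz; } perf_level;

static const perf_level PL[3] = {
    { 0.70, 125.0 },   /* PL0: lowest voltage/frequency   */
    { 0.85, 250.0 },   /* PL1: assumed intermediate point */
    { 1.00, 500.0 },   /* PL2: peak performance           */
};

/* Pick the lowest PL whose frequency finishes `work_cycles` of processing
 * within the biological real-time budget `budget_ms` (e.g. a 1 ms tick). */
static int select_pl(double work_cycles, double budget_ms)
{
    for (int i = 0; i < 3; ++i) {
        double time_ms = work_cycles / (PL[i].freq_mhz * 1e3);  /* cycles per ms */
        if (time_ms <= budget_ms)
            return i;
    }
    return 2;  /* even the peak PL is needed (or the deadline is missed) */
}

/* Simple dynamic-power proxy P ~ V^2 * f (arbitrary units, assumption). */
static double rel_power(int pl)
{
    return PL[pl].vdd_v * PL[pl].vdd_v * PL[pl].freq_mhz;
}

int main(void)
{
    double workloads[] = { 5.0e4, 1.5e5, 4.0e5 };   /* cycles per 1 ms tick */
    for (int i = 0; i < 3; ++i) {
        int pl = select_pl(workloads[i], 1.0);
        printf("workload %.0f cycles -> PL%d (%.2f V, %.0f MHz), power %.0f a.u.\n",
               workloads[i], pl, PL[pl].vdd_v, PL[pl].freq_mhz, rel_power(pl));
    }
    return 0;
}
```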
Robust low-power digital circuit design in nano-CMOS technologies
Device scaling has resulted in large-scale integrated, high-performance, low-power, and low-cost systems. However, the move towards sub-100 nm technology nodes has increased variability in device characteristics due to large process variations. Variability has severe implications for digital circuit design: it causes timing uncertainties in combinational circuits, degrades the yield and reliability of memory elements, and increases power density due to the slow scaling of the supply voltage. Conventional design methods add large, pessimistic safety margins to mitigate the increased variability; however, they incur large power and performance losses, as the combination of worst cases occurs very rarely.
In-situ monitoring of timing failures provides an opportunity to dynamically tune safety margins in proportion to on-chip variability, which can significantly reduce power and performance losses. Through simulation, we demonstrate two delay-sensor designs that detect timing failures in advance and can be coupled with compensation techniques such as voltage scaling, body biasing, or frequency scaling to avoid actual timing failures. Our simulation results using 45 nm and 32 nm BSIM4 technology models indicate a significant reduction in total power consumption under temperature and statistical variations. Future work involves using dual sensing to avoid unnecessary voltage scaling that incurs a speed loss.
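The coupling of an in-situ delay sensor with a compensation knob such as supply scaling can be pictured as a simple closed loop: when the sensor's early-warning flag fires, the supply is stepped up; after a quiet interval, it is stepped down to reclaim margin. The C sketch below is an assumed illustration; the thresholds, step size, and sensor interface are not taken from the thesis.

```c
/* Hedged sketch of adaptive margin tuning driven by an in-situ delay sensor.
 * The sensor flags paths that come within a guard band of the clock edge
 * before an actual timing failure occurs. Thresholds, step sizes and the
 * sensor interface are illustrative assumptions. */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define VDD_MIN_MV 800
#define VDD_MAX_MV 1100
#define STEP_MV     10

/* Stand-in for the hardware warning flag: here the warning probability
 * simply grows as the supply approaches VDD_MIN_MV (purely illustrative). */
static bool delay_sensor_warning(int vdd_mv)
{
    int margin = vdd_mv - VDD_MIN_MV;
    return (rand() % 300) > margin;
}

int main(void)
{
    int vdd_mv = VDD_MAX_MV;
    int quiet_epochs = 0;

    for (int epoch = 0; epoch < 50; ++epoch) {
        if (delay_sensor_warning(vdd_mv)) {
            /* Warning: a monitored path nearly missed the clock edge.
             * Raise VDD immediately so no real failure occurs. */
            if (vdd_mv + STEP_MV <= VDD_MAX_MV) vdd_mv += STEP_MV;
            quiet_epochs = 0;
        } else if (++quiet_epochs >= 5) {
            /* Quiet for a while: reclaim margin by lowering the supply. */
            if (vdd_mv - STEP_MV >= VDD_MIN_MV) vdd_mv -= STEP_MV;
            quiet_epochs = 0;
        }
        printf("epoch %2d: VDD = %d mV\n", epoch, vdd_mv);
    }
    return 0;
}
```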
The SRAM cache is the first victim of increased process variations and requires handcrafted design to meet area, power, and performance requirements. We have proposed novel six-transistor (6T), seven-transistor (7T), and eight-transistor (8T) SRAM cells that enable variability-tolerant and low-power SRAM cache designs. Increased sense-amplifier offset voltage, due to device mismatch arising from high variability, increases the delay and power consumption of SRAM designs. We have proposed two novel design techniques to reduce offset-voltage-dependent delays, providing a high-speed, low-power SRAM design. Increasing leakage currents in nano-CMOS technologies pose a major challenge for low-power, reliable design. We have investigated a novel segmented supply voltage architecture to reduce the leakage power of SRAM caches, since they occupy the bulk of the total chip area and power. Future work involves developing leakage-reduction methods for combinational logic, including the SRAM peripherals.
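The segmented supply idea can be illustrated with a back-of-the-envelope leakage estimate: only the accessed segment sits at full VDD, while idle segments are held at a lower retention voltage where cell leakage is much smaller. The C sketch below is a hedged illustration; the segment count, voltages, and the exponential leakage-versus-voltage scaling are assumptions rather than figures from the thesis.

```c
/* Hedged back-of-the-envelope sketch of a segmented supply for an SRAM
 * cache: one accessed segment at full VDD, idle segments at a retention
 * voltage. Segment counts, voltages and leakage scaling are assumptions. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const int    segments      = 16;
    const long   cells_per_seg = 64 * 1024 * 8;   /* 64 KB of 8-bit words     */
    const double i_leak_nom_na = 1.0;             /* nA/cell at 1.0 V (assumed) */
    const double vdd_full      = 1.0;
    const double vdd_retain    = 0.6;
    /* Assume leakage scales roughly exponentially with VDD (DIBL-like):
     * I(V) = I_nom * exp(k * (V - 1.0)), with k an illustrative constant. */
    const double k = 5.0;

    double i_active = i_leak_nom_na;
    double i_idle   = i_leak_nom_na * exp(k * (vdd_retain - vdd_full));

    double leak_flat = segments * cells_per_seg * i_active;          /* nA */
    double leak_seg  = cells_per_seg * i_active
                     + (segments - 1) * cells_per_seg * i_idle;      /* nA */

    printf("flat supply : %.1f mA leakage\n", leak_flat * 1e-6);
    printf("segmented   : %.1f mA leakage (%.0f%% lower)\n",
           leak_seg * 1e-6, 100.0 * (1.0 - leak_seg / leak_flat));
    return 0;
}
```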