
    Cross-Layer Optimization for Power-Efficient and Robust Digital Circuits and Systems

    With the increasing demand for digital services, performance and power efficiency have become vital requirements for digital circuits and systems. However, the enabling CMOS technology scaling has faced significant device uncertainties, such as process, voltage, and temperature variations. To ensure system reliability, worst-case corner assumptions are usually made at each design level. However, the over-pessimistic worst-case margin leads to unnecessary power waste and performance loss as high as 2.2x. Since optimizations are traditionally confined to each specific level, those safety margins can hardly be properly exploited. To tackle this challenge, this Ph.D. thesis proposes a cross-layer optimization for digital signal processing circuits and systems, to achieve a global balance between power consumption and output quality. To conclude, the traditional over-pessimistic worst-case approach leads to huge power waste. In contrast, the adaptive voltage scaling approach saves power (25% for the CORDIC application) by providing a just-needed supply voltage. The power saving is maximized (46% for CORDIC) when a more aggressive voltage over-scaling scheme is applied. The sparse circuit errors produced by aggressive voltage over-scaling are mitigated by higher-level error-resilient designs. For functions like FFT and CORDIC, smart error mitigation schemes were proposed to enhance reliability against soft errors and timing errors, respectively. Applications like Massive MIMO systems are robust against lower-level errors, thanks to their intrinsically redundant antennas. This property makes such systems well suited to digital hardware that trades quality for power savings. Comment: 190 pages
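
    The power savings quoted above follow from the quadratic dependence of dynamic switching power on supply voltage. A minimal sketch of that relationship (the capacitance and clock values are hypothetical, and leakage and frequency effects are ignored), showing how a roughly 13% voltage reduction yields a 25% dynamic-power saving:

```python
def dynamic_power(c_eff, v_dd, f_clk, alpha=1.0):
    """Switching power of CMOS logic: P = alpha * C_eff * Vdd^2 * f_clk."""
    return alpha * c_eff * v_dd ** 2 * f_clk

# Hypothetical circuit: 1 nF effective switched capacitance at 100 MHz
p_nominal = dynamic_power(1e-9, 1.000, 100e6)   # worst-case supply
p_scaled  = dynamic_power(1e-9, 0.866, 100e6)   # adaptively scaled supply
saving = 1 - p_scaled / p_nominal
print(f"power saving: {saving:.1%}")  # → power saving: 25.0%
```

A more aggressive over-scaled supply would push the saving further, at the cost of the occasional timing errors the thesis mitigates at higher design levels.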

    Robust low-power digital circuit design in nano-CMOS technologies

    Device scaling has resulted in large-scale integrated, high-performance, low-power, and low-cost systems. However, the move towards sub-100 nm technology nodes has increased variability in device characteristics due to large process variations. Variability has severe implications for digital circuit design: it causes timing uncertainties in combinational circuits, degrades the yield and reliability of memory elements, and increases power density due to the slow scaling of supply voltage. Conventional design methods add large pessimistic safety margins to mitigate the increased variability, but they incur large power and performance losses because the combination of worst cases occurs very rarely. In-situ monitoring of timing failures provides an opportunity to dynamically tune safety margins in proportion to on-chip variability, which can significantly reduce power and performance losses. We demonstrated, by simulation, two delay-sensor designs that detect timing failures in advance and can be coupled with compensation techniques such as voltage scaling, body biasing, or frequency scaling to avoid actual timing failures. Our simulation results using 45 nm and 32 nm BSIM4 technology models indicate a significant reduction in total power consumption under temperature and statistical variations. Future work involves using dual sensing to avoid useless voltage scaling that incurs a speed loss. SRAM cache is the first victim of increased process variations and requires handcrafted design to meet area, power, and performance requirements. We have proposed novel six-transistor (6T), seven-transistor (7T), and eight-transistor (8T) SRAM cells that enable variability-tolerant and low-power SRAM cache designs. Increased sense-amplifier offset voltage due to device mismatch arising from high variability increases the delay and power consumption of SRAM designs. We have proposed two novel design techniques to reduce offset-voltage-dependent delays, providing a high-speed, low-power SRAM design. Increasing leakage currents in nano-CMOS technologies pose a major challenge to low-power, reliable design. We have investigated a novel segmented supply voltage architecture to reduce the leakage power of SRAM caches, since they occupy the bulk of the total chip area and power. Future work involves developing leakage reduction methods for combinational logic designs, including SRAM peripherals.
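
    The delay sensing described above amounts to a feedback loop: a sensor flags paths whose delay approaches the clock period, and the supply is adjusted before an actual failure occurs. A toy sketch of such a loop under an assumed first-order delay model; every constant below is hypothetical and not taken from the thesis:

```python
# First-order, alpha-power-law style delay model; all constants here are
# invented, chosen only to make the control loop's behaviour visible.
V_TH = 0.45    # assumed threshold voltage (V)
T_CLK = 1.0    # clock period (arbitrary units)
K = 0.4        # assumed fitted delay constant
GUARD = 0.9    # sensor flags transitions later than 90% of the period

def path_delay(vdd):
    # delay grows sharply as Vdd approaches the threshold voltage
    return K * vdd / (vdd - V_TH) ** 1.3

vdd = 1.10                                  # start from the worst-case supply
for _ in range(50):
    if path_delay(vdd) > GUARD * T_CLK:     # late but still non-erroneous
        vdd += 0.010                        # back off before a real failure
    else:
        vdd -= 0.005                        # shave margin to save power
print(f"settled supply: {vdd:.2f} V")
```

The loop settles just above the point where the monitored delay reaches the guard band, dynamically recovering the margin a static worst-case design would waste.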

    Comparison of in-situ delay monitors for use in Adaptive Voltage Scaling

    In Adaptive Voltage Scaling (AVS) the supply voltage of digital circuits is tuned according to the circuit's actual operating condition, which enables dynamic compensation of process, voltage, temperature, and aging (PVTA) variations. By exploiting the excessive safety margins added in state-of-the-art worst-case designs, considerable power savings are achieved. In our approach, the operating condition of the circuit is monitored by in-situ delay monitors. This paper presents different designs that implement in-situ delay monitors capable of detecting late but still non-erroneous transitions, called Pre-Errors. The developed Pre-Error monitors are integrated in a 16-bit multiplier test circuit, and the resulting Pre-Error AVS system is modeled by a Markov chain in order to determine the power-saving potential of each Pre-Error detection approach.
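
    The Markov-chain modeling mentioned above can be illustrated with a toy AVS controller: on a Pre-Error the supply steps up one level, otherwise it steps down, and the chain's stationary distribution gives the expected supply level and hence the expected power. The voltage levels and Pre-Error probabilities below are invented for illustration, not taken from the paper:

```python
# States: three supply levels (hypothetical values)
levels = [1.00, 0.95, 0.90]      # Vdd in volts
p_pre  = [0.01, 0.05, 0.30]      # assumed Pre-Error probability per cycle

# Controller policy: on a Pre-Error go one level up, otherwise one level
# down (saturating at the ends). Build the transition matrix row by row.
n = len(levels)
P = [[0.0] * n for _ in range(n)]
for i in range(n):
    up   = max(i - 1, 0)         # index of the next-higher voltage
    down = min(i + 1, n - 1)     # index of the next-lower voltage
    P[i][up]   += p_pre[i]
    P[i][down] += 1 - p_pre[i]

# Stationary distribution by power iteration
pi = [1.0 / n] * n
for _ in range(1000):
    pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

# Expected dynamic power relative to C*f (P ~ Vdd^2)
mean_power = sum(p * v ** 2 for p, v in zip(pi, levels))
print([round(x, 3) for x in pi], round(mean_power, 3))
```

With these numbers the chain spends most of its time at the lowest level, and the expected power lands well below the worst-case (1.00 V) figure, which is the quantity such a model lets one compare across detection schemes.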

    A Novel Methodology for Error-Resilient Circuits in Near-Threshold Computing

    Department of Electrical Engineering
    The main goal of VLSI system design is high performance with low energy consumption. To realize human-centric technologies, such as the Internet of Things (IoT) and wearable devices, efficient power management techniques are required. Near-threshold computing (NTC) is one of the best-known techniques proposed to balance energy consumption against performance; with this technique, the operating point with the lowest energy at the highest performance can be selected. However, NTC suffers significant performance degradation and is prone to timing errors. Meanwhile, the main goal of integrated circuit (IC) design is to make the circuit operate correctly even under worst-case conditions, and guaranteeing this can incur considerable area and power overheads. As an alternative, the better-than-worst-case (BTWC) design paradigm has been proposed. One of its main instruments is error-resilient circuits, which detect and correct timing errors at the cost of area and power overheads. In this thesis, we propose various design methodologies that provide an optimal implementation of error-resilient circuits. Slack-based and sensitivity-based methodologies and a modified Quine-McCluskey (Q-M) algorithm are exploited to obtain the minimum set of error-resilient circuits without any loss of detection ability. For the sensitivity-based methodology, benchmark results show that the optimized designs reduce monitoring area by up to 46% without compromising the error detection ability of the initial error-resilient design. For the Quine-McCluskey (Q-M) algorithm, benchmark results show that the optimized design reduces by up to 72% the number of flip-flops that must be converted to error-resilient circuits, without compromising error detection ability. In addition, further power and area reductions are possible when a reasonable loss of error detection ability is accepted. Monte-Carlo analysis validates that our proposed method is tolerant to process variation.
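
    The flip-flop selection problem above is essentially a covering problem, analogous to the prime-implicant chart step of Quine-McCluskey: each candidate error-resilient flip-flop observes a set of timing-critical endpoints, and the smallest selection that still covers every endpoint is sought. A minimal sketch with an invented coverage table (essential picks first, then a greedy cover of the remainder):

```python
# Hypothetical mapping: flip-flop -> set of timing-critical paths it observes
covers = {
    "ff_a": {1, 2, 3},
    "ff_b": {2, 4},
    "ff_c": {3, 5},
    "ff_d": {4, 5, 6},
    "ff_e": {6},
}
all_paths = set().union(*covers.values())

# Essential flip-flops first (a path observed by exactly one FF must keep
# that FF), then a greedy cover of what remains -- mirroring essential
# prime-implicant extraction and chart reduction in Q-M.
chosen, uncovered = set(), set(all_paths)
for p in all_paths:
    owners = [ff for ff, s in covers.items() if p in s]
    if len(owners) == 1:
        chosen.add(owners[0])
for ff in chosen:
    uncovered -= covers[ff]
while uncovered:
    best = max(covers, key=lambda ff: len(covers[ff] & uncovered))
    chosen.add(best)
    uncovered -= covers[best]
print(sorted(chosen))  # → ['ff_a', 'ff_d']
```

Here only 2 of 5 flip-flops need to be made error-resilient while every monitored path stays covered, the same kind of reduction the thesis reports (up to 72% fewer converted flip-flops).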

    Circuits and Systems Advances in Near Threshold Computing

    Modern society is witnessing a sea change in ubiquitous computing, in which people have embraced computing systems as an indispensable part of day-to-day existence. Computation, storage, and communication abilities of smartphones, for example, have undergone monumental changes over the past decade. Meanwhile, the global emphasis on creating and sustaining green environments is leading to a rapid and ongoing proliferation of edge computing systems and applications. As a broad spectrum of healthcare, home, and transport applications shifts to the edge of the network, near-threshold computing (NTC) is emerging as one of the most promising low-power computing platforms. An NTC device sets its supply voltage close to its threshold voltage, dramatically reducing energy consumption. Despite showing substantial promise in terms of energy efficiency, NTC is yet to see wide-scale commercial adoption. This is because circuits and systems operating with NTC suffer from several problems, including increased sensitivity to process variation, reliability problems, performance degradation, and security vulnerabilities, to name a few. To realize its potential, we need designs, techniques, and solutions that overcome these challenges. This book familiarizes readers with recent advances in electronic circuits and systems, focusing on near-threshold computing.

    Efficient in-situ delay monitoring for chip health tracking


    Monitoring Seagrass within the Reef 2050 Integrated Monitoring and Reporting Program: final report of the Seagrass Expert Group

    Seagrass is widely distributed throughout the Great Barrier Reef (the Reef), with a documented 35,000 square kilometres and a potential habitat area of 228,300 square kilometres. Seagrass meadows occur in many different environmental conditions, both within and beyond the impact of flood plumes, and are common in areas of high anthropogenic activity, such as ports and areas adjacent to urban centres. Many processes and services that maintain the exceptional values of the Reef occur in seagrass meadows. To provide the services that support these values, seagrass habitats include a range of species, growth forms and benthic landscapes that respond to pressures in different ways. In many cases seagrasses also modify their environments to improve environmental conditions on the Reef. Seagrasses vary spatially and temporally in their distribution and abundance across the Reef, occurring in different water quality types (estuaries, coastal, reefal and offshore) and at different water depths (intertidal, shallow subtidal, deep water). The diversity of potential seagrass habitats is one reason they support so many of the environmental services and values of the Great Barrier Reef World Heritage Area (World Heritage Area), including: habitat for crabs, prawns and fish, supporting recreational and commercial fishing; a primary food resource for species of conservation significance (dugong, green turtles, migratory shore birds); shoreline stabilisation by binding sediment to slow erosion; water clarity improvement, by promoting the settlement of fine particulate matter; and provision of a natural carbon sink. To deliver the seagrass components of the knowledge system required for Reef 2050 Long-Term Sustainability Plan (Reef 2050 Plan) reporting and other management activities, modifications and enhancements will need to be made to the current seagrass monitoring programs.
    The Drivers, Pressures, State, Impact, Response (DPSIR) framework was used to identify linkages between the pressures on seagrass, the state of the seagrass, the impact a decline in seagrass would have on community values, and the responses management agencies can take to mitigate loss of values. We have also defined twelve seagrass habitat types that occur on the Reef, identified by a matrix of water body type and water depth. The seagrasses occurring in each habitat are exposed to different pressures and require different management actions (responses) to protect and enhance the values of the community and Reef ecosystems. The proposed monitoring program has three spatial and temporal scales, with each scale providing different information (knowledge) to support resilience-based management of the Reef.
    1. Habitat assessment: will occur across the Reef at all sites where seagrass has the potential to occur. It will determine seagrass abundance, species composition and the spatial extent of each habitat type within the World Heritage Area. This scale will be focused on supporting future Outlook reports, but will also provide information for operational and strategic management and contribute towards other reports.
    2. Health assessment: will take place at representative regional sites for each habitat type. These sites will provide managers with annual and seasonal trends in seagrass condition and resilience at a regional scale for each habitat. This scale will provide higher temporal detail (i.e. at least annually) of seagrass condition and resilience, supporting tactical, operational and strategic management applications. It will provide the majority of information for regional/catchment report cards and the assessment of management effectiveness at a catchment-wide scale. It will also contribute important trends in condition and resilience to Outlook reports and other communication products with more frequent reporting.
    3. Process monitoring: will take place at the fewest sites, nested within habitat and health assessment sites. Due to the time-consuming and complex nature of these measurements, the sampling sites will be chosen to focus on priority knowledge gaps. This scale will provide managers with information on cause-and-effect relationships and linkages between different aspects of the Reef's processes and ecosystems. It will include measures of seagrass resilience (for example, feedback loops, recovery time after disturbance, history of disturbance and thresholds for exposure to pressures). The attributes measured at these sites will also give managers confidence about the impact a change in seagrass condition is likely to have on other values of the Reef (for example, fish, megafauna, coral, Indigenous heritage, and human dimensions).
    To ensure that future seagrass monitoring delivers the information required to report on the Reef 2050 Plan and meets the other knowledge requirements of managers, a spatially balanced random sampling design needs to be implemented on the Reef. Existing monitoring programs can and should be integrated into this design. However, current seagrass monitoring programs do not provide a balanced assessment of seagrass condition across the entire Reef and hence are not suitable to meet the Reef 2050 Plan reporting requirements and many other management information needs. Existing monitoring sites are focused on habitat types that are intertidal and shallow subtidal and lie close to the coast. These habitats were selected because they face high levels of cumulative anthropogenic risk and therefore attract higher management demand for information. The current sites are likely to decline more rapidly, in response to catchment run-off and other anthropogenic pressures, than the average for seagrass meadows across the entire Reef. They also have a greater potential to show improvements from Reef catchment management actions that reduce pollution associated with run-off. This report sets out the framework for a recommended new seagrass monitoring program, highlighting the substantial improvements in knowledge and confidence the new program will deliver, and provides a scope for the statistical design work required to support its implementation.
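
    The twelve habitat types described in the abstract arise directly from a matrix of four water body types by three depth zones. A small sketch enumerating them, with labels taken from the categories named in the abstract:

```python
from itertools import product

# Water body types and depth zones as listed in the report's abstract;
# their cross product yields the twelve habitat types.
water_bodies = ["estuary", "coastal", "reefal", "offshore"]
depths = ["intertidal", "shallow subtidal", "deep water"]

habitats = [f"{body} / {depth}" for body, depth in product(water_bodies, depths)]
print(len(habitats))  # → 12
```

Each cell of this matrix is then a stratum that a spatially balanced sampling design, of the kind the report recommends, would sample from.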

    Impacts of Environmental Factors on Flexible Pavements

    Mechanistic-empirical pavement design methods for flexible pavements are based on the assumption that pavement life is inversely proportional to the magnitude of the traffic-induced pavement strains. These strains vary with the stiffness of the asphalt layer and the underlying base layer and subgrade. Environmental factors, such as the temperature in the asphalt concrete layer and the water content in the base layer and the subgrade, have a significant impact on the stiffness of the relevant layers in pavement systems, and consequently on the estimated life of flexible pavements. A comprehensive instrumentation system was installed at four sites across the state of Tennessee to monitor long-term seasonal changes in flexible pavement response. Thermistors were used to measure the temperature at different depths of the pavement systems. Diurnal temperature variations in the asphalt concrete layer were as large as the annual variation. Multi-segment TDR probes were used to measure the volumetric water content. Because of differences in signal strength along the probe, not all segments provide the same level of accuracy. A series of laboratory tests was performed to study the sources of measurement error and the temperature dependence of the measurements in some segments. Water content measurements were recalibrated according to the findings of this laboratory study, and the measured seasonal variations in subgrade and base water content were small. Using environmental data from the instrumented pavement sites in Tennessee, the effects of asphalt concrete (AC) temperature and of base and subgrade water content variation were evaluated for three pavement profiles using the finite element method. The AC temperature profile was found to have an important effect on the critical strain in the AC layer. Because the relationship between temperature and asphalt concrete stiffness is nonlinear, the additional pavement life consumed at higher-than-average temperatures is not offset by savings at lower-than-average temperatures. As a result, whenever average pavement temperatures are used to determine the asphalt stiffness, pavement life is overestimated. Furthermore, temperature and water content are neither completely dependent nor completely independent. Hence, the combined effects of temperature and water content variations were accounted for in the estimation of pavement life. The results of the parametric study showed that the temperature averaging period and the timing and duration of wet subgrade conditions are critical to the estimated pavement life.
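
    The point about temperature averaging is an instance of Jensen's inequality: when damage grows convexly with temperature, the mean of the damage rates over a temperature cycle exceeds the damage rate at the mean temperature, so averaging temperatures first understates damage and overstates pavement life. A sketch with an invented exponential damage model (the 8%/degree sensitivity is purely illustrative, not from this study):

```python
import math

# Illustrative, uncalibrated damage-rate model: damage per load grows
# exponentially as the asphalt softens with temperature (convex in T).
def damage_rate(temp_c):
    return math.exp(0.08 * (temp_c - 20.0))  # assumed 8%/degC sensitivity

temps = [10, 15, 20, 25, 30, 35]             # hypothetical diurnal sweep (degC)
mean_of_rates = sum(damage_rate(t) for t in temps) / len(temps)
rate_at_mean  = damage_rate(sum(temps) / len(temps))

# Jensen's inequality for a convex function: E[f(T)] > f(E[T]),
# i.e. averaging the temperature first underestimates the damage.
print(mean_of_rates > rate_at_mean)  # → True
```

This is why the choice of temperature averaging period, highlighted in the parametric study above, matters so much for the estimated life.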