
    Communication synthesis of networks-on-chip (NoC)

    The emergence of networks-on-chip (NoC) as the communication infrastructure for complex multi-core SoCs presents communication synthesis challenges. This dissertation addresses the design and run-time management aspects of communication synthesis. Design reuse, and the infeasibility of redesigning Intellectual Property (IP) core interfaces, require the development of a Core-Network Interface (CNI) that allows cores to communicate over the on-chip network. Because the NoC components themselves lack intelligence, the CNI must provide not only basic packetization and depacketization but also other essential services such as reliability, power management, reconfiguration, and test support. A generic CNI architecture providing these services for NoCs is proposed and evaluated in this dissertation. Rising on-chip communication power costs, and the reliability concerns they raise, motivate the development of a peak power management technique that is both scalable to different NoCs and adaptable to varying traffic configurations. A scalable and adaptable peak power management technique, SAPP, is proposed and demonstrated. Latency and throughput improvements observed with SAPP demonstrate its superiority over existing techniques. Increasing design complexity makes prediction of design lifetimes difficult. After SoC deployment, an on-line health monitoring scheme is essential to maintain confidence in the correct operation of on-chip cores. Rising design complexity and IP core test costs make non-concurrent testing of the IP cores infeasible. An on-line scheme capable of managing IP core tests in the presence of executing applications is essential; such a scheme ensures that application performance and system power budgets are efficiently managed. This dissertation proposes Concurrent On-Line Test (COLT) for NoC-based systems and demonstrates how a robust implementation of COLT, using a Test Infrastructure-IP (TI-IP), can be used to maintain confidence in the correct operation of the SoC.
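The basic packetization/depacketization service described above can be illustrated with a minimal sketch. All names, flit types, and field widths here are invented for illustration; they are not the CNI design from the dissertation.

```python
# Minimal sketch of a CNI's packetization service: split a core's message
# into fixed-size flits with a header flit, and reassemble on the far side.
# Flit-type encoding and header fields are hypothetical.

HEADER = 0
BODY = 1
TAIL = 2

def packetize(src, dst, payload, flit_bytes=4):
    """Split `payload` (bytes) into a list of (type, data) flits.

    The header flit carries routing info (src, dst, length);
    body and tail flits carry the data chunks.
    """
    flits = [(HEADER, (src, dst, len(payload)))]
    chunks = [payload[i:i + flit_bytes]
              for i in range(0, len(payload), flit_bytes)]
    for i, chunk in enumerate(chunks):
        kind = TAIL if i == len(chunks) - 1 else BODY
        flits.append((kind, chunk))
    return flits

def depacketize(flits):
    """Reassemble the payload and check the length recorded in the header."""
    kind, (src, dst, length) = flits[0]
    assert kind == HEADER
    payload = b"".join(data for k, data in flits[1:])
    assert len(payload) == length, "corrupt packet"
    return src, dst, payload
```

A real CNI would add the services the abstract lists on top of this path (error detection for reliability, credit or power throttling hooks, test-access modes).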

    Test exploration and validation using transaction level models

    The complexity of the test infrastructure and test strategies in systems-on-chip approaches the complexity of the functional design space. This paper presents test design space exploration and validation of test strategies and schedules using transaction level models (TLMs). Since many aspects of testing involve the transfer of a significant amount of test stimuli and responses, the communication-centric view of TLMs suits this purpose exceptionally well.
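The communication-centric view can be sketched as follows: each core test is abstracted into a single transaction carrying its stimulus and response volumes, so alternative test schedules can be compared without gate-level detail. The model below is a made-up toy, not the paper's TLM.

```python
# Toy transaction-level view of test data transport: a test is one
# transaction with stimulus/response volumes, and a shared test-access
# bus model accumulates transfer cycles so schedules can be compared.

from collections import namedtuple

TestTransaction = namedtuple("TestTransaction",
                             "core stimuli_bits response_bits")

def schedule_length(transactions, bus_bits_per_cycle=32):
    """Total cycles if all tests share one test-access bus sequentially."""
    cycles = 0
    for t in transactions:
        cycles += (t.stimuli_bits + t.response_bits) // bus_bits_per_cycle
    return cycles
```

Even at this coarse granularity one can rank schedules by bus occupancy, which is the kind of early exploration a transaction-level model enables.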

    Efficient Simulation of Structural Faults for the Reliability Evaluation at System-Level

    In recent technology nodes, reliability is considered a part of the standard design flow at all levels of embedded system design. While techniques that use only low-level models at gate- and register transfer-level offer high accuracy, they are too inefficient to consider the overall application of the embedded system. Multi-level models with high abstraction are essential to efficiently evaluate the impact of physical defects on the system. This paper provides a methodology that leverages state-of-the-art techniques for efficient fault simulation of structural faults together with transaction-level modeling. This way it is possible to accurately evaluate the impact of the faults on the entire hardware/software system. A case study of a system consisting of hardware and software for image compression and data encryption is presented, and the method is compared to a standard gate/RT mixed-level approach.
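The core idea of mixed-level fault evaluation can be shown in miniature: model one small block at gate level with an injectable stuck-at fault, run the rest behaviorally, and observe whether the structural fault is visible at system level. The netlist and fault below are illustrative, not from the paper's case study.

```python
# One full-adder bit is modeled at gate level with an optional stuck-at
# fault on an internal net; an 8-bit ripple-carry add built from it stands
# in for the "system". Whether the fault propagates to the system output
# depends on the data processed, which is exactly what multi-level fault
# simulation evaluates.

def full_adder(a, b, cin, stuck=None):
    # stuck = (net_name, value) forces one internal net, e.g. ("s1", 0)
    nets = {}
    nets["s1"] = a ^ b   # first half-adder sum
    nets["c1"] = a & b   # first half-adder carry
    if stuck and stuck[0] in nets:
        nets[stuck[0]] = stuck[1]
    s = nets["s1"] ^ cin
    cout = nets["c1"] | (nets["s1"] & cin)
    return s, cout

def add8(x, y, stuck=None):
    carry, out = 0, 0
    for i in range(8):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry, stuck)
        out |= s << i
    return out
```

With operands 3 and 5 the stuck-at-0 fault on `s1` corrupts the sum, while with operands 0 and 0 it is masked; a system-level run over the real application data decides whether the defect matters.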

    On-Line Instruction-checking in Pipelined Microprocessors

    Microprocessor performance has increased by more than five orders of magnitude over the last three decades. As technology scales down, these components become inherently unreliable, posing major design and test challenges. This paper proposes an instruction-checking architecture to detect erroneous instruction executions caused by both permanent and transient errors in the internal logic of a microprocessor. Monitoring the correct activation sequence of a set of predefined microprocessor control/status signals allows distinguishing between correctly and incorrectly executed instructions.
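The monitoring principle can be sketched simply: each instruction class has a known legal activation sequence of control signals, and the checker flags any execution whose observed trace deviates. The signal names and sequences below are invented for illustration; the paper's actual signal set is microprocessor-specific.

```python
# Sketch of sequence-based instruction checking: compare the observed
# control-signal trace of an executed instruction against the predefined
# legal sequence for its instruction class. Names are hypothetical.

EXPECTED = {
    "load":  ["fetch", "decode", "mem_read", "writeback"],
    "store": ["fetch", "decode", "mem_write"],
    "alu":   ["fetch", "decode", "execute", "writeback"],
}

def check_instruction(opcode, observed_trace):
    """Return True if the observed trace matches the legal sequence;
    a permanent or transient error that disturbs the control path
    shows up as a missing, extra, or reordered activation."""
    return EXPECTED.get(opcode) == list(observed_trace)
```

A hardware checker would implement each sequence as a small finite-state machine watching the signals concurrently with execution, rather than comparing recorded traces after the fact.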

    Network-aware design-space exploration of a power-efficient embedded application

    The paper presents the design and multi-parameter optimization of a networked embedded application for the health-care domain. Several hardware, software, and application parameters, such as clock frequency, sensor sampling rate, and data packet rate, are tuned at design time and run time according to application specifications and operating conditions, to optimize hardware requirements, packet loss, and power consumption. Experimental results show that further power efficiency can be achieved by also considering communication aspects during design space exploration.
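The tuning loop described above amounts to searching a small design space for the lowest-power point that still meets a quality constraint. The sketch below uses made-up placeholder models for power and packet loss; it shows only the shape of the exploration, not the paper's models or parameter values.

```python
# Hedged sketch of multi-parameter design-space exploration: sweep two
# knobs (clock frequency, sensor sampling rate) and keep the lowest-power
# configuration whose packet loss stays under a bound. Both models below
# are toy placeholders.

def explore(freqs_mhz, sample_rates_hz, max_loss=0.025):
    best = None
    for f in freqs_mhz:
        for r in sample_rates_hz:
            power_mw = 0.01 * f + 0.005 * r       # toy linear power model
            loss = max(0.0, 0.05 - f / 2000.0)    # toy: faster clock, fewer drops
            if loss <= max_loss:
                if best is None or power_mw < best[0]:
                    best = (power_mw, f, r, loss)
    return best  # (power, freq, rate, loss) or None if nothing is feasible
```

The network-aware point of the paper is that `loss` depends on communication parameters as well, so optimizing power in isolation misses configurations like the one this joint sweep finds.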

    DeSyRe: on-Demand System Reliability

    The DeSyRe project builds on-demand adaptive and reliable Systems-on-Chip (SoCs). As fabrication technology scales down, chips are becoming less reliable, thereby incurring increased power and performance costs for fault tolerance. To make matters worse, power density is becoming a significant limiting factor in SoC design in general. In the face of such changes in the technological landscape, current solutions for fault tolerance are expected to introduce excessive overheads in future systems. Moreover, attempting to design and manufacture a totally defect- and fault-free system would impact heavily, even prohibitively, the design, manufacturing, and testing costs, as well as the system performance and power consumption. In this context, DeSyRe delivers a new generation of systems that are reliable by design at well-balanced power, performance, and design costs. To reduce the overheads of fault tolerance, only a small fraction of the chip is built to be fault-free. This fault-free part is then employed to manage the remaining fault-prone resources of the SoC. The DeSyRe framework is applied to two medical systems with high safety requirements (measured using the IEC 61508 functional safety standard) and tight power and performance constraints.

    The psychometric properties of a shortened Dutch version of the consequences scale used in the core alcohol and drug survey

    Background: Alcohol and drug misuse among college students has been studied extensively and has been clearly identified as a public health problem. Within more general populations, alcohol misuse remains one of the leading causes of disease, disability and death worldwide. Conducting research on alcohol misuse requires valid and reliable instruments to measure its consequences. One scale that is often used is the consequences scale in the Core Alcohol and Drug Survey (CADS). However, psychometric studies on the CADS are rare, and the ones that do exist report varying results. This article aims to address this imbalance by examining the psychometric properties of a Dutch version of the CADS in a large sample of Flemish university and college students.

    Methods: The analyses are based on data collected by the inter-university project 'Head in the clouds', measuring alcohol use among students. In total, 19,253 students participated (22.1% response rate). The CADS scale was measured using 19 consequences, and participants were asked how often they had experienced these on a 6-point scale. Firstly, the factor structure of the CADS was examined. Two models from the literature were compared by performing confirmatory factor analyses (CFA) and were adapted if necessary. Secondly, we assessed the composite reliability as well as the convergent, discriminant and concurrent validity.

    Results: The two-factor model, identifying personal consequences (had a hangover; got nauseated or vomited; missed a class) and social consequences (got into an argument or fight; been criticized by someone I know; done something I later regretted; been hurt or injured), was indicated to be the best model, having both a good model fit and an acceptable composite reliability. In addition, construct validity was evaluated to be acceptable, with good discriminant validity, although the convergent validity of the factor measuring 'social consequences' could be improved. Concurrent validity was evaluated as good.

    Conclusions: In deciding which model best represents the data, it is crucial that not only the model fit is evaluated, but the importance of factor reliability and validity issues is also taken into account. The two-factor model, identifying personal consequences and social consequences, was concluded to be the best model. This shortened Dutch version of the CADS (CADS_D) is a useful tool to screen alcohol-related consequences among college students.
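The composite reliability the abstract refers to is typically computed from standardized CFA factor loadings (Raykov's rho). The loadings in the example below are arbitrary illustrative values, not results from this study.

```python
# Composite reliability from standardized factor loadings:
#   rho = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))
# where 1 - lambda^2 is the error variance of each standardized indicator.

def composite_reliability(loadings):
    """Raykov's rho for one factor, given standardized loadings."""
    s = sum(loadings)
    error_var = sum(1 - l * l for l in loadings)
    return (s * s) / (s * s + error_var)
```

For three indicators loading at 0.7 this gives roughly 0.74, just above the conventional 0.7 acceptability threshold, which illustrates why a factor with a few modest loadings can still show "acceptable" composite reliability.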
