
    Design for testability of high-order OTA-C filters

    A study of oscillation-based test for high-order operational transconductance amplifier-C (OTA-C) filters is presented. The method is based on partitioning a high-order filter into second-order filter sections. Two testability design techniques, opening the Q-loop and adding positive feedback, are developed to convert a second-order filter section into a quadrature oscillator. Implementation of the two methods for nth-order cascade, inverse follow-the-leader feedback (IFLF) and leapfrog (LF) filters is presented, and the area overhead of the modified circuits is discussed. The performance of the presented techniques is investigated: fourth-order cascade, IFLF and LF OTA-C filters were designed and simulated to analyse fault coverage using the positive-feedback method based on an analogue multiplexer. Simulation results show that the oscillation-based test method using positive feedback provides high fault coverage of around 97%, 96% and 95% for the cascade, IFLF and LF OTA-C filters, respectively.
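
    As a hedged illustration (the notation is ours, not taken from the paper), the effect of both techniques can be seen in a generic second-order bandpass section with a positive-feedback gain k applied around it; the characteristic equation shows why the section turns into an oscillator:

        % Biquad bandpass section and its characteristic equation under
        % positive feedback of gain k (illustrative sketch).
        H_{BP}(s) = \frac{(\omega_0/Q)\,s}{s^2 + (\omega_0/Q)\,s + \omega_0^2},
        \qquad
        s^2 + \frac{\omega_0}{Q}(1-k)\,s + \omega_0^2 = 0 .

    For k = 1 (or with the Q-damping loop opened so the middle term vanishes), the poles land at s = \pm j\omega_0 and the section oscillates at \omega_0; deviations of the measured oscillation frequency from its nominal value then indicate faults in the section under test.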

    Timing verification of dynamically reconfigurable logic for Xilinx Virtex FPGA series

    This paper reports on a method for extending existing VHDL design and verification software available for the Xilinx Virtex series of FPGAs. It allows the designer to apply standard hardware design and verification tools to the design of dynamically reconfigurable logic (DRL). The technique involves the conversion of a dynamic design into multiple static designs, suitable for input to standard synthesis and automatic place-and-route (APR) tools. For timing and functional verification after APR, the sections of the design can then be recombined into a single dynamic system. The technique has been automated by extending an existing DRL design tool named DCSTech, which is part of the Dynamic Circuit Switching (DCS) CAD framework. The principles behind the tools are generic and should be readily extensible to other architectures and CAD toolsets. Implementation of the dynamic system involves the production of partial configuration bitstreams to load sections of circuitry; the process of creating such bitstreams, the final stage of our design flow, is summarized.
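
    The static-expansion step can be pictured with a small sketch, assuming hypothetical names (expand_to_static, reconfig_regions, recombine); the real DCSTech flow operates on VHDL netlists rather than Python dictionaries:

        # Illustrative sketch: expand a dynamically reconfigurable design into
        # one static design per configuration (each suitable for standard
        # synthesis/APR), then merge per-configuration timing reports back
        # into a single worst-case view of the dynamic system.
        from itertools import product

        def expand_to_static(static_core, reconfig_regions):
            """reconfig_regions maps region name -> list of variant modules."""
            names = list(reconfig_regions)
            for variants in product(*(reconfig_regions[n] for n in names)):
                # One static design = the fixed core plus one variant per region.
                yield dict(static_core, **dict(zip(names, variants)))

        def recombine(static_results):
            """static_results: iterable of (config, {path: delay}) reports."""
            merged = {}
            for config, report in static_results:
                for path, delay in report.items():
                    merged[path] = max(merged.get(path, 0.0), delay)  # worst case
            return merged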

    Proportional sampling strategy: A compendium and some insights

    There have been numerous studies on the effectiveness of partition and random testing. In particular, the proportional sampling (PS) strategy has been proved, under certain conditions, to be the only form of partition testing that outperforms random testing regardless of where the failure-causing inputs are. This paper provides an integrated synthesis and overview of our recent studies on the PS strategy and its related work. Through this synthesis, we offer a perspective that properly interprets the results obtained so far, and present some of the interesting issues involved and new insights obtained during the course of this research.
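
    A minimal sketch of the PS strategy itself, under the usual assumptions (a partitioned input domain of known subdomain sizes, uniform sampling within each subdomain); the subdomain layout in the example is hypothetical:

        # Proportional sampling: allocate the test budget to subdomains in
        # proportion to their sizes, then sample uniformly within each.
        import random

        def proportional_sampling(subdomains, n_tests):
            """subdomains: list of (lo, hi) integer ranges partitioning the domain."""
            sizes = [hi - lo + 1 for lo, hi in subdomains]
            total = sum(sizes)
            tests = []
            for (lo, hi), size in zip(subdomains, sizes):
                k = round(n_tests * size / total)  # proportional allocation
                tests += [random.randint(lo, hi) for _ in range(k)]
            return tests  # rounding may shift the total by a test or two

        # A domain split 60%/30%/10% receives roughly 60/30/10 of 100 tests.
        print(len(proportional_sampling([(0, 599), (600, 899), (900, 999)], 100)))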

    FORTEST: Formal methods and testing

    Formal methods have traditionally been used for specification and development of software. However, there are potential benefits for the testing stage as well. The panel session associated with this paper explores the usefulness, or otherwise, of formal methods in various contexts for improving software testing. A number of different possibilities for the use of formal methods are explored and questions raised. The contributors are all members of the UK FORTEST Network on formal methods and testing. Although the authors generally believe that formal methods are useful in aiding the testing process, this paper is intended to provoke discussion; dissenters are encouraged to put their views to the panel or individually to the authors.

    A Functional Verification Methodology for an Improved Coverage of System-on-Chips

    The increasing popularity of System-on-Chip (SoC) circuits results in many new design challenges. One major challenge is ensuring the functional correctness of such complicated circuits. Functional verification is the technique used to verify the functional correctness of SoCs. Coverage Directed Test Generation (CDTG) is an essential part of functional verification, where the objective is to generate input stimuli that maximize the coverage of a design. Coverage helps to determine how well the input stimuli exercised the design under verification; CDTG techniques analyze coverage results and adapt the input stimulus generation process to improve coverage. One of the important components of CDTG-based tools is the constraint solver, and the time efficiency of the verification process depends on the efficiency of the solver. However, the constraint solvers associated with CDTG tools require large amounts of memory and time to generate input stimuli for SoCs, and they cannot generate solutions that are evenly distributed across the search space, which is needed to attain the required coverage. The aim of this thesis is to provide a practical framework that enables the generation of evenly distributed input stimuli. A basic feature of the search space (data set) is that it contains k subpopulations, or clusters. Partitioning the search space into clusters and generating solutions from the partitions can improve the evenness of the solutions generated by the solver. Hence one of our main contributions is a novel domain partitioning algorithm. The algorithm relies on solutions generated by a consistency search algorithm developed for this purpose, and the number of partitions it requires is determined by an algorithm that finds the optimal number of clusters present in a data set. To demonstrate the effectiveness of our approach, we apply the methodology to Constraint Satisfaction Problems (CSPs) and some real-life applications.
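
    The flavor of the approach can be sketched as follows; this substitutes off-the-shelf k-means for the thesis's own domain partitioning and consistency search algorithms, and the function names are ours:

        # Sketch: cluster previously generated solutions to approximate the k
        # subpopulations of the search space, then draw stimuli round-robin
        # across clusters so the result is evenly distributed.
        import random
        from sklearn.cluster import KMeans

        def evenly_distributed_stimuli(seed_solutions, k, n_stimuli):
            """seed_solutions: numeric solution vectors (len >= k) from a solver."""
            km = KMeans(n_clusters=k, n_init=10).fit(seed_solutions)
            buckets = {i: [] for i in range(k)}
            for sol, label in zip(seed_solutions, km.labels_):
                buckets[label].append(sol)
            return [random.choice(buckets[i % k] or seed_solutions)  # skip empties
                    for i in range(n_stimuli)]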

    Validation and optimization of analog circuits using randomized search algorithms

    Analog circuits represent a large percentage of the chips used in mobile computing, communication devices, electric vehicles, and portable medical equipment today. Rapid scaling and shrinking chip geometries introduce challenging new problems in the verification, validation, and optimization of analog circuits, including test generation and compression, runtime monitoring, and analysis of worst-case behaviors. State-of-the-art Monte Carlo techniques are unable to address these problems effectively. Consequently, an efficient and scalable CAD algorithm for such problems is highly desirable.

    In this thesis, we introduce Duplex, a methodology for search and optimization that supports optimizing nonconvex nonlinear functions and functionals. Duplex uses random tree data structures and is based on partitioning the problem space into multiple smaller spaces, such as the input, state, and function spaces; it simultaneously controls, biases, and monitors the growth of the random trees in the partitioned spaces. We have used the Duplex framework to solve practical problems in analog and mixed-signal validation such as directed input stimulus generation, compression of analog stress tests, worst-case eye-diagram analysis, performance optimization, machine learning, and monitoring of the runtime behaviors of analog circuits. Duplex automatically generates input stimuli that expose bugs and improve coverage, automatically finds input corners that result in worst-case eye diagrams, and simultaneously explores the parameter and performance spaces of analog circuits to optimize a circuit for best performance. We monitored the random trees and circuit execution against specification properties described in formal languages, and we formulated many challenging analog-circuit problems, such as test compression and eye-diagram analysis, as functional optimization problems that Duplex then solves.

    We propose the Duplex algorithm as a general optimization algorithm in order to posit the framework to other domains: it can address nonlinear and functional optimization problems in continuous and discrete spaces, such as design-space exploration and supervised and unsupervised machine learning. The advantages of the Duplex framework are efficiency, scalability, and versatility. We consistently show orders-of-magnitude speedups over the state of the art while objectively improving the quality of results. For generating input stimuli, Duplex is the first technique that simultaneously performs directed input stimulus generation and increases test coverage; we show over two orders of magnitude speedup over Monte Carlo simulation. For runtime monitoring, we check a large, scalable circuit against a very expressive set of formal properties that were not possible to monitor before. For generating worst-case eye diagrams, we show at least 20× speedup and better quality of results than the state of the art. Duplex is the first work to provide transient test compression for analog circuits, compressing stress tests by up to 96%. We optimize analog circuits using Duplex and show speedups and improved results with respect to the state of the art, and we use Duplex to train supervised and unsupervised models with improved accuracy in all cases.
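
    As a very rough sketch of the random-tree idea (the real framework grows coordinated trees in partitioned input/state/function spaces; this toy grows a single tree in a one-dimensional input space):

        # Toy random-tree optimizer: grow a tree toward random samples and
        # keep the lowest-cost node, RRT-style.
        import random

        def tree_optimize(cost, lo, hi, iters=2000, step=0.05):
            tree = [random.uniform(lo, hi)]      # root of the random tree
            best = min(tree, key=cost)
            for _ in range(iters):
                target = random.uniform(lo, hi)  # random sample biases growth
                nearest = min(tree, key=lambda x: abs(x - target))
                new = nearest + step * (1 if target > nearest else -1)
                tree.append(new)                 # extend the tree toward the sample
                if cost(new) < cost(best):
                    best = new
            return best

        # Example: minimize a nonconvex 1-D cost.
        print(tree_optimize(lambda x: (x - 2) ** 2 + 0.5 * (abs(x) % 1), -5, 5))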

    å¤§č¦ęØ”ć‚·ć‚¹ćƒ†ćƒ LSIčØ­čØˆć®ćŸć‚ć®ēµ±äø€ēš„ćƒćƒ¼ćƒ‰ć‚¦ć‚§ć‚¢ćƒ»ć‚½ćƒ•ćƒˆć‚¦ć‚§ć‚¢å”čŖæꤜčØ¼ę‰‹ę³•

    Currently, the complexity of embedded LSI systems is growing faster than the productivity of system design. This trend results in a design productivity gap, particularly under tight development schedules. Since verification takes an ever larger share of the development effort, it has become a major challenge in LSI system design. In order to guarantee system reliability and quality of results (QoR), verifying large coverage of system functionality requires a huge number of relevant test cases and various evaluation scenarios. To overcome these problems, verification methodology is evolving toward higher levels of design abstraction through HW/SW co-verification. In this study, we present a novel approach for verifying LSI circuits, called a unified HW/SW co-verification framework. The study aims to improve design efficiency while maintaining implementation consistency from the point of view of system-level performance. The proposed data-driven simulation and flexible interface between HW and SW design form the backbone of the verification framework. In order to avoid time-consuming, error-prone, and iterative design spin-offs in a large team, the framework supports multiple design abstractions, closing the loop of design, exploration, optimization, and testing. Furthermore, the proposed methodology can co-operate with system-level simulation at high abstraction levels, is easy to extend to various applications, and enables fast-turnaround design modification. These contributions are discussed in Chapter 3. To show the effectiveness and use cases of the proposed verification framework, evaluation and metrics assessments of a Very High Throughput wireless LAN system design are carried out. Two application examples are provided. The first, in Chapter 4, targets fast verification and design exploration of a large circuit, with a Maximum Likelihood Detection (MLD) MIMO decoder as the Design Under Test (DUT). The second, presented in Chapter 5, evaluates system-level simulation, with a full transceiver based on the IEEE 802.11ac standard as the DUT. Experimental results show that the proposed verification approach yields significant improvements in verification time (up to 10,000 times) over the conventional scheme, and that the framework supports various schemes of system-level and cross-layer evaluation of wireless systems.

    Doctoral dissertation, Kyushu Institute of Technology, conferred 30 June 2017 (degree no. 情工博甲第328号). Contents: 1 Introduction | 2 Design and Verification in LSI System Design | 3 Unified HW/SW Co-verification Methodology | 4 Fast Co-verification and Design Exploration in Complex Circuits | 5 Unified System Level Simulator for Very High Throughput Wireless Systems | 6 Conclusion and Future Work.
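
    One way to picture the "unified interface, multiple abstractions" idea is the toy harness below; everything in it (class names, the slicer-style decoder) is our assumption, and a real flow would hook the RTL side to an HDL co-simulator:

        # Two models of the same DUT at different abstraction levels expose one
        # interface, so a single data-driven testbench drives both and compares.
        class MldDecoderBehavioral:              # algorithmic (high-level) model
            def process(self, symbols):
                return [min(range(4), key=lambda c: abs(s - c)) for s in symbols]

        class MldDecoderRtlStub:                 # stand-in for an RTL co-sim hook
            def process(self, symbols):
                return [min(range(4), key=lambda c: abs(s - c)) for s in symbols]

        def cosim(stimuli, models):
            golden = models[0].process(stimuli)  # first model is the reference
            return all(m.process(stimuli) == golden for m in models[1:])

        print(cosim([0.2, 1.9, 3.1], [MldDecoderBehavioral(), MldDecoderRtlStub()]))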

    Hybridizing and applying computational intelligence techniques

    As computers are increasingly relied upon to perform tasks of increasing complexity affecting many aspects of society, it is imperative that the underlying computational methods have high performance in terms of effectiveness and scalability. A common approach to such complex tasks is computational intelligence (CI) techniques, which use nature-inspired methods to solve problems where traditional modeling approaches fail due to impracticality, intractability, or mathematical ill-posedness. While CI techniques can perform considerably better than traditional modeling approaches when solving complex problems, the scalability of a given CI technique alone is not always optimal. Hybridization is a popular process by which a better-performing CI technique is created from the logical combination of multiple existing techniques. In the first paper in this thesis, a novel hybridization of two CI techniques, accuracy-based learning classifier systems (XCS) and cluster analysis, is presented that improves upon the efficiency and, in some cases, the effectiveness of XCS. A number of tasks in software engineering are still performed manually, such as defining expected output in model transformation testing. Especially as the number and size of projects relying on such manual tasks grow, it is critical that automated approaches reduce or eliminate the manual effort so that these tasks scale efficiently. The second paper in this thesis details a novel application of a CI technique, multi-objective simulated annealing, to test case model generation, reducing the effort required to manually update expected transformation output.
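
    For concreteness, here is a hedged sketch of multi-objective simulated annealing using a weighted-sum scalarization (a simplification; Pareto-based acceptance is also common, and the objectives below are hypothetical):

        # Scalarized multi-objective simulated annealing with geometric cooling.
        import math, random

        def mosa(init, neighbor, objectives, iters=5000, t0=1.0, cooling=0.999):
            score = lambda x: sum(f(x) for f in objectives)  # weighted sum (w=1)
            cur, best, temp = init, init, t0
            for _ in range(iters):
                cand = neighbor(cur)
                delta = score(cand) - score(cur)
                if delta < 0 or random.random() < math.exp(-delta / temp):
                    cur = cand                   # accept improving or uphill move
                    if score(cur) < score(best):
                        best = cur
                temp *= cooling                  # cool the temperature
            return best

        # Toy use: balance two competing objectives over one real parameter.
        print(mosa(0.0, lambda x: x + random.uniform(-1, 1),
                   [lambda x: (x - 3) ** 2, lambda x: abs(x)]))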