16 research outputs found

    Indirect test of M-S circuits using multiple specification band guarding

    Testing analog and mixed-signal circuits is a costly task due to the required test-time targets and high-end technical resources. Indirect testing methods partially address these issues, providing an efficient solution that uses easy-to-measure CUT information correlating with circuit performances. In this work, a multiple specification band guarding technique is proposed as a method to meet a target rate of misclassified circuits. The acceptance/rejection test regions are encoded using octrees in the measurement space, where the band guarding factors precisely tune the test decision boundary according to the required test yield targets. The generated octree data structure serves to cluster the forthcoming circuits in the production testing phase by relying solely on indirect measurements. The combined use of octree-based encoding and multiple specification band guarding makes the testing procedure fast, efficient, and highly tunable. The proposed band guarding methodology has been applied to test a band-pass Butterworth filter under parametric variations. Promising simulation results are reported, showing remarkable improvements when the multiple specification band guarding criterion is used.
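
    As a rough illustration of the idea (a hypothetical sketch, not the authors' implementation), the snippet below encodes pass/fail regions of a 3-D indirect-measurement space in an octree and classifies a new device by descending the tree with its indirect measurements only; the `guard` threshold stands in for a band-guarding factor, and the class names and dimensionality are assumptions.

```python
import numpy as np

class OctreeNode:
    def __init__(self, center, half, depth=0):
        self.center = np.asarray(center, dtype=float)
        self.half = float(half)      # half edge length of this cubic cell
        self.depth = depth
        self.label = None            # 'pass' or 'fail' on leaves, None inside
        self.children = []

    def build(self, pts, labels, guard=0.0, max_depth=6):
        pts, labels = np.asarray(pts, dtype=float), np.asarray(labels)
        fail_frac = float(np.mean(labels == 'fail')) if len(labels) else 1.0
        if len(labels) == 0 or self.depth == max_depth or fail_frac in (0.0, 1.0):
            # Band guarding (illustrative): accept a leaf only if its failing
            # fraction is at or below `guard`; lowering `guard` tightens the
            # acceptance boundary at the cost of some yield loss.
            self.label = 'pass' if fail_frac <= guard else 'fail'
            return
        for oct_idx in range(8):
            sign = np.array([1 if oct_idx & b else -1 for b in (4, 2, 1)])
            child = OctreeNode(self.center + 0.5 * self.half * sign,
                               0.5 * self.half, self.depth + 1)
            inside = np.all(np.abs(pts - child.center) <= 0.5 * self.half, axis=1)
            child.build(pts[inside], labels[inside], guard, max_depth)
            self.children.append(child)

    def classify(self, m):
        # Production test phase: descend using the indirect measurements only.
        if self.label is not None:
            return self.label
        m = np.asarray(m, dtype=float)
        idx = 4 * (m[0] > self.center[0]) + 2 * (m[1] > self.center[1]) \
              + (m[2] > self.center[2])
        return self.children[int(idx)].classify(m)
```

    A tree like this would be built once from simulated or characterised devices and then reused for every incoming part, so the per-device cost reduces to a handful of comparisons down to a leaf.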

    Feature selection and feature design for machine learning indirect test: a tutorial review

    International audience

    GPU data structures for graphics and vision

    Graphics hardware has in recent years become increasingly programmable, and its programming APIs use the stream processor model to expose massive parallelization to the programmer. Unfortunately, the inherent restrictions of the stream processor model, used by the GPU in order to maintain high performance, often pose a problem in porting CPU algorithms for both video and volume processing to graphics hardware. Serial data dependencies which accelerate CPU processing are counterproductive for the data-parallel GPU. This thesis demonstrates new ways of tackling well-known problems of large-scale video/volume analysis. In some instances, we enable processing on the restricted hardware model by re-introducing algorithms from early computer graphics research. On other occasions, we use newly discovered, hierarchical data structures to circumvent the random-access-read/fixed-write restriction that had previously kept sophisticated analysis algorithms from running solely on graphics hardware. For 3D processing, we apply known game graphics concepts such as mip-maps, projective texturing, and dependent texture lookups to show how video/volume processing can benefit algorithmically from being implemented in a graphics API. The novel GPU data structures provide drastically increased processing speed and lift processing-heavy operations to real-time performance levels, paving the way for new and interactive vision/graphics applications.
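
    As a loose illustration of the mip-map concept mentioned above (a CPU-side sketch, not code from the thesis), the pyramid below is built by averaging 2x2 blocks; on a GPU each level corresponds to one data-parallel pass, which is why such hierarchies avoid the serial data dependencies the abstract refers to.

```python
import numpy as np

def build_mipmap_pyramid(image):
    """Return a list of levels, each half the resolution of the previous one."""
    levels = [np.asarray(image, dtype=float)]
    while min(levels[-1].shape[:2]) > 1:
        prev = levels[-1]
        h, w = (prev.shape[0] // 2) * 2, (prev.shape[1] // 2) * 2  # drop odd row/col
        cropped = prev[:h, :w]
        # Average each 2x2 block; every output texel depends only on its own
        # block, so all texels of a level can be computed in parallel.
        down = 0.25 * (cropped[0::2, 0::2] + cropped[1::2, 0::2]
                       + cropped[0::2, 1::2] + cropped[1::2, 1::2])
        levels.append(down)
    return levels
```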

    Visibility-Based Optimizations for Image Synthesis

    Department of Computer Graphics and Interaction

    Proceedings, MSVSCC 2017

    Proceedings of the 11th Annual Modeling, Simulation & Visualization Student Capstone Conference held on April 20, 2017 at VMASC in Suffolk, Virginia. 211 pp.

    Ray Tracing Gems

    This book is a must-have for anyone serious about rendering in real time. With the announcement of new ray tracing APIs and hardware to support them, developers can easily create real-time applications with ray tracing as a core component. As ray tracing on the GPU becomes faster, it will play a more central role in real-time rendering. Ray Tracing Gems provides key building blocks for developers of games, architectural applications, visualizations, and more. Experts in rendering share their knowledge by explaining everything from nitty-gritty techniques that will improve any ray tracer to mastery of the new capabilities of current and future hardware.
    What you'll learn: the latest ray tracing techniques for developing real-time applications in multiple domains; guidance, advice, and best practices for rendering applications with Microsoft DirectX Raytracing (DXR); and how to implement high-performance graphics for interactive visualizations, games, simulations, and more.
    Who this book is for: developers who are looking to leverage the latest APIs and GPU technology for real-time rendering and ray tracing; students looking to learn about best practices in these areas; and enthusiasts who want to understand and experiment with their new GPU.

    Efficient production binning using octree tessellation in the alternate measurements space

    Binning after volume production is a widely accepted technique to classify fabricated ICs into different clusters depending on their degree of specification compliance. This allows the manufacturer to sell non-optimal devices at lower prices, thus adapting to customers' quality-price requirements. The binning procedure can be carried out by measuring the performances of every single circuit, but this approach is costly and time consuming. On the contrary, if alternate measurements are used to characterize the bins, the procedure is considerably enhanced. In such a case, the specification bin boundaries become arbitrarily shaped regions due to the highly non-linear mappings between the specifications space and the alternate measurements space. The binning strategy proposed in this work functions with the same efficiency regardless of these shapes. The digital encoding of the bins in the alternate measurements space using octrees is the key idea of the proposal. The strategy has two phases: the training phase and the binning phase. In the training phase, the specification bins are encoded using octrees. This first phase requires sufficient samples of each class to generate the octree under realistic variations, but it only needs to be performed once. The binning phase corresponds to the actual production binning of the fabricated ICs, achieved by evaluating the alternate measurements in the previously generated octree; it is fast due to the inherent sparsity of the octree data structure. In order to illustrate the proposal, the method has been applied to a band-pass Butterworth filter considering three specification bins as a proof of concept. Successful simulation results are reported showing considerable advantages compared to an SVM-based classifier: similar bin misclassification rates are obtained with both methods, 1.68% using octrees and 1.83% using SVM, while binning is 5x faster using octrees than using the SVM-based classifier.
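
    A minimal sketch of the two-phase flow, under stated assumptions (the bin names, the fixed depth, and the flat fixed-depth quantisation standing in for the hierarchical octree are all illustrative; this is not the paper's code):

```python
import numpy as np
from collections import Counter, defaultdict

DEPTH = 6                     # quantisation depth -> 2**DEPTH cells per axis

def cell_of(m, lo, hi):
    """Integer cell coordinates of alternate-measurement vector m."""
    frac = (np.asarray(m, dtype=float) - lo) / (hi - lo)
    return tuple(np.clip((frac * 2**DEPTH).astype(int), 0, 2**DEPTH - 1))

def train(measurements, bins, lo, hi):
    """Training phase: label every populated cell with its majority bin."""
    samples = defaultdict(list)
    for m, b in zip(measurements, bins):
        samples[cell_of(m, lo, hi)].append(b)
    return {cell: Counter(bs).most_common(1)[0][0] for cell, bs in samples.items()}

def assign_bin(encoding, m, lo, hi, default='reject'):
    """Binning phase: only the alternate measurement m of the new IC is needed."""
    return encoding.get(cell_of(m, lo, hi), default)
```

    Because only populated cells are stored, the look-up cost in the binning phase is independent of how irregular the bin boundaries are in the alternate measurements space.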