28 research outputs found

    A novel reseeding mechanism for pseudo-random testing of VLSI circuits

    During built-in self-test (BIST), the set of patterns generated by a pseudo-random pattern generator may not provide sufficiently high fault coverage, and many patterns detect no faults (useless patterns). To reduce test time, we can remove useless patterns or change them into useful ones (fault dropping). In this paper, we reseed and modify the pseudo-random patterns, and use an additional bit counter, to improve test length and achieve high fault coverage. Since a random test set contains useless patterns, we present a technique combining reseeding and bit modification to remove useless patterns or change them into useful ones; when patterns are changed, we pick the values that differ in fewer bits, leading to a very short test length. The technique we present applies to single stuck-at faults. The seeds we use are deterministic, so 100% fault coverage can be achieved. (International conference, 2005-05-23 to 2005-05-26, Kobe, Japan; print proceedings.)
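    To make the mechanism concrete, here is a minimal Python sketch of an LFSR-based pseudo-random pattern generator that is reseeded from a list of deterministic seeds; the register width, tap positions, seed values, and run length per seed are illustrative assumptions, not values from the paper.

```python
# Sketch: LFSR pattern generation with reseeding (illustrative values).
def lfsr_patterns(seed, taps=(7, 5, 4, 3), width=8):
    """Yield successive LFSR states as pseudo-random test patterns."""
    state = seed
    while True:
        yield state
        # Feedback bit is the XOR of the tapped state bits.
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & ((1 << width) - 1)

def generate_test_set(seeds, patterns_per_seed=16):
    """Reseed the generator with each deterministic seed, keeping only a
    short run per seed so long stretches of useless patterns are skipped."""
    test_set = []
    for seed in seeds:
        gen = lfsr_patterns(seed)
        test_set.extend(next(gen) for _ in range(patterns_per_seed))
    return test_set

patterns = generate_test_set(seeds=[0x5A, 0xC3, 0x1E])
print([f"{p:08b}" for p in patterns[:4]])
```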

    Efficient simultaneous encryption and compression of digital videos in computationally constrained applications

    This thesis is concerned with secure video transmission over open and wireless network channels. This would facilitate adequate interaction in computationally constrained applications among trusted entities, such as in disaster/conflict zones, secure airborne transmission of videos for intelligence/security or surveillance purposes, and secure video communication for law-enforcement agencies in crime fighting or in proactive forensics. Video content is generally too large and too vulnerable to eavesdropping when transmitted over open network channels, so compression and encryption become essential for storage and/or transmission. In terms of security, wireless channels are more vulnerable than other kinds of media to a variety of attacks and eavesdropping. Since wireless communication is the main mode in the above applications, protecting video transmissions from unauthorized access through such network channels is a must. The main and multi-faceted challenges in implementing such a task are related to competing, and to some extent conflicting, requirements among a number of standard control factors: the constrained bandwidth, reasonably high image quality at the receiving end, the execution time, and robustness against security attacks. Applying both compression and encryption simultaneously is a very tough challenge because the compression ratio, time complexity, security, and quality must all be optimized at once. Different image/video compression schemes are available that provide reasonable compression while attempting to maintain image quality, such as JPEG, MPEG and JPEG2000. The main approach to video compression is based on detecting and removing spatial correlation within the video frames as well as temporal correlations across the video frames. Temporal correlations are expected to be more evident across sequences of frames captured within a short period of time (often a fraction of a second). Correlation can be measured in terms of similarity between blocks of pixels. Frequency-domain transforms such as the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) have both been used to restructure the frequency content (coefficients) to become amenable to efficient detection. JPEG and MPEG use the DCT while JPEG2000 uses the DWT. Removing spatial/temporal correlation encodes only one block from each class of equivalent (i.e. similar) blocks and remembers the position of all other blocks within the equivalence class. JPEG2000-compressed images achieve higher image quality than JPEG for the same compression ratios, while DCT-based coding suffers from noticeable distortion at high compression ratios; however, when the DCT is applied to any block it is easy to isolate the significant coefficients from the non-significant ones. Efficient video encryption in computationally constrained applications is another challenge in its own right. It has long been recognised that selective encryption is the only viable approach to deal with the overwhelming file size. Selection can be made in the spatial or frequency domain. The efficiency of simultaneous compression and encryption is a good reason to apply selective encryption in the frequency domain. In this thesis we develop a hybrid of the DWT and DCT for improved image/video compression in terms of image quality, compression ratio, bandwidth, and efficiency.
We shall also investigate other techniques that have similar properties to the DCT in terms of representing significant wavelet coefficients. The statistical properties of the wavelet transform's high-frequency sub-bands provide one such approach, and we also propose phase sensing as another alternative but very efficient scheme. Simultaneous compression and encryption, in our investigations, were aimed at finding the best way of applying these two tasks in parallel by selecting some wavelet sub-bands for encryption and applying compression to the other sub-bands. Since most spatial/temporal correlation appears in the high-frequency wavelet sub-bands, and the LL sub-bands of wavelet-transformed images approximate the original images, we select the LL sub-band data for encryption and the non-LL high-frequency sub-band coefficients for compression. We also follow the common practice of using stream ciphers to meet the efficiency requirements of real-time transmission. For key-stream generation we investigated a number of schemes, and the ultimate choice will depend on robustness to attacks. The still images (i.e. reference frames, RFs) are compressed with a modified EZW wavelet scheme by applying the DCT to the blocks of the wavelet sub-bands, selecting appropriate thresholds for determining the significance of coefficients, and encrypting the EZW thresholds only with a simple 10-bit LFSR cipher. This scheme is reasonably efficient in terms of processing time, compression ratio, and image quality, as well as security robustness against statistical and frequency attacks. However, many areas for improvement were identified as necessary to achieve the objectives of the thesis. Through a process of refinement we developed and tested three different secure, efficient video compression schemes, whereby at each step we improve the performance of the scheme in the previous step. Extensive experiments were conducted to test the performance of the new scheme, at each refinement stage, in terms of efficiency, compression ratio, image quality, and security robustness. Depending on the aspects of compression that needed improvement at each refinement step, we replaced the previous block coding scheme with a more appropriate one from among the three schemes mentioned above (i.e. DCT, edge sensing and phase sensing) for the reference frames or the non-reference ones. In subsequent refinement steps we apply encryption to a slightly expanded LL sub-band using successively more secure stream ciphers, but with different approaches to key-stream generation. In the first refinement step, encryption utilized two LFSRs seeded with three secret keys to scramble the significant wavelet LL-coefficients multiple times. In the second approach, the encryption algorithm utilises an LFSR to scramble the wavelet coefficients of the edges extracted from the low-frequency sub-band. These edges are mapped from the high-frequency sub-bands using different thresholds. Finally, we use a version of the A5 cipher combined with a chaotic logistic map to encrypt the significant parameters of the LL sub-band. Our empirical results show that the refinement process achieves the ultimate objectives of the thesis, i.e. an efficient secure video compression scheme that is scalable in terms of frame size at about 100 fps and satisfies the following features: high compression, reasonable quality, and resistance to statistical, frequency and brute-force attacks with low computational processing.
Although image quality fluctuates depending on video complexity, in the conclusion we recommend an adaptive implementation of our scheme. Although this thesis does not deal with transmission tasks, the efficiency achieved in terms of video encryption and compression time, as well as in compression ratios, will be sufficient for real-time secure transmission of video using commercially available mobile computing devices.
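To make the selective-encryption idea concrete, the sketch below performs one level of a 2D Haar DWT (a deliberately simplified stand-in for the thesis's DWT/DCT hybrid and modified EZW coding), keeps the high-frequency sub-bands for the compressor, and XORs the quantised LL coefficients with an LFSR keystream. The choice of the Haar wavelet, the 16-bit LFSR taps, and the key value are illustrative assumptions.

```python
import numpy as np

def haar_dwt2(frame):
    """One-level 2D Haar transform; returns (LL, (LH, HL, HH))."""
    a = frame[0::2, 0::2].astype(float)
    b = frame[0::2, 1::2].astype(float)
    c = frame[1::2, 0::2].astype(float)
    d = frame[1::2, 1::2].astype(float)
    ll = (a + b + c + d) / 4          # approximation band -> encrypt
    lh = (a + b - c - d) / 4          # high-frequency bands -> compress
    hl = (a - b + c - d) / 4
    hh = (a - b - c + d) / 4
    return ll, (lh, hl, hh)

def lfsr_bytes(state, n):
    """n keystream bytes from a 16-bit Fibonacci LFSR (toy stream cipher)."""
    out = []
    for _ in range(n):
        byte = 0
        for _ in range(8):
            bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
            state = (state >> 1) | (bit << 15)
            byte = (byte << 1) | bit
        out.append(byte)
    return out

def encrypt_ll(ll, key):
    """XOR the integer-quantised LL coefficients with the keystream."""
    q = np.round(ll).astype(np.int64).ravel()
    mask = np.array(lfsr_bytes(key, q.size), dtype=np.int64)
    return (q ^ mask).reshape(ll.shape)

frame = np.random.randint(0, 256, (8, 8))       # stand-in for a video frame
ll, high_bands = haar_dwt2(frame)
print(encrypt_ll(ll, key=0xACE1))
```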

    Stream ciphers


    Stream ciphers for secure display

    In any situation where private, proprietary or highly confidential material is being dealt with, the need to consider aspects of data security has grown ever more important. It is usual to secure such data from its source, over networks and on to the intended recipient. However, data security considerations typically stop at the recipient's processor, leaving connections to a display transmitting raw data which is increasingly in a digital format and of value to an adversary. With a progression to wireless display technologies the prominence of this vulnerability is set to rise, making the implementation of 'secure display' increasingly desirable. Secure display takes aspects of data security right to the display panel itself, potentially minimising the cost, component count and thickness of the final product. Recent developments in display technologies should help make this integration possible. However, the processing of large quantities of time-sensitive data presents a significant challenge in such resource-constrained environments. Efficient high-throughput decryption is a crucial aspect of the implementation of secure display and one for which the widely used and well understood block cipher may not be best suited. Stream ciphers present a promising alternative, and a number of strong candidate algorithms potentially offer the hardware speed and efficiency required. In the past, similar stream ciphers have suffered from algorithmic vulnerabilities. Although these new-generation designs have done much to respond to this concern, the relatively short 80-bit key lengths of some proposed hardware candidates, when combined with ever-advancing computational power, lead the thesis to identify exhaustive search of the key space as a potential attack vector. To determine the value of protection afforded by such short key lengths, a unique hardware key-search engine for stream ciphers is developed that makes use of an appropriate data element to improve search efficiency. The simulations from this system indicate that the proposed key lengths may be insufficient for applications where data is of long-term or high value. It is suggested that for the concept of secure display to be accepted, a longer key length should be used.
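    The attack vector can be illustrated in software with a deliberately tiny keyspace: the sketch below brute-forces a 16-bit key for a toy LFSR stream cipher given one known plaintext/ciphertext pair. The thesis's engine is a hardware design aimed at 80-bit keys; the cipher, widths, and key used here are illustrative assumptions.

```python
# Toy exhaustive key search over a 16-bit keyspace (illustrative only).
def keystream(key, n):
    """n keystream bytes from a 16-bit Fibonacci LFSR (toy cipher)."""
    state = key
    out = bytearray()
    for _ in range(n):
        byte = 0
        for _ in range(8):
            bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
            state = (state >> 1) | (bit << 15)
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

def encrypt(key, plaintext):
    return bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))

def search(known_plain, observed_cipher):
    """Scan the whole keyspace (skipping the degenerate all-zero state)
    and return every key consistent with the known pair."""
    return [k for k in range(1, 1 << 16)
            if encrypt(k, known_plain) == observed_cipher]

ciphertext = encrypt(0xBEEF, b"display!")
print([hex(k) for k in search(b"display!", ciphertext)])  # recovers 0xbeef
```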

    Low power data acquisition for microImplant biometric monitoring of tremors

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 97-100). In recent years, trends in the medical industry have created a growing demand for implantable medical devices. In particular, the need to provide doctors a means to continuously monitor biometrics over long time scales with increased precision is paramount to efficient healthcare. To make medical implants more attractive, there is a need to reduce their size and power consumption. Small medical implants would allow for less invasive procedures, greater comfort for patients, and increased patient compliance. Reductions in power consumption translate to longer battery life. The two primary limitations on the size of small medical implants are the batteries that provide energy to circuit and sensor components and the antennas that enable wireless communication to terminals outside of the body. The theory is applied in the context of the long-term monitoring of Parkinson's tremors. This work investigates how to reduce the amount of data needed to acquire a signal by applying compressive sampling, thereby alleviating the demand on the energy source. A low-energy SAR ADC is designed using adiabatic charging to further reduce energy usage. This application is ideal for adiabatic techniques because of the low frequency of operation and the ease with which we can reclaim energy from discharging the capacitors. Keywords: SAR ADC, adiabatic, compressive sampling, biometric, implants. By Tania Khanna, Ph.D.
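    As a rough illustration of the compressive-sampling step, the sketch below acquires far fewer random measurements than Nyquist samples and recovers a sparse signal with orthogonal matching pursuit; the dimensions, sparsity level, and choice of recovery algorithm are illustrative assumptions rather than the thesis's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 64, 5            # signal length, measurements, sparsity

# A K-sparse signal, standing in for a tremor trace with few dominant components.
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random sensing matrix
y = Phi @ x                                       # only M measurements acquired

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily build the support set."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

print("recovery error:", np.linalg.norm(x - omp(Phi, y, K)))
```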

    Development of Advanced Closed-Loop Brain Electrophysiology Systems for Freely Behaving Rodents

    Extracellular electrophysiology is a technique widely used in neuroscience research. It offers insight into how the brain works by measuring the electrical fields generated by neural activity. This is done through electrodes implanted in the brain and connected to amplification and digitization electronics. Of the many animal models used in electrophysiology experimentation, rodents such as rats and mice are among the most popular species. Modern electrophysiology experiments seek increasingly complex conditions that are limited by acquisition hardware technology. Two aspects are of special interest: closed-loop feedback and naturalistic behavior. In this thesis, we present developments aiming to improve different facets of these two problems. Closed-loop feedback encompasses all techniques in which stimuli are produced in response to an event generated by the animal. Latency, the time between the trigger event and stimulus generation, must match the biological timescale under study. While modern acquisition systems feature latencies on the order of 10 ms, responding to fast events such as the high-frequency electrical transients created by neuronal activity requires latencies under 1 ms. In addition, algorithms for triggering or generating closed-loop stimuli can be complex, integrating multiple inputs in real time. Integrating algorithm development into acquisition tools becomes an important part of experiment design. For electrophysiology experiments featuring naturalistic behavior, animals must be able to move freely in ecologically meaningful environments mimicking natural conditions. Experiments featuring elements such as large arenas, environmental objects, or the presence of other animals are, however, hindered by the wired nature of acquisition systems. Other physical constraints, such as implant weight or power restrictions, can also limit experiment duration. Beyond the technical limits, complex experiments are enriched when electrophysiology data is integrated with multiple sources, for example animal tracking or brain microscopy. Tools that allow mixing data independently of its source open new experimental possibilities. The technological advances presented in this thesis address these topics. We have designed devices with closed-loop latencies under 200 us that also feature high-bandwidth interfaces. These allow the simultaneous acquisition of hundreds of electrophysiological channels combined with other heterogeneous data sources, such as video or tracking.
The control software for these devices was designed with flexibility in mind, allowing easy implementation of closed-loop algorithms. Open interface standards were created to encourage the development of interoperable tools for experimental data integration. To solve wiring issues in behavioral experiments, we followed two different approaches. One was the design of light headstages, coupled with ultra-thin coaxial cables and active commutator technology that makes use of animal tracking. This reduces animal strain to a minimum, allowing large arenas and prolonged experiments with advanced headstages. A different, wireless headstage was also developed. We created a digital compression algorithm specialized for neural electrophysiological signals, able to reduce data bandwidth to less than 65.5% of its original size without introducing distortions. Bandwidth has a large effect on power requirements; this reduction therefore allows for lighter batteries and extended operational time. The algorithm is designed to be implementable in a wide variety of devices, requiring few hardware resources and adding negligible power requirements to a system. Combined, the developments we present open new possibilities for neuroscience experiments combining electrophysiology acquisition with natural behaviors and complex real-time stimuli. The research described in this thesis was carried out at the Polytechnic University of Valencia (Universitat Politècnica de València), Valencia, Spain, in extremely close collaboration with the Neuroscience Institute - Spanish National Research Council - Miguel Hernández University (Instituto de Neurociencias - Consejo Superior de Investigaciones Científicas - Universidad Miguel Hernández), San Juan de Alicante, Spain. The projects described in chapters 3 and 4 were developed in collaboration with, and funded by, Open Ephys, Cambridge, MA, USA and OEPS - Eléctronica e produção, unipessoal lda, Algés, Portugal. Cuevas López, A. (2021). Development of Advanced Closed-Loop Brain Electrophysiology Systems for Freely Behaving Rodents [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/179718
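The abstract does not spell out the compression algorithm, but a common lossless approach for strongly correlated neural samples, delta encoding followed by Rice coding, gives a feel for how a sub-66% bandwidth figure is plausible; the sketch below, its Rice parameter, and the synthetic trace are illustrative assumptions, not the thesis's actual algorithm.

```python
import math, random

def rice_encode(deltas, k=6):
    """Rice-code zigzag-mapped deltas into a bit string."""
    out = []
    for d in deltas:
        u = 2 * d if d >= 0 else -2 * d - 1          # zigzag: signed -> unsigned
        q, r = u >> k, u & ((1 << k) - 1)
        out.append("1" * q + "0" + format(r, f"0{k}b"))  # unary q, k-bit remainder
    return "".join(out)

def compress(samples, k=6):
    """Delta-encode the samples, then Rice-code the small residuals."""
    deltas = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
    return rice_encode(deltas, k)

random.seed(1)
# Correlated toy trace: a slow oscillation plus small noise, 16-bit range.
samples = [int(2000 * math.sin(i / 20) + random.gauss(0, 8)) for i in range(1000)]
bits = compress(samples)
print(f"compressed to {len(bits) / (16 * len(samples)):.1%} of raw 16-bit size")
```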

    Towards an embedded board-level tester: study of a configurable test processor

    The demand for electronic systems with more features, higher performance, and lower power consumption increases continuously. This is a real challenge for design and test engineers, because they have to deal with electronic systems of ever-increasing complexity while keeping production and test costs low and meeting critical time-to-market deadlines. For a test engineer working at the board level, this means that manufacturing defects must be detected as soon as possible and at low cost. However, classical test techniques are not sufficient for testing modern printed circuit boards, and in the worst case these techniques cannot be used at all. This is mainly due to modern packaging technologies, high device density, and the high operating frequencies of modern printed circuit boards. This leads to very long test times, low fault coverage, and high test costs. This dissertation addresses these issues and proposes an FPGA-based test approach for printed circuit boards. The concept is based on a configurable test processor that is temporarily implemented in the on-board FPGA and provides the corresponding mechanisms to communicate with external test equipment and with co-processors implemented in the FPGA. This embedded test approach provides the flexibility to implement test functions either in the external test equipment or in the FPGA. In this manner, tests are executed at-speed, increasing the fault coverage; test times are reduced; and the test system can be adapted automatically to the properties of the FPGA and the devices located on the board. An essential part of the FPGA-based test approach deals with the development of a test processor. In this dissertation the required properties of the processor are discussed, and it is shown that adaptation to the specific test scenario plays a very important role in the optimization. For this purpose, the test processor is equipped with configuration parameters at the instruction-set-architecture and microarchitecture levels. Additionally, an automatic generation process for the test system and for the computation of some of the processor's configuration parameters is proposed. The automatic generation process uses as input a model known as the device-under-test model (DUT-M). In order to evaluate the entire FPGA-based test approach and the viability of a processor for testing printed circuit boards, the developed test system is used to test interconnections to two different devices: a static random-access memory (SRAM) and a liquid crystal display (LCD). Experiments were conducted to determine the resource utilization of the processor and the FPGA-based test system, and to measure test time when different test functions are implemented in the external test equipment or the FPGA. It has been shown that the introduced approach is suitable for testing printed circuit boards and that the test processor represents a realistic alternative for testing at the board level.
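    As an illustration of the kind of interconnect test such a processor might execute, the Python sketch below drives walking-ones and walking-zeros patterns over a modeled bus and reports mismatches, a standard way to expose stuck-at and many bridging faults between board traces; the bus width and the injected fault are illustrative assumptions.

```python
def walking_patterns(width):
    """Yield walking-one then walking-zero patterns for a width-bit bus."""
    mask = (1 << width) - 1
    for i in range(width):
        yield 1 << i            # walking one
    for i in range(width):
        yield mask ^ (1 << i)   # walking zero

def test_bus(channel, width=8):
    """Drive each pattern through the bus model and report mismatches."""
    return [(p, got) for p in walking_patterns(width)
            if (got := channel(p)) != p]

# Fault model: bus bit 3 stuck at 0 (e.g., a trace shorted to ground).
stuck_at_0_bit3 = lambda v: v & ~(1 << 3)
print(test_bus(stuck_at_0_bit3))   # lists patterns whose readback differs
```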