39 research outputs found

    An extended cell set of self-timed designs

    Get PDF
    Journal Article: The high-level synthesis approach described in [1] uses the hopCP [2] language for behavioral descriptions. The behavioral specifications are then translated into Hop Flow Graphs (HFGs). The actions in the graph are then refined such that the refined actions can be directly mapped onto asynchronous circuit blocks. This report describes a library of such blocks, called action blocks. The action blocks use a two-phase transition signalling protocol for control signals and a bundled protocol for data signals. The blocks have been designed using ViewLogic design tools.
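The two-phase (transition) signalling discipline the action blocks rely on can be illustrated with a small simulation. This is a sketch of the protocol only, not of the ViewLogic cell implementations; the class and method names are hypothetical:

```python
# Sketch of two-phase (transition) signalling with a bundled data wire.
# In two-phase signalling, every transition on req (0->1 or 1->0) is an
# event; the receiver answers with a matching transition on ack.
class TwoPhaseChannel:
    def __init__(self):
        self.req = 0     # request wire level
        self.ack = 0     # acknowledge wire level
        self.data = None

    def send(self, value):
        # req == ack means the channel is idle and may accept a new transfer
        assert self.req == self.ack, "previous transfer not yet acknowledged"
        self.data = value  # bundled data must be valid before the request event
        self.req ^= 1      # any transition signals a request

    def receive(self):
        # req != ack means a request event is pending
        assert self.req != self.ack, "no pending request"
        value = self.data
        self.ack ^= 1      # matching transition acknowledges
        return value
```

Note that both wire polarities carry transfers: after two complete handshakes, req and ack return to 0, with no return-to-zero phase in between.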

    A partial scan methodology for testing self-timed circuits

    Get PDF
    Technical Report: This paper presents a partial scan method for testing control sections of macromodule-based self-timed circuits for stuck-at faults. In comparison with other proposed test methods for self-timed circuits, this technique offers better fault coverage than methods using self-checking techniques, and requires fewer storage elements to be made scannable than full scan approaches with similar fault coverage. A new method is proposed to test the sequential network in this partial scan environment. Experimental data is presented to show that high fault coverage is possible using this method with only a subset of storage elements being made scannable.

    ACT: A DFT tool for self-timed circuits

    Get PDF
    Journal Article: This paper presents a Design for Testability (DFT) tool called ACT (Asynchronous Circuit Testing), which uses a partial scan technique to make macro-module based self-timed circuits testable. The ACT tool is the first of its kind for testing macro-module based self-timed circuits. ACT modifies designs automatically to incorporate partial scan and provides a complete path from schematic capture to physical layout. It also has a test generation system to generate vectors for the testable design and to compute fault coverage of the generated tests. The test generation system includes a module for critical hazard-free test generation using a new 6-valued algebra. ACT has been built around commercial tools from Viewlogic and Cascade. A Viewlogic schematic is used as the design entry point, and Cascade tools are used for technology mapping.

    Testing micropipelines

    Get PDF
    Journal Article: Micropipelines, self-timed event-driven pipelines, are an attractive way of structuring asynchronous systems: they exhibit many of the advantages of general asynchronous systems, but enough structure to make the design of significant systems practical. As with any design method, testing is critical. We present a technique for testing self-timed micropipelines for stuck-at faults and for delay faults on the bundled data paths by modifying the latch and control elements to include a built-in scan path for testing. This scan path allows the processing logic in the micropipeline to be fully tested with only a small overhead in the latch and control circuits. The test method is very similar to scan testing in synchronous systems, but the micropipeline retains its self-timed behavior during normal operation.

    Critical hazard free test generation for asynchronous circuits

    Get PDF
    Journal Article: We describe a technique to generate critical hazard-free tests for self-timed control circuits built using a macromodule library, in a partial-scan-based DFT environment. We propose a 6-valued algebra to generate these tests, which are guaranteed to be critical hazard-free under an unbounded delay model. This algebra has been incorporated in a D-algorithm-based automatic test pattern generator.
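The abstract does not reproduce the paper's six values, so the following is only a hypothetical illustration of how a transition-aware multi-valued algebra supports hazard reasoning: a table-driven AND over the assumed values 0, 1, R (rising), F (falling), X (unknown), and H (potential hazard):

```python
# Illustrative transition algebra for hazard reasoning. NOT the paper's
# actual 6-valued algebra; the value set here is an assumption chosen to
# show why more than {0, 1, X} is needed for hazard-free test generation.
def t_and(a, b):
    if a == '0' or b == '0':
        return '0'        # a stable 0 dominates AND regardless of the other input
    if a == '1':
        return b          # a stable 1 passes the other value through
    if b == '1':
        return a
    if {a, b} == {'R', 'F'}:
        return 'H'        # opposing transitions may overlap and glitch
    if a == b:
        return a          # identical transitions stay aligned
    return 'X'            # anything else is unknown
```

The point of the extra values is visible in the `('R', 'F')` case: a plain 3-valued algebra would call the result unknown, whereas a transition algebra can flag it as a potential hazard and let the test generator avoid exercising it.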

    Testing self-timed circuits using partial scan

    Get PDF
    Journal Article: This paper presents a partial scan method for testing both the control and data path parts of macromodule-based self-timed circuits for stuck-at faults. Compared with other proposed test methods for testing control paths in self-timed circuits, this technique offers better fault coverage under a stuck-at input model than methods using self-checking properties, and requires fewer storage elements to be made scannable than full scan approaches with similar fault coverage. A new method is proposed to test the sequential network in the control path in this partial scan environment. The partial scan approach has also been applied to datapaths, where structural analysis is used to select which latches should be made scannable to break cycles in the circuit. Experimental data is presented to show that high fault coverage is possible using this method with only a subset of storage elements in the control and data paths being made scannable.
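The structural analysis that breaks datapath cycles can be sketched as a greedy feedback-set heuristic on the latch-dependency graph. This is an illustration of the idea under assumed names, not the paper's actual selection algorithm:

```python
# Choose a (small) set of latches whose removal makes the latch-dependency
# graph acyclic, so the remaining logic can be tested as if combinational.
def scan_latches(edges):
    """edges: iterable of (src_latch, dst_latch) dependency pairs."""
    graph = {}
    for s, d in edges:
        graph.setdefault(s, set()).add(d)
        graph.setdefault(d, set())
    scanned = set()
    while True:
        cyc = _find_cycle(graph, scanned)
        if cyc is None:
            return scanned
        # greedy choice: scan the cycle node with the most fanout
        scanned.add(max(cyc, key=lambda n: len(graph[n])))

def _find_cycle(graph, removed):
    """Return a list of nodes forming a cycle, or None if acyclic."""
    color = {n: 'white' for n in graph}
    def dfs(u, stack):
        color[u] = 'grey'
        stack.append(u)
        for v in graph[u]:
            if v in removed or color[v] == 'black':
                continue
            if color[v] == 'grey':            # back edge: cycle found
                return stack[stack.index(v):]
            found = dfs(v, stack)
            if found:
                return found
        stack.pop()
        color[u] = 'black'
        return None
    for n in graph:
        if color[n] == 'white' and n not in removed:
            cyc = dfs(n, [])
            if cyc:
                return cyc
    return None
```

For the toy dependency graph a→b→c→a with c also feeding d, the heuristic scans only latch c, which breaks the single cycle while leaving the other three latches unscanned.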

    Semantic 3D Grid Maps for Autonomous Driving

    Full text link
    Maps play a key role in the rapidly developing area of autonomous driving. We survey the literature for different map representations and find that, while the world is three-dimensional, it is common to rely on 2D map representations in order to meet real-time constraints. We believe that high levels of situation awareness require a 3D representation as well as the inclusion of semantic information. We demonstrate that our recently presented hierarchical 3D grid mapping framework UFOMap meets the real-time constraints. Furthermore, we show how it can be used to efficiently support more complex functions such as calculating the occluded parts of space and accumulating the output from a semantic segmentation network. Comment: Submitted, accepted and presented at the 25th IEEE International Conference on Intelligent Transportation Systems (IEEE ITSC 2022).
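The accumulation of semantic segmentation output into a 3D grid can be sketched as voxel-wise label voting. This is illustrative only; UFOMap's actual hierarchical octree API differs, and the class below is a hypothetical flat-grid stand-in:

```python
# Minimal sketch of fusing per-point semantic labels into a 3D voxel grid
# by majority vote (illustrative; not the UFOMap octree implementation).
from collections import Counter, defaultdict

class SemanticGrid:
    def __init__(self, resolution=0.5):
        self.res = resolution
        self.voxels = defaultdict(Counter)  # voxel index -> label vote counts

    def _key(self, point):
        # quantize a 3D point to its voxel index
        return tuple(int(c // self.res) for c in point)

    def insert(self, point, label):
        """Fuse one labelled point (e.g. from a segmentation network)."""
        self.voxels[self._key(point)][label] += 1

    def label_at(self, point):
        """Return the majority label of the voxel containing point, if any."""
        votes = self.voxels.get(self._key(point))
        return votes.most_common(1)[0][0] if votes else None
```

Voting makes the map robust to occasional misclassifications: a voxel hit twice as 'road' and once as 'car' reports 'road'.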

    Depth Estimation from Images with Dense Camera-Lidar Correspondences and Deep Learning

    No full text
    Depth estimation from 2D images is a fundamental problem in Computer Vision, and is increasingly becoming an important topic for Autonomous Driving. A lot of research is driven by innovations in Convolutional Neural Networks, which efficiently encode low- as well as high-level image features and are able to fuse them to find accurate pixel correspondences and learn the scale of the objects. Current state-of-the-art deep learning models employ a semi-supervised learning approach, a combination of unsupervised and supervised learning. Most of the research community relies on the KITTI datasets for benchmarking of results, but the training performance is known to be limited by the sparseness of the lidar ground truth as well as the lack of training data. In this thesis, multiple stereo datasets with increasingly denser depth maps are generated on the corpus of driving data collected at Audi Electronics Venture GmbH. In this regard, a methodology is presented to obtain an accurate and dense registration between the camera and lidar sensors. Approaches are also outlined to rectify the stereo image datasets and filter the depth maps. Keeping the architecture fixed, a monocular and a stereo depth estimation network are each trained on these datasets and their performances are compared to other networks reported in the literature. The results are competitive, with the stereo network exceeding the state-of-the-art accuracy. More work is needed, though, to establish the influence of increasing depth density on depth estimation performance. The proposed method forms a solid platform for pushing the envelope of depth estimation research as well as other application areas critical to autonomous driving.
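The camera-lidar registration step can be sketched as projecting lidar returns through calibration matrices into the image plane, keeping the nearest return per pixel. The intrinsics `K` and extrinsics `T_cam_lidar` below are hypothetical placeholders, not the thesis's calibration, and the function is a generic illustration of the projection pipeline:

```python
# Sketch of building a depth map by projecting lidar points into the camera
# image. K (3x3 intrinsics) and T_cam_lidar (4x4 extrinsics) are assumed
# calibration inputs; the selection of the nearest return per pixel is a
# simple way to handle points occluding one another.
import numpy as np

def lidar_depth_map(points, K, T_cam_lidar, h, w):
    """points: (N, 3) lidar points; returns an (h, w) depth map (0 = no data)."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    cam = (T_cam_lidar @ pts_h.T)[:3]   # transform into the camera frame
    cam = cam[:, cam[2] > 0]            # keep points in front of the camera
    uv = K @ cam                        # pinhole projection
    u = (uv[0] / uv[2]).astype(int)
    v = (uv[1] / uv[2]).astype(int)
    depth = np.zeros((h, w))
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    for ui, vi, zi in zip(u[ok], v[ok], cam[2, ok]):
        # keep the nearest return per pixel (handles self-occlusion)
        if depth[vi, ui] == 0 or zi < depth[vi, ui]:
            depth[vi, ui] = zi
    return depth
```

Denser maps, as pursued in the thesis, would then come from accumulating such projections over multiple sweeps and filtering outliers rather than from a single scan.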
