
    Design and implementation of a multi-octave-band audio camera for realtime diagnosis

    Noise pollution investigation takes advantage of two common methods of diagnosis: measurement using a Sound Level Meter and acoustical imaging. The former enables a detailed analysis of the surrounding noise spectrum, whereas the latter is used mainly for source localization. The two approaches complement each other, and merging them into a single system working in realtime would offer new possibilities for dynamic diagnosis. This paper describes the design of a complete system for this purpose: imaging the acoustic field in realtime at different octave bands with a convenient device. The acoustic field is sampled in time and space using an array of MEMS microphones. This recent technology enables a compact and fully digital design of the system. However, performing realtime imaging with a resource-intensive algorithm on a large amount of measured data poses a technical challenge. This is overcome by executing the whole process on a Graphics Processing Unit, which has recently become an attractive device for parallel computing.
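
    The abstract does not name the imaging algorithm, only that it is resource-intensive and runs on a GPU. As a rough illustration of the kind of processing an acoustic camera typically performs, the sketch below assumes a narrowband delay-and-sum beamformer that forms a power map over candidate directions for one octave band; the function, the far-field model, and the array layout are assumptions, and it is written in plain NumPy rather than for a GPU.

        import numpy as np

        def delay_and_sum_map(snapshot, mic_xy, freq, steer_angles, c=343.0):
            """Narrowband delay-and-sum power map (illustrative, not the paper's code).
            snapshot     : (n_mics,) complex FFT bin of each microphone signal
            mic_xy       : (n_mics, 2) microphone positions in metres
            freq         : centre frequency of the octave band in Hz
            steer_angles : (n_dirs,) candidate azimuth angles in radians (far field)
            """
            k = 2.0 * np.pi * freq / c                        # wavenumber
            dirs = np.stack([np.cos(steer_angles), np.sin(steer_angles)], axis=1)
            delays = mic_xy @ dirs.T                          # (n_mics, n_dirs) path differences
            steering = np.exp(-1j * k * delays)               # steering vector per direction
            # coherent sum over microphones, then power per steering direction
            return np.abs(steering.conj().T @ snapshot) ** 2 / len(snapshot)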

    Time-Shared Execution of Realtime Computer Vision Pipelines by Dynamic Partial Reconfiguration

    This paper presents an FPGA runtime framework that demonstrates the feasibility of using dynamic partial reconfiguration (DPR) for time-sharing an FPGA by multiple realtime computer vision pipelines. The presented time-sharing runtime framework manages an FPGA fabric that can be round-robin time-shared by different pipelines at the time scale of individual frames. In this new use case, the challenge is to achieve useful performance despite the high reconfiguration time. The paper describes the basic runtime support as well as four optimizations necessary to achieve realtime performance given the limitations of DPR on today's FPGAs. The paper provides a characterization of a working runtime framework prototype on a Xilinx ZC706 development board. The paper also reports the performance of realtime computer vision pipelines when time-shared.
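
    The runtime's actual API is not given in the abstract, so the sketch below only captures the scheduling idea it describes: one reconfigurable region is handed to each pipeline in turn, one frame per turn, with the partial bitstream swapped in whenever the pipeline changes. The class and method names, and the modelled reconfiguration delay, are invented for illustration.

        import time
        from collections import deque

        class TimeSharedFabric:
            """Round-robin, frame-granular sharing of one partially reconfigurable region."""

            def __init__(self, pipelines):
                # each entry: (name, partial_bitstream, queue_of_pending_frames)
                self.pipelines = deque(pipelines)
                self.loaded = None

            def load_bitstream(self, bitstream):
                time.sleep(0.004)            # stand-in for the DPR write (e.g. via ICAP/PCAP)
                self.loaded = bitstream

            def process_frame(self, name, frame):
                pass                         # stand-in for streaming one frame through the fabric

            def run(self, turns):
                for _ in range(turns):
                    name, bitstream, frames = self.pipelines[0]
                    if self.loaded is not bitstream:
                        self.load_bitstream(bitstream)               # swap this pipeline in
                    if frames:
                        self.process_frame(name, frames.popleft())   # one frame, then yield
                    self.pipelines.rotate(-1)                        # round-robin to the next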

    On debugging in a parallel system

    In this paper a description is given of a partly implemented parallel debugger for the Twente University Multicomputer (TUMULT). The system's basic method for exchanging data is message passing. Experience has shown that most programming errors in application software are made in calls to the kernel and in the interprocess communication. The debugger is intended for locating bugs at this level in the application software. It is assumed that basic blocks of the debuggee can be debugged using a traditional sequential source-level debugger.
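
    The abstract locates most errors in kernel calls and interprocess communication, which suggests the debugger's main job is observing traffic at that boundary. The wrapper below is a hypothetical Python illustration of such an observation point; the function names and message primitives are invented and are not TUMULT's kernel interface.

        import functools
        import time

        def traced(call):
            """Log every invocation of a kernel-level primitive with a timestamp."""
            @functools.wraps(call)
            def wrapper(*args, **kwargs):
                t0 = time.time()
                result = call(*args, **kwargs)
                print(f"[{t0:.6f}] {call.__name__}{args!r} -> {result!r}")
                return result
            return wrapper

        @traced
        def send_message(dest, payload):
            return len(payload)              # stand-in for a kernel send primitive

        @traced
        def receive_message(src):
            return b"ack"                    # stand-in for a kernel receive primitive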

    Readout system test benches

    We propose to develop and exploit versatile multi-purpose Personal Computer-based Test Benches to support the evaluation and design of the basic elements required for digital front-end readout and data transmission systems for an LHC experiment. These test benches will have modular hardware facilities for the operation of new readout system components under realistic conditions, and will implement advanced modern software engineering concepts. They will support components such as fast ADCs, hybrid fibre-optic transceivers, and the prototype VLSI systolic array and data-flow processors currently being developed in national research laboratories and by the emerging European HDTV industry. These efforts would also lay the foundations for projects involving the development of custom-designed VLSI circuits.

    An Efficient and Cost Effective FPGA Based Implementation of the Viola-Jones Face Detection Algorithm

    We present a field-programmable gate array (FPGA) based implementation of the popular Viola-Jones face detection algorithm, which is an essential building block in many applications such as video surveillance and tracking. Our implementation is a complete system-level hardware design described in a hardware description language and validated on the affordable DE2-115 evaluation board. Our primary objective is to study the achievable performance with a low-end FPGA chip based implementation. In addition, we release the entire project to the public domain. We hope that this will enable other researchers to easily replicate and compare their results to ours, and that it will encourage and facilitate further research and educational ideas in the areas of image processing, computer vision, and advanced digital design and FPGA prototyping.
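
    The Viola-Jones detector the paper implements rests on two well-known building blocks: an integral image that turns any rectangle sum into four lookups, and Haar-like features computed from such sums. The Python sketch below illustrates those blocks in software; the window size and the example feature are arbitrary choices, and this is not the paper's HDL design.

        import numpy as np

        def integral_image(gray):
            # S[y, x] = sum of gray[:y, :x]; padded with a zero row and column
            return np.pad(gray, ((1, 0), (1, 0))).cumsum(0).cumsum(1)

        def rect_sum(ii, x, y, w, h):
            # sum of the w*h rectangle whose top-left corner is (x, y), in four lookups
            return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

        def two_rect_haar_feature(ii, x, y, w, h):
            # example horizontal edge feature: left half minus right half
            half = w // 2
            return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)

        # usage: evaluate one feature inside a 24x24 detection window
        window = np.random.randint(0, 256, (24, 24)).astype(np.int64)
        ii = integral_image(window)
        print(two_rect_haar_feature(ii, x=2, y=4, w=12, h=8))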

    System for Anomaly and Failure Detection (SAFD) system development

    This task specified developing the hardware and software necessary to implement the System for Anomaly and Failure Detection (SAFD) algorithm, developed under Technology Test Bed (TTB) Task 21, on the TTB engine stand. This effort involved building two units: one to be installed in the Block II Space Shuttle Main Engine (SSME) Hardware Simulation Lab (HSL) at Marshall Space Flight Center (MSFC), and one to be installed at the TTB engine stand. Rocketdyne personnel from the HSL performed the task. The SAFD algorithm was developed as an improvement over the redline system currently used in the Space Shuttle Main Engine Controller (SSMEC). Simulation tests and execution against previous hot-fire tests demonstrated that the SAFD algorithm can detect an engine failure as much as tens of seconds before the redline system recognizes it. Although the current algorithm only operates during steady-state conditions (engine not throttling), work is underway to expand the algorithm to work during transient conditions.

    Estimating performance of a ray-tracing ASIC design

    Recursive ray tracing is a powerful rendering technique used to compute realistic images by simulating the global light transport in a scene. Algorithmic improvements and FPGA-based hardware implementations of ray tracing have demonstrated realtime performance, but hardware that achieves performance levels comparable to commodity rasterization graphics chips is still not available. This paper describes the architecture and ASIC implementations of the DRPU design (Dynamic Ray Processing Unit) that closes this performance gap. The DRPU supports fully programmable shading and most kinds of dynamic scenes, and thus provides capabilities similar to those of current GPUs. It achieves high efficiency due to SIMD processing of floating-point vectors, massive multithreading, synchronous execution of packets of threads, and careful management of caches for scene data. To support dynamic scenes, B-KD trees are used as spatial index structures that are processed by a custom traversal and intersection unit and modified by an Update Processor on scene changes.
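
    The abstract names B-KD trees as the spatial index traversed in hardware. The sketch below is a plain-Python illustration of that idea under common assumptions about B-KD trees (each inner node bounds its two children with a pair of planes along a single axis, so traversal only clips the ray's parameter interval per child); the data layout and names are illustrative, not the DRPU's traversal unit.

        from dataclasses import dataclass

        @dataclass
        class BKDNode:
            axis: int = 0              # split axis of an inner node: 0=x, 1=y, 2=z
            bounds: tuple = None       # ((min, max), (min, max)) along axis, one pair per child
            children: tuple = None     # (left, right) for inner nodes, None for a leaf
            triangles: list = None     # leaf payload handed to the intersection stage

        def traverse(node, origin, inv_dir, t0, t1, hits):
            if node.children is None:                  # leaf: collect candidate triangles
                hits.extend(node.triangles)
                return
            for child, (bmin, bmax) in zip(node.children, node.bounds):
                # clip the ray's parameter interval against the child's slab on node.axis
                ta = (bmin - origin[node.axis]) * inv_dir[node.axis]
                tb = (bmax - origin[node.axis]) * inv_dir[node.axis]
                near, far = min(ta, tb), max(ta, tb)
                c0, c1 = max(t0, near), min(t1, far)
                if c0 <= c1:                           # the slab overlaps the active interval
                    traverse(child, origin, inv_dir, c0, c1, hits)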

    Advancing automation and robotics technology for the Space Station Freedom and for the U.S. economy

    In April 1985, as required by Public Law 98-371, the NASA Advanced Technology Advisory Committee (ATAC) reported to Congress the results of its studies on advanced automation and robotics technology for use on Space Station Freedom. This material was documented in the initial report (NASA Technical Memorandum 87566). A further requirement of the law was that ATAC follow NASA's progress in this area and report to Congress semiannually. This report is the fifteenth in a series of progress updates and covers the period from 27 Feb. to 17 Sep. 1992. The progress made by Levels 1, 2, and 3 of the Space Station Freedom in developing and applying advanced automation and robotics technology is described. Emphasis was placed upon the Space Station Freedom program responses to specific recommendations made in ATAC Progress Report 14. Assessments are presented for these and other areas as they apply to the advancement of automation and robotics technology for Space Station Freedom.

    Weakly- and Self-Supervised Learning for Content-Aware Deep Image Retargeting

    This paper proposes a weakly- and self-supervised deep convolutional neural network (WSSDCNN) for content-aware image retargeting. Our network takes a source image and a target aspect ratio, and then directly outputs a retargeted image. Retargeting is performed through a shift map, which is a pixel-wise mapping from the source to the target grid. Our method implicitly learns an attention map, which leads to a content-aware shift map for image retargeting. As a result, discriminative parts in an image are preserved, while background regions are adjusted seamlessly. In the training phase, pairs of an image and its image-level annotation are used to compute content and structure losses. We demonstrate the effectiveness of our proposed method for a retargeting application with insightful analyses. Comment: 10 pages, 11 figures. To appear in ICCV 2017, Spotlight Presentation
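
    The abstract describes retargeting through a pixel-wise shift map from the source to the target grid. The sketch below shows how such a map can be applied once it is known; the toy uniform shift and the nearest-neighbour gather are assumptions for illustration, and the paper's contribution is the network that predicts the map, not this warping step.

        import numpy as np

        def apply_shift_map(source, shift, target_width):
            """source: (H, W, C) image; shift: (H, target_width) horizontal offsets
            mapping each target column back into the source grid."""
            h, w, _ = source.shape
            ys = np.arange(h)[:, None]                        # (H, 1) row indices
            xs = np.arange(target_width)[None, :] + shift     # (H, target_width) source columns
            xs = np.clip(np.round(xs).astype(int), 0, w - 1)  # stay inside the source image
            return source[ys, xs]                             # gather one source pixel per target pixel

        # usage: shrink a 256-wide image to 192 columns with a uniform toy shift map
        src = np.random.rand(128, 256, 3)
        uniform_shift = np.linspace(0, 256 - 192, 192)[None, :].repeat(128, axis=0)
        out = apply_shift_map(src, uniform_shift, target_width=192)
        print(out.shape)   # (128, 192, 3)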