    eISP: a Programmable Processing Architecture for Smart Phone Image Enhancement

    Today's smart phones, with their embedded high-resolution video sensors, require computing capacities that are too high to easily meet stringent silicon area and power consumption budgets (about one and a half square millimeters and half a watt), especially when programmable components are used. To obtain such capacities, integrators still rely on dedicated low-resolution video processing components, whose drawback is low flexibility. With this in mind, our paper presents eISP, a new, fully programmable Embedded Image Signal Processor architecture, now validated in TSMC 65 nm technology, which achieves a capacity of 16.8 GOPs at 233 MHz for 1.5 mm² of silicon area and a power consumption of 250 mW. Its resulting efficiency (67 MOPs/mW) makes eISP the leading programmable architecture for signal processing, especially for HD 1080p video processing on embedded devices such as smart phones.
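
    The quoted numbers are internally consistent, which a quick back-of-the-envelope check makes visible (a sketch in Python; the 60 fps frame rate is an assumption for illustration, not a figure from the abstract):

        # Sanity check of the eISP figures quoted above.
        throughput_ops = 16.8e9   # 16.8 GOPs, i.e. operations per second
        power_mw = 250.0          # 250 mW

        # Efficiency in MOPs/mW: 16.8e9 ops/s / 250 mW = 67.2e6 ops/s per mW.
        print(throughput_ops / power_mw / 1e6)   # -> 67.2, matching the quoted 67 MOPs/mW

        # Per-pixel operation budget for HD 1080p at an assumed 60 fps.
        pixels_per_second = 1920 * 1080 * 60
        print(throughput_ops / pixels_per_second)  # -> ~135 operations per pixel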

    HDR-ARtiSt: an FPGA-based Smart Camera for High Dynamic Range color video from multiple exposures

    A camera can capture only part of the information in a high dynamic range scene, whereas the human visual system perceives the same scene fully. This is especially true for real scenes in which the difference in light intensity between dark areas and bright areas is high. The imaging technique that overcomes this problem is called HDR (High Dynamic Range): it produces images from a set of multiple LDR (Low Dynamic Range) images captured with different exposure times. This technique is one of the most appropriate and cheapest ways to extend the dynamic range of captured environments. We developed an FPGA-based smart camera that produces a live HDR colour video stream from three successive acquisitions. Our hardware platform is built around a standard LDR CMOS sensor and a Virtex-6 FPGA board. The hardware architecture embeds multiple-exposure control, a memory management unit, HDR creation, and tone mapping. Our video camera delivers real-time video at 60 frames per second at the full sensor resolution of 1,280 × 1,024 pixels.
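
    The abstract does not spell out how the HDR creation step combines the three exposures; a minimal software sketch of the standard weighted multi-exposure merge (in the spirit of Debevec and Malik), assuming a linear sensor response and illustrative exposure times, looks like this:

        import numpy as np

        def merge_hdr(ldr_frames, exposure_times):
            """Merge bracketed LDR frames (floats in [0, 1]) into a radiance map.

            A triangular "hat" weight favours well-exposed pixels; a linear
            sensor response is assumed for clarity.
            """
            num = np.zeros_like(ldr_frames[0])
            den = np.zeros_like(ldr_frames[0])
            for frame, t in zip(ldr_frames, exposure_times):
                w = 1.0 - np.abs(2.0 * frame - 1.0)  # 1 at mid-grey, 0 at extremes
                num += w * frame / t                 # per-frame radiance estimate
                den += w
            return num / np.maximum(den, 1e-6)       # weighted average radiance

        # Three successive acquisitions at the sensor's 1,280 x 1,024 resolution;
        # the exposure times are illustrative, not the camera's actual settings.
        frames = [np.random.rand(1024, 1280) for _ in range(3)]
        hdr = merge_hdr(frames, exposure_times=[1 / 15, 1 / 60, 1 / 250])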

    Fast prototyping of a SoC-based smart camera: a real-time fall detection case study

    Smart cameras, i.e. cameras able to acquire and process images in real time, are a typical example of the new embedded computer vision systems. A key application is automatic fall detection, which can help elderly people in daily life. In this paper, we propose a methodology for the development and fast prototyping of a fall detection system based on such a smart camera, which reduces development time compared to standard approaches. Founded on a supervised classification approach, we propose a HW/SW implementation that detects falls in a home environment using a single camera and an optimized descriptor adapted to real-time tasks. This heterogeneous implementation is based on Xilinx's system-on-chip named Zynq. The main contributions of this work are (i) a codesign methodology that delays HW/SW partitioning by using a high-level algorithmic description and high-level synthesis tools, enabling fast prototyping and therefore fast architecture exploration and optimisation; (ii) the design of a hardware accelerator dedicated to boosting-based classification, a very popular and efficient algorithm used in image analysis; (iii) a fall detection system embedded in a smart camera, enabling integration into the environment of elderly people. The performance of our system is finally compared to the state of the art.
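
    At inference time, a boosting-based classifier such as the one accelerated here reduces to a weighted vote of simple weak learners, which is what makes it attractive for hardware pipelining. A minimal sketch with decision stumps (all thresholds and weights below are made up for illustration, not taken from the paper):

        # Each stump: (feature index, threshold, polarity, vote weight).
        stumps = [
            (0, 0.50, +1, 0.8),
            (3, 1.20, -1, 0.5),
            (7, 0.10, +1, 0.3),
        ]

        def classify(descriptor):
            """Return +1 ("fall") or -1 ("no fall") for one descriptor vector."""
            score = sum(w * p * (1 if descriptor[i] > t else -1)
                        for i, t, p, w in stumps)
            return 1 if score > 0 else -1

        print(classify([0.7, 0.0, 0.0, 2.0, 0.0, 0.0, 0.0, 0.2]))  # -> 1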

    NTIRE 2022 Challenge on High Dynamic Range Imaging: Methods and Results

    This paper reviews the challenge on constrained high dynamic range (HDR) imaging that was part of the New Trends in Image Restoration and Enhancement (NTIRE) workshop, held in conjunction with CVPR 2022. This manuscript focuses on the competition set-up, the datasets, the proposed methods, and their results. The challenge aims at estimating an HDR image from multiple low dynamic range (LDR) observations, which may suffer from under- or over-exposed regions and different sources of noise. The challenge is composed of two tracks with an emphasis on fidelity and complexity constraints: in Track 1, participants are asked to optimize objective fidelity scores under a low-complexity constraint (i.e. solutions cannot exceed a given number of operations); in Track 2, participants are asked to minimize the complexity of their solutions under a fidelity constraint (i.e. solutions are required to obtain a higher fidelity score than the prescribed baseline). Both tracks use the same data and metrics: fidelity is measured by means of PSNR with respect to a ground-truth HDR image (computed both directly and after a canonical tonemapping operation), while complexity metrics include the number of multiply-accumulate (MAC) operations and the runtime (in seconds).
    Comment: CVPR Workshops 2022. 15 pages, 21 figures, 2 tables.
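
    For concreteness, the two fidelity scores can be reproduced as below. The abstract does not name the canonical tonemapping operator; the μ-law curve with μ = 5000, a common choice in HDR evaluation, is assumed here for illustration:

        import numpy as np

        MU = 5000.0  # μ-law parameter: an assumption, not specified in the abstract

        def mu_tonemap(x):
            """μ-law tonemapping of HDR values normalised to [0, 1]."""
            return np.log1p(MU * x) / np.log1p(MU)

        def psnr(pred, gt, peak=1.0):
            mse = np.mean((pred - gt) ** 2)
            return 10.0 * np.log10(peak ** 2 / mse)

        gt = np.random.rand(256, 256, 3)                     # stand-in ground-truth HDR
        pred = np.clip(gt + 0.01 * np.random.randn(*gt.shape), 0.0, 1.0)

        print("PSNR (direct):    ", psnr(pred, gt))
        print("PSNR (tonemapped):", psnr(mu_tonemap(pred), mu_tonemap(gt)))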