63 research outputs found
A Future for Integrated Diagnostic Helping
Medical systems used for exploration or diagnostic assistance impose strict application constraints such as real-time image acquisition and display. A large part of the computing requirements of these systems is devoted to image processing. This chapter provides clues for transferring consumer computing-architecture approaches to the benefit of medical applications. The goal is to obtain fully integrated devices, from diagnostic assistance to autonomous lab-on-chip, while taking into account the specific constraints of the medical domain. This expertise is structured as follows: the first part analyzes vision-based medical applications in order to extract essential processing blocks and to show the similarities between consumer and medical vision-based applications. The second part is devoted to determining the elementary operators most needed in both domains. The computing capacities required by these operators and applications are compared to state-of-the-art architectures in order to define an efficient algorithm-architecture adequation. Finally, this part demonstrates that it is possible to apply highly constrained computing architectures designed for consumer handheld devices to the medical domain, based on the example of a high-definition (HD) video processing architecture designed to be integrated into smartphones or highly embedded components. This expertise paves the way for the industrialisation of integrated autonomous diagnostic-assistance devices by showing the feasibility of such systems. Their future use would also free medical staff from many logistical constraints due to the deployment of today's cumbersome systems.
Optimizing Image Compression via Joint Learning with Denoising
High levels of noise usually exist in today's captured images due to the
relatively small sensors equipped in the smartphone cameras, where the noise
brings extra challenges to lossy image compression algorithms. Without the
capacity to tell the difference between image details and noise, general image
compression methods allocate additional bits to explicitly store the undesired
image noise during compression and restore the unpleasant noisy image during
decompression. Based on these observations, we optimize the image compression
algorithm to be noise-aware as joint denoising and compression to resolve the
bits misallocation problem. The key is to transform the original noisy images
to noise-free bits by eliminating the undesired noise during compression, where
the bits are later decompressed as clean images. Specifically, we propose a
novel two-branch, weight-sharing architecture with plug-in feature denoisers to
allow a simple and effective realization of the goal with little computational
cost. Experimental results show that our method gains a significant improvement
over the existing baseline methods on both the synthetic and real-world
datasets. Our source code is available at
https://github.com/felixcheng97/DenoiseCompression. (Accepted to ECCV 2022.)
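The bit-misallocation argument above can be illustrated with a toy numeric sketch. This is not the paper's method (the real system uses learned plug-in feature denoisers inside a neural codec); here a moving-average denoiser and a uniform quantizer stand in, showing that denoising before quantization lowers the entropy of the stored symbols, i.e., fewer bits are spent on noise:

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise(x, k=5):
    """Toy denoiser: a moving average stands in for the paper's learned
    plug-in feature denoisers."""
    return np.convolve(x, np.ones(k) / k, mode="same")

def entropy_bits(symbols):
    """Empirical Shannon entropy (bits/symbol) of a quantized signal."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Piecewise-constant "image row" plus sensor-like noise.
clean = np.repeat([0.2, 0.5, 0.8], 1024)
noisy = clean + rng.normal(0, 0.05, clean.size)

quantize = lambda x: np.round(x * 64).astype(int)  # crude stand-in for a lossy codec

bits_noisy = entropy_bits(quantize(noisy))
bits_denoised = entropy_bits(quantize(denoise(noisy)))
assert bits_denoised < bits_noisy  # denoise-before-compress saves bits
```

The gap between the two entropy values is exactly the bit budget that a noise-unaware codec would waste storing the noise.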
Robust Joint Image Reconstruction from Color and Monochrome Cameras
Recent years have seen an explosion in the number of camera modules integrated into individual consumer mobile devices, including configurations that contain multiple different types of image sensors. One popular configuration is to combine an RGB camera for color imaging with a monochrome camera that has improved performance in low-light settings, as well as some sensitivity in the infrared. In this work we introduce a method to combine simultaneously captured images from such a two-camera stereo system to generate a high-quality, noise-reduced color image. To do so, pixel-to-pixel alignment has to be constructed between the two captured monochrome and color images, which, however, is prone to artifacts due to parallax. The joint image reconstruction is made robust by introducing a novel artifact-robust optimization formulation. We provide extensive experimental results based on the two-camera configuration of a commercially available cell phone.
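The core idea of artifact-robust fusion can be sketched in a few lines. This is a hypothetical stand-in for the paper's optimization formulation, not its actual algorithm: the cleaner monochrome signal is blended into the color luminance, but pixels where the two cameras disagree strongly (e.g., due to parallax) are downweighted so misalignment does not leak into the result:

```python
import numpy as np

rng = np.random.default_rng(3)

def robust_fuse(color_luma, mono, sigma=0.2):
    """Toy artifact-robust fusion: pull the color luminance toward the
    cleaner mono image, downweighting pixels where the two disagree --
    a crude stand-in for the paper's robust optimization term."""
    diff = mono - color_luma
    w = np.exp(-(diff / sigma) ** 2)     # large mismatch (parallax) => weight ~ 0
    return color_luma + 0.5 * w * diff   # move halfway toward mono where trusted

clean = np.full((32, 32), 0.5)
color = clean + rng.normal(0, 0.10, clean.shape)   # noisier RGB luminance
mono = clean + rng.normal(0, 0.01, clean.shape)    # low-light-capable mono camera
mono[:, :8] += 0.8                                 # simulated parallax artifact

fused = robust_fuse(color, mono)

# Aligned regions get denoised toward the mono image ...
assert np.abs(fused[:, 8:] - clean[:, 8:]).mean() < np.abs(color[:, 8:] - clean[:, 8:]).mean()
# ... while misaligned pixels stay near the color image instead of ghosting.
assert np.abs(fused[:, :8] - color[:, :8]).max() < 0.05
```

The robust weight is what distinguishes this from naive averaging: without it, the parallax region would be copied into the output as a ghost.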
Deep Burst Denoising
Noise is an inherent issue of low-light image capture, one which is
exacerbated on mobile devices due to their narrow apertures and small sensors.
One strategy for mitigating noise in a low-light situation is to increase the
shutter time of the camera, thus allowing each photosite to integrate more
light and decrease noise variance. However, there are two downsides of long
exposures: (a) bright regions can exceed the sensor range, and (b) camera and
scene motion will result in blurred images. Another way of gathering more light
is to capture multiple short (thus noisy) frames in a "burst" and intelligently
integrate the content, thus avoiding the above downsides. In this paper, we use
the burst-capture strategy and implement the intelligent integration via a
recurrent fully convolutional deep neural net (CNN). We build our novel,
multiframe architecture to be a simple addition to any single-frame denoising
model, and design it to handle an arbitrary number of noisy input frames. We
show that it achieves state-of-the-art denoising results on our burst dataset,
improving on the best published multi-frame techniques, such as VBM4D and
FlexISP. Finally, we explore other applications of image enhancement by
integrating content from multiple frames and demonstrate that our DNN
architecture generalizes well to image super-resolution.
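The burst-integration principle above can be shown with a minimal sketch (assuming a linear running mean in place of the paper's recurrent CNN): a recurrent update fuses frames one at a time, so any burst length is handled, and the fused estimate has lower noise than any single frame:

```python
import numpy as np

rng = np.random.default_rng(1)

def integrate_burst(frames):
    """Recurrently fuse a burst: each step combines the running estimate
    with the next frame (a linear stand-in for the paper's recurrent CNN).
    The incremental form works for an arbitrary number of frames."""
    est = frames[0].astype(float)
    for n, frame in enumerate(frames[1:], start=2):
        est = est + (frame - est) / n   # incremental running mean
    return est

clean = rng.uniform(0, 1, size=(32, 32))
burst = [clean + rng.normal(0, 0.1, clean.shape) for _ in range(8)]

err_single = np.abs(burst[0] - clean).mean()
err_fused = np.abs(integrate_burst(burst) - clean).mean()
assert err_fused < err_single  # noise variance drops roughly as 1/N
```

A learned recurrent model replaces the fixed `1/n` blend with content-aware weights, which is what lets it also reject motion and blur rather than just average them in.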
BLADE: Filter Learning for General Purpose Computational Photography
The Rapid and Accurate Image Super Resolution (RAISR) method of Romano,
Isidoro, and Milanfar is a computationally efficient image upscaling method
using a trained set of filters. We describe a generalization of RAISR, which we
name Best Linear Adaptive Enhancement (BLADE). This approach is a trainable
edge-adaptive filtering framework that is general, simple, computationally
efficient, and useful for a wide range of problems in computational
photography. We show applications to operations that may appear in a camera
pipeline, including denoising, demosaicing, and stylization.
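The edge-adaptive filtering idea behind RAISR/BLADE can be sketched with a toy version (hand-made filters and a two-way gradient test stand in for BLADE's trained filter banks and structure-tensor hashing): each pixel selects a linear filter from its local gradient orientation, smoothing along edges rather than across them:

```python
import numpy as np

rng = np.random.default_rng(2)

def blade_like_filter(img):
    """Toy edge-adaptive filtering in the spirit of BLADE: each pixel picks
    a (here hand-made, normally trained) linear filter based on its local
    gradient orientation, averaging along the edge, not across it."""
    out = img.copy()
    gy, gx = np.gradient(img)           # gradients along rows, then columns
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            if abs(gx[i, j]) > abs(gy[i, j]):       # vertical edge: average vertically
                out[i, j] = img[i - 1:i + 2, j].mean()
            else:                                   # horizontal edge: average horizontally
                out[i, j] = img[i, j - 1:j + 2].mean()
    return out

# Vertical step edge plus noise: edge-aware filtering denoises without blurring the step.
clean = np.zeros((16, 16))
clean[:, 8:] = 1.0
noisy = clean + rng.normal(0, 0.1, clean.shape)
filtered = blade_like_filter(noisy)
assert np.abs(filtered - clean).mean() < np.abs(noisy - clean).mean()
```

The real method hashes each patch into one of many buckets by gradient orientation, strength, and coherence, and learns a least-squares-optimal filter per bucket, which is why the same framework covers denoising, demosaicing, and upscaling.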
- …