
    Thin On-Sensor Nanophotonic Array Cameras

    Today's commodity camera systems rely on compound optics to map light originating from the scene to positions on the sensor, where it is recorded as an image. To record images without optical aberrations, i.e., deviations from Gauss' linear model of optics, typical lens systems introduce increasingly complex stacks of optical elements, which are responsible for the height of existing commodity cameras. In this work, we investigate flat nanophotonic computational cameras as an alternative that employs an array of skewed lenslets and a learned reconstruction approach. The optical array is embedded on a metasurface that, at 700 nm height, is flat and sits on the sensor cover glass at 2.5 mm focal distance from the sensor. To tackle the highly chromatic response of a metasurface and design the array over the entire sensor, we propose a differentiable optimization method that continuously samples over the visible spectrum and factorizes the optical modulation for different incident fields into individual lenses. We reconstruct a megapixel image from our flat imager with a learned probabilistic reconstruction method that employs a generative diffusion model to sample an implicit prior. To tackle scene-dependent aberrations in broadband, we propose a method for acquiring paired captured training data in varying illumination conditions. We assess the proposed flat camera design in simulation and with an experimental prototype, validating that the method is capable of recovering images from diverse scenes in broadband with a single nanophotonic layer. Comment: 18 pages, 12 figures, to be published in ACM Transactions on Graphics.
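
    The differentiable spectral-sampling step can be sketched compactly: optimize a single shared phase profile so that it approximates the ideal lens phase at wavelengths drawn continuously from the visible range at every iteration. This is a minimal sketch of that idea only, under a scalar phase model with a simplified dispersion scaling; all names and constants below are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: fit one lenslet's structural phase so it approximates an
# ideal hyperbolic lens across the visible spectrum. Wavelengths are sampled
# continuously each step, mirroring the continuous spectral-sampling idea.
# The scalar phase model and dispersion scaling are illustrative assumptions.
import torch

N = 256                      # samples across the lens aperture (assumed)
aperture = 50e-6             # aperture width in meters (assumed)
f = 2.5e-3                   # focal distance from the abstract (2.5 mm)
x = torch.linspace(-aperture / 2, aperture / 2, N)

# Learnable structural phase, shared by all wavelengths (one physical surface).
phase = torch.zeros(N, requires_grad=True)
opt = torch.optim.Adam([phase], lr=1e-2)

def target_phase(lam):
    """Ideal hyperbolic lens phase for wavelength lam (meters)."""
    return -(2 * torch.pi / lam) * (torch.sqrt(x**2 + f**2) - f)

for step in range(2000):
    # Continuously sample wavelengths over the visible spectrum (400-700 nm).
    lam = (400e-9 + 300e-9 * torch.rand(8)).view(-1, 1)
    # The effective phase at wavelength lam scales relative to a reference
    # wavelength (a crude stand-in for the metasurface's chromatic response).
    effective = phase.unsqueeze(0) * (550e-9 / lam)
    targets = torch.stack([target_phase(l) for l in lam.squeeze(1)])
    # Phase error is only meaningful modulo 2*pi, so compare on the unit circle.
    loss = (1 - torch.cos(effective - targets)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```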

    A Comprehensive Review of Image Restoration and Noise Reduction Techniques

    Images play a crucial role in modern life, with applications ranging from preserving memories to conducting scientific research. However, images often suffer from various forms of degradation, such as blur, noise, and contrast loss. These degradations make images difficult to interpret, reduce their visual quality, and limit their practical applications. To overcome these challenges, image restoration and noise reduction techniques have been developed to recover degraded images and enhance their quality. These techniques have gained significant importance in recent years, especially with the increasing use of digital imaging in fields such as medical imaging, surveillance, and satellite imaging. This paper presents a comprehensive review of image restoration and noise reduction techniques, encompassing spatial and frequency domain methods as well as deep learning-based techniques. The paper also discusses the evaluation metrics used to assess the effectiveness of these techniques and explores future research directions in this field. The primary objective of this paper is to offer a comprehensive understanding of the concepts and methods involved in image restoration and noise reduction.
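
    As a concrete taste of two topics such a review covers, the sketch below pairs a classic frequency-domain restoration method (Wiener deconvolution) with a standard evaluation metric (PSNR). The box kernel and the noise-free toy setup are assumptions chosen for brevity, not taken from the paper.

```python
# Frequency-domain restoration (Wiener deconvolution) plus the PSNR metric.
import numpy as np

def wiener_deconvolve(blurred, kernel, k=0.01):
    """Wiener filter in the frequency domain: H* / (|H|^2 + k) applied to G."""
    H = np.fft.fft2(kernel, s=blurred.shape)       # kernel transfer function
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + k) * G  # regularized inverse
    return np.real(np.fft.ifft2(F_hat))

def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, peak]."""
    mse = np.mean((reference - estimate) ** 2)
    return 10 * np.log10(peak**2 / mse)

# Toy usage: blur a random "image" with a 5x5 box kernel, then restore it.
rng = np.random.default_rng(0)
img = rng.random((128, 128))
kernel = np.ones((5, 5)) / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) *
                               np.fft.fft2(kernel, s=img.shape)))
restored = wiener_deconvolve(blurred, kernel)
print(f"PSNR blurred:  {psnr(img, blurred):.2f} dB")
print(f"PSNR restored: {psnr(img, restored):.2f} dB")
```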

    Real-Time Under-Display Cameras Image Restoration and HDR on Mobile Devices

    The new trend of full-screen devices implies positioning the camera behind the screen to achieve a larger display-to-body ratio, enhance eye contact, and provide a notch-free viewing experience on smartphones, TVs, and tablets. On the other hand, the images captured by under-display cameras (UDCs) are degraded by the screen in front of them. Deep learning methods for image restoration can significantly reduce the degradation of captured images, providing results that satisfy the human eye. However, most proposed solutions are not reliable or efficient enough to be used in real-time on mobile devices. In this paper, we aim to solve this image restoration problem using efficient deep learning methods capable of processing FHD images in real-time on commercial smartphones while providing high-quality results. We propose a lightweight model for blind UDC image restoration and HDR, and we also provide a benchmark comparing the performance and runtime of different methods on smartphones. Our models are competitive on UDC benchmarks while using 4x fewer operations than others. To the best of our knowledge, this is the first work to approach and analyze this real-world single-image restoration problem from an efficiency and production point of view. Comment: ECCV 2022 AIM Workshop.
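
    The paper's actual model is not reproduced here, but the flavor of a lightweight residual restorer sized for full-resolution mobile inference can be sketched as follows; the architecture, channel widths, and the absence of an HDR head are all illustrative assumptions.

```python
# Minimal sketch of a lightweight restoration network in the spirit the
# abstract describes; the real model is not public here, so this tiny
# residual CNN is an illustrative assumption.
import torch
import torch.nn as nn

class TinyRestorer(nn.Module):
    """Small residual CNN: predicts a correction added to the degraded input."""
    def __init__(self, channels=16, blocks=3):
        super().__init__()
        layers = [nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(blocks):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 3, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x + self.body(x)   # residual learning: restore around identity

model = TinyRestorer()
params = sum(p.numel() for p in model.parameters())
print(f"parameters: {params}")   # roughly 8k, mobile-friendly

# FHD input, the resolution targeted in the abstract (1920x1080).
with torch.no_grad():
    out = model(torch.rand(1, 3, 1080, 1920))
print(out.shape)
```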

    Lightweight Implicit Blur Kernel Estimation Network for Blind Image Super-Resolution

    Blind image super-resolution (Blind-SR) is the process of leveraging a low-resolution (LR) image with unknown degradation to generate its high-resolution (HR) version. Most existing blind SR techniques use a degradation estimator network to explicitly estimate the blur kernel, guiding the SR network under the supervision of ground-truth (GT) kernels. However, GT kernels are rarely available for real-world images, so it is necessary to design an implicit estimator network that can extract a discriminative blur kernel representation without relying on GT kernel supervision. We design a lightweight approach for blind super-resolution that estimates the blur kernel and restores the HR image based on a deep convolutional neural network (CNN) and a deep super-resolution residual convolutional generative adversarial network. Since the blur kernel for blind image SR is unknown, following the image formation model of the blind super-resolution problem, we first introduce a neural-network-based model to estimate the blur kernel. This is achieved by (i) a Super Resolver that generates the corresponding SR image from a low-resolution input, and (ii) an Estimator Network that generates the blur kernel from the input datum. The output of both models is used in a novel loss formulation. The proposed network is end-to-end trainable, and the methodology is substantiated by both quantitative and qualitative experiments. Results on benchmarks demonstrate that our computationally efficient approach (12x fewer parameters than state-of-the-art models) performs favorably with respect to existing approaches and can be used on devices with limited computational capabilities.
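
    One way to couple the two networks without GT kernels is a cycle-consistency loss: re-degrade the SR output with the predicted kernel and compare against the LR input. The sketch below shows that coupling; the paper's exact architectures and loss are not reproduced, and every module name and size here is an illustrative assumption.

```python
# Two-network sketch: an Estimator predicts a blur kernel from the LR input,
# a Super Resolver produces the HR image, and re-degrading the SR output with
# the predicted kernel gives a loss that needs no ground-truth kernels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Estimator(nn.Module):
    """Predicts a normalized k x k blur kernel from the LR image."""
    def __init__(self, k=13):
        super().__init__()
        self.k = k
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, k * k))
    def forward(self, lr):
        logits = self.net(lr)
        return torch.softmax(logits, dim=1).view(-1, 1, self.k, self.k)

class SuperResolver(nn.Module):
    """Toy x2 super-resolver (stand-in for the residual GAN generator)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 12, 3, padding=1), nn.PixelShuffle(2))
    def forward(self, lr):
        return self.net(lr)

def degrade(sr, kernel, scale=2):
    """Blur SR with its predicted kernel (per channel), then downsample."""
    b, c, h, w = sr.shape
    k = kernel.shape[-1]
    weight = kernel.repeat_interleave(c, dim=0)        # (b*c, 1, k, k)
    blurred = F.conv2d(sr.reshape(1, b * c, h, w),
                       weight, padding=k // 2, groups=b * c)
    return blurred.view(b, c, h, w)[..., ::scale, ::scale]

lr = torch.rand(1, 3, 32, 32)
est, sr_net = Estimator(), SuperResolver()
sr = sr_net(lr)
kernel = est(lr)
loss = F.l1_loss(degrade(sr, kernel), lr)   # cycle consistency, no GT kernel
loss.backward()
```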

    Coordinate-based neural representations for computational adaptive optics in widefield microscopy

    Widefield microscopy is widely used for non-invasive imaging of biological structures at subcellular resolution. When applied to complex specimens, its image quality is degraded by sample-induced optical aberration. Adaptive optics can correct wavefront distortion and restore diffraction-limited resolution, but requires wavefront sensing and corrective devices, increasing system complexity and cost. Here, we describe a self-supervised machine learning algorithm, CoCoA, that performs joint wavefront estimation and three-dimensional structural information extraction from a single input 3D image stack, without the need for an external training dataset. We implemented CoCoA for widefield imaging of mouse brain tissues and validated its performance with direct-wavefront-sensing-based adaptive optics. Importantly, we systematically explored and quantitatively characterized the limiting factors of CoCoA's performance. Using CoCoA, we demonstrated the first in vivo widefield mouse brain imaging using machine-learning-based adaptive optics. Incorporating coordinate-based neural representations and a forward physics model, the self-supervised scheme of CoCoA should be applicable to microscopy modalities in general. Comment: 33 pages, 5 figures.
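
    The self-supervised coupling of a coordinate-based representation with a forward physics model can be illustrated in two dimensions: jointly fit an MLP over pixel coordinates (the structure) and a low-dimensional wavefront (the aberration) so that the structure, blurred by the wavefront's PSF, matches the measurement. This is not CoCoA itself; the 2D setup, the crude Zernike-like basis, and the random stand-in data are all assumptions for illustration.

```python
# Joint fit of a coordinate MLP (structure) and a wavefront (aberration)
# through a simple pupil-to-PSF forward model. Everything here is a 2D
# illustrative simplification of the self-supervised scheme.
import torch
import torch.nn as nn
import torch.nn.functional as F

N = 64
yy, xx = torch.meshgrid(torch.linspace(-1, 1, N),
                        torch.linspace(-1, 1, N), indexing="ij")
coords = torch.stack([xx, yy], dim=-1).view(-1, 2)

# Coordinate-based representation of the (here 2D) structure.
mlp = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))
# Low-dimensional wavefront: tip, tilt, defocus coefficients.
zern = torch.zeros(3, requires_grad=True)

r2 = xx ** 2 + yy ** 2
pupil_mask = (r2 <= 1.0).float()
modes = torch.stack([xx, yy, 2 * r2 - 1])   # crude Zernike-like basis

def psf(coeffs):
    """PSF = |FFT of the pupil function|^2, phase from the wavefront modes."""
    phase = (coeffs.view(3, 1, 1) * modes).sum(0)
    pupil = pupil_mask * torch.exp(1j * phase)
    p = torch.fft.fftshift(torch.abs(torch.fft.fft2(pupil)) ** 2)
    return p / p.sum()

measured = torch.rand(1, 1, N, N)           # stand-in for one widefield plane
opt = torch.optim.Adam(list(mlp.parameters()) + [zern], lr=1e-3)
for step in range(500):
    structure = mlp(coords).view(1, 1, N, N).relu()
    k = psf(zern)[N//2 - 16:N//2 + 17, N//2 - 16:N//2 + 17]  # centered 33x33
    k = (k / k.sum()).view(1, 1, 33, 33)
    pred = F.conv2d(structure, k, padding=16)   # forward model: blur by PSF
    loss = F.mse_loss(pred, measured)
    opt.zero_grad(); loss.backward(); opt.step()
```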