
    DeepFuse: A Deep Unsupervised Approach for Exposure Fusion with Extreme Exposure Image Pairs

    We present a novel deep learning architecture for fusing static multi-exposure images. Current multi-exposure fusion (MEF) approaches use hand-crafted features to fuse the input sequence. However, these weak hand-crafted representations are not robust to varying input conditions, and they perform poorly for extreme exposure image pairs. Thus, it is highly desirable to have a method that is robust to varying input conditions and capable of handling extreme exposure without artifacts. Deep representations are known to be robust to input conditions and have shown phenomenal performance in supervised settings. However, the stumbling block in using deep learning for MEF has been the lack of sufficient training data and of an oracle to provide ground truth for supervision. To address these issues, we have gathered a large dataset of multi-exposure image stacks for training, and to circumvent the need for ground truth images, we propose an unsupervised deep learning framework for MEF that uses a no-reference quality metric as its loss function. The proposed approach uses a novel CNN architecture trained to learn the fusion operation without a reference ground truth image. The model fuses a set of common low-level features extracted from each image to generate artifact-free, perceptually pleasing results. We perform extensive quantitative and qualitative evaluation and show that the proposed technique outperforms existing state-of-the-art approaches on a variety of natural images.
    Comment: ICCV 201
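The core idea of the abstract above is to replace ground-truth supervision with a no-reference quality score that the fused output should maximize. The sketch below is a toy analogue of that, not the paper's method: `well_exposedness` is a hypothetical stand-in for the learned no-reference metric, and a grid search over a single blend weight stands in for training a CNN against the metric loss.

```python
import numpy as np

def well_exposedness(img, sigma=0.2):
    # Gaussian weight favouring mid-tone pixels (hypothetical stand-in
    # for the paper's no-reference MEF quality metric)
    return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

def fuse_pair(under, over, alphas=np.linspace(0, 1, 21)):
    """Grid-search a global blend weight that maximises the mean
    no-reference score of the fused image -- a toy analogue of
    optimizing a fusion network against a quality-metric loss."""
    best_alpha, best_score, best_fused = None, -np.inf, None
    for a in alphas:
        fused = a * under + (1 - a) * over
        score = well_exposedness(fused).mean()
        if score > best_score:
            best_alpha, best_score, best_fused = a, score, fused
    return best_fused, best_alpha

# synthetic extreme-exposure pair
under = np.full((8, 8), 0.1)   # under-exposed frame
over = np.full((8, 8), 0.9)    # over-exposed frame
fused, alpha = fuse_pair(under, over)
print(alpha)  # 0.5: the mid-tone blend scores highest
```

In the paper this scalar search is replaced by gradient descent on CNN weights, but the supervision signal plays the same role: the metric itself is the loss, so no reference image is needed.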

    Deep Learning for Accurate Imaging Beyond Device Limitations

    Tohoku University, Doctor of Philosophy (Information Sciences) thesis

    A Perceptually Optimized and Self-Calibrated Tone Mapping Operator

    With the increasing popularity and accessibility of high dynamic range (HDR) photography, tone mapping operators (TMOs) for dynamic range compression are in practical demand. In this paper, we develop a two-stage neural network-based TMO that is self-calibrated and perceptually optimized. In stage one, motivated by the physiology of the early stages of the human visual system, we first decompose an HDR image into a normalized Laplacian pyramid. We then use two lightweight deep neural networks (DNNs) that take the normalized representation as input and estimate the Laplacian pyramid of the corresponding LDR image. We optimize the tone mapping network by minimizing the normalized Laplacian pyramid distance (NLPD), a perceptual metric that aligns with human judgments of tone-mapped image quality. In stage two, the input HDR image is self-calibrated to compute the final LDR image. We feed the same HDR image, rescaled to different maximum luminances, to the learned tone mapping network and generate a pseudo-multi-exposure image stack with varying detail visibility and color saturation. We then train another lightweight DNN to fuse the LDR image stack into the desired LDR image by maximizing a variant of the structural similarity index for multi-exposure image fusion (MEF-SSIM), which has been proven perceptually relevant to fused image quality. The proposed self-calibration mechanism through MEF enables our TMO to accept uncalibrated HDR images while remaining physiology-driven. Extensive experiments show that our method produces images with consistently better visual quality. Additionally, since our method builds upon three lightweight DNNs, it is among the fastest local TMOs.
    Comment: 20 pages, 18 figures
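The first stage above hinges on a normalized Laplacian representation: a band-pass residual divided by a local luminance estimate plus a constant, in the spirit of divisive normalization in early vision. The sketch below shows one pyramid level only, with a box filter as an assumed stand-in for the Gaussian low-pass of a real pyramid; the constant `eps` is illustrative, not the paper's value.

```python
import numpy as np

def box_blur(img, k=3):
    # simple box filter as a stand-in for the Gaussian low-pass
    # used in a real Laplacian pyramid
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def normalized_laplacian_level(img, eps=0.17):
    """One level of a normalized Laplacian decomposition: the band-pass
    residual divided by a local luminance estimate plus a constant,
    loosely following divisive normalization in early vision."""
    low = box_blur(img)
    band = img - low           # Laplacian (band-pass) residual
    return band / (np.abs(low) + eps)

hdr = np.outer(np.linspace(0.0, 1.0, 16), np.ones(16))  # vertical ramp
level = normalized_laplacian_level(hdr)
print(level.shape)  # (16, 16)
```

A full pyramid repeats this at successively downsampled scales; the normalization is what lets networks trained on such inputs handle HDR images spanning very different absolute luminance ranges.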

    Quantification of DNA-associated proteins inside eukaryotic cells using single-molecule localization microscopy

    Development of single-molecule localization microscopy techniques has allowed nanometre-scale localization accuracy inside cells, permitting the resolution of ultra-fine cell structure and the elucidation of crucial molecular mechanisms. Application of these methodologies to understanding processes underlying DNA replication and repair has been limited to defined in vitro biochemical analysis and prokaryotic cells. In order to extend these techniques to eukaryotic systems, we have further developed a photo-activated localization microscopy-based method to directly visualize DNA-associated proteins in unfixed eukaryotic cells. We demonstrate that motion blurring of fluorescence due to protein diffusivity can be used to selectively image the DNA-bound population of proteins. We designed and tested a simple methodology and show that it can be used to detect changes in the DNA binding of a replicative helicase subunit, Mcm4, and of the replication sliding clamp, PCNA, between different stages of the cell cycle and between distinct genetic backgrounds.
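The selection principle in the abstract above can be reduced to a simple statistic: over a camera exposure, a DNA-bound molecule's localizations cluster tightly while a diffusing molecule's spread out (and blur). The toy sketch below classifies simulated tracks by positional spread; the threshold and track format are illustrative assumptions, not the paper's analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def localization_spread(track):
    # standard deviation of a molecule's positions across camera frames;
    # DNA-bound molecules stay put, diffusing ones blur out
    return track.std(axis=0).mean()

def classify_bound(tracks, threshold=0.05):
    """Label a track as DNA-bound (True) when its positional spread
    over the exposure falls below a threshold -- a toy version of
    using motion blurring to select the bound population."""
    return np.array([localization_spread(t) < threshold for t in tracks])

bound = rng.normal(0.0, 0.01, size=(20, 2))     # nearly static molecule
diffusing = rng.normal(0.0, 0.5, size=(20, 2))  # freely diffusing molecule
labels = classify_bound([bound, diffusing])
print(labels)  # [ True False]
```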

    Exposure Fusion for Hand-held Camera Inputs with Optical Flow and PatchMatch

    This paper proposes a hybrid synthesis method for multi-exposure fusion of images taken by hand-held cameras. Motion, whether due to a shaky camera or to dynamic scene content, must be compensated before any content fusion, since any misalignment can easily cause blurring/ghosting artifacts in the fused result. Our hybrid method can deal with such motion while effectively preserving the exposure information of each input. In particular, the proposed method first applies optical flow for coarse registration, which performs well with complex non-rigid motion but produces deformations in regions with missing correspondences. These correspondences are absent because of occlusions caused by scene parallax or moving content. To correct such registration errors, we segment the images into superpixels and identify problematic alignments per superpixel, each of which is then re-aligned by PatchMatch. The method combines the efficiency of optical flow with the accuracy of PatchMatch. After PatchMatch correction, we obtain a fully aligned image stack that facilitates high-quality fusion free from blurring/ghosting artifacts. We compare our method with existing fusion algorithms on various challenging examples, including static/dynamic, indoor/outdoor and daytime/nighttime scenes. Experimental results demonstrate the effectiveness and robustness of our method.
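The pipeline above is coarse global registration first, local correction second. As a minimal sketch of the coarse stage only, the snippet below recovers a global integer shift by exhaustive search; this is an assumed stand-in for the paper's optical-flow stage (which additionally handles non-rigid motion), and the superpixel/PatchMatch refinement is not modelled here.

```python
import numpy as np

def estimate_shift(ref, mov, max_shift=3):
    """Coarse registration by exhaustive integer-shift search: try every
    small (dy, dx) offset and keep the one minimising the mean squared
    difference against the reference exposure."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(mov, dy, axis=0), dx, axis=1)
            err = np.mean((ref - shifted) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

ref = np.zeros((16, 16))
ref[6:10, 6:10] = 1.0                       # a bright square
mov = np.roll(ref, (2, -1), axis=(0, 1))    # simulate camera shake
dy, dx = estimate_shift(ref, mov)
print(dy, dx)  # -2 1: the shift that undoes the simulated motion
```

A real implementation would replace this with dense optical flow and then, as the paper describes, detect superpixels whose alignment residual stays high and patch up only those regions.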

    A new multicompartmental reaction-diffusion modeling method links transient membrane attachment of E. coli MinE to E-ring formation

    Many important cellular processes are regulated by reaction-diffusion (RD) of molecules that takes place both in the cytoplasm and on the membrane. To model and analyze such multicompartmental processes, we developed a lattice-based Monte Carlo method, Spatiocyte, that supports RD in volume and surface compartments at single-molecule resolution. Stochasticity in RD and the excluded volume effect brought about by intracellular molecular crowding, both of which can significantly affect RD and thus cellular processes, are also supported. We verified the method by comparing simulation results for diffusion and for irreversible and reversible reactions with the predicted analytical and best available numerical solutions. Moreover, to directly compare the localization patterns of molecules in fluorescence microscopy images with simulation, we devised a visualization method that mimics the microphotography process by showing the trajectories of simulated molecules averaged according to the camera exposure time. In the rod-shaped bacterium _Escherichia coli_, the division site is suppressed at the cell poles by periodic pole-to-pole oscillations of the Min proteins (MinC, MinD and MinE) arising from carefully orchestrated RD in both cytoplasm and membrane compartments. Using Spatiocyte, we could model and reproduce the _in vivo_ MinDE localization dynamics by accounting for the established properties of MinE. Our results suggest that the MinE ring, which is essential in preventing polar septation, is largely composed of MinE that remains transiently attached to the membrane independently after being recruited by MinD. Overall, Spatiocyte allows simulation and visualization of complex spatial and reaction-diffusion-mediated cellular processes in volumes and on surfaces. As we showed, it can potentially provide mechanistic insights otherwise difficult to obtain experimentally.
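At its core, a lattice-based Monte Carlo RD method advances each molecule by random hops between neighbouring voxels, with reactions fired probabilistically on encounter. The sketch below is a drastically simplified 1D toy in that spirit, with assumptions throughout: a single species, reflecting boundaries instead of a cell geometry, and no membrane compartment or excluded-volume handling, all of which Spatiocyte itself provides.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_rd(n_mol=200, n_sites=101, steps=500):
    """Toy 1D lattice Monte Carlo: every molecule hops to a random
    neighbouring voxel each step, with reflecting boundaries standing
    in for the cell ends. Real simulators add reactions, surface
    compartments and excluded volume on top of this hop loop."""
    pos = np.full(n_mol, n_sites // 2)          # all start at the centre
    for _ in range(steps):
        hop = rng.choice([-1, 1], size=n_mol)
        pos = np.clip(pos + hop, 0, n_sites - 1)
    return pos

pos = simulate_rd()
print(pos.min(), pos.max())  # molecules spread out but stay in the lattice
```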

    Hybrid-Supervised Dual-Search: Leveraging Automatic Learning for Loss-free Multi-Exposure Image Fusion

    Multi-exposure image fusion (MEF) has emerged as a prominent solution to address the limitations of digital imaging in representing varied exposure levels. Despite its advancements, the field grapples with challenges, notably the reliance on manual design of network structures and loss functions, and the constraints of using simulated reference images as ground truths. Consequently, current methodologies often suffer from color distortions and exposure artifacts, further complicating the quest for authentic image representation. To address these challenges, this paper presents a Hybrid-Supervised Dual-Search approach for MEF, dubbed HSDS-MEF, which introduces a bi-level optimization search scheme for the automatic design of both network structures and loss functions. More specifically, we harness a unique dual search mechanism rooted in a novel weighted structure refinement architecture search. Besides, a hybrid supervised contrast constraint seamlessly guides and integrates with the searching process, facilitating a more adaptive and comprehensive search for optimal loss functions. We achieve state-of-the-art performance in comparison to various competitive schemes, yielding 10.61% and 4.38% improvements in Visual Information Fidelity (VIF) for general and no-reference scenarios, respectively, while producing results with high contrast and rich details and colors.
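The bi-level scheme described above nests one optimization inside another: the outer loop searches over design choices (network structure, loss weights), and for each candidate the inner loop trains the model under that choice. The toy below shows only the nested structure, with illustrative assumptions: ridge regression as the inner problem, a penalty strength as the searched "design", and validation error as the outer objective, none of which are HSDS-MEF's actual components.

```python
import numpy as np

def inner_fit(X, y, lam):
    # inner problem: ridge regression with penalty lam (closed form)
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def bilevel_search(X_tr, y_tr, X_val, y_val, lams=(0.01, 0.1, 1.0, 10.0)):
    """Toy bi-level optimization: the outer loop enumerates a design
    choice, the inner loop solves the learning problem under it, and
    the outer objective scores the trained result on held-out data --
    the same nested structure used to search architectures and losses."""
    best_lam, best_err = None, np.inf
    for lam in lams:
        w = inner_fit(X_tr, y_tr, lam)
        err = np.mean((X_val @ w - y_val) ** 2)
        if err < best_err:
            best_lam, best_err = lam, err
    return best_lam

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + rng.normal(scale=0.1, size=40)
lam = bilevel_search(X[:30], y[:30], X[30:], y[30:])
print(lam)
```

In architecture/loss search the outer enumeration is replaced by gradient-based or evolutionary search over a much larger design space, but the inner-train/outer-evaluate nesting is the same.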