General Dynamic Scene Reconstruction from Multiple View Video
This paper introduces a general approach to dynamic scene reconstruction from
multiple moving cameras without prior knowledge or limiting constraints on the
scene structure, appearance, or illumination. Existing techniques for dynamic
scene reconstruction from multiple wide-baseline camera views primarily focus
on accurate reconstruction in controlled environments, where the cameras are
fixed and calibrated and background is known. These approaches are not robust
for general dynamic scenes captured with sparse moving cameras. Previous
approaches for outdoor dynamic scene reconstruction assume prior knowledge of
the static background appearance and structure. The primary contributions of
this paper are twofold: an automatic method for initial coarse dynamic scene
segmentation and reconstruction without prior knowledge of background
appearance or structure; and a general robust approach for joint segmentation
refinement and dense reconstruction of dynamic scenes from multiple
wide-baseline static or moving cameras. Evaluation is performed on a variety of
indoor and outdoor scenes with cluttered backgrounds and multiple dynamic
non-rigid objects such as people. Comparison with state-of-the-art approaches
demonstrates improved accuracy in both multiple view segmentation and dense
reconstruction. The proposed approach also eliminates the requirement for prior
knowledge of scene structure and appearance.
Construction of all-in-focus images assisted by depth sensing
Multi-focus image fusion is a technique for obtaining an all-in-focus image
in which all objects are in focus to extend the limited depth of field (DoF) of
an imaging system. Different from traditional RGB-based methods, this paper
presents a new multi-focus image fusion method assisted by depth sensing. In
this work, a depth sensor is used together with a color camera to capture
images of a scene. A graph-based segmentation algorithm is used to segment the
depth map from the depth sensor, and the segmented regions are used to guide a
focus algorithm to locate in-focus image blocks from among multi-focus source
images to construct the reference all-in-focus image. Five test scenes and six
evaluation metrics were used to compare the proposed method and representative
state-of-the-art algorithms. Experimental results quantitatively demonstrate
that this method outperforms existing methods in both speed and quality (in
terms of comprehensive fusion metrics). The generated images can potentially be
used as reference all-in-focus images.
Comment: 18 pages. This paper has been submitted to Computer Vision and Image Understanding.
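The pipeline described above (segment the depth map, then pick the sharpest source image per region) can be sketched as follows. This is a hypothetical simplification, not the authors' code: `sources` is a list of grayscale multi-focus images and `regions` is an integer label map of the same shape, standing in for the output of the graph-based depth segmentation; the focus measure here is a simple squared-Laplacian response rather than whatever measure the paper uses.

```python
# Hypothetical sketch of depth-guided multi-focus fusion (assumptions noted above).
import numpy as np

def focus_measure(img):
    """Per-pixel squared Laplacian response as a simple sharpness measure."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return lap ** 2

def fuse_all_in_focus(sources, regions):
    """For each segmented region, copy pixels from the source image whose
    mean sharpness inside that region is highest."""
    sharpness = [focus_measure(s) for s in sources]
    fused = np.zeros_like(sources[0])
    for label in np.unique(regions):
        mask = regions == label
        scores = [s[mask].mean() for s in sharpness]
        fused[mask] = sources[int(np.argmax(scores))][mask]
    return fused
```

Selecting one source per region (instead of per pixel) is what the depth segmentation buys: it keeps each physical object's pixels coming from a single, consistently focused exposure.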
Multi-exposure microscopic image fusion-based detail enhancement algorithm
Traditional microscope imaging techniques are unable to retrieve the complete dynamic range of a diatom species with complex silica-based cell walls and multi-scale patterns. In order to extract details from the diatom, multi-exposure images are captured at variable exposure settings using microscopy techniques. A recent innovation shows that image fusion overcomes the limitations of standard digital cameras in capturing details from a high dynamic range scene or specimen photographed using microscopy imaging techniques. In this paper, we present a cell-region sensitive exposure fusion (CS-EF) approach to produce well-exposed fused images that can be displayed directly on conventional display devices. The aim is to preserve details in both poorly and brightly illuminated regions of 3-D transparent diatom shells. This objective is achieved by taking into account local information measures, which select well-exposed regions across the input exposures. In addition, a modified histogram equalization is introduced to improve the uniformity of the input multi-exposure images prior to fusion. Quantitative and qualitative assessments of the proposed fusion results reveal better performance than several state-of-the-art algorithms, which substantiates the method's validity.
This work was supported in part by the Spanish Government, Spain under the AQUALITAS-retos project (Ref. CTM2014-51907-C2-2-R-MINECO) and by Junta de Comunidades de Castilla-La Mancha, Spain under project HIPERDEEP (Ref. SBPLY/19/180501/000273). The funding agencies had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
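The core idea of exposure fusion, weighting each pixel of each exposure by how well-exposed it is and averaging, can be sketched as below. This is a generic Mertens-style simplification, not the CS-EF method itself: the paper's cell-region-sensitive local information measures are replaced here by a plain well-exposedness weight that peaks at mid-grey, and `sigma` is an assumed parameter.

```python
# Simplified exposure-fusion sketch (generic well-exposedness weighting,
# NOT the authors' CS-EF implementation).
import numpy as np

def well_exposedness(img, sigma=0.2):
    """Gaussian weight centred on 0.5: penalises under/over-exposed pixels."""
    return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

def fuse_exposures(exposures):
    """Per-pixel weighted average of an exposure stack with values in [0, 1]."""
    stack = np.stack(exposures)                    # (N, H, W)
    weights = well_exposedness(stack)
    weights /= weights.sum(axis=0, keepdims=True)  # normalise across exposures
    return (weights * stack).sum(axis=0)
```

Feeding an under-, mid-, and over-exposed stack through this gives a fused image dominated by whichever exposure is closest to mid-grey at each pixel, which is the behaviour the abstract describes for dark and bright regions of the diatom shells.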
Enhancing Visibility in Nighttime Haze Images Using Guided APSF and Gradient Adaptive Convolution
Visibility in hazy nighttime scenes is frequently reduced by multiple
factors, including low light, intense glow, light scattering, and the presence
of multicolored light sources. Existing nighttime dehazing methods often
struggle with handling glow or low-light conditions, resulting in either
excessively dark visuals or unsuppressed glow outputs. In this paper, we
enhance the visibility from a single nighttime haze image by suppressing glow
and enhancing low-light regions. To handle glow effects, our framework learns
from the rendered glow pairs. Specifically, a light source aware network is
proposed to detect light sources of night images, followed by the APSF (Angular
Point Spread Function)-guided glow rendering. Our framework is then trained on
the rendered images, resulting in glow suppression. Moreover, we utilize
gradient-adaptive convolution, to capture edges and textures in hazy scenes. By
leveraging extracted edges and textures, we enhance the contrast of the scene
without losing important structural details. To boost low-light intensity, our
network learns an attention map that is then adjusted by gamma correction. This
attention map has high values in low-light regions and low values in haze and glow
regions. Extensive evaluation on real nighttime haze images demonstrates the
effectiveness of our method. Our experiments show that it achieves a PSNR of
30.38 dB, outperforming state-of-the-art methods by 13% on the GTA5 nighttime
haze dataset. Our data and code are available at:
\url{https://github.com/jinyeying/nighttime_dehaze}.
Comment: Accepted to ACM'MM2023, https://github.com/jinyeying/nighttime_dehaze
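The gamma-adjusted attention idea above (high values in dark regions, low values in haze and glow regions, used to boost low-light intensity) can be illustrated with a toy stand-in. This is a hedged sketch, not the paper's network: the learned attention map is replaced by a simple luminance-based heuristic, and `gamma` and `strength` are assumed parameters.

```python
# Toy illustration of a gamma-adjusted low-light attention map
# (luminance heuristic standing in for the learned network).
import numpy as np

def low_light_attention(luminance, gamma=2.2):
    """High values on dark pixels, low on bright ones, gamma-shaped."""
    attention = 1.0 - np.clip(luminance, 0.0, 1.0)  # dark -> close to 1
    return attention ** gamma                       # gamma correction

def boost_low_light(img, gamma=2.2, strength=0.8):
    """Brighten dark regions of a [0, 1] grayscale image, leave bright ones."""
    att = low_light_attention(img, gamma)
    return np.clip(img + strength * att * (1.0 - img), 0.0, 1.0)
```

The gamma exponent sharpens the attention falloff, so bright (haze/glow) pixels receive almost no boost while genuinely dark pixels are lifted strongly, matching the behaviour the abstract attributes to the attention map.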