Blur Interpolation Transformer for Real-World Motion from Blur
This paper studies the challenging problem of recovering motion from blur,
also known as joint deblurring and interpolation or blur temporal
super-resolution. The remaining challenges are twofold: 1) current methods
still leave considerable room for improvement in visual quality, even on
synthetic datasets, and 2) they generalize poorly to real-world data. To
this end, we propose a blur interpolation transformer (BiT) to effectively
unravel the underlying temporal correlation encoded in blur. Based on
multi-scale residual Swin transformer blocks, we introduce dual-end temporal
supervision and temporally symmetric ensembling strategies to generate
effective features for time-varying motion rendering. In addition, we design a
hybrid camera system to collect the first real-world dataset of one-to-many
blur-sharp video pairs. Experimental results show that BiT has a significant
gain over state-of-the-art methods on the public Adobe240 dataset. Moreover,
the proposed real-world dataset effectively helps the model generalize to
real blurry scenarios.
Motion Offset for Blur Modeling
Motion blur caused by relative movement between the camera and the subject is often an undesirable degradation of image quality. In most conventional deblurring methods, a blur kernel is estimated for image deconvolution. Due to the ill-posed nature of the problem, predefined priors are introduced to suppress the ill-posedness. However, these predefined priors can only handle some specific situations. To achieve better deblurring performance on dynamic scenes, deep-learning based methods are proposed to learn a mapping function that restores the sharp image from a blurry image. The blur may be implicitly modelled in the feature extraction module. However, the blur modelled from a paired dataset does not generalize well to some real-world scenes. In summary, an accurate and dynamic blur model that more closely approximates real-world blur is needed.
By revisiting the principle of camera exposure, we can model the blur with the displacements between sharp pixels and the exposed pixel, namely motion offsets. Given specific physical constraints, motion offsets are able to form different exposure trajectories (e.g., linear, quadratic). Compared to a conventional blur kernel, our proposed motion offsets are a more rigorous approximation of real-world blur, since they can constitute a non-linear and non-uniform motion field. By learning from a dynamic-scene dataset, an accurate and spatially variant motion offset field is obtained.
With accurate motion information and a compact blur modeling method, we explore ways of utilizing motion information to facilitate multiple blur-related tasks. By introducing recovered motion offsets, we build a motion-aware and spatially variant convolution. For extracting a video clip from a blurry image, motion offsets can provide an explicit (non-)linear motion trajectory for interpolation. We also work towards better image deblurring performance in real-world scenarios by improving the generalization ability of the deblurring model.
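The exposure principle behind motion offsets can be sketched as follows: a blurry pixel is the average of the sharp image sampled along that pixel's exposure trajectory, where the trajectory is given by per-timestep displacements. This is a minimal forward-model illustration, not the paper's learned model; the function name, array shapes, and nearest-neighbour sampling are our assumptions.

```python
import numpy as np

def blur_from_offsets(sharp, offsets):
    """Synthesize a blurry image by averaging the sharp image sampled
    along each pixel's exposure trajectory.

    sharp   : (H, W) grayscale image
    offsets : (N, H, W, 2) per-pixel motion offsets (dy, dx) at N
              timesteps within the exposure, relative to the exposed pixel
    """
    H, W = sharp.shape
    ys, xs = np.mgrid[0:H, 0:W]
    acc = np.zeros((H, W), dtype=np.float64)
    for t in range(offsets.shape[0]):
        # Nearest-neighbour sampling of the displaced sharp pixels,
        # clamped at the image border.
        sy = np.clip(np.round(ys + offsets[t, ..., 0]).astype(int), 0, H - 1)
        sx = np.clip(np.round(xs + offsets[t, ..., 1]).astype(int), 0, W - 1)
        acc += sharp[sy, sx]
    return acc / offsets.shape[0]
```

A linear trajectory corresponds to offsets that vary linearly with the timestep index; a quadratic trajectory to offsets quadratic in it, which is how the same field can express non-uniform motion.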
Leveraging blur information for plenoptic camera calibration
This paper presents a novel calibration algorithm for plenoptic cameras,
especially the multi-focus configuration, where several types of micro-lenses
are used, based on raw images only. Current calibration methods rely on simplified
projection models, use features from reconstructed images, or require separated
calibrations for each type of micro-lens. In the multi-focus configuration, the
same part of a scene will demonstrate different amounts of blur according to
the micro-lens focal length. Usually, only micro-images with the smallest
amount of blur are used. In order to exploit all available data, we propose to
explicitly model the defocus blur in a new camera model with the help of our
newly introduced Blur Aware Plenoptic (BAP) feature. First, it is used in a
pre-calibration step that retrieves initial camera parameters, and second, to
express a new cost function to be minimized in our single optimization process.
Third, it is exploited to calibrate the relative blur between micro-images. It
links the geometric blur, i.e., the blur circle, to the physical blur, i.e.,
the point spread function. Finally, we use the resulting blur profile to
characterize the camera's depth of field. Quantitative evaluations in a
controlled environment on real-world data demonstrate the effectiveness of our
calibrations.
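The "geometric blur" mentioned above is the classical blur circle of a thin lens; a minimal sketch of that standard relation (not the paper's plenoptic model; function and parameter names are ours):

```python
def blur_circle_diameter(f, aperture, d_focus, d_obj):
    """Diameter of the geometric blur circle (circle of confusion) for a
    thin lens of focal length f and aperture diameter `aperture`, focused
    at distance d_focus, imaging a point at distance d_obj (all in metres,
    with d_focus > f). The blur vanishes at the focus distance and grows
    with defocus.
    """
    return aperture * f * abs(d_obj - d_focus) / (d_obj * (d_focus - f))
```

Linking this geometric quantity to the physical blur typically means relating the blur circle to a point spread function, e.g. taking the standard deviation of a Gaussian PSF proportional to the circle's radius; the exact relative-blur calibration is what the paper's single optimization estimates.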
Automatic quantification of the microvascular density on whole slide images, applied to paediatric brain tumours
Angiogenesis is a key phenomenon for tumour progression, diagnosis and
treatment in brain tumours and more generally in oncology. Presently, its
precise, direct quantitative assessment can only be done on whole tissue
sections immunostained to reveal vascular endothelial cells. But this is a
tremendous task for the pathologist and a challenge for the computer since
digitised whole tissue sections, or whole slide images (WSI), typically
contain around ten gigapixels.
We define and implement an algorithm that automatically determines, on a WSI
at a given objective magnification, the regions of tissue, the regions
without blur, and the regions of large puddles of red blood cells, and
constructs the mask of blur-free, significant tissue on the WSI. Then it
calibrates automatically the optical density ratios of the immunostaining of
the vessel walls and of the counterstaining, performs a colour deconvolution
inside the regions of blur-free tissue, and finds the vessel walls inside these
regions by selecting, on the image resulting from the colour deconvolution,
zones which satisfy a double-threshold criterion. A mask of vessel wall regions
on the WSI is produced. The density of microvessels is finally computed as the
fraction of the area of significant tissue which is occupied by vessel walls.
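The final step above can be sketched as follows: given the deconvolved stain channel and the tissue mask, select vessel-wall pixels with a double-threshold criterion and report the area fraction. Reading the double threshold as hysteresis (weak pixels kept only when connected to a strong one) is our assumption, as are all names and shapes.

```python
import numpy as np

def microvascular_density(stain_od, tissue_mask, t_low, t_high):
    """Estimate microvascular density (MVD) on a deconvolved stain channel.

    stain_od      : (H, W) optical-density map of the vessel-wall stain,
                    e.g. one channel from a colour deconvolution
    tissue_mask   : (H, W) boolean mask of blur-free, significant tissue
    t_low, t_high : the two thresholds; weak pixels (>= t_low) are kept
                    only if 4-connected to a strong pixel (>= t_high)

    Returns the fraction of tissue area occupied by detected vessel walls.
    """
    strong = (stain_od >= t_high) & tissue_mask
    weak = (stain_od >= t_low) & tissue_mask
    # Grow the strong seeds through the weak mask until stable.
    vessels = strong.copy()
    while True:
        grown = vessels.copy()
        grown[1:, :] |= vessels[:-1, :]
        grown[:-1, :] |= vessels[1:, :]
        grown[:, 1:] |= vessels[:, :-1]
        grown[:, :-1] |= vessels[:, 1:]
        grown &= weak
        if np.array_equal(grown, vessels):
            break
        vessels = grown
    return vessels.sum() / tissue_mask.sum()
```

On a ten-gigapixel WSI this would run tile by tile on the masked regions; the density is simply the ratio of accumulated vessel-wall area to accumulated tissue area.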
We apply this algorithm to a set of 186 WSI of paediatric brain tumours from
World Health Organisation grades I to IV. The segmentations are of very good
quality although the set of slides is very heterogeneous. The computation time
is of the order of a fraction of an hour for each WSI on a modest computer. The
computed microvascular density is found to be robust and strongly correlates
with the tumour grade.
This method requires no training and can easily be applied to other tumour
types and other stainings.
Improved Handling of Motion Blur in Online Object Detection
We wish to detect specific categories of objects for online vision systems that will run in the real world. Object detection is already very challenging. It is even harder when the images are blurred, from the camera being in a car or a hand-held phone. Most existing efforts either focus on sharp images, with easy-to-label ground truth, or treat motion blur as one of many generic corruptions. Instead, we focus especially on the details of egomotion-induced blur. We explore five classes of remedies, where each targets different potential causes for the performance gap between sharp and blurred images. For example, first deblurring an image changes its human interpretability but, at present, only partly improves object detection. The other four classes of remedies address multi-scale texture, out-of-distribution testing, label generation, and conditioning by blur type. Surprisingly, we discover that custom label generation aimed at resolving spatial ambiguity, ahead of all others, markedly improves object detection. Also, in contrast to findings from classification, we see a noteworthy boost by conditioning our model on bespoke categories of motion blur. We validate and cross-breed the different remedies experimentally on blurred COCO images and real-world blur datasets, producing an easy and practical favorite model with superior detection rates.