Super-resolution microscopy live cell imaging and image analysis
Novel fundamental research results have provided new techniques going beyond the diffraction limit. These recent advances, known as super-resolution microscopy, have been recognized with the Nobel Prize, as they promise new discoveries in biology and the life sciences. All of these techniques rely on complex signal and image processing. Their applicability in biology, and particularly for live cell imaging, remains challenging and needs further investigation. Focusing on image processing and analysis, the thesis is devoted to a significant enhancement of the structured illumination microscopy (SIM) and super-resolution optical fluctuation imaging (SOFI) methods towards fast live cell and quantitative imaging. The thesis presents a novel image reconstruction method for both 2D and 3D SIM data that is compatible with weak signals and robust against unwanted image artifacts. This image reconstruction is efficient under low light conditions, reduces phototoxicity and facilitates live cell observations. We demonstrate the performance of our new method by imaging long super-resolution video sequences of live U2-OS cells and improving cell particle tracking. We develop an adapted 3D deconvolution algorithm for SOFI, which suppresses noise and makes 3D SOFI live cell imaging feasible by reducing the number of required input images. We introduce a novel linearization procedure for SOFI that maximizes the resolution gain, and we show that SOFI and PALM can both be applied to the same dataset, revealing more insights about the sample. This PALM and SOFI concept provides an enlarged quantitative imaging framework, allowing unprecedented functional exploration of the sample through the estimation of molecular parameters. For quantifying the outcome of our super-resolution methods, the thesis presents a novel methodology for objective image quality assessment, measuring spatial resolution and signal-to-noise ratio in real samples.
We demonstrate our enhanced SOFI framework by high throughput 3D imaging of live HeLa cells, acquiring the whole super-resolution 3D image in 0.95 s; by investigating focal adhesions in live MEF cells; by fast optical readout of fluorescently labelled DNA strands; and by unraveling the nanoscale organization of CD4 proteins on the plasma membrane of T-cells. Within the thesis, unique open-source software packages, SIMToolbox and a SOFI simulation tool, were developed to facilitate the implementation of super-resolution microscopy methods.
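The SOFI principle the abstract builds on can be illustrated with a minimal sketch: SOFI computes temporal cumulants of the intensity fluctuations of blinking emitters, so pixels that fluctuate independently are sharpened while static background vanishes. This is not the thesis's actual reconstruction pipeline (which includes linearization and 3D deconvolution); the function and variable names below are our own illustrative choices.

```python
import numpy as np

def sofi2(stack):
    """Second-order SOFI image: per-pixel temporal variance (the 2nd
    auto-cumulant) of the fluctuation around the mean intensity.
    `stack` has shape (T, H, W): T frames of a blinking-emitter movie."""
    delta = stack - stack.mean(axis=0, keepdims=True)  # intensity fluctuations
    return (delta ** 2).mean(axis=0)                   # 2nd-order cumulant image

# Toy movie: one "blinking" pixel switches on and off; background is constant.
rng = np.random.default_rng(0)
movie = np.zeros((100, 4, 4))
movie[:, 1, 1] = rng.choice([0.0, 1.0], size=100)  # on/off blinking emitter
img = sofi2(movie)
```

The cumulant image is bright only where the signal fluctuates, which is why SOFI suppresses constant background and uncorrelated noise averages toward zero as the number of frames grows.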
UG^2: a Video Benchmark for Assessing the Impact of Image Restoration and Enhancement on Automatic Visual Recognition
Advances in image restoration and enhancement techniques have led to
discussion about how such algorithms can be applied as a pre-processing step to
improve automatic visual recognition. In principle, techniques like deblurring
and super-resolution should yield improvements by de-emphasizing noise and
increasing signal in an input image. But the historically divergent goals of
the computational photography and visual recognition communities have created a
significant need for more work in this direction. To facilitate new research,
we introduce a new benchmark dataset called UG^2, which contains three
difficult real-world scenarios: uncontrolled videos taken by UAVs and manned
gliders, as well as controlled videos taken on the ground. Over 160,000
annotated frames for hundreds of ImageNet classes are available, which are used
for baseline experiments that assess the impact of known and unknown image
artifacts and other conditions on common deep learning-based object
classification approaches. Further, current image restoration and enhancement
techniques are evaluated by determining whether or not they improve baseline
classification performance. Results show that there is plenty of room for
algorithmic innovation, making this dataset a useful tool going forward.
Comment: Supplemental material: https://goo.gl/vVM1xe, Dataset:
https://goo.gl/AjA6En, CVPR 2018 Prize Challenge: ug2challenge.or
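The evaluation protocol the abstract describes — testing whether a restoration step helps or hurts downstream recognition — can be sketched as a simple accuracy comparison. The function names and the toy "classifier" below are ours, standing in for a deep network and a real enhancement algorithm.

```python
def top1_accuracy(classify, frames, labels):
    """Fraction of frames whose predicted class matches the label."""
    correct = sum(classify(f) == y for f, y in zip(frames, labels))
    return correct / len(frames)

def enhancement_gain(classify, enhance, frames, labels):
    """UG^2-style comparison: run the same classifier on raw frames and on
    enhanced frames; a positive gain means pre-processing helped."""
    baseline = top1_accuracy(classify, frames, labels)
    enhanced = top1_accuracy(classify, [enhance(f) for f in frames], labels)
    return enhanced - baseline

# Toy example: the "classifier" only recognizes non-negative inputs, and the
# "enhancement" repairs a sign-flip corruption in half the frames.
classify = lambda x: int(x >= 0)
enhance = lambda x: abs(x)
frames, labels = [-2.0, 3.0, -1.0, 5.0], [1, 1, 1, 1]
gain = enhancement_gain(classify, enhance, frames, labels)
```

The interesting case the benchmark probes is when this gain is negative: an enhancement that looks better to humans can still destroy features the classifier relies on.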
Learned Perceptual Image Enhancement
Learning a typical image enhancement pipeline involves minimization of a loss
function between enhanced and reference images. While L1 and L2 losses are
perhaps the most widely used functions for this purpose, they do not
necessarily lead to perceptually compelling results. In this paper, we show
that adding a learned no-reference image quality metric to the loss can
significantly improve enhancement operators. This metric is implemented using a
CNN (convolutional neural network) trained on a large-scale dataset labelled
with aesthetic preferences of human raters. This loss allows us to conveniently
perform back-propagation in our learning framework to simultaneously optimize
for similarity to a given ground truth reference and perceptual quality. This
perceptual loss is only used to train parameters of image processing operators,
and does not impose any extra complexity at inference time. Our experiments
demonstrate that this loss can be effective for tuning a variety of operators
such as local tone mapping and dehazing.
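The loss structure this abstract describes — a fidelity term to the reference plus a learned no-reference quality penalty — can be sketched numerically. The quality CNN is replaced here by a hand-written stand-in (`quality_fn`), and the weight `w` and all names are our assumptions, not the paper's actual values.

```python
import numpy as np

def combined_loss(enhanced, reference, quality_fn, w=0.1):
    """Fidelity (L1 to the ground-truth reference) plus a no-reference
    perceptual penalty. `quality_fn` stands in for the aesthetics CNN:
    it maps an image to a score in [0, 1], higher meaning better looking."""
    fidelity = np.abs(enhanced - reference).mean()
    perceptual = 1.0 - quality_fn(enhanced)   # penalize low predicted quality
    return fidelity + w * perceptual

# Stand-in "quality network": prefers images with some contrast.
quality = lambda img: float(np.clip(img.std() / 0.5, 0.0, 1.0))
ref = np.linspace(0.0, 1.0, 16)               # high-contrast reference
flat = np.full(16, 0.5)                       # low-contrast candidate
loss_flat = combined_loss(flat, ref, quality)
loss_ref = combined_loss(ref, ref, quality)
```

Because the perceptual term is only part of the training loss, it adds no cost at inference time, exactly as the abstract notes: the trained operator runs unchanged.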
Image enhancement from a stabilised video sequence
The aim of video stabilisation is to create a new video sequence where the motions (i.e. rotations, translations) and scale differences between frames (or parts of a frame) have effectively been removed. These stabilisation effects can be obtained via digital video processing techniques which use the information extracted from the video sequence itself, with no need for additional hardware or knowledge about camera physical motion.
A video sequence usually contains a large overlap between successive frames, and regions of the same scene are sampled at different positions. In this paper, this multiple sampling is combined to achieve images with a higher spatial resolution. Higher resolution imagery plays an important role in assisting the identification of people, vehicles, structures or objects of interest captured by surveillance cameras or by video cameras used in face recognition, traffic monitoring, traffic law enforcement, driver assistance and automatic vehicle guidance systems.
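The core idea of combining multiple samplings of the same scene can be sketched with a shift-and-add fusion: align each stabilised frame by its estimated offset, then average. This is a simplified illustration with integer shifts only (real multi-frame super-resolution exploits sub-pixel offsets); all names are ours.

```python
import numpy as np

def shift_and_add(frames, shifts):
    """Align frames of a stabilised sequence by their estimated (dy, dx)
    offsets, then average. Averaging several samplings of the same scene
    suppresses noise, a simple form of the multi-frame enhancement above."""
    aligned = [np.roll(f, (-dy, -dx), axis=(0, 1))
               for f, (dy, dx) in zip(frames, shifts)]
    return np.mean(aligned, axis=0)

# Synthetic test: the same scene observed at four known offsets with noise.
rng = np.random.default_rng(1)
scene = rng.random((8, 8))
shifts = [(0, 0), (1, 0), (0, 1), (1, 1)]
noisy = [np.roll(scene, s, axis=(0, 1)) + 0.1 * rng.standard_normal((8, 8))
         for s in shifts]
fused = shift_and_add(noisy, shifts)
```

With N aligned frames the noise variance drops by a factor of N, which is why the fused image tracks the true scene more closely than any single frame.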
WESPE: Weakly Supervised Photo Enhancer for Digital Cameras
Low-end and compact mobile cameras demonstrate limited photo quality mainly
due to space, hardware and budget constraints. In this work, we propose a deep
learning solution that translates photos taken by cameras with limited
capabilities into DSLR-quality photos automatically. We tackle this problem by
introducing a weakly supervised photo enhancer (WESPE) - a novel image-to-image
Generative Adversarial Network-based architecture. The proposed model is
trained under weak supervision: unlike previous works, there is no need for
strong supervision in the form of a large annotated dataset of aligned
original/enhanced photo pairs. The sole requirement is two distinct datasets:
one from the source camera, and one composed of arbitrary high-quality images
that can be generally crawled from the Internet - the visual content they
exhibit may be unrelated. Hence, our solution is repeatable for any camera:
collecting the data and training can be achieved in a couple of hours. In this
work, we emphasize extensive evaluation of the obtained results. Besides
standard objective metrics and subjective user study, we train a virtual rater
in the form of a separate CNN that mimics human raters on Flickr data and use
this network to get reference scores for both original and enhanced photos. Our
experiments on the DPED, KITTI and Cityscapes datasets as well as pictures from
several generations of smartphones demonstrate that WESPE produces results
comparable to or better than those of state-of-the-art strongly supervised
methods.
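The weak-supervision idea in the abstract — no aligned before/after pairs — can be sketched as a two-term generator objective: an adversarial term scored by a discriminator that has only seen unpaired high-quality photos, and a content term that maps the enhanced image back toward the source. This is a generic GAN-style sketch of the setup the abstract describes, not the paper's exact loss; `cycled`, `discriminator` and the weight are our stand-ins.

```python
import numpy as np

def weakly_supervised_loss(source, enhanced, cycled, discriminator, adv_w=0.1):
    """Sketch of a weakly supervised enhancement objective: (a) stay faithful
    to the source by mapping the enhanced image back (`cycled`) and comparing
    to the input, and (b) fool a discriminator trained only on unpaired
    high-quality images. `discriminator` returns P(real) in (0, 1)."""
    content = np.mean((source - cycled) ** 2)       # content preservation
    adversarial = -np.log(discriminator(enhanced))  # realism vs unpaired set
    return content + adv_w * adversarial

# Degenerate check: identity "enhancement" with an undecided discriminator.
source = np.zeros((4, 4))
loss = weakly_supervised_loss(source, source, source, lambda img: 0.5)
```

Because the realism term needs only an unpaired set of good photos, the two required datasets in the abstract (source-camera images plus crawled high-quality images) are sufficient for training.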