Reflectance Transformation Imaging (RTI) System for Ancient Documentary Artefacts
This tutorial summarises our uses of reflectance transformation imaging in archaeological contexts. It introduces the UK AHRC-funded project Reflectance Transformation Imaging for Ancient Documentary Artefacts and demonstrates its imaging methodologies.
A deep learning framework for quality assessment and restoration in video endoscopy
Endoscopy is a routine imaging technique used for both diagnosis and
minimally invasive surgical treatment. Artifacts such as motion blur, bubbles,
specular reflections, floating objects and pixel saturation impede the visual
interpretation and the automated analysis of endoscopy videos. Given the
widespread use of endoscopy in different clinical applications, we contend that
the robust and reliable identification of such artifacts and the automated
restoration of corrupted video frames is a fundamental medical imaging problem.
Existing state-of-the-art methods only deal with the detection and restoration
of selected artifacts. However, endoscopy videos typically contain numerous
artifacts, which motivates establishing a comprehensive solution.
We propose a fully automatic framework that can: 1) detect and classify six
different primary artifacts, 2) provide a quality score for each frame and 3)
restore mildly corrupted frames. To detect the different artifacts, our framework
exploits a fast multi-scale, single-stage convolutional neural network detector.
We introduce a quality metric to assess frame quality and predict image
restoration success. Generative adversarial networks with carefully chosen
regularization are finally used to restore corrupted frames.
Our detector yields the highest mean average precision (mAP at 5% threshold)
of 49.0 and the lowest computational time of 88 ms, allowing for accurate
real-time processing. Our restoration models for blind deblurring, saturation
correction and inpainting demonstrate significant improvements over previous
methods. On a set of 10 test videos, we show that our approach preserves an
average of 68.7% of frames, which is 25% more than are retained from the raw
videos.
Comment: 14 page
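The per-frame quality score described above could be sketched as a weighted aggregation of detected artifact areas. The artifact classes, severity weights, and scoring rule below are illustrative assumptions, not the paper's actual metric:

```python
import numpy as np

# Hypothetical artifact classes and severity weights; these values are
# assumptions for illustration, not taken from the paper.
WEIGHTS = {"blur": 0.5, "bubbles": 0.3, "specularity": 0.4,
           "saturation": 0.6, "contrast": 0.2, "misc_artifact": 0.3}

def frame_quality(detections, frame_area):
    """Score a frame in [0, 1]: 1.0 means clean, lower means more corrupted.

    detections: list of (class_name, box_area) pairs from a detector.
    frame_area: total pixel area of the frame.
    """
    # Penalize each detection by its severity weight times the fraction
    # of the frame it covers, then clamp at zero.
    penalty = sum(WEIGHTS[cls] * (area / frame_area)
                  for cls, area in detections)
    return max(0.0, 1.0 - penalty)
```

Frames scoring above some threshold would be kept as-is, mildly corrupted ones routed to restoration, and the rest discarded.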
Acquisition of Surface Light Fields from Videos
The thesis presents a new approach for estimating the Surface Light Field of real objects from video sequences acquired under fixed, uncontrolled lighting conditions. The proposed method is based on separating the two main components of the object's surface appearance: the diffuse component, modelled as an RGB colour, and the specular component, approximated by a parametric model that is a function of the observer's position.
The reconstructed surface appearance enables photorealistic, real-time visualisation of the object as the observer's position changes, allowing interactive 3D navigation.
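The diffuse-plus-specular split described above can be illustrated with a minimal shading sketch. The Phong-style specular lobe and its parameters are an illustrative choice, not necessarily the parametric model used in the thesis:

```python
import numpy as np

def shade(diffuse_rgb, normal, view_dir, light_dir,
          spec_strength=0.5, shininess=32):
    """View-independent diffuse colour plus a view-dependent specular term."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    # Mirror reflection of the light direction about the surface normal.
    r = 2.0 * np.dot(n, l) * n - l
    # Specular highlight peaks when the view aligns with the reflection.
    specular = spec_strength * max(np.dot(r, v), 0.0) ** shininess
    return np.asarray(diffuse_rgb, dtype=float) + specular
```

Only the specular term changes as the observer moves, which is what makes the real-time re-rendering at novel viewpoints cheap once the diffuse layer is fixed.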
Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery
One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite to the registration of multi-modal patient-specific data for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions
Separation and contrast enhancement of overlapping cast shadow components using polarization
Shadows are an inseparable aspect of all natural scenes. When there are multiple light sources or multiple reflections, several different shadows may overlap at the same location and create complicated patterns. Shadows are a potentially good source of information about a scene if the shadow regions can be properly identified and segmented. However, shadow region identification and segmentation is a difficult task, and improperly identified shadows often interfere with machine vision tasks such as object recognition and tracking. We propose a new shadow separation and contrast enhancement method based on the polarization of light. Polarization information of the scene, captured by our polarization-sensitive camera, is shown to separate shadows from different light sources effectively. Such shadow separation is almost impossible to achieve with conventional, polarization-insensitive imaging
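The polarization-based separation rests on standard polarimetric imaging: from intensity images captured through a linear polarizer at 0°, 45°, and 90°, one can recover the linear Stokes components and the degree of linear polarization (DoLP), in which shadows cast by differently polarized sources become distinguishable. The sketch below shows the standard Stokes recovery, not the paper's specific separation pipeline:

```python
import numpy as np

def linear_stokes(i0, i45, i90):
    """Linear Stokes components from polarizer-angle intensity images."""
    s0 = i0 + i90          # total intensity
    s1 = i0 - i90          # horizontal vs. vertical preference
    s2 = 2.0 * i45 - s0    # +45 deg vs. -45 deg preference
    return s0, s1, s2

def dolp(s0, s1, s2, eps=1e-8):
    """Degree of linear polarization in [0, 1] per pixel."""
    return np.sqrt(s1 ** 2 + s2 ** 2) / (s0 + eps)
```

Unpolarized regions yield a DoLP near 0, while fully linearly polarized light yields a DoLP near 1, so thresholding or clustering the DoLP map is one plausible route to separating the overlapping shadow components.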
Estimating Reflectance Layer from A Single Image: Integrating Reflectance Guidance and Shadow/Specular Aware Learning
Estimating the reflectance layer from a single image is a challenging task. It
becomes more challenging when the input image contains shadows or specular
highlights, which often render an inaccurate estimate of the reflectance layer.
Therefore, we propose a two-stage learning method, including reflectance
guidance and a Shadow/Specular-Aware (S-Aware) network to tackle the problem.
In the first stage, an initial reflectance layer free from shadows and
specularities is obtained with the constraint of novel losses that are guided
by prior-based shadow-free and specular-free images. To further enforce the
reflectance layer's independence from shadows and specularities in the
second-stage refinement, we introduce an S-Aware network that distinguishes the
reflectance image from the input image. Our network employs a classifier to
categorize shadow/shadow-free, specular/specular-free classes, enabling the
activation features to function as attention maps that focus on shadow/specular
regions. Our quantitative and qualitative evaluations show that our method
outperforms the state-of-the-art methods in estimating a reflectance layer
free from shadows and specularities.
Comment: Accepted to AAAI202
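The idea of reusing a classifier's spatial activations as attention maps can be sketched in a few lines. The shapes and the sigmoid gating below are illustrative assumptions, not the S-Aware network's actual architecture:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def apply_attention(features, activations):
    """Reweight a (C, H, W) feature tensor by a classifier's (H, W) activations.

    High activations (confident shadow/specular evidence) gate the features
    toward 1, low activations toward ~0.5; the gating choice is an assumption.
    """
    attn = sigmoid(activations)          # (H, W) map in (0, 1)
    return features * attn[None, :, :]   # broadcast over the channel axis
```

The point of the construction is that a shadow/shadow-free classifier learns, as a by-product, *where* the shadow evidence is, and those spatial activations can steer the refinement toward exactly those regions.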
Fully-automatic inverse tone mapping algorithm based on dynamic mid-level tone mapping
High Dynamic Range (HDR) displays can show images with higher color contrast levels and peak luminosities than common Low Dynamic Range (LDR) displays. However, most existing video content is recorded and/or graded in LDR format. To show LDR content on HDR displays, it needs to be up-scaled using a so-called inverse tone mapping algorithm. Several techniques for inverse tone mapping have been proposed in recent years, ranging from simple approaches based on global and local operators to more advanced algorithms such as neural networks. Drawbacks of existing inverse tone mapping techniques include the need for human intervention, the high computation time of the more advanced algorithms, limited peak brightness, and failure to preserve artistic intent. In this paper, we propose a fully-automatic inverse tone mapping operator based on mid-level mapping capable of real-time video processing. Our proposed algorithm expands LDR images into HDR images with peak brightness over 1000 nits while preserving the artistic intentions inherent to the HDR domain. We assessed our results using the full-reference objective quality metrics HDR-VDP-2.2 and DRIM, and by carrying out a subjective pair-wise comparison experiment. We compared our results with those obtained with the most recent methods found in the literature. Experimental results demonstrate that our proposed method outperforms the current state-of-the-art simple inverse tone mapping methods, and its performance is similar to that of more complex and time-consuming advanced techniques
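To make the expansion idea concrete, here is a toy *global* inverse tone mapping operator that stretches normalized LDR luminance to an HDR peak with a power curve. The exponent and peak value are illustrative parameters; this is not the paper's dynamic mid-level operator:

```python
import numpy as np

def expand_ldr(ldr, peak_nits=1000.0, gamma=2.2):
    """Map normalized LDR values in [0, 1] to absolute HDR luminance (nits).

    A power curve keeps mid-tones compressed and pushes only the brightest
    pixels toward the display peak; the parameters are assumptions.
    """
    ldr = np.clip(np.asarray(ldr, dtype=float), 0.0, 1.0)
    return peak_nits * ldr ** gamma
```

Real operators like the one in the paper adapt this curve per frame (and preserve mid-level tones explicitly) rather than using one fixed global exponent.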
New Reflection Transformation Imaging Methods for Rock Art and Multiple-Viewpoint Display