Full Reference Objective Quality Assessment for Reconstructed Background Images
With an increased interest in applications that require a clean background
image, such as video surveillance, object tracking, street view imaging and
location-based services on web-based maps, multiple algorithms have been
developed to reconstruct a background image from cluttered scenes.
Traditionally, statistical measures and existing image quality techniques have
been applied for evaluating the quality of the reconstructed background images.
Though these quality assessment methods have been widely used in the past,
their performance in evaluating the perceived quality of the reconstructed
background image has not been verified. In this work, we discuss the
shortcomings in existing metrics and propose a full reference Reconstructed
Background image Quality Index (RBQI) that combines color and structural
information at multiple scales using a probability summation model to predict
the perceived quality in the reconstructed background image given a reference
image. To compare the performance of the proposed quality index with existing
image quality assessment measures, we construct two different datasets
consisting of reconstructed background images and corresponding subjective
scores. The quality assessment measures are evaluated by correlating their
objective scores with human subjective ratings. The correlation results show
that the proposed RBQI outperforms all the existing approaches. Additionally,
the constructed datasets and the corresponding subjective scores provide a
benchmark to evaluate the performance of future metrics that are developed to
evaluate the perceived quality of reconstructed background images.
Comment: Associated source code: https://github.com/ashrotre/RBQI, Associated
Database:
https://drive.google.com/drive/folders/1bg8YRPIBcxpKIF9BIPisULPBPcA5x-Bk?usp=sharing
(Email for permissions at: ashrotre@asu.edu)
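The evaluation protocol described in this abstract, correlating objective metric scores with human subjective ratings, is typically done with Spearman's rank-order correlation (SROCC). The following is a minimal sketch of that computation, not code from the paper; the function names are my own.

```python
import numpy as np

def rankdata(x):
    """Assign ranks 1..n, averaging ranks over ties."""
    x = np.asarray(x, dtype=float)
    order = np.argsort(x)
    ranks = np.empty(len(x), dtype=float)
    ranks[order] = np.arange(1, len(x) + 1)
    for v in np.unique(x):           # average the ranks of tied values
        mask = x == v
        ranks[mask] = ranks[mask].mean()
    return ranks

def srocc(objective, subjective):
    """Spearman correlation between metric scores and subjective ratings:
    Pearson correlation computed on the ranks of each score vector."""
    ro, rs = rankdata(objective), rankdata(subjective)
    ro -= ro.mean()
    rs -= rs.mean()
    return float((ro * rs).sum() / np.sqrt((ro ** 2).sum() * (rs ** 2).sum()))
```

A metric whose scores order images exactly as the subjective ratings do yields an SROCC of 1.0; a perfectly inverted ordering yields -1.0.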
Image Quality Improvement of Medical Images using Deep Learning for Computer-aided Diagnosis
Retina image analysis is an important screening tool for early detection of multiple diseases, such as diabetic retinopathy, which greatly impairs visual function. Image analysis and pathology detection can be accomplished both by ophthalmologists and by the
use of computer-aided diagnosis systems. Advancements in hardware technology led to
more portable and less expensive imaging devices for medical image acquisition. This
promotes large scale remote diagnosis by clinicians as well as the implementation of
computer-aided diagnosis systems for local routine disease screening. However, lower-cost equipment generally results in inferior-quality images. This may jeopardize the
reliability of the acquired images and thus hinder the overall performance of the diagnostic tool. To address this open challenge, we carried out an in-depth study on using different
deep learning-based frameworks for improving retina image quality while maintaining
the underlying morphological information for the diagnosis. Our results demonstrate
that using a Cycle Generative Adversarial Network for unpaired image-to-image translation leads to successful transformations of retina images from a low- to a high-quality
domain. The visual evidence of this improvement was quantitatively affirmed by the two
proposed validation methods. The first used a retina image quality classifier to confirm a
significant prediction label shift towards quality enhancement. On average, a 50% increase
in images classified as high-quality was verified. The second analysed the performance changes of a diabetic retinopathy detection algorithm upon being trained
with the quality-improved images. The latter led to strong evidence that the proposed
solution satisfies the requirement of maintaining the images’ original information for
diagnosis, and that it yields a pathology assessment more sensitive to the presence of
pathological signs. These experimental results confirm the potential effectiveness of our
solution in improving retina image quality for diagnosis. Along with the addressed contributions, we analysed how the construction of the data sets representing the low-quality
domain impacts the quality translation efficiency. Our findings suggest that by tackling
the problem more selectively, that is, constructing data sets that are more homogeneous in terms
of their image defects, we can obtain more pronounced quality transformations.
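The unpaired translation described in this abstract relies on CycleGAN's cycle-consistency objective, which penalizes a round trip through both generators that fails to reproduce the input. The sketch below shows that loss term only, using plain NumPy functions as stand-in generators; it is an illustration of the standard CycleGAN formulation, not the thesis's actual training code.

```python
import numpy as np

def cycle_consistency_loss(G, F, x, y, lam=10.0):
    """CycleGAN cycle loss: lam * (||F(G(x)) - x||_1 + ||G(F(y)) - y||_1),
    where G maps domain X (low quality) to Y (high quality) and F maps back."""
    forward = np.abs(F(G(x)) - x).mean()    # x -> Y -> back to X
    backward = np.abs(G(F(y)) - y).mean()   # y -> X -> back to Y
    return lam * (forward + backward)
```

When G and F are exact inverses the loss is zero; any information the round trip destroys (e.g. the retinal morphology the thesis aims to preserve) shows up as a positive penalty.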
Kinect Range Sensing: Structured-Light versus Time-of-Flight Kinect
Recently, the new Kinect One has been released by Microsoft, providing the next
generation of real-time range sensing devices based on the Time-of-Flight (ToF)
principle. Since the first Kinect version used a structured-light approach,
one would expect various differences in the characteristics of the range data
delivered by both devices. This paper presents a detailed and in-depth
comparison between both devices. In order to conduct the comparison, we propose
a framework of seven different experimental setups, which is a generic basis
for evaluating range cameras such as Kinect. The experiments have been designed
with the goal of capturing individual effects of the Kinect devices in as isolated
a manner as possible, and in a way that allows them to be applied
to any other range-sensing device. The overall goal of this paper is to provide
a solid insight into the pros and cons of either device. Thus, scientists who
are interested in using Kinect range sensing cameras in their specific
application scenario can directly assess the expected, specific benefits and
potential problems of either device.
Comment: 58 pages, 23 figures. Accepted for publication in Computer Vision and
Image Understanding (CVIU).
Boosting Cross-Quality Face Verification using Blind Face Restoration
In recent years, various Blind Face Restoration (BFR) techniques have been
developed. These techniques transform low-quality faces suffering from multiple
degradations to more realistic and natural face images with high perceptual
quality. However, it is crucial for the task of face verification to not only
enhance the perceptual quality of the low quality images but also to improve
the biometric-utility face quality metrics. Furthermore, preserving the
valuable identity information is of great importance. In this paper, we
investigate the impact of applying three state-of-the-art blind face
restoration techniques, namely GFP-GAN, GPEN and SGPN, on the performance of
a face verification system under a very challenging environment characterized by
very low-quality images. Extensive experimental results on the recently
proposed cross-quality LFW database using three state-of-the-art deep face
recognition models demonstrate the effectiveness of GFP-GAN in boosting
the face verification accuracy significantly.
Comment: paper accepted at the BIOSIG 2023 conference
Evaluation of different segmentation-based approaches for skin disorders from dermoscopic images
Bachelor's thesis in Biomedical Engineering. Faculty of Medicine and Health Sciences, Universitat de Barcelona. Academic year: 2022-2023. Tutors/Directors: Sala Llonch, Roser; Mata Miquel, Christian; Munuera, Josep.
Skin cancer is the most common type of cancer in the world, and its incidence has been increasing over the past decades. Even with the most complex and advanced technologies, current image acquisition systems do not permit a reliable identification of the skin lesion by visual examination, due to the challenging structure of the malignancy. This motivates the implementation of automatic skin lesion segmentation methods to assist in physicians' diagnosis when determining the lesion's region, and to serve as a preliminary step for the classification of the skin lesion. Accurate and precise segmentation is crucial for rigorous screening and monitoring of the disease's progression.
To address this concern, the present project aims to provide a state-of-the-art review of the most predominant conventional segmentation models for skin lesion segmentation, along with a market analysis. With the rise of automatic segmentation tools, a wide number of algorithms are currently in use, but they present many drawbacks when employed for dermatological disorders, due to the high prevalence of artefacts in the acquired images.
In light of the above, three segmentation techniques have been selected for this work: the level set method, an algorithm combining the GrabCut and k-means methods, and an automatic intensity-based algorithm developed by the Hospital Sant Joan de DĂ©u de Barcelona research group. In addition, their performance is validated with a view to their future implementation in clinical practice. The proposed methods were evaluated using a publicly available skin lesion image database.
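One of the techniques this thesis evaluates combines GrabCut with k-means clustering. As a minimal, self-contained illustration of the k-means half of that idea, the sketch below clusters pixel intensities into two groups (lesion vs. background); it is a generic two-cluster k-means on intensities, not the thesis's actual pipeline, and the function name is my own.

```python
import numpy as np

def kmeans_threshold(intensities, iters=20):
    """Two-cluster k-means on pixel intensities, a crude lesion/background split.
    Returns per-pixel cluster labels (same shape as the input) and the centroids."""
    x = np.asarray(intensities, dtype=float).ravel()
    c = np.array([x.min(), x.max()])          # initialise centroids at the extremes
    for _ in range(iters):
        # assign each pixel to its nearest centroid
        labels = np.abs(x[:, None] - c[None, :]).argmin(axis=1)
        # recompute each centroid as the mean of its assigned pixels
        for k in range(2):
            if np.any(labels == k):
                c[k] = x[labels == k].mean()
    return labels.reshape(np.shape(intensities)), c
```

In a real dermoscopic pipeline this intensity-only split would be a seed for GrabCut rather than a final mask, since hairs, rulers and other artefacts share intensities with the lesion.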
Fast single image defogging with robust sky detection
Haze is a source of unreliability for computer vision applications in outdoor scenarios, and it is usually caused by atmospheric conditions. The Dark Channel Prior (DCP) has shown remarkable results in image defogging, with three main limitations: 1) high time consumption, 2) artifact generation, and 3) sky-region over-saturation. Therefore, current work has focused on improving processing time without losing restoration quality and without introducing image artifacts during defogging. Hence, in this research, a novel methodology based on depth approximations through the DCP, local Shannon entropy, and the Fast Guided Filter is proposed for reducing artifacts and improving image recovery in sky regions with low computation time. The proposed method's performance is assessed using more than 500 images from three datasets: the Hybrid Subjective Testing Set from Realistic Single Image Dehazing (HSTS-RESIDE), the Synthetic Objective Testing Set from RESIDE (SOTS-RESIDE) and HazeRD. Experimental results demonstrate that the proposed approach outperforms state-of-the-art methods in the reviewed literature, which is validated qualitatively and quantitatively through the Peak Signal-to-Noise Ratio (PSNR), the Naturalness Image Quality Evaluator (NIQE) and the Structural SIMilarity (SSIM) index on retrieved images, considering different visual ranges under distinct illumination and contrast conditions. Analyzing images with various resolutions, the method proposed in this work shows the lowest processing time under similar software and hardware conditions.
This work was supported in part by the Centro de Investigaciones en Ă“ptica (CIO) and the Consejo Nacional de Ciencia y TecnologĂ­a (CONACYT), and in part by the Barcelona Supercomputing Center.
Peer reviewed. Postprint (published version)
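The Dark Channel Prior at the core of this abstract is well documented: the dark channel of an image is the per-pixel minimum over the colour channels followed by a local minimum filter, and the transmission map is estimated from the dark channel of the image normalised by the atmospheric light. The sketch below shows those two standard steps only (a naive O(n·patch²) min filter, not the paper's entropy- or guided-filter-accelerated variant).

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel: min over RGB per pixel, then a patch-wise local minimum."""
    mins = img.min(axis=2)                     # per-pixel minimum over colour channels
    h, w = mins.shape
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')    # replicate borders for the window
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def transmission(img, A, omega=0.95, patch=15):
    """Transmission estimate t = 1 - omega * dark_channel(I / A),
    where A is the (per-channel) atmospheric light; omega keeps a trace of haze."""
    return 1.0 - omega * dark_channel(img / A, patch=patch)
```

For a completely dark (haze-free) image the dark channel is zero and the estimated transmission is 1 everywhere, which matches the prior's assumption that haze is what lifts the dark channel above zero.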
Rain rendering for evaluating and improving robustness to bad weather
Rain fills the atmosphere with water particles, which breaks the common
assumption that light travels unaltered from the scene to the camera. While it
is well-known that rain affects computer vision algorithms, quantifying its
impact is difficult. In this context, we present a rain rendering pipeline that
enables the systematic evaluation of common computer vision algorithms to
controlled amounts of rain. We present three different ways to add synthetic
rain to existing image datasets: completely physics-based; completely
data-driven; and a combination of both. The physics-based rain augmentation
combines a physical particle simulator and accurate rain photometric modeling.
We validate our rendering methods with a user study, demonstrating our rain is
judged as much as 73% more realistic than the state-of-the-art. Using our
generated rain-augmented KITTI, Cityscapes, and nuScenes datasets, we conduct a
thorough evaluation of object detection, semantic segmentation, and depth
estimation algorithms and show that their performance decreases in degraded
weather, on the order of 15% for object detection, 60% for semantic
segmentation, and 6-fold increase in depth estimation error. Finetuning on our
augmented synthetic data results in improvements of 21% on object detection,
37% on semantic segmentation, and 8% on depth estimation.
Comment: 19 pages, 19 figures, IJCV 2020 preprint. arXiv admin note: text
overlap with arXiv:1908.1033