Terahertz Security Image Quality Assessment by No-reference Model Observers
To provide the possibility of developing objective image quality assessment
(IQA) algorithms for THz security images, we constructed the THz security image
database (THSID), including a total of 181 THz security images with a
resolution of 127×380. The main distortion types in THz security images were
first analyzed for the design of subjective evaluation criteria to acquire the
mean opinion scores. Subsequently, existing no-reference IQA algorithms,
comprising five opinion-aware approaches (NFERM, GMLF, DIIVINE, BRISQUE and
BLIINDS2) and seven opinion-unaware approaches (QAC, SISBLIM, NIQE, FISBLIM,
CPBD, S3 and Fish_bb), were run to evaluate THz security
image quality. The statistical results demonstrated the superiority of Fish_bb
over the other testing IQA approaches for assessing the THz image quality with
PLCC (SROCC) values of 0.8925 (-0.8706), and with RMSE value of 0.3993. The
linear regression analysis and Bland-Altman plot further verified that the
Fish_bb could substitute for subjective IQA. Nonetheless, for the
classification of THz security images, we tended to use S3 as a criterion for
ranking THz security image grades because of the relatively low false positive
rate in classifying bad THz image quality into acceptable category (24.69%).
Interestingly, owing to the specific properties of THz images, the simple
average pixel intensity outperformed all of the more complicated IQA
algorithms above, with PLCC, SROCC and RMSE values of 0.9001, -0.8800 and
0.3857, respectively. This study will help users such as researchers and
security staff to obtain THz security images of good quality. Currently, our
research group is working to make this research more comprehensive.
Comment: 13 pages, 8 figures, 4 tables
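The PLCC, SROCC and RMSE figures reported above are standard IQA performance statistics and are straightforward to compute. A minimal SciPy sketch, using placeholder score arrays rather than the actual THSID data:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def iqa_correlations(pred, mos):
    """Compare objective quality scores against mean opinion scores (MOS)."""
    pred, mos = np.asarray(pred, dtype=float), np.asarray(mos, dtype=float)
    plcc, _ = pearsonr(pred, mos)      # Pearson linear correlation
    srocc, _ = spearmanr(pred, mos)    # Spearman rank-order correlation
    rmse = float(np.sqrt(np.mean((pred - mos) ** 2)))
    return plcc, srocc, rmse
```

A negative SROCC, as reported for Fish_bb and the average pixel intensity, simply means the objective score decreases as perceived quality increases.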
Subjective and objective quality evaluation of synthetic and high dynamic range images
Recent years have seen a huge growth in the acquisition, transmission, and storage of videos. The visual data consist of both natural scenes and synthetic scenes, such as animated movies, cartoons and video games. In all these cases, the ultimate goal is to provide viewers with a satisfactory quality of experience. In addition to traditional 8-bit images, high dynamic range imaging is also becoming popular because of its ability to represent real-world luminances more realistically. Coming up with objective image quality assessment algorithms for these applications is an interesting research problem. In this work, I have developed a synthetic image quality database by introducing varying degrees of different types of distortions and conducted a subjective experiment in order to obtain ground-truth data. I evaluated the performance of state-of-the-art image quality assessment algorithms (typically meant for natural images) on this database, especially no-reference algorithms that had not previously been applied to the domain of computer graphics images. I identified the top-performing algorithms and analyzed the types of distortions on which the present algorithms perform less impressively. For high dynamic range (HDR) images, I have designed two new full-reference image quality assessment algorithms to judge the quality of tone-mapped HDR images using statistical features extracted from them. I have also conducted a massive online crowd-sourced subjective test for HDR image artifacts arising from tone mapping, multiple-exposure fusion and post-processing. To the best of our knowledge, this is presently the largest HDR image database in the world, involving the largest number of source images and human evaluations.
Based on the subjective evaluations obtained, I have also proposed machine-learning-based no-reference image quality assessment algorithms to predict the perceptual quality of HDR images.
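A machine-learning no-reference pipeline of this kind regresses per-image features onto subjective scores. The abstract does not specify the learner, so the sketch below substitutes plain ridge regression on synthetic feature vectors as a stand-in:

```python
import numpy as np

def fit_linear_iqa(feats, mos, lam=1e-3):
    """Ridge regression: solve (X^T X + lam*I) w = X^T y, with a bias column."""
    X = np.hstack([feats, np.ones((feats.shape[0], 1))])
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ mos)

def predict_iqa(feats, w):
    """Predict quality scores for new feature vectors."""
    X = np.hstack([feats, np.ones((feats.shape[0], 1))])
    return X @ w
```

In practice the features would be perceptually motivated statistics extracted from each image, and the regressor would be trained on the crowd-sourced MOS values.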
A Monotonicity Measure with a Fast Algorithm for Objective Evaluation of Tone Mapping Methods
The range of light intensity in the real world greatly exceeds what most
existing devices can display. Various tone mapping methods have been
developed to render HDR (high dynamic range) images or to increase the local
contrast of conventionally captured images. While local (or spatially
varying) tone mapping methods are generally more effective, they are also
prone to artifacts such as halos. Most existing methods for evaluating
tone-mapped images focus on the preservation of informative details and may
not identify artifacts effectively. This paper proposes an objective metric
based on a monotonicity measure that may serve as a baseline measure for
artifacts due to intensity reversal. A naïve method to compute the metric
has a high computational complexity of O(N²), where N is the total number of
pixels. To make the metric acceptable for interactive applications, a fast
algorithm with O(N) complexity is presented. Experimental results on
real-world images demonstrate the efficacy of both the metric and the fast
algorithm.
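The abstract gives no pseudocode, but the intensity-reversal idea can be illustrated as counting pixel pairs whose brightness ordering flips between the HDR input and the tone-mapped output. A hypothetical sketch, not the paper's actual algorithm: the naïve pairwise count is O(N²), while sorting by HDR intensity and sweeping a 256-bin histogram over the 8-bit output gives O(N) for a fixed number of levels (assuming distinct HDR values, so ties do not inflate the fast count):

```python
import numpy as np

def reversal_count_naive(hdr, ldr):
    """O(N^2): count pixel pairs whose order flips after tone mapping."""
    n, count = len(hdr), 0
    for i in range(n):
        for j in range(i + 1, n):
            if (hdr[i] - hdr[j]) * (ldr[i] - ldr[j]) < 0:
                count += 1
    return count

def reversal_count_fast(hdr, ldr, levels=256):
    """O(N) for fixed `levels`: sort by HDR intensity, then count
    inversions in the tone-mapped sequence with a running histogram."""
    order = np.argsort(hdr, kind="stable")
    seq = np.asarray(ldr)[order]
    hist = np.zeros(levels, dtype=np.int64)
    count = 0
    for v in seq:
        count += int(hist[v + 1:].sum())  # earlier pixels with larger output
        hist[v] += 1
    return count
```

For example, reversal_count_naive([3, 1, 2], [10, 20, 5]) returns 2: two of the three pixel pairs have their brightness ordering reversed by the mapping.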
Deep Learning Enabling Accurate Imaging Beyond Device Limits
Tohoku University, PhD (Information Sciences) thesis
Learning a self-supervised tone mapping operator via feature contrast masking loss
High Dynamic Range (HDR) content is becoming ubiquitous due to the rapid development of capture technologies. Nevertheless, the dynamic range of common display devices is still limited; therefore, tone mapping (TM) remains a key challenge for image visualization. Recent work has demonstrated that neural networks can achieve remarkable performance in this task compared to traditional methods; however, the quality of the results of these learning-based methods is limited by the training data. Most existing works use as a training set a curated selection of best-performing results from existing traditional tone mapping operators (often guided by a quality metric); therefore, the quality of newly generated results is fundamentally limited by the performance of such operators. This quality might be further limited by the pool of HDR content used for training. In this work we propose a learning-based self-supervised tone mapping operator that is trained at test time specifically for each HDR image and does not need any data labeling. The key novelty of our approach is a carefully designed loss function built upon fundamental knowledge of contrast perception that allows for directly comparing the content in the HDR and tone-mapped images. We achieve this goal by reformulating classic VGG feature maps into feature contrast maps that normalize local feature differences by their average magnitude in a local neighborhood, allowing our loss to account for contrast masking effects. We perform extensive ablation studies and parameter explorations and demonstrate that our solution outperforms existing approaches with a single set of fixed parameters, as confirmed by both objective and subjective metrics.
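The feature-contrast-map idea can be sketched independently of the VGG backbone: local feature differences are normalized by the average feature magnitude in a neighborhood. This is a rough illustration of that normalization, not the authors' exact formulation; the window size and epsilon are assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def feature_contrast_map(feat, size=5, eps=1e-6):
    """feat: (C, H, W) feature stack, e.g. activations from one VGG layer.
    Returns local differences normalized by the local mean magnitude."""
    local_mean = uniform_filter(feat, size=(1, size, size))
    local_mag = uniform_filter(np.abs(feat), size=(1, size, size)) + eps
    return (feat - local_mean) / local_mag
```

Because the denominator grows with local feature energy, large differences in busy regions are attenuated, which is how the loss accounts for contrast masking.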
High-Brightness Image Enhancement Algorithm
In this paper, we introduce a tone mapping algorithm for processing high-brightness video images. This method maximally recovers the information in high-brightness areas while preserving detail. In addition to benchmark data, real-life, practical application data were used to test the proposed method; the experimental objects were license plates. We reconstructed the image in the RGB channels and carried out gamma correction. After that, local linear adjustment was performed through a tone mapping window to restore the detailed information of the high-brightness region. The experimental results showed that our algorithm can clearly restore the details of high-brightness local areas. The processed image conforms to the visual effect observed by human eyes but with higher definition. Compared with other algorithms, the proposed algorithm has advantages in terms of both subjective and objective evaluation, and it can fully satisfy the needs of various practical applications.
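The pipeline described (gamma correction followed by a local linear adjustment over bright regions) might look roughly like the following sketch; the threshold, window size and luminance proxy are assumptions, not the paper's actual parameters:

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def enhance_bright_regions(img, gamma=2.2, thresh=0.8, win=15, eps=1e-6):
    """img: H x W x 3 float RGB in [0, 1]. Hypothetical sketch only."""
    g = np.power(np.clip(img, 0.0, 1.0), 1.0 / gamma)  # gamma correction
    lum = g.mean(axis=2)                               # crude luminance proxy
    lo = minimum_filter(lum, size=win)                 # local window minimum
    hi = maximum_filter(lum, size=win)                 # local window maximum
    stretched = (lum - lo) / (hi - lo + eps)           # local linear stretch
    mask = lum > thresh                                # bright pixels only
    scale = np.where(mask, stretched / (lum + eps), 1.0)
    return np.clip(g * scale[..., None], 0.0, 1.0)
```

The local min/max pair plays the role of the tone mapping window: within each window, bright pixels are remapped linearly onto the full local range, which restores detail washed out by the original brightness.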