Terahertz Security Image Quality Assessment by No-reference Model Observers
To provide the possibility of developing objective image quality assessment
(IQA) algorithms for THz security images, we constructed the THz security image
database (THSID) including a total of 181 THz security images at a
resolution of 127×380 pixels. The main distortion types in THz security images were
first analyzed for the design of subjective evaluation criteria to acquire the
mean opinion scores. Subsequently, existing no-reference IQA algorithms were
applied to evaluate THz security image quality: five opinion-aware approaches
(NFERM, GMLF, DIIVINE, BRISQUE, and BLIINDS2) and the opinion-unaware
approaches QAC, SISBLIM, NIQE, FISBLIM, CPBD, S3, and Fish_bb. The statistical
results demonstrated the superiority of Fish_bb over the other tested IQA
approaches for assessing THz image quality, with PLCC (SROCC) values of
0.8925 (-0.8706) and an RMSE of 0.3993. The
linear regression analysis and Bland-Altman plot further verified that the
Fish_bb could substitute for subjective IQA. Nonetheless, for the
classification of THz security images, we tended to use S3 as a criterion for
ranking THz security image grades because of its relatively low false-positive
rate (24.69%) in classifying bad-quality THz images as acceptable.
Interestingly, due to the specific properties of THz images, the simple average
pixel intensity outperformed the more complicated IQA algorithms above, with
PLCC, SROCC, and RMSE of 0.9001, -0.8800, and 0.3857, respectively. This study
will help users such as researchers and security staff obtain THz security
images of good quality. Currently, our research group is attempting to make
this research more comprehensive.
Comment: 13 pages, 8 figures, 4 tables
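As a rough illustration of the evaluation metrics used above, PLCC, SROCC, and RMSE can be computed with numpy alone; the sketch below also includes the average-pixel-intensity score, the simple indicator the abstract compares against. Function names are illustrative, and the rank computation assumes no ties:

```python
import numpy as np

def plcc(x, y):
    """Pearson linear correlation coefficient between two score vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.corrcoef(x, y)[0, 1])

def srocc(x, y):
    """Spearman rank-order correlation: Pearson correlation of the ranks.
    Assumes untied values (no midrank correction)."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return plcc(rx, ry)

def rmse(x, y):
    """Root-mean-square error between predicted and subjective scores."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.sqrt(np.mean((x - y) ** 2)))

def mean_intensity_score(image):
    """Average pixel intensity of a grayscale image, the simple quality
    indicator the abstract found to outperform the full IQA algorithms."""
    return float(np.mean(image))
```

In practice the predicted scores would first be passed through a nonlinear mapping before computing PLCC and RMSE, as is standard in IQA evaluation; the raw-score version above keeps the sketch short.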
CURE-OR: Challenging Unreal and Real Environments for Object Recognition
In this paper, we introduce a large-scale, controlled, and multi-platform
object recognition dataset denoted as Challenging Unreal and Real Environments
for Object Recognition (CURE-OR). In this dataset, there are 1,000,000 images
of 100 objects with varying size, color, and texture that are positioned in
five different orientations and captured using five devices including a webcam,
a DSLR, and three smartphone cameras in real-world (real) and studio (unreal)
environments. The controlled challenging conditions include underexposure,
overexposure, blur, contrast, dirty lens, image noise, resizing, and loss of
color information. We utilize the CURE-OR dataset to test two recognition
APIs, Amazon Rekognition and Microsoft Azure Computer Vision, and show that
their performance significantly degrades under challenging conditions. Moreover, we
investigate the relationship between object recognition and image quality and
show that objective quality algorithms can estimate recognition performance
under certain photometric challenging conditions. The dataset is publicly
available at https://ghassanalregib.com/cure-or/.
Comment: 8 pages, 7 figures, 4 tables
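The photometric challenge conditions listed above (underexposure, overexposure, noise, contrast change) can be simulated with simple array operations; a minimal sketch, where the scaling factors and noise level are illustrative assumptions, not the parameters used to generate CURE-OR:

```python
import numpy as np

def underexpose(img, factor=0.4):
    """Darken an 8-bit image by scaling intensities toward zero."""
    return np.clip(img.astype(float) * factor, 0, 255).astype(np.uint8)

def overexpose(img, factor=1.8):
    """Brighten an 8-bit image by scaling intensities toward saturation."""
    return np.clip(img.astype(float) * factor, 0, 255).astype(np.uint8)

def add_gaussian_noise(img, sigma=15.0, seed=0):
    """Add zero-mean Gaussian image noise with standard deviation sigma."""
    rng = np.random.default_rng(seed)
    noisy = img.astype(float) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def reduce_contrast(img, factor=0.5):
    """Pull intensities toward the global mean to lower contrast."""
    mean = img.astype(float).mean()
    out = mean + (img.astype(float) - mean) * factor
    return np.clip(out, 0, 255).astype(np.uint8)
```

Applying such transforms to a clean test set, then measuring how recognition accuracy drops per condition, reproduces the shape of the experiment described in the abstract.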
Fuzzy-based Propagation of Prior Knowledge to Improve Large-Scale Image Analysis Pipelines
Many automatically analyzable scientific questions are well-posed and offer a
variety of information about the expected outcome a priori. Although often
neglected, this prior knowledge can be systematically exploited to make
automated analysis operations sensitive to a desired phenomenon or to evaluate
extracted content with respect to this prior knowledge. For instance, the
performance of processing operators can be greatly enhanced by a more focused
detection strategy and the direct information about the ambiguity inherent in
the extracted data. We present a new concept for the estimation and propagation
of uncertainty involved in image analysis operators. This allows using simple
processing operators that are suitable for analyzing large-scale 3D+t
microscopy images without compromising the result quality. On the foundation of
fuzzy set theory, we transform available prior knowledge into a mathematical
representation and extensively use it to enhance the result quality of various
processing operators. All presented concepts are illustrated on a typical
bioimage analysis pipeline comprising seed point detection, segmentation,
multiview fusion and tracking. Furthermore, the functionality of the proposed
approach is validated on a comprehensive simulated 3D+t benchmark data set that
mimics embryonic development and on large-scale light-sheet microscopy data of
a zebrafish embryo. The general concept introduced in this contribution
represents a new approach to efficiently exploit prior knowledge to improve the
result quality of image analysis pipelines. In particular, the automated analysis
of terabyte-scale microscopy data will benefit from sophisticated and efficient
algorithms that enable a quantitative and fast readout. The generality of the
concept, however, makes it also applicable to practically any other field with
processing strategies that are arranged as linear pipelines.
Comment: 39 pages, 12 figures
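As a sketch of the underlying idea, prior knowledge such as an expected object size or intensity range can be encoded as fuzzy membership functions and combined with a fuzzy AND (minimum) to score detections; the trapezoidal shapes and cue ranges below are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 below a, rising to 1 on [b, c],
    falling back to 0 at d. A common way to express 'roughly between'."""
    x = np.asarray(x, float)
    rise = np.clip((x - a) / max(b - a, 1e-12), 0.0, 1.0)
    fall = np.clip((d - x) / max(d - c, 1e-12), 0.0, 1.0)
    return np.minimum(rise, fall)

def detection_confidence(size, intensity):
    """Combine two prior-knowledge cues into a detection confidence using
    the fuzzy AND (minimum). Both expected ranges are assumed values
    chosen only for illustration."""
    mu_size = trapezoid(size, 5, 10, 30, 40)          # expected radius (px)
    mu_intensity = trapezoid(intensity, 50, 80, 200, 230)  # expected brightness
    return np.minimum(mu_size, mu_intensity)
```

A downstream operator (e.g. a tracker) can then consume these graded confidences instead of hard detections, which is the propagation-of-uncertainty idea the abstract describes.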