Blockwise Based Detection of Local Defects
Print quality is an important criterion for a printer's performance. The
detection, classification, and assessment of printing defects can reflect the
printer's working status and help locate internal mechanical problems. To
address these tasks, an efficient algorithm is needed to replace the
traditional visual inspection method. In this paper, we focus on pages with
local defects including gray spots and solid spots. We propose a coarse-to-fine
method to detect local defects in a block-wise manner, and aggregate the
blockwise attributes to generate the feature vector of the whole test page for
a further ranking task. In the detection part, we first select candidate
regions by thresholding a single feature. Then more detailed features of
candidate blocks are calculated and sent to a decision tree that is previously
trained on our training dataset. The final result is given by the decision tree
model to control the false alarm rate while maintaining the required miss rate.
Our algorithm proves effective in detecting and classifying local
defects compared with previous methods.
Comment: 7 pages, 13 figures, IS&T Electronic Imaging 2019 Proceedings
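The coarse-to-fine blockwise pipeline described above can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the block size, the single screening feature (block-mean deviation from the page background), the coarse threshold, and the feature set for the second stage are all assumptions. In the paper, the flagged blocks' features would go to a trained decision tree rather than being returned directly.

```python
import numpy as np

def detect_local_defects(page, block=32, coarse_thresh=12.0):
    """Coarse-to-fine blockwise defect screening (illustrative sketch).

    Stage 1: flag candidate blocks by thresholding a single cheap
    feature -- here, deviation of the block mean from the estimated
    page background.
    Stage 2: compute richer attributes for candidate blocks only;
    a previously trained classifier (a decision tree in the paper)
    would then make the final accept/reject decision.
    """
    h, w = page.shape
    background = np.median(page)  # rough background gray level
    candidates = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            blk = page[y:y + block, x:x + block]
            # Stage 1: single-feature screen on the block mean
            if abs(blk.mean() - background) > coarse_thresh:
                # Stage 2: detailed block attributes for the classifier
                feats = (blk.mean(), blk.std(), blk.min(), blk.max())
                candidates.append(((y, x), feats))
    return candidates
```

Thresholding one cheap feature first keeps the expensive feature computation confined to a small fraction of the page, which is the point of the coarse-to-fine design.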
Boosting High-Level Vision with Joint Compression Artifacts Reduction and Super-Resolution
Due to the limits of bandwidth and storage space, digital images are usually
down-scaled and compressed when transmitted over networks, resulting in loss of
details and jarring artifacts that can lower the performance of high-level
visual tasks. In this paper, we aim to generate an artifact-free
high-resolution image from a low-resolution one compressed with an arbitrary
quality factor by exploring joint compression artifacts reduction (CAR) and
super-resolution (SR) tasks. First, we propose a context-aware joint CAR and SR
neural network (CAJNN) that integrates both local and non-local features to
solve CAR and SR in one stage. Then a deep reconstruction network is
adopted to predict high-quality, high-resolution images. Evaluation on CAR
and SR benchmark datasets shows that our CAJNN model outperforms previous
methods while requiring 26.2% less runtime. Based on this model, we explore
addressing two critical challenges in high-level computer vision: optical
character recognition of low-resolution texts, and extremely tiny face
detection. We demonstrate that CAJNN can serve as an effective image
preprocessing method and improve the accuracy for real-scene text recognition
(from 85.30% to 85.75%) and the average precision for tiny face detection (from
0.317 to 0.611).
Comment: 8 pages, 6 figures, 5 tables. Accepted by the 25th ICPR (2020).
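The "non-local features" the abstract mentions refer to aggregating context from all spatial positions, not just a local neighborhood. A minimal numpy sketch of such a non-local (self-attention-style) operation is below; the function name, shapes, and the plain dot-product affinity are assumptions for illustration and do not reproduce the CAJNN architecture.

```python
import numpy as np

def nonlocal_block(feat):
    """Minimal non-local operation over spatial positions (sketch).

    Each spatial position attends to every other position, so the
    output at one pixel can draw on context from the whole feature
    map -- the kind of global aggregation a joint CAR+SR network can
    combine with ordinary local convolutions.
    """
    c, h, w = feat.shape
    x = feat.reshape(c, h * w)                   # flatten spatial dims
    attn = x.T @ x                               # pairwise affinities
    attn = np.exp(attn - attn.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)      # softmax over positions
    out = x @ attn.T                             # global aggregation
    return feat + out.reshape(c, h, w)           # residual connection
```

The residual connection lets the block refine rather than replace the local features, a common design when mixing local and non-local pathways in one stage.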