A comparative study of image processing thresholding algorithms on residual oxide scale detection in stainless steel production lines
The present work addresses residual oxide scale detection and classification through the application of image processing techniques. Residual oxide scale is a defect that can remain on the surface of stainless steel coils after an incomplete pickling process in a production line. Building on a previous detailed study of the reflectance of the residual oxide defect, we present a comparative study of image segmentation algorithms based on thresholding methods. In particular, two computational models, based on multi-linear regression and on neural networks, are proposed. A system based on a conventional area camera with special lighting was installed and fully integrated into an annealing and pickling line for model testing purposes. Finally, the model approaches are compared and their performance evaluated.
Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
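A comparison of thresholding methods like the one above typically includes a classical global method as a baseline. The sketch below implements Otsu's method (maximising between-class variance); this is a generic illustration in numpy, not one of the paper's regression or neural-network models:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the grey level (0-255) that maximises between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()                 # normalised grey-level histogram
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0        # class means
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2     # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

Pixels with grey level below the returned threshold would be assigned to one class (e.g. defect) and the rest to the other.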
Gray Image extraction using Fuzzy Logic
Fuzzy systems provide a fundamental methodology for representing and processing uncertainty and imprecision in linguistic information. Fuzzy systems that use fuzzy rules to represent the domain knowledge of the problem are known as Fuzzy Rule Base Systems (FRBS). Image segmentation, and the subsequent extraction of objects from a noise-affected background with the help of various soft computing methods, is a relatively new and popular research area. These methods include various Artificial Neural Network (ANN) models (primarily supervised in nature), Genetic Algorithm (GA) based techniques, intensity histogram based methods, etc. Providing an extraction solution that works in unsupervised mode is an even more interesting problem, and the literature suggests that effort in this respect remains quite rudimentary. In the present article, we propose a novel fuzzy rule guided technique that functions without any external intervention during execution. Experimental results suggest that this approach is efficient in comparison to other techniques extensively addressed in the literature. To justify the superiority of our proposed technique over its competitors, we use effective metrics such as Mean Squared Error (MSE), Mean Absolute Error (MAE), and Peak Signal to Noise Ratio (PSNR).
Comment: 8 pages, 5 figures. Accepted and published in IEEE.
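The three evaluation metrics named above can be sketched in a few lines of numpy; this is a minimal illustration of the standard definitions, not the authors' code:

```python
import numpy as np

def mse(a, b):
    """Mean Squared Error between two images of the same shape."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def mae(a, b):
    """Mean Absolute Error between two images of the same shape."""
    return float(np.mean(np.abs(a.astype(np.float64) - b.astype(np.float64))))

def psnr(a, b, max_val=255.0):
    """Peak Signal to Noise Ratio in dB; higher means closer to the reference."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(max_val ** 2 / m)
```

A higher PSNR (and lower MSE/MAE) against a ground-truth extraction indicates a better result.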
Extraction of Projection Profile, Run-Histogram and Entropy Features Straight from Run-Length Compressed Text-Documents
Document Image Analysis, like any digital image analysis, requires the identification and extraction of proper features. These are generally extracted from uncompressed images, although in practice images are made available in compressed form for reasons such as transmission and storage efficiency. This implies that the compressed image must first be decompressed, which demands additional computing resources. This limitation motivates research into extracting features directly from the compressed image. In this research, we propose to extract essential features such as the projection profile, run-histogram and entropy for text document analysis directly from run-length compressed text documents. The experiments show that the features are extracted directly from the compressed image without going through a decompression stage, which reduces the computing time. The feature values so extracted are exactly identical to those extracted from uncompressed images.
Comment: Published by IEEE in Proceedings of ACPR-2013.
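The core idea above — reading features straight off the runs — can be sketched for two of the named features. The RLE layout assumed here (per-row lists of `(value, run_length)` pairs for a binary image) is an illustrative choice, not necessarily the paper's exact compressed format:

```python
def projection_profile_rle(rle_rows):
    """Row-wise projection profile (foreground pixels per row), without decompressing."""
    return [sum(length for value, length in row if value == 1)
            for row in rle_rows]

def run_histogram_rle(rle_rows, max_run=8):
    """Histogram of foreground run lengths, also computed straight from the runs."""
    hist = [0] * (max_run + 1)           # runs longer than max_run share the last bin
    for row in rle_rows:
        for value, length in row:
            if value == 1:
                hist[min(length, max_run)] += 1
    return hist
```

Because each run already records its length, both features cost one pass over the runs rather than one pass over every pixel of a decompressed image.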
Cutting out the middleman: measuring nuclear area in histopathology slides without segmentation
The size of nuclei in histological preparations from excised breast tumors is
predictive of patient outcome (large nuclei indicate poor outcome).
Pathologists take into account nuclear size when performing breast cancer
grading. In addition, the mean nuclear area (MNA) has been shown to have
independent prognostic value. The straightforward approach to measuring nuclear
size is by performing nuclei segmentation. We hypothesize that given an image
of a tumor region with known nuclei locations, the area of the individual
nuclei and region statistics such as the MNA can be reliably computed directly
from the image data by employing a machine learning model, without the
intermediate step of nuclei segmentation. Towards this goal, we train a deep
convolutional neural network model that is applied locally at each nucleus
location, and can reliably measure the area of the individual nuclei and the
MNA. Furthermore, we show how such an approach can be extended to perform
combined nuclei detection and measurement, which is reminiscent of
granulometry.
Comment: Conditionally accepted for MICCAI 201
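The local, per-nucleus setup described above starts by cropping a fixed-size window around each known nucleus location, which a regression model (a CNN in the paper) then maps directly to an area estimate. The sketch below shows only that patch-extraction step; the patch size and reflect-padding scheme are illustrative assumptions:

```python
import numpy as np

def extract_patches(image, centers, size=32):
    """Crop a size x size patch centred on each (row, col) nucleus location."""
    half = size // 2
    padded = np.pad(image, half, mode="reflect")   # handle nuclei near the border
    patches = []
    for r, c in centers:
        # padded index (r + half, c + half) corresponds to original (r, c),
        # so this slice is centred on the nucleus location
        patches.append(padded[r:r + size, c:c + size])
    return np.stack(patches)
```

Each patch would then be fed to the trained model, and region statistics such as the MNA computed from the per-nucleus predictions.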
Veni Vidi Vici, A Three-Phase Scenario For Parameter Space Analysis in Image Analysis and Visualization
Automatic analysis of enormous sets of images is a critical task in the life sciences. It faces many challenges: algorithms are highly parameterized, significant human input is intertwined in the process, and a standard meta-visualization approach is lacking. This paper proposes an alternative iterative approach for optimizing input parameters that saves time by minimizing user involvement, and that allows for understanding the workflow of algorithms and discovering new ones. The main focus is on developing an interactive visualization technique that enables users to analyze the relationships between sampled input parameters and the corresponding output. This technique is implemented as a prototype called Veni Vidi Vici, or "I came, I saw, I conquered." The strategy is inspired by the mathematical formulas for numbering computable functions and is developed atop ImageJ, a scientific image processing program. A case study is presented to investigate the proposed framework. Finally, the paper explores some potential future issues in the application of the proposed approach to parameter space analysis in visualization.
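The sampling step underlying such an analysis — run the algorithm over a grid of input parameters and record each parameter/output pair for later visualization — can be sketched as follows. The algorithm, parameter names, and grid are illustrative, not the prototype's actual interface:

```python
import itertools

def sweep(algorithm, param_grid):
    """Evaluate algorithm(**params) for every combination in param_grid.

    param_grid maps parameter names to lists of candidate values; the result
    is a list of (params, output) records suitable for plotting or filtering.
    """
    names = sorted(param_grid)
    records = []
    for values in itertools.product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        records.append((params, algorithm(**params)))
    return records
```

An interactive front end would then let the user browse these records to see how each parameter influences the output.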
Robot navigation control based on monocular images: An image processing algorithm for obstacle avoidance decisions
This paper covers the use of monocular vision to control autonomous navigation for a robot in a dynamically changing environment. The solution uses colour segmentation against a selected floor plane to distinctly separate obstacles from traversable space; this is then supplemented with Canny edge detection to separate boundaries that are similar in colour to the floor plane. The resulting binary map (where white identifies an obstacle-free area and black identifies an obstacle) can then be processed by fuzzy logic or neural networks to control the robot's next movements. Findings show that the algorithm performed strongly on solid-coloured carpets and on wooden and concrete floors, but had difficulty separating colours in multi-coloured floor types such as patterned carpets.
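The colour-segmentation step above can be sketched as a per-pixel distance test against a reference floor colour, producing the binary traversability map. The reference colour and distance threshold are illustrative assumptions, and the Canny edge fusion the paper adds on top is omitted here:

```python
import numpy as np

def floor_mask(image_rgb, floor_rgb, max_dist=40.0):
    """Binary map: True (white) = traversable floor, False (black) = obstacle.

    A pixel counts as floor when its Euclidean RGB distance to the
    reference floor colour is within max_dist.
    """
    diff = image_rgb.astype(np.float64) - np.asarray(floor_rgb, dtype=np.float64)
    dist = np.sqrt((diff ** 2).sum(axis=-1))   # per-pixel colour distance
    return dist <= max_dist
```

This is exactly where a multi-coloured (patterned) floor breaks the approach: no single reference colour and threshold covers all floor pixels, matching the difficulty reported in the findings.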
Component-based Segmentation of words from handwritten Arabic text
Efficient preprocessing is essential for the automatic recognition of handwritten documents. In this paper, techniques for segmenting words in handwritten Arabic text are presented. Firstly, connected components (CCs) are extracted, and the distances between different components are analyzed. The statistical distribution of these distances is then used to determine an optimal threshold for word segmentation. Meanwhile, an improved projection-based method is employed for baseline detection. The proposed method has been successfully tested on the IFN/ENIT database, consisting of 26,459 Arabic words handwritten by 411 different writers, and the results were promising and very encouraging, with more accurate detection of the baseline and segmentation of words for further recognition.
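The grouping step described above can be sketched as splitting a line's connected components into words wherever the inter-component gap exceeds a threshold. The paper derives that threshold from the statistical distribution of the distances; the sketch below takes it as a given parameter for illustration:

```python
def group_into_words(component_positions, threshold):
    """Split a sorted list of component x-positions into words at wide gaps."""
    words, current = [], [component_positions[0]]
    for prev, cur in zip(component_positions, component_positions[1:]):
        if cur - prev > threshold:    # wide gap -> inter-word boundary
            words.append(current)
            current = []
        current.append(cur)           # small gap -> same word (sub-word gap)
    words.append(current)
    return words
```

With a well-chosen threshold, intra-word gaps between Arabic sub-words stay below it while inter-word gaps exceed it, yielding one group per word.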