A. Eye Detection Using Variants of Hough Transform B. Off-Line Signature Verification
PART (A): EYE DETECTION USING VARIANTS OF HOUGH TRANSFORM:
Broadly, eye detection is the process of locating the human eye in a face image. Previous approaches use complex techniques such as neural networks, Radial Basis Function networks, and Multi-Layer Perceptrons. In the developed project, the human eye is modeled as a circle (the iris, the dark circular region of the eye) enclosed inside an ellipse (the eyelashes). Because of the sharp intensity variation between the iris and the inner region bounded by the eyelashes, the probability of false acceptance is very low; since the input is a face image, that probability is reduced further. The Hough transform is used for circle (iris) and ellipse (eyelash) detection. It was the obvious choice because of its robustness to gaps in boundaries and to noise in the image. Image smoothing is performed to reduce noise and to prepare the image for subsequent processing such as edge detection (Prewitt method). Compared with the aforementioned models, the proposed model is simple and efficient. It can be improved further by including features such as the orientation angle of the eyelashes (assumed constant in the proposed model) and by making the parameters adaptive.
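The circle (iris) detection step can be sketched as a minimal Hough voting loop. The following is an illustrative pure-NumPy accumulator, not the project's actual code; the synthetic ring of edge points stands in for a Prewitt edge map of the iris, and the radius range and angular sampling are assumed parameters.

```python
import numpy as np

def hough_circles(edge_points, shape, radii, n_theta=90):
    """Vote in a (cy, cx, r) accumulator: each edge point votes for every
    circle center that could have produced it at each candidate radius."""
    H, W = shape
    acc = np.zeros((H, W, len(radii)), dtype=np.int32)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    for y, x in edge_points:
        for ri, r in enumerate(radii):
            cy = np.round(y - r * np.sin(thetas)).astype(int)
            cx = np.round(x - r * np.cos(thetas)).astype(int)
            ok = (cy >= 0) & (cy < H) & (cx >= 0) & (cx < W)
            np.add.at(acc, (cy[ok], cx[ok], ri), 1)
    return acc

# Synthetic "iris" edge map: points on a circle of radius 10 centered at (32, 32)
ts = np.linspace(0.0, 2.0 * np.pi, 120, endpoint=False)
pts = [(round(32 + 10 * np.sin(t)), round(32 + 10 * np.cos(t))) for t in ts]
acc = hough_circles(pts, (64, 64), radii=[8, 10, 12])
cy, cx, ri = np.unravel_index(acc.argmax(), acc.shape)
```

The accumulator peak lands at (or within a pixel of) the true center at the true radius; the same voting idea extends to the ellipse (eyelash) case with a larger parameter space.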
PART (B): OFF-LINE SIGNATURE VERIFICATION:
The hand-written signature is widely used for authentication and identification of individuals, and has been a target for forgery ever since. A novel off-line signature verification algorithm has been developed and tested successfully. Because a hand-written signature can be highly variable, with many curves and features, techniques such as character recognition cannot be applied to signature verification. The proposed algorithm uses a soft-computing technique, clustering, to extract feature points from the image of the signature. These feature points, or centers, are updated by the clustering update equations for the required number of iterations, after which they act as the extracted feature points of the signature image. To account for an individual's natural variation, 6 to 8 signature images of the same person are taken and the feature points are trained on them. The trained feature points are compared with those of the test signature image and, based on a specific threshold, the signature is declared genuine or forged. This approach works well when there is high variation among the original signatures, but for signatures with low variation it produces incorrect results.
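The abstract does not name the specific clustering rule, so the following sketch assumes a k-means-style update (the most common "clustering update equation"). Cluster centers over toy stroke-pixel coordinates play the role of feature points, with a simple nearest-point threshold test standing in for the verification step; all names and the tolerance value are illustrative, not from the source.

```python
import numpy as np

def kmeans_centers(points, k, iters=20, seed=0):
    """Lloyd-style updates: each center moves to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)].astype(float)
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers

def signatures_match(trained, test, tol=3.0):
    """Declare a match if every trained feature point has a nearby test point."""
    d = np.linalg.norm(trained[:, None, :] - test[None, :, :], axis=2)
    return bool(d.min(axis=1).max() < tol)

# Toy "signature pixels": two blobs standing in for strokes
rng = np.random.default_rng(1)
strokes = np.vstack([rng.normal((10.0, 10.0), 1.0, size=(60, 2)),
                     rng.normal((40.0, 5.0), 1.0, size=(60, 2))])
trained = kmeans_centers(strokes, k=2)
print(signatures_match(trained, trained))        # identical centers: match
print(signatures_match(trained, trained + 50.0)) # shifted "forgery": no match
```

In the described system the trained centers come from 6 to 8 genuine samples, and the threshold is tuned to separate natural variation from forgery.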
Learning Spatial-Semantic Context with Fully Convolutional Recurrent Network for Online Handwritten Chinese Text Recognition
Online handwritten Chinese text recognition (OHCTR) is a challenging problem
as it involves a large-scale character set, ambiguous segmentation, and
variable-length input sequences. In this paper, we exploit the outstanding
capability of path signature to translate online pen-tip trajectories into
informative signature feature maps using a sliding window-based method,
successfully capturing the analytic and geometric properties of pen strokes
with strong local invariance and robustness. A multi-spatial-context fully
convolutional recurrent network (MCFCRN) is proposed to exploit the multiple
spatial contexts from the signature feature maps and generate a prediction
sequence while completely avoiding the difficult segmentation problem.
Furthermore, an implicit language model is developed to make predictions based
on semantic context within a predicting feature sequence, providing a new
perspective for incorporating lexicon constraints and prior knowledge about a
certain language in the recognition procedure. Experiments on two standard
benchmarks, Dataset-CASIA and Dataset-ICDAR, yielded outstanding results, with
correct rates of 97.10% and 97.15%, respectively, which are significantly
better than the best results reported thus far in the literature.
Comment: 14 pages, 9 figures
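For a piecewise-linear pen trajectory, the truncated path signature the paper builds its feature maps from has a closed form. The sketch below (an illustration, not the paper's implementation) computes the level-1 and level-2 iterated integrals of a 2-D stroke; the window slicing of the trajectory is omitted.

```python
import numpy as np

def path_signature_level2(path):
    """Truncated signature of a piecewise-linear 2-D path.
    Level 1: the total displacement.  Level 2: the iterated integrals
    S^{ij} = sum_k X^i_k dX^j_k + 0.5 dX^i_k dX^j_k, where X_k is the
    path value (relative to the start) just before step k."""
    path = np.asarray(path, dtype=float)
    dx = np.diff(path, axis=0)                        # per-step increments
    s1 = dx.sum(axis=0)                               # level-1 terms
    X = np.cumsum(np.vstack([np.zeros(2), dx]), axis=0)[:-1]
    s2 = X.T @ dx + 0.5 * dx.T @ dx                   # level-2 terms
    return s1, s2

# An "L"-shaped stroke: right 1, then up 1
s1, s2 = path_signature_level2([(0, 0), (1, 0), (1, 1)])
print(s1)  # [1. 1.]
print(s2)  # [[0.5 1. ]
           #  [0.  0.5]]
```

The off-diagonal asymmetry of the level-2 term (here S^{12}=1 vs. S^{21}=0) encodes the order in which the stroke sweeps the two axes, which is exactly the kind of geometric information a raw bitmap discards.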
Offline Handwritten Signature Verification - Literature Review
The area of Handwritten Signature Verification has been broadly researched in
the last decades, but remains an open research problem. The objective of
signature verification systems is to discriminate if a given signature is
genuine (produced by the claimed individual), or a forgery (produced by an
impostor). This has demonstrated to be a challenging task, in particular in the
offline (static) scenario, that uses images of scanned signatures, where the
dynamic information about the signing process is not available. Many
advancements have been proposed in the literature in the last 5-10 years, most
notably the application of Deep Learning methods to learn feature
representations from signature images. In this paper, we present how the
problem has been handled in the past few decades, analyze the recent
advancements in the field, and the potential directions for future research.Comment: Accepted to the International Conference on Image Processing Theory,
Tools and Applications (IPTA 2017
Learning Representations from Persian Handwriting for Offline Signature Verification, a Deep Transfer Learning Approach
Offline Signature Verification (OSV) is a challenging pattern recognition
task, especially when it is expected to generalize well on the skilled
forgeries that are not available during the training. Its challenges also
include small training sample and large intra-class variations. Considering the
limitations, we suggest a novel transfer learning approach from Persian
handwriting domain to multi-language OSV domain. We train two Residual CNNs on
the source domain separately based on two different tasks of word
classification and writer identification. Since identifying a person's
signature resembles identifying their handwriting, it is natural to use
handwriting for the feature learning phase. The learned representation on the
more varied and plentiful handwriting dataset can compensate for the lack of
training data in the original task, i.e. OSV, without sacrificing the
generalizability. Our proposed OSV system includes two steps: learning
representation and verification of the input signature. For the first step, the
signature images are fed into the trained Residual CNNs. The output
representations are then used to train SVMs for the verification. We test our
OSV system on three different signature datasets, including MCYT (a Spanish
signature dataset), UTSig (a Persian one) and GPDS-Synthetic (an artificial
dataset). On UTSig, we achieved a 9.80% Equal Error Rate (EER), which is a
substantial improvement over the best EER in the literature, 17.45%. Our
proposed method surpassed the state of the art by 6% on GPDS-Synthetic,
achieving 6.81%. On MCYT, an EER of 3.98% was obtained, which is comparable to
the best previously reported results.
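The verification stage of the described pipeline, frozen learned representations fed into an SVM, can be sketched in miniature. Everything below is a stand-in: a fixed random ReLU projection plays the role of the pretrained Residual CNN, and a hinge-loss linear SVM trained by sub-gradient descent plays the role of the paper's SVMs; cluster means, dimensions, and hyperparameters are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W_embed = rng.normal(size=(256, 64)) / 16.0   # frozen "CNN" projection (stand-in)

def embed(x):
    """Stand-in for the pretrained Residual CNN: a fixed ReLU projection."""
    return np.maximum(x @ W_embed, 0.0)

def train_linear_svm(X, y, lr=0.1, lam=1e-3, epochs=300):
    """Hinge-loss linear SVM by sub-gradient descent; labels in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        viol = y * (X @ w + b) < 1.0                          # margin violations
        gw = lam * w - (X[viol] * y[viol, None]).sum(axis=0) / n
        gb = -y[viol].sum() / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# Toy data: "genuine" and "forged" inputs drawn from two separated clusters
genuine = rng.normal(+1.0, 1.0, size=(40, 256))
forged = rng.normal(-1.0, 1.0, size=(40, 256))
X = embed(np.vstack([genuine, forged]))
X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)             # standardize
y = np.array([1] * 40 + [-1] * 40)
w, b = train_linear_svm(X, y)
acc = (np.sign(X @ w + b) == y).mean()
```

The point of the transfer-learning setup is that the embedding is trained once on plentiful handwriting data and then frozen; only the lightweight SVM is fit per signature dataset.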
Inspection System And Method For Bond Detection And Validation Of Surface Mount Devices Using Sensor Fusion And Active Perception
A hybrid surface-mount component inspection system that combines vision and infrared inspection techniques to determine the presence of surface-mount components on a printed wiring board and the quality of their solder joints. Data-level sensor fusion combines readings from two infrared sensors to obtain emissivity-independent thermal signatures of the solder joints, while feature-level sensor fusion with active perception assembles and processes inspection information from any number of sensors to determine characteristic feature sets of different defect classes and thereby classify solder defects.
Georgia Tech Research Corporation
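The patent abstract does not spell out how the two infrared readings are fused. One classical emissivity-cancelling technique consistent with the description is two-color (ratio) pyrometry under a gray-body and Wien approximation, sketched here as an assumption, not the patent's actual algorithm.

```python
import numpy as np

C2 = 1.4388e-2  # second radiation constant c2, in m*K

def wien_radiance(lam, T, eps):
    """Gray-body spectral radiance under the Wien approximation
    (the c1 prefactor cancels in the ratio, so it is set to 1)."""
    return eps * lam**-5 * np.exp(-C2 / (lam * T))

def ratio_temperature(L1, L2, lam1, lam2):
    """Two-color pyrometry: the radiance ratio of two IR bands cancels a
    wavelength-independent emissivity, leaving only the temperature."""
    lhs = np.log(L1 / L2) - 5.0 * np.log(lam2 / lam1)
    return C2 * (1.0 / lam1 - 1.0 / lam2) / (-lhs)

# A solder joint at 500 K with unknown emissivity 0.3, viewed at 4 um and 5 um
L1 = wien_radiance(4e-6, 500.0, 0.3)
L2 = wien_radiance(5e-6, 500.0, 0.3)
print(ratio_temperature(L1, L2, 4e-6, 5e-6))  # -> 500.0 (emissivity drops out)
```

Because the emissivity term divides out of the ratio, the recovered temperature does not depend on the joint's surface finish, which is what "emissivity-independent thermal signatures" requires.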