A Novel Hybrid Biometric Electronic Voting System: Integrating Finger Print and Face Recognition
A novel hybrid electronic voting system is proposed, implemented, and analyzed. The proposed system uses two voter verification techniques, fingerprint and facial recognition, to give better results than single-identification systems. Cross-verifying a voter during the election process provides better accuracy than single-parameter identification. The facial recognition system uses the Viola-Jones algorithm with rectangular Haar feature selection to detect faces and extract features, both when building the biometric template and during the voting process. Cascaded machine-learning classifiers based on GPCA (Generalized Principal Component Analysis) and K-NN (K-Nearest Neighbors) compare the features for identity verification, matching the eigenvectors of the extracted features against the biometric template pre-stored in the election regulatory body's database. The results show that the proposed cascaded design outperforms systems using other classifiers or single-modality schemes, i.e., facial-only or fingerprint-only. The proposed system is well suited to real-time applications, as it achieves 91% facial recognition accuracy under nominal lighting.

In a manual election, by contrast, ballots are collected and transported in bags of paper votes. The central
station compiles and publishes the names of winners and losers through
television and radio stations. This method is useful only if the whole process
is completed in a transparent way. However, there are some drawbacks to this
system. These include higher expenses, longer time to complete the voting
process, fraudulent practices by the authorities administering elections as
well as malpractices by the voters [1]. These challenges can result in manipulated
election results.
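As a rough illustration of the verification step described in the abstract, the sketch below projects synthetic feature vectors onto principal components and matches them with K-NN. Standard PCA stands in for GPCA, and all data, dimensions, and class counts here are invented for illustration; this is not the paper's implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

# Synthetic "face feature" vectors stand in for the Haar features that
# the Viola-Jones detector would extract (shapes and values are assumptions).
rng = np.random.default_rng(0)
n_voters, n_features = 10, 64

# Enrolled biometric templates: 5 samples per voter.
templates = np.vstack([rng.normal(loc=i, scale=0.3, size=(5, n_features))
                       for i in range(n_voters)])
labels = np.repeat(np.arange(n_voters), 5)

# Project features onto the principal components (eigenvectors of the
# covariance matrix), then match identities with K-NN in the reduced space.
pca = PCA(n_components=8).fit(templates)
knn = KNeighborsClassifier(n_neighbors=3).fit(pca.transform(templates), labels)

# A new sample claimed by voter 3: verify the claimed identity.
probe = rng.normal(loc=3, scale=0.3, size=(1, n_features))
predicted = knn.predict(pca.transform(probe))[0]
print("identity verified:", predicted == 3)
```

In a real deployment the probe would be compared against the template database held by the election authority, with a decision threshold tuned for the cascaded fingerprint-plus-face pipeline.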
Learning by Aligning Videos in Time
We present a self-supervised approach for learning video representations
using temporal video alignment as a pretext task, while exploiting both
frame-level and video-level information. We leverage a novel combination of
temporal alignment loss and temporal regularization terms, which can be used as
supervision signals for training an encoder network. Specifically, the temporal
alignment loss (i.e., Soft-DTW) aims for the minimum cost for temporally
aligning videos in the embedding space. However, optimizing solely for this
term leads to trivial solutions, particularly, one where all frames get mapped
to a small cluster in the embedding space. To overcome this problem, we propose
a temporal regularization term (i.e., Contrastive-IDM) which encourages
different frames to be mapped to different points in the embedding space.
Extensive evaluations on various tasks, including action phase classification,
action phase progression, and fine-grained frame retrieval, on three datasets,
namely Pouring, Penn Action, and IKEA ASM, show superior performance of our
approach over state-of-the-art methods for self-supervised representation
learning from videos. In addition, our method provides significant performance
gains where labeled data is scarce.
Comment: Accepted to CVPR 202
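The two loss terms described in the abstract can be sketched numerically. Below is a minimal NumPy illustration of a Soft-DTW-style alignment cost and a hinge-based temporal regularizer in the spirit of Contrastive-IDM; the function names, margin, and window size are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def soft_min(values, gamma):
    # Smooth minimum used by Soft-DTW: -gamma * log(sum(exp(-v / gamma))).
    v = -np.asarray(values, dtype=float) / gamma
    m = v.max()
    return -gamma * (m + np.log(np.exp(v - m).sum()))

def soft_dtw(x, y, gamma=0.1):
    # Soft-DTW alignment cost between embedded sequences x (n, d) and y (m, d),
    # using squared Euclidean distances between frame embeddings.
    n, m = len(x), len(y)
    D = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            R[i, j] = D[i - 1, j - 1] + soft_min(
                [R[i - 1, j], R[i, j - 1], R[i - 1, j - 1]], gamma)
    return R[n, m]

def contrastive_idm(x, margin=1.0, window=1):
    # Hinge-style regularizer in the spirit of Contrastive-IDM: pull temporally
    # close frames together, push temporally distant frames at least `margin`
    # apart, which penalizes the trivial "all frames collapse" solution.
    n, loss = len(x), 0.0
    for i in range(n):
        for j in range(n):
            d = ((x[i] - x[j]) ** 2).sum()
            loss += d if abs(i - j) <= window else max(0.0, margin - d)
    return loss / (n * n)

# Two toy 1-D "embedded videos" of the same action at different speeds.
a = np.linspace(0.0, 1.0, 5)[:, None]
b = np.linspace(0.0, 1.0, 8)[:, None]
collapsed = np.zeros((5, 1))  # the degenerate embedding

print(soft_dtw(a, b) < soft_dtw(a, b + 2.0))            # aligned < misaligned
print(contrastive_idm(collapsed) > contrastive_idm(a))  # collapse is penalized
```

The final comparison shows why the regularizer is needed: the collapsed embedding achieves a low alignment cost but a high regularization penalty, so the combined objective avoids the trivial solution.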