Relaxed 2-D Principal Component Analysis by Norm for Face Recognition
A relaxed two-dimensional principal component analysis (R2DPCA) approach is
proposed for face recognition. Unlike 2DPCA, 2DPCA- and G2DPCA, the R2DPCA
utilizes the label information (if known) of training samples to calculate a
relaxation vector and assigns a weight to each subset of the training data. A
new relaxed scatter matrix is defined, and the computed projection axes
increase the accuracy of face recognition. The optimal -norms are selected
within a reasonable range. Numerical experiments on practical face databases
indicate that the R2DPCA has high generalization ability and achieves a higher
recognition rate than state-of-the-art methods.
Comment: 19 pages, 11 figures
Generalized Two-Dimensional Quaternion Principal Component Analysis with Weighting for Color Image Recognition
A generalized two-dimensional quaternion principal component analysis
(G2DQPCA) approach with weighting is presented for color image analysis. As a
general framework of 2DQPCA, G2DQPCA can flexibly adapt to different
constraints or requirements by imposing norms on both the constraint function
and the objective function. The gradient operator of quaternion vector
functions is redefined via the structure-preserving gradient operator of real
vector functions. Under the minorization-maximization (MM) framework, an
iterative algorithm is developed to obtain the optimal closed-form solution of
G2DQPCA. The projection vectors generated by the deflating scheme are required
to be mutually orthogonal. A weighting matrix is defined to magnify the effect
of the main features. With the weighted projection bases, the face recognition
accuracy remains unchanged, or varies only within a tight range, as the number
of features increases. Numerical results on real face databases validate that
the newly proposed method outperforms state-of-the-art algorithms.
Comment: 15 pages, 15 figures
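The deflating scheme mentioned above extracts projection vectors one at a time, subtracting each extracted direction from the scatter matrix so that successive vectors come out mutually orthogonal. A real-valued sketch of that scheme (the quaternion arithmetic and the weighting matrix of G2DQPCA are omitted; all names are illustrative):

```python
import numpy as np

def deflating_2dpca(images, k):
    """Greedy deflation: repeatedly take the leading eigenvector of the
    scatter matrix, then remove its rank-1 component so the next vector
    is orthogonal to all previous ones. Real-valued sketch only; the
    paper works over quaternions with a weighting matrix."""
    mean = np.mean(images, axis=0)
    G = sum((A - mean).T @ (A - mean) for A in images) / len(images)
    vecs = []
    for _ in range(k):
        vals, V = np.linalg.eigh(G)     # ascending eigenvalues
        v, lam = V[:, -1], vals[-1]     # leading pair of the deflated G
        vecs.append(v)
        G = G - lam * np.outer(v, v)    # deflate: subtract rank-1 component
    return np.stack(vecs, axis=1)       # (w, k), columns mutually orthogonal

rng = np.random.default_rng(1)
imgs = rng.standard_normal((15, 10, 7))
X = deflating_2dpca(imgs, 3)
print(np.round(X.T @ X, 6))  # approximately the 3x3 identity
```

With exact eigendecompositions, deflation of a symmetric matrix reproduces its top-k eigenvectors, so orthogonality holds by construction rather than by an added constraint.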
Feature Selection and Non-Euclidean Dimensionality Reduction: Application to Electrocardiology.
Heart disease has been the leading cause of human death for decades.
To improve treatment of heart disease, algorithms to perform reliable computer diagnosis using electrocardiogram (ECG) data have become an area of active research. This thesis utilizes well-established methods from cluster analysis, classification, and localization to cluster and classify ECG data, and aims to help clinicians diagnose and treat heart diseases. The power of these methods is enhanced by state-of-the-art feature selection and dimensionality reduction.
The specific contributions of this thesis are as follows. First, a unique combination of ECG feature selection and mixture-model clustering is introduced to classify the sites of origin of ventricular tachycardias. Second, we apply a restricted Boltzmann machine (RBM) to learn sparse representations of ECG signals and to build an enriched classifier from patient data. Third, a novel manifold learning algorithm, called Quaternion Laplacian Information Maps (QLIM), is introduced and applied to visualize high-dimensional ECG signals. These methods are applied to the design of an automated supervised classification algorithm that helps a physician identify the origin of ventricular arrhythmias (VA) directly from a patient's ECG data. The algorithm is trained on a large database of ECGs and catheter positions collected during electrophysiology (EP) pace-mapping procedures. The proposed algorithm is demonstrated to have a correct classification rate of over 80% for the difficult task of classifying VAs having epicardial or endocardial origins.
PhD, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies
http://deepblue.lib.umich.edu/bitstream/2027.42/113303/1/dyjung_1.pd
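The first contribution pairs feature selection with mixture-model clustering. As a minimal stand-in for that clustering step, the sketch below fits a spherical Gaussian mixture by expectation-maximization on synthetic 2-D features; it is not the thesis implementation, and every name and parameter here is hypothetical:

```python
import numpy as np

def gmm_em(X, k, iters=50, seed=0):
    """Tiny EM for a spherical Gaussian mixture model - an illustrative
    stand-in for the mixture-model clustering applied to ECG features."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # farthest-point initialization keeps the initial means spread out
    mu = [X[rng.integers(n)]]
    for _ in range(k - 1):
        d2 = np.min([((X - m) ** 2).sum(1) for m in mu], axis=0)
        mu.append(X[d2.argmax()])
    mu = np.array(mu)
    var = np.ones(k)
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities under spherical Gaussians
        sq = ((X[:, None, :] - mu[None]) ** 2).sum(-1)           # (n, k)
        log_r = np.log(pi) - 0.5 * d * np.log(var) - sq / (2 * var)
        r = np.exp(log_r - log_r.max(1, keepdims=True))
        r /= r.sum(1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = r.sum(0)
        pi = nk / n
        mu = (r.T @ X) / nk[:, None]
        sq = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
        var = (r * sq).sum(0) / (d * nk)
    return r.argmax(1), mu

# two well-separated synthetic clusters of "ECG features"
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
labels, mu = gmm_em(X, 2)
print(mu.round(1))
```

In a supervised setting like the pace-mapping classifier, the fitted mixture components would then be associated with labeled sites of origin rather than used purely for unsupervised grouping.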
Spatiotemporal Saliency Detection: State of the Art
Saliency detection has become a very prominent research subject in recent years, and many techniques have been proposed for it. This paper surveys saliency detection techniques published from 2000 to 2015, covering almost every technique. All the methods are explained briefly, including their advantages and disadvantages. The various techniques are compared with the help of a table listing author names, paper titles, years, techniques, algorithms, and challenges. Levels of acceptance rates and accuracy are also compared.