52 research outputs found

    A Survey on Hybrid Techniques Using SVM

    Get PDF
    Support Vector Machines (SVM) with linear or nonlinear kernels have become one of the most promising learning algorithms for classification as well as regression. Multilayer perceptron (MLP), radial basis function (RBF), and polynomial kernels also work efficiently with SVM. SVM is derived from statistical learning theory and is a very powerful statistical tool. The basic principle of SVM is structural risk minimization, which is closely related to regularization theory. SVM is a group of supervised learning methods used for classification or regression. This paper discusses the importance of Support Vector Machines in various areas and examines the efficiency of SVM in combination with other classification techniques.
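The structural-risk-minimization principle the abstract mentions can be illustrated with a minimal sketch: a linear SVM trained by subgradient descent on the regularized hinge loss. The toy dataset, learning rate, and regularization weight below are illustrative choices, not taken from the survey.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy linearly separable data: two Gaussian blobs, labels in {-1, +1}
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)

w = np.zeros(2)
b = 0.0
lam = 0.01  # regularization weight: the "structural risk" term
lr = 0.1
for epoch in range(200):
    for i in rng.permutation(len(y)):
        margin = y[i] * (X[i] @ w + b)
        if margin < 1:  # point violates the margin: hinge loss is active
            w += lr * (y[i] * X[i] - lam * w)
            b += lr * y[i]
        else:           # only the regularizer contributes
            w += lr * (-lam * w)

pred = np.sign(X @ w + b)
print("training accuracy:", (pred == y).mean())
```

Nonlinear kernels (RBF, polynomial) extend this by replacing the inner product with a kernel function; mature implementations handle that via the dual formulation.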

    Skeleton-aided Articulated Motion Generation

    Full text link
    This work makes the first attempt to generate an articulated human motion sequence from a single image. On the one hand, we use paired inputs, human skeleton information as a motion embedding and a single human image as an appearance reference, to generate novel motion frames based on the conditional GAN framework. On the other hand, a triplet loss is employed to enforce appearance smoothness between consecutive frames. Because the proposed framework jointly exploits the image appearance space and the articulated/kinematic motion space, it generates realistic articulated motion sequences, in contrast to most previous video generation methods, which yield blurred motion. We test our model on two human action datasets, KTH and Human3.6M, and the framework produces very promising results on both. Comment: ACM MM 201
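The triplet loss used for appearance smoothness can be sketched generically: pull an anchor embedding toward a positive (e.g. an adjacent frame) and push it away from a negative, up to a margin. The function name, vectors, and margin below are illustrative, not the paper's implementation.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Margin-based triplet loss on embedding vectors.

    Zero when the anchor is already closer to the positive than to
    the negative by at least the margin; positive otherwise.
    """
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 1.0])
p = np.array([0.1, 0.9])  # nearby frame: small distance to anchor
n = np.array([1.0, 0.0])  # distant frame: large distance to anchor
print(triplet_loss(a, p, n))  # 0.0 -- constraint already satisfied
```

In training, the loss gradient flows back into the generator so that consecutive frames stay close in the embedding space.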

    Network-based features for retinal fundus vessel structure analysis

    Get PDF
    Retinal fundus imaging is a non-invasive method for visualizing the structure of the blood vessels in the retina, whose features may indicate the presence of diseases such as diabetic retinopathy (DR) and glaucoma. Here we present a novel method to analyze and quantify changes in the retinal blood vessel structure in patients diagnosed with glaucoma or with DR. First, we use an automatic unsupervised segmentation algorithm to extract a tree-like graph from the retinal blood vessel structure. The nodes of the graph represent branching (bifurcation) points and endpoints, while the links represent vessel segments that connect the nodes. Then, we quantify structural differences between the graphs extracted from the groups of healthy and non-healthy patients. We also use fractal analysis to characterize the extracted graphs. Applying these techniques to three retinal fundus image databases, we find significant differences between the healthy and non-healthy groups (p-values lower than 0.005 or 0.001, depending on the method and on the database). The results are sensitive to the segmentation method (manual or automatic) and to the resolution of the images. Peer reviewed. Postprint (published version).
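The node classification described here follows directly from vessel-segment counts: endpoints touch one segment, bifurcation points three or more. A toy sketch on a hypothetical, hand-made vessel graph (node names and edges are invented for illustration):

```python
from collections import defaultdict

# Hypothetical vessel graph: each link is a vessel segment between nodes.
edges = [("root", "a"), ("a", "b"), ("a", "c"), ("c", "d"), ("c", "e")]

degree = defaultdict(int)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# Endpoints have degree 1; branching (bifurcation) points have degree >= 3.
endpoints = sorted(n for n, d in degree.items() if d == 1)
bifurcations = sorted(n for n, d in degree.items() if d >= 3)
print("endpoints:", endpoints)        # ['b', 'd', 'e', 'root']
print("bifurcations:", bifurcations)  # ['a', 'c']
```

Summary statistics over such node sets (counts, branch depths, degree distributions) are the kind of structural features that can then be compared between patient groups.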

    Compressed Sensing for Open-ended Waveguide Non-Destructive Testing and Evaluation

    Get PDF
    Ph.D. Thesis. Non-destructive testing and evaluation (NDT&E) systems using open-ended waveguides (OEW) face critical challenges. In the sensing stage, data acquisition by raster scan is time-consuming, which makes on-line detection difficult. The sensing stage also disregards the demands of the later feature extraction process, leading to an excessive amount of data and processing overhead. In the feature extraction stage, efficient and robust defect region segmentation in the obtained image is challenging when the image background is complex. Compressed sensing (CS) demonstrates impressive data compression ability in various applications using sparse models. Developing CS models for OEW NDT&E that jointly consider sensing and processing, for fast data acquisition, data compression, and efficient and robust feature extraction, remains a challenge. This thesis develops integrated sensing-processing CS models to address the drawbacks of OEW NDT systems and carries out case studies in low-energy impact damage detection for carbon fibre reinforced plastic (CFRP) materials. The major contributions are: (1) For the challenge of fast data acquisition, an online CS model is developed that offers faster data acquisition and reduces the data amount without any hardware modification. The images obtained with an OEW are usually smooth and can therefore be sparsely represented in a discrete cosine transform (DCT) basis. Based on this property, a customised 0/1 Bernoulli measurement matrix is designed for downsampling. The full data is reconstructed with the orthogonal matching pursuit algorithm using the downsampled data, the DCT basis, and the customised 0/1 Bernoulli matrix. Because it is hard to determine the number of sampled pixels required for sparse reconstruction when training data is lacking, an accumulated sampling and recovery process is developed in this CS model.
The defect region can be extracted with the proposed histogram threshold edge detection (HTED) algorithm after each recovery, which forms an online process. A case study in impact damage detection on CFRP materials is carried out for validation. The results show that the data acquisition time is reduced by one order of magnitude while maintaining image quality and defect regions equivalent to those of a raster scan. (2) For the challenge of data compression that accounts for later feature extraction, a feature-supervised CS data acquisition method is proposed and evaluated. It preserves the features of interest while reducing the data amount. The frequencies that reveal the features occupy only a small part of the frequency band, so the method first finds this sparse frequency range to supervise the later sampling process. Subsequently, based on the joint sparsity of neighbouring frames and the extracted frequency band, an aligned spatial-spectrum sampling scheme is proposed. The scheme samples only the frequency range of interest for the required features, using a customised 0/1 Bernoulli measurement matrix. The spectral-spatial data of interest are reconstructed jointly, which is much faster than frame-by-frame methods. The proposed feature-supervised CS data acquisition is implemented and compared with raster scanning and traditional CS reconstruction in impact damage detection on CFRP materials. The results show that the data amount is reduced greatly without compromising feature quality, and the reconstruction speed improves linearly with the number of measurements. (3) Based on the above CS data acquisition methods, CS models are developed that detect defects directly from CS data rather than from the reconstructed full spatial data. This approach is robust to textured backgrounds and more time-efficient than the HTED algorithm.
Firstly, based on the fact that the histogram is invariant to downsampling with the customised 0/1 Bernoulli measurement matrix, a qualitative method is developed that gives only a binary judgement of whether a defect is present. It achieves a high probability of detection and high accuracy compared with other methods. Secondly, a new greedy algorithm, sparse orthogonal matching pursuit (spOMP), is developed for defect region segmentation to quantitatively extract the defect region, because conventional sparse reconstruction algorithms cannot properly exploit the sparsity of the correlation between the measurement matrix and the CS data. The proposed algorithms are faster and more robust to interference than other algorithms. China Scholarship Council.
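The pipeline in contribution (1) — a 0/1 Bernoulli measurement matrix, a DCT sparsity basis, and orthogonal matching pursuit (OMP) — can be sketched in miniature. This is a 1-D toy version with illustrative sizes (the thesis works on 2-D OEW images, and its spOMP variant is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 64, 32, 3  # signal length, measurements, sparsity level

# Orthonormal DCT basis: columns are cosine atoms. Smooth signals
# (like OEW images) are assumed sparse in this basis.
idx = np.arange(N)
Psi = np.sqrt(2.0 / N) * np.cos(
    np.pi * (2 * idx[:, None] + 1) * idx[None, :] / (2 * N))
Psi[:, 0] /= np.sqrt(2)

s_true = np.zeros(N)
s_true[rng.choice(N, K, replace=False)] = rng.normal(0, 1, K)
x = Psi @ s_true                                  # smooth test signal

Phi = rng.integers(0, 2, (M, N)).astype(float)    # 0/1 Bernoulli sampling
y = Phi @ x                                       # compressed measurements
A = Phi @ Psi

# OMP: pick the atom most correlated with the residual, then re-fit
# all chosen atoms jointly by least squares.
support, r = [], y.copy()
for _ in range(K):
    support.append(int(np.argmax(np.abs(A.T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef

s_hat = np.zeros(N)
s_hat[support] = coef
x_hat = Psi @ s_hat
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

The accumulated sampling idea corresponds to growing M incrementally and re-running the recovery until the reconstruction stabilizes.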

    Segmentation based coding of depth Information for 3D video

    Get PDF
    Increased interest in 3D content, and the need to transmit, broadcast, and store the full information that represents a 3D view, have made this a hot topic in recent years. Since adding depth information to the views increases the encoding bitrate considerably, we decided to find a new approach to encoding and decoding the depth information for 3D video. In this project, different approaches to encoding/decoding the depth information are evaluated, and a new method is implemented whose results are compared to the best previously developed method in terms of both bitrate and quality (PSNR).
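The quality metric used for the comparison, PSNR, is standard and easy to state concretely. A minimal sketch (the function name and the 8-bit peak value are conventional choices, not specifics of this project):

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((reference.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

depth = np.full((4, 4), 100.0)  # stand-in for a depth map
coded = depth + 10.0            # uniform coding error of 10 levels
print(psnr(depth, coded))       # 10*log10(255^2/100) ~ 28.13 dB
```

Rate-distortion comparisons then plot PSNR of the decoded depth (or of views synthesized from it) against encoding bitrate.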

    A survey on classification algorithms of brain images in Alzheimer’s disease based on feature extraction techniques

    Get PDF
    Abstract: Alzheimer’s disease (AD) is one of the most serious neurological disorders affecting elderly people. Patients with AD experience severe memory loss, one of the main causes of which is atrophy in the hippocampus, amygdala, and other brain regions. Due to the enormous growth in the number of AD patients and the paucity of proper diagnostic tools, detection and classification of AD are considered a challenging research area. Before a cognitively normal (CN) person develops symptoms of AD, they may pass through an intermediate stage commonly known as Mild Cognitive Impairment (MCI). MCI has two stages, namely stable MCI (SMCI) and progressive MCI (PMCI). In SMCI a patient remains stable, whereas in PMCI a person gradually develops early symptoms of AD. Several research works are in progress on the detection and classification of AD based on changes in the brain. In this paper, we analyze existing state-of-the-art works for AD detection and classification based on different feature extraction approaches. We summarize the existing research articles with detailed observations, compare the performance and research issues of each feature extraction mechanism, and observe that AD classification using wavelet transform-based feature extraction approaches can achieve convincing results.
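The wavelet-based feature extraction the survey highlights typically decomposes a brain image into sub-bands and summarizes each by a statistic such as energy. A minimal sketch of one level of a 2-D Haar transform (the averaging normalization, the tiny stand-in image, and the energy features are illustrative; real pipelines use libraries such as PyWavelets on MRI slices):

```python
import numpy as np

def haar2d_level(img):
    """One level of a 2-D Haar wavelet transform (average/difference form).

    Returns approximation (LL) and detail (LH, HL, HH) sub-bands,
    which can be summarized (e.g. by energy) into feature vectors.
    """
    # Rows: pairwise average and difference of adjacent columns
    lo = (img[:, ::2] + img[:, 1::2]) / 2.0
    hi = (img[:, ::2] - img[:, 1::2]) / 2.0
    # Columns: the same split applied to both row outputs
    ll = (lo[::2] + lo[1::2]) / 2.0
    lh = (lo[::2] - lo[1::2]) / 2.0
    hl = (hi[::2] + hi[1::2]) / 2.0
    hh = (hi[::2] - hi[1::2]) / 2.0
    return ll, lh, hl, hh

img = np.arange(16.0).reshape(4, 4)   # stand-in for a brain MRI slice
ll, lh, hl, hh = haar2d_level(img)
features = [float(np.sum(band ** 2)) for band in (ll, lh, hl, hh)]
print(features)   # sub-band energies used as a feature vector
```

A classifier is then trained on such per-sub-band features to separate CN, MCI, and AD groups.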

    Biometrics

    Get PDF
    Biometrics comprises methods for uniquely recognizing humans based on one or more intrinsic physical or behavioral traits. In computer science in particular, biometrics is used as a form of identity and access management and access control. It is also used to identify individuals in groups that are under surveillance. The book consists of 13 chapters, each focusing on a certain aspect of the problem, divided into three sections: physical biometrics, behavioral biometrics, and medical biometrics. The key objective of the book is to provide a comprehensive reference and text on human authentication and identity verification from physiological, behavioural, and other points of view. It aims to publish new insights into current innovations in computer systems and technology for biometrics development and its applications. The book was reviewed by the editor, Dr. Jucheng Yang, and by many of the guest editors, including Dr. Girija Chetty, Dr. Norman Poh, Dr. Loris Nanni, Dr. Jianjiang Feng, Dr. Dongsun Park, and Dr. Sook Yoon, who also made significant contributions to the book.

    Machine learning methods for the characterization and classification of complex data

    Get PDF
    This thesis presents novel methods for the analysis and classification of medical images and, more generally, complex data. First, an unsupervised machine learning method is proposed to order anterior chamber OCT (Optical Coherence Tomography) images according to a patient's risk of developing angle-closure glaucoma. In a second study, two outlier-finding techniques are proposed to improve the results of the above-mentioned machine learning algorithm; we also show that they are applicable to a wide variety of data, including fraud detection in credit card transactions. In a third study, we analyze the topology of the vascular network of the retina, treating it as a complex tree-like network, and show that structural differences reveal the presence of glaucoma and diabetic retinopathy. In a fourth study, we use a model of a laser with optical injection that exhibits extreme events in its intensity time series to evaluate machine learning methods for forecasting such extreme events. Postprint (published version).
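The outlier-finding idea that carries over to credit-card fraud can be illustrated with a simple robust z-score detector. This is a generic sketch, not the thesis's method: the function name, the MAD-based scoring, the threshold, and the transaction amounts are all illustrative.

```python
import numpy as np

def zscore_outliers(values, threshold=3.0):
    """Flag points whose robust z-score exceeds a threshold.

    Uses the median and the median absolute deviation (MAD) so that
    the outliers themselves do not inflate the scale estimate.
    """
    values = np.asarray(values, dtype=float)
    med = np.median(values)
    mad = np.median(np.abs(values - med))
    scores = 0.6745 * (values - med) / mad  # rescaled to ~N(0,1) units
    return np.abs(scores) > threshold

amounts = [12.0, 9.5, 11.2, 10.8, 499.0, 10.1]  # one anomalous transaction
print(zscore_outliers(amounts))
```

The same scoring applies unchanged to other one-dimensional summaries of complex data, which is what makes such detectors broadly reusable.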