6,540 research outputs found

    Spectral-spatial classification of hyperspectral images: three tricks and a new supervised learning setting

    Spectral-spatial classification of hyperspectral images has been the subject of many studies in recent years. In the presence of only very few labeled pixels, this task becomes challenging. In this paper we address the following two research questions: 1) Can a simple neural network with just a single hidden layer achieve state-of-the-art performance in the presence of few labeled pixels? 2) How is the performance of hyperspectral image classification methods affected when using disjoint train and test sets? We give a positive answer to the first question by using three tricks within a very basic shallow Convolutional Neural Network (CNN) architecture: a tailored loss function, and smooth- and label-based data augmentation. The tailored loss function enforces that neighboring wavelengths contribute similarly to the features generated during training. A new label-based technique proposed here favors the selection of pixels from smaller classes, which is beneficial in the presence of very few labeled pixels and skewed class distributions. To address the second question, we introduce a new sampling procedure to generate disjoint train and test sets. The train set is used to obtain the CNN model, which is then applied to pixels in the test set to estimate their labels. We assess the efficacy of the simple neural network method on five publicly available hyperspectral images, on which our method significantly outperforms the considered baselines. Notably, with just 1% of labeled pixels per class, our method achieves an accuracy ranging from 86.42% (challenging dataset) to 99.52% (easy dataset). Furthermore, we show that the simple neural network method improves over other baselines in the new challenging supervised setting. Our analysis substantiates the highly beneficial effect of using the entire image (i.e., both train and test data) for constructing a model. Comment: Remote Sensing 201
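The label-based selection described above, which favors pixels from smaller classes, can be sketched as inverse-class-frequency sampling. The function below is a hypothetical illustration of that idea (names and parameters are our own, not the authors' implementation):

```python
import numpy as np

def label_based_sampling(labels, n_samples, rng=None):
    """Draw pixel indices with probability inversely proportional to
    class frequency, so smaller classes are favored (hypothetical sketch
    of the label-based augmentation idea, not the paper's code)."""
    rng = np.random.default_rng(rng)
    classes, counts = np.unique(labels, return_counts=True)
    inv_freq = {c: 1.0 / n for c, n in zip(classes, counts)}
    weights = np.array([inv_freq[l] for l in labels], dtype=float)
    weights /= weights.sum()  # normalize to a probability distribution
    return rng.choice(len(labels), size=n_samples, replace=True, p=weights)

labels = np.array([0] * 90 + [1] * 10)   # skewed class distribution (9:1)
idx = label_based_sampling(labels, 1000, rng=0)
```

Weighting each pixel by the inverse size of its class makes every class contribute roughly equally to the drawn sample, regardless of the skew in the label distribution.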

    Surface profile prediction and analysis applied to turning process

    An approach for the prediction of the surface profile in the turning process using Radial Basis Function (RBF) neural networks is presented. The input parameters of the RBF networks are cutting speed, depth of cut, and feed rate; the output is the Fast Fourier Transform (FFT) vector of the surface profile, from which the profile is predicted. The RBF networks are trained with adaptive optimal training parameters related to the cutting parameters and predict the surface profile using the corresponding optimal network topology for each new cutting condition. Very good surface profile prediction, in terms of agreement with experimental data, was achieved with high accuracy, low cost, and high speed. The RBF networks are found to have an advantage over Back Propagation (BP) neural networks. Furthermore, a new group of training and testing data was also used to analyse the influence of tool wear and chip formation on the prediction accuracy of the RBF neural networks.
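An RBF network of this kind, mapping cutting parameters to an output vector, can be sketched with fixed centers and least-squares output weights (a minimal version assuming Gaussian basis functions; the paper's adaptive selection of training parameters and topology per cutting condition is omitted):

```python
import numpy as np

def rbf_design(X, centers, sigma):
    # Gaussian RBF activations for every (sample, center) pair
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_rbf(X, Y, centers, sigma, ridge=1e-8):
    """Solve for output weights via ridge-regularized normal equations."""
    Phi = rbf_design(X, centers, sigma)
    A = Phi.T @ Phi + ridge * np.eye(len(centers))
    W, *_ = np.linalg.lstsq(A, Phi.T @ Y, rcond=None)
    return W

def predict_rbf(X, centers, sigma, W):
    return rbf_design(X, centers, sigma) @ W

# Toy data: 3 inputs (speed, depth of cut, feed) -> 4-bin "FFT vector"
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (30, 3))
Y = np.stack([np.sin(2 * np.pi * k * X[:, 0]) * X[:, 2]
              for k in range(1, 5)], axis=1)
W = fit_rbf(X, Y, centers=X, sigma=0.5)
err = np.abs(predict_rbf(X, X, 0.5, W) - Y).max()
```

With the training points themselves as centers, the network nearly interpolates the training data; in practice centers and widths would be tuned per cutting condition, as the abstract describes.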

    A Predictive Model for Assessment of Successful Outcome in Posterior Spinal Fusion Surgery

    Background: Low back pain is a common problem in many people. Neurosurgeons recommend posterior spinal fusion (PSF) surgery as one of the therapeutic strategies for patients with low back pain. Due to the high risk of this type of surgery and the critical importance of making the right decision, accurate prediction of the surgical outcome is one of the main concerns for neurosurgeons.
    Methods: In this study, 12 types of multi-layer perceptron (MLP) networks and 66 radial basis function (RBF) networks, as two types of artificial neural network methods, and a logistic regression (LR) model were created and compared to predict satisfaction with PSF surgery, one of the most well-known spinal surgeries.
    Results: Twenty-seven of the most important clinical and radiologic features for 480 patients (150 males, 330 females; mean age 52.32 ± 8.39 years) were considered as model inputs, including: age, sex, type of disorder, duration of symptoms, job, walking distance without pain (WDP), walking distance without sensory (WDS) disorders, visual analog scale (VAS) scores, Japanese Orthopaedic Association (JOA) score, diabetes, smoking, knee pain (KP), pelvic pain (PP), osteoporosis, and spinal deformity, among others. Indexes such as the receiver operating characteristic area under the curve (ROC-AUC), positive predictive value, negative predictive value, and accuracy were calculated to determine the best model. Postsurgical satisfaction was 77.5% at 6 months of follow-up. The patients were divided into training, testing, and validation data sets.
    Conclusion: The findings showed that the MLP model performed better than the RBF and LR models for prediction of the PSF surgery outcome.
    Keywords: Posterior spinal fusion surgery (PSF); Prediction; Surgical satisfaction; Multi-layer perceptron (MLP); Logistic regression (LR)
    Peer reviewed
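The evaluation indexes named in the abstract (ROC-AUC, positive and negative predictive value, accuracy) can be computed directly from labels and model scores; the snippet below is a generic sketch, not the study's code:

```python
import numpy as np

def roc_auc(y_true, scores):
    """ROC-AUC via the Mann-Whitney U statistic (ties get half credit)."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def confusion_metrics(y_true, y_pred):
    """PPV, NPV, and accuracy from binary labels and predictions."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return {"PPV": tp / (tp + fp),
            "NPV": tn / (tn + fn),
            "accuracy": (tp + tn) / len(y_true)}

# Toy example: 4 patients, scores from any of the compared models
y_true = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8])
auc = roc_auc(y_true, scores)                          # 0.75 here
m = confusion_metrics(y_true, (scores >= 0.5).astype(int))
```

Computing all indexes from the same held-out predictions is what allows the MLP, RBF, and LR models to be ranked on a common footing.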

    Automatic Analysis of Facial Expressions Based on Deep Covariance Trajectories

    In this paper, we propose a new approach for facial expression recognition using deep covariance descriptors. The solution is based on the idea of encoding local and global Deep Convolutional Neural Network (DCNN) features extracted from still images in compact local and global covariance descriptors. The geometry of the space of covariance matrices is that of Symmetric Positive Definite (SPD) matrices. By classifying static facial expressions using a Support Vector Machine (SVM) with a valid Gaussian kernel on the SPD manifold, we show that deep covariance descriptors are more effective than standard classification with fully connected layers and softmax. In addition, we propose a completely new solution that models the temporal dynamics of facial expressions as deep trajectories on the SPD manifold. As an extension of the classification pipeline for covariance descriptors, we apply SVM with valid positive definite kernels derived from global alignment to classify deep covariance trajectories. Through extensive experiments on the Oulu-CASIA, CK+, and SFEW datasets, we show that both the proposed static and dynamic approaches achieve state-of-the-art performance for facial expression recognition, outperforming many recent approaches. Comment: A preliminary version of this work appeared in "Otberdout N, Kacem A, Daoudi M, Ballihi L, Berretti S. Deep Covariance Descriptors for Facial Expression Recognition, in British Machine Vision Conference 2018, BMVC 2018, Northumbria University, Newcastle, UK, September 3-6, 2018. ; 2018 :159." arXiv admin note: substantial text overlap with arXiv:1805.0386
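A covariance descriptor and a valid Gaussian kernel on the SPD manifold can be sketched as follows, here using the log-Euclidean metric as one positive definite choice (an assumption for illustration; the paper's exact kernel construction may differ):

```python
import numpy as np

def covariance_descriptor(feats, eps=1e-6):
    """SPD covariance descriptor of a set of DCNN feature vectors
    (rows = spatial locations, columns = channels); eps regularizes
    the matrix so it stays strictly positive definite."""
    C = np.cov(feats, rowvar=False)
    return C + eps * np.eye(C.shape[0])

def log_euclidean_kernel(A, B, gamma=1.0):
    """Gaussian kernel on SPD matrices under the log-Euclidean metric,
    a known positive definite choice (sketch; not the paper's code)."""
    def spd_log(M):
        w, V = np.linalg.eigh(M)       # eigendecomposition of SPD matrix
        return (V * np.log(w)) @ V.T   # matrix logarithm via eigenvalues
    d = np.linalg.norm(spd_log(A) - spd_log(B), "fro")
    return np.exp(-gamma * d ** 2)

# Toy example: 50 feature vectors of dimension 4 per face region
rng = np.random.default_rng(0)
C = covariance_descriptor(rng.normal(size=(50, 4)))
D = covariance_descriptor(rng.normal(size=(50, 4)))
k_self = log_euclidean_kernel(C, C)    # identical inputs -> kernel 1.0
```

Mapping each SPD matrix through the matrix logarithm flattens the manifold, so the resulting Gaussian kernel can be fed to a standard SVM.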