A Multi-Layer Fusion-Based Facial Expression Recognition Approach with Optimal Weighted AUs
Affective computing is an increasingly important branch of artificial intelligence that deals with rich and subjective human communication. Given the complexity of affective expression, discriminative feature extraction and the selection of a corresponding high-performance classifier remain a major challenge. Specific features and classifiers perform differently on different datasets, and there is currently no consensus in the literature that any single expression feature or classifier is best in all cases. Although deep learning methods, which learn deep features instead of relying on manual construction, have appeared in expression recognition research, the limited availability of training samples remains an obstacle to practical application. In this paper, we aim to find an effective solution based on a fusion and association-learning strategy with typical hand-crafted features and classifiers. Taking these typical features and classifiers in the facial expression area as a basis, we fully analyse their fusion performance. Meanwhile, to emphasize the major attributes of affective computing, we select expression-related Action Units (AUs) as basic components, and we employ association rules to mine the relationships between AUs and facial expressions. Based on a comprehensive analysis from different perspectives, we propose a novel facial expression recognition approach that embeds multiple features and multiple classifiers into an AU-based stacking framework. Extensive experiments on two public datasets show that the proposed multi-layer fusion system with optimal AU weighting yields substantial improvements in facial expression recognition over individual features/classifiers and several state-of-the-art methods, including a recent deep learning based expression recognition approach.
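The abstract does not specify the exact features or classifiers used, but the core idea of embedding multiple classifiers into a stacking framework can be sketched with scikit-learn. In this minimal sketch, synthetic random vectors stand in for AU-derived feature descriptors, and the base/meta classifier choices (SVM, random forest, logistic regression) are illustrative assumptions, not the paper's actual configuration:

```python
# Sketch of multi-classifier fusion via stacking, assuming scikit-learn.
# Synthetic data stands in for AU-based feature vectors; the classifier
# choices here are illustrative, not those reported in the paper.
import numpy as np
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 17))    # hypothetical AU-derived features (17 AUs)
y = rng.integers(0, 6, size=300)  # six basic expression classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# First layer: heterogeneous base classifiers trained on the same features;
# second layer: a meta-learner fuses their (cross-validated) predictions.
stack = StackingClassifier(
    estimators=[("svm", SVC(probability=True, random_state=0)),
                ("rf", RandomForestClassifier(random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, stack.predict(X_te)))
```

In a stacking layout like this, the meta-learner can compensate for base classifiers that are strong only on some classes, which is the kind of complementarity the fusion strategy above relies on.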
Investigating the Capabilities of Various Multispectral Remote Sensors Data to Map Mineral Prospectivity Based on Random Forest Predictive Model: A Case Study for Gold Deposits in Hamissana Area, NE Sudan
Remote sensing data provide significant information about surface geological features, but they have not been fully investigated as a tool for delineating mineral prospective targets using the latest advances in machine-learning predictive modeling. In this study, besides available geological data (lithology, structure, lineaments), Landsat-8, Sentinel-2, and ASTER multispectral remote sensing data were processed to produce various predictor maps, which then formed four distinct datasets (namely Landsat-8, Sentinel-2, ASTER, and Data-integration). Remote sensing enhancement techniques, including band ratio (BR), principal component analysis (PCA), and minimum noise fraction (MNF), were applied to produce predictor maps related to hydrothermal alteration zones in the Hamissana area, while geological predictor maps were derived by applying spatial analysis methods. These four datasets were used independently to train a random forest (RF) algorithm, which was then employed to conduct data-driven gold mineral prospectivity modeling (MPM) of the study area and to compare the capability of the different datasets. The modeling results revealed that the ASTER and Sentinel-2 datasets achieved very similar accuracy and outperformed the Landsat-8 dataset. Based on the area under the ROC curve (AUC), both datasets had the same prediction accuracy of 0.875; however, the ASTER dataset yielded the highest overall classification accuracy of 73%, which is 6% higher than Sentinel-2 and 13% higher than Landsat-8. With the data-integration approach, the prediction accuracy increased by about 6% (AUC: 0.938) compared with the ASTER dataset. These results suggest that the proposed framework for exploiting remote sensing data is promising and can serve as an alternative technique for MPM when data availability is an issue.
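The data-driven workflow described above, training a random forest on predictor maps and scoring it by AUC, can be sketched as follows. This is a minimal illustration assuming scikit-learn: random values stand in for the per-cell predictor-map values (band ratios, PCA/MNF components, distances to structures), and the labels stand in for known gold occurrences versus non-deposit locations:

```python
# Sketch of data-driven mineral prospectivity modeling with a random forest,
# assuming scikit-learn. Real inputs would be co-registered raster layers
# flattened into a (cells x predictors) table; random values stand in here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 12))    # hypothetical predictor-map values per cell
y = rng.integers(0, 2, size=500)  # 1 = known occurrence, 0 = non-deposit site

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

rf = RandomForestClassifier(n_estimators=200, random_state=42)
rf.fit(X_tr, y_tr)

# Prospectivity score = predicted probability of the deposit class; AUC
# measures how well scores rank known occurrences above non-deposit sites.
scores = rf.predict_proba(X_te)[:, 1]
print("AUC:", roc_auc_score(y_te, scores))
```

Mapping the per-cell probabilities back onto the study-area grid would yield the prospectivity map; comparing AUC across the four predictor datasets mirrors the dataset comparison reported in the abstract.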