
    Facial expression recognition via a jointly-learned dual-branch network

    Human emotion recognition depends on facial expressions, and essentially on the extraction of relevant features. Accurate feature extraction is generally difficult due to external interference factors and the mislabelling of some datasets, such as the Fer2013 dataset. Deep learning approaches permit automatic and intelligent feature extraction from the input database, but when the database is poorly distributed or its samples lack diversity, the extracted features are negatively affected. Furthermore, one of the main challenges for efficient facial feature extraction and accurate facial expression recognition is that facial expression datasets are usually considerably small compared to other image datasets. To address these problems, this paper proposes a new approach for facial expression recognition based on a dual-branch convolutional neural network composed of three modules: the first two carry out the feature engineering stage through two branches, and the third performs feature fusion and classification. The first branch uses an improved convolutional part of the VGG network to benefit from its known robustness, while the second branch applies transfer learning with the EfficientNet network to compensate for the limited training samples in the datasets. Finally, to improve recognition performance, the classification decision is made on the fusion of both branches' feature maps. Experimental results obtained on the Fer2013 and CK+ datasets show the superiority of the proposed approach compared to several state-of-the-art results, as well as to using either model alone.
These results are very competitive, especially on the CK+ dataset, for which the proposed dual-branch model reaches an accuracy of 99.32%, while on the FER-2013 dataset the VGG-inspired CNN obtains an accuracy of 67.70%, which is acceptable given the difficulty of the images in this dataset.
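The fusion step described above can be sketched minimally: two pooled branch feature vectors are concatenated and passed to a single dense softmax head. The dimensions (512 for the VGG-style branch, 1280 for the EfficientNet-style branch), the random features, and the single-layer head are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(42)
NUM_CLASSES = 7  # Fer2013 expression classes

# Hypothetical pooled feature vectors produced by each branch
vgg_features = rng.normal(size=512)      # VGG-style branch output (assumed size)
effnet_features = rng.normal(size=1280)  # EfficientNet-style branch output (assumed size)

# Fusion: concatenate both branches' features, then one dense softmax head
fused = np.concatenate([vgg_features, effnet_features])
W = rng.normal(scale=0.01, size=(NUM_CLASSES, fused.size))
b = np.zeros(NUM_CLASSES)
logits = W @ fused + b

# Numerically stable softmax over the 7 expression classes
probs = np.exp(logits - logits.max())
probs /= probs.sum()
pred = int(np.argmax(probs))
```

In practice the head would be trained jointly with both branches; this sketch only shows how the two feature maps combine into one classification decision.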

    Multi-agents system for breast tumour detection in mammography by deep learning pre-processing and watershed segmentation

    Mammography is the most widely used procedure for diagnosing and screening breast cancer in women. In this paper, we present an enhanced automatic watershed segmentation for breast tumour detection and segmentation, reinforced by a group of interactive agents. First, a deep learning (DL) pre-processing step applies a convolutional neural network (CNN), the AlexNet architecture, to classify breast density. Second, classic watershed segmentation is applied to these images. Afterwards, a multi-agent system (MAS) is introduced: the information within pixels, regions and breast density is explored to create a region of interest (ROI) from which the MAS segmentation emerges. Experimental results were promising in terms of accuracy (ACC), with an overall accuracy of 97.18% over three datasets: the Mammographic Image Analysis Society (MIAS) dataset, INBreast, and a local dataset called the Database of Digital Mammograms of Annaba (DDMA). In some cases, our approach was also able to accurately detect breast calcifications.
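The watershed step at the core of the pipeline can be illustrated with scikit-image on a synthetic image. The hand-placed markers below stand in for the paper's MAS/ROI machinery (which this sketch does not reproduce); the two blobs, their positions, and all sizes are assumptions for demonstration only.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

# Synthetic binary "mammogram": two overlapping blobs standing in for masses
x, y = np.indices((80, 80))
blob1 = (x - 28) ** 2 + (y - 28) ** 2 < 16 ** 2
blob2 = (x - 44) ** 2 + (y - 52) ** 2 < 20 ** 2
image = np.logical_or(blob1, blob2)

# Distance transform: highest at blob centres, used as the watershed landscape
distance = ndi.distance_transform_edt(image)

# Hand-placed markers; in the paper the agents/ROI step would supply seeds
markers = np.zeros(image.shape, dtype=int)
markers[28, 28] = 1
markers[44, 52] = 2

# Flood from the markers over the inverted distance map, restricted to the mask
labels = watershed(-distance, markers, mask=image)
```

Flooding the negative distance map separates the two touching blobs into distinct labelled regions, which is exactly the behaviour the paper's enhanced watershed stage builds on.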

    Mammographic mass classification according to Bi-RADS lexicon

    The goal of this study is to propose a computer-aided diagnosis system to differentiate between four Breast Imaging Reporting and Data System (Bi-RADS) classes in digitised mammograms. The system is inspired by the doctor's approach during the radiological examination, as agreed in Bi-RADS, where masses are described by their shape, their boundary and their density. The segmentation of masses in the authors' approach is manual, because the detection is assumed to be already made. Once the segmented region is available, the feature extraction process can be carried out: 22 visual characteristics are automatically computed from shape, edge and textural properties; the only human feature used in this study is the patient's age. Classification is finally done using a multi-layer perceptron according to two separate schemes: the first classifies masses to distinguish between the four Bi-RADS classes (2, 3, 4 and 5); in the second, the authors classify abnormalities into two classes (benign and malignant). The proposed approach has been evaluated on 480 mammographic masses extracted from the Digital Database for Screening Mammography, and the obtained results are encouraging.
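The classification stage described above (23 features into four Bi-RADS classes via a multi-layer perceptron) can be sketched with scikit-learn. The synthetic features, the toy labelling rule, and the hidden-layer size are assumptions for illustration; the paper's real inputs are the 22 computed shape/edge/texture descriptors plus the patient's age.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_samples, n_features = 200, 23   # 22 visual descriptors + patient age

# Synthetic feature vectors; the real ones come from shape/edge/texture analysis
X = rng.normal(size=(n_samples, n_features))
# Toy Bi-RADS labels (2..5), made separable on the first feature for this demo
y = np.digitize(X[:, 0], bins=[-0.5, 0.0, 0.5]) + 2

# Multi-layer perceptron, matching the classifier family named in the abstract
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X, y)
train_acc = clf.score(X, y)
```

The second scheme (benign vs. malignant) would use the same pipeline with two labels instead of four.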