
    Facial expression recognition on Android

    In this paper we implemented an algorithm to detect facial expressions in images provided by the camera in real time. The system is built on three stages: (1) face detection using a boosted cascade classifier based on Haar-like features, with the MIT-CBCL face database used to train the cascade; (2) tracking of feature points in the face; (3) facial expression identification using the facial Action Units (AUs) defined by P. Ekman et al. in their Facial Action Coding System (FACS) [1]. The application can detect four facial expressions: joy, fear, sadness and anger.
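The speed of a boosted Haar cascade comes from evaluating Haar-like features in constant time on an integral image. A minimal NumPy sketch of that building block (the feature computation only, not the trained MIT-CBCL cascade) might look like:

```python
import numpy as np

def integral_image(img):
    """Integral image padded with a zero row/column so rect_sum needs no edge cases."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, y, x, h, w):
    """Sum of img[y:y+h, x:x+w] in O(1) via four lookups on the integral image."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect_horizontal(ii, y, x, h, w):
    """Two-rectangle Haar-like feature: left half minus right half (w must be even)."""
    half = w // 2
    return rect_sum(ii, y, x, h, half) - rect_sum(ii, y, x + half, h, half)
```

A boosted classifier then thresholds many such features per detection window; in practice this is what OpenCV's `CascadeClassifier` implements.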

    Machine learning methods for automatic segmentation of images of field- and glasshouse-based plants for high-throughput phenotyping

    Image segmentation is a fundamental but critical step for achieving automated high-throughput phenotyping. While conventional segmentation methods perform well in homogeneous environments, their performance decreases in more complex environments. This study aimed to develop a fast and robust neural-network-based segmentation tool to phenotype plants in both field and glasshouse environments in a high-throughput manner. Digital images of cowpea (from glasshouse) and wheat (from field) with different nutrient supplies across their full growth cycle were acquired. Image patches from 20 randomly selected images from the acquired dataset were transformed from their original RGB format to multiple color spaces. The pixels in the patches were annotated as foreground and background, with each pixel having a feature vector of 24 color properties. A feature selection technique was applied to choose the sensitive features, which were used to train a multilayer perceptron network (MLP) and two other traditional machine learning models: support vector machines (SVMs) and random forest (RF). The performance of these models, together with two standard color-index segmentation techniques (excess green (ExG) and excess green–red (ExGR)), was compared. The proposed method outperformed the other methods in producing quality segmented images with over 98% pixel classification accuracy. Regression models developed from the different segmentation methods to predict Soil Plant Analysis Development (SPAD) values of cowpea and wheat showed that images from the proposed MLP method produced models with comparably high predictive power and accuracy. This method will be an essential tool for the development of a data analysis pipeline for high-throughput plant phenotyping. The proposed technique is capable of learning from different environmental conditions, with a high level of robustness.
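The per-pixel formulation can be sketched by feeding a few of the color indices the study names (ExG, ExGR) into an MLP classifier. The feature subset and the synthetic pixels below are illustrative assumptions, not the paper's 24-property feature set or its data:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def color_features(rgb):
    """Per-pixel color features: a small illustrative subset, not the
    full 24-property vector used in the study."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    exg = 2.0 * g - r - b          # excess green (ExG)
    exr = 1.4 * r - g              # excess red (ExR)
    exgr = exg - exr               # excess green-red (ExGR)
    return np.column_stack([rgb, exg, exgr, rgb.mean(axis=1)])

rng = np.random.default_rng(0)
n = 500
# Synthetic pixels: green-dominated plant foreground vs. brownish soil background
fg = np.clip(rng.normal([0.2, 0.6, 0.2], 0.05, (n, 3)), 0, 1)
bg = np.clip(rng.normal([0.5, 0.4, 0.3], 0.05, (n, 3)), 0, 1)
X = color_features(np.vstack([fg, bg]))
y = np.array([1] * n + [0] * n)    # 1 = plant, 0 = background

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X, y)
accuracy = clf.score(X, y)
```

On real data, pixels would be annotated from image patches and the trained network applied to every pixel of new images to produce the segmentation mask.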

    Modeling the spatial-spectral characteristics of plants for nutrient status identification using hyperspectral data and deep learning methods

    Sustainable fertilizer management in precision agriculture is essential for both economic and environmental reasons. To effectively manage fertilizer input, various methods are employed to monitor and track plant nutrient status. One such method is hyperspectral imaging, which has been on the rise in recent times. It is a remote sensing tool used to monitor plant physiological changes in response to environmental conditions and nutrient availability. However, conventional hyperspectral processing mainly focuses on either the spectral or the spatial information of plants. This study aims to develop a hybrid convolutional neural network (CNN) capable of simultaneously extracting spatial and spectral information from quinoa and cowpea plants to identify their nutrient status at different growth stages. To achieve this, a nutrient experiment with four treatments (high and low levels of nitrogen and phosphorus) was conducted in a glasshouse. A hybrid CNN model comprising a 3D CNN (which extracts joint spectral-spatial information) and a 2D CNN (for abstract spatial information extraction) was proposed. Three pre-processing techniques, including second-order derivative, standard normal variate, and linear discriminant analysis, were applied to selected regions of interest within the plant spectral hypercube. Together with the raw data, these datasets were used as inputs to train the proposed model, in order to assess the impact of different pre-processing techniques on hyperspectral-based nutrient phenotyping. The performance of the proposed model was compared with a 3D CNN, a 2D CNN, and a Hybrid Spectral Network (HybridSN) model. Effective wavebands were selected from the best-performing dataset using a greedy stepwise-based correlation feature selection (CFS) technique. The selected wavebands were then used to retrain the models to identify the nutrient status at five selected plant growth stages.
    From the results, the proposed hybrid model achieved a classification accuracy of over 94% on the test dataset, demonstrating its potential for identifying nitrogen and phosphorus status in cowpea and quinoa at different growth stages.
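Two of the pre-processing steps named above, standard normal variate and the second-order derivative, are simple per-spectrum transforms. A NumPy sketch, assuming spectra are stored as rows of a 2-D array (one row per pixel or region of interest):

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: center and scale each spectrum (row)
    to zero mean and unit variance, removing multiplicative scatter."""
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

def second_derivative(spectra):
    """Second-order finite difference along the wavelength axis;
    the output has two fewer bands than the input."""
    return np.diff(spectra, n=2, axis=1)
```

In the workflow the paper describes, transforms like these would be applied to the spectra extracted from the hypercube ROIs before the resulting datasets are fed to the CNN models.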

    A new pipeline for the recognition of universal expressions of multiple faces in a video sequence

    Facial Expression Recognition (FER) is a crucial issue in human-machine interaction: it allows machines to act according to changes in facial expression. However, acting in real time requires recognizing expressions at video speed. Video speed differs from one device to another, but one of the standard settings for shooting video is 24 fps, considered the low end of what our brain perceives as fluid video. From this perspective, achieving real-time FER requires the analysis of each image to complete in strictly less than 0.042 seconds, regardless of background complexity or the number of faces in the scene. In this paper, a new pipeline is proposed to recognize the fundamental facial expressions of more than one person in real-world video sequences. First, the pipeline takes a video as input and performs face detection and tracking. Regions of Interest (ROIs) are extracted from each detected face, and shape information is captured by applying the Histogram of Oriented Gradients (HOG) descriptor. The number of features yielded by the HOG descriptor is reduced by means of Linear Discriminant Analysis (LDA). A deep data analysis was then carried out, exploiting the pipeline, to set up the LDA classifier; the analysis aimed at proving the suitability of the decision rule selected to separate the facial expression clusters in the LDA training phase. To conduct our analysis, we used the Cohn-Kanade (CK+) database and the F-measure as an evaluation metric to calculate average recognition rates. An automatic evaluation over time is also proposed, in which labelled videos are used to investigate the suitability of the pipeline under real-world conditions. The pipeline results showed that the use of the HOG descriptor and LDA gives a high recognition rate of 94.66%.
    It should be noted that the proposed pipeline achieves an average processing time of 0.018 seconds, without requiring any device that speeds up the processing.
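The HOG step can be illustrated with a deliberately simplified descriptor: one orientation histogram for the whole image, whereas real HOG accumulates per-cell histograms with block normalization. The function below is a sketch under that simplification, not the paper's implementation:

```python
import numpy as np

def hog_like_descriptor(img, n_bins=9):
    """Simplified, illustrative HOG: a single magnitude-weighted
    orientation histogram over unsigned gradient angles (0-180 deg)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 180), weights=mag)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist
```

In a pipeline like the one described, descriptors of this kind computed over face ROIs would be reduced and classified with LDA (e.g. scikit-learn's `LinearDiscriminantAnalysis`); the reported 0.018-second average comfortably fits the 1/24 ≈ 0.042-second per-frame budget.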