6 research outputs found

    Combining deep neural network with traditional classifier to recognize facial expressions

    Facial expressions are important in people's daily communication. Recognising facial expressions also has many important applications in areas such as healthcare and e-learning. Existing facial expression recognition systems suffer from problems such as background interference. Furthermore, systems using traditional approaches such as the Support Vector Machine (SVM) are weak at dealing with unseen images, while systems using deep neural networks require GPUs, long training times and large amounts of memory. To overcome the shortcomings of both pure deep neural networks and traditional facial recognition approaches, this paper presents a new facial expression recognition approach that applies image pre-processing techniques to remove unnecessary background information and combines the deep neural network ResNet50 with a traditional classifier, the multiclass Support Vector Machine, to recognise facial expressions. The proposed approach achieves better recognition accuracy than traditional approaches such as the Support Vector Machine and does not need a GPU. We compared three proposed frameworks with a traditional SVM approach on the Karolinska Directed Emotional Faces (KDEF) database, the Japanese Female Facial Expression (JAFFE) database and the extended Cohn-Kanade dataset (CK+), respectively. The experimental results show that the features extracted from the 49Relu layer have the best performance on all three datasets.
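    The hybrid pipeline described above (deep features fed into a multiclass SVM) can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the ResNet50 feature extractor is stubbed with synthetic per-class Gaussian features (the real model would emit 2048-D ReLU activations), and scikit-learn's `SVC` stands in for the multiclass SVM.

    ```python
    # Sketch: multiclass SVM over deep features, as in the hybrid approach above.
    # Feature extraction is stubbed; in the real pipeline a pretrained ResNet50
    # layer's ReLU activations would be used instead.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    n_classes, dim = 7, 64          # 7 basic expressions; 2048-D in the real case
    centers = rng.normal(scale=5.0, size=(n_classes, dim))

    def fake_resnet_features(labels):
        # Stand-in for ResNet50 activations: one Gaussian blob per class.
        return centers[labels] + rng.normal(size=(len(labels), dim))

    y_train = rng.integers(0, n_classes, 200)
    X_train = fake_resnet_features(y_train)
    y_test = rng.integers(0, n_classes, 50)
    X_test = fake_resnet_features(y_test)

    # Multiclass SVM (scikit-learn handles multiclass via one-vs-one internally).
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    clf.fit(X_train, y_train)
    print(f"accuracy: {clf.score(X_test, y_test):.2f}")
    ```

    Because only the final classifier is trained, this stage needs no GPU, which matches the motivation stated in the abstract.
    
    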

    Inspection robots in oil and gas industry : a review of current solutions and future trends

    With increasing demands for energy, oil and gas companies need to improve their efficiency, productivity and safety. Any corrosion or cracks on their production, storage or transportation facilities could cause disasters for both human society and the natural environment. Since many oil and gas assets are located in extreme environments, there is an ongoing demand for robots to perform inspection tasks, which is more cost-effective and safer. This paper provides a state-of-the-art review of inspection robots used in the oil and gas industry, including remotely operated vehicles (ROVs), autonomous underwater vehicles (AUVs), unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs). Different kinds of inspection robots are designed for inspecting different asset structures. The outcome of the review suggests that reliable autonomous inspection UAVs and AUVs will attract the most interest among these robots, and that reliable autonomous localisation, environment mapping, intelligent control strategies, path planning and Non-Destructive Testing (NDT) technology will be the primary areas of research.

    Collaborative mobile industrial manipulator : a review of system architecture and applications

    This paper provides a comprehensive review of the development of the Collaborative Mobile Industrial Manipulator (CMIM), which is currently in high demand. Such a review is necessary to gain an overall understanding of CMIM technology, and this is the first review to combine system architecture and applications, both of which are needed for a full understanding of the system. The classical framework of a CMIM is first discussed, covering both hardware and software. The subsystems typically involved in the hardware, such as the mobile platform, manipulator, end-effector and sensors, are presented. With regard to software, the planner, controller, perception and interaction modules, among others, are described. Following this, the common industrial applications (logistics, manufacturing and assembly) are surveyed. Finally, trends are predicted and open issues are identified as references for CMIM researchers. Specifically, more research is needed in the areas of interaction, fully autonomous control, coordination and standards. In addition, more experiments in real environments are expected, novel collaborative robotic systems will be proposed, and advanced technology from other areas will be applied to the system. Overall, the system will become more intelligent, collaborative and autonomous.

    AMCD : an accurate deep learning-based metallic corrosion detector for MAV-based real-time visual inspection

    Corrosion is a serious safety concern for metallic facilities. Visual inspection carried out by an engineer is expensive, subjective and time-consuming. Micro Aerial Vehicles (MAVs) equipped with detection algorithms have the potential to perform visual inspection tasks more safely and efficiently than engineers. For corrosion detection algorithms, convolutional neural networks (CNNs) have enabled highly accurate metallic corrosion detection; however, such detectors are restricted by the limited on-board capabilities of MAVs. In this study, an accurate deep learning-based metallic corrosion detector (AMCD), based on You Only Look Once v3-tiny (Yolov3-tiny), is proposed for on-board metallic corrosion detection on MAVs. Specifically, a backbone with depthwise separable convolution (DSConv) layers is designed to realise efficient corrosion detection. The convolutional block attention module (CBAM), three-scale object detection and focal loss are incorporated to improve the detection accuracy. Moreover, the spatial pyramid pooling (SPP) module is improved to fuse local features, further improving the detection accuracy. A field inspection image dataset labelled with four types of corrosion (nubby corrosion, bar corrosion, exfoliation and fastener corrosion) is used for training and testing the AMCD. Test results show that the AMCD achieves 84.96% mean average precision (mAP), which outperforms other state-of-the-art detectors. Meanwhile, 20.18 frames per second (FPS) is achieved on an NVIDIA Jetson TX2, one of the most popular MAV on-board computers, and the model size is only 6.1 MB.
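    The efficiency gain of the DSConv backbone mentioned above comes from factorising a standard convolution into a per-channel (depthwise) filter followed by a 1x1 (pointwise) channel mixer. A minimal NumPy sketch, with shapes and kernels chosen for illustration (not the AMCD's actual configuration), shows both the operation and the parameter savings:

    ```python
    # Depthwise separable convolution, sketched in plain NumPy.
    import numpy as np

    def depthwise_separable_conv(x, dw_kernels, pw_kernels):
        """x: (C_in, H, W); dw_kernels: (C_in, k, k); pw_kernels: (C_out, C_in)."""
        c_in, h, w = x.shape
        k = dw_kernels.shape[1]
        pad = k // 2
        xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
        # Depthwise step: one k x k filter applied to each input channel separately.
        dw = np.empty_like(x)
        for c in range(c_in):
            for i in range(h):
                for j in range(w):
                    dw[c, i, j] = np.sum(xp[c, i:i + k, j:j + k] * dw_kernels[c])
        # Pointwise step: a 1x1 convolution mixes the channels.
        return np.tensordot(pw_kernels, dw, axes=([1], [0]))

    # Parameter count: standard conv vs. DSConv for the same in/out channels.
    c_in, c_out, k = 32, 64, 3
    params_standard = c_out * c_in * k * k        # 18432
    params_dsconv = c_in * k * k + c_out * c_in   # 2336
    print(params_standard, params_dsconv)
    ```

    The roughly 8x reduction in parameters here illustrates why DSConv layers suit memory- and compute-constrained MAV on-board hardware.
    
    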

    Automated High-resolution Earth Observation Image Interpretation: Outcome of the 2020 Gaofen Challenge

    In this article, we introduce the 2020 Gaofen Challenge and its relevant scientific outcomes. The 2020 Gaofen Challenge is an international competition organized by the China High-Resolution Earth Observation Conference Committee and the Aerospace Information Research Institute, Chinese Academy of Sciences, and technically cosponsored by the IEEE Geoscience and Remote Sensing Society and the International Society for Photogrammetry and Remote Sensing. It aims at promoting the academic development of automated high-resolution earth observation image interpretation. Six independent tracks were organized in this challenge, covering challenging problems in the fields of object detection and semantic segmentation. With the development of convolutional neural networks, deep-learning-based methods have achieved good performance on image interpretation. In this article, we report the details of the challenge and the best-performing methods presented so far within its scope.

    Visual Saliency Modeling for River Detection in High-resolution SAR Imagery

    Accurate detection of rivers plays a significant role in water conservancy construction and ecological protection, and airborne Synthetic Aperture Radar (SAR) data has become one of the main data sources. However, extracting river information from radar data efficiently and accurately remains an open problem. Existing methods for detecting rivers are typically based on the rivers' edges, which are easily confused with those of artificial buildings or farmland. In addition, pixel-based image processing approaches cannot meet the requirements of real-time processing. Inspired by the feature integration and target recognition capabilities of biological vision systems, in this paper we present a hierarchical method for automated detection of river networks in high-resolution SAR data using biological visual saliency modeling. For effective saliency detection, the original image is first over-segmented into a set of primitive superpixels. A visual feature (VF) set is designed to extract a regional feature histogram, which is then quantized based on optimal parameters learned from labeled SAR images. Afterwards, three saliency measurements based on the specific characteristics of rivers in SAR images, i.e., Local Region Contrast (LRC), Boundary Connectivity (BC) and Edge Density (ED), are proposed to generate single-layer saliency maps. Finally, by exploiting belief propagation, we propose a multi-layer saliency fusion approach to derive a high-quality saliency map. Extensive experimental results on three airborne SAR image datasets with ground truth demonstrate that the proposed saliency model consistently outperforms existing saliency target detection models.
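    The Local Region Contrast measure named above can be illustrated with a short sketch: each superpixel's saliency is taken as its feature distance to all other superpixels, weighted by spatial proximity. The function below is an assumption about the general form of such a measure, not the paper's exact formulation; superpixel feature histograms and normalised centre positions are assumed precomputed from the over-segmentation step.

    ```python
    # Sketch of a region-contrast saliency measure over superpixels.
    import numpy as np

    def local_region_contrast(features, positions, sigma=0.5):
        """features: (N, D) regional feature histograms;
        positions: (N, 2) superpixel centres, normalised to [0, 1]."""
        n = len(features)
        sal = np.zeros(n)
        for i in range(n):
            feat_dist = np.linalg.norm(features - features[i], axis=1)
            spat_dist = np.linalg.norm(positions - positions[i], axis=1)
            # Nearby regions contribute more to a region's contrast score.
            weights = np.exp(-spat_dist ** 2 / (2 * sigma ** 2))
            sal[i] = np.sum(weights * feat_dist)
        # Normalise the map to [0, 1].
        return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
    ```

    A region whose feature histogram differs strongly from its neighbours (such as a river crossing farmland) receives a high score, which is the intuition the abstract describes.
    
    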