Underwater Fish Detection with Weak Multi-Domain Supervision
Given a sufficiently large training dataset, it is relatively easy to train a
modern convolutional neural network (CNN) as an image classifier. However, for
the task of fish classification and/or fish detection, a CNN trained to detect
or classify particular fish species in particular background habitats exhibits
much lower accuracy when applied to new/unseen fish species and/or fish
habitats. In practice, the CNN therefore needs to be continuously fine-tuned to
maintain its classification accuracy on new project-specific fish species or
habitats. In this work we present a labelling-efficient method of training a
CNN-based fish detector (the Xception CNN was used as the base) on a relatively
small number (4,000) of project-domain underwater fish/no-fish images from 20
different habitats. Additionally, 17,000 known-negative (that is, fish-free)
general-domain above-water images (VOC2012) were used. Two publicly available
fish-domain datasets supplied an additional 27,000 above-water and underwater
positive/fish images. Using this multi-domain collection of images, the trained
Xception-based binary (fish/no-fish) classifier achieved 0.17% false positives
and 0.61% false negatives on the project's 20,000 negative and 16,000 positive
holdout test images, respectively. The area under the ROC curve (AUC) was
99.94%.

Comment: Published in the 2019 International Joint Conference on Neural
Networks (IJCNN-2019), Budapest, Hungary, July 14-19, 2019,
https://www.ijcnn.org/ , https://ieeexplore.ieee.org/document/885190
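The reported error rates map directly onto counts over the two holdout sets; a minimal sketch of that bookkeeping (the raw error counts of 34 and 98 are assumptions back-derived from the reported 0.17% and 0.61% rates, not figures stated in the abstract):

```python
# Hedged sketch: false-positive / false-negative rates for a binary
# fish/no-fish classifier on the holdout sets described in the abstract.
# The raw error counts (34, 98) are assumptions back-derived from the
# reported percentages; only the set sizes and rates come from the text.

def error_rates(false_pos, num_negatives, false_neg, num_positives):
    """Return (FP rate %, FN rate %) for a binary classifier."""
    fp_rate = 100.0 * false_pos / num_negatives
    fn_rate = 100.0 * false_neg / num_positives
    return fp_rate, fn_rate

fp_rate, fn_rate = error_rates(34, 20_000, 98, 16_000)
print(f"FP rate: {fp_rate:.4f}%")  # FP rate: 0.1700%
print(f"FN rate: {fn_rate:.4f}%")  # FN rate: 0.6125%
```

Note that 98/16,000 rounds to the 0.61% quoted above, which is why the count is a plausible but unconfirmed reconstruction.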
Time-Efficient Hybrid Approach for Facial Expression Recognition
Facial expression recognition is an emerging research area for improving human-computer interaction. This research plays a significant role in social communication, commercial enterprise, law enforcement, and other computer interactions. In this paper, we propose a time-efficient hybrid design for facial expression recognition that combines image pre-processing steps with different Convolutional Neural Network (CNN) structures, providing better accuracy and greatly improved training time. We predict seven basic emotions of human faces: sadness, happiness, disgust, anger, fear, surprise and neutral. The model performs well on challenging cases where the expressed emotion could be one of several emotions with quite similar facial characteristics, such as anger, disgust, and sadness. The experiment to test the model was conducted across multiple databases and different facial orientations; to the best of our knowledge, the model achieved an accuracy of about 89.58% on the KDEF dataset, 100% on the JAFFE dataset, and 71.975% on the combined (KDEF + JAFFE + SFEW) dataset across these different scenarios. Performance evaluation was done by cross-validation techniques to avoid bias towards a specific set of images from a database.
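The cross-validation protocol mentioned above can be sketched in plain Python (the fold count of k=5 and the index-based splitting are illustrative assumptions; the abstract does not specify the exact scheme):

```python
# Hedged sketch of k-fold cross-validation index splitting, the kind of
# evaluation used to avoid bias towards a specific subset of images from
# one database. The fold count (k=5) is an assumption, not stated above.

def k_fold_indices(n_samples, k=5):
    """Split indices 0..n_samples-1 into k disjoint (train, test) folds."""
    indices = list(range(n_samples))
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        test = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        folds.append((train, test))
        start += size
    return folds

folds = k_fold_indices(10, k=5)
print(len(folds))   # 5
print(folds[0])     # ([2, 3, 4, 5, 6, 7, 8, 9], [0, 1])
```

Each image appears in exactly one test fold, so every accuracy figure is averaged over models that never trained on the images they were scored on.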
A simple and efficient face detection algorithm for video database applications
The objective of this work is to provide a simple yet efficient tool to detect human faces in video sequences. This information can be very useful for many applications such as video indexing and video browsing. In particular, the paper focuses on the significant improvements made to our face detection algorithm presented by Albiol, Bouman and Delp (see IEEE Int. Conference on Image Processing, Kobe, Japan, 1999). Specifically, a novel approach to retrieving skin-like homogeneous regions is presented; these regions are later used to retrieve face images. Good results have been obtained for a large variety of video sequences.
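Skin-like region retrieval of the kind described above typically starts from a per-pixel skin classification; a minimal sketch using a common YCbCr chrominance box threshold (the specific thresholds are a widely used heuristic and an assumption here, not the paper's actual rule):

```python
# Hedged sketch: per-pixel skin detection in YCbCr space, a common
# starting point for retrieving skin-like homogeneous regions.
# The chrominance box (77 <= Cb <= 127, 133 <= Cr <= 173) is a widely
# used heuristic and an assumption here, not the paper's actual rule.

def rgb_to_cbcr(r, g, b):
    """Convert an 8-bit RGB pixel to its Cb/Cr chrominance components."""
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return cb, cr

def is_skin(r, g, b):
    """Classify a pixel as skin-like using a Cb/Cr box threshold."""
    cb, cr = rgb_to_cbcr(r, g, b)
    return 77 <= cb <= 127 and 133 <= cr <= 173

print(is_skin(200, 120, 90))  # True  (a typical skin tone)
print(is_skin(0, 200, 0))     # False (saturated green)
```

Working in chrominance only makes the mask largely insensitive to brightness, which is why Cb/Cr thresholds are a popular first stage before grouping pixels into homogeneous candidate regions.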
Automatic landmark annotation and dense correspondence registration for 3D human facial images
Dense surface registration of three-dimensional (3D) human facial images
holds great potential for studies of human trait diversity, disease genetics,
and forensics. Non-rigid registration is particularly useful for establishing
dense anatomical correspondences between faces. Here we describe a novel
non-rigid registration method for fully automatic 3D facial image mapping. This
method comprises two steps: first, seventeen facial landmarks are automatically
annotated, mainly via PCA-based feature recognition following 3D-to-2D data
transformation. Second, an efficient thin-plate spline (TPS) protocol is used
to establish the dense anatomical correspondence between facial images, under
the guidance of the predefined landmarks. We demonstrate that this method is
robust and highly accurate, even for different ethnicities. The average face is
calculated for individuals of Han Chinese and Uyghur origins. Fully automatic
and computationally efficient, this method enables high-throughput analysis of
human facial feature variation.

Comment: 33 pages, 6 figures, 1 table
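The thin-plate spline (TPS) step above amounts to solving a small linear system so that the resulting warp interpolates the predefined landmarks exactly; a minimal 2D sketch (the kernel U(r) = r² log r is the standard TPS choice, and the toy landmark coordinates are illustrative assumptions, not the paper's protocol):

```python
# Hedged sketch: thin-plate spline (TPS) interpolation in 2D, the kind
# of transform used to establish dense correspondence under landmark
# guidance. The toy landmarks/values below are illustrative assumptions.
import numpy as np

def tps_fit(points, values):
    """Fit TPS coefficients so f(points[i]) == values[i] exactly."""
    n = len(points)
    d = points[:, None, :] - points[None, :, :]
    r = np.sqrt((d ** 2).sum(-1))
    # Radial kernel U(r) = r^2 log r, with U(0) = 0 by convention.
    K = r ** 2 * np.log(np.where(r > 0, r, 1.0))
    P = np.hstack([np.ones((n, 1)), points])  # affine part [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    rhs = np.concatenate([values, np.zeros(3)])
    return np.linalg.solve(A, rhs)

def tps_eval(coef, points, query):
    """Evaluate the fitted TPS at a single query point."""
    n = len(points)
    r = np.sqrt(((query - points) ** 2).sum(-1))
    u = r ** 2 * np.log(np.where(r > 0, r, 1.0))
    return coef[:n] @ u + coef[n] + coef[n + 1:] @ query

# Toy landmarks (assumed, for illustration) and values to interpolate.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.2]])
vals = np.array([0.0, 1.0, 1.0, 2.0, 0.9])
coef = tps_fit(pts, vals)
# The fitted TPS passes through every landmark exactly.
print(np.allclose([tps_eval(coef, pts, p) for p in pts], vals))  # True
```

In practice one such spline is fitted per coordinate, so the seventeen annotated landmarks pin the warp down while the rest of the facial surface deforms as smoothly as possible between them.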