224 research outputs found
Matching Forensic Sketches to Mug Shot Photos using Speeded Up Robust Features
The problem addressed in this project is matching a forensic sketch against a gallery of mug shot photos. Research in the past decade offered solutions for matching sketches drawn while looking at the subject (viewed sketches). This thesis instead emphasises forensic sketches, which are drawn by specially trained police artists based on an eyewitness's description of the subject. Recently, a method for forensic sketch matching using LFDA (Local Feature based Discriminant Analysis) was published. Here, the same problem is addressed using a novel preprocessing technique combined with a local feature descriptor called SURF (Speeded Up Robust Features). In our method, the images are first preprocessed with a novel technique suited to forensic sketch matching and grounded in cognitive research on human memory. After preprocessing, SURF is used for matching: it extracts features as 64-variable vectors for each image, and all the vectors of one image are combined to form that image's SURF descriptor vector. These descriptor vectors are then used for matching. This method was applied to match a dataset of 64 forensic sketches against a gallery of 1058 photos. Our experiments show that the proposed image preprocessing combined with SURF yields promising results with good accuracy.
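The pipeline above can be sketched in a few lines: each image yields a set of 64-dimensional SURF keypoint vectors, these are combined into a single per-image descriptor, and sketches are then ranked against the gallery by descriptor similarity. The abstract does not say how the vectors are combined or which distance is used, so mean pooling and cosine similarity below are assumptions for illustration, not the thesis's exact method.

```python
import math

def pool_descriptors(keypoint_vectors):
    """Combine per-keypoint 64-d SURF vectors into one image descriptor.
    Mean pooling is an assumption; the abstract only says the vectors
    'are combined'."""
    n = len(keypoint_vectors)
    return [sum(v[i] for v in keypoint_vectors) / n for i in range(64)]

def cosine(a, b):
    """Cosine similarity between two descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def rank_gallery(sketch_desc, gallery_descs):
    """Return gallery indices sorted from most to least similar to the sketch."""
    sims = [(cosine(sketch_desc, g), i) for i, g in enumerate(gallery_descs)]
    return [i for _, i in sorted(sims, reverse=True)]
```

In a real system the keypoint vectors would come from a SURF implementation (e.g. OpenCV's contrib module) applied after the preprocessing step; here they are taken as given.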
Forensic face photo-sketch recognition using a deep learning-based architecture
Numerous methods that automatically identify subjects depicted in sketches as described by eyewitnesses have been implemented, but their performance often degrades when using real-world forensic sketches and extended galleries that mimic law enforcement mug-shot galleries. Moreover, little work has been done to apply deep learning for face photo-sketch recognition despite its success in numerous application domains including traditional face recognition. This is primarily due to the limited number of sketch images available, which are insufficient to robustly train large networks. This letter aims to tackle these issues with the following contributions: 1) a state-of-the-art model pre-trained for face photo recognition is tuned for face photo-sketch recognition by applying transfer learning, 2) a three-dimensional morphable model is used to synthesise new images and artificially expand the training data, allowing the network to prevent over-fitting and learn better features, 3) multiple synthetic sketches are also used in the testing stage to improve performance, and 4) fusion of the proposed method with a state-of-the-art algorithm is shown to further boost performance. An extensive evaluation of several popular and state-of-the-art algorithms is also performed using publicly available datasets, thereby serving as a benchmark for future algorithms. Compared to a leading method, the proposed framework is shown to reduce the error rate by 80.7% for viewed sketches and lowers the mean retrieval rank by 32.5% for real-world forensic sketches.
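Figures such as "reduce the error rate by 80.7%" are relative reductions of an error metric. A minimal helper makes the arithmetic explicit; the numbers in the example are hypothetical and chosen only so the relative reduction works out to 80.7%, they are not figures reported in the letter.

```python
def relative_reduction(baseline, proposed):
    """Relative reduction of an error metric, as a percentage:
    100 * (baseline - proposed) / baseline."""
    return 100.0 * (baseline - proposed) / baseline

# Hypothetical error rates (NOT values from the letter): a baseline at
# 20.0% error and a proposed method at 3.86% error give an 80.7% reduction.
print(round(relative_reduction(20.0, 3.86), 1))  # → 80.7
```

The same formula applies to the 32.5% drop in mean retrieval rank, with ranks substituted for error rates.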
High-Quality Facial Photo-Sketch Synthesis Using Multi-Adversarial Networks
Synthesizing face sketches from real photos and its inverse have many applications. However, photo/sketch synthesis remains a challenging problem due to the fact that photos and sketches have different characteristics. In this work, we consider this task as an image-to-image translation problem and explore the recently popular generative adversarial networks (GANs) to generate high-quality realistic photos from sketches and sketches from photos. Recent GAN-based methods have shown promising results on image-to-image translation problems, and on photo-to-sketch synthesis in particular; however, they are known to have limited ability to generate high-resolution realistic images. To this end, we propose a novel synthesis framework called Photo-Sketch Synthesis using Multi-Adversarial Networks (PS2-MAN) that iteratively generates low-resolution to high-resolution images in an adversarial way. The hidden layers of the generator are supervised to first generate lower-resolution images, followed by implicit refinement in the network to generate higher-resolution images. Furthermore, since photo-sketch synthesis is a coupled/paired translation problem, we leverage the pair information using the CycleGAN framework. Both Image Quality Assessment (IQA) and photo-sketch matching experiments are conducted to demonstrate the superior performance of our framework in comparison to existing state-of-the-art solutions. Code available at: https://github.com/lidan1/PhotoSketchMAN. Comment: Accepted by the 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018) (Oral).
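The coarse-to-fine supervision idea can be illustrated with a toy reconstruction loss: the target image is average-pooled down to each intermediate resolution and compared against the generator output at that scale. This is a simplified sketch of the multi-scale supervision principle only; PS2-MAN's actual objective also includes adversarial and cycle-consistency terms, and the L1 form here is an assumption.

```python
def downsample2x(img):
    """Average-pool an even-sized grayscale image (list of rows) by 2x."""
    h, w = len(img), len(img[0])
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4.0
             for x in range(0, w, 2)] for y in range(0, h, 2)]

def l1(a, b):
    """Sum of absolute pixel differences between two same-sized images."""
    return sum(abs(pa - pb) for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def multiscale_l1(outputs, target):
    """outputs: generator images from coarse to fine (each scale half the
    next one's side length, the finest matching the target). The target is
    average-pooled to each scale, mirroring how hidden layers are supervised
    at increasing resolutions."""
    total = 0.0
    for out in reversed(outputs):       # finest scale first
        total += l1(out, target)
        target = downsample2x(target)   # next-coarser supervision target
    return total
```

In the full framework each scale's output is additionally judged by its own adversarial discriminator, which is where the "multi-adversarial" name comes from.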
Deep Sketch-Photo Face Recognition Assisted by Facial Attributes
In this paper, we present a deep coupled framework to address the problem of matching a sketch image against a gallery of mugshots. Face sketches carry essential information about the spatial topology and geometric details of faces while missing some important facial attributes such as ethnicity, hair, eye, and skin color. We propose a coupled deep neural network architecture which utilizes facial attributes in order to improve sketch-photo recognition performance. The proposed Attribute-Assisted Deep Convolutional Neural Network (AADCNN) method exploits the facial attributes and leverages the loss functions from the facial attribute identification and face verification tasks in order to learn rich discriminative features in a common embedding subspace. The facial attribute identification task increases the inter-personal variations by pushing apart the embedded features extracted from individuals with different facial attributes, while the verification task reduces the intra-personal variations by pulling together all the features that are related to one person. The learned discriminative features generalize well to new identities not seen in the training data. The proposed architecture is able to make full use of the sketch and complementary facial attribute information to train a deep model, compared to conventional sketch-photo recognition methods. Extensive experiments are performed on composite (E-PRIP) and semi-forensic (IIIT-D semi-forensic) datasets. The results show the superiority of our method compared to state-of-the-art sketch-photo recognition algorithms.
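The two complementary objectives can be sketched numerically: a classification loss on attributes spreads identities with different attributes apart, while a verification loss pulls features of the same person together. The cross-entropy and contrastive forms below are common choices assumed for illustration; the paper's exact loss functions and weighting may differ.

```python
import math

def attribute_loss(probs, label):
    """Cross-entropy for attribute identification: penalizes low predicted
    probability for the true attribute class (softmax probs assumed given)."""
    return -math.log(probs[label])

def verification_loss(f1, f2, same, margin=1.0):
    """Contrastive verification loss: pulls together embeddings of the same
    person; pushes different identities at least `margin` apart."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))
    return 0.5 * d * d if same else 0.5 * max(0.0, margin - d) ** 2

def total_loss(probs, label, f1, f2, same, lam=0.5):
    """Joint objective over a shared embedding; lam is a hypothetical
    balancing weight between the two tasks."""
    return attribute_loss(probs, label) + lam * verification_loss(f1, f2, same)
```

Training on both signals in one embedding space is what lets the features generalize to identities absent from the training set.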
Suspect identification based on descriptive facial attributes
We present a method for using human-describable face attributes to perform face identification in criminal investigations. To enable this approach, a set of 46 facial attributes were carefully defined with the goal of capturing all describable and persistent facial features. Using crowd-sourced labor, a large corpus of face images was manually annotated with the proposed attributes. In turn, we train an automated attribute extraction algorithm to encode target repositories with the attribute information. Attribute extraction is performed using localized face components to improve the extraction accuracy. Experiments are conducted to compare the use of attribute feature information, derived from crowd workers, to face sketch information drawn by expert artists. In addition to removing the dependence on expert artists, the proposed method complements sketch-based face recognition by allowing investigators to immediately search face repositories without the time delay incurred by sketch generation.
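Once a repository is encoded with the 46 attributes, a witness description can be matched against it directly. The sketch below assumes binary attribute values and simple per-attribute agreement as the score; the paper's automated extractor may produce richer (e.g. real-valued or component-localized) encodings, so treat this only as the shape of the search step.

```python
N_ATTRS = 46  # number of describable facial attributes defined in the paper

def attribute_match(query, gallery):
    """Rank gallery subjects by agreement with a 46-d binary attribute query.
    Binary values and Hamming-style agreement are assumptions for
    illustration."""
    def agreement(a, b):
        return sum(1 for x, y in zip(a, b) if x == y)
    scores = [(agreement(query, g), i) for i, g in enumerate(gallery)]
    return [i for _, i in sorted(scores, reverse=True)]
```

Because the query is just an attribute vector, the search can run as soon as the description is taken, with no sketch-generation delay.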
Matching software-generated sketches to face photographs with a very deep CNN, morphed faces, and transfer learning
Sketches obtained from eyewitness descriptions of criminals have proven to be useful in apprehending criminals, particularly when there is a lack of evidence. Automated methods to identify subjects depicted in sketches have been proposed in the literature, but their performance is still unsatisfactory when using software-generated sketches and when tested using extensive galleries with a large number of subjects. Despite the success of deep learning in several applications including face recognition, little work has been done in applying it for face photograph-sketch recognition. This is mainly a consequence of the need to ensure robust training of deep networks by using a large number of images, yet limited quantities are publicly available. Moreover, most algorithms have not been designed to operate on software-generated face composite sketches which are used by numerous law enforcement agencies worldwide. This paper aims to tackle these issues with the following contributions: 1) a very deep convolutional neural network is utilised to determine the identity of a subject in a composite sketch by comparing it to face photographs and is trained by applying transfer learning to a state-of-the-art model pretrained for face photograph recognition; 2) a 3-D morphable model is used to synthesise both photographs and sketches to augment the available training data, an approach that is shown to significantly aid performance; and 3) the UoM-SGFS database is extended to contain twice the number of subjects, now having 1200 sketches of 600 subjects. An extensive evaluation of popular and state-of-the-art algorithms is also performed due to the lack of such information in the literature, where it is demonstrated that the proposed approach comprehensively outperforms state-of-the-art methods on all publicly available composite sketch datasets.
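Several abstracts in this list report retrieval metrics such as rank-1 accuracy and mean retrieval rank. A small evaluation sketch shows how these are computed from a probe-by-gallery similarity matrix; the convention that probe i's true mate sits at gallery index i is an assumption made for simplicity.

```python
def retrieval_ranks(sim):
    """sim[i][j]: similarity of probe sketch i to gallery photo j, with
    probe i's true mate assumed at gallery index i. Returns the 1-based
    rank at which each probe's true mate is retrieved."""
    ranks = []
    for i, row in enumerate(sim):
        order = sorted(range(len(row)), key=lambda j: row[j], reverse=True)
        ranks.append(order.index(i) + 1)
    return ranks

def rank1_accuracy(ranks):
    """Fraction of probes whose true mate is retrieved first."""
    return sum(1 for r in ranks if r == 1) / len(ranks)

def mean_retrieval_rank(ranks):
    """Average rank of the true mate; lower is better."""
    return sum(ranks) / len(ranks)
```

Benchmarks like the one described above boil down to comparing these numbers across algorithms on shared datasets.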