Analysis and comparison of facial animation algorithms: caricatures
The thesis will review what has been done from around 2000 until now in the field of 2D caricature generation.
It will first classify the methods found, describing their contributions to the field, and then choose papers from among them to implement and discuss more thoroughly; a total of three papers will be selected.
Finally, an overview discussion of the implemented papers and their contributions to the field will be given.
Brief comment on the small change of title of the Master Thesis:
At the very beginning, when I was planning the thesis, I talked with my tutor and we agreed that a review and comparison of methods in the facial animation field would be suitable. However, while reading papers on the topic, I found that a great number of them required hardware to which I had no access.
The generation of 2D caricatures is still close to that field, and it does not need any additional hardware device.
QUIS-CAMPI: Biometric Recognition in Surveillance Scenarios
The concerns about individuals' security have justified the increasing number of surveillance
cameras deployed both in private and public spaces. However, contrary to popular belief,
these devices are in most cases used solely for recording, instead of feeding intelligent analysis
processes capable of extracting information about the observed individuals. Thus, even though
video surveillance has already proved to be essential for solving multiple crimes, obtaining relevant
details about the subjects that took part in a crime depends on the manual inspection
of recordings. As such, the current goal of the research community is the development of
automated surveillance systems capable of monitoring and identifying subjects in surveillance
scenarios. Accordingly, the main goal of this thesis is to improve the performance of biometric
recognition algorithms in data acquired from surveillance scenarios. In particular, we aim at
designing a visual surveillance system capable of acquiring biometric data at a distance (e.g.,
face, iris or gait) without requiring human intervention in the process, as well as devising biometric
recognition methods robust to the degradation factors resulting from the unconstrained
acquisition process.
Regarding the first goal, the analysis of the data acquired by typical surveillance systems
shows that large acquisition distances significantly decrease the resolution of biometric samples,
and thus their discriminability is not sufficient for recognition purposes. In the literature,
diverse works point to Pan Tilt Zoom (PTZ) cameras as the most practical way of acquiring
high-resolution imagery at a distance, particularly when using a master-slave configuration. In
the master-slave configuration, the video acquired by a typical surveillance camera is analyzed
for obtaining regions of interest (e.g., car, person) and these regions are subsequently imaged
at high-resolution by the PTZ camera. Several methods have already shown that this configuration
can be used for acquiring biometric data at a distance. Nevertheless, these methods
failed at providing effective solutions to the typical challenges of this strategy, restraining its
use in surveillance scenarios. Accordingly, this thesis proposes two methods to support the development
of a biometric data acquisition system based on the cooperation of a PTZ camera
with a typical surveillance camera. The first proposal is a camera calibration method capable
of accurately mapping the coordinates of the master camera to the pan/tilt angles of the PTZ
camera. The second proposal is a camera scheduling method for determining - in real-time -
the sequence of acquisitions that maximizes the number of different targets obtained, while
minimizing the cumulative transition time. In order to achieve the first goal of this thesis,
both methods were combined with state-of-the-art approaches of the human monitoring field
to develop a fully automated surveillance system capable of acquiring biometric data at a distance and
without human cooperation, designated as QUIS-CAMPI system.
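The camera scheduling problem stated above can be illustrated with a simple greedy heuristic: repeatedly slew the PTZ camera to the cheapest not-yet-imaged target. This is only a sketch under assumed names (`transition_time`, `schedule`) and a hypothetical angular-distance cost model, not the real-time method the thesis actually proposes.

```python
def transition_time(a, b, speed=60.0):
    """Hypothetical cost: seconds for the PTZ to slew between two
    (pan, tilt) angle pairs at `speed` degrees per second per axis."""
    return max(abs(a[0] - b[0]), abs(a[1] - b[1])) / speed

def schedule(targets, start=(0.0, 0.0)):
    """Greedy order: always visit the nearest pending target.
    `targets` maps target id -> (pan, tilt). Returns (order, total_time)."""
    pending = dict(targets)
    pos, order, total = start, [], 0.0
    while pending:
        tid = min(pending, key=lambda t: transition_time(pos, pending[t]))
        total += transition_time(pos, pending[tid])
        pos = pending.pop(tid)
        order.append(tid)
    return order, total
```

A greedy tour is not optimal in general, which is one reason the thesis formulates scheduling as an explicit optimization over acquisitions.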
The QUIS-CAMPI system is the basis for pursuing the second goal of this thesis. The analysis
of the performance of the state-of-the-art biometric recognition approaches shows that these
approaches attain almost ideal recognition rates in unconstrained data. However, this performance
is incongruous with the recognition rates observed in surveillance scenarios. Taking into
account the drawbacks of current biometric datasets, this thesis introduces a novel dataset comprising
biometric samples (face images and gait videos) acquired by the QUIS-CAMPI system at a
distance ranging from 5 to 40 meters and without human intervention in the acquisition process.
This set makes it possible to objectively assess the performance of state-of-the-art biometric recognition
methods in data that truly encompass the covariates of surveillance scenarios. As such, this set
was exploited for promoting the first international challenge on biometric recognition in the wild. This thesis describes the evaluation protocols adopted, along with the results obtained
by the nine methods specially designed for this competition. In addition, the data acquired by
the QUIS-CAMPI system were crucial for accomplishing the second goal of this thesis, i.e., the
development of methods robust to the covariates of surveillance scenarios. The first proposal
regards a method for detecting corrupted features in biometric signatures inferred by a redundancy
analysis algorithm. The second proposal is a caricature-based face recognition approach
capable of enhancing the recognition performance by automatically generating a caricature
from a 2D photo. The experimental evaluation of these methods shows that both approaches
contribute to improving the recognition performance in unconstrained data.
Hybrid learning-based model for exaggeration style of facial caricature
Predicting a facial caricature in the exaggeration style of a particular artist is a significant task in computer-generated caricature: it makes it possible to produce an artistic facial caricature very similar to a real artist's work without requiring input from a skilled user (artist). An artist's exaggeration style is difficult to encode algorithmically. Fortunately, artificial neural networks, which possess self-learning and generalization abilities, have shown great promise in capturing and learning an artist's style to predict a facial caricature. However, the main issues faced by this study are an inconsistent artist style, due to human factors, and a limited collection of image-caricature pair data. Thus, this study proposes a facial caricature dataset preparation process that yields a good-quality dataset capturing the artist's exaggeration style, and a hybrid model that generalizes over the inconsistent style so that a better, more accurate prediction can be obtained even with a small dataset. The proposed data preparation process involves facial feature parameter extraction based on landmark-based geometric morphometrics and a modified data normalization method based on the Procrustes superimposition method. The proposed hybrid model (BP-GANN) combines a Backpropagation Neural Network (BPNN) and a Genetic Algorithm Neural Network (GANN). The experimental results show that the proposed hybrid BP-GANN model outperforms the traditional hybrid GA-BPNN model, the individual BPNN model and the individual GANN model. The modified Procrustes superimposition method also produces a better-quality dataset than the original one
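The Procrustes superimposition underlying the proposed normalization can be sketched as follows. This is the standard (unmodified) alignment of 2D landmark sets, with `procrustes_align` a hypothetical name; the paper's modified variant is not reproduced here.

```python
import numpy as np

def procrustes_align(X, Y):
    """Align landmark set Y (n x 2) onto X via Procrustes superimposition:
    remove translation, scale both shapes to unit norm, then rotate Y onto X
    (orthogonal Procrustes via SVD). Returns the aligned, normalized copy of Y."""
    Xc = X - X.mean(axis=0)          # center both landmark sets
    Yc = Y - Y.mean(axis=0)
    Xc /= np.linalg.norm(Xc)         # remove scale
    Yc /= np.linalg.norm(Yc)
    U, _, Vt = np.linalg.svd(Xc.T @ Yc)
    R = U @ Vt                       # optimal orthogonal map (may reflect)
    return Yc @ R.T
```

Normalizing every training face against a common reference in this way removes pose, position and size differences so that only shape variation reaches the network.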
EigenFIT : a statistical learning approach to facial composites
EThOS - Electronic Theses Online Service, United Kingdom
Learning to Warp for Style Transfer
Since its inception in 2015, Style Transfer has focused on texturing a content image using an art exemplar. Recently, the geometric changes that artists make have been acknowledged as an important component of style [42], [55], [62], [63]. Our contribution is to propose a neural network that, uniquely, learns a mapping from a 4D array of inter-feature distances to a non-parametric 2D warp field. The system is generic in not being limited by semantic class: a single learned model will suffice, and all examples in this paper are output from one model. Our approach combines the high speed of Liu et al. [42] with the non-parametric warping of Kim et al. [55]. Furthermore, our system extends the normal NST paradigm: although it can be used with a single exemplar, we also allow two style exemplars, one for texture and another for geometry. This supports far greater flexibility in use cases than single exemplars can provide
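Applying a non-parametric 2D warp field of the kind the network predicts amounts to backward sampling. A minimal sketch, assuming a dense per-pixel flow array and using nearest-neighbour sampling with edge clamping for brevity (the paper itself does not prescribe this implementation):

```python
import numpy as np

def apply_warp(image, flow):
    """Warp a grayscale image by backward sampling:
    output[y, x] = image[y + flow[y, x, 1], x + flow[y, x, 0]],
    rounded to the nearest pixel and clamped at the borders."""
    h, w = image.shape
    x, y = np.meshgrid(np.arange(w), np.arange(h))
    xs = np.clip(np.rint(x + flow[..., 0]), 0, w - 1).astype(int)
    ys = np.clip(np.rint(y + flow[..., 1]), 0, h - 1).astype(int)
    return image[ys, xs]
```

A production implementation would interpolate bilinearly so the warp stays differentiable for end-to-end training.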
Perception and recognition of computer-enhanced facial attributes and abstracted prototypes
The influence of the human facial image was surveyed and the nature of its many interpretations was examined. The role of distinctiveness was considered particularly relevant, as it accounted for many of the impressions of character and identity ascribed to individuals. The notion of structural differences with respect to some selective essence of normality is especially important, as it allows a wide range of complex facial types to be considered and understood in an objective manner. A software tool was developed which permitted the manipulation of facial images. Quantitative distortions of digital images were examined using perceptual and recognition memory paradigms. Seven experiments investigated the role of distinctiveness in memory for faces using synthesised caricatures. The results showed that caricatures, both photographic and line-drawing, improved recognition speed and accuracy, indicating that both veridical and distinctiveness information are coded for familiar faces in long-term memory. The impact of feature metrics on perceptual estimates of facial age was examined using 'age-caricatured' images, which were found to be in relative accordance with the 'intended' computed age. Further modifying the semantics permitted the differences between individual faces to be visualised in terms of facial structure and skin texture patterns. Transformations of identity between two or more faces established the necessary matrices, which offer a categorical understanding of facial expression and its inherent interactions. A procedural extension allowed the generation of composite images in which all features are perfectly aligned. Prototypical facial types specified in this manner enabled high-level manipulations of gender and attractiveness; two experiments corroborated previously speculative material and thus gave credence to the prototype model. 
In summary, psychological assessment of computer-manipulated facial images demonstrated the validity of the objective techniques and highlighted particular parameters which contribute to our perception and recognition of the individual and of underlying facial types
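The synthesised caricatures used in such experiments follow the classical idea of exaggerating a face's deviation from a norm (prototype) face. A minimal landmark-space sketch, with `caricature` and its parameters as illustrative assumptions rather than the thesis's actual tool:

```python
import numpy as np

def caricature(landmarks, prototype, k=0.5):
    """Exaggerate a face's deviation from a prototype ("norm") face.
    k > 0 yields a caricature, k < 0 an anti-caricature, k = 0 the
    veridical face. Both arguments are (n_points, 2) landmark arrays."""
    return prototype + (1.0 + k) * (landmarks - prototype)
```

The same scalar `k` gives a continuous distinctiveness axis, which is what lets recognition speed and accuracy be measured as a function of exaggeration level.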
Animating Virtual Human for Virtual Batik Modeling
This research paper describes the development of an animated virtual human for a virtual batik modeling project. The objectives of the project are to animate the virtual human, to map the cloth onto the virtual human body, to present the batik cloth, and to evaluate the application in terms of realism of the virtual human's look, realism of the virtual human's movement, realism of the 3D scene, application suitability, application usability, fashion suitability and user acceptance. The final goal is to accomplish an animated virtual human for virtual batik modeling. There are three essential phases: research and analysis (data collection on modeling and animation techniques), development (modeling and animating the virtual human, mapping cloth to the body and adding music) and evaluation (of the realism of the virtual human's look, the realism of the virtual human's movement, the realism of the props, application suitability, application usability, fashion suitability and user acceptance). Application usability received the highest score, at 90%, showing that the application is useful to people. In conclusion, the project has met its objectives, and realism was achieved by using suitable modeling and animation techniques
3D Shape Reconstruction from Sketches via Multi-view Convolutional Networks
We propose a method for reconstructing 3D shapes from 2D sketches in the form
of line drawings. Our method takes as input a single sketch, or multiple
sketches, and outputs a dense point cloud representing a 3D reconstruction of
the input sketch(es). The point cloud is then converted into a polygon mesh. At
the heart of our method lies a deep, encoder-decoder network. The encoder
converts the sketch into a compact representation encoding shape information.
The decoder converts this representation into depth and normal maps capturing
the underlying surface from several output viewpoints. The multi-view maps are
then consolidated into a 3D point cloud by solving an optimization problem that
fuses depth and normals across all viewpoints. Based on our experiments,
compared to other methods, such as volumetric networks, our architecture offers
several advantages, including more faithful reconstruction, higher output
surface resolution, and better preservation of topology and shape structure. Comment: 3DV 2017 (oral)
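The consolidation of multi-view depth maps into a point cloud can be sketched with plain pinhole back-projection. The function names, intrinsics and poses below are assumptions, and the naive concatenation stands in for the depth-and-normal optimization the paper actually solves:

```python
import numpy as np

def backproject(depth, K, cam_to_world):
    """Back-project a depth map (H x W) into world-space 3D points using
    pinhole intrinsics K (3x3) and a 4x4 camera-to-world pose matrix."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T          # camera-space ray directions
    pts_cam = rays * depth.reshape(-1, 1)    # scale each ray by its depth
    pts_h = np.hstack([pts_cam, np.ones((pts_cam.shape[0], 1))])
    return (pts_h @ cam_to_world.T)[:, :3]   # to world coordinates

def fuse(depth_maps, Ks, poses):
    """Naive fusion: concatenate per-view back-projections into one cloud."""
    return np.vstack([backproject(d, K, T)
                      for d, K, T in zip(depth_maps, Ks, poses)])
```

Fusing by optimization instead of concatenation is what lets the predicted normals correct inconsistencies between the per-view depth maps.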