Recovering Faces from Portraits with Auxiliary Facial Attributes
Recovering a photorealistic face from an artistic portrait is a challenging task since crucial facial details are often distorted or completely lost in artistic compositions. To handle this loss, we propose Attribute-guided Face Recovery from Portraits (AFRP), which combines a Face Recovery Network (FRN) and a Discriminative Network (DN). The FRN consists of an autoencoder with residual-block-embedded skip connections and injects facial attribute vectors into the feature maps of input portraits at the bottleneck of the autoencoder. The DN has multiple convolutional and fully connected layers; its role is to push the FRN to generate authentic face images whose facial attributes match the input attribute vectors. By leveraging spatial transformer networks, the FRN automatically compensates for misalignments of portraits and produces aligned face images. To preserve identity, we encourage the recovered and ground-truth faces to share similar visual features. Specifically, the DN determines whether a recovered image looks like a real face and checks whether the facial attributes extracted from it are consistent with the given attributes. Our method can recover photorealistic, identity-preserving faces with desired attributes from unseen stylized portraits, artistic paintings, and hand-drawn sketches. On large-scale synthesized and sketch datasets, we demonstrate that our face recovery method achieves state-of-the-art results.

Comment: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV)
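The attribute-injection step at the autoencoder bottleneck can be sketched as follows. This is a minimal illustration only: the channel counts and the 18-dimensional attribute vector are assumptions, not values from the paper, and NumPy stands in for a deep-learning framework.

```python
import numpy as np

def inject_attributes(bottleneck, attrs):
    """Tile an attribute vector spatially and concatenate it to the
    bottleneck feature maps along the channel axis, a common way of
    conditioning a generator (hypothetical layout: C x H x W)."""
    c, h, w = bottleneck.shape
    # Replicate each attribute value over every spatial position.
    attr_maps = np.broadcast_to(attrs[:, None, None], (attrs.shape[0], h, w))
    return np.concatenate([bottleneck, attr_maps], axis=0)

# Illustrative sizes: 64 feature channels on an 8x8 bottleneck,
# 18 binary facial attributes.
feats = np.random.rand(64, 8, 8)
attrs = np.random.randint(0, 2, size=18).astype(float)
fused = inject_attributes(feats, attrs)
print(fused.shape)  # (82, 8, 8)
```

The decoder would then consume the fused tensor, so every spatial location sees the same attribute conditioning.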
Mining social media to create personalized recommendations for tourist visits
Users of photo-sharing platforms often annotate their trip photos with landmark names. These annotations can be aggregated to recommend lists of popular visitor attractions similar to those found in classical tourist guides. However, individual tourist preferences can vary significantly, so good recommendations should be tailored to individual tastes. Here we pose this visit personalization as a collaborative filtering problem. We mine the record of visited landmarks exposed in online user data to build a user-user similarity matrix. When a user wants to visit a new destination, a list of potentially interesting visitor attractions is produced based on the experience of like-minded users who have already visited that destination. We compare our recommender to a baseline which simulates classical tourist guides on a large sample of Flickr users.
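The user-user collaborative filtering scheme described above can be sketched in a few lines. The toy visit matrix and cosine similarity measure are illustrative assumptions, not the paper's Flickr corpus or its exact similarity function.

```python
import numpy as np

# Toy visit matrix: rows = users, columns = landmarks (1 = visited).
visits = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
], dtype=float)

def cosine_similarity(m):
    """User-user similarity matrix from row-normalized visit vectors."""
    unit = m / np.linalg.norm(m, axis=1, keepdims=True)
    return unit @ unit.T

def recommend(user, visits, top_k=2):
    """Score unvisited landmarks by similarity-weighted votes of other users."""
    sim = cosine_similarity(visits)[user]
    sim[user] = 0.0                        # ignore the user's own row
    scores = sim @ visits                  # aggregate like-minded users' visits
    scores[visits[user] > 0] = -np.inf     # drop already-visited landmarks
    return np.argsort(scores)[::-1][:top_k]

print(recommend(0, visits))  # [2 3]
```

User 0 most resembles user 1, so landmark 2 (which user 1 visited) is ranked first among the landmarks user 0 has not yet seen.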
VisualYzARt Project – The role in education
The VisualYzARt project intends to develop research on mobile platforms, web and social scenarios in order to bring augmented reality and natural interaction to the general public, aiming to study and validate the adequacy of the YVision platform in various fields of activity such as digital arts, design, education, culture and leisure. The VisualYzARt project members analysed the components available in the YVision platform and are defining new ones that allow the creation of applications for a chosen activity, effectively adding a new language to the YVision domain. In this paper we present the role of the Instituto Politécnico de Santarém, which falls within the field of education.

VisualYzARt is funded by QREN – Sistema de Incentivos à Investigação e Desenvolvimento Tecnológico (SI I&DT), Project n.º 23201 - VisualYzARt (from January 2013 to December 2014). Partners: YDreams Portugal; Instituto Politécnico de Santarém - Gabinete de e-Learning; Universidade de Coimbra - Centro de Informática e Sistemas; Instituto Politécnico de Leiria - Centro de Investigação em Informática e Comunicações; Universidade Católica do Porto - Centro de Investigação em Ciência e Tecnologia das Artes.
Organizer team at ImageCLEFlifelog 2017: baseline approaches for lifelog retrieval and summarization
This paper describes the participation of the Organizer Team in the ImageCLEFlifelog 2017 Retrieval and Summarization subtasks. We propose several baseline approaches that use only the provided information and require different levels of involvement from the users. With these baselines we aim to provide reference points for other approaches that address the problems of lifelog retrieval and summarization.
Regimes of Temporality: China, Tibet and the Politics of Time in the Post-2008 Era
While the politics of time are an important dimension of Chinese state discourse about Tibet, they remain insufficiently explored in theoretical and practical terms. This article examines the written and visual discourses of Tibetan temporality across Chinese state media in the post-2008 era. It analyses how these media discourses attempt to construct a ‘regime of temporality’ in order to manage public opinion about Tibet and consolidate Chinese rule over the region. While the expansion of online technologies has allowed the state to consolidate its discourses about Tibet’s place within the People’s Republic of China (PRC), it has also provided Tibetans with a limited but valuable space to challenge these official representations through counter readings of Tibet’s past, present and future. In doing so, this article contributes new insights into the production of state power over Tibet, online media practices in China, and the disruptive potential of social media as sites of Tibetan counter discourses.
Identity-preserving Face Recovery from Portraits
Recovering latent photorealistic faces from their artistic portraits aids human perception and facial analysis. However, a recovery process that can preserve identity is challenging because the fine details of real faces can be distorted or lost in stylized images. In this paper, we present Identity-preserving Face Recovery from Portraits (IFRP), which recovers latent photorealistic faces from unaligned stylized portraits. Our IFRP method consists of two components: a Style Removal Network (SRN) and a Discriminative Network (DN). The SRN is designed to map the feature maps of stylized images to the feature maps of the corresponding photorealistic faces. By embedding spatial transformer networks into the SRN, our method automatically compensates for misalignments of stylized faces and outputs aligned realistic face images. The role of the DN is to enforce that recovered faces look authentic. To ensure identity preservation, we encourage the recovered and ground-truth faces to share similar visual features via a distance measure that compares their features extracted from a pre-trained VGG network. We evaluate our method on a large-scale synthesized dataset of real and stylized face pairs and attain state-of-the-art results. In addition, our method can recover photorealistic faces from previously unseen stylized portraits, original paintings and human-drawn sketches.
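The identity-preserving feature distance described above can be sketched as follows. The paper extracts features from a pre-trained VGG network; here a fixed random projection stands in for that network purely to keep the sketch self-contained, so the extractor, sizes, and faces are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in feature extractor: a fixed random projection replaces the
# pre-trained VGG used in the paper (assumption, not the authors' setup).
projection = rng.standard_normal((4096, 64))

def extract_features(face_flat):
    return face_flat @ projection

def identity_distance(recovered, ground_truth):
    """Euclidean distance between the two faces' feature vectors."""
    return np.linalg.norm(extract_features(recovered) - extract_features(ground_truth))

face_gt = rng.standard_normal(4096)                      # ground-truth face (flattened)
face_good = face_gt + 0.01 * rng.standard_normal(4096)   # near-perfect recovery
face_bad = rng.standard_normal(4096)                     # unrelated face

# A faithful recovery sits much closer to the ground truth in feature space.
print(identity_distance(face_good, face_gt) < identity_distance(face_bad, face_gt))
```

Minimizing this distance during training pulls the recovered face's features toward the ground truth's, which is what preserves identity through the style-removal process.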
State College Times, November 30, 1932
Volume 21, Issue 37. https://scholarworks.sjsu.edu/spartandaily/12806/thumbnail.jp