
    Controllable Image-to-Video Translation: A Case Study on Facial Expression Generation

    The recent advances in deep learning have made it possible to generate photo-realistic images by using neural networks and even to extrapolate video frames from an input video clip. In this paper, for the sake of both furthering this exploration and our own interest in a realistic application, we study image-to-video translation and particularly focus on videos of facial expressions. Compared with image-to-image translation, this problem challenges deep neural networks with an additional temporal dimension. Moreover, its single input image fails most existing video generation methods that rely on recurrent models. We propose a user-controllable approach to generate video clips of various lengths from a single face image. The lengths and types of the expressions are controlled by users. To this end, we design a novel neural network architecture that can incorporate the user input into its skip connections and propose several improvements to the adversarial training method for the neural network. Experiments and user studies verify the effectiveness of our approach. In particular, we would like to highlight that even for face images in the wild (downloaded from the Web and the authors' own photos), our model can generate high-quality facial expression videos of which about 50% are labeled as real by Amazon Mechanical Turk workers. (Comment: 10 pages)
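The abstract does not specify how the user input enters the skip connections; as a rough illustration only, here is a minimal numpy sketch (all function names, dimensions, and the conditioning scheme are hypothetical) of one common way to do it: broadcast the control signals, here an expression label and a position within the user-chosen video length, over the spatial grid and concatenate them onto the skip features as extra channels.

```python
import numpy as np

def one_hot(index, num_classes):
    """Encode a user-selected expression type as a one-hot vector."""
    v = np.zeros(num_classes, dtype=np.float32)
    v[index] = 1.0
    return v

def conditioned_skip(skip_features, expression, frame_fraction):
    """Concatenate user control signals onto an encoder skip connection.

    skip_features:  (C, H, W) feature map from the encoder.
    expression:     one-hot vector for the chosen expression type.
    frame_fraction: scalar in [0, 1], the target frame's position within
                    the user-chosen clip length.
    """
    _, h, w = skip_features.shape
    control = np.concatenate([expression, [frame_fraction]]).astype(np.float32)
    # Tile the control vector over the spatial grid so the decoder sees it
    # at every location, then stack it onto the feature channels.
    control_maps = np.tile(control[:, None, None], (1, h, w))
    return np.concatenate([skip_features, control_maps], axis=0)

feats = np.random.randn(64, 16, 16).astype(np.float32)
expr = one_hot(2, num_classes=3)  # hypothetical expression vocabulary
fused = conditioned_skip(feats, expr, frame_fraction=0.5)
print(fused.shape)  # (68, 16, 16): 64 features + 3 expression + 1 length channel
```

Because the decoder receives the control channels at every resolution, the same trained network can be steered to different expressions and clip lengths at inference time without retraining.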

    Surviving in Manchester: Narratives on Movement from the Men's Room

    The Men’s Room is an arts and social care agency that works creatively with young men, offering them opportunities to get involved in arts projects whilst accessing support for challenges they may be facing in their lives. The project engages different constituencies of young men experiencing severe and multiple disadvantage, including those involved with sex work or with experience of sexual exploitation, and those with experience of homelessness and/or the criminal justice system. ‘Surviving in Manchester’ was commissioned by the Lankelly Chase Foundation (LCF) and aimed to explore young men’s routes into the Men’s Room as well as how they defined successful service provision. The research included ethnographic fieldwork, walking tours led by young men to sites that they connected with their survival in the city, and a Visual Matrix conducted with staff and volunteers. It argues that the relational approach of the Men’s Room is a key organisational strength. This approach combines informal and formal support, unconditional acceptance, clear ground rules, and gauging of supportive interventions in ways that are sensitive to the young men’s readiness and ability to ‘move on’. It also includes valuable opportunities for social gathering, creative expression, and public storytelling and image-making that extend the artistic and imaginative capacities of the young men and celebrate their abilities and experiences.

    Vision-based Detection of Acoustic Timed Events: a Case Study on Clarinet Note Onsets

    Acoustic events often have a visual counterpart. Knowledge of visual information can aid the understanding of complex auditory scenes, even when only a stereo mixdown is available in the audio domain, e.g. identifying which musicians are playing in large musical ensembles. In this paper, we consider a vision-based approach to note onset detection. As a case study we focus on challenging, real-world clarinetist videos and carry out preliminary experiments on a 3D convolutional neural network based on multiple streams and purposely avoiding temporal pooling. We release an audiovisual dataset with 4.5 hours of clarinetist videos together with cleaned annotations, which include about 36,000 onsets and the coordinates for a number of salient points and regions of interest. By performing several training trials on our dataset, we learned that the problem is challenging. We found that the CNN model is highly sensitive to the optimization algorithm and hyper-parameters, and that treating the problem as binary classification may prevent the joint optimization of precision and recall. To encourage further research, we publicly share our dataset, annotations and all models, and detail which issues we came across during our preliminary experiments. (Comment: Proceedings of the First International Conference on Deep Learning and Music, Anchorage, US, May 2017; arXiv:1706.08675v1 [cs.NE])
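The abstract's point about avoiding temporal pooling is a shape-arithmetic constraint: onsets are timed events, so the network must keep one output per input frame, which means any striding or pooling along the time axis destroys the temporal resolution needed to localize them. The exact architecture is not given here; the following standalone sketch (all kernel and input sizes hypothetical) just works through the shape arithmetic, showing that a stride-1, padded temporal convolution plus spatial-only pooling preserves the frame count.

```python
import numpy as np

def conv3d_out_shape(t, h, w, k=(3, 3, 3), stride=(1, 1, 1), pad=(1, 0, 0)):
    """Output (T, H, W) of a 3D convolution, per the standard formula
    out = (n + 2*pad - kernel) // stride + 1 applied per axis."""
    def out(n, kk, s, p):
        return (n + 2 * p - kk) // s + 1
    return (out(t, k[0], stride[0], pad[0]),
            out(h, k[1], stride[1], pad[1]),
            out(w, k[2], stride[2], pad[2]))

def spatial_pool_shape(t, h, w, pool=(1, 2, 2)):
    """Max-pool only over space: the temporal axis is left untouched."""
    return (t // pool[0], h // pool[1], w // pool[2])

shape = (50, 64, 64)                # 50 video frames of 64x64 crops (hypothetical)
shape = conv3d_out_shape(*shape)    # temporal pad=1, stride=1 keeps all 50 frames
shape = spatial_pool_shape(*shape)  # halves space only
print(shape)  # (50, 31, 31): full temporal resolution, per-frame onset scores possible
```

With the temporal dimension intact through every layer, the head can emit one onset score per frame, which also makes the precision/recall trade-off of the binary-classification framing the abstract mentions directly visible in the per-frame outputs.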