
    When the flame dies

    Composer: Ed Hughes. Librettist: Roger Morris. Video: Will Reynolds & Poppy Burton-Morgan. With the voices of Andrew McIntosh (baritone), Lucy Williams (mezzo), Peter Kirk (tenor), Emily Phillips (soprano) and Ben Williamson (counter-tenor); also video artist Loren O'Dair. Ensemble: The New Music Players. Advisers: Tim Hopkins and David Chandler (Professor of Photography, University of Plymouth). Duration: 70 minutes.
    The unnamed Poet, protagonist of the drama, dreams of the Underworld, where he meets the characters of his past and his imagination. He must choose between love and creativity. This new opera is being worked on during Autumn 2011 and Spring 2012 towards a full scoring for a cast of five singers and ensemble (The New Music Players) with live electronics. A public presentation is planned for 2013. The project will explore the use of specially created video, combining newly conceived material with archive stills and film footage, in order to devise new textures in the concert performance of opera, and to find fresh ways of contextualising works with historical and mythical resonances in performance.

    A Web video retrieval method using hierarchical structure of Web video groups

    In this paper, we propose a Web video retrieval method that uses the hierarchical structure of Web video groups. Existing retrieval systems require users to input suitable queries that identify the desired contents in order to accurately retrieve Web videos; the proposed method, however, enables retrieval of the desired Web videos even when users cannot formulate such queries. Specifically, we first select representative Web videos from a target video dataset by using link relationships between Web videos, obtained via the “related videos” metadata, and heterogeneous video features. Using the representative Web videos, we then construct a network whose nodes and edges correspond to Web videos and the links between them, respectively. Web video groups, i.e., sets of Web videos with similar topics, are then hierarchically extracted based on strongly connected components, edge betweenness and modularity. By presenting the obtained hierarchical structure of Web video groups, users can easily grasp an overview of many Web videos. Consequently, even if users cannot write suitable queries that identify the desired contents, they can accurately retrieve the desired Web videos by selecting Web video groups according to the hierarchical structure. Experimental results on actual Web videos verify the effectiveness of our method.
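    The grouping stage above starts from strongly connected components (SCCs) of the directed “related videos” network. A minimal sketch of that first stage in plain Python, on a hypothetical toy link set (the paper's later edge-betweenness and modularity refinements are omitted):

```python
# Sketch of the first grouping stage: extract SCCs of a directed
# "related videos" network. The toy links below are hypothetical.

def strongly_connected_components(edges):
    """Kosaraju's algorithm: two DFS passes, over the graph and its reverse."""
    graph, rgraph, nodes = {}, {}, set()
    for u, v in edges:
        graph.setdefault(u, []).append(v)
        rgraph.setdefault(v, []).append(u)
        nodes.update((u, v))

    order, seen = [], set()

    def finish_order(u):  # first pass: record DFS finish order
        seen.add(u)
        for v in graph.get(u, []):
            if v not in seen:
                finish_order(v)
        order.append(u)

    for u in nodes:
        if u not in seen:
            finish_order(u)

    comp = {}

    def assign(u, root):  # second pass: label components on the reverse graph
        comp[u] = root
        for v in rgraph.get(u, []):
            if v not in comp:
                assign(v, root)

    for u in reversed(order):
        if u not in comp:
            assign(u, u)

    groups = {}
    for u, root in comp.items():
        groups.setdefault(root, set()).add(u)
    return list(groups.values())

# Two mutually linked clusters of videos joined by a one-way bridge (v3 -> v4):
links = [("v1", "v2"), ("v2", "v1"), ("v2", "v3"), ("v3", "v1"),
         ("v4", "v5"), ("v5", "v4"), ("v5", "v6"), ("v6", "v4"),
         ("v3", "v4")]
groups = strongly_connected_components(links)
# the one-way bridge keeps {v1, v2, v3} and {v4, v5, v6} as separate SCCs
```

    In the method described above, each coarse group would then be refined by repeatedly removing high-betweenness edges and keeping the partition that maximizes modularity; a graph library such as NetworkX provides those primitives directly.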

    Diavideos: a Diabetes Health Video Portal

    Diavideos is a web platform that collects trustworthy diabetes health videos from YouTube and offers them in an easy way. YouTube is a large repository of health videos, but good content is sometimes mixed with misleading and harmful videos, such as those promoting anorexia [1]. Diavideos is a web portal that provides easy access to a repository of trustworthy diabetes videos. This poster describes Diavideos and explains the crawling method used to retrieve these videos from trusted channels.
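    The crawling approach described is channel-based rather than query-based: only uploads from a whitelist of trusted channels are kept. A minimal sketch of that filtering idea (the data shape and channel names are assumptions for illustration, not taken from the poster):

```python
# Sketch of trusted-channel filtering; channel names are hypothetical.
TRUSTED_CHANNELS = {"DiabetesAssociation", "HealthEducationTV"}

def filter_trusted(videos, trusted=TRUSTED_CHANNELS):
    """Keep only videos whose uploading channel is on the trusted whitelist."""
    return [v for v in videos if v.get("channel") in trusted]

crawled = [
    {"title": "Managing type 2 diabetes", "channel": "DiabetesAssociation"},
    {"title": "Miracle cure in one week!", "channel": "UnknownUploader"},
]
trusted_videos = filter_trusted(crawled)
```

    In a real deployment the `crawled` list would come from a video platform API queried per trusted channel, with the whitelist curated by health professionals.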

    Summarizing First-Person Videos from Third Persons' Points of Views

    Video highlighting, or summarization, is among the interesting topics in computer vision and benefits a variety of applications such as viewing, searching and storage. However, most existing studies rely on training data of third-person videos, which does not easily generalize to highlighting first-person ones. With the goal of deriving an effective model to summarize first-person videos, we propose a novel deep neural network architecture for describing and discriminating vital spatiotemporal information across videos with different points of view. Our proposed model is realized in a semi-supervised setting, in which fully annotated third-person videos, unlabeled first-person videos, and a small number of annotated first-person ones are presented during training. In our experiments, qualitative and quantitative evaluations on both benchmarks and our collected first-person video datasets are presented. Comment: 16+10 pages, ECCV 201