
    CHORUS Deliverable 3.3: Vision Document - Intermediate version

    The goal of the CHORUS vision document is to create a high-level vision of audio-visual search engines in order to guide future R&D work in this area (in line with the mandate of CHORUS as a Coordination Action). This current intermediate draft of the CHORUS vision document (D3.3) is based on the previous CHORUS vision documents D3.1 and D3.2, on the results of the six CHORUS Think-Tank meetings held in March, September and November 2007 and in April, July and October 2008, and on feedback from other CHORUS events. The outcome of the six Think-Tank meetings will not only benefit the participants, who are stakeholders and experts from academia and industry; CHORUS, as a coordination action of the EC, will feed the findings (see Summary) back to the projects under its purview and, via its website, to the whole community working in the domain of AV content search. A few subsections of this deliverable are to be completed after the eighth (and presumably last) Think-Tank meeting in spring 2009.

    Utilizing Video Multimedia Tools in Biology Labs

    Research has shown that the addition of multimedia to education improves learning. Multimedia has proven particularly beneficial in the life sciences because it provides visualization of concepts that are difficult to envision. Tutorial videos were implemented in the WPI introductory biology laboratories with the intention of improving material retention. It was concluded that although the video protocols were appreciated and utilized by students, it could not be determined whether the presence of the videos effectively increased student comprehension.

    The Evolution of First Person Vision Methods: A Survey

    The emergence of new wearable technologies such as action cameras and smart glasses has increased the interest of computer vision scientists in the first-person perspective. Nowadays, this field is attracting the attention and investment of companies aiming to develop commercial devices with First Person Vision recording capabilities. Owing to this interest, an increasing demand for methods to process these videos, possibly in real time, is expected. Current approaches present particular combinations of different image features and quantitative methods to accomplish specific objectives such as object detection, activity recognition, and user-machine interaction. This paper summarizes the evolution of the state of the art in First Person Vision video analysis between 1997 and 2014, highlighting, among others, the most commonly used features, methods, challenges and opportunities within the field. Comment: First Person Vision, Egocentric Vision, Wearable Devices, Smart Glasses, Computer Vision, Video Analytics, Human-machine Interaction