    Synote mobile HTML5 responsive design video annotation application

    Synote Mobile has been developed as an accessible, cross-device and cross-browser HTML5 web-based collaborative replay and annotation tool to make web-based recordings easier to access, search, manage, and exploit for learners, teachers and others. It is a new mobile HTML5 version of the award-winning, open-source and freely available Synote, which has been used since 2008 by students throughout the world to learn interactively from recordings. While most UK students now carry mobile devices capable of replaying Internet video, the majority of these devices cannot replay Synote's accessible, searchable, annotated recordings, as Synote was created in 2008 when few students had phones or tablets capable of replaying these videos.

    Query independent measures of annotation and annotator impact

    The modern-day web user plays a far more active role in the creation of content for the web as a whole. In this paper we present Annoby, a free-text annotation system built to give users a more interactive experience of the events of the Rugby World Cup 2007. Annotations can be used for query-independent ranking of both the annotations and the original recorded video footage (or documents) that has been annotated, based on the social interactions of a community of users. We present two algorithms, AuthorRank and MessageRank, designed to take advantage of these interactions so as to provide a means of ranking documents by their social impact.
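
    The abstract names AuthorRank and MessageRank but does not give their definitions; the sketch below is a hypothetical PageRank-style illustration of ranking annotators by a community's reply behaviour, not the paper's actual algorithms.

        # Hypothetical sketch: PageRank-style impact scores over a reply graph,
        # in the spirit of (but not taken from) the paper's AuthorRank/MessageRank.
        from collections import defaultdict

        def impact_scores(replies, damping=0.85, iterations=50):
            """replies: list of (replier, replied_to) author pairs."""
            authors = {a for pair in replies for a in pair}
            incoming = defaultdict(set)    # author -> authors who replied to them
            out_degree = defaultdict(int)  # number of replies each author made
            for replier, target in replies:
                incoming[target].add(replier)
                out_degree[replier] += 1
            score = {a: 1.0 / len(authors) for a in authors}
            for _ in range(iterations):
                score = {
                    a: (1 - damping) / len(authors)
                       + damping * sum(score[r] / out_degree[r] for r in incoming[a])
                    for a in authors
                }
            return score

        scores = impact_scores([("alice", "bob"), ("carol", "bob"), ("bob", "alice")])
        # "bob" accumulates the most impact: two distinct users replied to him.

    A document's rank could then aggregate the impact scores of the users who annotated it, which is one way such social interactions could feed a query-independent ranking.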

    Online Interactive E-Learning Using Video Annotation

    Streaming video on the Internet is being widely deployed, with workplace training, e-lectures and distance education as key applications. The ability to annotate video on the Web can provide significant added value in these and other areas. Written and spoken annotations can provide "in context" personal notes and can enable asynchronous collaboration among groups of users. With annotations, users are no longer restricted to viewing content passively on the Web, but are free to add and share commentary and links, thus transforming the Web into an interactive medium. We discuss design issues in constructing a collaborative video annotation system and introduce our model, called ABVR. We present preliminary data on the use of Web-based annotations for personal note-taking and for sharing notes in a distance education scenario. Users showed a strong preference for the ABVR system over pen and paper for taking notes, despite taking longer to do so. They also indicated that they could produce more comments and questions with ABVR than in a "live" situation, and that sharing added substantial value. Tagging additionally lets users jump into a video at a specific timestamp. DOI: 10.17762/ijritcc2321-8169.150610
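
    The abstract describes tag-based jumps to a specific timestamp within a video; the minimal sketch below shows one way such an annotation store could work. The names and structure are assumptions for illustration, not taken from the ABVR system.

        # Minimal sketch of a timestamped, taggable annotation store;
        # all names are hypothetical, not taken from ABVR.
        from dataclasses import dataclass, field

        @dataclass
        class Annotation:
            video_id: str
            time_sec: float      # anchor point in the video timeline
            author: str
            text: str
            tags: set = field(default_factory=set)

        class AnnotationStore:
            def __init__(self):
                self.annotations = []

            def add(self, ann: Annotation):
                self.annotations.append(ann)

            def jump_points(self, video_id: str, tag: str):
                """Timestamps a player should seek to for a given tag."""
                return sorted(a.time_sec for a in self.annotations
                              if a.video_id == video_id and tag in a.tags)

        store = AnnotationStore()
        store.add(Annotation("lecture01", 754.0, "alice", "Key definition", {"exam"}))
        print(store.jump_points("lecture01", "exam"))  # -> [754.0]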

    SportsAnno: what do you think?

    The automatic summarisation of sports video is of growing importance with the increased availability of on-demand content. Consumers who are unable to view events live often wish to watch a summary which allows them to quickly catch up on all that has happened during a sporting event. Sports forums show that it is not only summaries that are desirable, but also the opportunity to share one's own point of view and discuss opinions with a community of similar users. In this paper we give an overview of the ways in which annotations have been used to augment existing visual media. We present SportsAnno, a system developed to summarise World Cup 2006 matches and provide a means for open discussion of events within these matches.

    Leveraging video annotations in video-based e-learning

    The e-learning community has been producing and using video content for a long time, and in recent years the advent of MOOCs has relied heavily on video recordings of teachers' courses. Video annotations are pieces of information that can be anchored in the temporality of the video so as to sustain various processes, ranging from active reading to rich media editing. In this position paper we study how video annotations can be used in an e-learning context, especially MOOCs, from the triple point of view of pedagogical processes, the functionalities of current technical platforms, and current challenges. Our analysis is that there is still plenty of room for leveraging video annotations in MOOCs beyond simple active reading, namely live annotation, performance annotation and annotation for assignment, and that new developments are needed to accompany this evolution.
    Comment: 7th International Conference on Computer Supported Education (CSEDU), Barcelona, Spain (2014)

    Synote: weaving media fragments and linked data

    While end users can easily share and tag multimedia resources online, searching and reusing the inside content of multimedia, such as a certain area within an image or a ten-minute segment within a one-hour video, is still difficult. Linked data is a promising way to interlink media fragments with other resources. Many Web 2.0 applications have generated a large amount of external annotations linked to media fragments. In this paper, we use Synote as the target application to discuss how media fragments can be published together with external annotations following linked data principles. Our design solves the problems of dereferencing, describing and interlinking in interlinking multimedia. We also implement a model to let Google index media fragments, which improves their online presence. The evaluation shows that our design can successfully publish media fragments and annotations for both semantic Web agents and traditional search engines. Publishing media fragments using the design described in this paper will lead to better indexing of multimedia resources and their consequent findability.
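
    As an illustration of the kind of publishing the abstract describes, the sketch below attaches an annotation to a W3C Media Fragments URI (the #t=start,end temporal syntax) using the Web Annotation vocabulary. The URIs and the choice of rdflib are assumptions for illustration, not details of Synote's actual design.

        # Illustrative sketch (not Synote's actual implementation): publishing
        # an annotation on a temporal media fragment as linked data with rdflib.
        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import RDF

        OA = Namespace("http://www.w3.org/ns/oa#")  # W3C Web Annotation vocabulary

        g = Graph()
        g.bind("oa", OA)

        # Hypothetical URIs; #t=120,180 is the Media Fragments temporal syntax
        # denoting the segment from 120 s to 180 s of the video.
        fragment = URIRef("http://example.org/video.mp4#t=120,180")
        annotation = URIRef("http://example.org/annotations/42")

        g.add((annotation, RDF.type, OA.Annotation))
        g.add((annotation, OA.hasTarget, fragment))  # the fragment being annotated
        g.add((annotation, OA.bodyValue, Literal("Discussion of the key result")))

        print(g.serialize(format="turtle"))

    Because the fragment identifier is part of a dereferenceable URI, both the segment and its annotation become addressable resources that semantic Web agents can follow and search engines can index.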

    Using Sensor Metadata Streams to Identify Topics of Local Events in the City

    In this paper, we study the emerging Information Retrieval (IR) task of local event retrieval using sensor metadata streams. Sensor metadata streams include information such as the crowd density from video processing, audio classifications, and social media activity. We propose to use these metadata streams to identify the topics of local events within a city, where each event topic corresponds to a set of terms representing a type of event, such as a concert or a protest. We develop a supervised approach that is capable of mapping sensor metadata observations to an event topic. In addition to using a variety of sensor metadata observations about the current status of the environment as learning features, our approach incorporates additional background features to model cyclic event patterns. Through experimentation with data collected from two locations in a major Spanish city, we show that our approach markedly outperforms an alternative baseline. We also show that modelling background information improves event topic identification.
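
    The abstract describes a supervised mapping from sensor metadata observations to event topics, with background features for cyclic patterns. The sketch below shows one plausible shape for such a pipeline; the feature names, the sine/cosine time encoding and the choice of classifier are assumptions, not the paper's actual model.

        # Hypothetical sketch of a supervised event-topic classifier over sensor
        # metadata, with cyclic time-of-day background features.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def featurise(crowd_density, audio_score, social_activity, hour):
            # Encode the hour on a circle so 23:00 and 01:00 end up close together.
            hour_sin = np.sin(2 * np.pi * hour / 24)
            hour_cos = np.cos(2 * np.pi * hour / 24)
            return [crowd_density, audio_score, social_activity, hour_sin, hour_cos]

        # Toy observations: (crowd density, audio score, social activity, hour).
        X = np.array([featurise(0.9, 0.8, 120, 21),   # evening, loud, busy
                      featurise(0.7, 0.2, 300, 14),   # afternoon, very active socially
                      featurise(0.1, 0.1, 5, 3)])     # night, empty
        y = np.array(["concert", "protest", "no_event"])

        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
        print(clf.predict([featurise(0.85, 0.75, 150, 22)]))  # likely "concert"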