Personalized video summarization by highest quality frames
In this work, a user-centered approach forms the basis for generating personalized video summaries. First, video experts score and annotate the video frames during the enrichment phase. The frame scores for the different video segments are then updated according to the captured priorities of the end-users (who are distinct from the video experts) towards the existing video scenes. Finally, given a pre-defined skimming time, the highest-scored video frames are extracted and included in the personalized video summary. To evaluate the effectiveness of the proposed model, we compared the video summaries generated by our system against the results from four other summarization tools using different modalities.
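The pipeline described above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function and variable names are hypothetical, the skimming time is simplified to a frame budget, and scene labels and priority weights stand in for the real annotation data.

```python
# Hypothetical sketch of the personalized summarization pipeline: expert
# frame scores are re-weighted by per-scene end-user priorities, then the
# highest-scoring frames are kept until the skimming-time budget (expressed
# here as a frame count) is filled. All names are illustrative.

def personalized_summary(expert_scores, frame_scene, scene_priority, budget):
    """Return indices of the frames selected for the summary.

    expert_scores  -- per-frame scores from the enrichment phase
    frame_scene    -- frame index -> scene label
    scene_priority -- scene label -> user priority weight (default 1.0)
    budget         -- number of frames the skimming time allows
    """
    # Update each frame's expert score by the viewer's scene priority.
    updated = [
        score * scene_priority.get(frame_scene[i], 1.0)
        for i, score in enumerate(expert_scores)
    ]
    # Keep the highest-scoring frames, returned in temporal order.
    top = sorted(range(len(updated)), key=lambda i: updated[i], reverse=True)[:budget]
    return sorted(top)

# Example: the viewer strongly prefers the "goal" scene over "crowd" shots.
scores = [0.2, 0.9, 0.4, 0.7, 0.1]
scenes = {0: "intro", 1: "goal", 2: "intro", 3: "crowd", 4: "crowd"}
print(personalized_summary(scores, scenes, {"goal": 2.0, "crowd": 0.5}, budget=2))
# → [1, 2]
```

Note that the priority weighting can demote a frame the experts rated highly (frame 3 above), which is exactly the personalization effect the abstract describes.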
Image/video indexing, retrieval and summarization based on eye movement
Information retrieval is one of the most fundamental functions of the information era. In image/video retrieval there is ambiguity in the user's scope of interest, since an image usually contains one or more main objects in focus as well as other objects that are considered "background". This ambiguity often reduces the accuracy of image-based retrieval, such as query by image example. Gaze detection is a promising approach for implicitly detecting the focus of interest in an image or in video data, and thus for improving the performance of image retrieval, filtering and video summarization. In this paper, image/video indexing, retrieval and summarization based on gaze detection are described.
User-centred video abstraction
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London.

The rapid growth of digital video content in recent years has imposed the need for technologies capable of producing condensed but semantically rich versions of an input video stream in an effective manner. Consequently, the topic of Video Summarisation is becoming increasingly popular in the multimedia community, and numerous video abstraction approaches have been proposed accordingly. These techniques can be divided into two major categories, automatic and semi-automatic, according to the level of human intervention required in the summarisation process. The fully automated methods mainly adopt low-level visual, aural and textual features alongside mathematical and statistical algorithms to extract the most significant segments of the original video. However, the effectiveness of this type of technique is restricted by a number of factors such as domain dependency, computational expense and the inability to understand the semantics of videos from low-level features. The second category of techniques, by contrast, attempts to improve the quality of summaries by involving humans in the abstraction process to bridge the semantic gap. Nonetheless, a single user's subjectivity and other external factors such as distraction can deteriorate the performance of this group of approaches. Accordingly, this thesis focuses on the development of three user-centred video summarisation techniques that can be applied to different video categories and generate satisfactory results. In the first proposed approach, a novel mechanism for user-centred video summarisation is presented for scenarios in which multiple actors are employed in the summarisation process, in order to minimise the negative effects of relying on a single user.
Based on our recommended algorithm, the video frames were initially scored by a group of video annotators ‘on the fly’. This was followed by averaging these assigned scores in order to generate a singular saliency score for each video frame and, finally, the highest scored video frames alongside the corresponding audio and textual contents were extracted to be included into the final summary. The effectiveness of our approach has been assessed by comparing the video summaries generated based on our approach against the results obtained from three existing automatic summarisation tools that adopt different modalities for abstraction purposes. The experimental results indicated that our proposed method is capable of delivering remarkable outcomes in terms of Overall Satisfaction and Precision with an acceptable Recall rate, indicating the usefulness of involving user input in the video summarisation process. In an attempt to provide a better user experience, we have proposed our personalised video summarisation method with an ability to customise the generated summaries in accordance with the viewers’ preferences. Accordingly, the end-user’s priority levels towards different video scenes were captured and utilised for updating the average scores previously assigned by the video annotators. Finally, our earlier proposed summarisation method was adopted to extract the most significant audio-visual content of the video. Experimental results indicated the capability of this approach to deliver superior outcomes compared with our previously proposed method and the three other automatic summarisation tools. Finally, we have attempted to reduce the required level of audience involvement for personalisation purposes by proposing a new method for producing personalised video summaries. Accordingly, SIFT visual features were adopted to identify the video scenes’ semantic categories. 
By fusing this retrieved data with pre-built user profiles, personalised video abstracts can be created. Experimental results showed the effectiveness of this method in delivering superior outcomes compared with our previously recommended algorithm and the three other automatic summarisation techniques.
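The profile-fusion stage of this final approach can be sketched as below. This is an illustrative sketch, not the thesis implementation: the semantic category of each scene (obtained via SIFT matching in the original work) is assumed to be already given, and the profile weights, scene names and function signature are hypothetical.

```python
# Hypothetical sketch of the profile-fusion stage: once each scene has a
# semantic category (derived from SIFT features in the original work,
# assumed given here), the scene's average annotator score is weighted by
# the matching interest value in the viewer's pre-built profile, so no
# explicit viewer input is needed at summarisation time.

def profile_weighted_scores(scene_scores, scene_category, user_profile):
    """Re-weight average annotator scores per scene by profile interest."""
    return {
        scene: score * user_profile.get(scene_category[scene], 1.0)
        for scene, score in scene_scores.items()
    }

avg_scores = {"s1": 0.5, "s2": 0.75, "s3": 0.25}          # from annotators
categories = {"s1": "sports", "s2": "news", "s3": "sports"}  # via SIFT matching
profile = {"sports": 2.0, "news": 0.5}                    # pre-built interest weights
print(profile_weighted_scores(avg_scores, categories, profile))
```

Scene extraction then proceeds as in the earlier methods, selecting the highest-weighted content up to the skimming time; the difference is that the personalisation signal comes from the stored profile rather than from live viewer feedback.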
Student-summarized videos in an Adaptive and Collaborative E-learning Environment (ACES)
The purpose of this research was to develop a collaborative e-learning framework that uses summarised videos as learning media to provide a more efficient learning experience in which participants' engagement and motivation are enhanced. The research aims to increase participants' overall learning level, understanding, motivation and communication skills.
For this research, a collaborative environment has been built in which students participate in a video sharing system that allows them to create their own summarized videos from existing course video material. Students can then share these videos with other system participants, who can view, rate and comment on them. Instructors upload the core video footage, which the students are able to edit and summarize.
Two experiments were run with live modules within the Department of Informatics: a pilot study and a full experiment. Feedback from the pilot study was used to refine the framework for the full study. The experiments involved pre- and post-participation surveys to measure satisfaction and awareness effects. System participation data was also used to analyse engagement and other factors defining the outcomes of the experiment.
The findings showed a considerable increase in student satisfaction with their understanding and motivation when using the video summarization tool in the experiments. The results for the collaboration aspect of the experiment showed a slight increase in satisfaction with their learning level; however, it had minimal effect on students' motivation and engagement, as no significant difference was noted after using the system.