
    Multimodal Video Analysis and Modeling

    From recalling long-forgotten experiences based on a familiar scent or a piece of music, to lip-reading-aided conversation in noisy environments or travel sickness caused by a mismatch between the signals from vision and the vestibular system, human perception manifests countless examples of subtle and effortless joint use of the multiple senses provided to us by evolution. Emulating such multisensory (or multimodal, i.e., comprising multiple types of input modes or modalities) processing computationally offers tools for more effective, efficient, or robust accomplishment of many multimedia tasks using evidence from the multiple input modalities. Information from the modalities can also be analyzed for patterns and connections across them, opening up interesting applications not feasible with a single modality, such as prediction of some aspects of one modality based on another. In this dissertation, multimodal analysis techniques are applied to selected video tasks with accompanying modalities. More specifically, all the tasks involve some type of analysis of videos recorded by non-professional videographers using mobile devices.

    Fusion of information from multiple modalities is applied to recording environment classification from video and audio, as well as to sport type classification from a set of multi-device videos, the corresponding audio, and recording device motion sensor data. The environment classification combines support vector machine (SVM) classifiers trained on various global visual low-level features with audio event histogram-based environment classification using k-nearest neighbors (k-NN). Rule-based fusion schemes with genetic algorithm (GA)-optimized modality weights are compared to training an SVM classifier to perform the multimodal fusion. A comprehensive selection of fusion strategies is compared for the task of classifying the sport type of a set of recordings from a common event. These include fusion prior to, simultaneously with, and after classification; various approaches for using modality quality estimates; and fusing soft confidence scores as well as crisp single-class predictions. Additionally, different strategies are examined for aggregating the decisions of single videos into a collective prediction for the set of videos recorded concurrently with multiple devices. In both tasks, multimodal analysis shows a clear advantage over separate classification of the modalities.

    Another part of the work investigates cross-modal pattern analysis and audio-based video editing. This study examines the feasibility of automatically timing shot cuts of multi-camera concert recordings according to music-related cutting patterns learnt from professional concert videos. Cut timing is a crucial part of automated creation of multi-camera mashups, where shots from multiple recording devices from a common event are alternated with the aim of mimicking a professionally produced video. In the framework, separate statistical models are formed for typical patterns of beat-quantized cuts in short segments, differences in beats between consecutive cuts, and the relative deviation of cuts from exact beat times. Based on music meter and audio change point analysis of a new recording, the models can be used to synthesize cut times. In a user study, the proposed framework clearly outperforms a baseline automatic method with comparably advanced audio analysis and wins 48.2 % of comparisons against hand-edited videos.
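    The rule-based late fusion described above can be illustrated with a minimal sketch. The code below is hypothetical and is not the dissertation's implementation: it simply combines per-modality soft confidence scores (e.g. visual SVM outputs and an audio event histogram k-NN vote) with fixed modality weights, whereas the thesis additionally studies GA-optimized weights, SVM-based fusion, and several other strategies.

```python
import numpy as np

def late_fusion(scores_by_modality, weights):
    """Rule-based late fusion: weighted sum of per-modality class scores.

    scores_by_modality: dict mapping a modality name to an array of shape
                        (n_classes,) of soft confidence scores.
    weights:            dict mapping a modality name to a non-negative weight
                        (the thesis optimizes such weights with a GA; here
                        they are simply given).
    Returns the index of the winning class.
    """
    fused = None
    for modality, scores in scores_by_modality.items():
        s = np.asarray(scores, dtype=float)
        s = s / (s.sum() + 1e-12)            # normalize so modalities are comparable
        contribution = weights.get(modality, 0.0) * s
        fused = contribution if fused is None else fused + contribution
    return int(np.argmax(fused))

# Hypothetical example: three environment classes, two modalities.
video_scores = [0.2, 0.5, 0.3]   # e.g. visual SVM outputs mapped to probabilities
audio_scores = [0.6, 0.3, 0.1]   # e.g. audio event histogram k-NN votes
print(late_fusion({"video": video_scores, "audio": audio_scores},
                  {"video": 0.4, "audio": 0.6}))   # -> 0 for these numbers
```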

    Automatic mashup generation of multiple-camera videos

    The amount of user-generated video content is growing enormously with the increase in availability and affordability of technologies for video capturing (e.g. camcorders, mobile phones), storing (e.g. magnetic and optical devices, online storage services), and sharing (e.g. broadband internet, social networks). It has become a common sight at social occasions such as parties, concerts, weddings, and vacations that many people shoot videos at approximately the same time. Such concurrent recordings provide multiple views of the same event. In professional video production, the use of multiple cameras is very common: in order to compose an interesting video to watch, audio and video segments from different recordings are mixed into a single video stream. In the case of non-professional recordings, however, mixing different camera recordings is not common, as the process is considered very time consuming and requires expertise. In this thesis, we research how to automatically combine multiple-camera recordings into a single video stream, called a mashup. Since non-professional recordings are, in general, characterized by low signal quality and a lack of artistic appeal, our objective is to use mashups to enrich the viewing experience of such recordings.

    In order to define a target application and collect requirements for a mashup, we conducted a study involving experts on video editing and general camera users by means of interviews and focus groups. Based on the study results, we decided to work on the domain of concert videos. We listed the requirements for concert video mashups, such as image quality, diversity, and synchronization. According to the requirements, we proposed a solution approach for mashup generation and introduced a formal model consisting of pre-processing, mashup-composition and post-processing steps. This thesis describes the pre-processing and mashup-composition steps, which result in the automatic generation of a mashup satisfying a set of the elicited requirements.

    In the pre-processing step, we synchronized the multiple-camera recordings so that they are represented on a common timeline. We proposed and developed synchronization methods based on detecting and matching audio and video features extracted from the recorded content, and we developed three realizations of the approach using different features: still-camera flashes in video, audio fingerprints, and audio onsets. The realizations are independent of the frame rate of the recordings and the number of cameras, and they provide synchronization offsets with frame-level accuracy. Based on their performance on a common data set, the audio-fingerprint and audio-onset realizations were found to be the most suitable for generating mashups of concert videos.

    In the mashup-composition step, we proposed an optimization-based solution to compose a mashup from the synchronized recordings. The solution is based on maximizing an objective function containing a number of parameters, which represent the requirements that influence mashup quality. The function is subject to a number of constraints, which represent the requirements that must be fulfilled in a mashup. Different audio-visual feature extraction and analysis techniques were employed to measure the degree of fulfillment of the requirements represented in the objective function. We developed an algorithm, first-fit, to compose a mashup satisfying the constraints and maximizing the objective function.
    Finally, to validate our solution approach, we compared the mashups generated by the first-fit algorithm with those generated by two other methods. In the first method, naive, a mashup was generated by satisfying only the requirements given as constraints, and in the second method, manual, a mashup was created by a professional. In the objective evaluation, the first-fit mashups scored higher than both the manual and naive mashups. To assess end-user satisfaction, we also conducted a user study in which we measured user preferences for the mashups generated by the three methods on different aspects of mashup quality. In all the aspects, the naive mashup scored significantly lower, while the manual and first-fit mashups scored similarly. We can conclude that the perceived quality of a mashup generated by the naive method is lower than that of the first-fit and manual mashups, while the perceived quality of the mashups generated by the first-fit and manual methods is similar.
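    To make the composition step more concrete, the following is a small, greedy sketch in the spirit of a first-fit composition over a synchronized timeline. It is an assumption-laden toy rather than the thesis algorithm: the Segment fields, the quality scores, the diversity bonus, and the shot-length limits are hypothetical stand-ins for the requirements encoded in the actual objective function and constraints.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Segment:
    camera: int
    start: float       # seconds on the common timeline
    end: float
    quality: float     # 0..1, e.g. from sharpness/shake analysis (hypothetical)

def compose_mashup(candidates: List[Segment],
                   min_shot: float = 2.0,
                   max_shot: float = 6.0,
                   diversity_bonus: float = 0.2) -> List[Segment]:
    """Greedy, first-fit-style composition sketch (not the thesis algorithm):
    walk the timeline and, at every cut point, pick the feasible candidate
    segment with the highest score, rewarding a change of camera."""
    timeline: List[Segment] = []
    t = min(s.start for s in candidates)
    horizon = max(s.end for s in candidates)
    while t < horizon:
        # Constraint: the chosen segment must cover t and allow a minimum shot.
        feasible = [s for s in candidates
                    if s.start <= t and s.end - t >= min_shot]
        if not feasible:
            break
        prev_cam = timeline[-1].camera if timeline else None
        def score(s: Segment) -> float:
            return s.quality + (diversity_bonus if s.camera != prev_cam else 0.0)
        best = max(feasible, key=score)
        shot_end = min(best.end, t + max_shot)   # constraint: maximum shot length
        timeline.append(Segment(best.camera, t, shot_end, best.quality))
        t = shot_end
    return timeline
```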

    Circulant temporal encoding for video retrieval and temporal alignment

    We address the problem of specific video event retrieval. Given a query video of a specific event, e.g., a concert of Madonna, the goal is to retrieve other videos of the same event that temporally overlap with the query. Our approach encodes the frame descriptors of a video to jointly represent their appearance and temporal order. It exploits the properties of circulant matrices to compare the videos efficiently in the frequency domain. This offers a significant gain in complexity and accurately localizes the matching parts of the videos. The descriptors can be compressed in the frequency domain with a product quantizer adapted to complex numbers. In this case, video retrieval is performed without decompressing the descriptors. We also consider the temporal alignment of a set of videos. We exploit the matching confidence and an estimate of the temporal offset computed for all pairs of videos by our retrieval approach. Our robust algorithm aligns the videos on a global timeline by maximizing the number of temporally consistent matches. The global temporal alignment enables synchronous playback of the videos of a given scene.
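    The core frequency-domain idea can be sketched in a few lines: when the similarity between two frame-descriptor sequences at a temporal shift is a sum of frame-to-frame dot products, it is a cross-correlation and can be evaluated for all shifts at once with FFTs, which is what the circulant-matrix formulation exploits. The sketch below is a bare-bones illustration only; it omits the paper's actual descriptor encoding, regularization, and product quantization.

```python
import numpy as np

def temporal_match(query, video):
    """Frequency-domain comparison of two frame-descriptor sequences.

    query, video: arrays of shape (n_frames, d), one descriptor per frame,
    zero-padded to a common length. The score for offset delta is
    sum_t <query[t], video[t + delta]>, i.e. a cross-correlation, so all
    offsets can be scored at once with FFTs over the temporal axis.
    Returns (best_offset, score_per_offset).
    """
    n = max(len(query), len(video))
    Q = np.fft.rfft(query, n=n, axis=0)
    V = np.fft.rfft(video, n=n, axis=0)
    # Per-dimension cross-correlation, summed over descriptor dimensions.
    scores = np.fft.irfft(np.conj(Q) * V, n=n, axis=0).sum(axis=1)
    return int(np.argmax(scores)), scores

# Hypothetical toy example: the "video" contains the query shifted by 5 frames.
rng = np.random.default_rng(0)
q = rng.standard_normal((50, 32))
v = np.zeros((120, 32))
v[5:55] = q
offset, _ = temporal_match(q, v)
print(offset)  # -> 5
```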

    Music Information Retrieval Meets Music Education

    This paper addresses the use of Music Information Retrieval (MIR) techniques in music education and their integration in learning software. A general overview of systems that are either commercially available or in the research stage is presented. Furthermore, three well-known MIR methods used in music learning systems, and the state of the art for each, are described: music transcription, solo and accompaniment track creation, and generation of performance instructions. As a representative example of a music learning system developed within the MIR community, the Songs2See software is outlined. Finally, challenges and directions for future research are described.
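    As a loose illustration of the first of these methods, a rough monophonic transcription can be pieced together from off-the-shelf pitch tracking and onset detection. The sketch below assumes the librosa library is available and is in no way the approach used by Songs2See or the systems surveyed in the paper.

```python
import numpy as np
import librosa  # assumed available; any pitch tracker and onset detector would do

def rough_transcription(path):
    """Very rough monophonic transcription sketch: pYIN pitch tracking plus
    onset detection, with each inter-onset region snapped to the nearest
    MIDI note. Real transcription systems are far more involved."""
    y, sr = librosa.load(path, sr=None, mono=True)
    f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz('C2'),
                                 fmax=librosa.note_to_hz('C7'), sr=sr)
    times = librosa.times_like(f0, sr=sr)
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units='time')

    notes = []
    for start, end in zip(onsets, list(onsets[1:]) + [times[-1]]):
        mask = (times >= start) & (times < end) & voiced
        if mask.any():
            midi = int(np.round(np.nanmedian(librosa.hz_to_midi(f0[mask]))))
            notes.append((float(start), float(end), midi))
    return notes  # list of (onset_time, offset_time, midi_pitch)
```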

    Automatic Mobile Video Remixing and Collaborative Watching Systems

    In this thesis, the implications of combining collaboration with automation for remix creation are analyzed. We first present a sensor-enhanced Automatic Video Remixing System (AVRS), which intelligently processes mobile videos in combination with mobile device sensor information. The sensor-enhanced AVRS involves certain architectural choices, which meet the key system requirements (leveraging user-generated content, using sensor information, reducing end-user burden) and the user experience requirements. Architecture adaptations are required to improve certain key performance parameters, and certain operating parameters need to be constrained for real-world deployment feasibility. Subsequently, sensor-less cloud-based AVRS and low-footprint sensor-less AVRS approaches are presented. The three approaches exemplify the importance of operating parameter trade-offs for system design, and they cover a wide spectrum, ranging from a multimodal multi-user client-server system (sensor-enhanced AVRS) to a mobile application that can automatically generate a multi-camera remix experience from a single video.

    Next, we present the findings from four user studies, involving 77 users, related to automatic mobile video remixing. The goal was to validate selected system design goals, provide insights for additional features, and identify the challenges and bottlenecks. Topics studied include the role of automation, the value of a video remix as event memorabilia, the requirements for different types of events, and the perceived user value of creating a multi-camera remix from a single video. System design implications derived from the user studies are presented.

    Subsequently, sport summarization, which is a specific form of remix creation, is analyzed. In particular, the role of the content capture method is analyzed with two complementary approaches: the first performs saliency detection in casually captured mobile videos, while the second creates multi-camera summaries from role-based captured content. Furthermore, a method for interactive customization of summaries is presented.

    Next, the discussion is extended to include the role of users’ situational context and the consumed content in facilitating a collaborative watching experience. Mobile-based collaborative watching architectures are described, which facilitate a common shared context between the participants. The concept of movable multimedia is introduced to highlight the multi-device environment of current-day users. The thesis presents results derived from end-to-end system prototypes tested in real-world conditions and corroborated with extensive user impact evaluation.
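    The saliency-based summarization mentioned above can be caricatured with a toy selection routine: assuming a per-second saliency score has already been computed (the thesis derives such cues from content and sensor analysis, which is not reproduced here), a summary can be assembled by greedily picking the highest-scoring non-overlapping segments. The sketch is illustrative only.

```python
import numpy as np

def select_summary_segments(saliency, segment_len=10, k=3):
    """Toy summarization sketch: given a per-second saliency score for one
    recording (however it was computed), greedily pick the k highest-scoring
    non-overlapping segments of fixed length.

    saliency: 1-D array, one score per second of video.
    Returns a list of (start_second, end_second) pairs, sorted by start time.
    """
    saliency = np.asarray(saliency, dtype=float)
    # Score of each candidate window = sum of saliency values inside it.
    window_scores = np.convolve(saliency, np.ones(segment_len), mode='valid')
    taken = np.zeros(len(saliency), dtype=bool)
    picks = []
    for start in np.argsort(window_scores)[::-1]:   # best windows first
        if len(picks) == k:
            break
        if not taken[start:start + segment_len].any():
            taken[start:start + segment_len] = True
            picks.append((int(start), int(start + segment_len)))
    return sorted(picks)
```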

    Semi-Automation in Video Editing

    How can we use artificial intelligence (AI) and machine learning (ML) to make video editing as easy as "editing text"? In this thesis, the problem of using AI in video editing is explored from the human-AI interaction perspective, with the emphasis on using AI to support users. Video is a dual-track medium with audio and visual tracks. Editing videos requires synchronization of these two tracks and precise operations down to the millisecond level. Making video editing as easy as editing text might not be currently possible. How, then, should we support users with AI, and what are the current challenges in doing so? Five key questions drove the research in this thesis. What is the state of the art in using AI to support video editing? What are the needs and expectations of video professionals regarding AI? What are the impacts on the efficiency and accuracy of subtitles when AI is used to support subtitling? What are the changes in user experience brought on by AI-assisted subtitling? How can multiple AI methods be used to support cropping and panning tasks? In this thesis, we employed a user-experience-focused and task-based approach to address semi-automation in video editing.
    The first paper of this thesis provided a synthesis and critical review of the existing work on AI-based tools for video editing and, through a survey of 14 video professionals, offered some answers on how and for what AI should be used to support users. The second paper presented a prototype of AI-assisted subtitling built on production-grade video editing software, together with the first comparative evaluation of both the performance and the user experience of AI-assisted subtitling, conducted with 24 novice users. The third work described an idiom-based tool for converting widescreen videos made for television to narrower aspect ratios for mobile and social media platforms. It explored a new method for performing cropping and panning using five AI models; an evaluation with five users and a review by a professional video editor were presented.
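    For the cropping-and-panning task, the essence of such a tool can be sketched as follows, under the assumption that some detector or tracker supplies the per-frame horizontal position of the main subject: the virtual camera keeps a crop window of the target aspect ratio centred on a smoothed subject trajectory. None of the function names or parameters below come from the thesis; they are illustrative.

```python
import numpy as np

def plan_crop_windows(subject_x, frame_w, frame_h, target_aspect=9/16, smooth=15):
    """Sketch of automatic crop-and-pan for aspect-ratio conversion (not the
    thesis tool): given the per-frame x-coordinate of the main subject
    (assumed to come from a detector/tracker), compute a smoothed crop
    window of the target aspect ratio for every frame.

    subject_x: 1-D array of subject centre x-coordinates, one per frame.
    Returns an array of (left, top, width, height) crop boxes.
    """
    crop_w = int(round(frame_h * target_aspect))        # e.g. 608 px for 1080p at 9:16
    x = np.asarray(subject_x, dtype=float)
    # Moving-average smoothing so the virtual camera pans without jitter.
    kernel = np.ones(smooth) / smooth
    x_smooth = np.convolve(np.pad(x, smooth // 2, mode='edge'),
                           kernel, mode='valid')[:len(x)]
    left = np.clip(x_smooth - crop_w / 2, 0, frame_w - crop_w).astype(int)
    boxes = np.stack([left,
                      np.zeros_like(left),              # top of crop stays at 0
                      np.full_like(left, crop_w),
                      np.full_like(left, frame_h)], axis=1)
    return boxes
```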

    Exploring Sparse, Unstructured Video Collections of Places

    The abundance of mobile devices and digital cameras with video capture makes it easy to obtain large collections of video clips that contain the same location, environment, or event. However, such an unstructured collection is difficult to comprehend and explore. We propose a system that analyses collections of unstructured but related video data to create a Videoscape: a data structure that enables interactive exploration of video collections by visually navigating, spatially and/or temporally, between different clips. We automatically identify transition opportunities, or portals. From these portals, we construct the Videoscape, a graph whose edges are video clips and whose nodes are portals between clips. Once structured, the videos can be interactively explored by walking the graph or via a geographic map. Given this system, we gauge preference for different video transition styles in a user study and generate heuristics that automatically choose an appropriate transition style. We evaluate our system using three further user studies, which allows us to conclude that Videoscapes provides significant benefits over related methods. Our system leads to previously unseen ways of interactive spatio-temporal exploration of casually captured videos, and we demonstrate this on several video collections.
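    A minimal data-structure sketch of such a Videoscape, with clips as edges and portals as nodes, might look as follows; the class name, method names, portal labels, and the breadth-first routing are hypothetical illustrations rather than the paper's implementation.

```python
from collections import defaultdict, deque

class Videoscape:
    """Minimal sketch of the structure described above (details hypothetical):
    portals are graph nodes, and each video clip is an edge connecting the
    two portals it can transition between."""

    def __init__(self):
        self.adj = defaultdict(list)   # portal -> list of (clip_id, other_portal)

    def add_clip(self, clip_id, portal_a, portal_b):
        self.adj[portal_a].append((clip_id, portal_b))
        self.adj[portal_b].append((clip_id, portal_a))

    def route(self, start_portal, goal_portal):
        """Breadth-first search for a sequence of clips to play in order to
        travel from one portal to another."""
        queue = deque([(start_portal, [])])
        visited = {start_portal}
        while queue:
            portal, clips = queue.popleft()
            if portal == goal_portal:
                return clips
            for clip_id, nxt in self.adj[portal]:
                if nxt not in visited:
                    visited.add(nxt)
                    queue.append((nxt, clips + [clip_id]))
        return None

# Hypothetical usage: three portals linked by two clips.
vs = Videoscape()
vs.add_clip("clip_A", "fountain", "market")
vs.add_clip("clip_B", "market", "cathedral")
print(vs.route("fountain", "cathedral"))  # -> ['clip_A', 'clip_B']
```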