
    Indexing, browsing and searching of digital video

    Video is a communications medium that normally brings together moving pictures with a synchronised audio track into a discrete piece or pieces of information. The size of a “piece” of video can variously be referred to as a frame, a shot, a scene, a clip, a programme or an episode, and these are distinguished by their lengths and by their composition. We shall return to the definition of each of these in section 4 of this chapter.
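
    To make the granularity hierarchy concrete, the sketch below models how these units nest (a minimal illustration with invented type names, not a structure from the chapter): a programme contains scenes, a scene contains shots, and a shot contains frames.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Frame:
    """A single still image in the video sequence."""
    index: int        # position in the frame sequence
    timestamp: float  # seconds from the start of the programme

@dataclass
class Shot:
    """A contiguous run of frames from one camera take."""
    frames: List[Frame] = field(default_factory=list)

@dataclass
class Scene:
    """A group of related shots, e.g. one location or action."""
    shots: List[Shot] = field(default_factory=list)

@dataclass
class Programme:
    """A complete programme or episode: an ordered list of scenes."""
    scenes: List[Scene] = field(default_factory=list)

# A one-frame programme, just to show how the units nest.
clip = Programme(scenes=[Scene(shots=[Shot(frames=[Frame(0, 0.0)])])])
print(len(clip.scenes), len(clip.scenes[0].shots))
```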

    What does not happen: quantifying embodied engagement using NIMI and self-adaptors

    Previous research into the quantification of embodied intellectual and emotional engagement using non-verbal movement parameters has not yielded consistent results across different studies. Our research introduces NIMI (Non-Instrumental Movement Inhibition) as an alternative parameter. We propose that the absence of certain types of possible movements can be a more holistic proxy for cognitive engagement with media (in seated persons) than searching for the presence of other movements. Rather than analysing total movement as an indicator of engagement, our research team distinguishes between instrumental movements (i.e. physical movements serving a direct purpose in the given situation) and non-instrumental movements, and investigates them in the context of the narrative rhythm of the stimulus. We demonstrate that NIMI occurs by showing that viewers’ movement levels entrain (i.e. synchronise) to the repeating narrative rhythm of a timed, computer-presented quiz. Finally, we discuss the role of objective metrics of engagement in future context-aware analysis of human behaviour in audience research, interactive media, and responsive system and interface design.
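
    As a rough illustration of one way such entrainment could be measured (a sketch under assumptions, not the authors' pipeline): given a per-frame movement-energy signal for a seated viewer and the known period of the quiz, one can fold the signal at that period and compare mean movement during the question phase against the rest of the cycle; inhibition shows up as a markedly lower phase mean. All names and parameters below are illustrative.

```python
import numpy as np

def phase_means(movement: np.ndarray, fps: float,
                period_s: float, question_s: float):
    """Fold a movement-energy signal at the quiz period and return
    mean movement inside vs. outside the question phase."""
    period = int(round(period_s * fps))    # samples per quiz cycle
    q_len = int(round(question_s * fps))   # samples in the question phase
    n_cycles = len(movement) // period
    folded = movement[:n_cycles * period].reshape(n_cycles, period)
    in_question = folded[:, :q_len].mean()
    outside = folded[:, q_len:].mean()
    return in_question, outside

# Synthetic check: movement suppressed during the first 5 s of a 15 s cycle.
fps = 25.0
t = np.arange(int(20 * 15 * fps))          # 20 cycles of 15 s
in_phase = (t % int(15 * fps)) < int(5 * fps)
movement = np.where(in_phase, 0.2, 1.0) + 0.05 * np.random.rand(t.size)
print(phase_means(movement, fps, period_s=15.0, question_s=5.0))
```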

    Biogeographic analysis of the Tortugas Ecological Reserve: Examining the refuge effect following reserve establishment

    Almost 120 days at sea aboard three NOAA research vessels and one fishing vessel over the past three years have supported biogeographic characterization of Tortugas Ecological Reserve (TER). This work initiated measurement of post-implementation effects of TER as a refuge for exploited species.

    In Tortugas South, seafloor transect surveys were conducted using divers, towed operated vehicles (TOV), remotely operated vehicles (ROV), various sonar platforms, and the Deepworker manned submersible. ARGOS drifter releases, satellite imagery, ichthyoplankton surveys, sea surface temperature, and diver census were combined to elucidate potential dispersal of fish spawning in this environment. Surveys are being compiled into a GIS to allow resource managers to gauge benthic resource status and distribution. Drifter studies have determined that within the ~30-day larval life stage of fishes spawning at Tortugas South, larvae could reach as far downstream as Tampa Bay on the west Florida coast and Cape Canaveral on the east coast. Together with actual fish surveys and water mass delineation, this work demonstrates that the refuge status of this area endows it with tremendous downstream spillover and larval export potential for Florida reef habitats and promotes the maintenance of their fish communities.

    In Tortugas North, 30 randomly selected, permanent stations were established. Five stations were assigned to each of the following six areas: within Dry Tortugas National Park, north of the prevailing currents (Park North); within Dry Tortugas National Park, south of the prevailing currents (Park South); within the Ecological Reserve, north of the prevailing currents (Reserve North); within the Ecological Reserve, south of the prevailing currents (Reserve South); within areas immediately adjacent to these two strata, north of the prevailing currents (Out North); and within areas immediately adjacent to these two strata, south of the prevailing currents (Out South). Intensive characterization of these sites was conducted using multiple sonar techniques, TOV, ROV, diver-based digital video collection, diver-based fish census, towed fish capture, sediment particle-size analysis, benthic chlorophyll analyses, and stable isotope analyses of primary producers, fish, and shellfish. In order to complement and extend information from studies focused on the coral reef, we have targeted the ecotone between the reef and adjacent, non-reef habitats, as these areas are well known in ecology for indicating changes in trophic relationships at the ecosystem scale. Such trophic changes are hypothesized to occur as top-down control of the system grows with protection of piscivorous fishes. Preliminary isotope data, in conjunction with our prior results from the west Florida shelf, suggest that the shallow-water benthic habitats surrounding the coral reefs of TER will prove to be the source of a significant amount of the primary production ultimately fueling fish production throughout TER and downstream throughout the range of larval fish dispersal. Therefore, the status and influence of the previously neglected, non-reef habitat within the refuge (comprising ~70% of TER) appears to be intimately tied to the health of the coral reef community proper. These data, collected in a biogeographic context, employing an integrated Before-After Control Impact design at multiple spatial scales, leave us poised to document and quantify the post-implementation effects of TER.
    Combined with the work at Tortugas South, this project represents a multi-disciplinary effort spanning sometimes disparate disciplines (fishery oceanography, benthic ecology, food web analysis, remote sensing/geography/landscape ecology, and resource management) and approaches (physical, biological, ecological). We expect the continuation of this effort to yield critical information for the management of TER and the evaluation of protected areas as refuges for exploited species. (PDF contains 32 pages.)
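
    For readers who want the sampling design in executable form, here is a minimal sketch of the stratified random station assignment described above; the candidate site pool is a hypothetical placeholder, since the actual candidate sites are not given in the abstract.

```python
import random

# The six strata from the study design; five permanent stations each.
STRATA = ["Park North", "Park South", "Reserve North",
          "Reserve South", "Out North", "Out South"]

def assign_stations(candidates, per_stratum=5, seed=0):
    """Randomly select permanent stations within each stratum.
    `candidates` maps stratum name -> list of candidate site IDs."""
    rng = random.Random(seed)
    return {s: rng.sample(candidates[s], per_stratum) for s in STRATA}

# Hypothetical candidate pool: ten numbered sites per stratum.
pool = {s: [f"{s}-{i:02d}" for i in range(1, 11)] for s in STRATA}
for stratum, sites in assign_stations(pool).items():
    print(stratum, sites)
```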

    Improved fade and dissolve detection for reliable video segmentation

    We present improved algorithms for automatic fade and dissolve detection in digital video analysis. We devise new two-step algorithms for fade and dissolve detection and introduce a method for eliminating false positives from a list of detected candidate transitions. In our detailed study of these gradual shot transitions, our objective has been to accurately classify the type of transition (fade-in, fade-out, or dissolve) and to precisely locate its boundaries. This distinguishes our work from early work in scene change detection, which focused on identifying the existence of a transition rather than its precise temporal extent. We evaluate our algorithms against two other commonly used methods on a comprehensive data set and demonstrate the improved performance due to our enhancements.
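
    The paper's own two-step algorithms are not reproduced here, but a common baseline for fade detection, shown below as a hedged sketch, looks for runs of frames whose pixel standard deviation decreases monotonically and bottoms out near zero, since a fade-out ends on a near-monochrome frame. The `min_len` and `mono_thresh` parameters are illustrative, not values from the paper.

```python
import numpy as np

def fade_out_candidates(frames, min_len=5, mono_thresh=5.0):
    """Return (start, end) index pairs where frame-wise pixel std
    decreases monotonically and ends near zero (a monochrome frame).
    `frames` is a sequence of greyscale frames as 2-D uint8 arrays."""
    stds = np.array([f.std() for f in frames])
    candidates, run_start = [], 0
    for i in range(1, len(stds)):
        if stds[i] < stds[i - 1]:          # still decreasing: extend run
            continue
        if i - run_start >= min_len and stds[i - 1] < mono_thresh:
            candidates.append((run_start, i - 1))
        run_start = i                      # run broken: restart here
    if len(stds) - run_start >= min_len and stds[-1] < mono_thresh:
        candidates.append((run_start, len(stds) - 1))
    return candidates

# Synthetic clip: 20 random frames, then a 10-frame fade to black.
rng = np.random.default_rng(0)
clip = [rng.integers(0, 256, (48, 64)).astype(np.uint8) for _ in range(20)]
base = clip[-1].astype(float)
clip += [(base * (1 - k / 10)).astype(np.uint8) for k in range(1, 11)]
print(fade_out_candidates(clip))
```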

    Video summarisation: A conceptual framework and survey of the state of the art

    This is the post-print (final draft post-refereeing) version of the article. Copyright © 2007 Elsevier Inc.

    Video summaries provide condensed and succinct representations of the content of a video stream through a combination of still images, video segments, graphical representations and textual descriptors. This paper presents a conceptual framework for video summarisation, derived from the research literature and used as a means of surveying that literature. The framework distinguishes between video summarisation techniques (the methods used to process content from a source video stream to achieve a summarisation of that stream) and video summaries (the outputs of video summarisation techniques). Video summarisation techniques are considered within three broad categories: internal (analysing information sourced directly from the video stream), external (analysing information not sourced directly from the video stream) and hybrid (analysing a combination of internal and external information). Video summaries are considered as a function of the type of content they are derived from (object, event, perception or feature based) and the functionality offered to the user for their consumption (interactive or static, personalised or generic). It is argued that video summarisation would benefit from greater incorporation of external information, particularly unobtrusively sourced user-based information, in order to overcome longstanding challenges such as the semantic gap and to provide video summaries that have greater relevance to individual users.
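
    As a concrete (and deliberately simple) example of an internal technique in the framework's sense, the sketch below selects keyframes wherever the grey-level histogram drifts sharply from the last selected keyframe and presents them as a static, generic summary. The threshold is an assumed parameter; nothing here is taken from the surveyed methods.

```python
import numpy as np

def histogram(frame, bins=16):
    """Normalised grey-level histogram of a 2-D uint8 frame."""
    h, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return h / h.sum()

def static_summary(frames, threshold=0.4):
    """Internal-technique sketch: keep a frame as a keyframe when its
    histogram L1-distance from the last keyframe exceeds `threshold`."""
    keyframes = [0]
    last_hist = histogram(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        h = histogram(frame)
        if np.abs(h - last_hist).sum() > threshold:
            keyframes.append(i)
            last_hist = h
    return keyframes

# Synthetic stream: three "shots" with different brightness ranges.
rng = np.random.default_rng(1)
shots = [rng.integers(lo, lo + 60, (10, 32, 32)).astype(np.uint8)
         for lo in (0, 90, 180)]
frames = [f for shot in shots for f in shot]
print(static_summary(frames))   # expect one keyframe per shot: [0, 10, 20]
```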

    Overview of Augmented Reality Technology

    The article reviews augmented reality technology, describing its operating principles, the components required to implement it, and how it compares with virtual reality. Existing applications built on augmented reality are surveyed, and the prospects for the technology's future development are discussed.

    A query description model based on basic semantic unit composite Petri-Net for soccer video

    Digital video networks are making available increasing amounts of sports video data. The volume of material on offer means that sports fans often rely on prepared summaries of game highlights to follow the progress of their favourite teams. A significant application area for automated video analysis technology is the generation of personalised highlights of sports events. One of the most popular sports around the world is soccer. A soccer game is composed of a range of significant events, such as goal scoring, fouls, and substitutions. Automatically detecting these events in a soccer video can enable users to interactively design their own highlights programmes. From an analysis of broadcast soccer video, we propose a query description model based on Basic Semantic Unit Composite Petri-Nets (BSUCPN) to automatically detect significant events within soccer video. Firstly, we define a Basic Semantic Unit (BSU) set for soccer videos based on identifiable feature elements within a soccer video; secondly, we design Composite Petri-Net (CPN) models for semantic queries and use these to describe BSUCPNs for semantic events in soccer videos. A particular strength of this approach is that users are able to design their own semantic event queries based on BSUCPNs to search interactively within soccer videos. Experimental results based on recorded soccer broadcasts are used to illustrate the potential of this approach.
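
    A minimal sketch of the Petri-net idea follows: places hold tokens for detected BSUs, and a transition fires only when all of its input BSUs are present, producing a token that marks the composite event. The BSU names and the "goal" query are invented for illustration; the paper's actual BSU set and CPN structures are not reproduced here.

```python
# Minimal Petri net for composing basic semantic units into an event query.
class PetriNet:
    def __init__(self):
        self.marking = {}       # place -> token count
        self.transitions = {}   # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def put(self, place, n=1):
        self.marking[place] = self.marking.get(place, 0) + n

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            return False
        inputs, outputs = self.transitions[name]
        for p in inputs:        # consume one token from each input place
            self.marking[p] -= 1
        for p in outputs:       # produce one token in each output place
            self.put(p)
        return True

# Toy query: a "goal" event requires three detected BSUs (invented names).
net = PetriNet()
net.add_transition("detect_goal",
                   inputs=["ball_in_goal_mouth", "crowd_cheer", "replay"],
                   outputs=["goal_event"])
for bsu in ["ball_in_goal_mouth", "crowd_cheer", "replay"]:
    net.put(bsu)                # BSUs observed in the video
print(net.fire("detect_goal"), net.marking.get("goal_event", 0))
```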

    Highly efficient low-level feature extraction for video representation and retrieval.

    PhD thesis. Witnessing the omnipresence of digital video media, the research community has raised the question of its meaningful use and management. Stored in immense multimedia databases, digital videos need to be retrieved and structured in an intelligent way, relying on their content and the rich semantics involved. Current content-based video indexing and retrieval systems face the problem of the semantic gap between the simplicity of the available visual features and the richness of user semantics. This work focuses on the issues of efficiency and scalability in video indexing and retrieval, in order to facilitate a video representation model capable of semantic annotation. A highly efficient algorithm for temporal analysis and key-frame extraction is developed. It is based on prediction information extracted directly from compressed-domain features and on robust, scalable analysis in the temporal domain. Furthermore, a hierarchical quantisation of the colour features in the descriptor space is presented. Derived from the extracted set of low-level features, a video representation model that enables semantic annotation and contextual genre classification is designed. Results demonstrate the efficiency and robustness of the temporal analysis algorithm, which runs in real time while maintaining high precision and recall in the detection task. Adaptive key-frame extraction and summarisation achieve a good overview of the visual content, while the colour quantisation algorithm efficiently creates a hierarchical set of descriptors. Finally, the video representation model, supported by the genre classification algorithm, achieves excellent results in an automatic annotation system by linking video clips with a limited lexicon of related keywords.
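
    The thesis's compressed-domain algorithms are not reproduced here, but the idea of a hierarchical colour quantisation can be illustrated with a simple multi-resolution histogram descriptor: quantise the colour space at a few increasingly fine bin counts, so coarse levels support cheap matching and finer levels refine it. Everything below (function name, levels, normalisation) is an assumption for illustration.

```python
import numpy as np

def hierarchical_colour_descriptor(frame, levels=(2, 4, 8)):
    """Sketch of a hierarchical colour descriptor: quantise an RGB frame
    at increasingly fine per-channel bin counts. Coarse levels are cheap
    to compare; finer levels refine the match."""
    descriptor = {}
    pixels = frame.reshape(-1, 3).astype(np.uint16)
    for bins in levels:
        # Map each channel value 0..255 to a bin index 0..bins-1,
        # then combine the three indices into one joint colour bin.
        idx = pixels * bins // 256
        joint = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
        hist = np.bincount(joint, minlength=bins ** 3).astype(float)
        descriptor[bins] = hist / hist.sum()
    return descriptor

rng = np.random.default_rng(2)
frame = rng.integers(0, 256, (36, 64, 3)).astype(np.uint8)
desc = hierarchical_colour_descriptor(frame)
print({bins: h.size for bins, h in desc.items()})  # {2: 8, 4: 64, 8: 512}
```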