
    DocMIR: An automatic document-based indexing system for meeting retrieval

    This paper describes the DocMIR system, which automatically captures, analyzes and indexes meetings, conferences, lectures, etc. by taking advantage of the documents projected during the events (e.g. slideshows, budget tables, figures). For instance, the system can automatically apply these procedures to a lecture and index the event according to the presented slides and their contents. For indexing, the system requires neither specific software installed on the presenter's computer nor any conscious intervention of the speaker throughout the presentation. The only material required by the system is the speaker's electronic presentation file. Even if this is not provided, the system still temporally segments the presentation and offers a simple storyboard-like browsing interface. The system runs on several capture boxes connected to cameras and microphones that record events synchronously. Once the recording is over, indexing is performed automatically by analyzing the content of the captured video containing the projected documents: the system detects scene changes, identifies the documents, computes their duration and extracts their textual content. Each captured image is identified from a repository containing all original electronic documents, captured audio-visual data and metadata created during post-production. The identification is based on document signatures, which hierarchically structure features from both the layout structure and the color distribution of the document images. Video segments are finally enriched with the textual content of the identified original documents, which further facilitates query and retrieval without using OCR. The signature-based indexing method proposed in this article is robust, works with low-resolution images and can be applied to several other applications, including real-time document recognition, multimedia IR and augmented reality systems.
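    As a concrete illustration of the identification step, the following is a minimal sketch in Python (using numpy) of signature-based matching for low-resolution document images. The specific features (a coarse layout grid of ink density plus a per-channel color histogram), the tolerance value and the function names layout_features, color_features, signature and match are illustrative assumptions, not DocMIR's actual features or API.

    import numpy as np

    def layout_features(img, grid=4):
        """Mean 'ink' density per cell of a coarse grid over the grayscale image."""
        gray = img.mean(axis=2)
        h, w = gray.shape
        cells = []
        for i in range(grid):
            for j in range(grid):
                cell = gray[i * h // grid:(i + 1) * h // grid,
                            j * w // grid:(j + 1) * w // grid]
                cells.append(1.0 - cell.mean() / 255.0)  # higher value = more ink
        return np.array(cells)

    def color_features(img, bins=8):
        """Normalized per-channel color histogram."""
        hist = [np.histogram(img[..., c], bins=bins, range=(0, 255))[0] for c in range(3)]
        hist = np.concatenate(hist).astype(float)
        return hist / hist.sum()

    def signature(img):
        """Two-level signature: coarse layout features, then color distribution."""
        return layout_features(img), color_features(img)

    def match(frame_sig, repo_sigs, layout_tol=0.15):
        """Prune candidates by layout distance, then rank survivors by color distance."""
        candidates = [(name, col) for name, (lay, col) in repo_sigs.items()
                      if np.abs(frame_sig[0] - lay).mean() < layout_tol]
        if not candidates:
            return None  # no repository document is close enough
        return min(candidates, key=lambda nc: np.abs(frame_sig[1] - nc[1]).sum())[0]

    In this sketch the layout features act as the coarse level of the hierarchy and the color distribution discriminates among the remaining candidates, mirroring the hierarchical signature structure described above; once a captured frame is matched, its video segment can be tagged with the text of the identified original document.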

    Knowledge Enhanced Notes (KEN)

    To aid the creation and through-life support of large, complex engineering products, organisations are placing greater emphasis on constructing complete and accurate records of design activities. Current documentary approaches are not sufficient to capture activities and decisions in their entirety, and this can lead to organisations revisiting, and in some cases reworking, design decisions in order to understand previous design episodes. This paper presents an overview of the challenges in creating accurate, re-usable records of synchronous design activities that enhance the through-life support of engineering products, followed by the development of an information capture software system to address these challenges. The main objectives for the development of the Knowledge Enhanced Notes system are described, followed by the techniques chosen to address those objectives and, finally, a description of a use-case for the system. Whilst the focus of the KEN system was to aid the creation and through-life support of large, complex engineering products by constructing complete and accurate records of design activities, the system is entirely generic in its application to synchronous activities.

    Space time pixels

    This paper reports the design of a networked system whose aim is to provide an intermediate virtual space that establishes a connection and supports interaction between multiple participants in two distant physical spaces. The project explores the potential of digital space to generate original social relationships between people whose current (spatial or social) position makes it difficult to establish such innovative connections. It further explores whether digital space can sustain low-level connections like these over time by balancing the two contradicting needs of communication and anonymity. The generated intermediate digital space is a dynamic, reactive environment in which time and space information from two physical places is superimposed to create a complex common ground where interaction can take place. The system provides awareness of activity in a distant space through an abstract, mutable virtual environment, which can be perceived in several different ways, varying from a simple dynamic background image, to a common public space at the junction of two private spaces, to a fully opened window onto the other space, according to the participants' will. The thesis is that an intermediary environment that operates as an activity abstraction filter between several users, and selectively communicates information, can give significance to the ambient data that people unconsciously transmit to others when co-existing. It can therefore generate a new layer of connections and original interactivity patterns, in contrast to a straightforward direct real video and sound system, which, although functionally more feasible, preserves the existing social constraints that limit interaction to predefined patterns.
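    To make the activity abstraction filter more concrete, the following is a minimal sketch in Python (using numpy), assuming each space delivers camera frames as equally sized RGB arrays. The openness parameter and the functions abstraction_filter and superimpose are hypothetical names introduced for illustration only; the paper does not specify the system at this level of detail.

    import numpy as np

    def abstraction_filter(frame, openness):
        """Pixelate the remote frame more aggressively as openness decreases (0..1)."""
        openness = float(np.clip(openness, 0.0, 1.0))
        h, w, _ = frame.shape
        # Block size shrinks from roughly a quarter of the image to a single pixel.
        block = max(1, int((1.0 - openness) * min(h, w)) // 4)
        out = frame.copy()
        for y in range(0, h, block):
            for x in range(0, w, block):
                out[y:y + block, x:x + block] = frame[y:y + block, x:x + block].mean(axis=(0, 1))
        return out

    def superimpose(local_frame, remote_frame, openness):
        """Blend the abstracted remote view over the local one to form the shared space."""
        abstracted = abstraction_filter(remote_frame, openness)
        return (0.5 * local_frame + 0.5 * abstracted).astype(np.uint8)

    At openness near zero the remote space appears only as a coarse, ambient pattern of pixels, while openness near one approaches the fully opened window described above; each participant can move along this range according to how much communication or anonymity they want.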