7 research outputs found

    Sketch-Based Spatial Queries for the Retrieval of Human Locomotion Patterns in Smart Environments

    A system for retrieving video sequences created by tracking humans in a smart environment, using spatial queries, is presented. Sketches made with a pointing device on the floor layout of the environment are used to form queries corresponding to locomotion patterns. The sketches are analyzed to identify the type of the query. Directional search algorithms based on the minimum distance between points are applied to find the best matches to the sketch. The results are ranked according to similarity and presented to the user. The system was developed in two stages. An initial version of the system was implemented and evaluated by conducting a user study. Modifications were made where appropriate, according to the results and the feedback, to make the system more accurate and usable. We present the details of the initial system, the user study and its results, and the modifications thus made. The overall accuracy of retrieval for the initial system was approximately 93% when tested on a collection of data from a real-life experiment. This improved to approximately 97% after the modifications. The user interaction strategy and the search algorithms are usable in any environment for automated retrieval of locomotion patterns. The subjects who evaluated the system found it easy to learn and use. Their comments included several prospective applications for the user interaction strategy, providing valuable insight into future directions.
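
    A minimal sketch of how a minimum-distance match between a sketched query path and stored locomotion tracks could be scored and ranked; the point format, the symmetric averaging, and the example data are illustrative assumptions, not the authors' exact algorithm.

```python
import math

def min_point_distance(p, path):
    """Distance from point p to the nearest point on a stored path."""
    return min(math.dist(p, q) for q in path)

def sketch_score(sketch, track):
    """Average nearest-point distance, taken in both directions so that
    neither the sketch nor the track can hide unmatched segments.
    Lower scores mean a better match."""
    forward = sum(min_point_distance(p, track) for p in sketch) / len(sketch)
    backward = sum(min_point_distance(q, sketch) for q in track) / len(track)
    return (forward + backward) / 2

def rank_tracks(sketch, tracks):
    """Rank stored locomotion tracks by similarity to the sketched query."""
    return sorted(tracks, key=lambda t: sketch_score(sketch, t))

# Hypothetical example: a sketch across the floor plan and two stored tracks.
sketch = [(0, 0), (1, 1), (2, 2), (3, 3)]
tracks = [
    [(0.1, 0.0), (1.2, 0.9), (2.1, 2.2), (2.9, 3.1)],  # close to the sketch
    [(0, 3), (1, 2), (2, 1), (3, 0)],                  # opposite diagonal
]
print(rank_tracks(sketch, tracks)[0])  # the first track ranks best
```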

    Evaluation of video summarization for a large number of cameras in ubiquitous home

    A system for video summarization in a ubiquitous environment is presented. Data from pressure-based floor sensors are clustered to segment footsteps of different persons. Video handover has been implemented to retrieve a continuous video showing a person moving in the environment. Several methods for extracting key frames from the resulting video sequences have been implemented, and evaluated by experiments. It was found that most of the key frames the human subjects desire to see could be retrieved using an adaptive algorithm based on camera changes and the number of footsteps within the view of the same camera. The system consists of a graphical user interface that can be used to retrieve video summaries interactively using simple queries
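
    A rough sketch of an adaptive key-frame rule of the kind described: keep a frame whenever the active camera changes, and after a fixed number of footsteps under the same camera. The data layout and the threshold value are assumptions for illustration.

```python
def extract_key_frames(steps, steps_per_key=5):
    """Pick key frames from a footstep-indexed video sequence.

    steps: list of (frame_id, camera_id) pairs, one per detected footstep.
    A key frame is taken whenever the active camera changes, and after
    every `steps_per_key` footsteps seen by the same camera.
    """
    key_frames = []
    prev_camera = None
    count = 0
    for frame_id, camera_id in steps:
        if camera_id != prev_camera:
            key_frames.append(frame_id)   # camera handover: always keep a frame
            prev_camera = camera_id
            count = 0
        else:
            count += 1
            if count >= steps_per_key:    # long dwell under one camera
                key_frames.append(frame_id)
                count = 0
    return key_frames

# Hypothetical footstep stream: (frame, camera)
steps = [(10, "cam1"), (20, "cam1"), (30, "cam2"), (40, "cam2"),
         (50, "cam2"), (60, "cam2"), (70, "cam2"), (80, "cam3")]
print(extract_key_frames(steps, steps_per_key=3))
```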

    Human Factors Evaluation of a Vision-Based Facial Gesture Interface

    We adapted a vision-based face tracking system for cursor control by head movement. An additional vision-based algorithm allowed the user to enter a click by opening the mouth. The Fitts' law information throughput of cursor movements was measured to be 2.0 bits/s with the ISO 9241-9 international standard method for testing input devices. A usability assessment was also conducted, and we report and discuss the results. A practical application of this facial gesture interface was studied: text input using the Dasher system, which allows a user to type by moving the cursor. The measured typing speed was 7-12 words/minute, depending on the user's level of expertise. The performance of the system is compared with that of a conventional mouse interface.
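
    The ISO 9241-9 throughput quoted above is conventionally computed as TP = IDe / MT, with the effective index of difficulty IDe = log2(De / We + 1) and effective target width We = 4.133 times the standard deviation of the endpoint errors. A minimal sketch of that calculation follows; the trial numbers are illustrative, not the study's data.

```python
import math
import statistics

def throughput(distances, movement_times, endpoint_errors):
    """ISO 9241-9 style throughput for one block of pointing trials.

    distances: nominal target distance per trial (pixels).
    movement_times: time per trial (seconds).
    endpoint_errors: signed over/undershoot along the task axis (pixels),
    used to derive the effective target width We = 4.133 * SD of errors.
    Returns throughput in bits per second.
    """
    we = 4.133 * statistics.stdev(endpoint_errors)
    de = statistics.mean(distances)
    ide = math.log2(de / we + 1)          # effective index of difficulty (bits)
    mt = statistics.mean(movement_times)  # mean movement time (seconds)
    return ide / mt

# Hypothetical trial data for one distance condition.
print(throughput(
    distances=[512] * 8,
    movement_times=[1.9, 2.1, 2.0, 2.2, 1.8, 2.0, 2.1, 1.9],
    endpoint_errors=[-6, 4, 10, -3, 8, -12, 5, 2],
))
```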

    Experience Retrieval in a Ubiquitous Home

    We present a system for retrieval and summarization of continuously archived multimedia data from a home-like ubiquitous environment. Data from pressure-based floor sensors are analyzed to index video and audio from a large number of sources. Video and audio handover are implemented to retrieve continuous video streams with sound as a person moves through the environment. Key frame extraction is proposed, and several algorithms are implemented to obtain compact summaries corresponding to the activity of each person. Clustering algorithms and image analysis are used to identify actions and events. The system includes a graphical user interface that can be used to retrieve video summaries interactively using simple queries.
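
    A simplified sketch of the handover idea: as the floor sensors report a person's position over time, retrieval picks, for each moment, the camera whose field of view covers that position and switches only when the best view changes. The circular camera coverage model and the data layout are illustrative assumptions, not the system's actual geometry.

```python
import math

# Hypothetical camera model: each camera covers a circular region of the floor.
CAMERAS = {
    "kitchen_cam":  {"center": (1.0, 1.0), "radius": 2.5},
    "corridor_cam": {"center": (4.0, 1.0), "radius": 2.0},
    "living_cam":   {"center": (7.0, 2.0), "radius": 3.0},
}

def best_camera(position):
    """Pick the camera whose coverage contains the position, preferring
    the one whose center is closest (i.e. the best view); None if uncovered."""
    candidates = [
        (math.dist(position, cam["center"]), name)
        for name, cam in CAMERAS.items()
        if math.dist(position, cam["center"]) <= cam["radius"]
    ]
    return min(candidates)[1] if candidates else None

def handover_segments(track):
    """Turn a timestamped floor-sensor track into (start, end, camera) segments,
    switching cameras only when the best view changes."""
    segments = []
    for t, pos in track:
        cam = best_camera(pos)
        if segments and segments[-1][2] == cam:
            segments[-1] = (segments[-1][0], t, cam)   # extend current segment
        else:
            segments.append((t, t, cam))
    return segments

# Hypothetical track: (time in seconds, (x, y) position on the floor).
track = [(0.0, (1.0, 1.2)), (1.0, (2.5, 1.1)), (2.0, (4.2, 1.0)), (3.0, (6.5, 1.8))]
print(handover_segments(track))
```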

    Video Handover for Retrieval in a Ubiquitous Environment Using Floor Sensor Data

    A system for retrieving video captured in a ubiquitous environment is presented. Data from pressure-based floor sensors are obtained as a supplementary input together with video from multiple stationary cameras. Unsupervised data mining techniques are used to reduce the noise present in the floor sensor data. An algorithm based on agglomerative hierarchical clustering is used to segment the footpaths of individual persons. Video handover is proposed, and two methods are implemented to retrieve video and key frame sequences showing a person moving in the house. Users can query the system based on time and retrieve video or key frames using either of the handover techniques. We compare the retrieval results of the different techniques subjectively. We conclude with suggestions for improvements and future directions.
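
    A minimal sketch of segmenting footsteps into per-person footpaths with agglomerative hierarchical clustering, using SciPy's standard routines. The (x, y, t) feature layout, single-linkage merging, and distance threshold are assumptions for illustration rather than the paper's exact configuration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def segment_footpaths(footsteps, max_gap=1.5):
    """Group footsteps into per-person footpaths by agglomerative clustering.

    footsteps: iterable of (x, y, t) rows from the pressure-sensor floor,
    assumed pre-scaled so spatial and temporal gaps are comparable.
    Footsteps closer than `max_gap` in this space end up in the same
    cluster, i.e. the same person's path.
    """
    data = np.asarray(footsteps, dtype=float)
    linkage_matrix = linkage(data, method="single")   # nearest-neighbour merging
    labels = fcluster(linkage_matrix, t=max_gap, criterion="distance")
    return labels

# Hypothetical footsteps from two people walking at the same time.
footsteps = [
    (0.0, 0.0, 0.0), (0.4, 0.1, 0.5), (0.8, 0.2, 1.0),   # person A
    (5.0, 3.0, 0.2), (5.3, 3.4, 0.7), (5.6, 3.9, 1.2),   # person B
]
print(segment_footpaths(footsteps))   # e.g. [1 1 1 2 2 2]
```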

    MUST 2010: Welcome message from workshop organizers: FutureTech 2010

    On behalf of the 2010 International Workshop on Multimedia and Semantic Technologies (MUST 2010), we are pleased to welcome you to Busan, Korea. The workshop will foster state-of-the-art research in multimedia semantic computing, including the acquisition, generation, storage, processing, and retrieval of large-scale multimedia information, as well as semantic web technologies. MUST 2010 will also provide an opportunity for academic and industry professionals to discuss the latest issues and progress in the area of multimedia and the semantic web.