
    Personal Photo Indexing

    Sorting one’s own private photo collection is a time-consuming and tedious task. We demonstrate our event-centered approach to performing this task fully automatically. In the course of the demonstration, we either use our own photo collections or invite conference visitors to bring their own cameras and photos. We sort the photos into a semantically meaningful hierarchy for the users within a couple of minutes. Events as a media aggregator allow a user to manage and annotate a photo collection in a way that is more convenient and natural for a human being. Based on the recognized user behavior, the application is able to reveal the nature of an event and build its hierarchy with an event/sub-event relationship. One important prerequisite of our approach is precise GPS-based spatial annotation of the photos. To accommodate devices without GPS chips, or periods of poor GPS reception, we propose an approach that enriches the collection with automatically estimated GPS data by semantically interpolating possible routes of the user. We are confident that we can provide a well-received service for the conference visitors, especially since the conference venue will trigger a lot of memorable photos. Large-scale experimental validation showed that the approach is able to recreate a user’s desired hierarchy with an F-measure of about 0.8.
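
    The GPS-enrichment step can be illustrated with a minimal sketch. Assuming each photo carries a capture timestamp and, where available, a GPS fix, missing coordinates can be filled in by time-weighted interpolation between the nearest GPS-tagged photos; the abstract describes semantic interpolation over possible routes, so the simple linear version below is only illustrative and all names in it are hypothetical.

        # Minimal sketch (assumption: plain time-weighted interpolation stands in
        # for the semantic route interpolation described in the abstract).
        from dataclasses import dataclass
        from typing import List, Optional, Tuple

        @dataclass
        class Photo:
            timestamp: float                                  # seconds since epoch
            gps: Optional[Tuple[float, float]] = None         # (lat, lon), None if no fix

        def fill_missing_gps(photos: List[Photo]) -> None:
            photos.sort(key=lambda p: p.timestamp)
            tagged = [i for i, p in enumerate(photos) if p.gps is not None]
            for i, p in enumerate(photos):
                if p.gps is not None:
                    continue
                before = max((j for j in tagged if j < i), default=None)
                after = min((j for j in tagged if j > i), default=None)
                if before is None or after is None:
                    continue                                  # cannot bracket this photo; leave untagged
                a, b = photos[before], photos[after]
                w = (p.timestamp - a.timestamp) / (b.timestamp - a.timestamp)
                p.gps = (a.gps[0] + w * (b.gps[0] - a.gps[0]),
                         a.gps[1] + w * (b.gps[1] - a.gps[1]))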

    Automatic text searching for personal photos

    This demonstration presents the MediAssist prototype system for organisation of personal digital photo collections based on contextual information, such as time and location of image capture, and content-based analysis, such as face detection and recognition. This metadata is used directly for identification of photos which match specified attributes, and also to create text surrogates for photos, allowing for text-based queries of photo collections without relying on manual annotation. MediAssist illustrates our research into digital photo management, showing how a combination of automatically extracted context and content-based information, together with user annotation and traditional text indexing techniques, facilitates efficient searching of personal photo collections.
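
    As a rough illustration of the text-surrogate idea, the sketch below turns contextual metadata (capture time and a resolved place name) and content-analysis output (recognised people) into plain words that a standard text index can match. The field names and term choices are assumptions for illustration, not MediAssist's actual schema.

        from datetime import datetime

        def text_surrogate(capture_time: datetime, place_name: str, people: list) -> str:
            # Contextual terms: day of week, month, year, coarse time of day.
            terms = [capture_time.strftime("%A %B %Y"),
                     "morning" if capture_time.hour < 12 else "afternoon",
                     place_name]                  # e.g. "Dublin Ireland"
            terms.extend(people)                  # names from face recognition
            return " ".join(terms).lower()

        # text_surrogate(datetime(2006, 7, 15, 10, 30), "Dublin Ireland", ["Alice"])
        #   -> "saturday july 2006 morning dublin ireland alice"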

    Using text search for personal photo collections with the MediAssist system

    The MediAssist system enables organisation and searching of personal digital photo collections based on contextual information, content-based analysis and semi-automatic annotation. One mode of user interaction uses automatically extracted features to create text surrogates for photos, which enables text search of photo collections without manual annotation. Our evaluation shows that this text search facility is effective for known-item search.
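
    A minimal sketch of known-item search over such surrogates: score each photo by query-term overlap with its surrogate and return the top matches. A deployed system would use a proper text retrieval engine; the function and data layout here are assumptions for illustration.

        def search(surrogates: dict, query: str, k: int = 5) -> list:
            """surrogates maps photo id -> surrogate text (see the sketch above)."""
            q_terms = set(query.lower().split())
            scored = [(len(q_terms & set(text.split())), pid)
                      for pid, text in surrogates.items()]
            scored = [s for s in scored if s[0] > 0]
            return [pid for _, pid in sorted(scored, reverse=True)[:k]]

        # search({"img_001.jpg": "saturday july 2006 morning dublin ireland alice"},
        #        "alice dublin morning")  ->  ["img_001.jpg"]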

    Adaptive Information Cluster at Dublin City University

    The Adaptive Information Cluster (AIC) is a collaboration between Dublin City University and University College Dublin. Within the AIC at DCU, one stream of our research activities investigates and develops content analysis tools that can automatically index and structure video information, including movies and CCTV footage, with the motivation of supporting useful searching and browsing features for the envisaged end-users of such systems. We bring an HCI perspective to this highly technically oriented research by brainstorming, generating scenarios, and sketching and prototyping the user interfaces to the resulting video retrieval systems we develop, and we conduct usability studies to better understand the usage and opinions of such systems so as to guide the future direction of our technological research.

    Mining user activity as a context source for search and retrieval

    In information retrieval it is now generally accepted that a better understanding of a user's context can help the search process, either at indexing time by including more metadata or at retrieval time by better modelling the user context. In this work we explore how activity recognition from tri-axial accelerometers can be employed to model a user's activity as a means of enabling context-aware information retrieval. We discuss how user activity can be gathered automatically as a context source from a wearable mobile device, and we evaluate the accuracy of our proposed user activity recognition algorithm. Our technique can recognise four kinds of activity, which can be used to model part of an individual's current context. We discuss promising experimental results, possible approaches to improve our algorithms, and the impact of this work on modelling user context toward enhanced search and retrieval.
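
    Accelerometer-based activity recognition of this kind can be sketched as windowed feature extraction followed by a very simple classifier. The labels and thresholds below are illustrative assumptions; the paper's own algorithm and four-activity set may differ.

        import math
        from statistics import pstdev

        def classify_window(samples) -> str:
            """samples: list of (x, y, z) readings from one fixed-length time window."""
            magnitudes = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
            intensity = pstdev(magnitudes)        # variation of magnitude ~ movement intensity
            if intensity < 0.05:
                return "stationary"
            if intensity < 0.3:
                return "light movement"
            if intensity < 1.0:
                return "walking"
            return "running"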

    Mobile access to personal digital photograph archives

    Handheld computing devices are becoming highly connected devices with high-capacity storage. This has made them able to support storage of, and access to, personal photo archives. However, the only means for mobile device users to browse such archives is typically a simple one-by-one scroll through image thumbnails in the order in which they were taken, or manual organisation into folders. In this paper we describe a system for context-based browsing of personal digital photo archives. Photos are labeled with the GPS location and time at which they were taken, and this is used to derive other context-based metadata such as weather conditions and daylight conditions. We present our prototype system for mobile digital photo retrieval, and an experimental evaluation illustrating the utility of location information for effective personal photo retrieval.
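
    Deriving extra context from the raw capture metadata can be sketched as follows: a crude hour-of-day rule stands in for a real sunrise/sunset calculation from the photo's latitude, longitude and date, and the weather lookup is left as a stub. Everything here is a simplified assumption rather than the system's actual implementation.

        from datetime import datetime

        def daylight_condition(local_time: datetime) -> str:
            # Placeholder rule; a real system would compute sunrise/sunset from
            # the photo's GPS coordinates and date.
            if 7 <= local_time.hour < 19:
                return "daylight"
            if local_time.hour in (6, 19):
                return "twilight"
            return "darkness"

        def enrich(photo: dict) -> dict:
            photo["daylight"] = daylight_condition(photo["local_time"])
            # photo["weather"] would come from a historical weather source keyed
            # on (lat, lon, date); omitted here.
            return photo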

    A Study of User's Performance and Satisfaction on the Web Based Photo Annotation with Speech Interaction

    This paper reports on an empirical evaluation study of users' performance and satisfaction with a prototype of web-based photo annotation with speech interaction. The participants were Johor Bahru citizens from various backgrounds. They completed two parts of an annotation task: part A involving PhotoASys, a photo annotation system with our proposed speech interaction, and part B involving the Microsoft Vista speech interaction style. They completed eight tasks for each part, including system login and selection of albums and photos. Users' performance was recorded using computer screen recording software, and data were captured on task completion time and subjective satisfaction. Participants completed a questionnaire on subjective satisfaction once the tasks were finished. The performance data compare the proposed speech interaction with the Microsoft Vista speech interaction as applied in the photo annotation system, PhotoASys. On average, annotation time was reduced by 64.72% when using the proposed speech interaction style rather than the Microsoft Vista style. Data analysis showed statistically significant differences in annotation performance and subjective satisfaction between the two interaction styles. These results could be used for the next design of related software involving personal belongings management. Comment: IEEE Publication Format, https://sites.google.com/site/journalofcomputing