
    Integrating memory context into personal information re-finding

    Personal information archives are emerging as a new challenge for information retrieval (IR) techniques. The user’s memory plays a greater role in retrieval from personal archives than from other, more traditional types of information collection (e.g. the Web), due to the large overlap between the content of such archives and the individual’s own memory of the captured material. This paper presents a new analysis of IR for personal archives from a cognitive perspective. Some existing work on personal information management (PIM) has begun to incorporate features of human memory into IR systems. In our work we seek to go further: we assume that, for IR in a PIM system, terms can be weighted not only by traditional IR methods but also by taking the user’s recall reliability into account. We aim to develop algorithms that combine factors from both the system side and the user side to achieve more effective searching. In this paper, we discuss possible applications of human memory theories to this algorithm, and present results from a pilot study together with a proposed model of the data structure for the HDMs.
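
    As an illustration of the idea sketched in this abstract, the following Python fragment shows one way a system-side term weight could be blended with a user-side recall-reliability factor. The TF-IDF formulation, the recall_reliability scores and the mixing parameter alpha are assumptions introduced here for the sketch; they are not taken from the paper.

        # Hypothetical sketch: blend a system-side term weight (TF-IDF) with a
        # user-side recall-reliability factor in [0, 1]. All names and the
        # mixing scheme are illustrative assumptions, not the paper's method.
        import math
        from collections import Counter

        def tf_idf(term, doc_terms, doc_freq, n_docs):
            tf = Counter(doc_terms)[term]
            idf = math.log((1 + n_docs) / (1 + doc_freq.get(term, 0))) + 1
            return tf * idf

        def memory_weighted_score(query_terms, doc_terms, doc_freq, n_docs,
                                  recall_reliability, alpha=0.5):
            score = 0.0
            for term in query_terms:
                system_w = tf_idf(term, doc_terms, doc_freq, n_docs)
                user_w = recall_reliability.get(term, 0.5)  # 0.5 = no information
                # down-weight terms the user is unlikely to recall correctly
                score += system_w * ((1 - alpha) + alpha * user_w)
            return score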

    Multiple multimodal mobile devices: Lessons learned from engineering lifelog solutions

    For lifelogging, or the recording of one’s life history through digital means, to be successful, a range of separate multimodal mobile devices must be employed. These include smartphones such as the N95, the Microsoft SenseCam (a wearable passive photo-capture device), and wearable biometric devices. Each collects a facet of the bigger picture, through, for example, personal digital photos, mobile messages and document access history, but unfortunately they operate independently and are unaware of each other. This creates significant challenges for the practical application of these devices, for the use and integration of their data, and for their operation by a user. In this chapter we discuss the software engineering challenges, and their implications, for those working on the integration of data from multiple ubiquitous mobile devices, drawing on our experience of working with such technology over the past several years to develop integrated personal lifelogs. The chapter serves as an engineering guide for those considering working in the domain of lifelogging and, more generally, for those working with multiple multimodal devices and the integration of their data.

    Unsupervised Understanding of Location and Illumination Changes in Egocentric Videos

    Wearable cameras stand out as one of the most promising devices for the coming years and, as a consequence, the demand for computer algorithms that automatically understand the videos recorded with them is increasing quickly. Automatic understanding of these videos is not an easy task, and their mobile nature implies important challenges to be faced, such as changing light conditions and the unrestricted locations recorded. This paper proposes an unsupervised strategy based on global features and manifold learning to endow wearable cameras with contextual information regarding the light conditions and the location captured. Results show that non-linear manifold methods can capture contextual patterns from global features without requiring large computational resources. As an application case, the proposed strategy is used as a switching mechanism to improve hand detection in egocentric videos.
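
    A minimal sketch of a pipeline in the spirit of this abstract follows: cheap global features per frame, a non-linear manifold embedding, and clustering into illumination/location contexts that could drive a switching mechanism for hand detection. The concrete choices (HSV histograms, Isomap, k-means, four contexts) are assumptions for illustration, not the method used in the paper.

        # Illustrative only: global features + non-linear manifold learning to
        # group egocentric frames by illumination/location context.
        import cv2
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.manifold import Isomap

        def global_feature(frame_bgr):
            # Cheap global descriptor: a flattened, normalized HSV colour histogram.
            hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
            hist = cv2.calcHist([hsv], [0, 1], None, [16, 16], [0, 180, 0, 256])
            return cv2.normalize(hist, hist).flatten()

        def context_labels(frames, n_contexts=4):
            feats = np.array([global_feature(f) for f in frames])
            embedded = Isomap(n_components=2).fit_transform(feats)  # non-linear manifold
            return KMeans(n_clusters=n_contexts, n_init=10).fit_predict(embedded)

        # The labels could then select, per frame, a hand-detection model tuned
        # to that illumination/location context.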

    Information access tasks and evaluation for personal lifelogs

    Emerging personal lifelog (PL) collections contain permanent digital records of information associated with individuals’ daily lives. This can include materials such as emails received and sent, web content and other documents with which they have interacted, photographs, videos and music experienced passively or created, logs of phone calls and text messages, and also personal and contextual data such as location (e.g. via GPS sensors), persons and objects present (e.g. via Bluetooth) and physiological state (e.g. via biometric sensors). PLs can be collected by individuals over very extended periods, potentially running to many years. Such archives have many potential applications, including helping individuals recover partially forgotten information, sharing experiences with friends or family, telling the story of one’s life, clinical applications for the memory impaired, and fundamental psychological investigations of memory. The Centre for Digital Video Processing (CDVP) at Dublin City University is currently engaged in the collection and exploration of applications of large PLs. We are collecting rich archives of daily life including textual and visual materials, together with contextual data. An important part of this work is to consider how the effectiveness of our ideas can be measured in terms of metrics and experimental design. While these studies have considerable similarity with traditional evaluation activities in areas such as information retrieval and summarization, the characteristics of PLs mean that new challenges and questions emerge. We are currently exploring these issues through a series of pilot studies and questionnaires. Our initial results indicate that there are many research questions to be explored and that the relationships between personal memory, context and content for these tasks are complex and fascinating.
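
    To make the heterogeneity of such collections concrete, the following is a hypothetical, minimal data model for a single PL item, covering the content and context types listed above. Field names and structure are illustrative assumptions, not the CDVP’s actual schema.

        # Hypothetical data model for one personal lifelog (PL) item.
        from dataclasses import dataclass, field
        from datetime import datetime
        from typing import Dict, List, Optional, Tuple

        @dataclass
        class LifelogItem:
            timestamp: datetime
            kind: str                                        # e.g. "email", "web_page", "photo", "sms"
            content_ref: str                                 # path or URI of the captured material
            location: Optional[Tuple[float, float]] = None   # (lat, lon) from GPS
            nearby: List[str] = field(default_factory=list)  # Bluetooth-detected people/objects
            biometrics: Dict[str, float] = field(default_factory=dict)  # e.g. heart rate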

    The Evolution of First Person Vision Methods: A Survey

    The emergence of new wearable technologies such as action cameras and smart glasses has increased the interest of computer vision scientists in the First Person perspective. Nowadays, this field is attracting the attention and investment of companies aiming to develop commercial devices with First Person Vision recording capabilities. Due to this interest, an increasing demand for methods to process these videos, possibly in real time, is expected. Current approaches present particular combinations of different image features and quantitative methods to accomplish specific objectives such as object detection, activity recognition, and user-machine interaction. This paper summarizes the evolution of the state of the art in First Person Vision video analysis between 1997 and 2014, highlighting, among other things, the most commonly used features, methods, challenges and opportunities within the field.

    Quick and dirty : streamlined 3D scanning in archaeology

    Capturing data is a key part of archaeological practice, whether for preserving records or to aid interpretation. But the technologies used are complex and expensive, resulting in time-consuming processes associated with their use. These processes force a separation between ongoing interpretive work and capture. Through two field studies we elicit more detail as to what is important about this interpretive work and what might be gained through a closer integration of capture technology with these practices. Drawing on these insights, we go on to present a novel, portable, wireless 3D modeling system that emphasizes "quick and dirty" capture. We discuss its design rationale in relation to our field observations and evaluate this rationale further by giving the system to archaeological experts to explore in a variety of settings. While our device compromises on the resolution of traditional 3D scanners, its support for interpretation through its emphasis on real-time capture, review and manipulability suggests it could be a valuable tool for the future of archaeology.

    Creating digital life stories through activity recognition with image filtering

    This paper presents two algorithms that enable the MemoryLane system to support persons with mild dementia through the creation of digital life stories. The MemoryLane system consists of a Logging Kit that captures context and image data, and a Review Client that recognizes activities and enables review of the captured data. The image filtering algorithm, a central component of the Logging Kit, is based on image characteristics such as brightness, blurriness and similarity. The activity recognition algorithm is based on the captured contextual data together with concepts of persons and places. The initial results indicate that the MemoryLane system is technically feasible and that activity-based creation of digital life stories for persons with mild dementia is possible.
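
    A rough sketch of the kind of image filtering described above follows: it checks brightness, blurriness and similarity to the previously kept image. The specific measures (mean intensity, variance of the Laplacian, histogram correlation) and the thresholds are assumptions for illustration, not the MemoryLane algorithm itself.

        # Illustrative image filter: keep an image only if it is bright enough,
        # sharp enough, and not too similar to the last image that was kept.
        import cv2

        def keep_image(img_bgr, prev_kept_bgr=None,
                       min_brightness=40, min_sharpness=100.0, max_similarity=0.95):
            gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
            if gray.mean() < min_brightness:                           # too dark
                return False
            if cv2.Laplacian(gray, cv2.CV_64F).var() < min_sharpness:  # too blurry
                return False
            if prev_kept_bgr is not None:                              # too similar
                prev_gray = cv2.cvtColor(prev_kept_bgr, cv2.COLOR_BGR2GRAY)
                h1 = cv2.calcHist([gray], [0], None, [64], [0, 256])
                h2 = cv2.calcHist([prev_gray], [0], None, [64], [0, 256])
                if cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL) > max_similarity:
                    return False
            return True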