
    IAPMA 2011: 2nd Workshop on information access to personal media archives

    Towards e-Memories: the challenges of capturing, summarising, presenting, understanding, using, and retrieving relevant information from heterogeneous data contained in personal media archives. Welcome to IAPMA 2011, the second international workshop on "Information Access for Personal Media Archives". It is now possible to archive much of our life experience in digital form using a variety of sources, e.g. blogs written, tweets made, social network status updates, photographs taken, videos seen, music heard, physiological monitoring, locations visited and environmentally sensed data of those places, details of people met, etc. Information can be captured from a myriad of personal information devices, including desktop computers, PDAs, digital cameras, video and audio recorders, and various sensors, including GPS, Bluetooth, and biometric devices.

    Providing effective memory retrieval cues through automatic structuring and augmentation of a lifelog of images

    Lifelogging is an area of research concerned with digitally capturing many aspects of an individual's life, and within this rapidly emerging field lies the significant challenge of managing images passively captured by an individual over their daily life. Possible applications vary from helping those with neurodegenerative conditions recall events from memory, to the maintenance and augmentation of extensive image collections of a tourist's trips. However, a large lifelog of images can quickly accumulate: a device such as the SenseCam captures an average of 700,000 images each year. We address the problem of managing this vast collection of personal images by investigating automatic techniques that:
    1. Identify distinct events within a full day of lifelog images (which typically consists of 2,000 images), e.g. breakfast, working on a PC, a meeting, etc.
    2. Find events similar to a given event in a person's lifelog, e.g. "show me other events where I was in the park".
    3. Determine those events that are more important or unusual to the user, and select a relevant keyframe image for visual display of an event, e.g. a "meeting" is more interesting to review than "working on a PC".
    4. Augment the images from a wearable camera with higher-quality images from external "Web 2.0" sources, e.g. find pictures taken by others of the U2 concert in Croke Park.
    In this dissertation we discuss novel techniques to realise each of these facets and evaluate how effective they are. The significance of this work benefits not only the lifelogging community, but also cognitive psychology researchers studying the potential benefits of lifelogging devices to those with neurodegenerative diseases.
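    The first technique above, segmenting a day's images into distinct events, can be illustrated with a minimal sketch: a new event begins whenever consecutive image descriptors differ by more than a threshold. The descriptors and threshold below are illustrative placeholders, not the dissertation's actual features or method.

    ```python
    # Hypothetical sketch: segmenting a day's lifelog images into events by
    # thresholding the dissimilarity between consecutive image descriptors.
    # The feature vectors stand in for real image descriptors (e.g. colour
    # histograms); the threshold value is purely illustrative.

    def segment_events(descriptors, threshold=0.5):
        """Split a chronological list of image descriptors into events.

        A new event starts whenever the distance between consecutive
        descriptors exceeds the threshold.
        """
        def distance(a, b):
            # Euclidean distance between two equal-length feature vectors.
            return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

        if not descriptors:
            return []
        events = [[0]]  # indices of images, grouped by event
        for i in range(1, len(descriptors)):
            if distance(descriptors[i - 1], descriptors[i]) > threshold:
                events.append([i])    # boundary: start a new event
            else:
                events[-1].append(i)  # same event continues
        return events

    # Example: two visually stable periods separated by an abrupt change.
    day = [[0.1, 0.1], [0.12, 0.1], [0.11, 0.09], [0.9, 0.8], [0.88, 0.82]]
    print(segment_events(day))  # → [[0, 1, 2], [3, 4]]
    ```

    Real systems would also exploit sensor data (e.g. accelerometer readings) alongside visual features to place event boundaries more robustly.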

    Visual access to lifelog data in a virtual environment

    Continuous image capture via a wearable camera is currently one of the most popular methods to establish a comprehensive record of the entirety of an individual’s life experience, referred to in the research community as a lifelog. These vast image corpora are further enriched by content analysis and combined with additional data, such as biometrics, to generate as extensive a record of a person’s life as possible. However, interfacing with such datasets remains an active area of research, and despite the advent of new technology and a plethora of competing media for processing digital information, there has been little focus on newly emerging platforms such as virtual reality. We hypothesise that the increased immersion, accessible spatial dimensions, and more, could provide significant benefits in the lifelogging domain over more conventional media. In this work, we motivate virtual reality as a viable method of lifelog exploration by performing an in-depth analysis using a novel application prototype built for the HTC Vive. This research also includes the development of a governing design framework for lifelog applications, which supported the development of our prototype and is also intended to support the development of future lifelog systems.

    Semantic interpretation of events in lifelogging

    The topic of this thesis is lifelogging, the automatic, passive recording of a person’s daily activities, and in particular the semantic analysis and enrichment of lifelogged data. Our work centers on visual lifelog data, such as that taken from wearable cameras. Such wearable cameras generate an archive of a person’s day from a first-person viewpoint, but one of the problems with this is the sheer volume of information that can be generated. To make this potentially very large volume of information more manageable, our analysis of this data is based on segmenting each day’s lifelog data into discrete, non-overlapping events corresponding to activities in the wearer’s day. To manage lifelog data at an event level, we define a set of concepts using an ontology appropriate to the wearer, apply automatic concept detection to these events, and then semantically enrich each of the detected events, making the concepts an index into the events. Once this enrichment is complete, we can use the lifelog to support semantic search for everyday media management, as a memory aid, or as part of medical analysis of the activities of daily living (ADL), and so on. In the thesis, we address the problem of how to select the concepts to be used for indexing events, and we propose a semantic, density-based algorithm to cope with concept selection issues for lifelogging. We then apply activity detection to classify everyday activities by employing the selected concepts as high-level semantic features. Finally, the activity is modeled by multi-context representations and enriched using Semantic Web technologies. The thesis includes an experimental evaluation using real data from users and shows the performance of our algorithms in capturing the semantics of everyday concepts and their efficacy in activity recognition and semantic enrichment.
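    The idea of indexing an event by the concepts detected in it can be sketched in a few lines: each event is represented by per-image concept detections, and a concept is kept as an index term when it occurs densely enough across the event's images. The concept names and density threshold below are illustrative assumptions, not the thesis's actual density-based selection algorithm.

    ```python
    # Hypothetical sketch of concept-based event indexing: keep a concept
    # as an index term for an event when it is detected in at least a given
    # fraction of the event's images. Concept names and the threshold are
    # illustrative only.

    from collections import Counter

    def select_concepts(event_detections, min_density=0.5):
        """Keep concepts detected in at least `min_density` of an event's images.

        event_detections: list of per-image concept sets for one event.
        Returns the set of concepts used to index the event.
        """
        counts = Counter(c for image in event_detections for c in image)
        n_images = len(event_detections)
        return {c for c, n in counts.items() if n / n_images >= min_density}

    # Example: an "office" event of four images.
    event = [{"screen", "desk"}, {"screen"}, {"screen", "cup"}, {"desk", "screen"}]
    print(sorted(select_concepts(event)))  # → ['desk', 'screen']
    ```

    The selected concepts could then serve as the high-level semantic features the abstract describes, e.g. as binary inputs to an activity classifier.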

    Communicating with your E-memory: finding and refinding in personal lifelogs

    The rapid development of technology enables the digital capture and storage of our life experiences in an “E-Memory” (electronic memory) or personal lifelog (PLL). This offers the potential for people to store the details of their life in a permanent archive, so that the information remains available even when its physical counterpart has vanished and memory traces of it have faded away. A major challenge for PLLs is enabling people to access information when it is needed. Many people may also want to share or transfer some of their memory to their friends and descendants, so that their experiences can be appreciated and their knowledge retained even after they have passed away. This thesis further explores people’s potential needs from their own PLLs, discusses the possible methods people may use and the potential problems they may encounter while accessing their PLLs, and hypothesizes that better support for users’ own memory can provide a better user experience and improved efficiency when accessing their E-memories (or PLLs). As part of a larger project, three lifeloggers collected their own prototype lifelog collections over a period of about 20 months. To complete this study, the author developed a prototype PLL system, called the iCLIPS Lifelog Archive Browser (LAB), based on the author’s theoretical exploration and empirical studies, and evaluated it on these prototype lifelog collections through a user study with the three lifeloggers. The results of this study provide promising evidence in support of the hypothesis. The end of this thesis also discusses the issues that the lifeloggers encountered in using their lifelogs, and future technologies that are desirable based on the studies in this thesis.

    Digital life stories: Semi-automatic (auto)biographies within lifelog collections

    Our life stories enable us to reflect upon and share our personal histories. Through emerging digital technologies the possibility of collecting life experiences digitally is increasingly feasible; consequently so is the potential to create a digital counterpart to our personal narratives. In this work, lifelogging tools are used to collect digital artifacts continuously and passively throughout our day. These include images, documents, emails and webpages accessed, text messages, and mobile activity. This range of data, when brought together, is known as a lifelog. Given the complexity, volume and multimodal nature of such collections, it is clear that there are significant challenges to be addressed in order to achieve coherent and meaningful digital narratives of events from our life histories. This work investigates the construction of personal digital narratives from lifelog collections. It examines the underlying questions, issues and challenges relating to the construction of personal digital narratives from lifelogs. Fundamentally, it addresses how to organize and transform data sampled from an individual’s day-to-day activities into a coherent narrative account. This enquiry is enabled by three 20-month long-term lifelogs collected by participants, and produces a narrative system which enables the semi-automatic construction of digital stories from lifelog content. Inspired by probative studies conducted into current practices of curation, from which a set of fundamental requirements was established, this solution employs a 2-dimensional spatial framework for storytelling. It delivers integrated support for the structuring of lifelog content and its distillation into story form through information retrieval approaches. We describe and contribute flexible algorithmic approaches to achieve both. Finally, this research inquiry yields qualitative and quantitative insights into such digital narratives and their generation, composition and construction.
The opportunities for such personal narrative accounts to enable recollection, reminiscence and reflection by the collection owners are established, and their benefit in sharing past personal experiences is outlined. Finally, in a novel investigation with motivated third parties, we demonstrate the opportunities such narrative accounts may offer beyond the scope of the collection owner: in personal, societal and cultural explorations, in artistic endeavours, and as a generational heirloom.

    The role of context in human memory augmentation

    Technology has always had a direct impact on what humans remember. In the era of smartphones and wearable devices, people easily capture information, pictures, and videos on a daily basis, which can help them remember past experiences and attained knowledge, or simply evoke memories for reminiscing. The increasing use of such ubiquitous devices and technologies produces a sheer volume of pictures and videos that, in combination with additional contextual information, could significantly improve one’s ability to recall a past experience and prior knowledge. Calendar entries, application use logs, social media posts, and activity logs are only a few examples of such potentially memory-supportive information. This work explores how such memory-supportive information can be collected, filtered, and eventually utilized to generate memory cues: fragments of past experience or prior knowledge intended to trigger one’s memory recall. In this thesis, we showcase how we leverage modern ubiquitous technologies as a vessel for transferring established psychological methods from the lab into the real world, significantly and measurably augmenting human memory recall in a diverse set of often challenging contexts. We combine experimental evidence garnered from numerous field and lab studies with knowledge amassed from an extensive literature review to substantially inform the design and development of future pervasive memory augmentation systems. Ultimately, this work contributes to the fundamental understanding of human memory and of how today’s modern technologies can be actuated to augment it.

    Impact of video summary viewing on episodic memory recall: design guidelines for video summarizations

    Reviewing lifelogging data has been proposed as a useful tool to support human memory. However, the sheer volume of data (particularly images) that can be captured by modern lifelogging systems makes the selection and presentation of material for review a challenging task. We present the results of a five-week user study, involving 16 participants and over 69,000 images, that explores both individual requirements for video summaries and the differences in cognitive load, user experience, memory experience, and recall experience between review using video summarisations and non-summary review techniques. Our results can be used to inform the design of future lifelogging data summarisation systems for memory augmentation.

    Temporal multimodal video and lifelog retrieval

    The past decades have seen exponential growth of both consumption and production of data, with multimedia such as images and videos contributing significantly to said growth. The widespread proliferation of smartphones has provided everyday users with the ability to consume and produce such content easily. As the complexity and diversity of multimedia data has grown, so has the need for more complex retrieval models which address the information needs of users. Finding relevant multimedia content is central in many scenarios, from internet search engines and medical retrieval to querying one's personal multimedia archive, also called a lifelog. Traditional retrieval models have often focused on queries targeting small units of retrieval, yet users usually remember temporal context and expect results to include it. However, there is little research into supporting these information needs in interactive multimedia retrieval. In this thesis, we aim to close this research gap by making several contributions to multimedia retrieval with a focus on two scenarios, namely video and lifelog retrieval. We provide a retrieval model for complex information needs with temporal components, including a data model for multimedia retrieval, a query model for complex information needs, and a modular and adaptable query execution model which includes novel algorithms for result fusion. The concepts and models are implemented in vitrivr, an open-source multimodal multimedia retrieval system, which covers all aspects from extraction to query formulation and browsing. vitrivr has proven its usefulness in evaluation campaigns and is now used in two large-scale interdisciplinary research projects. We show the feasibility and effectiveness of our contributions in two ways: firstly, through results from user-centric evaluations which pit different user-system combinations against one another.
Secondly, we perform a system-centric evaluation by creating a new dataset for temporal information needs in video and lifelog retrieval, with which we quantitatively evaluate our models. The results show significant benefits for systems that enable users to specify more complex information needs with temporal components. Participation in interactive retrieval evaluation campaigns over multiple years provides insight into possible future developments and challenges of such campaigns.
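    The core idea of temporal result fusion described above can be illustrated with a minimal sketch: two sub-queries each produce scored, timestamped results, and the fused score rewards pairs where the second match occurs shortly after the first. The segment IDs, timestamps, scores, and time window below are illustrative assumptions; vitrivr's actual fusion algorithms are more sophisticated.

    ```python
    # Hypothetical sketch of temporal result fusion: combine two ranked
    # lists of (segment_id, timestamp, score) so that results for the
    # second sub-query must occur within `max_gap` seconds after results
    # for the first. All values here are illustrative.

    def temporal_fusion(results_a, results_b, max_gap=60.0):
        """Fuse two ranked lists of (segment_id, timestamp, score).

        Returns (id_a, id_b, fused_score) tuples for pairs where the
        second segment occurs within `max_gap` seconds after the first,
        sorted by fused score, highest first.
        """
        fused = []
        for id_a, t_a, s_a in results_a:
            for id_b, t_b, s_b in results_b:
                if 0 < t_b - t_a <= max_gap:
                    fused.append((id_a, id_b, s_a + s_b))  # simple score sum
        return sorted(fused, key=lambda pair: -pair[2])

    # "cooking" matches followed within a minute by "eating" matches.
    cooking = [("s1", 100.0, 0.5), ("s7", 500.0, 0.6)]
    eating = [("s2", 130.0, 0.25), ("s9", 900.0, 0.7)]
    print(temporal_fusion(cooking, eating))  # → [('s1', 's2', 0.75)]
    ```

    A production system would replace the nested loop with time-indexed lookups and a more principled fusion function, but the sketch captures how a temporal constraint filters and re-ranks the candidate pairs.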