
    Augmenting human memory using personal lifelogs

    Memory is a key human faculty that supports life activities, including social interaction, life management and problem solving. Unfortunately, our memory is not perfect. Normal individuals have occasional memory problems, which can be frustrating, while those with memory impairments can experience a greatly reduced quality of life. Augmenting memory has the potential to make normal individuals more effective, and to give those with significant memory problems a higher general quality of life. Current technologies now make it possible to automatically capture and store daily life experiences over an extended period, potentially even over a lifetime. This type of data collection, often referred to as a personal life log (PLL), can include data such as continuously captured pictures or videos from a first-person perspective, scanned copies of archival material such as books, electronic documents read or created, and emails and SMS messages sent and received, along with context data such as the time of capture and access, and location via GPS sensors. PLLs offer the potential for memory augmentation. Existing work on PLLs has focused on the technologies of data capture and retrieval, but little work has been done to explore how these captured data and retrieval techniques can be applied in actual use by normal people to support their memory. In this paper, we explore the needs of normal people for augmenting human memory, based on the psychology literature on the mechanisms underlying memory problems, and discuss the possible functions that PLLs can provide to support these memory augmentation needs. Based on this, we also suggest guidelines for data capture, retrieval needs and computer-based interface design. Finally, we introduce our work-in-progress prototype PLL search system in the iCLIPS project as an example of augmenting human memory with PLLs and computer-based interfaces.
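    As an illustration only, the sketch below shows what a single PLL record of the kind described above could look like: a captured item bundled with the context metadata (time, GPS location) needed for later retrieval. The field names and the make_photo_entry helper are assumptions for this sketch, not the iCLIPS data model.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class PLLEntry:
    """One lifelog record: a captured item plus the context used for later retrieval."""
    captured_at: datetime                  # time of capture (episodic retrieval cue)
    latitude: Optional[float] = None       # GPS context, if a fix was available
    longitude: Optional[float] = None
    source: str = "camera"                 # e.g. "camera", "email", "sms", "document"
    media_path: Optional[str] = None       # path to the stored image/video/scan
    text: Optional[str] = None             # extracted or original text (email body, document)
    tags: list[str] = field(default_factory=list)  # annotations added at capture or review time

def make_photo_entry(path: str, lat: float, lon: float) -> PLLEntry:
    # Context is attached automatically at capture time, as the abstract describes.
    return PLLEntry(captured_at=datetime.now(), latitude=lat, longitude=lon,
                    source="camera", media_path=path)
```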

    Applying psychological science to the CCTV review process: a review of cognitive and ergonomic literature

    As CCTV cameras are used more and more often to increase security in communities, police are spending a larger proportion of their resources, including time, on processing CCTV images when investigating crimes that have occurred (Levesley & Martin, 2005; Nichols, 2001). As with all tasks, there are ways of approaching this task that will facilitate performance and others that will degrade performance, either by increasing errors or by unnecessarily prolonging the process. A clearer understanding of the psychological factors influencing the effectiveness of footage review will facilitate future training in best practice for the review of CCTV footage. The goal of this report is to provide such understanding by reviewing research on footage review, research on related tasks that require similar skills, and experimental laboratory research on the cognitive skills underpinning the task. The report is organised to address five challenges to the effectiveness of CCTV review: the effects of the degraded nature of CCTV footage, distractions and interruptions, the length of the task, inappropriate mindset, and variability in people's abilities and experience. Recommendations for optimising CCTV footage review include (1) conducting a cognitive task analysis to increase understanding of the ways in which performance might be limited, (2) exploiting technology advances to maximise the perceptual quality of the footage, (3) training people to improve the flexibility of their mindset as they perceive and interpret the images seen, (4) monitoring performance either on an ongoing basis, using psychophysiological measures of alertness, or periodically, by testing screeners' ability to find evidence in footage developed for such testing, and (5) evaluating the relevance of possible selection tests to screen effective from ineffective screeners.

    What do people want from their lifelogs?

    The practice of lifelogging potentially consists of automatically capturing and storing a digital record of every piece of information that a person (the lifelogger) encounters in their daily experiences. Lifelogging has become an increasingly popular area of research in recent years. Most current lifelogging research focuses on techniques for data capture or processing. Current applications of lifelogging technology are usually driven by new technology inventions, the creative ideas of researchers, or the special needs of a particular user group, e.g. individuals with memory impairment. To the best of our knowledge, little work has explored potential lifelog applications from the perspective of the desires of the general public. One of the difficulties of carrying out such a study is balancing the information given to subjects about lifelog technology, so that they can generate realistic ideas without their imaginations being limited or directed by too much specific information. We report a study in which we take a progressive approach, introducing lifelogging in three stages and collecting the ideas and opinions of a volunteer group of general public participants on techniques for lifelog capture, and on applications and functionality.

    Communication System For Firefighters

    Currently, firefighters use two-way radios to communicate on the job, and they are forced to write reports from memory because there is no easy way to record the communications between two-way radios. Firefighters need a system to automatically document what happened while they were responding to a call. To save them a significant amount of time when creating reports, our solution is to implement an application that allows firefighters to take pictures, record video and communicate in real time with their team of on-site responders. The proposed system will use a Wireless Local Area Network (WLAN) hosted on the fire truck itself to act as an access point (AP) to which the firefighters can connect. This AP will also save communication between firefighters to a local storage location. Upon return to the fire station, the AP will route all of the information stored locally to a larger database. For now, Wi-Fi will be our communication medium, with the prediction that our technology can eventually be extended to include radio signals.
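    A rough sketch of the store-and-forward behaviour this abstract describes: media captured on scene is buffered on the truck's access point and later moved into the station's larger database. All paths, names and the on-disk layout are assumptions for illustration; the actual system is not specified at this level of detail.

```python
import json
import shutil
from datetime import datetime, timezone
from pathlib import Path

LOCAL_BUFFER = Path("/var/firecomm/buffer")   # storage on the truck's access point (assumed path)
STATION_ARCHIVE = Path("/mnt/station_db")     # larger database reachable back at the station (assumed path)

def record_message(unit_id: str, media_file: Path) -> None:
    """Save a photo/video/audio clip plus minimal metadata to the truck's local buffer."""
    LOCAL_BUFFER.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    dest = LOCAL_BUFFER / f"{stamp}_{unit_id}_{media_file.name}"
    shutil.copy(media_file, dest)
    meta = {"unit": unit_id, "captured_at": stamp, "file": dest.name}
    (dest.parent / (dest.name + ".json")).write_text(json.dumps(meta))

def sync_to_station() -> int:
    """On return to the station, move buffered records into the central archive."""
    if not LOCAL_BUFFER.exists():
        return 0
    STATION_ARCHIVE.mkdir(parents=True, exist_ok=True)
    moved = 0
    for item in sorted(LOCAL_BUFFER.iterdir()):
        shutil.move(str(item), STATION_ARCHIVE / item.name)
        moved += 1
    return moved
```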

    On Acquisition and Analysis of a Dataset Comprising of Gait, Ear and Semantic data

    In outdoor scenarios such as surveillance, where there is very little control over the environment, complex computer vision algorithms are often required for analysis. However, constrained environments, such as walkways in airports where the surroundings and the path taken by individuals can be controlled, provide an ideal application for such systems. Figure 1.1 depicts an idealised constrained environment: the path taken by the subject is restricted to a narrow corridor and, once inside, the subject is in a volume where lighting and other conditions are controlled to facilitate biometric analysis. The ability to control the surroundings and the flow of people greatly simplifies the computer vision task compared to typical unconstrained environments. Even though biometric datasets with more than one hundred people are increasingly common, there is still very little known about the inter- and intra-subject variation in many biometrics. This information is essential to estimate the recognition capability and limits of automatic recognition systems. In order to accurately estimate the inter- and intra-class variance, substantially larger datasets are required [40]. Covariates such as facial expression, headwear, footwear type, surface type and carried items are attracting increasing attention; given their potentially large impact on an individual's biometrics, large trials need to be conducted to establish how much variance results. This chapter is the first description of the multi-biometric data acquired using the University of Southampton's Multi-Biometric Tunnel [26, 37], a biometric portal using automatic gait, face and ear recognition for identification purposes. The tunnel provides a constrained environment and is ideal for use in high-throughput security scenarios and for the collection of large datasets. We describe the current state of data acquisition of face, gait, ear and semantic data, and present early results showing the quality and range of data that has been collected. The main novelties of this dataset in comparison with other multi-biometric datasets are: 1. gait data exists for multiple views and is synchronised, allowing 3D reconstruction and analysis; 2. the face data is a sequence of images, allowing face recognition in video; 3. the ear data is acquired in a relatively unconstrained environment, as a subject walks past; and 4. the semantic data is considerably more extensive than has been available previously. We aim to show the advantages of this new data in biometric analysis, though the scope for such analysis is considerably greater than time and space allow for here.

    CatchIt: Capturing Cues of Bookmarked Moment to Feed Digital Parrot

    CatchIt is a mobile application which aims to capture information about moments in a user's life in a semi-automatic way. The captured moments are then ready to be fed into an augmented memory system for later retrieval. In CatchIt, the time and location context of a captured moment are saved automatically, from the system clock and GPS respectively, as soon as the user bookmarks the moment, with the ability to modify them later; the other desired context, people, is saved manually. The user of CatchIt can also capture further details (contents) of a moment in different ways: taking notes (textual information), taking photos, recording video, and recording audio. Additionally, the user can revise an earlier moment if they wish. Later on, the user can transfer selected bookmarked moments into the augmented memory system called the Digital Parrot. The implementation of CatchIt is the central focus of this study. To that end, the requirements of CatchIt are specified based on the results of a previous study and on a scenario. The conceptual architecture and the user interface of CatchIt are designed according to these requirements. The user interface is implemented, along with the database in which the captured information from the user interface is stored and from which it is retrieved. Evaluating the usability of CatchIt comes next. The evaluation involves three sessions: (1) an initial questionnaire to learn more about the preferred capture methods for different scenarios, (2) testing the applications, where an existing mobile application, Hansel, is tested for the same length of time as CatchIt (one week each), and (3) a guided interview which compares the usability of CatchIt with Hansel on the one hand, and with adding a new contact to the phone using the Contacts application on the other. An additional goal of the last session is to elicit participants' feelings about and interest in using CatchIt. The results of this study indicated that the user interface of CatchIt needs to be even easier to use. The findings of the study form the foundation for further work to improve the user interface of CatchIt and to better understand user needs for such mobile capture applications.
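    To illustrate the capture model described above (time and location saved automatically, people added manually, contents attached as notes, photos, video or audio), here is a minimal sketch of a bookmarked moment before transfer to the Digital Parrot. The class, field and function names are illustrative assumptions, not CatchIt's actual code.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class Moment:
    # Context saved automatically when the user bookmarks the moment
    time: datetime
    latitude: Optional[float]
    longitude: Optional[float]
    # Context the user adds manually, possibly later
    people: list[str] = field(default_factory=list)
    # Optional contents captured in different ways
    note: Optional[str] = None
    photo_paths: list[str] = field(default_factory=list)
    video_paths: list[str] = field(default_factory=list)
    audio_paths: list[str] = field(default_factory=list)
    selected_for_transfer: bool = False    # only selected moments are sent to the Digital Parrot

def bookmark(lat: Optional[float], lon: Optional[float]) -> Moment:
    """Create a moment with automatic context; details can be revised and enriched later."""
    return Moment(time=datetime.now(), latitude=lat, longitude=lon)
```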

    Capturing Situational Context in an Augmented Memory System

    Bookmarking a moment is a new approach introduced to capture past experiences and insert information into an augmented memory system. The idea is inspired by the concept of the bookmark in web browsers. Semi-automatically bookmarking moments when time is limited, and revisiting these moments before inserting them into an augmented memory system, will help people to remember their past experiences. An exploratory study was conducted to discover and shape the design requirements for a system called CatchIt. It aims to understand end-users' needs when capturing their personal experiences, which is an important and complex issue in the capture and access of personal experiences. CatchIt is a system for bookmarking the significant moments of the day before enriching them and entering them into the augmented memory system called the Digital Parrot. The conceptual design of CatchIt is the main aim of this study. The primary requirements were derived from scenarios, and five different study stages were designed to inspect them: an unobserved field visit, shadowing, using indicators, Wizard of Oz, and using technology. Thirty participants were involved in the field visit, a survey and follow-up interviews. Each stage had different tasks to be performed, and the findings of each stage contributed to understanding different parts of the user needs and system design requirements. The results of this study indicated that the system should automatically record context information, especially time and location, since these were typically neglected by the participants. Other information, such as textual and visual information, should be recorded manually based on the user's settings or situation. A single button is a promising input mechanism for bookmarking a moment, and it should be fast and effortless. The results showed no clear correlation between learning style and the type of information that was captured. We also found that there might be a correlation between passive capture and false memories. All these findings provide a foundation for further work to implement the bookmarking system and evaluate this approach. Some issues raised in this study need further research. The work will contribute to a greater understanding of human memory and selective capture.
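    The "single button, fast and effortless" recommendation above suggests an interaction flow in which one press records only the automatic context and everything else is deferred. The sketch below is a hypothetical rendering of that flow under assumed names; it is not the study's implementation.

```python
from datetime import datetime
from typing import Callable, Optional, Tuple

pending_bookmarks: list[dict] = []   # moments captured with one press, awaiting later review

def on_bookmark_button(gps_reader: Callable[[], Tuple[Optional[float], Optional[float]]]) -> None:
    """One press: record only what can be captured automatically, then return immediately."""
    lat, lon = gps_reader()           # may be (None, None) if no GPS fix is available
    pending_bookmarks.append({
        "time": datetime.now(),       # automatic context the study found users tend to neglect
        "lat": lat,
        "lon": lon,
        "note": None,
        "people": [],                 # enrichment is deferred to a calmer moment
    })

def enrich(index: int, note: Optional[str] = None, people: Optional[list] = None) -> None:
    """Revisit a bookmarked moment and add the manual details before it is exported."""
    bm = pending_bookmarks[index]
    if note:
        bm["note"] = note
    if people:
        bm["people"] = people
```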

    Methods of small group research


    Narcissus to a Man: Lifelogging, Technology and the Normativity of Truth

    The growth of the practice of lifelogging, which exploits the capabilities provided by the exponential increase in computer storage and uses technologies such as SenseCam as well as location-based services, Web 2.0, social networking and photo-sharing sites, has led to a growing sense of unease, articulated in books such as Mayer-Schönberger's Delete, that the semi-permanent storage of memories could lead to problematic social consequences. This talk examines the arguments against lifelogging and storage, and argues that they seem less worrying when placed in the context of a wider debate about the nature of mind and memory and their relationship to our environment and the technology we use.