12 research outputs found

    Multimodal segmentation of lifelog data

    A personal lifelog of visual and audio information can be very helpful as a human memory augmentation tool. The SenseCam, a passive wearable camera, used in conjunction with an iRiver MP3 audio recorder, will capture over 20,000 images and 100 hours of audio per week. Used constantly, this quickly builds into a substantial collection of personal data. To gain real value from this collection it is important to automatically segment the data into meaningful units or activities. This paper investigates the optimal combination of data sources for segmenting personal data into such activities. Five data sources were logged and processed to segment a collection of personal data: image processing on captured SenseCam images; audio processing on captured iRiver audio data; and processing of the temperature, white light level, and accelerometer sensors onboard the SenseCam device. The results indicate that a combination of the image, light, and accelerometer sensor data segments our collection of personal data better than a combination of all five data sources. The accelerometer sensor is good for detecting when the user moves to a new location, while the image and light sensors are good for detecting changes in wearer activity within the same location, as well as detecting when the wearer socially interacts with others.
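    As a rough illustration of the kind of multimodal fusion this abstract describes (the paper's actual features, weighting, and thresholds are its own; the function below is a hypothetical sketch, not the authors' implementation), one simple approach z-scores each sensor's frame-to-frame change so streams with different units contribute comparably, then marks boundaries where the fused change score spikes:

    ```python
    import numpy as np

    def segment_boundaries(streams, weights=None, threshold=3.0):
        """Fuse per-sensor change scores and mark likely activity boundaries.

        streams: dict mapping sensor name -> 1-D array of time-aligned samples.
        Each stream's frame-to-frame change is z-scored, then the weighted sum
        is thresholded at `threshold` standard deviations above its mean.
        Returns the sample indices of detected boundaries.
        """
        weights = weights or {name: 1.0 for name in streams}
        fused = None
        for name, x in streams.items():
            change = np.abs(np.diff(np.asarray(x, dtype=float)))   # frame-to-frame change
            z = (change - change.mean()) / (change.std() + 1e-9)   # per-sensor normalization
            fused = z * weights[name] if fused is None else fused + z * weights[name]
        cut = fused.mean() + threshold * fused.std()
        return np.where(fused > cut)[0] + 1

    # Toy example: light level and motion both jump when the wearer
    # changes rooms at sample 50.
    light = np.r_[np.full(50, 0.2), np.full(50, 0.9)]
    accel = np.r_[np.zeros(50), np.ones(50)]
    bounds = segment_boundaries({"light": light, "accel": accel})
    # bounds -> array([50])
    ```

    Per-sensor weights give a crude way to emulate the paper's finding that some combinations (image, light, accelerometer) outperform using all sources: a source can simply be dropped or down-weighted.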

    The SenseCam as a tool for task observation

    The SenseCam is a passive capture wearable camera, worn around the neck and developed by Microsoft Research in the UK. When worn continuously it takes an average of 2,000 images per day. It was originally envisaged for use within the domain of Human Digital Memory to create a personal lifelog or visual recording of the wearer's life, which can be helpful as an aid to human memory. However, within this paper, we explore its applicability as a tool for use within observational and ethnographic studies. We employed the SenseCam as a tool for the collection of observational data in an empirical study, which sought to determine the information access practices of molecular medicine researchers. The affordances that make the SenseCam appropriate for use within this domain, as well as its limitations, are discussed in the context of this study. We found that while the SenseCam, in its current form, does not offer a complete replacement for traditional observational methods, it offers a complementary and supplementary route to the collection of observational data.

    Multiple multimodal mobile devices: Lessons learned from engineering lifelog solutions

    For lifelogging, or the recording of one’s life history through digital means, to be successful, a range of separate multimodal mobile devices must be employed. These include smartphones such as the N95, the Microsoft SenseCam – a wearable passive photo capture device – or wearable biometric devices. Each collects a facet of the bigger picture, through, for example, personal digital photos, mobile messages, and document access history, but unfortunately they operate independently and unaware of each other. This creates significant challenges for the practical application of these devices, the use and integration of their data, and their operation by a user. In this chapter we discuss the software engineering challenges and their implications for individuals working on the integration of data from multiple ubiquitous mobile devices, drawing on our experiences working with such technology over the past several years in the development of integrated personal lifelogs. The chapter serves as an engineering guide to those considering working in the domain of lifelogging, and more generally to those working with multiple multimodal devices and the integration of their data.

    Pedestrian Detection with Wearable Cameras for the Blind: A Two-way Perspective

    Blind people have limited access to information about their surroundings, which is important for ensuring one's safety, managing social interactions, and identifying approaching pedestrians. With advances in computer vision, wearable cameras can provide equitable access to such information. However, the always-on nature of these assistive technologies poses privacy concerns for parties that may get recorded. We explore this tension from both perspectives, those of sighted passersby and blind users, taking into account camera visibility, in-person versus remote experience, and extracted visual information. We conduct two studies: an online survey with MTurkers (N=206) and an in-person experience study between pairs of blind (N=10) and sighted (N=40) participants, where blind participants wear a working prototype for pedestrian detection and pass by sighted participants. Our results suggest that both of the perspectives of users and bystanders and the several factors mentioned above need to be carefully considered to mitigate potential social tensions. Comment: The 2020 ACM CHI Conference on Human Factors in Computing Systems (CHI 2020).

    The Cost of Turning Heads - The Design and Evaluation of Vocabulary Prompts on a Head-Worn Display to Support Persons with Aphasia in Conversation

    Symbol-based dictionaries could provide persons with aphasia a resource for finding needed words, but they can detract from conversation. This research explores the potential of head-worn displays (HWDs) to provide glanceable vocabulary support that is unobtrusive and always available. Two formative studies explored the benefits and challenges of using a HWD, and evaluated a proof-of-concept prototype in both lab and field settings. These studies showed that a HWD may allow wearers to maintain focus on the conversation, reduce reliance on external support (e.g., paper and pen, or people), and minimize the visibility of the support to others. A third study compared use of a HWD to a smartphone, and found preliminary evidence that the HWD may offer a better overall experience with assistive vocabulary and may better support the wearer in advancing through conversation. These studies should motivate further investigation of head-worn conversational support.

    An examination of the effects of a wearable display on informal face-to-face communication

    No full text

    Toward multimodality: gesture and vibrotactile feedback in natural human computer interaction

    In the present work, users’ interaction with advanced systems has been investigated in different application domains and with respect to different interfaces. The methods employed were carefully devised to respond to the peculiarities of the interfaces under examination, and from them we could extract a set of recommendations for developers. The first application domain examined regards the home. In particular, we addressed the design of a gestural interface for controlling a lighting system embedded into a piece of furniture in the kitchen. A sample of end users was observed while interacting with a virtual simulation of the interface. Based on video analysis of users’ spontaneous behaviors, we could derive a set of significant interaction trends. The second application domain involved the exploration of an urban environment while on the move. In a comparative study, a haptic-audio interface and an audio-visual interface were employed for guiding users towards landmarks and for providing them with information. We showed that the two systems were equally efficient in supporting the users, and both were well received by them. In a navigational task we compared two tactile displays, each embedded in a different wearable device, i.e., a glove and a vest. Despite differences in shape and size, both systems successfully directed users to the target. The strengths and flaws of the two devices were pointed out and commented on by users. In a similar context, two devices supporting Augmented Reality technology, i.e., a pair of smartglasses and a smartphone, were compared. The experiment allowed us to identify the circumstances favoring the use of the smartglasses or the smartphone. Considered altogether, our findings suggest a set of recommendations for developers of advanced systems. First, we outline the importance of properly involving end users to unveil intuitive interaction modalities with gestural interfaces. We also highlight the importance of giving the user the chance to choose the interaction mode best fitting the contextual characteristics and to adjust the features of every interaction mode. Finally, we outline the potential of wearable devices to support interactions on the move, and the importance of finding a proper balance between the amount of information conveyed to the user and the size of the device.

    The WEAR Scale: Development of a measure of the social acceptability of a wearable device

    The factors affecting the social acceptability of wearable devices are poorly understood, yet they have a strong influence on whether a new wearable succeeds or fails. Because consumer wearable devices are a recently expanding and distinct form of technology, the literature is limited and existing measures of technology acceptance are insufficient. Factors uniquely affecting wearable devices, as compared to technologies not worn on the body, include manners, moral codes, the symbolic communication of dress, habits of dress, fashion, context of use, form, and aesthetics. Therefore, a new measure must be developed to understand the factors affecting the social acceptability of wearable devices and to predict acceptance. The objective of this research was to use established scale development methodology to develop the WEAR (WEarable Acceptability Range) Scale, a measure of wearable acceptability that can be used with regard to any wearable device. The first step was to determine what is being measured by defining the construct “social acceptability of a wearable” using the literature and interviews of the intended population (Study 1). Next, the WEAR Scale’s initial item pool was composed, then reviewed by experts in Study 2. The resulting scale was administered to sample respondents along with similar scales and items for validation purposes. In Study 3, 221 participants responded to the items in response to a Bluetooth Headset. In Study 4, 306 participants responded to the items in response to Apple Watch and Google Glass. Factor analysis of Study 3 and Study 4 data resulted in a two-factor, fourteen-item solution (WEAR v.3) that was consistent among the three datasets. WEAR v.3 demonstrated good reliability across the three datasets, with alpha ranging from 0.79 to 0.88, and split-half reliability ranging from 0.81 to 0.88. 
Construct validity was demonstrated by significant correlations between the WEAR Scale and related constructs such as affinity for technology, likeableness ratings, and adoption of technology. The methodical and thorough development process provides a strong argument for content validity. The resulting WEAR Scale identifies two unique dimensions of wearable social acceptability, providing surprising and valuable information for many uses by both academia and industry, including predictive modeling, theory-building, and wearable development and applications.
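The internal-consistency figures reported above (alpha from 0.79 to 0.88) refer to Cronbach's alpha, which is straightforward to compute from a respondents-by-items score matrix. The sketch below is the standard textbook formula, not code from the WEAR study, and the sample responses are made up for illustration:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) matrix of scale scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                               # number of items
    item_vars = scores.var(axis=0, ddof=1).sum()      # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)        # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical responses: 4 respondents, 3 items on a 5-point scale.
# Items that move together across respondents drive alpha toward 1.
responses = [[1, 2, 1], [3, 3, 3], [5, 4, 5], [2, 2, 2]]
alpha = cronbach_alpha(responses)
```

Split-half reliability, also quoted in the abstract, is computed differently (correlating totals of two halves of the item set), so the two figures need not coincide.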