    Life editing: Third-party perspectives on lifelog content

    Lifelog collections digitally capture and preserve personal experiences and can be mined to reveal insights and understanding of individual significance. These rich data sources also offer opportunities for learning and discovery by motivated third parties. We employ a custom-designed storytelling application to construct meaningful lifelog summaries from third-party perspectives. This storytelling initiative was implemented as a core component of a university media-editing course. We present promising results from a preliminary study conducted to evaluate the utility and potential of our approach in creatively interpreting a unique experiential dataset.

    MyPlaces: detecting important settings in a visual diary

    We describe a novel approach to identifying specific settings in large collections of passively captured images corresponding to a visual diary. An algorithm developed for setting detection should be capable of detecting images captured at the same real-world locations (e.g. in the dining room at home, in front of the computer in the office, in the park, etc.). This requires the selection and implementation of suitable methods to identify visually similar backgrounds in images using their visual features. We use a Bag of Keypoints approach, based on the sampling and subsequent vector quantization of multiple image patches. The image patches are sampled and described using Scale Invariant Feature Transform (SIFT) features. We compare two different classifiers, K-Nearest Neighbour and Multiclass Linear Perceptron, and present results for classifying ten different settings across one week's worth of images. Our results demonstrate that the method produces good classification accuracy even without exploiting geometric or context-based information. We also describe an early prototype of a visual diary browser that integrates the classification results.
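
    A minimal sketch of the Bag-of-Keypoints pipeline described above, assuming OpenCV and scikit-learn. The vocabulary size, image paths, and setting labels are illustrative placeholders, not values from the paper, which in practice classified ten settings over a week of images:

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

def sift_descriptors(path):
    """Extract SIFT descriptors (n x 128) from one grayscale image."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = cv2.SIFT_create().detectAndCompute(img, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)

def bag_of_keypoints(desc, vocab):
    """Vector-quantise descriptors against the vocabulary -> normalised histogram."""
    hist = np.bincount(vocab.predict(desc), minlength=vocab.n_clusters)
    return hist / max(hist.sum(), 1)

# Hypothetical training data: one setting label per image
train_paths = ["dining_room_01.jpg", "office_desk_01.jpg", "park_01.jpg"]
train_labels = ["dining_room", "office_desk", "park"]

# Build the visual vocabulary by clustering all training descriptors
all_desc = np.vstack([sift_descriptors(p) for p in train_paths])
vocab = KMeans(n_clusters=100, n_init=10).fit(all_desc)

# Train a K-Nearest Neighbour classifier on the quantised histograms
X = [bag_of_keypoints(sift_descriptors(p), vocab) for p in train_paths]
clf = KNeighborsClassifier(n_neighbors=1).fit(X, train_labels)

# Classify a new passively captured image into one of the settings
print(clf.predict([bag_of_keypoints(sift_descriptors("new_frame.jpg"), vocab)]))
```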

    Factors Influencing British Adolescents’ Intake of Whole Grains: A Pilot Feasibility Study Using SenseCam Assisted Interviews

    High whole grain intake is beneficial for health. However, adolescents consume low levels of whole grains, and understanding of the underpinning reasons for this is poor. Using a visual, participatory method, we carried out a pilot feasibility study to elicit in-depth accounts of young people's whole grain consumption that were sensitive to their dietary, familial, and social context. Furthermore, we explored barriers and suggested facilitators to whole grain intake and assessed the feasibility of using SenseCam to engage adolescents in research. Eight British adolescents (aged 11 to 16 years) wore a SenseCam device, which auto-captured images every twenty seconds for three consecutive days. Participants then completed traditional 24-hour dietary recalls followed by in-depth interviews based on day-three SenseCam images. Interview data were subjected to thematic analysis. Findings revealed that low adolescent whole grain intake was often due to difficulty in identifying whole grain products and their health benefits, and to poor availability in and outside of the home. The images also captured the influence of parents and online media on adolescents' daily lives and choices. Low motivation to consume whole grains, a common explanation for poor diet quality, was rarely mentioned. Participants proposed that adolescent whole grain consumption could be increased by raising awareness through online media, improving sensory appeal, increasing availability and variety, and tailoring products for young people. SenseCam was effective in engaging young people in dietary research and capturing data relevant to dietary choices, which is useful for future research.

    Organising and structuring a visual diary using visual interest point detectors

    As wearable cameras become more popular, researchers are increasingly focusing on novel applications to manage the large volume of data these devices produce. One such application is the construction of a Visual Diary from an individual's photographs. Microsoft's SenseCam, designed to passively record a Visual Diary covering a typical day of the wearer, is one such device. The vast quantity of images these devices generate means that managing and organising the resulting collections is not a trivial matter, and this will only become more pressing as wearable cameras grow in popularity. Although there is a significant volume of work in the literature on object detection, object recognition, and scene classification, there is little work in the area of setting detection. Furthermore, few authors have examined the issues involved in analysing extremely large image collections (like a Visual Diary) gathered over a long period of time. An algorithm developed for setting detection should be capable of clustering images captured at the same real-world locations (e.g. in the dining room at home, in front of the computer in the office, in the park, etc.). This requires the selection and implementation of suitable methods to identify visually similar backgrounds in images using their visual features. We present a number of approaches to setting detection based on the extraction of visual interest points from the images, and we analyse the performance of two of the most popular descriptors, Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF). We present an implementation of a Visual Diary application and evaluate its performance via a series of user experiments. Finally, we outline techniques to allow the Visual Diary to automatically detect new settings, to scale as the image collection continues to grow substantially over time, and to allow the user to generate a personalised summary of their data.
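
    As an illustration of the descriptor comparison mentioned above, the following sketch extracts both SIFT and SURF features with OpenCV. The image path is a placeholder, and note that SURF lives in the contrib xfeatures2d module, which typically requires a build with non-free algorithms enabled:

```python
import cv2

img = cv2.imread("sensecam_frame.jpg", cv2.IMREAD_GRAYSCALE)

# SIFT: 128-dimensional descriptors, scale- and rotation-invariant
kp_sift, desc_sift = cv2.SIFT_create().detectAndCompute(img, None)

# SURF: 64-dimensional by default, faster via integral-image approximations
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp_surf, desc_surf = surf.detectAndCompute(img, None)

print(f"SIFT keypoints: {len(kp_sift)}, SURF keypoints: {len(kp_surf)}")
```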

    Eat4Thought: A Design of Food Journaling

    Food journaling is an effective method to help people identify their eating patterns and encourage healthy eating habits, as it requires self-reflection on eating behaviors. Current tools have predominantly focused on tracking food intake, such as carbohydrates, proteins, fats, and calories. Other factors, such as contextual information and momentary thoughts and feelings that are internal to an individual, are also essential to help people reflect upon and change attitudes about eating behaviors. However, current dietary tracking tools rarely support capturing these elements as a way to foster deep reflection. In this work, we present Eat4Thought, a food journaling application that allows users to track the emotional, sensory, and spatio-temporal elements of meals as a means of supporting self-reflection. The application enables vivid documentation of experiences and self-reflection on the past through video recording. We describe our design process and an initial evaluation of the application. We also provide design recommendations for future work on food journaling.

    Validation Study of a Passive Image-Assisted Dietary Assessment Method with Automated Image Analysis Process

    Background: Image-assisted dietary assessment is being developed to enhance the accuracy of dietary assessment. This study validated a passive image-assisted dietary assessment method, with an emphasis on examining whether food shape and complexity influenced results. Methods: A 2×2×2×2×3 mixed factorial design was used, with a between-subject factor of meal order and within-subject factors of food shape, food complexity, meal, and method of measurement, to validate the passive image-assisted dietary assessment method. Thirty men and women (22.7 ± 1.6 kg/m², 25.1 ± 6.6 years, 46.7% White) wore the Sony SmartEyeglass, which automatically took images while two meals containing four foods representing four food categories were consumed. Images from the first 5 minutes of each meal were coded and then compared to DietCam for food identification. The comparison produced four outcomes: DietCam identifying food correctly in an image (True Positive), DietCam incorrectly identifying food in an image (False Positive), DietCam not identifying food in an image (False Negative), or DietCam correctly identifying that a food is not in the image (True Negative). Participants' feedback about the Sony SmartEyeglass was obtained by a survey. Results: A total of 36,412 images were coded by raters and analyzed by DietCam; raters coded 92.4% of images as containing food, while DietCam coded 76.3%. Mixed factorial analysis of covariance revealed a significant main effect of percent agreement between DietCam and the raters' coded images [F(3,48) = 8.5, p < 0.0001]. The overall mean True Positive rate was 22.2 ± 3.6%, False Positive 1.2 ± 0.4%, False Negative 19.6 ± 5.0%, and True Negative 56.8 ± 7.2%. True Negative was significantly (p < 0.0001) different from all other percent agreement categories. No main effects of food shape or complexity were found. Participants reported that they were not willing to wear the Sony SmartEyeglass during different types of dining experiences. Conclusion: DietCam is most accurate at identifying images that do not contain food. The platform from which the images are collected needs to be modified to enhance consumer acceptance.
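
    To make the four agreement outcomes concrete, here is a small sketch that tallies True/False Positives and Negatives between rater coding and automated coding. The per-image food-presence flags are invented, not the study's data or DietCam's actual output format:

```python
def agreement_categories(rater_flags, auto_flags):
    """Per-image food-presence flags -> percentage in each agreement category."""
    counts = {"True Positive": 0, "False Positive": 0,
              "False Negative": 0, "True Negative": 0}
    for rater, auto in zip(rater_flags, auto_flags):
        if auto and rater:
            counts["True Positive"] += 1    # food detected where raters coded food
        elif auto and not rater:
            counts["False Positive"] += 1   # food detected where raters coded none
        elif rater:
            counts["False Negative"] += 1   # coded food missed by detection
        else:
            counts["True Negative"] += 1    # correctly flagged as containing no food
    n = len(rater_flags)
    return {k: 100.0 * v / n for k, v in counts.items()}

# Six hypothetical images: rater coding vs automated coding
print(agreement_categories([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0]))
```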

    Experiments in lifelog organisation and retrieval at NTCIR

    Lifelogging can be described as the process by which individuals use various software and hardware devices to gather large archives of multimodal personal data from multiple sources and store them in a personal data archive, called a lifelog. The Lifelog task at NTCIR was a comparative benchmarking exercise with the aim of encouraging research into the organisation and retrieval of data from multimodal lifelogs. The task ran for over four years, from NTCIR-12 until NTCIR-14 (2015.02–2019.06), and offered participants five subtasks, each tackling a different challenge related to lifelog retrieval. In this chapter, a motivation is given for the Lifelog task and a review of progress since NTCIR-12 is presented. Finally, the lessons learned and the remaining challenges within the domain of lifelog retrieval are discussed.

    Technologies that assess the location of physical activity and sedentary behavior: a systematic review

    Background: The location in which physical activity and sedentary behavior are performed can provide valuable behavioral information, both in isolation and synergistically with other areas of physical activity and sedentary behavior research. Global positioning systems (GPS) have been used in physical activity research to identify outdoor location; however, while GPS can receive signals in certain indoor environments, it is not able to provide room- or subroom-level location. On average, adults spend a high proportion of their time indoors, so a measure of indoor location would provide valuable behavioral information. Objective: This systematic review sought to identify and critique technologies that have been or could be used to assess the location of physical activity and sedentary behavior. Methods: To identify published research papers, four electronic databases were searched using key terms built around behavior, technology, and location. To be eligible for inclusion, papers were required to be published in English and describe a wearable or portable technology or device capable of measuring location. Searches were performed up to February 4, 2015, supplemented by backward and forward reference searching. In an attempt to include novel devices which may not yet have made their way into the published research, searches were also performed using three Internet search engines. Specialized software was used to download search results and thus mitigate the potential pitfalls of changing search algorithms. Results: A total of 188 research papers met the inclusion criteria. Global positioning systems were the most widely used location technology in the published research, followed by wearable cameras and radio-frequency identification. Internet search engines identified 81 global positioning systems, 35 real-time locating systems, and 21 wearable cameras. Real-time locating systems determine the indoor location of a wearable tag via the known locations of reference nodes. Although the type of reference node and the location determination method vary between manufacturers, Wi-Fi appears to be the most popular approach. Conclusions: The addition of location information to existing measures of physical activity and sedentary behavior will provide important behavioral information.
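
    As a toy illustration of the reference-node idea behind real-time locating systems, the sketch below estimates a tag's indoor position as an RSSI-weighted centroid of known node coordinates. The coordinates and signal strengths are invented, and commercial systems typically use more sophisticated methods such as trilateration or fingerprinting:

```python
def weighted_centroid(nodes):
    """nodes: list of (x, y, rssi_dBm); stronger (less negative) RSSI -> more weight."""
    weights = [10 ** (rssi / 10.0) for _, _, rssi in nodes]  # dBm -> linear power
    total = sum(weights)
    x = sum(w * n[0] for w, n in zip(weights, nodes)) / total
    y = sum(w * n[1] for w, n in zip(weights, nodes)) / total
    return x, y

# Three Wi-Fi reference nodes at known room coordinates (metres)
print(weighted_centroid([(0.0, 0.0, -45), (5.0, 0.0, -60), (0.0, 4.0, -70)]))
```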

    Sedentary Behavior in Children by Wearable Cameras: Development of an Annotation Protocol

    Introduction: There is increasing evidence that not all types of sedentary behavior have the same harmful effects on children's health. Hence, there has been growing interest in the use of wearable cameras, whose images can reveal the type and context of sedentary behavior. The aim of this study is to develop a protocol to categorize children's wearable camera data into sedentary behavior components. Methods: Wearable camera data were collected in 3 different samples of children in 2014. A development sample (3 children aged 4–8 years) was used to design the annotation protocol. A training sample (4 children aged 10 years) was used to train 3 different coders. The independent reliability sample (14 children aged 9–11 years) was used for independent coding of wearable camera images and to estimate inter-rater agreement. Data were analyzed in 2018. Cohen's κ was calculated for every rater pair on a per-participant basis. Means and SDs were then calculated across per-participant κ scores. Results: A total of 41,651 images from 14 participants were considered for analysis. Inter-rater agreement over all raters over all the sedentary behavior components was almost perfect (mean κ=0.85, 95% CI=0.83, 0.87). Inter-rater reliability for screen-based sedentary behavior (mean κ=0.72, 95% CI=0.62, 0.82) and nonscreen sedentary behavior (κ=0.69, 95% CI=0.65, 0.72) showed substantial agreement. Inter-rater reliability for location (κ=0.91, 95% CI=0.88, 0.93) showed almost perfect agreement. Conclusions: A reliable annotation protocol to categorize wearable camera data of children into sedentary behavior components was developed. Once applied to larger samples of children, this protocol can ultimately help to better understand the potential harms of screen time and sedentary behavior in children.
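
    A brief sketch of the reliability computation reported above, assuming scikit-learn: Cohen's κ is computed for every rater pair on a per-participant basis and then averaged. The labels are invented sedentary-behavior codes, not the study's annotations:

```python
from itertools import combinations
import numpy as np
from sklearn.metrics import cohen_kappa_score

# per_participant[pid][rater] -> list of image-level codes (hypothetical data)
per_participant = {
    "child_01": {"r1": ["screen", "nonscreen", "screen"],
                 "r2": ["screen", "nonscreen", "nonscreen"],
                 "r3": ["screen", "nonscreen", "screen"]},
}

kappas = []
for pid, ratings in per_participant.items():
    for a, b in combinations(sorted(ratings), 2):   # every rater pair
        kappas.append(cohen_kappa_score(ratings[a], ratings[b]))

print(f"mean kappa = {np.mean(kappas):.2f} (SD {np.std(kappas, ddof=1):.2f})")
```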