
    Lifelog access modelling using MemoryMesh

    Recently, a convergence of technologies has led to the emergence of lifelogging as a technology for personal data applications. Lifelogging will become ubiquitous in the near future, not just for memory enhancement and health management, but also in various other domains. While there are many devices available for gathering massive amounts of lifelogging data, there are still challenges in modelling large volumes of multi-modal lifelog data. In this thesis, we explore and address the problem of how to model lifelogs in order to make personal lifelogs more accessible to users from the perspective of collection, organization and visualization. To subdivide our research targets, we designed and followed these steps: 1. Lifelog activity recognition. We use multiple sensor data sources to analyse various daily-life activities, ranging from accelerometer data collected by mobile phones to images captured by wearable cameras. We propose a semantic, density-based algorithm to cope with concept selection issues for lifelogging sensory data. 2. Visual discovery of lifelog images. Most of the lifelog information we capture every day is in the form of images, so images contain significant information about our lives. Here we conduct experiments on visual content analysis of lifelog images, covering both image content and image metadata. 3. Linkage analysis of lifelogs. By exploring linkage analysis of lifelog data, we can connect all lifelog images using linkage models into a concept called the MemoryMesh. The thesis includes experimental evaluations using real-life data collected from multiple users and shows the performance of our algorithms in detecting the semantics of daily-life concepts and their effectiveness in activity recognition and lifelog retrieval.
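The density-based concept selection idea mentioned in step 1 can be sketched in a few lines. The function name, data layout, and threshold below are illustrative assumptions rather than the thesis's actual algorithm: the underlying idea is to keep only those concepts that are detected densely enough across a window of lifelog images.

```python
from collections import Counter

def select_concepts(detections, min_density=0.2):
    """Keep concepts whose detection density across a window of
    lifelog images exceeds a threshold (hypothetical sketch).

    detections: list of per-image concept lists,
        e.g. [["desk", "screen"], ["screen"], ["car"]]
    Returns the set of concepts detected in at least `min_density`
    of the images in the window.
    """
    n_images = len(detections)
    # Count each concept at most once per image.
    counts = Counter(c for image in detections for c in set(image))
    return {c for c, n in counts.items() if n / n_images >= min_density}

window = [["desk", "screen"], ["screen", "coffee"], ["screen"], ["car"]]
print(select_concepts(window, min_density=0.5))  # {'screen'}
```

A density threshold like this filters out spuriously detected concepts (one-off misclassifications) while retaining concepts that persist across a segment of the day.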

    A lifelogging system supporting multimodal access

    Today, technology has progressed to allow us to capture our lives digitally, for example by taking pictures, recording videos and sharing experiences over WiFi using smartphones. People’s lifestyles are changing; one example is the shift from traditional memo writing to the digital lifelog. Lifelogging is the process of using digital tools to collect personal data in order to illustrate the user’s daily life (Smith et al., 2011). The availability of smartphones embedded with different sensors, such as cameras and GPS, has encouraged the development of lifelogging. It has also brought new challenges in multi-sensor data collection, large-volume data storage, data analysis and appropriate representation of lifelog data across different devices. This study is designed to address the above challenges. A lifelogging system was developed to collect, store, analyse, and display multiple sensors’ data, i.e. supporting multimodal access. In this system, the multi-sensor data (also called data streams) is first transmitted from the smartphone to a server, only while the phone is charging. On the server side, six contexts are detected, namely personal, time, location, social, activity and environment. Events are then segmented and a related narrative is generated. Finally, lifelog data is presented differently on three widely used devices: the computer, the smartphone and the E-book reader. Lifelogging is likely to become a well-accepted technology in the coming years. Manual logging is not possible for most people and is not feasible in the long term; automatic lifelogging is needed. This study presents a lifelogging system which can automatically collect multi-sensor data, detect contexts, segment events, generate meaningful narratives and display the appropriate data on different devices based on their unique characteristics. The work in this thesis therefore contributes to automatic lifelogging development and in doing so makes a valuable contribution to the development of the field.
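The context-based event segmentation described above can be illustrated with a minimal sketch. The record layout and the segmentation rule (start a new event whenever the location or activity context changes) are assumptions for illustration; the actual system detects six contexts and will use richer logic.

```python
def segment_events(stream):
    """Split a time-ordered context stream into events, starting a new
    event whenever the location or activity context changes (sketch).

    stream: list of dicts like
        {"time": "09:00", "location": "home", "activity": "eating"}
    Returns a list of events, each a list of consecutive records.
    """
    events = []
    for record in stream:
        same_context = events and all(
            record[k] == events[-1][-1][k] for k in ("location", "activity")
        )
        if same_context:
            events[-1].append(record)   # extend the current event
        else:
            events.append([record])     # context changed: new event
    return events

stream = [
    {"time": "09:00", "location": "home", "activity": "eating"},
    {"time": "09:10", "location": "home", "activity": "eating"},
    {"time": "09:30", "location": "bus", "activity": "commuting"},
]
print(len(segment_events(stream)))  # 2
```

Each resulting event groups consecutive records that share a context, which is the natural unit over which a narrative sentence ("breakfast at home, then a bus commute") can be generated.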

    Varieties of artifacts: Embodied, perceptual, cognitive, and affective

    The primary goal of this essay is to provide a comprehensive overview and analysis of the various relations between material artifacts and the embodied mind. A secondary goal of this essay is to identify some of the trends in the design and use of artifacts. First, based on their functional properties, I identify four categories of artifacts co-opted by the embodied mind, namely (1) embodied artifacts, (2) perceptual artifacts, (3) cognitive artifacts, and (4) affective artifacts. These categories can overlap, so some artifacts are members of more than one category. I also identify some of the techniques (or skills) we use when interacting with artifacts. Identifying these categories of artifacts and techniques allows us to map the landscape of relations between embodied minds and the artifactual world. Second, having identified categories of artifacts and techniques, this essay then outlines some of the trends in the design and use of artifacts, focussing on neuroprosthetics, brain-computer interfaces, and personalisation algorithms nudging their users towards particular epistemic paths of information consumption.

    Debiased-CAM for bias-agnostic faithful visual explanations of deep convolutional networks

    Class activation maps (CAMs) explain convolutional neural network predictions by identifying salient pixels, but they become misaligned and misleading when explaining predictions on images under bias, such as images blurred accidentally or deliberately for privacy protection, or images with improper white balance. Despite model fine-tuning to improve prediction performance on these biased images, we demonstrate that CAM explanations become more deviated and unfaithful with increased image bias. We present Debiased-CAM to recover explanation faithfulness across various bias types and levels by training a multi-input, multi-task model with auxiliary tasks for CAM and bias level predictions. With CAM as a prediction task, explanations are made tunable by retraining the main model layers and made faithful by self-supervised learning from CAMs of unbiased images. The model provides representative, bias-agnostic CAM explanations about the predictions on biased images as if generated from their unbiased form. In four simulation studies with different biases and prediction tasks, Debiased-CAM improved both CAM faithfulness and task performance. We further conducted two controlled user studies to validate its truthfulness and helpfulness, respectively. Quantitative and qualitative analyses of participant responses confirmed Debiased-CAM as more truthful and helpful. Debiased-CAM thus provides a basis to generate more faithful and relevant explanations for a wide range of real-world applications with various sources of bias.
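As background for the explanations discussed above: the standard class activation map (Zhou et al., 2016), which Debiased-CAM builds on, is the class-weighted sum of the final convolutional feature maps. A minimal NumPy sketch of that computation (not the Debiased-CAM training code itself):

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Compute a class activation map as the class-weighted sum of
    the final convolutional feature maps.

    feature_maps: array of shape (K, H, W) — K feature maps
    class_weights: array of shape (K,) — target-class weights
    Returns an (H, W) saliency map, ReLU-ed and max-normalised.
    """
    # Weighted sum over the K channel axis: CAM(x, y) = sum_k w_k * A_k(x, y)
    cam = np.tensordot(class_weights, feature_maps, axes=1)
    cam = np.maximum(cam, 0.0)        # keep positive evidence only
    if cam.max() > 0:
        cam /= cam.max()              # normalise to [0, 1]
    return cam

# Usage with toy data: 8 feature maps of size 7x7 yields a 7x7 map.
maps = np.random.rand(8, 7, 7)
weights = np.random.rand(8)
cam = class_activation_map(maps, weights)
print(cam.shape)  # (7, 7)
```

Debiased-CAM's contribution is to make this map a *prediction target*: the network is trained so that the CAM it produces for a biased image matches the CAM of the unbiased original.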

    Composing with Matter: Interdisciplinary Explorations Between the Natural and the Artificial

    This practice-based research, which includes a written thesis and a portfolio of creative practice, represents an interdisciplinary exploration of co-composition between natural and artificial matter as otherworldly phenomena. Accelerated by the application of recent technologies to control natural materials, matter has become merged between nature and artefact, offering new potentials as the boundaries become increasingly blurred. This thesis presents a series of complementary sound artworks, including transition [systemic], transition [characteristic], and moment, which were devised by co-composing with natural and artificial matter as co-authors and co-makers, combining sonic and visual elements to generate multiple perspectives. Raising questions about the boundary between nature and artificiality, it aims to propose a new methodology for sound art in the human-dominated Anthropocene epoch. This research employs natural elements and processes to engage with sonic and visual anthropomorphism. It is focused on generative processes in the organisation of matter, here analysed and harnessed for sound expression, using acoustic phenomena including the inaudible range that can be perceived through matter. Through a laboratory-based study made in collaboration with scientists, three ‘life-like’ features of the generative processes of materials are discussed: 1) fusion and division, 2) network formation, and 3) pulse and rhythm. The practice explores these features to develop a new methodology of authorship and making, examining two questions. How can life-like behaviours of matter be portrayed through sonic and visual modes of expression? And in what ways might the expression of life-like behaviours be grasped by human perception? In conclusion, by integrating the agency of matter into the compositional processes, life-like features – as described by current theories in art, design, science, and philosophy – are made apparent.