    A lifelogging system supporting multimodal access

    Today, technology allows us to capture our lives digitally: we take pictures, record videos, and use smartphones over WiFi to share experiences. People's lifestyles are changing, for example from traditional memo writing to the digital lifelog. Lifelogging is the process of using digital tools to collect personal data in order to illustrate the user's daily life (Smith et al., 2011). The availability of smartphones with embedded sensors such as cameras and GPS has encouraged the development of lifelogging, but it has also brought new challenges in multi-sensor data collection, large-volume data storage, data analysis, and the appropriate representation of lifelog data across different devices. This study is designed to address these challenges. A lifelogging system was developed to collect, store, analyse, and display data from multiple sensors, i.e. to support multimodal access. In this system, the multi-sensor data (also called data streams) is first transmitted from the smartphone to the server, and only while the phone is being charged. On the server side, six contexts are detected: personal, time, location, social, activity and environment. Events are then segmented and a related narrative is generated. Finally, lifelog data is presented differently on three widely used devices: the computer, the smartphone and the E-book reader. Lifelogging is likely to become a well-accepted technology in the coming years. Manual logging is not possible for most people and is not feasible in the long term, so automatic lifelogging is needed. This study presents a lifelogging system which can automatically collect multi-sensor data, detect contexts, segment events, generate meaningful narratives and display the appropriate data on different devices according to their unique characteristics. The work in this thesis therefore contributes to the development of automatic lifelogging and, in doing so, makes a valuable contribution to the field.
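    The abstract above outlines a server-side pipeline (upload while charging, context detection, event segmentation, narrative generation). The following minimal Python sketch illustrates those stages only; the sensor names, context labels, thresholds and segmentation rule are invented for illustration and are not taken from the thesis.

```python
# Hypothetical sketch of the server-side lifelog pipeline: timestamped sensor
# readings are mapped to context labels, segmented into events whenever the
# activity or location context changes, and summarised as one-line narratives.
# All sensor names, labels and thresholds below are illustrative only.
from dataclasses import dataclass
from typing import Dict, List

CONTEXT_TYPES = ["personal", "time", "location", "social", "activity", "environment"]

@dataclass
class Reading:
    timestamp: float   # seconds since midnight, for simplicity
    sensor: str        # e.g. "gps", "accelerometer"
    value: object

@dataclass
class Event:
    start: float
    end: float
    contexts: Dict[str, str]

def detect_contexts(reading: Reading) -> Dict[str, str]:
    """Toy context detection: derive coarse labels from a single reading."""
    contexts = {c: "unknown" for c in CONTEXT_TYPES}
    if reading.sensor == "gps":
        contexts["location"] = "outdoors" if reading.value else "indoors"
    elif reading.sensor == "accelerometer":
        contexts["activity"] = "walking" if reading.value > 1.5 else "still"
    contexts["time"] = "day" if 6 <= (reading.timestamp / 3600) % 24 < 18 else "night"
    return contexts

def segment_events(readings: List[Reading]) -> List[Event]:
    """Start a new event whenever the activity or location label changes."""
    events: List[Event] = []
    for r in sorted(readings, key=lambda r: r.timestamp):
        ctx = detect_contexts(r)
        same = events and all(events[-1].contexts[k] == ctx[k]
                              for k in ("activity", "location"))
        if same:
            events[-1].end = r.timestamp
        else:
            events.append(Event(start=r.timestamp, end=r.timestamp, contexts=ctx))
    return events

def narrate(event: Event) -> str:
    """Generate a short narrative line for one event."""
    return (f"From {event.start:.0f}s to {event.end:.0f}s: "
            f"{event.contexts['activity']}, {event.contexts['location']}, "
            f"{event.contexts['time']}.")

if __name__ == "__main__":
    stream = [Reading(0, "accelerometer", 0.2), Reading(600, "gps", True),
              Reading(1200, "accelerometer", 2.0), Reading(1800, "accelerometer", 2.1)]
    for ev in segment_events(stream):
        print(narrate(ev))
```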

    Knowledge Extraction and Summarization for Textual Case-Based Reasoning: A Probabilistic Task Content Modeling Approach

    Case-Based Reasoning (CBR) is an Artificial Intelligence (AI) technique that has been successfully used for building knowledge systems for tasks and domains where different knowledge sources are readily available, particularly in the form of problem-solving situations known as cases. Cases generally display a clear distinction between the different components of problem solving, for instance the components of the problem description and of the problem solution; an existing, explicit case structure is therefore presumed. However, when problem-solving experiences are stored as textual narratives (in natural language), there is no explicit case structure, so CBR cannot be applied directly. This thesis presents a novel approach for authoring cases from episodic textual narratives and organizing these cases in a case base structure that better supports user goals. The approach is based on the following fundamental ideas:
    - CBR as a problem-solving technique is goal-oriented, and goals are realized by means of task strategies.
    - Tasks have an internal structure that can be represented in terms of participating events and event components.
    - Episodic textual narratives are not random containers of domain concept terms; rather, the text can be considered as generated by the underlying task structure whose content it describes.
    The presented case base authoring process combines task knowledge with Natural Language Processing (NLP) techniques to perform the needed knowledge extraction and summarization.
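    As a reading aid only, the sketch below shows one way the case authoring idea could look in code: each sentence of an episodic narrative is assigned to a task component (problem description or problem solution) using a toy keyword-based scorer that stands in for the thesis's probabilistic task content model. The component names, keyword lists and example narrative are invented, not drawn from the thesis.

```python
# Illustrative sketch (not the thesis implementation): author a structured case
# from an unstructured narrative by assigning each sentence to a task component.
# The keyword "content model" below is a stand-in for a learned probabilistic one.
from dataclasses import dataclass
from typing import Dict, List
import re

# Toy per-component keyword lists (hypothetical).
COMPONENT_KEYWORDS = {
    "problem_description": ["reported", "failed", "symptom", "observed", "error"],
    "problem_solution": ["replaced", "fixed", "adjusted", "resolved", "restarted"],
}

@dataclass
class Case:
    goal: str
    components: Dict[str, List[str]]

def classify_sentence(sentence: str) -> str:
    """Assign a sentence to the component whose keywords it matches most."""
    scores = {
        comp: sum(1 for kw in kws if kw in sentence.lower())
        for comp, kws in COMPONENT_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "problem_description"

def author_case(goal: str, narrative: str) -> Case:
    """Turn an episodic narrative into a case with explicit components."""
    case = Case(goal=goal, components={c: [] for c in COMPONENT_KEYWORDS})
    for sentence in re.split(r"(?<=[.!?])\s+", narrative.strip()):
        if sentence:
            case.components[classify_sentence(sentence)].append(sentence)
    return case

if __name__ == "__main__":
    narrative = ("The operator reported an error on startup. "
                 "A blown fuse was observed. The fuse was replaced and "
                 "the system restarted without problems.")
    case = author_case(goal="restore_startup", narrative=narrative)
    for comp, sentences in case.components.items():
        print(comp, "->", sentences)
```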