4 research outputs found

    A multimodal approach for event detection from lifelogs

    This paper analyzes how personal lifelog data, which contains biometric, visual, and activity data, can be leveraged to detect points in time where the individual is partaking in an eating activity. To answer this question, three artificial neural network models were introduced: first, an image object detection model trained to detect eating-related objects using the YOLO framework; second, a feed-forward neural network (FANN); and third, a Long Short-Term Memory (LSTM) neural network model, the latter two of which attempt to detect ‘eating moments’ in the lifelog data. The results show promise, with an F1-score of 0.489 and an AUC of 0.796 for the FANN model, and an F1-score of 0.74 and an AUC of 0.835 for the LSTM model. However, there is clear room for improvement in all models. The models and methods introduced can help individuals monitor their nutrition habits so they are empowered to make healthy lifestyle decisions. Additionally, several methods for streamlining event detection in lifelog data are introduced.
    Master's thesis in Information Science (INFO390, MASV-INFO, MASV-IK)
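The F1 and AUC figures reported above can be reproduced with standard definitions. A minimal pure-Python sketch follows (not the thesis code; the AUC is computed as the probability that a randomly chosen positive example is scored above a randomly chosen negative one, which is equivalent to the area under the ROC curve):

```python
def f1_score(y_true, y_pred):
    """F1 = harmonic mean of precision and recall for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def auc_score(y_true, scores):
    """AUC as the probability a random positive outranks a random negative."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 for p in pos for n in neg if p > n)
    ties = sum(0.5 for p in pos for n in neg if p == n)
    return (wins + ties) / (len(pos) * len(neg))
```

An eating-moment detector emitting per-window probabilities would be thresholded for F1 and ranked for AUC.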

    Lifelogging As An Extreme Form of Personal Information Management -- What Lessons To Learn

    Personal data includes the digital footprints that we leave behind as part of our everyday activities, both online and offline in the real world. It includes data we collect ourselves, such as from wearables, as well as the data collected by others about our online behaviour and activities. Sometimes we are able to use the personal data we ourselves collect in order to examine some parts of our lives, but for the most part our personal data is leveraged by third parties, including internet companies, for services like targeted advertising and recommendations. Lifelogging is a form of extreme personal data gathering, and in this article we present an overview of the tools used to manage access to lifelogs, as demonstrated at the most recent of the annual Lifelog Search Challenge benchmarking workshops, where experimental systems are showcased in live, real-time information-seeking tasks by real users. This overview of these systems' capabilities shows the range of possibilities for accessing our own personal data, which may, in time, become more easily available as consumer-level services.

    Myscéal: an experimental interactive lifelog retrieval system for LSC'20

    The Lifelog Search Challenge (LSC) is an annual comparative benchmarking activity for comparing approaches to interactive retrieval from multimodal lifelogs. Being an interactive search challenge, issues such as retrieval accuracy, search speed, and usability of interfaces are key challenges that must be addressed by every participant. In this paper, we introduce Myscéal, an interactive lifelog retrieval engine designed to support novice users in retrieving items of interest from a large multimodal lifelog. Additionally, we introduce a new similarity measure called “aTFIDF” to match a user’s free-text information need with the multimodal lifelog index.

    Lifelog: moments retrieval algorithm

    The increase in the variety and quantity of wearable sensing devices has brought a parallel growth in the diversity and amount of data produced. Nowadays, any individual with a personal smartphone produces a large number of daily records of moments. This type of data results from everyday scenarios that are recorded in images and frequently detailed with biometric data as well as activity, location, and time records. When storing this diversity and amount of data, a question arises: how can we identify and retrieve an exact moment in large data archives? Retrieving a moment can serve the simple purpose of revisiting a distant episode, but it can also support people with memory disorders. The application of computer systems for this purpose is the main answer: in addition to identifying and retrieving a moment, they are applied with the main objective of improving human quality of life. These facts require such systems to reduce the communicational distance between natural language and computer language. To this end, they consist of text processing and analysis algorithms that aim to establish an interactive link between the users and the system.
    In this sense, the solution proposed in this dissertation is based on an algorithm that receives and understands the moment the user describes and tries to return that instant in the form of images, taken from the user's database, in which that moment may be represented. Its development involves the application of methodologies described in the state of the art and new approaches in the results-ranking system. The algorithm incorporates NLP tools that are fundamental to the communication between both parties. Moreover, it encompasses the TF-IDF weighting function with vectorization, supported by cosine similarity, which is responsible for selecting the moments that best match the user's description. The BM25 function was also introduced into the algorithm to reinforce the analysis of similarities between query and answers. The combination of both techniques gives the algorithm a greater probability of returning the correct moment. The developed mechanism shows very satisfactory and interesting results, since in several interactions it returns the correct moment or at least identifies episodes similar to the user's description. The knowledge acquired throughout this dissertation allows me to conclude that the algorithm would gain further value from a stronger emphasis on the textual description of a moment introduced by the user: the automatic identification of key fields would allow the filtering system applied in the algorithm to become fully automated.
    Master's thesis in Electronics and Telecommunications Engineering
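The ranking pipeline described above, TF-IDF vectors compared by cosine similarity and reinforced with BM25, can be sketched in a few lines of Python. This is an illustrative toy implementation, not the dissertation's code; the whitespace tokenizer, the smoothed IDF, and the BM25 parameters k1 and b are assumptions:

```python
import math
from collections import Counter

def tokenize(text):
    # Hypothetical minimal tokenizer; the real system would normalize more.
    return text.lower().split()

def tfidf_vectors(docs):
    """Build sparse TF-IDF vectors (dicts) for a corpus of token lists."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}  # smoothed IDF
    return [{t: tf * idf[t] for t, tf in Counter(doc).items()} for doc in docs], idf

def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Okapi BM25 score of each document against the query tokens."""
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    df = Counter(term for doc in docs for term in set(doc))
    scores = []
    for doc in docs:
        tf = Counter(doc)
        s = 0.0
        for t in query:
            if t not in tf:
                continue
            idf = math.log((n - df[t] + 0.5) / (df[t] + 0.5) + 1.0)
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(doc) / avgdl))
        scores.append(s)
    return scores

# Toy corpus of two "moments" described by their image annotations
docs = [tokenize("breakfast at the cafe with coffee"),
        tokenize("walking in the park at night")]
query = tokenize("coffee at breakfast")
vecs, idf = tfidf_vectors(docs)
qvec = {t: idf[t] for t in set(query) if t in idf}
```

Here both signals agree that the first moment matches the query best; a production system would combine the two scores (e.g. a weighted sum) before ranking.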