
    iMind: An intelligent tool to support content comprehension

    Usually while reading, comprehension difficulty affects individual performance; it can lead, for example, to a slower learning process, lower work quality, and inefficient decision-making. This thesis introduces an intelligent tool called “iMind”, which uses wearable devices (e.g., smartwatches) to evaluate a user's comprehension difficulty and engagement level while reading digital content. Comprehension difficulty can occur when there are not enough mental resources available for mental processing; the mental resource used for this processing is the cognitive load (CL). Fluctuations in CL lead to physiological manifestations of the autonomic nervous system (ANS), such as an increase in heart rate, which can be measured by wearables like smartwatches. With a low-cost eye tracker, these ANS measurements can be correlated with specific content regions. iMind therefore combines a smartwatch and an eye tracker to identify comprehension difficulty at the level of content regions (where the user is looking). The tool uses machine learning techniques to classify content regions as difficult or non-difficult based on biometric and non-biometric features, and classified regions with 75% accuracy and an 80% F-score using linear regression (LR). With the classified regions, it will become possible to provide contextual support to the reader in real time, for example by translating the sentences that induced comprehension difficulty.
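    Since the abstract describes a per-region classification step, here is a minimal sketch of what such a classifier could look like. Everything in it is assumed for illustration: the feature set (mean heart rate, fixation time, word count), the synthetic data, and the use of scikit-learn's LogisticRegression as a stand-in for the LR model the thesis reports; it is not the iMind implementation.

        # Illustrative sketch only: features, data, and model choice are assumptions, not iMind's code.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import accuracy_score, f1_score
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        n_regions = 200

        # Hypothetical per-region features: mean heart rate (bpm), total fixation time (s), word count.
        X = np.column_stack([
            rng.normal(75, 8, n_regions),
            rng.gamma(2.0, 1.5, n_regions),
            rng.integers(5, 40, n_regions),
        ])
        # Synthetic labels: longer fixations and elevated heart rate tend to mark "difficult" regions.
        y = ((X[:, 0] - 75) / 8 + (X[:, 1] - 3.0) + rng.normal(0, 1, n_regions) > 0).astype(int)

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)
        pred = clf.predict(X_test)
        print(f"accuracy: {accuracy_score(y_test, pred):.2f}, F1: {f1_score(y_test, pred):.2f}")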

    Integrating passive ubiquitous surfaces into human-computer interaction

    Mobile technologies enable people to interact with computers ubiquitously. This dissertation investigates how ordinary, ubiquitous surfaces can be integrated into human-computer interaction to extend the interaction space beyond the edge of the display. It shows that acoustic and tactile features generated during an interaction can be combined to identify input events, the user, and the surface. In addition, a heterogeneous distribution of different surfaces proves particularly suitable for realizing versatile interaction modalities. However, privacy concerns must be considered when selecting sensors, and context can be crucial in determining whether, and which, interaction should be performed.
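    As a rough illustration of combining acoustic and tactile features, the sketch below extracts a few simple descriptors from a tap's audio and accelerometer windows and feeds them to a classifier. The surface classes, signal model, and feature choices are invented for the example; the dissertation's actual sensing pipeline is not shown here.

        # Hypothetical sketch: fusing acoustic and tactile (IMU) features to classify the tapped surface.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def acoustic_features(audio, sr=16000):
            # RMS energy and spectral centroid of one tap's audio window.
            spectrum = np.abs(np.fft.rfft(audio))
            freqs = np.fft.rfftfreq(len(audio), d=1.0 / sr)
            centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
            return np.array([np.sqrt(np.mean(audio ** 2)), centroid])

        def tactile_features(accel):
            # Per-axis mean and standard deviation of a 3-axis accelerometer window.
            return np.concatenate([accel.mean(axis=0), accel.std(axis=0)])

        def fused_features(audio, accel):
            return np.concatenate([acoustic_features(audio), tactile_features(accel)])

        # Synthetic taps standing in for recordings on two surface types (made-up resonances).
        rng = np.random.default_rng(1)
        t = np.arange(0, 0.05, 1 / 16000)
        X, y = [], []
        for label, (tone, jitter) in enumerate([(400, 0.02), (1200, 0.005)]):
            for _ in range(50):
                audio = np.sin(2 * np.pi * tone * t) * np.exp(-t * 80) + rng.normal(0, 0.01, t.size)
                accel = rng.normal(0, jitter, (50, 3))
                X.append(fused_features(audio, accel))
                y.append(label)

        clf = RandomForestClassifier(random_state=0).fit(X, y)
        print("predicted surface class:", clf.predict([fused_features(audio, accel)])[0])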

    Inferences from Interactions with Smart Devices: Security Leaks and Defenses

    We unlock smart devices such as smartphones several times a day using a PIN, password, or graphical pattern if the device is secured by one. The scope and usage of smart devices are expanding day by day in our everyday lives, and so is the need to make them more secure. In the near future, we may need to authenticate ourselves on emerging smart devices such as electronic doors, exercise equipment, power tools, medical devices, and smart TV remote controls. While recent research focuses on developing new behavior-based methods to authenticate users on these smart devices, PINs and passwords remain the primary methods of authentication. Although recent research exposes observation-based vulnerabilities, the popular belief is that direct observation attacks can be thwarted by simple methods that obscure the attacker's view of the input console (or screen). In this dissertation, we study users' hand movement patterns while they type on their smart devices. The study concentrates on two factors: (1) finding security leaks in the observed hand movement patterns (we show that hand movement on its own reveals the user's sensitive information) and (2) developing methods to build a lightweight, easy-to-use, and more secure authentication system. The hand movement patterns were captured through a video camcorder and the device's built-in motion sensors, such as the gyroscope and accelerometer.
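    To make the motion-sensor side channel concrete, the sketch below detects tap events in a synthetic accelerometer/gyroscope stream and cuts fixed windows around them, the kind of segmentation a keystroke-inference pipeline would run before feature extraction. The thresholds, sampling rate, and injected taps are all assumptions, not the dissertation's recorded data.

        # Illustrative sketch (not the dissertation's pipeline): segmenting motion-sensor data around taps.
        import numpy as np

        def detect_taps(accel, threshold=1.5, refractory=20):
            # Return sample indices where 3-axis acceleration magnitude spikes above `threshold` (g).
            magnitude = np.linalg.norm(accel, axis=1)
            taps, last = [], -refractory
            for i, m in enumerate(magnitude):
                if m > threshold and i - last >= refractory:
                    taps.append(i)
                    last = i
            return taps

        def tap_windows(accel, gyro, taps, half_width=15):
            # Concatenate accelerometer and gyroscope samples around each detected tap.
            windows = []
            for i in taps:
                a = accel[max(0, i - half_width): i + half_width]
                g = gyro[max(0, i - half_width): i + half_width]
                windows.append(np.concatenate([a.ravel(), g.ravel()]))
            return windows

        # Synthetic 100 Hz stream with three injected tap impulses.
        rng = np.random.default_rng(2)
        accel = rng.normal(0, 0.05, (1000, 3)) + np.array([0, 0, 1.0])  # gravity on the z-axis
        gyro = rng.normal(0, 0.02, (1000, 3))
        for i in (200, 500, 800):
            accel[i] += [0.8, 0.3, 0.6]  # simulated tap

        taps = detect_taps(accel)
        print("taps at samples:", taps, "window length:", len(tap_windows(accel, gyro, taps)[0]))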

    Interdisciplinary views of fNIRS: Current advancements, equity challenges, and an agenda for future needs of a diverse fNIRS research community

    Functional Near-Infrared Spectroscopy (fNIRS) is an innovative and promising neuroimaging modality for studying brain activity in real-world environments. While fNIRS has seen rapid advancements in hardware, software, and research applications since its emergence nearly 30 years ago, limitations still exist in all three areas, and existing practices contribute to greater bias within the neuroscience research community. We spotlight fNIRS through the lens of different end-application users, including the unique perspective of an fNIRS manufacturer, and report the challenges of using this technology across several research disciplines and populations. Through a review of different research domains where fNIRS is utilized, we identify and address the presence of bias, specifically due to the constraints of current fNIRS technology, limited diversity among sample populations, and the societal prejudice that infiltrates today's research. Finally, we provide resources for minimizing bias in neuroscience research and an application agenda for the future use of fNIRS that is equitable, diverse, and inclusive.

    The 2023 wearable photoplethysmography roadmap

    Photoplethysmography is a key sensing technology used in wearable devices such as smartwatches and fitness trackers. Currently, photoplethysmography sensors are used to monitor physiological parameters including heart rate and heart rhythm, and to track activities like sleep and exercise. Yet wearable photoplethysmography has the potential to provide much more information on health and wellbeing, which could inform clinical decision making. This Roadmap outlines directions for research and development to realise the full potential of wearable photoplethysmography. Experts discuss key topics within the areas of sensor design, signal processing, clinical applications, and research directions. Their perspectives provide valuable guidance to researchers developing wearable photoplethysmography technology.
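    As a toy example of the kind of signal processing the roadmap covers, the snippet below estimates heart rate from a synthetic PPG waveform by detecting systolic peaks and averaging the inter-beat intervals. The sampling rate, peak-detection settings, and waveform are illustrative assumptions, not guidance from the roadmap itself.

        # Minimal sketch: heart-rate estimation from a synthetic PPG signal via peak detection.
        import numpy as np
        from scipy.signal import find_peaks

        fs = 50                                # sampling rate (Hz), typical for wrist-worn PPG
        t = np.arange(0, 30, 1 / fs)           # 30 s of signal
        hr_true = 72                           # bpm used to synthesize the waveform
        ppg = np.sin(2 * np.pi * (hr_true / 60) * t) + 0.1 * np.random.default_rng(3).normal(size=t.size)

        # Peaks must be at least 0.4 s apart (i.e. below 150 bpm) to reject noise spikes.
        peaks, _ = find_peaks(ppg, distance=int(0.4 * fs), prominence=0.5)
        ibi = np.diff(peaks) / fs              # inter-beat intervals in seconds
        print(f"estimated heart rate: {60 / ibi.mean():.1f} bpm")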

    Wellness, Fitness, and Lifestyle Sensing Applications


    Sensor Technologies to Manage the Physiological Traits of Chronic Pain: A Review

    Non-oncologic chronic pain is a common, high-morbidity impairment worldwide, acknowledged as a condition with a significant impact on quality of life. Pain intensity is largely perceived as a subjective experience, which makes its objective measurement challenging. However, the physiological traces of pain make it possible to correlate pain with vital signs, such as heart rate variability, skin conductance, and the electromyogram, or with health performance metrics derived from daily activity monitoring or facial expressions, all of which can be acquired with diverse sensor technologies and multisensory approaches. As the assessment and management of pain are essential issues for a wide range of clinical disorders and treatments, this paper reviews different sensor-based approaches applied to the objective evaluation of non-oncological chronic pain. The space of available technologies and resources aimed at pain assessment represents a diversified set of alternatives that can be exploited to address the multidimensional nature of pain.
    Funding: Ministerio de Economía y Competitividad (Instituto de Salud Carlos III) PI15/00306; Junta de Andalucía PIN-0394-2017; Unión Europea "FRAIL"
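    Heart rate variability is one of the vital signs mentioned above; as a small, generic example (not taken from the review), the snippet below computes two standard HRV features, SDNN and RMSSD, from a synthetic sequence of RR intervals.

        # Hedged sketch: standard HRV features from synthetic RR (inter-beat) intervals in milliseconds.
        import numpy as np

        def hrv_features(rr_ms):
            # SDNN: overall variability; RMSSD: short-term, beat-to-beat variability.
            rr = np.asarray(rr_ms, dtype=float)
            sdnn = rr.std(ddof=1)
            rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
            return sdnn, rmssd

        rng = np.random.default_rng(4)
        rr = 800 + np.cumsum(rng.normal(0, 15, 120)) * 0.1 + rng.normal(0, 25, 120)
        sdnn, rmssd = hrv_features(rr)
        print(f"SDNN: {sdnn:.1f} ms, RMSSD: {rmssd:.1f} ms")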

    Use of Augmented Reality in Human Wayfinding: A Systematic Review

    Augmented reality technology has emerged as a promising solution to assist with wayfinding difficulties, bridging the gap between obtaining navigational assistance and maintaining awareness of one's real-world surroundings. This article presents a systematic review of the research literature on AR navigation technologies. An in-depth analysis of 65 salient studies was conducted, addressing four main research topics: (1) the current state of the art of AR navigational assistance technologies, (2) user experiences with these technologies, (3) the effect of AR on human wayfinding performance, and (4) the impacts of AR on human navigational cognition. Notably, studies demonstrate that AR can decrease cognitive load and improve cognitive map development compared with traditional guidance modalities. However, findings regarding wayfinding performance and user experience were mixed: some studies suggest that AR has little impact on outdoor navigational performance, and certain information modalities may be distracting and ineffective. This article discusses these nuances in detail, supporting the conclusion that AR holds great potential to enhance wayfinding by providing enriched navigational cues, interactive experiences, and improved situational awareness.