19 research outputs found

    Requirement analysis and sensor specifications – First version

    In this first version of the deliverable, we make the following contributions. To design the WEKIT capturing platform and the associated experience capturing API, we use a systems-engineering methodology that is relevant to domains such as aviation, space, and medicine, and to professions such as technicians, astronauts, and medical staff. Within this methodology, we explore the systems-engineering process and how it can be used in the project to support the different work packages and, more importantly, the deliverables that will follow this one. Next, we map high-level functions or tasks (associated with experience transfer from expert to trainee) to low-level functions such as gaze, voice, video, body posture, hand gestures, bio-signals, fatigue levels, and the user's location in the environment, and we link the low-level functions to their associated sensors. We also provide a brief overview of state-of-the-art sensors in terms of their technical specifications, possible limitations, standards, and platforms. We outline a set of recommendations for the sensors that are most relevant to the WEKIT project, taking into consideration the environmental, technical, and human factors described in other deliverables. We recommend the Microsoft HoloLens (augmented-reality glasses), the MyndBand with NeuroSky chipset (EEG), the Microsoft Kinect and Lumo Lift (body-posture tracking), and the Leap Motion, Intel RealSense, and Myo armband (hand-gesture tracking). For eye tracking, an existing eye-tracking system can be customised to complement the augmented-reality glasses, whose built-in microphone can capture the expert's voice. We propose a modular approach for the design of the WEKIT experience capturing system and recommend that the capturing system have sufficient storage or transmission capabilities.
Finally, we highlight common issues associated with the use of the different sensors. We consider that this set of recommendations can inform the design and integration of the WEKIT capturing platform and the WEKIT experience capturing API, and shorten the time required to select the combination of sensors for the first prototype.
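The function-to-sensor mapping recommended in the abstract can be pictured as a simple lookup table. This is an illustrative sketch only, not the deliverable's actual API; the dictionary keys and helper name are hypothetical, while the sensor names are taken from the abstract.

```python
# Hypothetical sketch: the deliverable's low-level-function-to-sensor
# mapping as a lookup table (sensor names from the abstract).
FUNCTION_TO_SENSORS = {
    "gaze": ["customised eye-tracking system"],
    "voice": ["AR-glasses built-in microphone"],
    "body_posture": ["Microsoft Kinect", "Lumo Lift"],
    "hand_gestures": ["Leap Motion", "Intel RealSense", "Myo armband"],
    "bio_signals": ["MyndBand (NeuroSky chipset)"],
}

def sensors_for(function: str) -> list[str]:
    """Return the candidate sensors for a low-level capture function."""
    return FUNCTION_TO_SENSORS.get(function, [])

print(sensors_for("hand_gestures"))
# ['Leap Motion', 'Intel RealSense', 'Myo armband']
```

A modular capturing platform, as proposed, could swap entries in such a table without changing the rest of the pipeline.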

    Brain-Computer Interface and Silent Speech Recognition on Decentralized Messaging Applications

    Online communication has been gaining prevalence in people's daily lives, its widespread adoption catalysed by technological advances, especially in instant-messaging platforms. Although strides have been made towards including disabled individuals and easing communication between peers, people with hand/arm impairments have little to no support in mainstream applications for communicating efficiently with others. Moreover, a problem with current solutions that fall back on speech-to-text techniques is the lack of privacy when they are used in public. Additionally, as centralized systems have come under scrutiny regarding privacy and security, the development of alternative decentralized solutions based on blockchain technology and its variants has increased. Within this inclusivity paradigm, the project showcases an alternative approach to human-computer interaction that supports the aforementioned users, combining a brain-computer interface with a silent speech recognition system for application navigation and text input, respectively. The brain-computer interface allows a user to interact with the platform by thought alone, while the silent speech recognition system enables text input by reading activity from the articulatory muscles without the need to speak audibly. The combination of the two techniques therefore creates a fully hands-free interaction with the platform, empowering hand/arm-disabled users in daily communication.
Furthermore, the users of the application are inserted into a decentralized system designed for secure communication and exchange of data between peers, reinforcing the privacy concern that is a cornerstone of the platform.
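The two hands-free channels described above, BCI commands for navigation and silent-speech recognition for text, could be merged into one event stream roughly as follows. This is a minimal sketch under assumptions; the event and state names are hypothetical, not the project's actual API.

```python
# Illustrative sketch (not the project's real interface): routing two
# hands-free input channels -- BCI navigation commands and
# silent-speech text -- into a shared application state.
from dataclasses import dataclass

@dataclass
class InputEvent:
    source: str   # "bci" or "silent_speech" (hypothetical labels)
    payload: str  # navigation command or recognised text

def route(event: InputEvent, state: dict) -> None:
    if event.source == "bci":
        state["focus"] = event.payload                       # navigate
    elif event.source == "silent_speech":
        state["draft"] = state.get("draft", "") + event.payload  # type

state: dict = {}
route(InputEvent("bci", "next_chat"), state)
route(InputEvent("silent_speech", "hello"), state)
print(state)  # {'focus': 'next_chat', 'draft': 'hello'}
```

The point of the sketch is the separation of concerns: the BCI never touches the text draft, and the recogniser never moves the focus, so either channel can be replaced independently.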

    Ubiquitous computing and natural interfaces for environmental information

    Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, for the degree of Master in Environmental Engineering, profile Gestão e Sistemas Ambientais. The objective of the next computing revolution is to embed every street, building, room, and object with computational power. Ubiquitous computing (ubicomp) will allow every object to receive and transmit information, sense its surroundings and act accordingly, be located from anywhere in the world, and connect every person. Everyone will be able to access information regardless of age, computer knowledge, literacy, or physical impairment. It will impact the world profoundly, empowering mankind and improving the environment, but it will also create new challenges that our society, economy, health, and global environment will have to overcome; negative impacts have to be identified and dealt with in advance. Despite these concerns, environmental studies have been largely absent from discussions of the new paradigm. This thesis examines ubiquitous computing and its technological emergence, raises awareness of future impacts, and explores the design of new interfaces and rich interaction modes. Environmental information is approached as an area that may greatly benefit from ubicomp as a way to gather, process, and disseminate it, while simultaneously complying with the Aarhus Convention. In an educational context, new media are poised to revolutionize the way we perceive, learn, and interact with environmental information. cUbiq is presented as a natural interface for accessing that information.

    Affective Computing for Emotion Detection using Vision and Wearable Sensors

    This research explores the opportunities, challenges, limitations, and advancements in computing that relates to, arises from, or deliberately influences emotions (Picard, 1997). The field is referred to as Affective Computing (AC) and is expected to play a major role in the engineering and development of computationally and cognitively intelligent systems, processors, and applications in the future. Today the field of AC is bolstered by the emergence of multiple sources of affective data and is fuelled by developments in various Internet of Things (IoT) projects and the fusion potential of multiple sensory affective data streams. The core focus of this thesis is to investigate whether the sensitivity and specificity (predictive performance) of AC, based on the fusion of multi-sensor data streams, are fit for purpose: can such AC-powered technologies and techniques truly deliver increasingly accurate emotion predictions of subjects in the real world? The thesis begins by presenting the research justifications and AC research questions used to formulate the original thesis hypothesis and objectives. As part of the research, a detailed state-of-the-art investigation explored many aspects of AC from both scientific and technological perspectives. The complexity of AC as a multi-sensor, multi-modality data-fusion problem unfolded during this investigation, ultimately leading to the creation of a conceptual AC architecture intended as a practical and theoretical foundation for the engineering of future AC platforms and solutions.
    The AC conceptual architecture developed in this research was applied to the engineering of a series of software artifacts that were combined into a prototypical AC multi-sensor platform known as the Emotion Fusion Server (EFS), used in the experimentation phases of the research. Using the EFS platform, a detailed series of AC experiments investigated whether fusing multiple sensory sources of affective data can significantly increase the accuracy of emotion prediction by computationally intelligent means. The research involved numerous controlled experiments and statistical analysis of sensor performance for AC purposes; the findings serve to assess the feasibility of AC in various domains and point to future directions for the field. The experimental investigations applied statistical methods and techniques, and the results, analytics, and evaluations are presented throughout the two thesis volumes. The thesis concludes with a detailed set of formal findings, conclusions, and decisions regarding the overarching hypothesis on the sensitivity and specificity of fusing vision and wearable sensor modalities, and offers foresight and guidance on the many problems, challenges, and projections for the AC field.
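Decision-level fusion of per-modality emotion scores, one plausible reading of what a fusion server such as the EFS performs, can be sketched in a few lines. This is a minimal illustration under assumptions (the weighting scheme and class labels are hypothetical, not taken from the thesis).

```python
# Hedged sketch: late (decision-level) fusion of per-modality emotion
# probabilities via a weighted average, then arg-max over classes.
def fuse(predictions: list[dict[str, float]],
         weights: list[float]) -> str:
    """Combine per-modality class probabilities and return the winner."""
    combined: dict[str, float] = {}
    for probs, w in zip(predictions, weights):
        for emotion, p in probs.items():
            combined[emotion] = combined.get(emotion, 0.0) + w * p
    return max(combined, key=lambda e: combined[e])

vision = {"happy": 0.7, "neutral": 0.3}     # e.g. facial expression
wearable = {"happy": 0.4, "neutral": 0.6}   # e.g. heart-rate features
print(fuse([vision, wearable], weights=[0.6, 0.4]))  # happy
```

Weighting lets a deployment trust the historically more specific modality, which is exactly the sensitivity/specificity trade-off the thesis hypothesis interrogates.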

    Decentralization in messaging applications with support for contactless interaction

    Peer-to-peer communication has been gaining prevalence in people's daily lives, its widespread adoption catalysed by technological advances. Although strides have been made towards including disabled individuals and easing communication between peers, people with arm/hand impairments have little to no support in mainstream applications for communicating efficiently with others. Additionally, as centralized systems have come under scrutiny regarding privacy and security, the development of alternative, decentralized solutions has increased, a movement pioneered by Bitcoin that culminated in blockchain technology and its variants. Aiming to expand inclusivity in the messaging-application panorama, this project showcases an alternative, contactless form of human-computer interaction that supports disabled individuals, with a focus on the decentralized backend. Users of the application take part in a decentralized network based on a distributed hash table, designed for secure communication (granted by a custom cryptographic messaging protocol) and exchange of data between peers. Such a system is resilient both to tampering attacks and to central points of failure (akin to blockchains), and it has no long-term restrictions on scalability, a recurring issue in blockchain-based platforms. The conducted experiments show a level of performance similar to mainstream centralized approaches, outperforming blockchain-based decentralized applications on the delay between sending and receiving messages.
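The tamper-evidence the abstract attributes to the custom cryptographic messaging protocol can be illustrated with a standard message-authentication primitive. This is only a sketch under assumptions: the project's actual protocol is not specified in the abstract, and an HMAC over a shared key stands in here for whatever scheme it really uses.

```python
# Illustrative sketch only: tamper-evident peer messages approximated
# with an HMAC tag over the payload (stand-in for the project's
# unspecified custom cryptographic protocol).
import hashlib
import hmac

def seal(key: bytes, sender: str, text: str) -> dict:
    """Attach an authentication tag to a message."""
    payload = f"{sender}:{text}".encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"sender": sender, "text": text, "tag": tag}

def verify(key: bytes, msg: dict) -> bool:
    """Recompute the tag; any modification of sender or text fails."""
    payload = f"{msg['sender']}:{msg['text']}".encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["tag"])

key = b"shared-session-key"
msg = seal(key, "alice", "hi")
assert verify(key, msg)        # intact message verifies
msg["text"] = "tampered"
assert not verify(key, msg)    # any alteration is detected
```

In a DHT-based network, such per-message integrity checks are what make stored or relayed data resilient to tampering without requiring a blockchain's global ledger.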

    Wearable sensors for learning enhancement in higher education

    Wearable sensors have traditionally been used to measure and monitor vital signs for well-being and healthcare applications. However, there is growing interest in using these technologies to facilitate teaching and learning, particularly in higher education. The aim of this paper is therefore to systematically review the range of wearable devices that have been used to enhance the teaching and delivery of engineering curricula in higher education. Moreover, we compare the advantages and disadvantages of these devices according to where they are worn on the human body. According to our survey, wearable devices for enhanced learning have mainly been worn on the head (e.g., eyeglasses), wrist (e.g., watches), and chest (e.g., electrocardiogram patches). Among those locations, head-worn devices enable better student engagement with the learning materials, improved student attention, and higher spatial and visual awareness. We identify the research questions and discuss the inclusion and exclusion criteria, presenting the challenges faced by researchers in implementing learning technologies for engineering education. Furthermore, we provide recommendations on using wearable devices to improve the teaching and learning of engineering courses in higher education.

    Workload-aware systems and interfaces for cognitive augmentation

    In today's society, our cognition is constantly influenced by information intake, attention switching, and task interruptions. This increases the difficulty of a given task, adding to the existing workload and compromising cognitive performance. The human body expresses the use of cognitive resources through physiological responses when confronted with high cognitive workload, temporarily mobilizing additional resources at the cost of accelerated mental exhaustion. We predict that recent developments in physiological sensing will increasingly yield user interfaces that are aware of the user's cognitive capacity and hence able to intervene when high or low states of cognitive workload are detected. In this thesis, we initially focus on determining opportune moments for cognitive assistance. Subsequently, we investigate, in a user-centric design process, which feedback modalities are desirable for cognitive assistance. We present design requirements for how cognitive augmentation can be achieved with interfaces that sense cognitive workload. We then investigate different physiological sensing modalities that enable suitable real-time assessments of cognitive workload. We provide empirical evidence that the human brain is sensitive to fluctuations in cognitive resting states, making cognitive effort measurable. First, we show that electroencephalography is a reliable modality for assessing the mental workload generated during user-interface operation. Second, we use eye tracking to evaluate changes in eye movements and pupil dilation to quantify different workload states. The combination of machine learning and physiological sensing yields suitable real-time assessments of cognitive workload, and these assessments let us derive when cognitive augmentation is appropriate. Based on these inquiries, we present applications that regulate cognitive workload in home and work settings.
We deployed an assistive system in a field study to investigate the validity of our derived design requirements. Finding that workload is mitigated, we investigated how cognitive workload can be visualized for the user. We present an implementation of a biofeedback visualization that helps improve the understanding of brain activity. A final study shows how cognitive workload measurements can be used to predict the efficiency of information intake through reading interfaces, and we conclude with use cases and applications that benefit from cognitive augmentation. This thesis investigates how assistive systems can be designed to implicitly sense and utilize cognitive workload for input and output. To do so, we measure cognitive workload in real time by collecting behavioral and physiological data from users, and we analyze this data to support users through assistive systems that adapt their interfaces to the currently measured workload. Our overall goal is to extend new and existing context-aware applications with cognitive workload as a factor: we envision workload-aware systems and workload-aware interfaces as an extension of the context-aware paradigm. To this end, we conducted eight research inquiries to investigate how to design and create workload-aware systems, and we present our vision of future workload-aware systems and interfaces. Owing to the scarce availability of open physiological data sets, reference implementations, and methods, previous context-aware systems were limited in their ability to utilize cognitive workload for user interaction. Together with the collected data sets, we expect this thesis to pave the way for methodical and technical tools that integrate workload-awareness as a factor in context-aware systems.
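A binary workload estimate of the kind such systems act on can be sketched from the two feature families the thesis reports as workload-sensitive, EEG activity and pupil dilation. This is a hedged illustration, not the thesis's pipeline: the feature names, thresholds, and voting rule are all hypothetical stand-ins for its machine-learning models.

```python
# Minimal sketch (not the thesis's actual classifier): a two-feature
# vote over illustrative workload-sensitive signals. Thresholds are
# invented for demonstration.
def workload_state(eeg_theta_power: float, pupil_mm: float,
                   theta_thresh: float = 0.6,
                   pupil_thresh: float = 4.5) -> str:
    """Both features elevated -> 'high'; both low -> 'low';
    otherwise the evidence is mixed."""
    votes = (eeg_theta_power > theta_thresh) + (pupil_mm > pupil_thresh)
    if votes == 2:
        return "high"
    if votes == 0:
        return "low"
    return "uncertain"

print(workload_state(0.8, 5.1))  # high
print(workload_state(0.3, 3.9))  # low
```

A workload-aware interface would intervene only on confident "high" or "low" states, which is why the mixed-evidence case is kept explicit here rather than forced into a binary label.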

    The use of extended reality (XR), wearable, and haptic technologies for learning across engineering disciplines

    According to the literature, the majority of engineering degrees are still taught using traditional nineteenth-century teaching and learning methods, and technology has only recently been introduced to help improve how these degrees are taught. This chapter therefore discusses the state of the art and applications of extended reality (XR) technologies, including virtual and augmented reality (VR and AR), as well as wearable and haptic devices, in engineering education. These technologies have demonstrated great potential in engineering education and practice: empirical research supports that such pedagogical modalities provide additional channels for information presentation and delivery, facilitating sensemaking in learning and teaching. Integrating VR, AR, wearable, and haptic devices into learning environments can enhance user engagement and create immersive user experiences. This chapter explores their potential for application in the teaching and learning of engineering.

    Multimodal Wearable Sensors for Human-Machine Interfaces

    Certain areas of the body, such as the hands, eyes and organs of speech production, provide high-bandwidth information channels from the conscious mind to the outside world. The objective of this research was to develop an innovative wearable sensor device that records signals from these areas more conveniently than has previously been possible, so that they can be harnessed for communication. A novel bioelectrical and biomechanical sensing device, the wearable endogenous biosignal sensor (WEBS), was developed and tested in various communication and clinical measurement applications. One ground-breaking feature of the WEBS system is that it digitises biopotentials almost at the point of measurement. Its electrode connects directly to a high-resolution analog-to-digital converter. A second major advance is that, unlike previous active biopotential electrodes, the WEBS electrode connects to a shared data bus, allowing a large or small number of them to work together with relatively few physical interconnections. Another unique feature is its ability to switch dynamically between recording and signal source modes. An accelerometer within the device captures real-time information about its physical movement, not only facilitating the measurement of biomechanical signals of interest, but also allowing motion artefacts in the bioelectrical signal to be detected. Each of these innovative features has potentially far-reaching implications in biopotential measurement, both in clinical recording and in other applications. Weighing under 0.45 g and being remarkably low-cost, the WEBS is ideally suited for integration into disposable electrodes. Several such devices can be combined to form an inexpensive digital body sensor network, with shorter set-up time than conventional equipment, more flexible topology, and fewer physical interconnections. One phase of this study evaluated areas of the body as communication channels. 
The throat was selected for detailed study since it yields a range of voluntarily controllable signals, including laryngeal vibrations and gross movements associated with vocal-tract articulation. A WEBS device recorded these signals, and several novel methods of human-to-machine communication were demonstrated. To evaluate the performance of the WEBS system, its recordings were validated against a high-end biopotential recording system for a number of biopotential signal types. To demonstrate a clinical application, the WEBS system was used to record a 12-lead electrocardiogram augmented with mechanical movement information.
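The motion-artefact detection the WEBS enables, flagging bioelectrical samples when the device's own accelerometer reports movement, can be sketched as follows. This is an assumption-laden illustration: the field names, the 1 g resting baseline, and the deviation threshold are invented for demonstration and are not part of the WEBS specification.

```python
# Hedged sketch: flag biopotential samples as motion artefacts when
# the co-located accelerometer magnitude deviates strongly from 1 g
# (resting gravity). Field names and threshold are illustrative.
import math

def flag_artifacts(samples: list[dict], deviation_g: float = 0.5) -> list[dict]:
    """Annotate each sample with an 'artifact' flag based on motion."""
    flagged = []
    for s in samples:
        mag = math.sqrt(s["ax"] ** 2 + s["ay"] ** 2 + s["az"] ** 2)
        flagged.append({**s, "artifact": abs(mag - 1.0) > deviation_g})
    return flagged

still = {"ax": 0.0, "ay": 0.0, "az": 1.0, "ecg": 0.12}    # at rest
moving = {"ax": 1.2, "ay": 0.8, "az": 1.1, "ecg": 0.47}   # in motion
print([s["artifact"] for s in flag_artifacts([still, moving])])
# [False, True]
```

Because the accelerometer and electrode share one device, such a flag lets downstream processing discard or de-weight contaminated stretches of the bioelectrical signal without a separate motion-capture rig.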