476 research outputs found

    Physiological-based Driver Monitoring Systems: A Scoping Review

    Get PDF
A physiological-based driver monitoring system (DMS) has attracted research interest and has great potential for providing more accurate and reliable monitoring of the driver's state during a driving experience. Many driver monitoring systems are driver-behavior-based or vehicle-based. When these non-physiological-based DMS are coupled with physiological data analysis from electroencephalography (EEG), electrooculography (EOG), electrocardiography (ECG), and electromyography (EMG), the physical and emotional state of the driver may also be assessed. Drivers' wellness can also be monitored, and hence traffic collisions can be avoided. This paper highlights work published in the past five years related to physiological-based DMS. Specifically, we focus on the physiological indicators applied in DMS design and development. Work utilizing key physiological indicators related to driver identification, driver alertness, driver drowsiness, driver fatigue, and drunk driving is identified and described based on the PRISMA Extension for Scoping Reviews (PRISMA-ScR) framework. The relationship between selected papers is visualized using keyword co-occurrence. Findings are presented using a narrative review approach based on classifications of DMS. Finally, the challenges of physiological-based DMS are highlighted in the conclusion. Doi: 10.28991/CEJ-2022-08-12-020 Full Text: PDF
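The keyword co-occurrence mapping mentioned in the abstract amounts to counting, for every pair of keywords, how many papers list both. A minimal sketch (the input format and keyword values are hypothetical; the review itself would have used bibliometric tooling for this step):

```python
from collections import Counter
from itertools import combinations

def cooccurrence(papers):
    """Count how often each pair of keywords appears together in one paper.

    `papers` is a list of keyword lists (hypothetical input format).
    Pairs are stored in sorted order so (a, b) and (b, a) collapse.
    """
    counts = Counter()
    for kws in papers:
        for a, b in combinations(sorted(set(kws)), 2):
            counts[(a, b)] += 1
    return counts

# Placeholder keyword sets, not taken from the reviewed papers.
papers = [
    ["EEG", "drowsiness", "DMS"],
    ["ECG", "drowsiness", "DMS"],
    ["EEG", "fatigue"],
]
edges = cooccurrence(papers)
```

The resulting pair counts are exactly the edge weights of a keyword co-occurrence graph, which visualization tools then lay out.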

    Menetelmiä lasten näkötiedon käsittelyn arvioimiseksi katseenseurannan avulla

    Get PDF
Cortical visual processing and the mechanisms underlying eye movements and visuospatial attention undergo prominent developmental changes during the first 12 months of infancy. At that time, these key functions of vision are tightly connected to early brain development in general. Thus, they are favourable targets for new research methods that can be used in the treatment, prediction, or detection of various adverse visual or neurocognitive conditions. This thesis presents two eye-tracker-assisted test paradigms that may be used to evaluate and quantify different functions of infants' visual processing. The first study concentrates on the analysis of gaze patterns in the classic face-distractor competition paradigm, known to tap the mechanisms underlying infants' attention disengagement and visuospatial orienting. A novel computational metric is developed that quantifies the distribution of attention between stimuli presented at the centre and the edge of the screen over a given period of time. In further evaluation, the metric is shown to be sensitive to developmental changes in infants' face processing between 5 and 7 months of age. The second study focuses on the visual evoked potentials (VEPs) elicited by orientation-reversal, global form, and global motion stimulation, known to measure distinct aspects of visual processing at the cortical level. To improve the reliability of such methods, an eye tracker is integrated into the recording setup; it can be used to control stimulus presentation to capture the attention of the infant, and in the analysis to exclude the electroencephalography (EEG) segments with disorientated gaze. With this setup, VEPs can be detected in the vast majority of the tested 3-month-old infants (N=39) using a circular variant of Hotelling's T² test statistic and two newly developed power-spectrum-based metrics.
After the further development already in progress, the presented methods will be ready for clinical use in assessments of neurocognitive development, preferably alongside other similar biomarker tests of infancy.
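The VEP detection step tests whether the Fourier coefficient at the stimulus frequency is phase-locked across trials. A minimal sketch of such a statistic, assuming one complex coefficient per trial; the ratio below (trial count times the squared magnitude of the trial mean over the unbiased trial-to-trial variance) is a generic signal-to-noise form, not necessarily the exact normalisation of the circular Hotelling's T² variant the thesis uses:

```python
def t2_circ(trials):
    """T²-style phase-coherence statistic on complex Fourier coefficients.

    `trials`: complex Fourier coefficient at the stimulus frequency,
    one per EEG trial. Large values indicate a phase-locked evoked
    response; critical values for the actual circular T² come from an
    F distribution, which is omitted in this sketch.
    """
    n = len(trials)
    mean = sum(trials) / n
    # Unbiased trial-to-trial variance of the complex coefficients.
    var = sum(abs(z - mean) ** 2 for z in trials) / (n - 1)
    return n * abs(mean) ** 2 / var

# Four toy trials with a consistent (phase-locked) component of 1+0j.
stat = t2_circ([1 + 1j, 1 - 1j, 1 + 0j, 1 + 0j])
```

A purely random set of phases would drive the trial mean, and hence the statistic, toward zero.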

    Electrophysiological investigations of brain function in coma, vegetative and minimally conscious patients.

    Full text link
Electroencephalographic activity in the context of disorders of consciousness is a Swiss-army-knife-like tool: it can evaluate different aspects of residual cognitive function, detect consciousness, and provide a means to communicate with the outside world without using muscular channels. Standard recordings in the neurological department offer a first global view of a patient's electrogenesis and can spot abnormal epileptiform activity and therefore guide treatment. Although visual patterns have prognostic value, they are not sufficient to provide a diagnosis distinguishing vegetative state/unresponsive wakefulness syndrome (VS/UWS) from minimally conscious state (MCS) patients. Quantitative electroencephalography (qEEG) processes the data and retrieves features, not visible on the raw traces, which can then be classified. Current results using qEEG show that MCS patients can be differentiated from VS/UWS patients at the group level. Event-related potentials (ERPs) are triggered by varying stimuli and reflect the time course of information processing related to the stimuli, from low-level peripheral receptive structures to high-order associative cortices. It is hence possible to assess auditory, visual, or emotive pathways. Different stimuli elicit positive or negative components with different time signatures. The presence of these components in passive paradigms is usually a sign of good prognosis, but it cannot differentiate VS/UWS from MCS patients. Recently, researchers have developed active paradigms showing that the amplitude of the component is modulated when the subject's attention is focused on a task during stimulus presentation. Hence, significant differences between a patient's ERPs in a passive compared to an active paradigm can be proof of consciousness. An EEG-based brain-computer interface (BCI) can then be tested to provide the patient with a communication tool. BCIs have improved considerably over the past two decades.
However, they are not easily adaptable to comatose patients, who can have visual or auditory impairments or different lesions affecting their EEG signal. Future progress will require large databases of resting-state EEG and ERP experiments from patients of different etiologies. This will allow the identification of specific patterns related to the diagnosis of consciousness. Standardized procedures in the use of BCIs will also be needed to find the technique best suited to each individual patient. Peer reviewed
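The qEEG step above extracts numerical features "not visible on the raw traces" before classification. One classic family of such features is the Hjorth parameters (activity, mobility, complexity), shown here as a self-contained sketch; the studies summarised in the abstract use a variety of such measures, not necessarily these:

```python
def hjorth(signal):
    """Hjorth activity, mobility and complexity of a sampled EEG trace.

    activity   = variance of the signal
    mobility   = sqrt(var(dx) / var(x))
    complexity = mobility of the first difference / mobility of the signal
    """
    n = len(signal)
    mean = sum(signal) / n
    var0 = sum((x - mean) ** 2 for x in signal) / n          # activity
    d1 = [signal[i + 1] - signal[i] for i in range(n - 1)]   # 1st difference
    var1 = sum(x ** 2 for x in d1) / len(d1)
    d2 = [d1[i + 1] - d1[i] for i in range(len(d1) - 1)]     # 2nd difference
    var2 = sum(x ** 2 for x in d2) / len(d2)
    mobility = (var1 / var0) ** 0.5
    complexity = (var2 / var1) ** 0.5 / mobility
    return var0, mobility, complexity

# Toy periodic trace standing in for one EEG channel segment.
act, mob, comp = hjorth([0, 1, 0, -1] * 100)
```

Feature vectors like `(act, mob, comp)` per channel are what a downstream classifier would separate into patient groups.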

    Biosignal‐based human–machine interfaces for assistance and rehabilitation : a survey

    Get PDF
By definition, a Human–Machine Interface (HMI) enables a person to interact with a device. Starting from elementary equipment, the recent development of novel techniques and unobtrusive devices for biosignal monitoring paved the way for a new class of HMIs, which take such biosignals as inputs to control various applications. The current survey reviews the large literature of the last two decades regarding biosignal-based HMIs for assistance and rehabilitation, to outline the state of the art and identify emerging technologies and potential future research trends. PubMed and other databases were surveyed using specific keywords. The retrieved studies were further screened at three levels (title, abstract, full text), and eventually 144 journal papers and 37 conference papers were included. Four macro-categories were considered to classify the different biosignals used for HMI control: biopotential, muscle mechanical motion, body motion, and their combinations (hybrid systems). The HMIs were also classified according to their target application by considering six categories: prosthetic control, robotic control, virtual reality control, gesture recognition, communication, and smart environment control. An ever-growing number of publications has been observed over recent years. Most of the studies (about 67%) pertain to the assistive field, while 20% relate to rehabilitation and 13% to both assistance and rehabilitation. A moderate increase can be observed in studies focusing on robotic control, prosthetic control, and gesture recognition in the last decade. In contrast, studies on the other targets experienced only a small increase. Biopotentials are no longer the leading control signals, and the use of muscle mechanical motion signals has experienced a considerable rise, especially in prosthetic control. Hybrid technologies are promising, as they could lead to higher performance. However, they also increase HMIs' complexity, so their usefulness should be carefully evaluated for the specific application.

    Interactive voice response system and eye-tracking interface in assistive technology for disabled

    Get PDF
Abstract. The development of ICT has been very fast in the last few decades, and it is important that everyone can benefit from this progress. In user-interface design it is essential to keep up with this progress and ensure the usability and accessibility of new innovations. The purpose of this literature review has been to study the basics of multimodal interaction, with emphasis on multimodal assistive technology for disabled people. From the various modalities, interactive voice response and eye tracking were chosen for analysis. The motivation for this work is to study how technology can be harnessed to assist disabled people in daily life.
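A core mechanism in eye-tracking interfaces like those reviewed is dwell-time selection: a target activates once gaze stays on it long enough. A minimal sketch with a hypothetical sample format (real eye-tracking APIs differ in units and callbacks):

```python
def dwell_select(samples, target, radius, dwell_ms):
    """Return the time (ms) at which a dwell selection fires, or None.

    `samples` is a list of (t_ms, x, y) gaze points; the selection fires
    once gaze has stayed within `radius` of `target` for `dwell_ms`
    continuously. Leaving the target resets the dwell timer.
    """
    start = None
    for t, x, y in samples:
        inside = (x - target[0]) ** 2 + (y - target[1]) ** 2 <= radius ** 2
        if inside:
            if start is None:
                start = t          # dwell starts on first in-target sample
            if t - start >= dwell_ms:
                return t
        else:
            start = None           # gaze left the target: reset
    return None
```

The dwell threshold trades selection speed against accidental activations ("Midas touch"), which is the central usability tension in gaze-based input.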

    Applied Visualization in the Neurosciences and the Enhancement of Visualization through Computer Graphics

    Get PDF
The complexity and size of measured and simulated data in many fields of science is increasing constantly. Technical evolution allows for capturing smaller features and more complex structures in the data. To make this data accessible to scientists, efficient and specialized visualization techniques are required. Maximum efficiency and value for the user can only be achieved by adapting visualization to the specific application area and the specific requirements of the scientific field. Part I: In the first part of my work, I address visualization in the neurosciences. Neuroscience tries to understand the human brain, from its smallest parts up to its global infrastructure. To achieve this ambitious goal, neuroscience uses a combination of three-dimensional data from a myriad of sources, like MRI, CT, or functional MRI. To handle this diversity of data types and sources, neuroscience needs specialized and well-evaluated visualization techniques. As a start, I will introduce an extensive software package called "OpenWalnut". It forms the common base for developing and using visualization techniques with our neuroscientific collaborators. Using OpenWalnut, standard and novel visualization approaches are available to neuroscientific researchers as well. Afterwards, I introduce a very specialized method to illustrate the causal relation of brain areas, which was previously only representable via abstract graph models. I finalize the first part of my work with an evaluation of several standard visualization techniques in the context of simulated electrical fields in the brain. The goal of this evaluation was to clarify the advantages and disadvantages of the used visualization techniques for the neuroscientific community. We exemplified these using clinically relevant scenarios.
Part II: Besides data preprocessing, which plays a tremendous role in visualization, the final graphical representation of the data is essential to understanding structure and features in the data. The graphical representation of data can be seen as the interface between the data and the human mind. The second part of my work focuses on improving the structural and spatial perception of visualization -- improving the interface. Unfortunately, visual improvements using computer-graphics methods from the computer-game industry are often viewed sceptically. In the second part, I show that such methods can be applied to existing visualization techniques to improve spatiality and to emphasize structural details in the data. I use a computer-graphics paradigm called "screen space rendering". Its advantage, amongst others, is its seamless applicability to nearly every visualization technique. I start with two methods that improve the perception of mesh-like structures on arbitrary surfaces. These mesh structures represent second-order tensors and are generated by a method named "TensorMesh". Afterwards, I show a novel approach to optimally shade line and point data renderings. With this technique it is possible for the first time to emphasize local details and global, spatial relations in dense line and point data.
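The appeal of screen-space techniques is that they need only the rendered image, not the original geometry. A toy illustration of that idea, assuming nothing but a depth buffer: darken pixels that lie behind their local neighbourhood, a crude stand-in for the halo/ambient-occlusion-style formulations the thesis actually develops:

```python
def depth_darkening(depth, strength=1.0):
    """Crude screen-space shading term computed from a depth buffer.

    `depth` is a 2D list of per-pixel depths (larger = farther). Each
    pixel is compared to the mean depth of its 3x3 neighbourhood, and
    pixels behind their neighbours are darkened. This is a toy sketch,
    not the method used in the thesis.
    """
    h, w = len(depth), len(depth[0])
    shade = [[1.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            nb = [depth[j][i]
                  for j in range(max(0, y - 1), min(h, y + 2))
                  for i in range(max(0, x - 1), min(w, x + 2))]
            local = sum(nb) / len(nb)
            behind = max(0.0, depth[y][x] - local)  # how far behind neighbours
            shade[y][x] = max(0.0, 1.0 - strength * behind)
    return shade

# 3x3 buffer with one recessed pixel in the middle.
shade = depth_darkening([[0.5] * 3, [0.5, 0.9, 0.5], [0.5] * 3])
```

Because the pass runs per pixel on the finished image, it composes with any upstream visualization technique, which is exactly the seamless applicability the text emphasizes.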

    High-Density Diffuse Optical Tomography During Passive Movie Viewing: A Platform for Naturalistic Functional Brain Mapping

    Get PDF
Human neuroimaging techniques enable researchers and clinicians to non-invasively study brain function across the lifespan in both healthy and clinical populations. However, functional brain imaging methods such as functional magnetic resonance imaging (fMRI) are expensive, resource-intensive, and require dedicated facilities, making these powerful imaging tools generally unavailable for assessing brain function in settings demanding open, unconstrained, and portable neuroimaging assessments. Tools such as functional near-infrared spectroscopy (fNIRS) afford greater portability and wearability, but at the expense of cortical field-of-view and spatial resolution. High-Density Diffuse Optical Tomography (HD-DOT) is an optical neuroimaging modality that directly addresses the image-quality limitations associated with traditional fNIRS techniques through densely overlapping optical measurements. This thesis aims to establish the feasibility of using HD-DOT in a novel application demanding exceptional portability and flexibility: mapping disrupted cortical activity in chronically malnourished children. I first motivate the need for dense optical measurements of brain tissue to achieve fMRI-comparable localization of brain function (Chapter 2). Then, I present imaging work completed in Cali, Colombia, where a cohort of chronically malnourished children was imaged using a custom HD-DOT instrument to establish the feasibility of performing field-based neuroimaging in this population (Chapter 3). Finally, to meet the need for age-appropriate imaging paradigms in this population, I develop passive movie-viewing paradigms for use in optical neuroimaging, a flexible and rich stimulation approach that is suitable for both adults and children (Chapter 4).
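Optical modalities like fNIRS and HD-DOT recover hemoglobin changes from light attenuation via the modified Beer-Lambert law: at each wavelength, the change in optical density is a weighted sum of oxy- and deoxyhemoglobin changes, so two wavelengths give a 2x2 linear system. A sketch of that inversion; the extinction coefficients and path lengths below are placeholders, not real tabulated values:

```python
def mbll(dod1, dod2, e1, e2, path1, path2):
    """Solve the modified Beer-Lambert law for two wavelengths.

    dod1, dod2: measured changes in optical density at wavelengths 1 and 2;
    e1 = (eps_HbO, eps_HbR) at wavelength 1, e2 likewise; path1/path2 are
    effective path lengths (source-detector distance times the differential
    pathlength factor). Model: dod_i = path_i * (e_i[0]*dHbO + e_i[1]*dHbR).
    """
    a, b = path1 * e1[0], path1 * e1[1]
    c, d = path2 * e2[0], path2 * e2[1]
    det = a * d - b * c                     # 2x2 determinant (Cramer's rule)
    dHbO = (d * dod1 - b * dod2) / det
    dHbR = (a * dod2 - c * dod1) / det
    return dHbO, dHbR
```

HD-DOT repeats this spectroscopic step over many densely overlapping source-detector pairs and then solves a tomographic image-reconstruction problem on top of it, which is where its fMRI-comparable localization comes from.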

    Modelling human emotions using immersive virtual reality, physiological signals and behavioural responses

    Full text link
In recent years the scientific community has significantly increased its use of virtual reality (VR) technologies in human behaviour research. In particular, the use of immersive VR has grown due to the introduction of affordable, high-performance head mounted displays (HMDs). Among the fields that have strongly emerged in the last decade is affective computing, which combines psychophysiology, computer science, biomedical engineering and artificial intelligence in the development of systems that can automatically recognize emotions. The progress of affective computing is especially important in human behaviour research due to the central role that emotions play in many background processes, such as perception, decision-making, creativity, memory and social interaction. Several studies have tried to develop a reliable methodology to evoke and automatically identify emotional states using objective physiological measures and machine learning methods. However, the majority of previous studies used images, audio or video to elicit emotional states; to the best of our knowledge, no previous research has developed an emotion recognition system using immersive VR. Although some previous studies analysed physiological responses in immersive VR, they did not use machine learning techniques for biosignal processing and classification. Moreover, a crucial concept when using VR for human behaviour research is validity: the capacity to evoke a response from the user in a simulated environment similar to the response that might be evoked in a physical environment. Although some previous studies have used psychological and cognitive dimensions to compare responses in real and virtual environments, few have extended this research to analyse physiological or behavioural responses. Moreover, to our knowledge, this is the first study to compare VR scenarios with their real-world equivalents using physiological measures coupled with machine learning algorithms, and to analyse the ability of VR to transfer and extrapolate insights obtained from VR environments to real environments. The main objective of this thesis is, using psychophysiological and behavioural responses in combination with machine learning methods, and by performing a direct comparison between a real and a virtual environment, to validate immersive VR as an emotion elicitation tool. To do so we develop an experimental protocol involving emotional 360º environments, an art exhibition in a real museum, and a highly realistic 3D virtualization of the same art exhibition. This thesis provides novel contributions to the use of immersive VR in human behaviour research, particularly in relation to emotions. VR can help in the application of methodologies designed to present more realistic stimuli in the assessment of daily-life environments and situations, thus overcoming the current limitations of affective elicitation, which classically uses images, audio and video. Moreover, the thesis analyses the validity of VR by performing a direct comparison using highly realistic simulation. We believe that immersive VR will revolutionize laboratory-based emotion elicitation methods. Moreover, its synergy with physiological measurement and machine learning techniques will have a transverse impact on many other research areas, such as architecture, health, assessment, training, education, driving and marketing, and thus open new opportunities for the scientific community. The present dissertation aims to contribute to this progress.
Marín Morales, J. (2020). Modelling human emotions using immersive virtual reality, physiological signals and behavioural responses [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/148717
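The emotion-recognition pipeline described above maps physiological feature vectors to emotional states with a classifier. The abstract does not name a specific model, so here is a deliberately minimal nearest-centroid sketch on hypothetical features (e.g. mean heart rate and a skin-conductance measure) with placeholder labels and numbers:

```python
def nearest_centroid_fit(X, y):
    """Per-class mean of feature vectors, e.g. (heart rate, skin conductance)."""
    groups = {}
    for xi, yi in zip(X, y):
        groups.setdefault(yi, []).append(xi)
    return {c: [sum(col) / len(v) for col in zip(*v)] for c, v in groups.items()}

def nearest_centroid_predict(centroids, x):
    """Label whose centroid is closest (squared Euclidean distance) to x."""
    return min(centroids,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(centroids[c], x)))

# Placeholder training data: two hypothetical affective states.
cents = nearest_centroid_fit(
    [[60, 1.0], [62, 1.2], [90, 5.0], [95, 6.0]],
    ["calm", "calm", "aroused", "aroused"],
)
```

In practice features would be normalised per subject and a stronger model cross-validated, but the fit/predict split shown is the shape any such classifier takes.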

    Augmentative and alternative communication (AAC) advances: A review of configurations for individuals with a speech disability

    Get PDF
High-tech augmentative and alternative communication (AAC) methods are on a constant rise; however, the interaction between the user and the assistive technology still falls short of an optimal user experience centered around the desired activity. This review presents a range of signal sensing and acquisition methods utilized in conjunction with existing high-tech AAC platforms for individuals with a speech disability, including imaging methods, touch-enabled systems, mechanical and electro-mechanical access, breath-activated methods, and brain–computer interfaces (BCI). The listed AAC sensing modalities are compared in terms of ease of access, affordability, complexity, portability, and typical conversational speeds. An overview of the associated AAC signal processing, encoding, and retrieval highlights the roles of machine learning (ML) and deep learning (DL) in the development of intelligent AAC solutions. The demands and cost of most systems hinder the scale of usage of high-tech AAC. Further research is needed to develop intelligent AAC applications that reduce the associated costs and enhance the portability of the solutions for a real user's environment. The consolidation of natural language processing with current solutions also needs to be further explored to improve conversational speeds. Recommendations for prospective advances in upcoming high-tech AAC are addressed in terms of developments to support mobile health communicative applications.
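The NLP integration the review calls for typically means word prediction: offering likely next words so the user selects instead of spelling, raising conversational rate. A minimal bigram sketch of that idea (the corpus is a placeholder; production systems use far richer language models):

```python
from collections import Counter, defaultdict

def build_bigram(corpus):
    """Bigram next-word table from a list of sentences.

    A minimal stand-in for the language models that AAC systems use to
    predict the user's next word.
    """
    table = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            table[a][b] += 1
    return table

def suggest(table, word, k=3):
    """Top-k next-word suggestions after `word`, most frequent first."""
    return [w for w, _ in table[word.lower()].most_common(k)]

# Placeholder utterance history, not from any real AAC corpus.
table = build_bigram(["i want water", "i want food", "i need help", "i want water"])
```

Each accepted suggestion replaces several letter-by-letter selections, which is exactly where the conversational-speed gain comes from.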