12 research outputs found

    PRESTK : situation-aware presentation of messages and infotainment content for drivers

    The number of in-car information systems has increased dramatically over the last few years. Because these systems are potentially mutually independent, presenting information to the driver from all of them increases the risk of driver distraction. In a first step, orchestrating these information systems with techniques from scheduling and presentation planning avoids conflicts when they compete for scarce resources such as screen space. In a second step, the cognitive capacity of the driver has to be considered as another scarce resource. For the first step, an algorithm fulfilling the requirements of this setting is presented and evaluated. For the second step, I define the concept of System Situation Awareness (SSA) as an extension of Endsley's Situation Awareness (SA) model. I claim that not only the driver but also the system, e.g., the car, needs to know what is happening in its environment. In order to achieve SSA, two paths of research have to be followed: (1) assessing the driver's cognitive load in an unobtrusive way; I propose to estimate this value using a model based on environmental data; and (2) developing a model of the cognitive complexity induced by the messages presented by the system. Three experiments support the claims made in my conceptual contribution to this field. A prototypical implementation of the situation-aware presentation management toolkit PRESTK is presented and shown in two demonstrators.
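
    The abstract above describes PRESTK only at a conceptual level. Purely as an illustration of the kind of presentation orchestration it mentions, and not as the thesis' actual algorithm, the sketch below shows a toy scheduler that defers messages when screen space runs out or when an externally supplied estimate of the driver's cognitive load exceeds a per-message threshold; all class names, fields, and thresholds are invented for this example.

```python
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Message:
    # Lower value = higher urgency; names and fields are illustrative only.
    priority: int
    label: str = field(compare=False)
    screen_units: int = field(compare=False)   # share of the display the message needs
    max_load: float = field(compare=False)     # highest driver load at which it may be shown

class PresentationScheduler:
    """Toy orchestrator: shows the messages that fit the free screen space and the
    estimated cognitive load; everything else is deferred to the next cycle."""

    def __init__(self, screen_capacity=4):
        self.screen_capacity = screen_capacity
        self.queue = []

    def submit(self, msg: Message):
        heapq.heappush(self.queue, msg)

    def schedule(self, driver_load: float):
        shown, deferred, used = [], [], 0
        while self.queue:
            msg = heapq.heappop(self.queue)
            fits = used + msg.screen_units <= self.screen_capacity
            calm_enough = driver_load <= msg.max_load
            if fits and calm_enough:
                shown.append(msg)
                used += msg.screen_units
            else:
                deferred.append(msg)
        for msg in deferred:                  # keep deferred messages for later cycles
            heapq.heappush(self.queue, msg)
        return shown

if __name__ == "__main__":
    sched = PresentationScheduler(screen_capacity=4)
    sched.submit(Message(0, "collision warning", 2, 1.0))    # always allowed
    sched.submit(Message(5, "new podcast episode", 3, 0.3))  # only when load is low
    sched.submit(Message(2, "navigation: turn left", 2, 0.8))
    print([m.label for m in sched.schedule(driver_load=0.7)])
```

    In a real system the load value would come from a model over environmental data, as the abstract suggests, rather than being passed in as a constant.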

    Automotive user interfaces for the support of non-driving-related activities

    Driving a car has changed a lot since the first car was invented. Today, drivers not only maneuver the car to their destination but also perform a multitude of additional activities in the car. This includes, for instance, activities related to assistive functions that are meant to increase driving safety and reduce the driver's workload. However, since drivers spend a considerable amount of time in the car, they often want to perform non-driving-related activities as well. In particular, these activities are related to entertainment, communication, and productivity. The driver's need for such activities has increased vastly, particularly due to the success of smartphones and other mobile devices. As long as the driver is in charge of performing the actual driving task, such activities can distract the driver and may result in severe accidents. Due to these special requirements of the driving environment, the driver ideally performs such activities by using appropriately designed in-vehicle systems. The challenge for such systems is to enable flexible and easily usable non-driving-related activities while maintaining and increasing driving safety at the same time. The main contribution of this thesis is a set of guidelines and exemplary concepts for automotive user interfaces that offer safe, diverse, and easy-to-use means to perform non-driving-related activities besides the regular driving tasks. Using empirical methods that are commonly used in human-computer interaction, we investigate various aspects of automotive user interfaces with the goal of supporting the design and development of future interfaces that facilitate non-driving-related activities. The first aspect is related to using physiological data in order to infer information about the driver's workload. As a second aspect, we propose a multimodal interaction style to facilitate the interaction with multiple activities in the car. In addition, we introduce two concepts for the support of commonly used and demanded non-driving-related activities: for communication with the outside world, we investigate the driver's needs with regard to sharing ride details with remote persons in order to increase driving safety. Finally, we present a concept of time-adjusted activities (e.g., entertainment and productivity) which enables the driver to make use of times where only little attention is required. Starting with manual, non-automated driving, we also consider the rise of automated driving modes.

    When cars were invented, they allowed the driver and potential passengers to get to a distant location. The only activities the driver was able and supposed to perform were related to maneuvering the vehicle, i.e., accelerating, decelerating, and steering the car. Today drivers perform many activities that go beyond these driving tasks. This includes, for example, activities related to driving assistance, location-based information and navigation, entertainment, communication, and productivity. To perform these activities, drivers use functions that are provided by in-vehicle information systems in the car. Many of these functions are meant to increase driving safety or to make the ride more enjoyable. The latter is important since people spend a considerable amount of time in their cars and want to perform activities similar to those they are accustomed to from using mobile devices. However, as long as the driver is responsible for driving, these activities can be distracting and put the driver, passengers, and the environment at risk.
    One goal for the development of automotive user interfaces is therefore to enable an easy and appropriate operation of in-vehicle systems such that driving tasks and non-driving-related activities can be performed easily and safely. The main contribution of this thesis is a set of guidelines and exemplary concepts for automotive user interfaces that offer safe, diverse, and easy-to-use means to also perform non-driving-related activities while driving. Using empirical methods that are commonly used in human-computer interaction, we approach various aspects of automotive user interfaces in order to support the design and development of future interfaces that also enable non-driving-related activities. Starting with manual, non-automated driving, we also consider the transition towards automated driving modes. As a first part, we look at the prerequisites that enable non-driving-related activities in the car. We propose guidelines for the design and development of automotive user interfaces that also support non-driving-related activities. This includes, for instance, rules on how to adapt or interrupt activities when the level of automation changes. To enable activities in the car, we propose a novel interaction concept that facilitates multimodal interaction in the car by combining speech interaction and touch gestures. Moreover, we show how information about the driver's state (especially mental workload) can be inferred from physiological data. We conducted a real-world driving study to extract a data set with physiological and context data. This can help to better understand the driver's state, to adapt interfaces to the driver and driving situation, and to adapt the route selection process. Second, we propose two concepts for supporting non-driving-related activities that are frequently used and demanded in the car. For telecommunication, we propose a concept to increase driving safety when communicating with the outside world. This concept enables the driver to share different types of information with remote parties; the driver can choose between different levels of detail, ranging from abstract information such as "Alice is driving right now" up to sharing a video of the driving scene. We investigated drivers' needs on the go and derived guidelines for the design of communication-related functions in the car through an online survey and in-depth interviews. As a second aspect, we present an approach to offer time-adjusted entertainment and productivity tasks to the driver. The idea is to allow time-adjusted tasks during periods where the demand for the driver's attention is low, for instance at traffic lights or during a highly automated ride. Findings from a web survey and a case study demonstrate the feasibility of this approach. With the findings of this thesis we aim to provide a basis for future research and development in the domain of automotive user interfaces and non-driving-related activities in the transition from manual driving to highly and fully automated driving.
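
    The time-adjusted entertainment and productivity tasks mentioned above lend themselves to a simple illustration. The following sketch is a hypothetical reading of the idea rather than the concept evaluated in the thesis: it matches the length of a pending task against the expected low-attention window (for example, the remaining red-light time) while keeping a safety margin. Task names, durations, and the selection heuristic are assumptions.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class MicroTask:
    # Illustrative placeholder for an entertainment or productivity snippet.
    name: str
    duration_s: int          # how long the task takes to complete
    interruptible: bool      # can it be paused if attention is needed early?

def pick_task(tasks: List[MicroTask], window_s: int, margin_s: int = 5) -> Optional[MicroTask]:
    """Return a task that fits into the expected low-attention window,
    keeping a safety margin so the driver is back on the driving task
    before attention is required again; interruptible tasks always qualify."""
    candidates = [t for t in tasks if t.duration_s + margin_s <= window_s or t.interruptible]
    # Prefer the longest task that qualifies, to make the most of the window.
    candidates.sort(key=lambda t: t.duration_s, reverse=True)
    return candidates[0] if candidates else None

if __name__ == "__main__":
    backlog = [
        MicroTask("read short message", 10, interruptible=True),
        MicroTask("approve calendar invite", 20, interruptible=False),
        MicroTask("watch trailer", 90, interruptible=False),
    ]
    print(pick_task(backlog, window_s=30))   # red light with about 30 s remaining
```

    Preferring the longest task that still fits is only one possible heuristic; the thesis itself grounds the concept empirically in a web survey and a case study.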

    Der verteilte Fahrerinteraktionsraum

    Historically, driving-related and entertainment-related information has been arranged in spatially separate locations in the vehicle interior: displays required for the driving task sit directly in front of the driver (instrument cluster and head-up display), while the content of the driver information system is shown in the center console (central information display). This strict separation is currently dissolving; for example, parts of the infotainment content can now be accessed and operated from the instrument cluster. To allow the driver to handle the growing amount of infotainment content safely, to reduce the complexity of the driver interaction space, and to increase customer value, this thesis considers the currently isolated displays holistically and re-examines the limits of today's strict distribution of information. It lays the groundwork for traffic-appropriate operation and presentation of distributed information depending on the display surface used, develops concepts for user-initiated individualization, and evaluates the interplay of different display surfaces. The studies conducted in this thesis show that a spatially distributed driver interaction space makes operating the driver information system safer and more attractive for the user.

    The 3rd International Conference on the Challenges, Opportunities, Innovations and Applications in Electronic Textiles

    This reprint is a collection of papers from the E-Textiles 2021 Conference and represents the state of the art from both academia and industry in the development of smart fabrics that incorporate electronic and sensing functionality. The reprint presents a wide range of applications of the technology, including wearable textile devices for healthcare applications such as respiratory monitoring and functional electrical stimulation. Manufacturing approaches include printed smart materials, knitted e-textiles, and flexible electronic circuit assembly within fabrics and garments. E-textile sustainability, a key future requirement for the technology, is also considered. Supplying power is a constant challenge for all wireless wearable technologies, and the collection includes papers on triboelectric energy harvesting and textile-based water-activated batteries. Finally, the application of textile antennas in both sensing and 5G wireless communications is demonstrated, where different antenna designs and their response to stimuli are presented.

    Earth as Interface: Exploring chemical senses with Multisensory HCI Design for Environmental Health Communication

    As environmental problems intensify, the chemical senses, that is, smell and taste, are the most relevant senses to evidence them. The environmental exposure vectors that can reach human beings comprise air, food, soil, and water [1]. Within this context, understanding the link between environmental exposures and health [2] is crucial to make informed choices, protect the environment, and adapt to new environmental conditions [3]. Smell and taste therefore lead to multi-sensorial experiences which convey multi-layered information about local and global events [4]. However, these senses are usually absent when those problems are represented in digital systems. The multisensory HCI design framework investigates the inclusion of the chemical senses in digital systems [5]. Ongoing efforts tackle the digitalization of smell and taste for digital delivery, transmission, or substitution [6]. Although experiments have proved the technological feasibility, dissemination depends on the development of relevant applications [7]. This thesis aims to fill those gaps by demonstrating how the chemical senses provide the means to link environment and health based on scientific and geolocation narratives [8], [9], [10]. We present a multisensory HCI design process which accomplished the symbolic display of smell and taste and led us to a new multi-sensorial interaction system presented herein. We describe the conceptualization, design, and evaluation of Earthsensum, an exploratory case study project. Earthsensum offered environmental smell and taste experiences about real geolocations to the 16 participants of the study. These experiences were represented digitally using mobile virtual reality (MVR) and mobile augmented reality (MAR). These technologies bridge the real and digital worlds through digital representations in which we can reproduce the multi-sensorial experiences. Our study findings showed that the proposed interaction system is intuitive and can lead not only to a better understanding of smell and taste perception but also of environmental problems. Participants' comprehension of the link between environmental exposures and health was successful, and they would recommend this system as an education tool. Our conceptual design approach was validated and further developments were encouraged. In this thesis, we demonstrate how to apply multisensory HCI methodology to design with the chemical senses. We conclude that the presented symbolic representation model of smell and taste allows communicating these experiences on digital platforms. Due to their context-dependency, MVR and MAR platforms are adequate technologies for this purpose. Future developments intend to explore the conceptual approach further; they are centred on using the system to, hopefully, induce behaviour change. This thesis opens up new application possibilities for digital chemical sense communication, multisensory HCI design, and environmental health communication.

    A speaker classification framework for non-intrusive user modeling : speech-based personalization of in-car services

    Speaker Classification, i.e., the automatic detection of certain characteristics of a person based on his or her voice, has a variety of applications in modern computer technology and artificial intelligence: as a non-intrusive source for user modeling, it can be employed for the personalization of human-machine interfaces in numerous domains. This dissertation presents a principled approach to the design of a novel Speaker Classification system for automatic age and gender recognition which meets these demands. Based on literature studies, methods and concepts dealing with the underlying pattern recognition task are developed. The final system consists of an incremental GMM-SVM supervector architecture with several optimizations. An extensive data-driven experiment series explores the parameter space and serves as an evaluation of the component. Further experiments investigate the language independence of the approach. As an essential part of this thesis, a framework is developed that implements all tasks associated with the design and evaluation of Speaker Classification in an integrated development environment that is able to generate efficient runtime modules for multiple platforms. Applications from the automotive field and other domains demonstrate the practical benefit of the technology for personalization, e.g., by increasing the lead time of local danger warnings for elderly drivers.
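
    The abstract names an incremental GMM-SVM supervector architecture without implementation detail. As a rough, non-incremental sketch of the general GMM-UBM supervector idea only (synthetic stand-in features, scikit-learn estimators, heavily simplified MAP mean adaptation; not the system developed in the dissertation), the following shows how per-utterance supervectors can be derived from a background GMM and fed to an SVM for a two-class age decision.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fake_utterance(n_frames=200, dim=13):
    """Stand-in for the MFCC frames of one utterance; a real system would use
    an acoustic front end, random data just keeps the sketch self-contained."""
    return rng.normal(size=(n_frames, dim))

def supervector(ubm, frames, relevance=16.0):
    """Heavily simplified MAP mean adaptation: blend per-component frame means
    with the UBM means according to the soft frame counts, then flatten."""
    post = ubm.predict_proba(frames)                  # (n_frames, n_components)
    counts = post.sum(axis=0)                         # soft count per component
    frame_means = (post.T @ frames) / np.maximum(counts[:, None], 1e-8)
    alpha = (counts / (counts + relevance))[:, None]  # adaptation weight
    adapted = alpha * frame_means + (1.0 - alpha) * ubm.means_
    return adapted.ravel()                            # the supervector

# 1) Train a small universal background model (UBM) on pooled frames.
background = np.vstack([fake_utterance() for _ in range(20)])
ubm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0).fit(background)

# 2) Map labelled utterances to supervectors (labels here: 0 = "younger", 1 = "older").
X = np.array([supervector(ubm, fake_utterance()) for _ in range(40)])
y = rng.integers(0, 2, size=40)

# 3) Train the SVM stage on the supervectors and classify a new utterance.
clf = SVC(kernel="linear").fit(X, y)
print("predicted class:", clf.predict([supervector(ubm, fake_utterance())])[0])
```

    With real acoustic features and labels the same pipeline structure would apply, but the dissertation's optimizations and incremental processing are not reflected here.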

    Human-in-the-Loop Methods for Data-Driven and Reinforcement Learning Systems

    Recent successes combine reinforcement learning algorithms and deep neural networks; nevertheless, reinforcement learning is still not widely applied to robotics and real-world scenarios. This can be attributed to the fact that current state-of-the-art, end-to-end reinforcement learning approaches still require thousands or millions of data samples to converge to a satisfactory policy and are subject to catastrophic failures during training. Conversely, in real-world scenarios and after just a few data samples, humans are able to either provide demonstrations of the task, intervene to prevent catastrophic actions, or simply evaluate whether the policy is performing correctly. This research investigates how to integrate these human interaction modalities into the reinforcement learning loop, increasing sample efficiency and enabling real-time reinforcement learning in robotics and real-world scenarios. This novel theoretical foundation is called the Cycle-of-Learning, a reference to how the different human interaction modalities, namely task demonstration, intervention, and evaluation, are cycled and combined with reinforcement learning algorithms. Results presented in this work show that the reward signal learned from human interaction accelerates the rate of learning of reinforcement learning algorithms and that learning from a combination of human demonstrations and interventions is faster and more sample-efficient than traditional supervised learning algorithms. Finally, the Cycle-of-Learning develops an effective transition from policies learned using human demonstrations and interventions to reinforcement learning. The theoretical foundation developed by this research opens new research paths towards human-agent teaming scenarios where autonomous agents are able to learn from human teammates and adapt to mission performance metrics in real time and in real-world scenarios. Comment: PhD thesis, Aerospace Engineering, Texas A&M (2020). For more information, see https://vggoecks.com
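
    The Cycle-of-Learning is described above only at a high level. To make the interplay of the three interaction modalities concrete, the toy sketch below pools demonstrations and interventions into one imitation dataset, turns human evaluations into a learned reward table, and then runs tabular Q-learning on that learned reward. The state and action sets, the data, and the update schedule are all invented for this illustration and do not reproduce the thesis' algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
N_STATES, N_ACTIONS = 5, 2

# Synthetic placeholders for the three human interaction modalities.
demonstrations = [(s, 0) for s in range(N_STATES)]          # imperfect demos: always action 0
interventions  = [(1, 1), (3, 1)]                            # corrective overrides in odd states
evaluations    = {(s, a): float(a == s % N_ACTIONS)          # "good/bad" feedback per pair
                  for s in range(N_STATES) for a in range(N_ACTIONS)}

# 1) Demonstrations and interventions are pooled into one imitation dataset;
#    interventions arrive later and therefore override earlier demonstrations.
policy = np.zeros(N_STATES, dtype=int)
for s, a in demonstrations + interventions:
    policy[s] = a

# 2) A reward model is fitted to the human evaluations (here: a lookup table).
reward_model = np.zeros((N_STATES, N_ACTIONS))
for (s, a), r in evaluations.items():
    reward_model[s, a] = r

# 3) Reinforcement learning refines the warm-started policy using the *learned* reward.
Q = np.zeros((N_STATES, N_ACTIONS))
for _ in range(2000):
    s = rng.integers(N_STATES)
    a = policy[s] if rng.random() < 0.5 else rng.integers(N_ACTIONS)  # imitation plus exploration
    s_next = (s + 1) % N_STATES                                       # trivial stand-in dynamics
    td_target = reward_model[s, a] + 0.9 * Q[s_next].max()
    Q[s, a] += 0.1 * (td_target - Q[s, a])

print("greedy policy after RL:", Q.argmax(axis=1))
```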

    Aprendizagem e manuais de utilizador dos QDAS: o caso do webQDA

    In the current social context, the use of digital technology is no longer just an option; in many cases it is a requirement. It is used in many fields, such as health, industry, economy, education, and science. Accordingly, scientific research has also seen the integration of digital technologies into data analysis processes, with Qualitative Data Analysis Software (QDAS) being one example. However, the use of these digital technologies entails a process of learning how to use them, so that they can be efficient and effective tools for their users. This process is naturally easy for some users and more demanding for others, depending largely on their digital literacy and learning preferences. To support the learning process, QDAS development companies provide a set of tools whose purpose is to facilitate knowledge acquisition. However, not all of these tools meet the researcher's learning preferences, which can hinder the (self-)learning process. Therefore, this study proposes a set of general guidelines for the development of an online Self-Learning Environment (APo) for the webQDA® qualitative analysis software, which enables the systematization of QDAS learning tools supported by four dimensions: i) Technology Support; ii) Learning Content; iii) User; and iv) Interaction Design. The process was developed in three phases: i) Phase 1, development and analysis of the Quick User Manual (PDF); ii) Phase 2, development and analysis of the Online Learning Prototype; and iii) Phase 3, proposal of general guidelines. This thesis is characterized as a descriptive and exploratory study of mixed nature, with a predominance of the phenomenological dimension, adopting a Design-Based Research methodology and resorting in part to the Cognitive Walkthrough method. The data presented and discussed in this study were obtained through: i) analysis of a corpus of latent data on the Internet; ii) three questionnaire surveys; iii) two focus groups; and iv) a webQDA learning workshop. The analysis of the data showed that QDAS developers do not systematize their learning tools according to the learning profiles of users. It was also found that learning strategies and routines vary from user to user, showing that each one seeks to learn according to their own learning style. The study showed the preference of QDAS users for usability as the most valued feature of learning tools, as well as the efficiency of the Online Self-Learning Prototype (PAo) among early webQDA users, proving it to be a valid and useful instrument for the self-learning process of QDAS. In conclusion, the analysis of this data set led to the creation of a series of general guidelines for the development of a webQDA Online Self-Learning Environment (APo). This is proposed as a self-learning solution, tailored to each user's learning profile, characterized by accessible written and visual language, and supported by usability principles that provide an improved User Experience (UX). Programa Doutoral em Multimédia em Educação