60 research outputs found

    Comparison of engagement and emotional responses of older and younger adults interacting with 3D cultural heritage artefacts on personal devices

    The availability of advanced software and less expensive hardware allows museums to preserve and share artefacts digitally. As a result, museums are frequently making their collections accessible online as interactive 3D models. This can lead to the unusual situation of a visitor viewing the digital artefact before the physical one. Experiencing artefacts digitally on personal devices, outside of the museum, may affect the user's ability to emotionally connect with them. This study examines how two target populations, young adults (18–21 years) and older adults (65 years and older), responded to seeing cultural heritage artefacts in three different modalities: augmented reality on a tablet, 3D models on a laptop, and then the physical artefacts. Specifically, the time spent, enjoyment, and emotional responses were analysed. Results revealed that regardless of age, the digital modalities were enjoyable and encouraged emotional responses. Seeing the physical artefacts after the digital ones did not lessen participants' enjoyment or the emotions they felt. These findings provide insight into the effectiveness of 3D artefacts viewed on personal devices outside of the museum for encouraging emotional responses from older and younger people.

    Developing a Framework for Heterotopias as Discursive Playgrounds: A Comparative Analysis of Non-Immersive and Immersive Technologies

    The discursive space represents the reordering of knowledge gained through accumulation. In the digital age, multimedia has become the language of information, and the space for archival practices is provided by non-immersive technologies, resulting in the disappearance of several layers from discursive activities. Heterotopias are unique, multilayered epistemic contexts that connect other systems through the exchange of information. This paper describes a process for creating a framework for Virtual Reality, Mixed Reality, and personal computer environments based on heterotopias to restore these absent layers. The study presents the virtual museum space as an informational terrain that contains a "world within worlds", and place production as a layer of heterotopia and the subject of discourse. Automation for individual multimedia content is provided via various sorting and grouping algorithms, as well as procedural content generation algorithms such as Binary Space Partitioning, Cellular Automata, Growth Algorithm, and Procedural Room Generation. Versions of the framework were comparatively evaluated through a user study involving 30 participants, considering factors such as usability, technology acceptance, and presence. The results show that the framework can serve diverse contexts to construct multilayered digital habitats and is flexible enough for integration into professional and daily-life practices.
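Of the procedural content generation algorithms the abstract names, Binary Space Partitioning is the most compact to illustrate. The sketch below is not the paper's implementation; it is a minimal, generic BSP room generator (function names and parameters are mine) of the kind typically used to lay out virtual exhibition spaces:

```python
import random

def bsp_partition(x, y, w, h, min_size, rng=None):
    """Recursively split a rectangular region into rooms via Binary Space
    Partitioning; returns a list of (x, y, w, h) leaf rectangles."""
    rng = rng or random.Random(0)
    can_v = w >= 2 * min_size   # wide enough for a vertical cut
    can_h = h >= 2 * min_size   # tall enough for a horizontal cut
    if not can_v and not can_h:
        return [(x, y, w, h)]   # too small to split further: this is a room
    # Split along the longer axis when both directions are possible.
    if can_v and (not can_h or w >= h):
        cut = rng.randint(min_size, w - min_size)
        return (bsp_partition(x, y, cut, h, min_size, rng)
                + bsp_partition(x + cut, y, w - cut, h, min_size, rng))
    cut = rng.randint(min_size, h - min_size)
    return (bsp_partition(x, y, w, cut, min_size, rng)
            + bsp_partition(x, y + cut, w, h - cut, min_size, rng))

rooms = bsp_partition(0, 0, 40, 30, min_size=8)
```

Every leaf rectangle tiles the original region exactly, and `min_size` guarantees each generated room is large enough to hold content.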

    Integrating 3D Objects and Pose Estimation for Multimodal Video Annotations

    With recent technological advancements, video has become a focal point of many everyday activities, from presenting ideas to our peers to studying specific events or simply storing relevant video clips. As a result, taking or making notes can become an invaluable tool in this process, helping us retain knowledge, document information, or reason about recorded content. This thesis introduces new features for a pre-existing web-based multimodal annotation tool, namely the integration of 3D components into the current system and pose estimation algorithms aimed at the moving elements in the multimedia content. The 3D developments allow the user a more immersive interaction with the tool: 3D objects can be visualized against either a neutral or 360° background and then used as traditional annotations. Mechanisms for integrating these 3D models into the currently loaded video are then explored, along with a detailed overview of the use of keypoints (pose estimation) to highlight details in the same setting. The goal of this thesis is thus the development and evaluation of these features, seeking the construction of a virtual environment in which a user can successfully work on a video by combining different types of annotations.
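One practical use of pose-estimation keypoints in video annotation is keeping an annotation attached to a moving body part even in frames where the estimator produced no detection. The sketch below is an illustrative helper, not the thesis's actual API; the per-frame keypoint dictionary format is an assumption:

```python
import bisect

def anchor_annotation(keypoints_by_frame, joint, frame):
    """Return the (x, y) position where an annotation attached to `joint`
    should be drawn at `frame`, linearly interpolating between the nearest
    frames in which the pose estimator detected that joint."""
    frames = sorted(f for f, kps in keypoints_by_frame.items() if joint in kps)
    if not frames:
        return None                      # joint never detected
    if frame <= frames[0]:
        return keypoints_by_frame[frames[0]][joint]
    if frame >= frames[-1]:
        return keypoints_by_frame[frames[-1]][joint]
    i = bisect.bisect_left(frames, frame)
    if frames[i] == frame:
        return keypoints_by_frame[frame][joint]
    # Interpolate between the two detected frames bracketing `frame`.
    f0, f1 = frames[i - 1], frames[i]
    (x0, y0) = keypoints_by_frame[f0][joint]
    (x1, y1) = keypoints_by_frame[f1][joint]
    t = (frame - f0) / (f1 - f0)
    return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
```

For example, with the wrist detected at frames 0 and 10, an annotation queried at frame 5 lands halfway between the two detected positions.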

    Concepts and methods to support the development and evaluation of remote collaboration using augmented reality

    Remote collaboration using Augmented Reality (AR) shows great potential to establish common ground in physically distributed scenarios where team members need to achieve a shared goal. However, most research efforts in this field have been devoted to experimenting with the enabling technology and proposing methods to support its development. As the field evolves, evaluation and characterization of the collaborative process become an essential, but difficult, endeavor in understanding the contributions of AR. In this thesis, we conducted a critical analysis to identify the main limitations and opportunities of the field, while situating its maturity and proposing a roadmap of important research actions. Next, a human-centered design methodology was adopted, involving industrial partners to probe how AR could support their needs during remote maintenance. These outcomes were combined with methods from the literature into an AR prototype, which was evaluated through a user study. From this, the need for deeper reflection became clear, in order to better understand the dimensions that influence, and must be considered in, collaborative AR. Hence, a conceptual model and a human-centered taxonomy were proposed to foster the systematization of perspectives. Based on the proposed model, an evaluation framework for contextualized data gathering and analysis was developed, supporting the design and performance of distributed evaluations in a more informed and complete manner. To instantiate this vision, the CAPTURE toolkit was created, providing an additional perspective based on selected dimensions of collaboration and pre-defined measurements to obtain "in situ" data about them, which can be analyzed using an integrated visualization dashboard. The toolkit successfully supported evaluations of several team members during tasks of remote maintenance mediated by AR, showing its versatility and potential for eliciting a comprehensive characterization of the added value of AR in real-life situations, and establishing itself as a general-purpose solution, potentially applicable to a wider range of collaborative scenarios.
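The abstract describes CAPTURE as gathering pre-defined measurements "in situ", organized by collaboration dimension, for later dashboard analysis. The toolkit's real API is not given here; the following is a hypothetical minimal sketch of that data-gathering pattern (class and field names are mine):

```python
import time
from collections import defaultdict

class CollaborationLogger:
    """Minimal sketch of in-situ data gathering: each record ties a
    measurement to a team member and a collaboration dimension
    (e.g. communication, awareness), for later dashboard analysis."""

    def __init__(self):
        self.records = defaultdict(list)   # dimension -> list of records

    def log(self, member, dimension, measure, value, timestamp=None):
        self.records[dimension].append({
            "member": member,
            "measure": measure,
            "value": value,
            "t": timestamp if timestamp is not None else time.time(),
        })

    def summary(self, dimension, measure):
        """Aggregate one measure across all members, as a dashboard would."""
        vals = [r["value"] for r in self.records[dimension]
                if r["measure"] == measure]
        return {"n": len(vals), "mean": sum(vals) / len(vals)} if vals else None
```

Pre-defining the dimension/measure vocabulary up front is what makes evaluations across distributed sites comparable.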

    Enhancing the E-Commerce Experience through Haptic Feedback Interaction

    The sense of touch is important in our everyday lives, and its absence makes it difficult to explore and manipulate everyday objects. Existing online shopping practice lacks the opportunity for the physical evaluation that people often use and value when making product choices. However, with recent advances in haptic research and technology, it is possible to simulate various physical properties such as heaviness, softness, deformation, and temperature. The research described here investigates the use of haptic feedback interaction to enhance e-commerce product evaluation, particularly haptic weight and texture evaluation. While other properties are equally important, weight and texture are fundamental to the shopping experience for many online products and can be simulated using cost-effective devices. Two initial psychophysical experiments were conducted using free-motion haptic exploration in order to more closely resemble conventional shopping: one measured weight force thresholds, the other texture force thresholds. These measurements provide a better understanding of haptic device limitations for online shopping, in terms of the range of stimuli available to represent physical products. The outcomes of the initial psychophysical experiments were then used to produce absolute stimuli for a comparative experimental study evaluating the user experience of haptic product evaluation. Although free haptic exploration was exercised in both psychophysical experiments, the results were relatively consistent with previous work on haptic discrimination: the threshold for weight force discrimination, represented as downward forces, was 10 percent, and the threshold for texture force discrimination, represented as friction forces, was 14.1 percent when using the dynamic coefficient of friction at any level of static coefficient of friction.
    The comparative experimental study, on the other hand, indicated that haptic product evaluation does not change user performance significantly. Although there was an increase in the time taken to complete the task, the number of button-click actions tended to decrease. The results showed that haptic product evaluation can significantly increase confidence in shopping decisions. Nevertheless, the availability of haptic product evaluation does not necessarily lead to different product choices; rather, it complements other selection criteria such as price and appearance. The findings from this work are a first step towards exploring haptic-based environments in e-commerce. They not only lay the foundation for designing online haptic shopping but also provide empirical support for research in this direction.
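The reported thresholds (10% for weight, 14.1% for texture) are discrimination thresholds: the smallest relative change in stimulus intensity users can reliably detect. A minimal sketch of how such thresholds would be applied when choosing distinguishable stimuli, assuming they act as Weber fractions (the helper name is mine):

```python
def discriminable(reference, comparison, weber_fraction):
    """True if the relative difference between two stimulus intensities
    exceeds the measured discrimination threshold (Weber fraction)."""
    return abs(comparison - reference) / reference > weber_fraction

# Thresholds reported in the study, applied here as Weber fractions:
WEIGHT_JND = 0.10    # downward-force (weight) discrimination
TEXTURE_JND = 0.141  # friction-force (texture) discrimination
```

For example, against a 1.0 N reference downward force, a 1.05 N comparison falls under the 10% weight threshold and would feel identical, while a 1.2 N comparison would be distinguishable.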

    Gesture Recognition System Application to Early Childhood Education

    One of the most socially and culturally advantageous uses of human-computer interaction is enhancing play and learning for children. In this study, gesture interactive game-based learning (GIGL) is tested to see whether such applications are suitable for stimulating working memory (WM) and basic mathematical skills (BMS) in early childhood (5–6 years old), using a hand gesture recognition system. Hand gestures performed by the user are recognized and used to control the computer system. The children who used GIGL technology showed a significant increase in their learning performance in WM and BMS, surpassing those who did normal school activities.
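The study does not specify its recognition pipeline, but a common pattern in gesture-based learning games is mapping recognized hand poses to answers in an exercise. A toy sketch under that assumption (the landmark naming scheme and threshold rule are mine, loosely modeled on typical hand-landmark output):

```python
def count_extended_fingers(landmarks):
    """Toy classifier: a finger counts as extended when its tip is above
    (smaller y than) its middle joint in image coordinates. `landmarks`
    maps joint names to (x, y); the naming scheme is an assumption."""
    fingers = ["index", "middle", "ring", "pinky"]
    return sum(
        1 for f in fingers
        if landmarks[f + "_tip"][1] < landmarks[f + "_pip"][1]
    )

def check_answer(landmarks, expected):
    """In a counting exercise (BMS), the child answers by holding up
    fingers; the game checks the count against the expected number."""
    return count_extended_fingers(landmarks) == expected
```

A real system would add temporal smoothing so a gesture only counts once it is held stable for several frames.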

    Intelligent tutoring in virtual reality for highly dynamic pedestrian safety training

    This thesis presents the design, implementation, and evaluation of an Intelligent Tutoring System (ITS) with a Virtual Reality (VR) interface for child pedestrian safety training. The system enables children to train practical skills in a safe and realistic virtual environment, without the time and space dependencies of traditional roadside training. It also employs Domain and Student Modelling techniques to automatically analyze user data during training and to provide appropriate instructions and feedback, greatly reducing the traditional requirement of constant monitoring by teaching personnel. Compared to previous work, this second aspect in particular is a principal novelty for the domain. To achieve it, a novel Domain and Student Modelling method was developed, along with a modular and extensible virtual environment for the target domain. While the Domain and Student Modelling framework is designed to handle the highly dynamic nature of training in traffic and the ill-defined characteristics of pedestrian tasks, the modular virtual environment supports different interaction methods and a simple, efficient way to create and adapt exercises. The thesis is complemented by two user studies with elementary school children, which attest to high overall user acceptance and the system's potential for improving key pedestrian skills through autonomous learning. Finally, the thesis presents experiments with different forms of VR input and provides directions for future work.
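The abstract does not disclose the student-modelling method itself, so as an illustration of the general idea, here is a sketch of one standard ITS technique, Bayesian Knowledge Tracing, applied to a pedestrian skill; the parameters and skill name are placeholders, not the thesis's values:

```python
def bkt_update(p_mastery, correct, slip=0.1, guess=0.2, learn=0.15):
    """One Bayesian Knowledge Tracing step: revise the probability that
    the student has mastered a skill (e.g. 'checks both ways before
    crossing') after observing one correct/incorrect action, then apply
    the chance that the skill was learned during the attempt."""
    if correct:
        evidence = p_mastery * (1 - slip)
        posterior = evidence / (evidence + (1 - p_mastery) * guess)
    else:
        evidence = p_mastery * slip
        posterior = evidence / (evidence + (1 - p_mastery) * (1 - guess))
    return posterior + (1 - posterior) * learn
```

The tutor can then select instructions and feedback based on the current mastery estimate, which is what removes the need for constant human monitoring.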