425 research outputs found

    Experiencing AR in retail: The influence of moment marketing and avatars on consumer behaviour

    The development of augmented reality experiences is growing, as its adoption by companies and consumers rises steadily. As research is still catching up with the fast adoption of augmented reality solutions, the aim of this study was to investigate the effects of using a shopping assistant, delivered through augmented reality technology, on consumers’ emotional and cognitive responses, and how it would affect their buying behaviours. A prototype application to assist consumers inside a supermarket was developed, applying a moment marketing strategy and using HoloLens glasses. By studying reactions to a number of product suggestions, it was found that whilst the level of brand-moment fit is not yet a strong influence on consumers’ responses and behaviours, the presence of an avatar as the assistant impacts their decisions and heightens their cognitive responses. The results show that a media-rich augmented reality experience influences how customers behave in a retail store and how they make purchase decisions, ultimately changing how consumers relate to the brands involved in such experiences. At a time when managers in every industry work to capture the attention of consumers, the present study shows how relevant content remains important in every communication activity, even in an innovative augmented reality retail shopping experience.

    An Eye for AI: A Multimodal Bottleneck Transformer Approach for Predicting Individual Eye Movements: Towards Foundation Models for Human Factors & Neuroscience

    Human perception has been a subject of study for centuries. Various eye-tracking methods across many study designs have shed light on individual differences in perception and visual navigation. However, accurately identifying individuals based on gaze behaviour remains a challenge. Artificial intelligence (AI) based methods have led to large successes in domains such as vision and language, and they are now making their introduction in human factors & neuroscience (HFN). Leveraging AI for HFN requires quantities of data several orders of magnitude larger than the field is accustomed to organising, and there is a clear gap in the standardisation of data publication. In this work, we work towards foundation models (FM) for HFN by highlighting important data insights from AI. A multimodal bottleneck transformer is proposed: a model architecture that can effectively and efficiently represent and work with the varying modalities encountered in HFN. Results indicate that classification of individuals and prediction of gaze is possible, given more training data.
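The core idea of a multimodal bottleneck transformer is that modalities exchange information only through a small set of shared "bottleneck" tokens, keeping cross-modal attention traffic cheap. Below is a minimal NumPy sketch of one such fusion step; the token counts, dimensions, and modality names (gaze vs. stimulus) are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(queries, keys, values):
    """Scaled dot-product attention (single head, no projections)."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    return softmax(scores) @ values

def bottleneck_fusion(gaze_tokens, stimulus_tokens, n_bottleneck=4, seed=0):
    """One fusion step: each modality writes into and reads from a small
    set of shared bottleneck tokens, so all cross-modal exchange is
    squeezed through n_bottleneck vectors (the 'bottleneck' idea)."""
    d = gaze_tokens.shape[-1]
    rng = np.random.default_rng(seed)
    bottleneck = rng.standard_normal((n_bottleneck, d))
    # Each modality writes a summary into the bottleneck ...
    b_from_gaze = attend(bottleneck, gaze_tokens, gaze_tokens)
    b_from_stim = attend(bottleneck, stimulus_tokens, stimulus_tokens)
    bottleneck = 0.5 * (b_from_gaze + b_from_stim)
    # ... and reads the fused summary back out (residual update).
    gaze_out = gaze_tokens + attend(gaze_tokens, bottleneck, bottleneck)
    stim_out = stimulus_tokens + attend(stimulus_tokens, bottleneck, bottleneck)
    return gaze_out, stim_out

# Toy example: 16 gaze tokens and 8 stimulus tokens, 32-dim each.
rng = np.random.default_rng(1)
g, s = bottleneck_fusion(rng.standard_normal((16, 32)),
                         rng.standard_normal((8, 32)))
```

Each modality keeps its own token count and only the four bottleneck vectors cross the modality boundary, which is what makes the approach scale to the heterogeneous data streams typical of HFN.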

    The Development and Evaluation of a Learning Electronic Medical Record System

    Electronic medical record (EMR) systems are capturing increasing amounts of data per patient. For clinicians to efficiently and accurately understand a patient’s clinical state, better ways are needed to determine when and how to display patient data. The American Medical Association envisions EMR systems that manage information flow and adjust for context, environment, and user preferences. We developed, implemented, and evaluated a prototype Learning EMR (LEMR) system with the aim of helping make this vision a reality. A LEMR system, as we employ the term, observes clinician information-seeking behavior and applies it to direct the future display of patient data. The development of this system was divided into five phases. First, we developed a prototype LEMR interface that served as a test bed for LEMR experimentation. The LEMR interface was evaluated in two studies: a think-aloud study and a usability study. The results from these studies were used to iteratively improve the interface. Second, we tested the accuracy of an inexpensive eye-tracking device and developed an automatic method for mapping eye gaze to patient data displayed in the LEMR interface. In two studies we showed that an inexpensive eye-tracking device can perform as well as a costlier device intended for research, and that the automatic mapping method accurately captures the patient information a user is viewing. Third, we collected observations of clinician information-seeking behavior in the LEMR system. In three studies we evaluated different observation methods and applied those methods to collect training data. Fourth, we used machine learning on the training data to model clinician information-seeking behavior. The models predict information that clinicians will seek in a given clinical context. Fifth, we applied the models to direct the display of patient data in a prospective evaluation of the LEMR system. The evaluation found that the system reduced the amount of time it takes for clinicians to prepare for morning rounds and highlighted about half of the patient data that clinicians seek.
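The gaze-mapping step described above can be sketched as point-in-rectangle hit testing against the layout of patient-data panels, with dwell counts per panel standing in for information-seeking observations. The panel names and pixel coordinates below are hypothetical illustrations, not the study's actual interface layout.

```python
# Hypothetical LEMR panel layout: (left, top, width, height) in pixels.
PANELS = {
    "vitals":      (0,   0,   400, 300),
    "labs":        (400, 0,   400, 300),
    "medications": (0,   300, 400, 300),
    "notes":       (400, 300, 400, 300),
}

def panel_at(x, y, panels=PANELS):
    """Map a single gaze sample to the patient-data panel it falls in."""
    for name, (left, top, w, h) in panels.items():
        if left <= x < left + w and top <= y < top + h:
            return name
    return None  # gaze landed outside every panel

def dwell_counts(gaze_samples, panels=PANELS):
    """Count gaze samples per panel: a crude proxy for the observations
    of information-seeking behavior used as machine-learning training data."""
    counts = {}
    for x, y in gaze_samples:
        name = panel_at(x, y, panels)
        if name is not None:
            counts[name] = counts.get(name, 0) + 1
    return counts

# Four gaze samples: one on vitals, two on labs, one off-screen.
counts = dwell_counts([(10, 10), (450, 50), (460, 40), (100, 700)])
```

In practice the mapping must also compensate for eye-tracker calibration error and scrolling, but the hit-testing core stays the same.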

    Real-time generation and adaptation of social companion robot behaviors

    Social robots will be part of our future homes. They will assist us in everyday tasks, entertain us, and provide helpful advice. However, the technology still faces challenges that must be overcome to equip the machine with social competencies and make it a socially intelligent and accepted housemate. An essential skill of every social robot is verbal and non-verbal communication. In contrast to voice assistants, smartphones, and smart home technology, which are already part of many people's lives today, social robots have an embodiment that raises expectations towards the machine. Their anthropomorphic or zoomorphic appearance suggests they can communicate naturally with speech, gestures, or facial expressions and understand corresponding human behaviors. In addition, robots also need to consider individual users' preferences: everybody is shaped by their culture, social norms, and life experiences, resulting in different expectations towards communication with a robot. However, robots do not have human intuition - they must be equipped with the corresponding algorithmic solutions to these problems. This thesis investigates the use of reinforcement learning to adapt the robot's verbal and non-verbal communication to the user's needs and preferences. Such non-functional adaptation of the robot's behaviors primarily aims to improve the user experience and the robot's perceived social intelligence. The literature has not yet provided a holistic view of the overall challenge: real-time adaptation requires control over the robot's multimodal behavior generation, an understanding of human feedback, and an algorithmic basis for machine learning. Thus, this thesis develops a conceptual framework for designing real-time non-functional social robot behavior adaptation with reinforcement learning. It provides a higher-level view from the system designer's perspective and guidance from the start to the end. 
It illustrates the process of modeling, simulating, and evaluating such adaptation processes. Specifically, it guides the integration of human feedback and social signals to equip the machine with social awareness. The conceptual framework is put into practice for several use cases, resulting in technical proofs of concept and research prototypes. They are evaluated in the lab and in in-situ studies. These approaches address typical activities in domestic environments, focussing on the robot's expression of personality, persona, politeness, and humor. Within this scope, the robot adapts its spoken utterances, prosody, and animations based on explicit or implicit human feedback.
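Reinforcement learning from user feedback, as described above, can be illustrated in its simplest form as an epsilon-greedy bandit over discrete behavior variants: the robot keeps a running reward estimate per variant, mostly exploits the best one, and occasionally explores. This is a minimal sketch of the general technique; the variant names and reward values are invented for illustration and do not come from the thesis.

```python
import random

class BehaviorAdapter:
    """Epsilon-greedy bandit: keep an incremental-mean reward estimate
    per behavior variant, exploit the best, explore occasionally."""
    def __init__(self, variants, epsilon=0.1, seed=0):
        self.q = {v: 0.0 for v in variants}   # reward estimates
        self.n = {v: 0 for v in variants}     # selection counts
        self.epsilon = epsilon
        self.rng = random.Random(seed)

    def choose(self):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(list(self.q))  # explore
        return max(self.q, key=self.q.get)        # exploit

    def feedback(self, variant, reward):
        """Incremental mean update from explicit or implicit feedback."""
        self.n[variant] += 1
        self.q[variant] += (reward - self.q[variant]) / self.n[variant]

adapter = BehaviorAdapter(["formal", "humorous", "concise"])
# Warm start: try each variant once before adapting.
for v in ["formal", "humorous", "concise"]:
    adapter.feedback(v, 1.0 if v == "humorous" else 0.2)
# Simulated user who consistently rewards humorous utterances.
for _ in range(200):
    v = adapter.choose()
    adapter.feedback(v, 1.0 if v == "humorous" else 0.2)
```

Real systems replace the scalar reward with signals derived from speech, facial expression, or task outcome, and condition the choice on context, but the exploit/explore/update loop is the algorithmic core.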

    Making better recommendations with online profiling agents

    Master's thesis (Master of Science).

    Toward Simulation-Based Training Validation Protocols: Exploring 3D Stereo with Incremental Rehearsal and Partial Occlusion to Instigate and Modulate Smooth Pursuit and Saccade Responses in Baseball Batting

    “Keeping your eye on the ball” is a long-standing tenet in baseball batting. And yet, there are no protocols for objectively conditioning, measuring, and/or evaluating eye-on-ball coordination performance relative to baseball-pitch trajectories. Although video games and other virtual simulation technologies offer alternatives for training and obtaining objective measures, baseball batting instruction has relied on traditional eye-pitch coordination exercises with qualitative “face validation”, statistics of whole-task batting performance, and/or subjective batter-interrogation methods, rather than on direct, quantitative eye-movement performance evaluations. Further, protocols for validating transfer-of-training (ToT) for video games and other simulation-based training have not been established in general, or for eye-movement training specifically. An exploratory research study was conducted to consider the ecological and ToT validity of a part-task, virtual-fastball simulator implemented in 3D stereo, along with a rotary pitching machine standing as proxy for the live-pitch referent. The virtual-fastball and live-pitch simulation couple was designed to facilitate objective eye-movement response measures to live and virtual stimuli. The objective measures 1) served to assess the ecological validity of virtual fastballs, 2) informed the characterization and comparison of eye-movement strategies employed by expert and novice batters, 3) enabled a treatment protocol relying on repurposed incremental-rehearsal and partial-occlusion methods intended to instigate and modulate strategic eye movements, and 4) revealed whether the simulation-based treatment resulted in positive (or negative) ToT in the real task. Results indicated that live fastballs consistently elicited different saccade onset time responses than virtual fastballs. Saccade onset times for live fastballs were consistent with catch-up saccades that follow the smooth-pursuit maximum velocity threshold of approximately 40-70°/s, while saccade onset times for virtual fastballs lagged on the order of 13%. More experienced batters employed more deliberate and timely combinations of smooth pursuit and catch-up saccades than less experienced batters, enabling them to position their eyes to meet the ball near the front edge of home plate. Smooth pursuit and saccade modulation from treatment was inconclusive from virtual-pitch pre- and post-treatment comparisons, but comparisons of live-pitch pre- and post-treatment indicate ToT improvements. Lagging saccade onset times from virtual pitches suggest possible accommodative-vergence impairment due to the accommodation-vergence conflict inherent in 3D stereo displays.
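The saccade onset measure above rests on a standard velocity-threshold idea: smooth pursuit tops out around 40-70°/s, so the first gaze sample moving faster than that ceiling marks a catch-up saccade. A minimal sketch of such a detector follows; the sample rate, threshold choice, and toy gaze trace are illustrative assumptions, not the study's actual pipeline.

```python
def saccade_onset(positions_deg, sample_rate_hz=1000.0, threshold_deg_s=70.0):
    """Return the index of the first sample whose instantaneous angular
    velocity exceeds the smooth-pursuit ceiling, or None if none does."""
    dt = 1.0 / sample_rate_hz
    for i in range(1, len(positions_deg)):
        velocity = abs(positions_deg[i] - positions_deg[i - 1]) / dt
        if velocity > threshold_deg_s:
            return i
    return None

# Toy 1 kHz trace: pursuit at 50 deg/s for five samples, then a
# 300 deg/s catch-up saccade starting at index 5.
trace = [0.00, 0.05, 0.10, 0.15, 0.20, 0.50, 0.80]
onset = saccade_onset(trace)
```

Comparing onset indices between live and virtual pitches (here, relative to the moment of ball release) is what yields the roughly 13% lag reported for virtual fastballs.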

    The Effectiveness Of Virtual Humans Vs. Pre-recorded Humans In A Standardized Patient Performance Assessment

    A Standardized Patient (SP) is a trained actor who portrays a particular illness to provide training to medical students and professionals. SPs primarily use written scripts and additional paper-based training to prepare for practical and board exams. Many institutions use various training methods, such as hiring preceptors for reenactment of scenarios, viewing archived videos, and computer-based training. Current training can be enhanced to improve the quality of standardized patients. This research examines current processes in standardized patient training and investigates new methods for clinical skills education in SPs. The modality selected for training may affect performance in the actual SP case. This paper reports the results of a study investigating whether SP performance assessment results differ when a virtual human modality is used for standardized patient training rather than a pre-recorded human modality. The sample population navigates through an interactive computer-based training module that provides informational content on the roles of an SP, training objectives, a practice session, and an interactive performance assessment with a simulated Virtual Human medical student. Half of the subjects interact with an animated virtual human medical student while the other half interacts with a pre-recorded human. The interactions from this assessment are audio-recorded, transcribed, and then graded to see how the two modalities compare. If virtual humans perform equal to or better than pre-recorded humans for standardized patient training, they can be utilized as a part-task trainer that brings standardized patients to a higher level of effectiveness and standardization. In addition, if executed properly, this tool could potentially be used as a part-task trainer that provides savings in training time, resources, budget, and staff to military and civilian healthcare facilities.

    Designing social cues for effective persuasive robots
