
    Spectral Visualization Sharpening

    In this paper, we propose a perceptually-guided visualization sharpening technique. We analyze the spectral behavior of an established comprehensive perceptual model to arrive at our approximated model based on an adapted weighting of the bandpass images from a Gaussian pyramid. The main benefit of this approximated model is its controllability and predictability for sharpening color-mapped visualizations. Our method can be integrated into any visualization tool as it adopts generic image-based post-processing, and it is intuitive and easy to use as viewing distance is the only parameter. Using highly diverse datasets, we show the usefulness of our method across a wide range of typical visualizations. Comment: Symposium on Applied Perception'1
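The adapted per-band weighting can be sketched as follows; this is a minimal illustration, using a full-resolution difference-of-Gaussians stack in place of the paper's actual Gaussian-pyramid implementation, with hypothetical per-level weights (the paper derives them from its perceptual model and the viewing distance):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpen_bandpass(img, weights, base_sigma=1.0):
    """Reweight bandpass layers of an image and recombine.

    `weights` are hypothetical per-level gains; with all weights equal
    to 1.0 the sum telescopes and the original image is reconstructed.
    """
    img = img.astype(np.float64)
    prev = img
    out = np.zeros_like(img)
    sigma = base_sigma
    for w in weights:
        blurred = gaussian_filter(prev, sigma)
        out += w * (prev - blurred)  # weighted bandpass layer for this scale
        prev = blurred
        sigma *= 2.0                 # next octave
    return out + prev                # sharpened bands + low-pass residual
```

Weights above 1.0 on the mid-frequency bands boost the detail the perceptual model predicts is attenuated at the given viewing distance, while the low-pass residual keeps the color mapping intact.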

    The influence of the viewpoint in a self-avatar on body part and self-localization

    The goal of this study is to determine how a self-avatar in virtual reality, experienced from different viewpoints on the body (at eye- or chest-height), might influence body part localization, as well as self-localization within the body. Previous literature shows that people do not locate themselves in only one location, but rather primarily in the face and the upper torso. Therefore, we aimed to determine if manipulating the viewpoint to either the height of the eyes or to the height of the chest would influence self-location estimates towards these commonly identified locations of self. In a virtual reality (VR) headset, participants were asked to point at several of their body parts (body part localization) as well as "directly at you" (self-localization) with a virtual pointer. Both pointing tasks were performed before and after a self-avatar adaptation phase where participants explored a co-located, scaled, gender-matched, and animated self-avatar. We hypothesized that experiencing a self-avatar might reduce inaccuracies in body part localization, and that viewpoint would influence pointing responses for both body part and self-localization. Participants overall pointed relatively accurately to some of their body parts (shoulders, chin, and eyes), but very inaccurately to others, with large undershooting for the hips, knees, and feet, and large overshooting for the top of the head. Self-localization was spread across the body (as well as above the head) with the following distribution: the upper face (25%), the upper torso (25%), above the head (15%) and below the torso (12%). We only found an influence of viewpoint (eye- vs chest-height) during the self-avatar adaptation phase for body part localization and not for self-localization.
The overall change in error distance for body part localization for the viewpoint at eye-height was small (M = –2.8 cm), while the overall change in error distance for the viewpoint at chest-height was significantly larger, and in the upwards direction relative to the body parts (M = 21.1 cm). In a post-questionnaire, there was no significant difference in embodiment scores between the viewpoint conditions. Most interestingly, having a self-avatar did not change the results on the self-localization pointing task, even with a novel viewpoint (chest-height). Possibly, body-based cues, or memory, ground the self when in VR. However, the present results caution the use of altered viewpoints in applications where veridical position sense of body parts is required.

    Designing passenger experiences for in-car Mixed Reality

    In day-to-day life, people spend a considerable amount of their time on the road. People seek to invest travel time in work and well-being through interaction with mobile and multimedia applications on personal devices such as smartphones and tablets. However, for new computing paradigms such as mobile mixed reality (MR), usefulness in this everyday transport context, in-car MR, remains challenging. When future passengers immerse themselves in three-dimensional virtual environments, they become increasingly disconnected from the cabin space, vehicle motion, and other people around them. This degraded awareness of the real environment endangers the passenger experience on the road, which motivates this thesis to ask: can immersive technology become useful in the everyday transport context, such as for in-car scenarios? If so, how should we design in-car MR technology to foster passenger access and connectedness to both physical and virtual worlds, ensuring ride safety, comfort, and joy? To this aim, this thesis contributes via three aspects: 1) Understanding passenger use of in-car MR —first, I present a model for in-car MR interaction through user research. As interviews with daily commuters reveal, passengers are concerned about their physical integrity when facing spatial conflicts between borderless virtual environments and the confined cabin space. From this, the model aims to help researchers spatially organize information and user interfaces that vary with proximity to the user. Additionally, a field experiment reveals contextual feedback about motion sickness when using immersive technology on the road. This helps refine the model and inform the subsequent experiments. 2) Mixing realities in car rides —second, this thesis explores a series of prototypes and experiments to examine how in-car MR technology can enable passengers to feel present in virtual environments while maintaining awareness of the real environment.
The results demonstrate technical solutions for physical integrity and situational awareness by incorporating essential elements of the real environment into virtual reality. The empirical evidence adds a set of dimensions to the in-car MR model, guiding the design decisions of mixing realities. 3) Transcending the transport context —third, I extend the model to other everyday contexts beyond transport that share spatial and social constraints, such as the confined and shared living space at home. A literature review consolidates how daily physical objects can be leveraged as haptic feedback for MR interaction across spatial scales. A laboratory experiment shows how context-aware MR systems that consider physical configurations can support social interaction with copresent others in close shared spaces. These results substantiate the scalability of the in-car MR model to other contexts. Finally, I conclude with a holistic model for mobile MR interaction across everyday contexts, from home to on the road. With my user research, prototypes, empirical evaluation, and model, this thesis paves the way for understanding the future passenger use of immersive technology, addressing today’s technical limitations of MR in mobile interaction, and ultimately fostering mobile users’ ubiquitous access and close connectedness to MR anytime and anywhere in their daily lives.

    SafetyKit: first aid for measuring safety in open-domain conversational systems

    The social impact of natural language processing and its applications has received increasing attention. In this position paper, we focus on the problem of safety for end-to-end conversational AI. We survey the problem landscape therein, introducing a taxonomy of three observed phenomena: the Instigator, Yea-Sayer, and Impostor effects. We then empirically assess the extent to which current tools can measure these effects and current systems display them. We release these tools as part of a “first aid kit” (SafetyKit) to quickly assess apparent safety concerns. Our results show that, while current tools are able to provide an estimate of the relative safety of systems in various settings, they still have several shortcomings. We suggest several future directions and discuss ethical considerations.

    Can ADAS distract driver’s attention? An RGB-D camera and deep learning-based analysis

    Driver inattention is the primary cause of vehicle accidents; hence, manufacturers have introduced systems to support the driver and improve safety. Nonetheless, advanced driver assistance systems (ADAS) must be properly designed so that the provided feedback does not become a potential source of distraction for the driver. In the present study, an experiment involving auditory and haptic ADAS has been conducted with 11 participants, whose attention was monitored during their driving experience. An RGB-D camera was used to acquire the drivers’ face data. Subsequently, these images were analyzed using a deep learning-based approach, i.e., a convolutional neural network (CNN) specifically trained to perform facial expression recognition (FER). Analyses were carried out to assess possible relationships between these results and both ADAS activations and event occurrences, i.e., accidents. A correlation between attention and accidents emerged, whilst facial expressions and ADAS activations turned out not to be correlated; thus, no evidence was found that the designed ADAS are a source of distraction. In addition to the experimental results, the proposed approach has proved to be an effective tool to monitor the driver through the usage of non-invasive techniques.
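The reported relationship checks reduce to straightforward correlation tests between per-trial signals; the sketch below is illustrative only (the variable names, the attention scoring, and the data are assumptions, not the study's actual pipeline):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two per-trial signals, e.g. an
    attention score derived from facial-expression recognition and a
    binary accident indicator (the point-biserial special case)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc = x - x.mean()
    yc = y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical data: low attention co-occurring with accidents gives a
# negative correlation; unrelated signals would hover near zero.
attention = [0.9, 0.8, 0.3, 0.7, 0.2, 0.85]
accident = [0, 0, 1, 0, 1, 0]
r = pearson_r(attention, accident)
```

A significance test on such coefficients (e.g. via `scipy.stats.pearsonr`) would then distinguish a real association, as reported for attention and accidents, from noise, as for expressions and ADAS activations.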

    Mining social media data for biomedical signals and health-related behavior

    Social media data has been increasingly used to study biomedical and health-related phenomena. From cohort-level discussions of a condition to planetary-level analyses of sentiment, social media has provided scientists with unprecedented amounts of data to study human behavior and response associated with a variety of health conditions and medical treatments. Here we review recent work in mining social media for biomedical, epidemiological, and social phenomena information relevant to the multilevel complexity of human health. We pay particular attention to topics where social media data analysis has shown the most progress, including pharmacovigilance, sentiment analysis especially for mental health, and other areas. We also discuss a variety of innovative uses of social media data for health-related applications and important limitations in social media data access and use. Comment: To appear in the Annual Review of Biomedical Data Science

    Digital Twin in the IoT context: a survey on technical features, scenarios and architectural models

    Digital Twin is an emerging concept that is gaining attention in various industries. It refers to the ability to clone a physical object into a software counterpart. The softwarized object, termed logical object, reflects all the important properties and characteristics of the original object within a specific application context. To fully determine the expected properties of the Digital Twin, this paper surveys the state of the art starting from the original definition within the manufacturing industry. It takes into account related proposals emerging in other fields, namely, Augmented and Virtual Reality (e.g., avatars), Multi-agent systems, and virtualization. This survey thereby allows for the identification of an extensive set of Digital Twin features that point to the “softwarization” of physical objects. To properly consolidate a shared Digital Twin definition, a set of foundational properties is identified and proposed as a common ground outlining the essential characteristics (must-haves) of a Digital Twin. Once the Digital Twin definition has been consolidated, its technical and business value is discussed in terms of applicability and opportunities. Four application scenarios illustrate how the Digital Twin concept can be used and how some industries are applying it. The scenarios also lead to a generic Digital Twin architectural model. This analysis is then complemented by the identification of software architecture models and guidelines in order to present a general functional framework for the Digital Twin. Finally, the paper analyses a set of possible evolution paths for the Digital Twin considering its possible usage as a major enabler for the softwarization process.

    Balancing User Experience for Mobile One-to-One Interpersonal Telepresence

    The COVID-19 pandemic disrupted all aspects of our daily lives, and though the world is finally returning to normalcy, the pandemic has shown us how ill-prepared we are to support social interactions when expected to remain socially distant. Family members missed major life events of their loved ones; face-to-face interactions were replaced with video chat; and the technologies used to facilitate interim social interactions caused an increase in depression, stress, and burn-out. It is clear that we need better solutions to address these issues, and one avenue showing promise is that of Interpersonal Telepresence. Interpersonal Telepresence is an interaction paradigm in which two people can share mobile experiences and feel as if they are together, even though geographically distributed. In this dissertation, we posit that this paradigm has significant value in one-to-one, asymmetrical contexts, where one user can live-stream their experiences to another who remains at home. We discuss a review of the recent Interpersonal Telepresence literature, highlighting research trends and opportunities that require further examination. Specifically, we show how current telepresence prototypes do not meet the social needs of the streamer, who often feels socially awkward when using obtrusive devices. To combat this negative finding, we present a qualitative co-design study in which end users worked together to design their ideal telepresence systems, overcoming value tensions that naturally arise between Viewer and Streamer. Expectedly, virtual reality techniques are desired to provide immersive views of the remote location; however, our participants noted that the devices to facilitate this interaction need to be hidden from the public eye. This suggests that 360° cameras should be used, but the lenses need to be embedded in wearable systems, which might affect the viewing experience.
We thus present two quantitative studies in which we examine the effects of camera placement and height on the viewing experience, in an effort to understand how we can better design telepresence systems. We found that camera height is not a significant factor, meaning wearable cameras do not need to be positioned at the natural eye-level of the viewer; the streamer is able to place them according to their own needs. Lastly, we present a qualitative study in which we deploy a custom interpersonal telepresence prototype built on the co-design findings. Our participants preferred our prototype over simple video chat, even though it caused a somewhat increased sense of self-consciousness. Our participants indicated that they have their own preferences, even with simple design decisions such as style of hat, and we as a community need to consider ways to allow customization within our devices. Overall, our work contributes new knowledge to the telepresence field and helps system designers focus on the features that truly matter to users, in an effort to let people have richer experiences and virtually bridge the distance to their loved ones.