
    DRLViz: Understanding Decisions and Memory in Deep Reinforcement Learning

    We present DRLViz, a visual analytics interface to interpret the internal memory of an agent (e.g. a robot) trained using deep reinforcement learning. This memory is composed of large temporal vectors updated as the agent moves in an environment, and it is not trivial to understand due to the number of dimensions, dependencies on past vectors, spatial/temporal correlations, and correlations between dimensions. It is often referred to as a black box, since only the inputs (images) and outputs (actions) are intelligible to humans. Using DRLViz, experts are assisted in interpreting decisions through memory-reduction interactions, and in investigating the role of parts of the memory when errors have been made (e.g. a wrong direction). We report on DRLViz applied in the context of a video game simulator (ViZDoom) for a navigation scenario with item-gathering tasks. We also report on an expert evaluation of DRLViz, its applicability to other scenarios and navigation problems beyond simulation games, and its contribution to the interpretability and explainability of black-box models in the field of visual analytics.
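
    The kind of memory DRLViz exposes can be reproduced in a small, hypothetical sketch: run a recurrent policy for one episode, record its hidden state at every timestep, and project the trace to 2D so temporal structure becomes inspectable. The toy policy, frame embeddings, and episode loop below are illustrative stand-ins, not the authors' agent or interface.

```python
# Minimal sketch (assumed setup, not DRLViz itself): collect the recurrent
# memory of an agent over one episode and project it to 2D for inspection.
import torch
import torch.nn as nn
import numpy as np
from sklearn.decomposition import PCA

class RecurrentPolicy(nn.Module):
    """Tiny stand-in for a DRL navigation agent with a GRU memory."""
    def __init__(self, obs_dim=128, hidden_dim=256, n_actions=4):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden_dim)   # stands in for a CNN over frames
        self.memory = nn.GRUCell(hidden_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, n_actions)

    def step(self, obs, h):
        h = self.memory(torch.relu(self.encoder(obs)), h)
        return self.head(h), h

policy = RecurrentPolicy()
h = torch.zeros(1, 256)
hidden_states, actions = [], []

for t in range(200):                       # one simulated episode
    obs = torch.randn(1, 128)              # placeholder for a ViZDoom frame embedding
    logits, h = policy.step(obs, h)
    hidden_states.append(h.detach().numpy().ravel())
    actions.append(int(logits.argmax()))

# Reduce the memory trace to 2D; a DRLViz-style tool lets experts brush such
# projections and link points back to timesteps, frames, and actions.
memory = np.stack(hidden_states)           # shape: (timesteps, hidden_dim)
projection = PCA(n_components=2).fit_transform(memory)
print(projection.shape, len(actions))      # (200, 2), 200
```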

    MUVTIME: a Multivariate time series visualizer for behavioral science

    As behavioral science becomes progressively more data driven, the need is increasing for appropriate tools for visual exploration and analysis of large datasets, often formed by multivariate time series. This paper describes MUVTIME, a multimodal time series visualization tool developed in Matlab that allows a user to load a time series collection (a multivariate time series dataset) and an associated video. The user can plot several time series in MUVTIME and use one of them to brush the displayed data, i.e. select a time range dynamically and have it updated on the display. The tool also features a categorical visualization of two binary time series that works as a high-level descriptor of the coordination between two interacting partners. The paper reports the successful use of MUVTIME within project TURNTAKE, which was intended to contribute to the improvement of human-robot interaction systems by studying turn-taking dynamics (role interchange) in parent-child dyads during joint action. This research was supported by: Marie Curie International Incoming Fellowship PIIF-GA-2011-301155; Portuguese Foundation for Science and Technology (FCT) Strategic program UID/EEA/00066/2013; FCT project PTDC/PSI-PCO/121494/2010. AFP was also partially funded by FCT project IF/00217/2013.
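
    MUVTIME itself is a Matlab tool, but the brushing interaction the abstract describes (select a time range on one series and have a linked view update) can be sketched in a few lines. The synthetic signals, labels, and layout below are assumptions for illustration only.

```python
# Overview-plus-detail brushing sketch: drag a range on the top axes and the
# bottom axes zoom to that range. Not MUVTIME; just the interaction pattern.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import SpanSelector

t = np.linspace(0, 60, 3000)                       # 60 s of synthetic data
series = {"parent_motion": np.sin(0.5 * t) + 0.1 * np.random.randn(t.size),
          "child_motion": np.cos(0.4 * t) + 0.1 * np.random.randn(t.size)}

fig, (ax_overview, ax_detail) = plt.subplots(2, 1, figsize=(8, 5))
for name, y in series.items():
    ax_overview.plot(t, y, label=name, lw=0.8)
    ax_detail.plot(t, y, lw=0.8)
ax_overview.legend(loc="upper right")
ax_overview.set_title("Overview: brush a time range here")
ax_detail.set_title("Detail: updated from the brushed range")

def on_brush(tmin, tmax):
    ax_detail.set_xlim(tmin, tmax)                 # linked view follows the brush
    fig.canvas.draw_idle()

brush = SpanSelector(ax_overview, on_brush, "horizontal", useblit=True)
plt.show()
```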

    The Evolution of First Person Vision Methods: A Survey

    The emergence of new wearable technologies such as action cameras and smart glasses has increased the interest of computer vision scientists in the first-person perspective. Nowadays, this field is attracting the attention and investment of companies aiming to develop commercial devices with First Person Vision recording capabilities. Due to this interest, an increasing demand for methods to process these videos, possibly in real time, is expected. Current approaches combine different image features and quantitative methods to accomplish specific objectives such as object detection, activity recognition, user-machine interaction, and so on. This paper summarizes the evolution of the state of the art in First Person Vision video analysis between 1997 and 2014, highlighting, among others, the most commonly used features, methods, challenges, and opportunities within the field.

    Anchorage: Visual Analysis of Satisfaction in Customer Service Videos via Anchor Events

    Delivering customer services through video communications has brought new opportunities to analyze customer satisfaction for quality management. However, due to the lack of reliable self-reported responses, service providers are troubled by the inadequate estimation of customer services and the tedious investigation of multimodal video recordings. We introduce Anchorage, a visual analytics system to evaluate customer satisfaction by summarizing multimodal behavioral features in customer service videos and revealing abnormal operations in the service process. We leverage semantically meaningful operations to introduce structured event understanding into videos, which helps service providers quickly navigate to events of interest. Anchorage supports a comprehensive evaluation of customer satisfaction at the service and operation levels and efficient analysis of customer behavioral dynamics via multifaceted visualization views. We extensively evaluate Anchorage through a case study and a carefully designed user study. The results demonstrate its effectiveness and usability in assessing customer satisfaction using customer service videos. We found that introducing event contexts in assessing customer satisfaction can enhance its performance without compromising annotation precision. Our approach can be adapted to situations where unlabelled and unstructured videos are collected along with sequential records.
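
    The event-level summarization idea can be illustrated with a small, hypothetical table: frame-level behavioral features are labelled with the service operation ("anchor event") they fall in and aggregated per event, so an analyst compares events instead of scrubbing raw footage. The column names, events, and feature values below are invented for the sketch and are not Anchorage's data model.

```python
# Aggregate frame-level behavioral features per anchor event (illustrative only).
import pandas as pd

frames = pd.DataFrame({
    "time_s":    [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0],
    "event":     ["greeting", "greeting", "id_check", "id_check",
                  "id_check", "signing", "signing"],
    "smile":     [0.80, 0.70, 0.20, 0.10, 0.15, 0.60, 0.65],   # facial expression score
    "voice_rms": [0.30, 0.28, 0.12, 0.10, 0.11, 0.25, 0.27],   # speech energy
})

per_event = (frames.groupby("event", sort=False)
                   .agg(start=("time_s", "min"),
                        end=("time_s", "max"),
                        mean_smile=("smile", "mean"),
                        mean_voice=("voice_rms", "mean")))
print(per_event)   # one summary row per anchor event, ready for visualization
```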

    VOICE: Visual Oracle for Interaction, Conversation, and Explanation

    We present VOICE, a novel approach for connecting large language models' (LLM) conversational capabilities with interactive exploratory visualization. VOICE introduces several technical contributions that drive our conversational visualization framework. Our foundation is a pack-of-bots that can perform specific tasks, such as assigning tasks, extracting instructions, and generating coherent content. We employ fine-tuning and prompt engineering techniques to tailor the bots' performance to their specific roles and to respond accurately to user queries, and a new prompt-based iterative scene-tree generation establishes a coupling with a structural model. Our text-to-visualization method generates a flythrough sequence matching the content explanation. Finally, 3D natural language interaction provides the ability to navigate and manipulate the 3D models in real time. The VOICE framework can receive arbitrary voice commands from the user and respond verbally, tightly coupled with the corresponding visual representation, with low latency and high accuracy. We demonstrate the effectiveness and high generalizability potential of our approach by applying it to two distinct domains: analyzing three 3D molecular models with multi-scale and multi-instance attributes, and showcasing its effectiveness on a cartographic map visualization. A free copy of this paper and all supplemental materials are available at https://osf.io/g7fbr/.
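
    A minimal sketch of the "pack-of-bots" pattern the abstract outlines: each bot owns a role-specific prompt, a lightweight router picks a bot for the incoming query, and the bot's reply would drive the visualization. The roles, keywords, and the call_llm stub are assumptions for illustration, not the VOICE implementation or any particular LLM API.

```python
# Route a user query to a role-specific bot (hypothetical roles and keywords).
from dataclasses import dataclass

@dataclass
class Bot:
    role: str
    system_prompt: str
    keywords: tuple

BOTS = [
    Bot("navigator", "Turn the user's request into a camera path over the 3D scene.",
        ("fly", "zoom", "rotate", "show")),
    Bot("explainer", "Explain the selected structure in plain language.",
        ("what", "why", "explain")),
    Bot("task_extractor", "Extract a structured task list from the conversation.",
        ("steps", "plan", "tasks")),
]

def route(query: str) -> Bot:
    q = query.lower()
    for bot in BOTS:
        if any(k in q for k in bot.keywords):
            return bot
    return BOTS[1]                       # default to the explainer bot

def call_llm(system_prompt: str, query: str) -> str:
    # Placeholder: a real system would call a fine-tuned model here.
    return f"[{system_prompt!r} applied to {query!r}]"

query = "fly to the binding site and zoom in"
bot = route(query)
print(bot.role, "->", call_llm(bot.system_prompt, query))
```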

    Social signal processing for studying parent–infant interaction

    Studying early interactions is a core issue of infant development and psychopathology. Automatic social signal processing theoretically offers the possibility to extract and analyze communication from an integrative perspective, considering the multimodal nature and dynamics of behaviors (including synchrony). This paper proposes an exploratory method to acquire and extract relevant social signals from a naturalistic early parent–infant interaction. An experimental setup is proposed based on both clinical and technical requirements. We extracted various cues from the body postures and speech productions of the partners using the IMI2S (Interaction, Multimodal Integration, and Social Signal) Framework. Preliminary clinical and computational results are reported for two dyads (one pathological, in a situation of severe emotional neglect, and one normal control) as an illustration of our cross-disciplinary protocol. The clinical and computational analyses highlight similar differences: the pathological dyad shows dyssynchronic interaction led by the infant, whereas the control dyad shows synchronic interaction and a smooth interactive dialog. The results suggest that the current method might be promising for future studies.
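
    One common way to quantify the interpersonal synchrony mentioned above is a windowed cross-correlation between the two partners' behavioral signals, where the lag of the correlation peak hints at who tends to lead. The sketch below uses synthetic movement signals and an assumed window length; it is not the IMI2S pipeline.

```python
# Windowed cross-correlation as a simple dyadic synchrony measure (illustrative).
import numpy as np

fs = 25                                   # frames per second of the motion signal
t = np.arange(0, 120, 1 / fs)             # two minutes of interaction
parent = np.sin(2 * np.pi * 0.2 * t) + 0.2 * np.random.randn(t.size)
infant = np.roll(parent, int(0.8 * fs)) + 0.3 * np.random.randn(t.size)   # infant lags ~0.8 s

def windowed_sync(a, b, win_s=10, max_lag_s=2):
    """Peak correlation and its lag (s) in consecutive non-overlapping windows."""
    win, max_lag = int(win_s * fs), int(max_lag_s * fs)
    peaks, lags = [], []
    for start in range(0, a.size - win, win):
        wa = a[start:start + win] - a[start:start + win].mean()
        wb = b[start:start + win] - b[start:start + win].mean()
        xcorr = [np.corrcoef(wa[max(0, -l):win - max(0, l)],
                             wb[max(0, l):win - max(0, -l)])[0, 1]
                 for l in range(-max_lag, max_lag + 1)]
        best = int(np.argmax(xcorr))
        peaks.append(xcorr[best])
        lags.append((best - max_lag) / fs)
    return np.array(peaks), np.array(lags)

peak_corr, peak_lag = windowed_sync(parent, infant)
print(f"median synchrony {np.median(peak_corr):.2f}, median lag {np.median(peak_lag):.2f} s")
```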

    An immersive system for browsing and visualizing surveillance video

    HouseFly is an interactive data browsing and visualization system that synthesizes audio-visual recordings from multiple sensors, as well as the metadata derived from those recordings, into a unified viewing experience. The system is being applied to study human behavior in both domestic and retail situations grounded in longitudinal video recordings. HouseFly uses an immersive video technique to display multiple streams of high-resolution video, using a real-time warping procedure that projects the video onto a 3D model of the recorded space. The system interface gives the user simultaneous control over both playback rate and vantage point, enabling the user to navigate the data spatially and temporally. Beyond applications in video browsing, the system serves as an intuitive platform for visualizing patterns over time in a variety of multimodal data, including person tracks and speech transcripts. This work was supported by the United States Office of Naval Research (Award no. N000140910187).
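
    A much-simplified version of the projection idea is to warp one camera frame onto a floor-plan texture using a homography estimated from known floor correspondences; HouseFly itself goes further, projecting multiple high-resolution streams onto a full 3D model in real time. The synthetic frame, landmark coordinates, and output size below are assumptions made for the sketch.

```python
# Warp a (synthetic) camera frame onto floor-plan coordinates via a homography.
import cv2
import numpy as np

# Synthetic stand-in for a ceiling-camera frame with one marked floor region.
frame = np.zeros((720, 1280, 3), np.uint8)
cv2.rectangle(frame, (300, 450), (700, 650), (0, 200, 255), -1)

floor_size = (800, 600)   # output floor-plan texture (width, height) in pixels

# Four floor landmarks as seen in the camera image ...
src = np.float32([[120, 400], [980, 420], [1180, 700], [40, 690]])
# ... and the same landmarks in floor-plan coordinates (assumed measured once).
dst = np.float32([[0, 0], [800, 0], [800, 600], [0, 600]])

H, _ = cv2.findHomography(src, dst)
floor_view = cv2.warpPerspective(frame, H, floor_size)
cv2.imwrite("floor_projection.png", floor_view)
```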

    From Keyword Search to Exploration: How Result Visualization Aids Discovery on the Web

    A key to the Web's success is the power of search. The elegant way in which search results are returned is usually remarkably effective. However, for exploratory search, in which users need to learn, discover, and understand novel or complex topics, there is substantial room for improvement. Human-computer interaction researchers and web browser designers have developed novel strategies to improve Web search by enabling users to conveniently visualize, manipulate, and organize their Web search results. This monograph offers fresh ways to think about search-related cognitive processes and describes innovative design approaches to browsers and related tools. For instance, while keyword search presents users with results for specific information (e.g., what is the capital of Peru), other methods may let users see and explore the contexts of their requests for information (related or previous work, conflicting information), or the properties that associate groups of information assets (e.g., grouping legal decisions by lead attorney). We also consider both the traditional and novel ways in which these strategies have been evaluated. From our review of cognitive processes, browser design, and evaluations, we reflect on future opportunities and new paradigms for exploring and interacting with Web search results.