
    DRLViz: Understanding Decisions and Memory in Deep Reinforcement Learning

    We present DRLViz, a visual analytics interface for interpreting the internal memory of an agent (e.g. a robot) trained using deep reinforcement learning. This memory is composed of large temporal vectors that are updated as the agent moves through an environment, and it is not trivial to understand due to the number of dimensions, the dependencies on past vectors, spatial/temporal correlations, and correlations between dimensions. Such a model is often referred to as a black box, since only its inputs (images) and outputs (actions) are intelligible to humans. DRLViz assists experts in interpreting decisions through memory reduction interactions, and in investigating the role of parts of the memory when errors have been made (e.g. a wrong direction). We report on DRLViz applied in the context of video game simulators (ViZDoom) for a navigation scenario with item-gathering tasks. We also report on an expert evaluation of DRLViz, its applicability to other scenarios and navigation problems beyond simulation games, and its contribution to the interpretability and explainability of black box models in the field of visual analytics.
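
    The memory DRLViz visualizes is, at its core, a matrix of recurrent hidden states collected during a rollout. As an illustration only (the abstract does not specify the extraction code), the following sketch records the hidden state of a hypothetical GRU-based policy at every timestep, producing the kind of timesteps-by-dimensions timeline such a tool would display; the policy architecture, dimensions, and dummy observations are all assumptions.

        # Minimal sketch (not DRLViz's actual pipeline): record the hidden state
        # of a recurrent policy at every step so it can later be inspected as a
        # (T x D) memory timeline.
        import torch
        import torch.nn as nn

        class RecurrentPolicy(nn.Module):
            def __init__(self, obs_dim=64, hidden_dim=128, n_actions=4):
                super().__init__()
                self.gru = nn.GRU(obs_dim, hidden_dim, batch_first=True)
                self.head = nn.Linear(hidden_dim, n_actions)

            def step(self, obs, h):
                # obs: (1, 1, obs_dim), h: (1, 1, hidden_dim)
                out, h = self.gru(obs, h)
                logits = self.head(out[:, -1])
                return logits, h

        policy = RecurrentPolicy()
        h = torch.zeros(1, 1, 128)          # initial memory
        memory_timeline = []                # one hidden vector per timestep

        for t in range(200):                # dummy rollout; a real env loop would go here
            obs = torch.randn(1, 1, 64)     # stand-in for an encoded game frame
            logits, h = policy.step(obs, h)
            action = logits.argmax(dim=-1)  # would drive the environment
            memory_timeline.append(h.squeeze().detach().clone())

        # (T, hidden_dim) matrix: rows are timesteps, columns are memory dimensions.
        timeline = torch.stack(memory_timeline)
        print(timeline.shape)               # torch.Size([200, 128])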

    EagleView: A Video Analysis Tool for Visualising and Querying Spatial Interactions of People and Devices

    To study and understand group collaborations involving multiple handheld devices and large interactive displays, researchers frequently analyse video recordings of interaction studies to interpret people's interactions with each other and/or devices. Advances in ubicomp technologies allow researchers to record spatial information through sensors in addition to video material. However, the volume of video data and the high number of coding parameters involved in such an interaction analysis make this a time-consuming and labour-intensive process. We designed EagleView, which provides analysts with real-time visualisations during playback of videos and an accompanying data stream of tracked interactions. The real-time visualisations take into account key proxemic dimensions, such as distance and orientation. Overview visualisations show people's position and movement over longer periods of time. EagleView also allows the user to query people's interactions with an easy-to-use visual interface. Results are highlighted on the video player's timeline, enabling quick review of relevant instances. Our evaluation with expert users showed that EagleView is easy to learn and use, and that its visualisations allow analysts to gain insights into collaborative activities.
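
    As a rough illustration of the proxemic dimensions mentioned above, the sketch below computes inter-person distance and a facing offset from tracked (x, y, heading) records; it is not EagleView's actual code, and the record format is an assumption.

        # Hedged sketch (not EagleView's implementation): derive the two proxemic
        # measures the abstract mentions -- inter-person distance and relative
        # orientation -- from tracked (x, y, heading) records, per frame.
        import math

        def proxemics(a, b):
            """a, b: dicts with x, y in metres and heading in radians."""
            dx, dy = b["x"] - a["x"], b["y"] - a["y"]
            distance = math.hypot(dx, dy)
            # Angle between a's heading and the direction towards b, in [0, pi].
            bearing = math.atan2(dy, dx)
            facing = abs((bearing - a["heading"] + math.pi) % (2 * math.pi) - math.pi)
            return distance, facing

        # One tracked frame: person A nearly faces person B from 1.2 m away.
        a = {"x": 0.0, "y": 0.0, "heading": 0.1}
        b = {"x": 1.2, "y": 0.0, "heading": math.pi}
        d, f = proxemics(a, b)
        print(f"distance={d:.2f} m, facing offset={math.degrees(f):.1f} deg")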

    A Review of Temporal Data Visualizations Based on Space-Time Cube Operations

    We review a range of temporal data visualization techniques through a new lens, by describing them as series of operations performed on a conceptual space-time cube. These operations include extracting subparts of a space-time cube, flattening it across space or time, or transforming the cube's geometry or content. We introduce a taxonomy of elementary space-time cube operations, and explain how they can be combined to turn a three-dimensional space-time cube into an easily readable two-dimensional visualization. Our model captures most visualizations showing two or more data dimensions in addition to time, such as geotemporal visualizations, dynamic networks, time-evolving scatterplots, or videos. We finally review interactive systems that support a range of operations. By introducing this conceptual framework we hope to facilitate the description, criticism and comparison of existing temporal data visualizations, as well as encourage the exploration of new techniques and systems.
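
    To make the operations concrete, the following sketch treats a grayscale video as a (time, height, width) array and applies two elementary operation families from the taxonomy, cutting and flattening; the array layout and the aggregation functions are illustrative assumptions, not the paper's implementation.

        # Illustrative sketch of elementary space-time cube operations on a video
        # cube, following the paper's framing; layout (time, y, x) is an assumption.
        import numpy as np

        rng = np.random.default_rng(0)
        cube = rng.random((120, 90, 160))        # (time, height, width) grayscale video

        # "Cutting": extract a subpart of the cube.
        time_cut = cube[40]                      # one frame           -> (90, 160) image
        space_cut = cube[:, :, 80]               # one x-plane over time -> (120, 90)

        # "Flattening": collapse one axis to get a 2D visualization.
        time_flatten = cube.max(axis=0)          # motion summary across all frames
        space_flatten = cube.mean(axis=2)        # collapse x: per-row intensity over time

        print(time_cut.shape, space_cut.shape, time_flatten.shape, space_flatten.shape)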

    Exploring video streams using slit-tear visualizations

    Video slicing, a variant of slit scanning in photography, extracts a scan line from a video frame and successively adds that line to a composite image over time. The composite image becomes a timeline, where its visual patterns reflect changes in a particular area of the video stream. We extend this idea of video slicing by allowing users to draw marks anywhere on the source video to capture areas of interest. These marks, which we call slit-tears, are used in place of a scan line, and the resulting composite timeline image provides a much richer visualization of the video data. Depending on how tears are placed, they can accentuate motion, small changes, directional movement, and relational patterns.
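
    The core mechanism is simple to sketch: sample the pixels under a line in every frame and stack the samples into a composite timeline image. The code below is a minimal illustration of this idea, not the authors' implementation; the frame source and tear coordinates are stand-ins.

        # Minimal slit-scan sketch in the spirit of the paper: sample the pixels
        # under a fixed line ("tear") in every frame and stack them left-to-right
        # into a timeline image.
        import numpy as np

        def slit_scan(frames, x0, y0, x1, y1, samples=100):
            """frames: iterable of (H, W, 3) arrays; returns (samples, T, 3) composite."""
            xs = np.linspace(x0, x1, samples).round().astype(int)
            ys = np.linspace(y0, y1, samples).round().astype(int)
            columns = [frame[ys, xs] for frame in frames]   # one (samples, 3) strip per frame
            return np.stack(columns, axis=1)                # time runs along axis 1

        rng = np.random.default_rng(1)
        frames = [rng.integers(0, 256, (240, 320, 3), dtype=np.uint8) for _ in range(60)]
        timeline = slit_scan(frames, x0=10, y0=120, x1=300, y1=120)
        print(timeline.shape)   # (100, 60, 3): patterns reveal change along the tear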

    Creative Video Editing and Visualization (Edição e visualização criativa de vídeo)

    Master's thesis, Engenharia Informática (Arquitectura, Sistemas e Redes de Computadores), Universidade de Lisboa, Faculdade de Ciências, 2009. This work contributes to the fields of creative video editing and visualization by developing new ways to visualize videos. Videos combine images, text, and audio that change over time, making for information that is at once very rich and very complex. This complexity offers a challenge to explore, and visualization is one way to simplify access to the information contained within videos. With this information we can create video spaces that serve as a platform for creative expression and as support for video editing tasks, through features such as video search and organization. With this purpose in mind, an interactive environment, ColorsInMotion, was developed to visualize and explore video spaces with a focus on color and movement, two important visual properties of video at both the individual and the collective level. The system has two modules: the Video Analyzer and the Viewer. The Video Analyzer applies video processing and analysis techniques and creates visualizations in different color spaces, allowing different perspectives on the results. The Viewer focuses on interactive visualization and creativity, letting the user browse and explore video spaces, both collectively and individually, in a creative way. The Viewer also supports search by color, serving as a system for organizing videos and as a platform for exploring connections between videos, in this case in a cultural context, with videos of dance and music from various countries. We also explored interaction methods for the system, such as color-detection and gesture-based interaction, which are well suited to interactive installation environments.
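
    The thesis does not spell out its color-search method, but a minimal version of search-by-color can be sketched as follows: summarise each clip by its mean color and rank clips by distance to a query color. All names and the similarity measure here are assumptions, not ColorsInMotion's actual algorithm.

        # Hedged sketch of a color index that a ColorsInMotion-style search might
        # rest on: one mean RGB per clip, ranked by distance to a query color.
        import numpy as np

        def mean_color(frames):
            """frames: (T, H, W, 3) uint8 array -> average RGB over the whole clip."""
            return frames.reshape(-1, 3).mean(axis=0)

        def search_by_color(index, query_rgb):
            """index: {name: mean_rgb}; returns clip names sorted by color distance."""
            q = np.asarray(query_rgb, dtype=float)
            return sorted(index, key=lambda name: np.linalg.norm(index[name] - q))

        rng = np.random.default_rng(2)
        index = {f"video_{i}": mean_color(rng.integers(0, 256, (30, 24, 32, 3), dtype=np.uint8))
                 for i in range(5)}
        print(search_by_color(index, query_rgb=(255, 0, 0)))   # most-red clips first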

    Designing for Cross-Device Interactions

    Driven by technological advancements, we now own and operate an ever-growing number of digital devices, leading to an increased amount of digital data we produce, use, and maintain. However, while there is a substantial increase in computing power and in the availability of devices and data, many of the tasks we conduct with our devices are not well connected across multiple devices. We conduct our tasks sequentially instead of in parallel, while collaborative work across multiple devices is cumbersome to set up or simply not possible. To address these limitations, this thesis is concerned with cross-device computing. In particular, it aims to conceptualise, prototype, and study interactions in cross-device computing. This thesis contributes to the field of Human-Computer Interaction (HCI), and more specifically to the area of cross-device computing, in three ways. First, it conceptualises previous work through a taxonomy of cross-device computing, resulting in an in-depth understanding of the field that identifies underexplored research areas and enables the transfer of key insights into the design of interaction techniques. Second, three case studies show how cross-device interactions can support curation work as well as augment users' existing devices for individual and collaborative work; these case studies incorporate novel interaction techniques for supporting cross-device work. Third, through studying cross-device interactions and group collaboration, this thesis provides insights into how researchers can understand and evaluate multi- and cross-device interactions for individual and collaborative work. We provide a visualization and querying tool that facilitates interaction analysis of spatial measures and video recordings to support such evaluations of cross-device work. Overall, the work in this thesis advances the field of cross-device computing with a taxonomy that guides research directions, novel interaction techniques and case studies demonstrating cross-device interactions for curation, and insights into, and tools for, the effective evaluation of cross-device systems.