
    Exploring Design Options for Interactive Video with the Mnemovie Hypervideo System

    Mnemovie is an investigative hypervideo system for exploring design options for interactivity with digital motion picture files (video). The custom-designed software toolset is used to build a series of experimental interactive models, from which three were subsequently developed for initial user experience testing and evaluation. We compared interaction with each of the models across three groups of video file users, from expert to non-expert. Understanding participants' preferences for each model helps define the different dimensions of the actual user experience. We discuss how these findings and the subsequent development of persona scenarios can inform the design of hypervideo systems, and the implications this has for interaction design.

    Visualizing the Motion Flow of Crowds

    In modern cities, dense populations give rise to problems such as congestion, accidents, violence, and crime. Video surveillance systems such as closed-circuit television cameras are widely used by security guards to monitor human behaviors and activities in order to manage, direct, or protect people. Given the quantity and prolonged duration of the recorded videos, examining these recordings and keeping track of activities and events requires a huge amount of human effort. In recent years, new techniques in the computer vision field have lowered the barrier to entry, allowing developers to experiment more with intelligent surveillance video systems. Unlike previous research, this dissertation does not address algorithm design concerns related to object detection or object tracking. Instead, it focuses on the technological side, applying data visualization methodologies to arrive at a model for detecting anomalies. It aims to provide an understanding of how to characterize pedestrian behavior in video and identify anomalies or abnormal cases using data visualization techniques.
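
    The abstract does not name concrete techniques, so the sketch below is an illustration only, not the dissertation's method: it uses OpenCV's Farneback dense optical flow to summarize per-frame crowd motion and flags frames whose overall motion magnitude deviates strongly from a running baseline. The video path, the 30-frame warm-up, and the 3-sigma rule are placeholder assumptions.

```python
# Minimal sketch (not the dissertation's pipeline): estimate per-frame crowd
# motion with dense optical flow and flag frames whose overall motion deviates
# strongly from the recent baseline. "surveillance.mp4" is a placeholder path.
import cv2
import numpy as np

cap = cv2.VideoCapture("surveillance.mp4")   # hypothetical input video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

history = []           # mean flow magnitude per frame
anomalous_frames = []  # frame indices flagged as unusual
frame_idx = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Dense optical flow between consecutive frames (Farneback method).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    mean_mag = float(mag.mean())

    # Simple anomaly rule: flag frames whose motion is far above the running
    # mean (a stand-in for a proper anomaly-detection model).
    if len(history) > 30:
        mu, sigma = np.mean(history), np.std(history) + 1e-6
        if mean_mag > mu + 3 * sigma:
            anomalous_frames.append(frame_idx)

    history.append(mean_mag)
    prev_gray = gray
    frame_idx += 1

print("Frames with unusual crowd motion:", anomalous_frames)
```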

    MUVTIME: A Multivariate Time Series Visualizer for Behavioral Science

    As behavioral science becomes progressively more data driven, the need is increasing for appropriate tools for visual exploration and analysis of large datasets, often formed by multivariate time series. This paper describes MUVTIME, a multimodal time series visualization tool developed in Matlab that allows a user to load a time series collection (a multivariate time series dataset) and an associated video. The user can plot several time series in MUVTIME and use one of them for brushing the displayed data, i.e., dynamically selecting a time range and having the display update accordingly. The tool also features a categorical visualization of two binary time series that works as a high-level descriptor of the coordination between two interacting partners. The paper reports the successful use of MUVTIME within project TURNTAKE, which was intended to contribute to the improvement of human-robot interaction systems by studying turn-taking dynamics (role interchange) in parent-child dyads during joint action. This research was supported by Marie Curie International Incoming Fellowship PIIF-GA-2011-301155, Portuguese Foundation for Science and Technology (FCT) Strategic program UID/EEA/00066/2013, and FCT project PTDC/PSI-PCO/121494/2010. AFP was also partially funded by FCT project IF/00217/2013.
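
    MUVTIME itself is a Matlab tool; purely as a rough illustration of the brushing interaction described above (my assumption, not MUVTIME code), the Python/matplotlib sketch below plots several synthetic series and propagates a brushed time range from one axis to all the others using matplotlib's SpanSelector widget.

```python
# Minimal sketch of linked brushing across stacked time series plots.
# The data are synthetic; only the interaction idea mirrors the paper.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import SpanSelector

t = np.linspace(0, 60, 600)                      # 60 s of synthetic data
series = [np.sin(0.5 * t), np.cos(0.3 * t), np.sin(t) * np.exp(-t / 40)]

fig, axes = plt.subplots(len(series), 1, sharex=True)
for ax, y in zip(axes, series):
    ax.plot(t, y)

highlights = [None] * len(axes)                  # one highlighted span per axis

def on_brush(tmin, tmax):
    # Redraw the highlighted time range on every subplot.
    for i, ax in enumerate(axes):
        if highlights[i] is not None:
            highlights[i].remove()
        highlights[i] = ax.axvspan(tmin, tmax, alpha=0.3, color="orange")
    fig.canvas.draw_idle()

# Brushing is done on the top series; the selection propagates to all axes.
selector = SpanSelector(axes[0], on_brush, "horizontal", useblit=True)
plt.show()
```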

    Crowdsourcing in Computer Vision

    Computer vision systems require large amounts of manually annotated data to properly learn challenging visual concepts. Crowdsourcing platforms offer an inexpensive method to capture human knowledge and understanding for a vast number of visual perception tasks. In this survey, we describe the types of annotations computer vision researchers have collected using crowdsourcing, and how they have ensured that this data is of high quality while minimizing annotation effort. We begin by discussing data collection on both classic (e.g., object recognition) and recent (e.g., visual story-telling) vision tasks. We then summarize key design decisions for creating effective data collection interfaces and workflows, and present strategies for intelligently selecting the most important data instances to annotate. Finally, we conclude with some thoughts on the future of crowdsourcing in computer vision.
    Comment: A 69-page meta review of the field, Foundations and Trends in Computer Graphics and Vision, 201
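
    As a toy illustration of the quality-control theme the survey covers (not code from the survey itself), the sketch below aggregates redundant crowd labels by majority vote and scores each worker by agreement with the consensus; the item IDs, worker IDs, and labels are invented.

```python
# Majority-vote aggregation of redundant crowd annotations, plus a simple
# per-worker agreement score as a crude quality signal. Data are made up.
from collections import Counter, defaultdict

# labels[item_id] -> list of (worker_id, label) pairs collected from the crowd
labels = {
    "img_001": [("w1", "cat"), ("w2", "cat"), ("w3", "dog")],
    "img_002": [("w1", "dog"), ("w2", "dog"), ("w3", "dog")],
    "img_003": [("w1", "cat"), ("w2", "dog"), ("w3", "dog")],
}

consensus = {}
agreement = defaultdict(lambda: [0, 0])   # worker -> [agreements, total]

for item, votes in labels.items():
    counts = Counter(label for _, label in votes)
    winner, _ = counts.most_common(1)[0]
    consensus[item] = winner
    for worker, label in votes:
        agreement[worker][0] += int(label == winner)
        agreement[worker][1] += 1

print("Consensus labels:", consensus)
for worker, (agree, total) in agreement.items():
    print(f"{worker}: agrees with the majority on {agree}/{total} items")
```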

    Semiotic Shortcuts. The Graphical Abstract Strategies of Engineering Students

    Graphical abstracts are representative of the rising promotionalism, interdisciplinarity, and changing researcher roles in the current dissemination of science and technology. Their design, moreover, amalgamates a number of transdisciplinary skills much valued in higher education, such as critical and lateral thinking, and cultural and audience awareness. In this study, I investigate a corpus of 56 graphical abstracts devised by my aeronautical engineering students to find out the ‘semiotic shortcuts’, or encoding strategies, they deploy, without any previous instruction, to pack information and translate the verbal into the visual. Findings suggest that their ‘natural digital-native graphicacy’ is conservative as to medium, format, and type of representation, but versatile regarding particular meanings, although not always unambiguous or register-appropriate. Consequently, I argue for including graphicacy/visual literacy and some basic training in graphical abstract design in the English for Specific Purposes and disciplinary English-medium curriculum.

    Bring it to the Pitch: Combining Video and Movement Data to Enhance Team Sport Analysis

    Analysts in professional team sport regularly perform analysis to gain strategic and tactical insights into player and team behavior. Goals of team sport analysis regularly include identifying weaknesses of opposing teams or assessing the performance and improvement potential of a coached team. Current analysis workflows are typically based on the analysis of team videos. Additionally, analysts can rely on techniques from Information Visualization to depict, e.g., player or ball trajectories. However, video analysis is typically a time-consuming process in which the analyst needs to memorize and annotate scenes. In contrast, visualization typically relies on an abstract data model, often using abstract visual mappings, and is no longer directly linked to the observed movement context. We propose a visual analytics system that tightly integrates team sport video recordings with abstract visualization of the underlying trajectory data. We apply appropriate computer vision techniques to extract trajectory data from video input. Furthermore, we apply advanced trajectory and movement analysis techniques to derive relevant team sport analytic measures for region, event, and player analysis in the case of soccer. Our system seamlessly integrates the video and visualization modalities, enabling analysts to draw on the advantages of both analysis forms. Several expert studies conducted with team sport analysts indicate the effectiveness of our integrated approach.
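
    As an illustration of the kind of trajectory-derived measures mentioned above (not the authors' implementation), the sketch below computes distance covered, average speed, and a coarse pitch-occupancy grid from a synthetic player trajectory; the sampling rate, pitch dimensions, grid resolution, and random trajectory are all assumptions.

```python
# Minimal sketch: simple movement measures from trajectory data of the kind a
# system might extract from video. Positions are assumed to be in metres,
# sampled at a fixed frame rate.
import numpy as np

FPS = 25                                   # assumed sampling rate
PITCH = (105.0, 68.0)                      # assumed pitch size in metres

# trajectory: array of shape (n_frames, 2) with (x, y) positions of one player
trajectory = np.cumsum(np.random.randn(500, 2) * 0.1, axis=0) + [52.5, 34.0]
trajectory = np.clip(trajectory, [0, 0], PITCH)    # keep positions on the pitch

steps = np.diff(trajectory, axis=0)                # per-frame displacement
step_len = np.linalg.norm(steps, axis=1)
distance_m = step_len.sum()                        # total distance covered
avg_speed = step_len.mean() * FPS                  # metres per second

# Region analysis: how often the player occupies each cell of a 6x4 grid.
grid, _, _ = np.histogram2d(trajectory[:, 0], trajectory[:, 1],
                            bins=(6, 4), range=[(0, PITCH[0]), (0, PITCH[1])])
occupancy = grid / grid.sum()

print(f"Distance covered: {distance_m:.1f} m, average speed: {avg_speed:.2f} m/s")
print("Pitch occupancy (fraction of frames per zone):")
print(np.round(occupancy.T, 3))                    # transpose: rows = y-zones
```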

    Analyzing Qualitative Data with MAXQDA

    “To begin at the beginning” is the opening line of the play Under Milk Wood by the Welsh poet Dylan Thomas. So we, too, want to start at the beginning, with some information about the history of the analysis software MAXQDA. This story is quite long; it begins in 1989 with a first version of the software, then just called “MAX,” for the operating system DOS, and a book in the German language. The book’s title was Text Analysis Software for the Social Sciences. Introduction to MAX and Textbase Alpha, written by Udo Kuckartz and published by Gustav Fischer in 1992. Since then, there have been many changes and innovations: technological, conceptual, and methodological. MAXQDA has its roots in social science methodology; the original name MAX was a reference to the sociologist Max Weber, whose methodology combined quantitative and qualitative methods, explanation, and understanding in a way that was unique at the time, the beginning of the twentieth century. Since the first versions, MAX (later named winMAX and MAXQDA) has always been very innovative analysis software. In 1994, it was one of the first programs with a graphical user interface; since 2001, it has used Rich Text Format with embedded graphics and objects. Later, MAXQDA was the first QDA program (QDA stands for qualitative data analysis) with a special version for Mac computers that included all analytical functions. Since autumn 2015, MAXQDA has been available in almost identical versions for Windows and Mac, so that users can switch between operating systems without having to familiarize themselves with a new interface or changed functionality. This compatibility and feature equality between the Mac and Windows versions is unique and greatly facilitates team collaboration. MAXQDA has also come up with numerous innovations in the intervening years: a logically and very intuitively designed user interface, very versatile options for memos and comments, numerous visualization options, the summary grid as a middle level of analysis between primary data and categories, and much more, for instance, transcription, geolinks, weight scores for coding, analysis of PDF files, and Twitter analysis. Last but not least, the mixed methods features are worth mentioning, in which MAXQDA has long played a pioneering role. This list already shows that today MAXQDA is much more than text analysis software: the first chapter of this book contains a representation of the data types that MAXQDA can analyze today (in version 2018) and shows which file formats can be processed. The large variety of data types is contrasted by an even greater number of …