337 research outputs found

    Getting the message across : ten principles for web animation

    The growing use of animation in Web pages testifies to the increasing ease with which such multimedia components can be created. This trend indicates a commitment to animation that is often unmatched by the skill of the implementers. The present paper details a set of ten commandments for web animation, aiming to sensitise budding animators to key aspects that may impair the communicative effectiveness of their animations. These guidelines are drawn from an extensive literature survey, coloured by personal experience of using Web animation packages. Our ten principles are further elucidated by a Web-based online tutorial.

    Compréhension de contenus visuels par analyse conjointe du contenu et des usages

    In this thesis, we address the understanding of visual contents, whether images, videos, or 3D contents. By understanding we mean the ability to infer semantic information about the visual content. The goal of this work is to study methods that combine two approaches: 1) automatic content analysis and 2) analysis of the interactions involved in using that content (usage analysis, for short). We begin by reviewing the state of the art from the Computer Vision and Multimedia communities. Twenty years ago, the dominant approach aimed at fully automatic image understanding. Today it leaves more room for various forms of human intervention, which may take the form of building an annotated training set, solving problems interactively (e.g. detection or segmentation), or collecting implicit information from content usage. Rich and complex links exist between the human supervision of automatic algorithms and the adaptation of human contributions through automatic algorithms. These links give rise to modern research questions: how to motivate human contributors? How to design interactive scenarios in which the interactions help to understand the manipulated content? How to check the quality of the collected traces? How to aggregate usage data? How to fuse usage data with the more traditional outputs of automatic analysis? Our literature review addresses these questions and positions the contributions of this thesis, which fall into two main parts. The first part of our work revisits the detection of important or salient regions through implicit feedback from users who view or capture visual contents.
In 2D, several interactive video interfaces (in particular zoomable video) are designed to coordinate content-based and usage-based analyses. We generalise these results to 3D by introducing a new salient-region detector derived from the simultaneous capture of videos of the same public artistic performance (dance or singing shows, etc.) by many users. The second contribution of our work targets the semantic understanding of still images. We exploit data collected through a game, Ask’nSeek, that we created. Elementary interactions (such as clicks) and the textual data entered by players are, as before, combined with automatic image analysis. In particular, we show the value of interactions that reveal the spatial relations between different objects detectable in the same scene. After detecting the objects of interest in a scene, we also address the more ambitious problem of segmentation.

    Combining content analysis with usage analysis to better understand visual contents

    This thesis focuses on the problem of understanding visual contents, which can be images, videos or 3D contents. Understanding means that we aim at inferring semantic information about the visual content. The goal of our work is to study methods that combine two types of approaches: 1) automatic content analysis and 2) an analysis of how humans interact with the content (in other words, usage analysis). We start by reviewing the state of the art from both the Computer Vision and Multimedia communities. Twenty years ago, the main approach aimed at a fully automatic understanding of images. Today, this approach gives way to different forms of human intervention, whether through the constitution of annotated datasets, by solving problems interactively (e.g. detection or segmentation), or by the implicit collection of information gathered from content usage. These different types of human intervention are at the heart of modern research questions: how to motivate human contributors? How to design interactive scenarios that generate interactions contributing to content understanding? How to check or ensure the quality of human contributions? How to aggregate human contributions? How to fuse inputs obtained from usage analysis with traditional outputs from content analysis? Our literature review addresses these questions and allows us to position the contributions of this thesis. In our first set of contributions, we revisit the detection of important (or salient) regions through implicit feedback from users who either consume or produce visual contents. In 2D, we develop several interactive video interfaces (e.g. zoomable video) in order to coordinate content analysis and usage analysis. We also generalize these results to 3D by introducing a new detector of salient regions that builds upon simultaneous video recordings of the same public artistic performance (dance or singing shows, etc.) by multiple users.
The second contribution of our work aims at a semantic understanding of still images. With this goal in mind, we use data gathered through a game, Ask’nSeek, that we created. Elementary interactions (such as clicks), together with textual input from players, are, as before, combined with automatic analysis of images. In particular, we show the usefulness of interactions that help reveal spatial relations between different objects in a scene. After studying the problem of detecting objects in a scene, we also address the more ambitious problem of segmentation.
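The role of the elementary interactions described above can be illustrated with a small sketch. The following is a hypothetical illustration, not the thesis's actual pipeline: player clicks are binned into a coarse grid, producing a saliency histogram that could later be fused with the output of automatic content analysis. The function name, grid resolution, and normalisation are assumptions.

```python
# Hypothetical sketch: accumulate player clicks into a coarse saliency
# grid. Cell counts approximate which image regions players consider
# important; grid size and normalisation are illustrative choices.

def click_heatmap(clicks, width, height, grid=4):
    """Bin (x, y) clicks into a grid x grid saliency histogram."""
    cells = [[0] * grid for _ in range(grid)]
    for x, y in clicks:
        col = min(int(x * grid / width), grid - 1)
        row = min(int(y * grid / height), grid - 1)
        cells[row][col] += 1
    total = len(clicks) or 1
    # Normalise so the cells sum to 1 and can be fused with saliency
    # maps produced by automatic content analysis.
    return [[c / total for c in row] for row in cells]

clicks = [(10, 12), (14, 9), (90, 88), (12, 11)]
heat = click_heatmap(clicks, width=100, height=100)
print(heat[0][0])  # top-left cell received 3 of 4 clicks -> 0.75
```

In a real deployment, such usage-derived histograms would be aggregated over many players and images before being combined with automatic detectors.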

    Navigating in Complex Process Model Collections

    The increasing adoption of process-aware information systems (PAIS) has led to the emergence of large process model collections. In the automotive and healthcare domains, for example, such collections may comprise hundreds or thousands of process models, each consisting of numerous process elements (e.g., process tasks or data objects). In existing modeling environments, process models are presented to users in a rather static manner, i.e., as image maps that do not allow for any context-specific user interaction. As process participants have different needs and thus require specific presentations of the available process information, such static approaches are usually not sufficient to assist them in their daily work. For example, a business manager only requires an abstract overview of a process model collection, whereas a knowledge worker (e.g., a requirements engineer) needs detailed information on specific process tasks. In general, a more flexible navigation and visualization approach is needed, one that allows process participants to interact flexibly with process model collections and to navigate from a standard (i.e., default) visualization of a process model collection to a context-specific one. With the Process Navigation and Visualization (ProNaVis) framework, this thesis provides such a flexible navigation approach for large and complex process model collections. Specifically, ProNaVis enables flexible navigation within process model collections along three dimensions. First, the geographic dimension allows zooming in and out of the process models. Second, the semantic dimension may be used to increase or decrease the level of detail. Third, the view dimension allows switching between different visualizations. All three navigation dimensions have been addressed in isolation by existing navigation approaches, but only ProNaVis provides integrated support for all three.
The concepts developed in this thesis were validated using various methods. First, they were implemented in the process navigation tool Compass, which has been used by several departments of an automotive OEM (Original Equipment Manufacturer). Second, the ProNaVis concepts were evaluated in two experiments investigating both navigation and visualization aspects. Third, the developed concepts were successfully applied to process-oriented information logistics (POIL). Experimental as well as empirical results provide evidence that ProNaVis enables much more flexible navigation in process model repositories than existing approaches.
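The three navigation dimensions described above can be sketched as a simple state model. This is a hypothetical illustration of the idea, not ProNaVis's actual design; the view names, zoom behaviour, and detail levels are assumptions.

```python
# Hypothetical sketch of a three-dimensional navigation state in the
# spirit of the framework described above (names and ranges are
# illustrative assumptions, not ProNaVis's API).

class NavigationState:
    VIEWS = ("overview", "detail", "logistics")  # assumed view names

    def __init__(self):
        self.zoom = 1.0        # geographic dimension: spatial zoom factor
        self.detail_level = 1  # semantic dimension: 1 = abstract overview
        self.view = "overview" # view dimension: current visualization

    def zoom_in(self, factor=2.0):
        """Geographic navigation: magnify the process model."""
        self.zoom *= factor

    def more_detail(self, max_level=5):
        """Semantic navigation: reveal finer-grained process elements."""
        self.detail_level = min(self.detail_level + 1, max_level)

    def switch_view(self, view):
        """View navigation: change the visualization style."""
        if view not in self.VIEWS:
            raise ValueError(f"unknown view: {view}")
        self.view = view

state = NavigationState()
state.zoom_in()
state.more_detail()
state.switch_view("detail")
print(state.zoom, state.detail_level, state.view)  # 2.0 2 detail
```

Keeping the three dimensions independent, as in this sketch, is what lets a user combine, say, a zoomed-out geographic position with a high semantic level of detail.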

    Multi-View Ontology Explorer (MOE): Interactive Visual Exploration of Ontologies

    An ontology is an explicit specification of a conceptualization. This specification consists of a common vocabulary and the information structure of a domain. Ontologies are applied in many fields to link information semantically in a standardized manner. In these fields, it is often crucial for both expert and non-expert users to quickly grasp the contents of an ontology; to achieve this, many ontology tools implement visualization components. There is much past work on ontology visualization, and most of these tools adapt tree- and graph-based visualization techniques (e.g. treemaps, node-link graphs, and 3D interfaces). However, due to the enormous size of ontologies, these existing tools have shortcomings when dealing with information overload, usually resulting in clutter and occlusion on the screen. In this thesis, we propose a set of novel visualizations and interactions for visualizing very large ontologies. We design five dynamically linked visualizations, each focusing on a different level of abstraction, ranging from a high-level overview down to individual low-level entities. In addition, these visualizations collectively convey landmark, route, and survey knowledge to support the formation of mental models. Search and save features are implemented to support on-demand and guided exploration. Finally, we implement our design as a web application.

    Grammar-Based Interactive Genome Visualization

    Visualization is an indispensable method in the exploration of genomic data. However, the current state of the art in genome browsers – a class of interactive visualization tools – limits exploration by coupling the visual representations to specific file formats. Because these tools do not support exploring the visualization design space, they are difficult to adapt to atypical data. Moreover, although the tools provide interactivity, the implementations are often rudimentary, encumbering the exploration of the data. This thesis introduces GenomeSpy, an interactive genome visualization tool that improves upon the current state of the art by providing better support for exploration. The tool uses a visualization grammar that allows novel visualization designs to be implemented, displaying the underlying data more effectively. Moreover, the tool implements GPU-accelerated interactions that better support navigation in the genomic space. For instance, smoothly animated transitions between loci or sample sets improve the perception of causality and help users stay in the flow of exploration. The expressivity of the visualization grammar and the benefit of fluid interactions are validated with two case studies, which demonstrate the visualization of high-grade serous ovarian cancer data at different analysis phases. First, GenomeSpy is used to create a tool for scrutinizing raw copy-number variation data along with segmentation results. Second, the segmentations, together with point mutations, are used in a GenomeSpy-based multi-sample visualization that allows multiple data dimensions and samples to be explored and compared at the same time. Although the focus has been on cancer research, the tool could be applied to other domains as well.
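The idea of a visualization grammar can be sketched with a minimal declarative spec and a validator. This is a hypothetical illustration in the spirit of such grammars; the field names and schema are assumptions, not GenomeSpy's actual specification language.

```python
# Hypothetical sketch: a declarative spec maps data fields to visual
# channels, decoupling the visual design from any specific file format.
# Field names and the set of marks are illustrative assumptions.

spec = {
    "data": {"url": "segments.tsv"},  # assumed data source
    "mark": "rect",
    "encoding": {
        "x": {"field": "start", "type": "locus"},
        "x2": {"field": "end", "type": "locus"},
        "color": {"field": "logR", "type": "quantitative"},
    },
}

def validate(spec):
    """Check that a spec names a known mark and encodes a channel."""
    if spec.get("mark") not in {"rect", "point", "rule", "text"}:
        raise ValueError("unsupported mark")
    encoding = spec.get("encoding", {})
    if not encoding:
        raise ValueError("empty encoding")
    # Return the channels a renderer would have to resolve.
    return sorted(encoding)

print(validate(spec))  # ['color', 'x', 'x2']
```

Because the spec is plain data, swapping the data source or recolouring by a different field is a one-line change, which is precisely what makes grammar-based tools adaptable to atypical data.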

    Supporting exploratory browsing with visualization of social interaction history

    This thesis is concerned with the design, development, and evaluation of information visualization tools for supporting exploratory browsing. Information retrieval (IR) systems currently do not support browsing well. Responding to user queries, IR systems typically compute relevance scores of documents and then present the document surrogates to users in order of relevance. Other systems such as email clients and discussion forums simply arrange messages in reverse chronological order. Using these systems, people cannot gain an overview of a collection easily, nor do they receive adequate support for finding potentially useful items in the collection. This thesis explores the feasibility of using social interaction history to improve exploratory browsing. Social interaction history refers to traces of interaction among users in an information space, such as discussions that happen in the blogosphere or online newspapers through the commenting facility. The basic hypothesis of this work is that social interaction history can serve as a good indicator of the potential value of information items. Therefore, visualization of social interaction history would offer navigational cues for finding potentially valuable information items in a collection. To test this basic hypothesis, I conducted three studies. First, I ran statistical analysis of a social media data set. The results showed that there were positive relationships between traces of social interaction and the degree of interestingness of web articles. Second, I conducted a feasibility study to collect initial feedback about the potential of social interaction history to support information exploration. Comments from the participants were in line with the research hypothesis. Finally, I conducted a summative evaluation to measure how well visualization of social interaction history can improve exploratory browsing. 
The results showed that visualization of social interaction history helped users find interesting articles, reduced wasted effort, and increased user satisfaction with the visualization tool.
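The underlying hypothesis, that traces of social interaction indicate the potential value of information items, suggests a simple re-ranking sketch. The following is a hypothetical illustration and not the thesis's method: a classic relevance score is blended with a normalised comment-count signal, with the blending weight and log normalisation as assumptions.

```python
# Hypothetical sketch: re-rank retrieved documents by blending a classic
# relevance score with a social-interaction signal (comment counts).
# The weight and log1p normalisation are illustrative assumptions.
import math

def social_rank(docs, weight=0.3):
    """docs: list of (doc_id, relevance, comment_count) tuples."""
    # Dampen heavy-tailed counts, then scale to [0, 1].
    max_log = max(math.log1p(c) for _, _, c in docs) or 1.0
    scored = [
        (doc_id, (1 - weight) * rel + weight * math.log1p(c) / max_log)
        for doc_id, rel, c in docs
    ]
    return [doc_id for doc_id, _ in sorted(scored, key=lambda t: -t[1])]

docs = [("a", 0.9, 0), ("b", 0.8, 120), ("c", 0.5, 300)]
print(social_rank(docs))  # -> ['b', 'c', 'a']
```

Note how the heavily discussed but less relevant document "c" overtakes "a": the social signal acts as the navigational cue the thesis hypothesises, rather than replacing relevance outright.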