
    Extending an XML environment definition language for spoken dialogue and web-based interfaces

    This is an electronic version of the paper presented at the workshop "Developing User Interfaces with XML: Advances on User Interface Description Languages" during the International Working Conference on Advanced Visual Interfaces (AVI), held in Gallipoli (Italy) in 2004.

    In this work we describe how we employ XML-compliant languages to define an intelligent environment. The language represents the environment, its entities, and their relationships. The XML environment definition is transformed into a middleware layer that provides interaction with the environment. Additionally, the XML definition language has been extended to support two different user interfaces: a spoken dialogue interface is created by means of specific linguistic information, and GUI interaction information is converted into a web-based interface.

    This work has been sponsored by the Spanish Ministry of Science and Technology, project number TIC2000-046
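
    To illustrate the kind of transformation the abstract describes, here is a minimal sketch in Python. The XML schema below (element and attribute names included) is purely hypothetical, since the paper's actual definition language is not reproduced here; the sketch only shows how an environment definition might be turned into a per-entity interaction table standing in for the middleware layer.

```python
import xml.etree.ElementTree as ET

# Hypothetical environment definition in the spirit of the paper;
# element and attribute names are illustrative, not the real schema.
ENVIRONMENT_XML = """
<environment>
  <entity id="lamp1" type="light">
    <interaction gui="toggle" speech="turn the lamp on or off"/>
  </entity>
  <entity id="door1" type="door">
    <interaction gui="open-close" speech="open or close the door"/>
  </entity>
</environment>
"""

def build_bindings(xml_text):
    """Turn the XML definition into a simple per-entity interaction
    table, standing in for the generated middleware layer."""
    root = ET.fromstring(xml_text)
    bindings = {}
    for entity in root.findall("entity"):
        interaction = entity.find("interaction")
        bindings[entity.get("id")] = {
            "type": entity.get("type"),
            "gui": interaction.get("gui"),
            "speech": interaction.get("speech"),
        }
    return bindings

print(build_bindings(ENVIRONMENT_XML)["lamp1"]["gui"])  # toggle
```

    Each entity's `gui` attribute could then drive a web widget and its `speech` attribute a dialogue prompt, mirroring the two interface extensions described above.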

    VERTO: a visual notation for declarative process models

    Declarative approaches to business process modeling make it possible to represent loosely-structured (declarative) processes in flexible scenarios as a set of constraints on the allowed flow of activities. However, current graphical notations for declarative processes are difficult to interpret. This has hindered widespread adoption of such notations and increased the dependency on experts to understand their semantics. In this paper, we tackle this issue by introducing a novel visual declarative notation targeted at a more understandable modeling of declarative processes.
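
    To make the constraint-based style concrete, here is a minimal Python sketch of one common declarative (Declare-style) constraint, "response(a, b)": every occurrence of activity a must eventually be followed by b. The event names and helper functions are illustrative only and are not part of the VERTO notation.

```python
def satisfies_response(trace, a, b):
    """Declare-style 'response(a, b)': every occurrence of a in the
    trace is eventually followed by an occurrence of b."""
    pending = False
    for event in trace:
        if event == a:
            pending = True
        elif event == b:
            pending = False
    return not pending

def allowed(trace, constraints):
    """A trace is allowed iff it satisfies every constraint."""
    return all(check(trace) for check in constraints)

# Illustrative process: an order must eventually be shipped.
constraints = [lambda t: satisfies_response(t, "order", "ship")]
print(allowed(["order", "pay", "ship"], constraints))  # True
print(allowed(["order", "pay"], constraints))          # False
```

    A declarative model is just such a set of constraints: any flow of activities not forbidden by a constraint is permitted, which is what gives these models their flexibility.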

    Integrating body scanning solutions into virtual dressing rooms

    The world is entering its 4th Industrial Revolution, a new era of manufacturing characterized by ubiquitous digitization and computing. One industry set to benefit and grow from this revolution is the fashion industry, in which Europe (and Italy in particular) has long maintained a global lead. To evolve with the changes in technology, we developed the IT-SHIRT project. In the context of this project, a key challenge lies in developing a virtual dressing room in which the final users (customers) can virtually try different clothes on their bodies. In this paper, we tackle this issue by providing a critical analysis of existing body scanning solutions, identifying their strengths and weaknesses with a view to their integration within the pipeline of virtual dressing rooms.

    Target Acquisition in Multiscale Electronic Worlds

    Since the advent of graphical user interfaces, electronic information has grown exponentially, whereas the size of screen displays has stayed almost the same. Multiscale interfaces were designed to address this mismatch, allowing users to adjust the scale at which they interact with information objects. Although the technology has progressed quickly, the theory has lagged behind. Multiscale interfaces pose a stimulating theoretical challenge, reformulating the classic target-acquisition problem from the physical world into an infinitely rescalable electronic world. We address this challenge by extending Fitts’ original pointing paradigm: we introduce the scale variable, thus defining a multiscale pointing paradigm. This article reports on our theoretical and empirical results. We show that target-acquisition performance in a zooming interface must obey Fitts’ law, and more specifically, that target-acquisition time must be proportional to the index of difficulty. Moreover, we complement Fitts’ law by accounting for the effect of view size on pointing performance, showing that performance bandwidth is proportional to view size, up to a ceiling effect. The first empirical study shows that Fitts’ law does apply to a zoomable interface for indices of difficulty up to and beyond 30 bits, whereas classical Fitts’ law studies have been confined to the 2-10 bit range. The second study demonstrates a strong interaction between view size and task difficulty for multiscale pointing, and shows a surprisingly low ceiling. We conclude with implications of these findings for the design of multiscale user interfaces.
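
    The paradigm extended here rests on Fitts' law, which predicts acquisition time from the index of difficulty. The Python sketch below uses the standard Shannon formulation; the constants a and b are illustrative device-dependent values, not figures from this study. It also shows why zoomable interfaces escape the classical 2-10 bit range: the distance-to-width ratio can be made arbitrarily large.

```python
import math

def fitts_id(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits:
    ID = log2(D / W + 1)."""
    return math.log2(distance / width + 1)

def movement_time(distance, width, a=0.2, b=0.1):
    """Predicted acquisition time MT = a + b * ID, where a and b are
    illustrative device-dependent constants (seconds, seconds/bit)."""
    return a + b * fitts_id(distance, width)

# A classical pointing task sits in the 2-10 bit range...
print(round(fitts_id(8, 1), 2))       # 3.17 bits
# ...while in a zoomable world D/W is unbounded, so ID can
# reach 30 bits and beyond, as in the first study above.
print(round(fitts_id(2 ** 30, 1), 1))
```

    Under this model, acquisition time grows linearly with ID regardless of scale, which is the property the zooming experiments test.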

    On the Optimization of Visualizations of Complex Phenomena

    The problem of perceptually optimizing complex visualizations is a difficult one, involving perceptual as well as aesthetic issues. In our experience, controlled experiments are quite limited in their ability to uncover interrelationships among visualization parameters, and thus may not be the most useful way to develop rules of thumb or theory to guide the production of high-quality visualizations. In this paper, we propose a new experimental approach to optimizing visualization quality that integrates some of the strong points of controlled experiments with methods better suited to investigating complex, highly coupled phenomena. We use human-in-the-loop experiments to search through visualization parameter space, generating large databases of rated visualization solutions. This is followed by data mining to extract results such as exemplar visualizations, guidelines for producing visualizations, and hypotheses about strategies leading to strong visualizations. The approach can easily address both perceptual and aesthetic concerns, and can handle complex parameter interactions. We suggest a genetic algorithm as a valuable way of guiding the human-in-the-loop search through visualization parameter space. We describe our methods for using clustering, histogramming, principal component analysis, and neural networks for data mining. The experimental approach is illustrated with a study of the problem of optimal texturing for viewing layered surfaces so that both surfaces are maximally observable.
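
    A minimal sketch of the genetic-search idea, assuming a generic elitist loop: in the paper's setting the `rate` function would be a human rating of each candidate visualization, while here it is any scoring function supplied by the caller. The toy "opacity near 0.5" objective is purely illustrative and not from the study.

```python
import random

def evolve(rate, init, mutate, generations=30, pop_size=8, elite=2):
    """Generic elitist genetic search: keep the `elite` best-rated
    candidates each generation and refill the population with
    mutated copies of them."""
    population = [init() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=rate, reverse=True)
        survivors = population[:elite]
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(pop_size - elite)
        ]
    return max(population, key=rate)

# Toy stand-in for a human judge: prefer a texture-opacity
# parameter near 0.5 (illustrative only).
random.seed(0)
best = evolve(
    rate=lambda p: -abs(p - 0.5),
    init=random.random,
    mutate=lambda p: min(1.0, max(0.0, p + random.gauss(0, 0.05))),
)
```

    Replacing the lambda with interactive human ratings yields the human-in-the-loop search described above, and the rated candidates accumulated along the way form the database later mined with clustering, PCA, and neural networks.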

    TeMoCo-Doc: A visualization for supporting temporal and contextual analysis of dialogues and associated documents

    Funding Information: This paper is supported by the European Union’s Horizon 2020 research and innovation programme under grant agreement No 825153, project EMBEDDIA (Cross-Lingual Embeddings for Less-Represented Languages in European News Media). The results of this paper reflect only the authors’ view, and the Commission is not responsible for any use that may be made of the information it contains. Pierre Albert has been funded through the INCA project. We thank the INCA project members in Ireland for granting us access to the trainee data. Publisher Copyright: © 2020 Owner/Author.

    A common task in a number of application areas is to create textual documents based on recorded audio data. Visualizations designed to support such tasks require linking temporal audio data with contextual data contained in the resulting documents. In this paper, we present a tool for the visualization of temporal and contextual links between recorded dialogues and their summary documents. Peer reviewed

    Corpus Summarization and Exploration using Multi-Mosaics


    Map-based Interfaces and Interactions
