    Enabling collaboration in virtual reality navigators

    In this paper we characterize a feature superset for Collaborative Virtual Reality Environments (CVRE) and derive a component framework to transform stand-alone VR navigators into full-fledged multithreaded collaborative environments. The contributions of our approach rely on a cost-effective and extensible technique for loading software components into separate POSIX threads for rendering, user interaction and network communications, and on adding a top layer for managing session collaboration. The framework recasts a VR navigator under a distributed peer-to-peer topology for scene and object sharing, using callback hooks for broadcasting remote events and multi-camera perspective sharing with avatar interaction. We validate the framework by applying it to our own ALICE VR Navigator. Experimental results show that our approach performs well in the collaborative inspection of complex models.
    Postprint (published version)
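
    As a rough illustration of the component-per-thread idea, the toy sketch below spawns networking and rendering components on separate threads (POSIX-backed on Linux via Python's threading module) with a session layer on top. All class and method names are invented for illustration; this is not the ALICE VR Navigator's code.

```python
import queue
import threading
import time

class Component(threading.Thread):
    """One subsystem (rendering, interaction, network) per thread."""
    def __init__(self, name, bus):
        super().__init__(name=name, daemon=True)
        self.bus = bus                      # shared event bus
        self.stop_flag = threading.Event()

    def run(self):
        while not self.stop_flag.is_set():
            self.step()
            time.sleep(0.1)                 # stand-in for per-frame work

class Network(Component):
    def step(self):
        # A real navigator would poll peer-to-peer sockets here.
        self.bus.put(("remote_event", "peer avatar moved"))

class Renderer(Component):
    def step(self):
        # Drain pending events and redraw the shared scene.
        try:
            kind, payload = self.bus.get_nowait()
            print(f"render: applying {kind}: {payload}")
        except queue.Empty:
            pass

class Session:
    """Top layer: owns the components and the collaboration hooks."""
    def __init__(self):
        self.bus = queue.Queue()
        self.components = [Network("net", self.bus), Renderer("gfx", self.bus)]

    def start(self):
        for c in self.components:
            c.start()

    def stop(self):
        for c in self.components:
            c.stop_flag.set()
            c.join()

session = Session()
session.start()
time.sleep(0.5)                             # let a few events flow
session.stop()
```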

    Overcoming Language Dichotomies: Toward Effective Program Comprehension for Mobile App Development

    Mobile devices and platforms have become an established target for modern software developers due to performant hardware and a large and growing user base numbering in the billions. Despite their popularity, the software development process for mobile apps comes with a set of unique, domain-specific challenges rooted in program comprehension. Many of these challenges stem from developer difficulties in reasoning about different representations of a program, a phenomenon we define as a "language dichotomy". In this paper, we reflect upon the various language dichotomies that contribute to open problems in program comprehension and development for mobile apps. Furthermore, to help guide the research community towards effective solutions to these problems, we provide a roadmap of directions for future work.
    Comment: Invited keynote paper for the 26th IEEE/ACM International Conference on Program Comprehension (ICPC'18)

    Aesthetic Perspectives in Group Decision and Negotiation Practice

    This paper explores the role of aesthetics in Group Decision and Negotiation (GDN) practice, specifically how it affects the methods and the cognitive processes in the architectural field. We understand aesthetics as "scientia cognitionis sensitivae", a particular process and way of knowing and experiencing the problem through senses, imagination and empathy. We argue that (a) aesthetics and aesthetic features can (and do) convey knowledge about the problem; (b) we can distinguish between two kinds of aesthetics, one of the process and one of the product; and (c) aesthetics can contribute to creating a "plural subject". The issue is investigated through a decision problem about the transformation of an iconic building in the centre of Turin (Italy), in two ways: (1) by merging the Strategic Choice Approach (SCA) with architectural design and (2) by approaching the same issue with storytelling, as a method for problem-based instruction. Considering aesthetics as a specific form of language, the paper offers innovative considerations about the role of representation and visualisation tools and models—drawings, schemes, diagrams, but also video and text—as support for group decisions and negotiations, in the construction of knowledge within decisional processes.

    Embedding Explorer

    The present disclosure describes an embedding explorer that allows a user to interactively explore the properties of an embedding space and how that space relates to features of the entities being embedded. The embedding space is a low-dimensional space onto which the embedding explorer projects high-dimensional vectors. In particular, the embedding explorer allows the user to load a table of embeddings along with any feature providers the user requires during exploration. The table includes at least one column with an entity ID and another column with an array of floats representing the embedding for that entity. The user may further add a new embedding to the embedding space by providing Hive tables or CSV files as input to a preprocessing workflow of the embedding explorer. The preprocessing workflow uses t-distributed stochastic neighbor embedding (t-SNE) to project the tens of millions of points representing the entities down to the low-dimensional space. After the preprocessing workflow completes, the user must manually register the embedding by adding an element to an "EmbeddingExplorationSources" function of the embedding explorer; the function's "allDataRootFilepath" field accepts the output folder returned by the preprocessing workflow.
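
    A minimal sketch of the preprocessing step, using scikit-learn's t-SNE rather than the disclosure's internal workflow: it reads a CSV in the two-column format described above and writes a 2-D projection. The file name "embeddings.csv" and the column names are assumptions, and exact t-SNE as shown is practical only for small samples, not tens of millions of points.

```python
import json

import numpy as np
import pandas as pd
from sklearn.manifold import TSNE

# Hypothetical input: one column with an entity ID, one column with a
# JSON-encoded array of floats holding the embedding for that entity.
df = pd.read_csv("embeddings.csv")
X = np.array(df["embedding"].apply(json.loads).tolist())

# Project the high-dimensional vectors down to 2-D for exploration.
low_dim = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X)

# Save the projection next to the entity IDs, ready for loading.
out = pd.DataFrame(low_dim, columns=["x", "y"])
out["entity_id"] = df["entity_id"]
out.to_csv("embedding_projection.csv", index=False)
```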

    The dancer in the eye: Towards a multi-layered computational framework of qualities in movement

    This paper presents a conceptual framework for the analysis of expressive qualities of movement. Our perspective is to model an observer of a dance performance. The conceptual framework consists of four layers, ranging from the physical signals that sensors capture to the qualities that movement communicates (e.g., in terms of emotions). The framework aims to provide a conceptual background upon which the development of computational systems can build, with particular reference to systems that analyze a vocabulary of expressive movement qualities and translate them to other sensory channels, such as the auditory modality. Such systems enable their users to "listen to a choreography" or to "feel a ballet", in a new kind of cross-modal mediated experience.
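
    As a toy sketch of the four-layer idea (not the paper's framework), the code below reduces simulated sensor frames to a low-level descriptor, maps the descriptor to a movement-quality label, and renders the quality on another sensory channel as a sound parameter. The layer functions, the jerkiness feature, and the threshold are all illustrative assumptions.

```python
import numpy as np

def layer1_capture():
    """Layer 1 - physical signal: simulated 3-axis accelerometer frames."""
    return np.random.randn(100, 3)

def layer2_feature(frames):
    """Layer 2 - low-level descriptor: overall jerkiness of the movement."""
    return float(np.mean(np.abs(np.diff(frames, axis=0))))

def layer3_quality(jerkiness, threshold=1.0):
    """Layer 3 - mid-level quality label derived from the descriptor."""
    return "fluid" if jerkiness < threshold else "impulsive"

def layer4_sonify(quality):
    """Layer 4 - cross-modal mapping: quality -> auditory parameter (Hz)."""
    return {"fluid": 220.0, "impulsive": 880.0}[quality]

frames = layer1_capture()
quality = layer3_quality(layer2_feature(frames))
print(quality, "->", layer4_sonify(quality), "Hz")
```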

    Hierarchically Structured Reinforcement Learning for Topically Coherent Visual Story Generation

    We propose a hierarchically structured reinforcement learning approach to address the challenges of planning for generating coherent multi-sentence stories for the visual storytelling task. Within our framework, the task of generating a story given a sequence of images is divided across a two-level hierarchical decoder. The high-level decoder constructs a plan by generating a semantic concept (i.e., a topic) for each image in the sequence. The low-level decoder generates a sentence for each image using a semantic compositional network, which effectively grounds sentence generation in the given topic. The two decoders are jointly trained end-to-end using reinforcement learning. We evaluate our model on the visual storytelling (VIST) dataset. Empirical results from both automatic and human evaluations demonstrate that the proposed hierarchically structured reinforced training achieves significantly better performance compared to a strong flat deep reinforcement learning baseline.
    Comment: Accepted to AAAI 2019
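
    A minimal sketch of the two-level decoding structure may help. All dimensions, module choices, and the greedy decoding loop below are placeholder assumptions; the paper's semantic compositional network is replaced by plain topic conditioning, and the reinforcement-learning training loop is not reproduced.

```python
import torch
import torch.nn as nn

IMG_DIM, TOPIC_DIM, EMB_DIM, HID, VOCAB, MAX_LEN = 512, 64, 32, 128, 1000, 10

class HighLevelDecoder(nn.Module):
    """Plans the story: one topic vector per image in the sequence."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(IMG_DIM, TOPIC_DIM, batch_first=True)

    def forward(self, image_feats):           # (B, T, IMG_DIM)
        topics, _ = self.rnn(image_feats)     # (B, T, TOPIC_DIM)
        return topics

class LowLevelDecoder(nn.Module):
    """Generates one sentence per image, conditioned on its topic."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB_DIM)
        self.rnn = nn.GRU(EMB_DIM + TOPIC_DIM, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, topic, max_len=MAX_LEN):    # topic: (B, TOPIC_DIM)
        token = torch.zeros(topic.size(0), dtype=torch.long)  # <bos> = 0
        hidden, words = None, []
        for _ in range(max_len):
            # Condition each decoding step on the planned topic.
            x = torch.cat([self.embed(token), topic], dim=-1).unsqueeze(1)
            h, hidden = self.rnn(x, hidden)
            token = self.out(h.squeeze(1)).argmax(-1)  # greedy decoding
            words.append(token)
        return torch.stack(words, dim=1)              # (B, max_len)

planner, writer = HighLevelDecoder(), LowLevelDecoder()
images = torch.randn(2, 5, IMG_DIM)                   # 2 stories, 5 images
topics = planner(images)
story = [writer(topics[:, t]) for t in range(topics.size(1))]
print([s.shape for s in story])                       # 5 sentences per story
```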