
    Interpretation at the controller's edge: designing graphical user interfaces for the digital publication of the excavations at Gabii (Italy)

    This paper discusses the authors’ approach to designing an interface for the Gabii Project’s digital volumes that attempts to fuse elements of traditional synthetic publications and site reports with rich digital datasets. Archaeology, and classical archaeology in particular, has long engaged with questions of the formation and lived experience of towns and cities. Such studies might draw on evidence of local topography, the arrangement of the built environment, and the placement of architectural details, monuments and inscriptions (e.g. Johnson and Millett 2012). Fundamental to the continued development of these studies is the growing body of evidence emerging from new excavations. Digital techniques for recording evidence “on the ground,” notably SFM (structure from motion, also known as close-range photogrammetry) for the creation of detailed 3D models, and techniques for scene-level modeling in 3D, have advanced rapidly in recent years. These parallel developments have opened the door for approaches to the study of the creation and experience of urban space driven by a combination of scene-level reconstruction models (van Roode et al. 2012, Paliou et al. 2011, Paliou 2013) explicitly combined with detailed SFM- or scanning-based 3D models representing stratigraphic evidence. It is essential to understand the subtle but crucial impact of user interface design on the interpretation of these models. In this paper we focus on the impact of design choices for the user interface, and make connections between those choices and the broader discourse in archaeological theory surrounding the creation and consumption of archaeological knowledge. As a case in point we take the prototype interface being developed within the Gabii Project for the publication of the Tincu House. In discussing our own evolving practices in engagement with the archaeological record created at Gabii, we highlight some of the challenges of undertaking theoretically situated user interface design, and their implications for the publication and study of archaeological materials.

    Semantic Flexibility and Grounded Language Learning

    We explore the way that the flexibility inherent in the lexicon might be incorporated into the process by which an environmentally grounded artificial agent, a robot, acquires language. We take flexibility to indicate not only many-to-many mappings between words and extensions, but also the way that word meaning is specified in the context of a particular situation in the world. Our hypothesis is that embodiment and embeddedness are necessary conditions for the development of semantic representations that exhibit this flexibility. We examine this hypothesis by first very briefly reviewing work to date in the domain of grounded language learning, and then proposing two research objectives: 1) the incorporation of high-dimensional semantic representations that permit context-specific projections, and 2) an exploration of ways in which non-humanoid robots might exhibit language-learning capacities. We suggest that the experimental programme implied by this theoretical investigation could be situated broadly within the enactivist paradigm, which approaches cognition from the perspective of agents emerging in the course of dynamic entanglements within an environment.
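The first objective, context-specific projection of a high-dimensional semantic representation, can be illustrated with a toy sketch. Everything here (the 4-dimensional space, the `contextual_meaning` helper, the example vectors) is an illustrative assumption, not the paper's method:

```python
import numpy as np

def contextual_meaning(word_vec, context_basis):
    """Project a word's semantic vector onto the subspace spanned by context vectors."""
    # Orthonormalise the context basis (columns) via QR decomposition.
    q, _ = np.linalg.qr(context_basis)
    # Apply the projection matrix Q Q^T onto the context subspace.
    return q @ (q.T @ word_vec)

# Toy 4-d semantic space: the same word vector yields a different
# situated meaning depending on which subspace the context spans.
cup = np.array([0.9, 0.4, 0.1, 0.3])
drinking_context = np.array([[1.0, 0.0],
                             [0.0, 1.0],
                             [0.0, 0.0],
                             [0.0, 0.0]])  # spans dims 0-1 only
meaning = contextual_meaning(cup, drinking_context)
# Only the context-relevant components of the word vector survive.
```

The design choice mirrors the abstract's point: word meaning is not a fixed lookup, but the result of projecting a rich representation through the current situation.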

    Tangible user interfaces : past, present and future directions

    In the last two decades, Tangible User Interfaces (TUIs) have emerged as a new interface type that interlinks the digital and physical worlds. Drawing upon users' knowledge and skills of interaction with the real non-digital world, TUIs show a potential to enhance the way in which people interact with and leverage digital information. However, TUI research is still in its infancy and extensive research is required in order to fully understand the implications of tangible user interfaces, to develop technologies that further bridge the digital and the physical, and to guide TUI design with empirical knowledge. This paper examines the existing body of work on Tangible User Interfaces. We start by sketching the history of tangible user interfaces, examining the intellectual origins of this field. We then present TUIs in a broader context, survey application domains, and review frameworks and taxonomies. We also discuss conceptual foundations of TUIs including perspectives from cognitive science, psychology, and philosophy. Methods and technologies for designing, building, and evaluating TUIs are also addressed. Finally, we discuss the strengths and limitations of TUIs and chart directions for future research.

    Integrating the strengths of cognitive emotion models with traditional HCI analysis tools

    This paper reports an attempt to integrate key concepts from cognitive models of emotion into cognitive models of interaction established in the HCI literature. The aim is to transfer the strengths of interaction models to the analysis of affect-critical systems in games, e-commerce and education, thereby increasing their usefulness in these systems where affect is increasingly recognised as a key success factor. Concepts from Scherer’s appraisal model and stimulus evaluation checks, along with a framework of emotion contexts proposed by Coulson (An everything but framework for modelling emotion. In proceedings of the AAAI spring symposium on architectures for emotion, 2004), are integrated into the cycle of display-based action proposed by Norman (The design of everyday things. Basic Books, New York, 1988). Norman’s action cycle has commonly been applied as an interaction analysis tool in the field of HCI. In the wake of the recent shift of emphasis to user experience, the cognition-based action cycle is deemed inadequate to explicate affective experiences, such as happiness, joy and surprise. Models based on appraisal theories, focusing on cognitive accounts of emotion, are more relevant to understanding the causes and effects of feelings arising from interacting with digital artefacts. The paper explores the compatibility between these two genres of model, and future development of integrated analysis tools.

    Proceedings of Abstracts Engineering and Computer Science Research Conference 2019

    © 2019 The Author(s). This is an open-access work distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. For further details please see https://creativecommons.org/licenses/by/4.0/. Note: the keynote, Fluorescence visualisation to evaluate effectiveness of personal protective equipment for infection control, is © 2019 Crown copyright and so is licensed under the Open Government Licence v3.0. Under this licence users are permitted to copy, publish, distribute and transmit the Information; adapt the Information; and exploit the Information commercially and non-commercially, for example by combining it with other Information or by including it in their own product or application. Where you do any of the above you must acknowledge the source of the Information in your product or application by including or linking to any attribution statement specified by the Information Provider(s) and, where possible, provide a link to this licence: http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/. This book is the record of abstracts submitted and accepted for presentation at the Inaugural Engineering and Computer Science Research Conference held 17th April 2019 at the University of Hertfordshire, Hatfield, UK. This conference is a local event aiming to bring together research students, staff and eminent external guests to celebrate Engineering and Computer Science research at the University of Hertfordshire. The ECS Research Conference aims to showcase the broad landscape of research taking place in the School of Engineering and Computer Science. The 2019 conference was articulated around three topical cross-disciplinary themes: Make and Preserve the Future; Connect the People and Cities; and Protect and Care.

    Distributed Technology-Sustained Pervasive Applications

    Technology-sustained pervasive games, contrary to technology-supported pervasive games, can be understood as computer games interfacing with the physical world. Pervasive games are known to make use of 'non-standard input devices' and, with the rise of the Internet of Things (IoT), pervasive applications can be expected to move beyond games. This dissertation is requirements- and development-focused Design Science research for distributed technology-sustained pervasive applications, incorporating knowledge from the domains of Distributed Computing, Mixed Reality, Context-Aware Computing, Geographical Information Systems and IoT. Computer video games have existed for decades, with reusable game engines to drive them. If pervasive games can be understood as computer games interfacing with the physical world, can computer game engines be used to stage pervasive games? Considering the use of non-standard input devices in pervasive games and the rise of IoT, how will this affect the architectures supporting the broader set of pervasive applications? The use of a game engine can be found in some existing pervasive game projects, but general research into how the domain of pervasive games overlaps with that of video games is lacking. When an engine is used, a discussion of what type of engine is most suitable and what properties are fulfilled by the engine is often not part of the discourse. This dissertation uses multiple iterations of the method framework for Design Science for the design and development of three software system architectures. In the face of IoT, the problem of extending pervasive games into a fourth software architecture, accommodating a broader set of pervasive applications, is explicated. The requirements for technology-sustained pervasive games are verified through the design, development and demonstration of the three software system architectures. The ...
    Comment: 64 pages, 13 figures
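The core pattern, a game engine consuming a 'non-standard input device' such as an IoT sensor, can be sketched as a minimal event-queue adapter. All class and event names here are illustrative assumptions, not the dissertation's actual architecture:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class InputEvent:
    kind: str       # e.g. "player_moved"
    payload: dict

@dataclass
class EventQueue:
    """A tiny engine-style event queue with subscribe/dispatch."""
    _events: list = field(default_factory=list)
    _handlers: dict = field(default_factory=dict)

    def subscribe(self, kind: str, handler: Callable) -> None:
        self._handlers.setdefault(kind, []).append(handler)

    def push(self, event: InputEvent) -> None:
        self._events.append(event)

    def dispatch(self) -> None:
        # Drain the queue, calling every handler registered for each kind.
        while self._events:
            ev = self._events.pop(0)
            for handler in self._handlers.get(ev.kind, []):
                handler(ev)

# An adapter turns a physical-world reading (here, GPS) into an engine event,
# so the game logic never sees the non-standard device directly.
def gps_adapter(queue: EventQueue, lat: float, lon: float) -> None:
    queue.push(InputEvent("player_moved", {"lat": lat, "lon": lon}))

queue = EventQueue()
positions = []
queue.subscribe("player_moved", lambda ev: positions.append(ev.payload))
gps_adapter(queue, 59.33, 18.07)   # a reading from a GPS "controller"
queue.dispatch()
```

Decoupling the sensor behind an adapter is what lets a conventional engine loop stay unchanged while the set of physical input devices grows.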

    An Overview of Self-Adaptive Technologies Within Virtual Reality Training

    Get PDF
    This overview presents the current state of the art of self-adaptive technologies within virtual reality (VR) training. Virtual reality training and assessment is increasingly used in five key areas: medical training, industrial and commercial training, serious games, rehabilitation, and remote training such as Massive Open Online Courses (MOOCs). Adaptation can be applied to five core technologies of VR: haptic devices, stereo graphics, adaptive content, assessment, and autonomous agents. Automation of VR training can contribute to the automation of actual procedures, including remote and robot-assisted surgery, which reduces injury and improves the accuracy of the procedure. Automated haptic interaction can enable tele-presence and tactile interaction with virtual artefacts in either remote or simulated environments. Automation, machine learning and data-driven features play an important role in providing trainee-specific, individually adaptive training content. Data from trainee assessment can form an input to autonomous systems for customised training and automated difficulty levels matched to individual requirements. Self-adaptive technology has previously been developed within individual technologies of VR training. One conclusion of this research is that no enhanced portable framework yet exists; such a framework is needed, and it would be beneficial to combine automation of the core technologies into a reusable automation framework for VR training.
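The assessment-driven adaptation loop described above can be sketched as a simple feedback rule: recent assessment scores move the difficulty up or down to keep the trainee in a target performance band. The thresholds, the 1-10 scale, and the `adapt_difficulty` helper are illustrative assumptions, not taken from the overview:

```python
def adapt_difficulty(difficulty, recent_scores,
                     raise_above=0.8, lower_below=0.5):
    """Return a new difficulty level (1-10) from recent assessment scores in [0, 1]."""
    if not recent_scores:
        return difficulty              # no data: leave difficulty unchanged
    mean = sum(recent_scores) / len(recent_scores)
    if mean > raise_above:             # trainee is under-challenged
        return min(10, difficulty + 1)
    if mean < lower_below:             # trainee is struggling
        return max(1, difficulty - 1)
    return difficulty                  # performance in the target band

level = 5
level = adapt_difficulty(level, [0.9, 0.85, 0.95])  # strong scores: raise difficulty
level = adapt_difficulty(level, [0.3, 0.4])         # weak scores: lower it again
```

A real system would likely replace the fixed thresholds with a learned, trainee-specific policy, but the interface is the same: assessment data in, difficulty adjustment out.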