26,371 research outputs found

    Designing a training tool for imaging mental models

    The training process can be conceptualized as the student acquiring an evolutionary sequence of classification-problem solving mental models. For example, a physician learns (1) classification systems for patient symptoms, diagnostic procedures, diseases, and therapeutic interventions, and (2) interrelationships among these classifications (e.g., how to use diagnostic procedures to collect data about a patient's symptoms in order to identify the disease so that therapeutic measures can be taken). This project developed functional specifications for a computer-based tool, Mental Link, that allows the evaluative imaging of such mental models. The fundamental design approach underlying this representational medium is traversal of virtual cognition space. Typically intangible cognitive entities and the links among them become visible as a three-dimensional web that represents a knowledge structure. The tool has a high degree of flexibility and customizability to allow extension to other types of uses, such as a front-end to an intelligent tutoring system, knowledge base, hypermedia system, or semantic network.
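
    As a rough illustration of the kind of knowledge structure described above, the sketch below represents cognitive entities and the links among them as a typed graph. The classes, labels, and link types are invented for illustration and are not part of the actual Mental Link specification.

    # Hypothetical sketch of a knowledge structure of the kind Mental Link
    # visualizes: typed nodes (cognitive entities) connected by typed links.
    # All names and link types here are illustrative, not from the tool.

    class Node:
        def __init__(self, label, category):
            self.label = label          # e.g. "fever"
            self.category = category    # e.g. "symptom"

    class KnowledgeStructure:
        def __init__(self):
            self.nodes = {}
            self.links = []             # (source, relation, target) triples

        def add_node(self, label, category):
            self.nodes[label] = Node(label, category)

        def link(self, source, relation, target):
            self.links.append((source, relation, target))

    # The physician example from the abstract, expressed as a tiny graph:
    ks = KnowledgeStructure()
    ks.add_node("fever", "symptom")
    ks.add_node("blood culture", "diagnostic procedure")
    ks.add_node("sepsis", "disease")
    ks.add_node("antibiotics", "therapeutic intervention")
    ks.link("blood culture", "collects data about", "fever")
    ks.link("fever", "is evidence for", "sepsis")
    ks.link("sepsis", "is treated by", "antibiotics")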

    MCPLOTS: a particle physics resource based on volunteer computing

    The mcplots.cern.ch web site (MCPLOTS) provides a simple online repository of plots made with high-energy-physics event generators, comparing them to a wide variety of experimental data. The repository is based on the HEPDATA online database of experimental results and on the RIVET Monte Carlo analysis tool. The repository is continually updated and relies on computing power donated by volunteers via the LHC@HOME platform.
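
    The repository's actual machinery is RIVET analyses run against HEPDATA records; purely as an illustration of the kind of generator-versus-data comparison such plots encode, a minimal sketch with invented bin contents might look like this:

    # Generic sketch of a generator-vs-data histogram comparison of the kind
    # MCPLOTS displays. Bin contents and errors are made up for illustration;
    # the real repository uses RIVET analyses against HEPDATA records.

    def chi2_per_bin(mc, data, data_err):
        """Chi-square contribution for each histogram bin."""
        return [((m - d) / e) ** 2 for m, d, e in zip(mc, data, data_err)]

    mc_hist   = [120.0, 340.0, 210.0, 80.0]   # event generator prediction
    data_hist = [115.0, 355.0, 200.0, 90.0]   # measured values
    data_err  = [11.0, 19.0, 14.0, 9.5]       # experimental uncertainties

    contributions = chi2_per_bin(mc_hist, data_hist, data_err)
    print("chi2/ndf =", sum(contributions) / len(contributions))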

    Interpretation at the controller's edge: designing graphical user interfaces for the digital publication of the excavations at Gabii (Italy)

    This paper discusses the authors' approach to designing an interface for the Gabii Project's digital volumes that attempts to fuse elements of traditional synthetic publications and site reports with rich digital datasets. Archaeology, and classical archaeology in particular, has long engaged with questions of the formation and lived experience of towns and cities. Such studies might draw on evidence of local topography, the arrangement of the built environment, and the placement of architectural details, monuments and inscriptions (e.g. Johnson and Millett 2012). Fundamental to the continued development of these studies is the growing body of evidence emerging from new excavations. Digital techniques for recording evidence "on the ground," notably SFM (structure from motion, also known as close-range photogrammetry) for the creation of detailed 3D models, and for scene-level modeling in 3D, have advanced rapidly in recent years. These parallel developments have opened the door for approaches to the study of the creation and experience of urban space driven by a combination of scene-level reconstruction models (van Roode et al. 2012, Paliou et al. 2011, Paliou 2013) explicitly combined with detailed SFM- or scanning-based 3D models representing stratigraphic evidence. It is essential to understand the subtle but crucial impact of the design of the user interface on the interpretation of these models. In this paper we focus on the impact of design choices for the user interface, and make connections between design choices and the broader discourse in archaeological theory surrounding the practice of the creation and consumption of archaeological knowledge. As a case in point we take the prototype interface being developed within the Gabii Project for the publication of the Tincu House. In discussing our own evolving practices in engagement with the archaeological record created at Gabii, we highlight some of the challenges of undertaking theoretically-situated user interface design, and their implications for the publication and study of archaeological materials.

    Verification-guided modelling of salience and cognitive load

    Well-designed interfaces use procedural and sensory cues to increase the cognitive salience of appropriate actions. However, empirical studies suggest that cognitive load can influence the strength of those cues. We formalise the relationship between salience and cognitive load revealed by empirical data. We add these rules to our abstract cognitive architecture, based on higher-order logic and developed for the formal verification of usability properties. The interface of a fire engine dispatch task from the empirical studies is then formally modelled and verified. The outcomes of this verification and their comparison with the empirical data provide a way of assessing our salience and load rules. They also guide further iterative refinements of these rules. Furthermore, the juxtaposition of the outcomes of formal analysis and empirical studies suggests new experimental hypotheses, thus providing input to researchers in cognitive science.
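
    As a loose illustration of the kind of rule being formalised (the actual work expresses such rules in higher-order logic for formal verification, not as executable code), one could caricature "load weakens sensory cues more than procedural ones" as follows, with all weights invented:

    # Illustrative sketch only: the paper formalises salience/load rules in
    # higher-order logic, not as executable code. Thresholds and damping
    # factors here are invented for demonstration.

    HIGH_LOAD_DAMPING = {"sensory": 0.4, "procedural": 0.9}

    def effective_salience(base_salience, cue_type, high_load):
        """Salience of a cue after accounting for cognitive load."""
        if not high_load:
            return base_salience
        return base_salience * HIGH_LOAD_DAMPING[cue_type]

    # Under high load, a strong sensory cue may fall below a weaker
    # procedural one, changing which action the model predicts.
    print(effective_salience(0.8, "sensory", high_load=True))     # ~0.32
    print(effective_salience(0.5, "procedural", high_load=True))  # 0.45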

    ATTIRE (analytical tools for thermal infrared engineering): A sensor simulation and modeling package

    The Advanced Sensor Development Laboratory (ASDL) at the Stennis Space Center develops, maintains, and calibrates remote sensing instruments for the National Aeronautics and Space Administration (NASA). To perform system design trade-offs and analysis, and to establish system parameters, ASDL has developed a software package for analytical simulation of sensor systems. This package, called Analytical Tools for Thermal InfraRed Engineering (ATTIRE), simulates the various components of a sensor system. The software allows each subsystem of the sensor to be analyzed independently for its performance. These performance parameters are then integrated to obtain system-level information such as Signal-to-Noise Ratio (SNR), Noise Equivalent Radiance (NER), and Noise Equivalent Temperature Difference (NETD). This paper describes the uses of the package and the physics that were used to derive the performance parameters.
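
    A much-simplified sketch of how such system-level figures of merit combine subsystem outputs is given below. The definitions (SNR; NER as the radiance giving SNR = 1; NETD as NER divided by the radiance-temperature derivative) are standard radiometry, but the numbers are illustrative and ATTIRE's actual models are more detailed.

    # Simplified sketch of system-level figures of merit in the spirit of
    # ATTIRE. Definitions are standard; all numbers are illustrative only.

    def snr(signal_radiance, responsivity, noise):
        """Signal-to-Noise Ratio for a given scene radiance."""
        return signal_radiance * responsivity / noise

    def ner(noise, responsivity):
        """Noise Equivalent Radiance: the radiance giving SNR = 1."""
        return noise / responsivity

    def netd(noise, responsivity, dL_dT):
        """Noise Equivalent Temperature Difference: NER divided by the
        derivative of scene radiance with respect to temperature."""
        return ner(noise, responsivity) / dL_dT

    # Illustrative subsystem values (noise in V, responsivity in
    # V per W/(cm^2 sr), dL/dT in W/(cm^2 sr K)) giving NETD = 0.1 K:
    print(netd(noise=1e-9, responsivity=1e-4, dL_dT=1e-4))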

    Teaching programming at a distance: the Internet software visualization laboratory

    This paper describes recent developments in our approach to teaching computer programming in the context of a part-time Masters course taught at a distance. Within our course, students are sent a pack which contains integrated text, software and video course material, using a uniform graphical representation to tell a consistent story of how the programming language works. The students communicate with their tutors over the phone and through surface mail. Through our empirical studies and experience teaching the course we have identified four current problems: (i) students' difficulty mapping between the graphical representations used in the course and the programs to which they relate, (ii) the lack of a conversational context for tutor help provided over the telephone, (iii) helping students who, due to their other commitments, tend to study at 'unsociable' hours, and (iv) providing software for the constantly changing and expanding range of platforms and operating systems used by students. We hope to alleviate these problems through our Internet Software Visualization Laboratory (ISVL), which supports individual exploration, and both synchronous and asynchronous communication. As a single user, students are aided by the extra mappings provided between the graphical representations used in the course and their computer programs, overcoming the problems of the original notation. ISVL can also be used as a synchronous communication medium whereby one of the users (generally the tutor) can provide an annotated demonstration of a program and its execution, a far richer alternative to technical discussions over the telephone. Finally, ISVL can be used to support asynchronous communication, helping students who work at unsociable hours by allowing the tutor to prepare short educational movies for them to view when convenient. The ISVL environment runs on a conventional web browser and is therefore platform independent, has modest hardware and bandwidth requirements, and is easy to distribute and maintain. Our planned experiments with ISVL will allow us to investigate ways in which new technology can be most appropriately applied in the service of distance education.
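
    The "educational movie" idea can be pictured as a timestamped event log that a tutor records and a student replays later. The sketch below is a hypothetical simplification and does not reflect ISVL's actual implementation.

    # Hypothetical sketch of an asynchronous "educational movie": a tutor's
    # annotated demonstration recorded as a timestamped event log that a
    # student can replay later. Event kinds and payloads are invented.

    import json
    import time

    class DemoRecorder:
        def __init__(self):
            self.events = []

        def record(self, kind, payload):
            """Log one demonstration event (annotation, execution step)."""
            self.events.append({"t": time.time(), "kind": kind,
                                "payload": payload})

        def save(self, path):
            with open(path, "w") as f:
                json.dump(self.events, f)

    rec = DemoRecorder()
    rec.record("annotation", "Watch how the recursion unwinds here.")
    rec.record("execution_step", {"frame": "fact(3)", "returns": 6})
    rec.save("demo_movie.json")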

    Target Acquisition in Multiscale Electronic Worlds

    Since the advent of graphical user interfaces, electronic information has grown exponentially, whereas the size of screen displays has stayed almost the same. Multiscale interfaces were designed to address this mismatch, allowing users to adjust the scale at which they interact with information objects. Although the technology has progressed quickly, the theory has lagged behind. Multiscale interfaces pose a stimulating theoretical challenge, reformulating the classic target-acquisition problem from the physical world into an infinitely rescalable electronic world. We address this challenge by extending Fitts' original pointing paradigm: we introduce the scale variable, thus defining a multiscale pointing paradigm. This article reports on our theoretical and empirical results. We show that target-acquisition performance in a zooming interface must obey Fitts' law, and more specifically, that target-acquisition time must be proportional to the index of difficulty. Moreover, we complement Fitts' law by accounting for the effect of view size on pointing performance, showing that performance bandwidth is proportional to view size, up to a ceiling effect. The first empirical study shows that Fitts' law does apply to a zoomable interface for indices of difficulty up to and beyond 30 bits, whereas classical Fitts' law studies have been confined to the 2-10 bit range. The second study demonstrates a strong interaction between view size and task difficulty for multiscale pointing, and shows a surprisingly low ceiling. We conclude with implications of these findings for the design of multiscale user interfaces.
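
    In symbols, Fitts' law predicts a movement time T = a + b * ID with index of difficulty ID = log2(D/W + 1) for target distance D and width W. The sketch below adds the view-size effect described above as a bandwidth 1/b that grows with view size up to a ceiling; the coefficients and ceiling value are invented for illustration.

    import math

    # Sketch of the multiscale pointing model described above. Fitts' law
    # itself (T = a + b * ID, ID = log2(D/W + 1)) is standard; the specific
    # coefficients and the ceiling value below are invented for illustration.

    def index_of_difficulty(distance, width):
        """Shannon formulation of Fitts' index of difficulty, in bits."""
        return math.log2(distance / width + 1)

    def movement_time(distance, width, view_size, a=0.3, k=0.05, ceiling=400):
        """Predicted acquisition time: bandwidth grows with view size,
        saturating at a ceiling (per the paper's qualitative finding)."""
        bandwidth = k * min(view_size, ceiling)   # bits per second
        return a + index_of_difficulty(distance, width) / bandwidth

    # A 30-bit task, far beyond the classical 2-10 bit range:
    print(movement_time(distance=(2**30 - 1) * 10, width=10, view_size=300))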

    Tagging time in Prolog: the temporality effect project

    This article combines a brief introduction to a particular philosophical theory of "time" with a demonstration of how this theory has been implemented in a Literary Studies oriented Humanities Computing project. The aim of the project was to create a model of text-based time cognition and design customized markup and text analysis tools that help to understand "how time works": more precisely, how narratively organised and communicated information motivates readers to generate the mental image of a chronologically organized world. The approach presented is based on the unitary model of time originally proposed by McTaggart, who distinguished between two perspectives onto time, the so-called A- and B-series. The first step towards a functional Humanities Computing implementation of this theoretical approach was the development of TempusMarker, a software tool providing automatic and semi-automatic markup routines for the tagging of temporal expressions in natural language texts. In the second step we discuss the principles underlying TempusParser, an analytical tool that can reconstruct the temporal order of events by way of an algorithm-driven process of analysis and recombination of textual segments during which the "time stamp" of each segment, as indicated by the temporal tags, is interpreted.
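
    The reconstruction step can be pictured as re-sorting narrative segments by the time stamps their temporal tags receive. The sketch below is a hypothetical simplification, not TempusParser's actual algorithm, and the segments and stamps are invented.

    # Hypothetical simplification of the TempusParser idea: narrative
    # segments carry time stamps derived from temporal tags; re-sorting
    # them recovers the chronological (B-series) order from the narrative
    # presentation order. Texts and stamps are invented for illustration.

    segments = [
        {"text": "He remembered the morning of the wedding.", "stamp": 1},
        {"text": "Years later, she found the letter.",        "stamp": 3},
        {"text": "The wedding itself passed in a blur.",      "stamp": 2},
    ]

    def reconstruct_chronology(tagged_segments):
        """Return segments in story-world order, not narrative order."""
        return sorted(tagged_segments, key=lambda s: s["stamp"])

    for seg in reconstruct_chronology(segments):
        print(seg["stamp"], seg["text"])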