
    GED - a generalised syntax editor : a thesis presented in partial fulfilment of the requirements for the degree of Master of Science in Computer Science at Massey University

    This thesis traces the development of a full-screen syntax-directed editor, a type of editor that operates on a program in terms of its syntactic tree structure rather than its sequential character representation. The editor is table-driven, reading as input an extended BNF (EBNF) syntax of the target language; it can therefore be used for any language whose syntax can be defined in EBNF. Print-formatting information can be included with the syntactic definition so that programs are pretty-printed when they are displayed. The user is presented with a pretty-printed skeletal outline of a program, with the currently selected construct highlighted and all required syntactic items provided by the editor. A construct with alternatives is initially denoted by a placeholder in the form of a non-terminal name, which is expanded when the user indicates which alternative is wanted. All symbols entered by the user are parsed immediately and any erroneous symbols are rejected, making it impossible to create a syntactically incorrect program. The editor cannot detect semantic errors, as no semantic information is available from the EBNF syntax; however, the first use of each identifier is flagged by the editor as an aid to the detection of undeclared identifiers. A "help" area at the bottom of the screen continuously displays a list of the valid next symbols and the syntactic definition of the currently selected program construct. This display, together with a multi-level "undo" command and the skeletal program provided by the editor, offers a way of exploring the various constructs of a programming language while ensuring the syntactic correctness of the resulting program.
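    As a rough illustration of the table-driven idea (not the thesis's actual implementation), the sketch below shows how an EBNF-style table of alternatives could drive placeholder expansion and immediate rejection of symbols that cannot legally appear next; the grammar, non-terminal names, and functions are invented for illustration.

```python
# Minimal sketch of a table-driven, syntax-directed expansion step.
# The grammar table and non-terminal names are hypothetical, not taken
# from the GED thesis itself.

GRAMMAR = {
    # non-terminal -> list of alternative productions (tuples of symbols);
    # symbols starting with '<' are non-terminals, the rest are terminals
    "<statement>": [
        ("<assignment>",),
        ("if", "<expression>", "then", "<statement>"),
        ("while", "<expression>", "do", "<statement>"),
    ],
    "<assignment>": [("identifier", ":=", "<expression>")],
    "<expression>": [("identifier",), ("number",)],
}

def first_symbols(symbol):
    """Terminals that may legally begin the given symbol."""
    if not symbol.startswith("<"):
        return {symbol}
    firsts = set()
    for production in GRAMMAR[symbol]:
        firsts |= first_symbols(production[0])
    return firsts

def expand(non_terminal, entered_symbol):
    """Pick the alternative whose first terminal matches the user's input,
    or reject the input if no alternative can start with it."""
    for production in GRAMMAR[non_terminal]:
        if entered_symbol in first_symbols(production[0]):
            return list(production)
    raise ValueError(f"'{entered_symbol}' cannot start {non_terminal}")

# The editor would show the placeholder "<statement>"; the help area would
# list first_symbols("<statement>"), and typing "while" expands it.
print(sorted(first_symbols("<statement>")))   # valid next symbols
print(expand("<statement>", "while"))         # chosen alternative
try:
    expand("<statement>", "until")
except ValueError as err:
    print(err)                                # erroneous symbol rejected
```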

    Microgravity cursor control device evaluation for Space Station Freedom workstations

    This research addressed the usability of direct manipulation interfaces (cursor-control devices) in microgravity. The data discussed were collected during KC-135 flights and included pointing and dragging movements over a variety of angles and distances. Detailed error and completion-time data provided researchers with information regarding cursor-control device shape, selection-button arrangement, sensitivity, selection modes, and considerations for future research.

    Micro-based fact collection tool user's manual

    A procedure is described that is designed to assist an analyst in collecting and organizing data gathered during the interview processes associated with system analysis and modeling tasks. The basic concept behind the development of this tool is that during the interview process the analyst is presented with assertions of facts by the domain expert; the analyst also makes observations of the domain. These facts need to be collected and preserved in such a way that they can serve as the basis for a number of decision-making processes throughout system development. The tool can be thought of as a computerization of the analyst's notebook.
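    As a rough sketch of what "computerizing the analyst's notebook" could look like, the record type below captures a fact together with its source (domain-expert assertion or analyst observation) so it can be retrieved later; the field names and structure are assumptions made for illustration, not the manual's actual design.

```python
# Hypothetical fact record for an analyst's notebook; field names are
# illustrative assumptions, not the tool's documented format.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Fact:
    text: str          # the assertion or observation itself
    source: str        # "expert-assertion" or "analyst-observation"
    topic: str         # interview topic or modeled subsystem
    recorded_at: datetime = field(default_factory=datetime.now)

class Notebook:
    def __init__(self):
        self.facts: list[Fact] = []

    def record(self, fact: Fact) -> None:
        self.facts.append(fact)

    def by_topic(self, topic: str) -> list[Fact]:
        """Retrieve all facts on a topic to support later modeling decisions."""
        return [f for f in self.facts if f.topic == topic]

notebook = Notebook()
notebook.record(Fact("Orders are archived after 90 days",
                     "expert-assertion", "order-processing"))
notebook.record(Fact("Clerks re-key order data from paper forms",
                     "analyst-observation", "order-processing"))
print(len(notebook.by_topic("order-processing")))  # -> 2
```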

    Interaction With Tilting Gestures In Ubiquitous Environments

    In this paper, we introduce a tilting interface that controls direction-based applications in ubiquitous environments. A tilt interface is useful for situations that require remote and quick interactions or that take place in public spaces. We explored the proposed tilting interface with different application types and classified the tilting interaction techniques. Augmenting objects with sensors can potentially address the lack of intuitive and natural input devices in ubiquitous environments. We conducted an experiment to test the usability of the proposed tilting interface and to compare it with conventional input devices and hand gestures. The results showed that tilt gestures outperformed hand gestures in terms of speed, accuracy, and user satisfaction.
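    As a loose illustration of how tilt readings might be turned into directional commands (not the paper's actual method), the sketch below maps accelerometer pitch and roll angles to discrete directions using a dead-zone threshold; the threshold value, axis conventions, and function names are assumptions.

```python
# Hypothetical mapping from device tilt to a discrete direction command.
# Threshold and axis conventions are illustrative assumptions.
import math

TILT_THRESHOLD_DEG = 15.0  # dead zone: smaller tilts are ignored

def tilt_angles(ax: float, ay: float, az: float) -> tuple[float, float]:
    """Estimate pitch and roll (degrees) from accelerometer readings in g."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

def tilt_to_command(ax: float, ay: float, az: float) -> str:
    pitch, roll = tilt_angles(ax, ay, az)
    # choose the dominant axis; ignore tilts inside the dead zone
    if max(abs(pitch), abs(roll)) < TILT_THRESHOLD_DEG:
        return "none"
    if abs(pitch) >= abs(roll):
        return "forward" if pitch > 0 else "backward"
    return "right" if roll > 0 else "left"

# Device lying flat (1 g on z): no command; tilted forward: "forward".
print(tilt_to_command(0.0, 0.0, 1.0))    # -> none
print(tilt_to_command(-0.5, 0.0, 0.87))  # -> forward
```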

    The fast contribution of visual-proprioceptive discrepancy to reach aftereffects and proprioceptive recalibration

    Adapting reaches to altered visual feedback leads not only to motor changes but also to shifts in perceived hand location, known as proprioceptive recalibration. These changes are robust to many task variations and can occur quite rapidly; for instance, our previous study found that both motor and sensory shifts arise within as few as 6 rotated-cursor training trials. The aim of this study is to investigate one of the training signals that contribute to these rapid sensory and motor changes. We do this by removing the visuomotor error signals associated with classic visuomotor rotation training and providing only experience with a visual-proprioceptive discrepancy during training. While a force channel constrains reach direction 30° away from the target, the cursor representing the hand unerringly moves straight to the target. The resulting visual-proprioceptive discrepancy drives significant and rapid changes in no-cursor reaches and felt hand position, again within only 6 training trials. The extent of the sensory change is unexpectedly larger following the visual-proprioceptive discrepancy training. Not surprisingly, the size of the reach aftereffects is substantially smaller than following classic visuomotor rotation training. However, the time course by which both changes emerge is similar in the two training types. These results suggest that mere exposure to a discrepancy between felt and seen hand location is a sufficient training signal to drive robust motor and sensory plasticity.

    Surface electromyographic control of a novel phonemic interface for speech synthesis

    Many individuals with minimal movement capabilities use augmentative and alternative communication (AAC) to communicate. These individuals require both an interface with which to construct a message (e.g., a grid of letters) and an input modality with which to select targets. This study evaluated the interaction of two such systems: (a) an input modality using surface electromyography (sEMG) of spared facial musculature, and (b) an onscreen interface from which users select phonemic targets. These systems were evaluated in two experiments: (a) participants without motor impairments used the systems during a series of eight training sessions, and (b) one individual who uses AAC used the systems for two sessions. Both the phonemic interface and the electromyographic cursor show promise for future AAC applications.
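    As a rough sketch of one way a surface-EMG signal could be turned into selection commands (not the study's actual control scheme), the example below rectifies and smooths a raw sEMG sample stream from a single facial-muscle channel and fires a selection event when the envelope crosses a threshold; the constants, class, and names are illustrative assumptions.

```python
# Hypothetical sEMG-to-selection sketch: rectify, smooth, and threshold one
# facial-muscle channel. Constants and names are illustrative assumptions.
from collections import deque

WINDOW = 50        # samples in the moving-average envelope
THRESHOLD = 0.3    # normalized activation level that triggers a selection

class EmgSelector:
    def __init__(self):
        self.buffer = deque(maxlen=WINDOW)
        self.active = False  # avoid repeated triggers while the muscle stays tense

    def update(self, sample: float) -> bool:
        """Feed one raw sEMG sample; return True when a selection is triggered."""
        self.buffer.append(abs(sample))                  # full-wave rectification
        envelope = sum(self.buffer) / len(self.buffer)   # moving-average smoothing
        if envelope >= THRESHOLD and not self.active:
            self.active = True
            return True                                  # rising edge -> select target
        if envelope < THRESHOLD:
            self.active = False
        return False

selector = EmgSelector()
stream = [0.02, -0.03, 0.05] * 20 + [0.6, -0.7, 0.65] * 20  # rest, then contraction
selections = sum(selector.update(s) for s in stream)
print(selections)  # -> 1: a single selection for one sustained contraction
```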

    Distributed-Pair Programming can work well and is not just Distributed Pair-Programming

    Background: Distributed pair programming can be performed via screen sharing or via a distributed IDE. The latter offers the freedom of concurrent editing (which may be helpful or damaging) and has even more awareness deficits than screen sharing. Objective: Characterize how competent distributed pair programmers may handle this additional freedom and these additional awareness deficits, and characterize the impacts on the pair programming process. Method: A revelatory case study, based on direct observation of a single, highly competent distributed pair of industrial software developers during a 3-day collaboration. We use recordings of these sessions and conceptualize the phenomena seen. Results: 1. Skilled pairs may bridge the awareness deficits without visible obstruction of the overall process. 2. Skilled pairs may use the additional editing freedom in a useful, limited fashion, resulting in potentially better fluency of the process than in local pair programming. Conclusion: When applied skillfully in an appropriate context, distributed-pair programming can (not will!) work at least as well as local pair programming.