    BGS Sigma 2012 open source user guide

    The British Geological Survey began developing digital field mapping systems in 1989, but the commercially available hardware was not suitable at that time. In 2001, we revisited the topic under the System for Integrated Geoscience Mapping (SIGMA) programme. By 2003, BGS had developed a PDA (personal digital assistant) field system, which was superseded in 2005 when we began deploying a beta system on rugged Tablet PCs. The Tablet PC system, which we called BGS•SIGMAmobile, was used by BGS in mapping projects across the UK as well as overseas. It first became available in open-source form in June 2009 via the BGS website, www.bgs.ac.uk, under an agreement stipulating that updates and modifications must be supplied to BGS in order to stimulate further development. In 2011/2012, BGS•SIGMAmobile was rewritten in .NET and combined with our office-based mapping software, BGS•SIGMAdesktop, within ArcGIS 10.x to create BGS•SIGMA 2012. It is envisaged that future releases will be made available from the BGS website, incorporating new modules, modifications and upgrades supplied by BGS and external users of the system. This document guides users through the installation and use of BGS•SIGMA 2012 (mobile and desktop), which is the third free release. We are happy to receive feedback and modifications emailed to [email protected]

    Gaze-shifting: direct-indirect input with pen and touch modulated by gaze

    Modalities such as pen and touch are associated with direct input but can also be used for indirect input. We propose to combine the two modes for direct-indirect input modulated by gaze. We introduce gaze-shifting as a novel mechanism for switching the input mode based on the alignment of manual input and the user's visual attention. Input in the user's area of attention results in direct manipulation whereas input offset from the user's gaze is redirected to the visual target. The technique is generic and can be used in the same manner with different input modalities. We show how gaze-shifting enables novel direct-indirect techniques with pen, touch, and combinations of pen and touch input
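The switching rule described above can be sketched in a few lines. The 50-pixel attention radius, the `Point` type, and the function names below are illustrative assumptions, not details from the paper:

```python
import math
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

# Hypothetical threshold, not a figure from the paper: manual input landing
# within this radius of the gaze point counts as "in the area of attention".
ATTENTION_RADIUS = 50.0  # pixels

def classify_input(touch: Point, gaze: Point) -> str:
    """Gaze-shifting: the same manual input is direct or indirect
    depending on its alignment with the user's visual attention."""
    distance = math.hypot(touch.x - gaze.x, touch.y - gaze.y)
    return "direct" if distance <= ATTENTION_RADIUS else "indirect"

def apply_input(touch: Point, gaze: Point) -> Point:
    """Direct input acts where the finger or pen is; input offset
    from the gaze is redirected to the gazed-at visual target."""
    if classify_input(touch, gaze) == "direct":
        return touch
    return gaze
```

Because the decision depends only on the spatial offset between manual input and gaze, the same rule applies unchanged to pen, touch, or combined input, which is the genericity the abstract claims.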

    Multi-Modal Interfaces for Sensemaking of Graph-Connected Datasets

    The visualization of hypothesized evolutionary processes is often shown through phylogenetic trees. Given evolutionary data in one of several widely accepted formats, software exists to render these data into a tree diagram. However, software packages commonly used by biologists today often do not provide means to dynamically adjust and customize these diagrams for studying new hypothetical relationships, or for illustration and publication purposes; even where these options are available, they can lack intuitiveness and ease of use. The goal of our research is, thus, to investigate more natural and effective means of sensemaking of the data with different user input modalities. To this end, we experimented with different input modalities, designing and running a series of prototype studies, ultimately focusing our attention on pen-and-touch. Through several iterations of feedback and revision with the help of biology experts and students, we developed a pen-and-touch phylogenetic tree browsing and editing application called PhyloPen. This application expands on the capabilities of existing software with visualization techniques such as overview+detail and linked data views, and with new interaction and manipulation techniques using pen-and-touch. To determine its impact on phylogenetic tree sensemaking, we conducted a within-subject comparative summative study against the most comparable and commonly used state-of-the-art mouse-based software system, Mesquite. In the study, conducted with biology majors at the University of Central Florida, each participant used both software systems on a fixed set of exercise tasks of the same type. Across several dependent measures, the results show that PhyloPen was significantly better in terms of usefulness, satisfaction, ease of learning, ease of use, and cognitive load, and comparable in completion time. These results support an interaction paradigm superior to classic mouse-based interaction, one that could potentially be applied to other communities that employ graph-based representations of their problem domains
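One of the "widely accepted formats" for evolutionary data is Newick; the abstract does not say which formats PhyloPen reads, so the minimal parser below (names, commas, and parentheses only, no branch lengths) is purely illustrative of how such a format maps to a tree structure:

```python
def parse_newick(text: str):
    """Parse a simplified Newick string -- names, commas, and
    parentheses only, no branch lengths -- into (name, children)
    tuples. Internal nodes without a label get an empty name."""
    pos = 0

    def leaf_name() -> str:
        nonlocal pos
        start = pos
        while pos < len(text) and text[pos] not in "(),;":
            pos += 1
        return text[start:pos]

    def node():
        nonlocal pos
        if text[pos] == "(":
            pos += 1                        # consume '('
            children = [node()]
            while text[pos] == ",":
                pos += 1                    # consume ','
                children.append(node())
            pos += 1                        # consume ')'
            return (leaf_name(), children)  # optional internal label
        return (leaf_name(), [])            # leaf node

    return node()
```

A tree editor like the one described would render and manipulate the nested structure this returns, e.g. `parse_newick("((A,B),C);")` yields a root with two children, one of which is the (A, B) clade.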

    Using voice to tag digital photographs on the spot

    Tagging of media, particularly digital photographs, has become a very popular and efficient means of organizing material on the internet and on personal computers. Tagging, though, is normally accomplished long after the images have been captured, possibly at the expense of in-the-moment information. Although some digital cameras have begun to automatically populate the various fields of a photograph's metadata, these generic labels often lack the descriptiveness presented through user-observed annotations and therefore stress the necessity of a user-driven input method. However, most mobile annotation applications demand a great number of keystrokes in order for users to tag photographs and thereby focus the user's attention inward. Specifically, the problem is that these applications require users to take their eyes off the environment while typing in tags. We hypothesize that we can shift the user's focus away from the mobile device and back to their environment by creating a mobile annotation application which accepts voice commands. In other words, our major hypothesis is that a convenient way of tagging digital photographs is by using voice commands
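Assuming a speech recognizer has already produced a transcript, the voice-to-tag step might look like the sketch below. The "tag ..." command grammar and the class names are our assumptions, not the thesis's actual design:

```python
def parse_voice_command(transcript: str):
    """Turn a recognized utterance such as 'tag sunset beach' into a
    list of tags. Any other utterance yields no tags. The 'tag ...'
    grammar is illustrative only."""
    words = transcript.lower().split()
    if words and words[0] == "tag":
        return words[1:]
    return []

class PhotoTagger:
    """Accumulates voice-derived tags per photo; an in-memory
    stand-in for writing to the image's metadata fields."""

    def __init__(self):
        self.tags = {}  # filename -> set of tags

    def tag_photo(self, filename: str, transcript: str):
        self.tags.setdefault(filename, set()).update(
            parse_voice_command(transcript))
        return self.tags[filename]
```

The point of the design is that the user speaks while framing the next shot, so the device never demands the eyes-on-screen attention that keystroke tagging does.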

    Manual for conducting socioeconomic surveys through Personal Digital Assistants (PDAs) and personal computers

    Socioeconomic environment, Surveys, Computers, Computer software, Data collection, PDAs, Personal computers, Research Methods/Statistical Methods, U40, E10

    Barehand Mode Switching in Touch and Mid-Air Interfaces

    Raskin defines a mode as a distinct setting within an interface where the same user input produces results different from those it would produce in other settings. Most interfaces have multiple modes in which input is mapped to different actions, and mode-switching is simply the transition from one mode to another. In touch interfaces, the current mode can change how a single touch is interpreted: for example, it could draw a line, pan the canvas, select a shape, or enter a command. In Virtual Reality (VR), a hand-gesture-based 3D modelling application may have different modes for object creation, selection, and transformation; depending on the mode, the movement of the hand is interpreted differently. One of the crucial factors determining the effectiveness of an interface is user productivity, and the mode-switching time of different input techniques, whether in a touch interface or a mid-air interface, affects that productivity. Moreover, when touch and mid-air interfaces like VR are combined, making informed decisions about mode assignment becomes even more complicated. This thesis provides an empirical investigation to characterize the mode-switching phenomenon in barehand touch-based and mid-air interfaces. It explores the potential of using these input spaces together for a productivity application in VR, and it concludes with a step towards defining and evaluating the multi-faceted mode concept, its characteristics and its utility, when designing user interfaces more generally
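Raskin's definition, the same input yielding different results in different settings, can be made concrete in a small dispatch sketch. The mode names and actions below are illustrative, not taken from the thesis:

```python
from typing import Callable, Dict

class ModalCanvas:
    """An identical drag gesture is interpreted differently
    depending on the currently active mode."""

    def __init__(self):
        self.mode = "draw"
        self._actions: Dict[str, Callable[[float, float], str]] = {
            "draw":   lambda dx, dy: f"line by ({dx}, {dy})",
            "pan":    lambda dx, dy: f"canvas moved by ({dx}, {dy})",
            "select": lambda dx, dy: f"selection box ({dx} x {dy})",
        }

    def switch_mode(self, mode: str) -> None:
        # Mode-switching is the transition from one mapping to another;
        # its cost in time is what the thesis measures empirically.
        if mode not in self._actions:
            raise ValueError(f"unknown mode: {mode}")
        self.mode = mode

    def drag(self, dx: float, dy: float) -> str:
        # Identical input, mode-dependent interpretation.
        return self._actions[self.mode](dx, dy)
```

The dispatch itself is trivial; the design question the thesis addresses is which input technique should trigger `switch_mode`, since that choice dominates the time cost of every mode transition.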

    Designing Explicit Numeric Input Interfaces for Immersive Virtual Environments

    User interfaces involving explicit control of numeric values in immersive virtual environments have not been well studied. In the context of designing three-dimensional interaction techniques for the creation of multiple objects, called cloning, we have developed and tested a dynamic slider interface (D-Slider) and a virtual numeric keypad (V-Key). Our cloning interface requires precise number input because it allows users to place objects at any location in the environment with a precision of 1/10 unit. The design of the interface focuses on feedback, constraints, and expressiveness. Comparative usability studies have shown that the newly designed user interfaces were easy to use, effective, and had a good quality of interaction. We describe a working prototype of our cloning interface, the iterative design process for D-Slider and V-Key, and lessons learned. Our interfaces can be re-used for any virtual environment interaction tasks requiring explicit numeric input
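The 1/10-unit precision quoted above implies a quantization step somewhere in the slider pipeline. A minimal sketch, assuming clamp-then-snap behaviour (our assumption, not a documented detail of D-Slider):

```python
def quantize(value: float, lo: float, hi: float, step: float = 0.1) -> float:
    """Clamp a continuous slider position into [lo, hi], then snap it
    to the nearest multiple of `step`. The 0.1 default matches the
    1/10-unit placement precision described in the abstract; the final
    round() just cleans up floating-point residue."""
    value = max(lo, min(hi, value))
    return round(round(value / step) * step, 10)
```

A constraint like this is one concrete form of the "constraints" the design focuses on: whatever continuous position the user drags to, the value the interface reports is always expressible at the stated precision.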

    Usability of the Stylus Pen in Mobile Electronic Documentation

    Stylus pens are often used with mobile information devices. However, few studies have examined the stylus's basic movements, because techniques to support documentation with stylus pens have not been well developed. This study examined the usability of stylus pens in authentic documentation tasks, comprising three main tasks (sentence, table, and paragraph making) with two types of styluses (touchsmart stylus and mobile stylus) and a traditional pen. The statistical results showed that participants preferred the traditional pen on all criteria. Because of inconvenient hand movements, the mobile stylus was the least preferred for every task; mobility does not provide any advantage in using the stylus. In addition, the study found inconvenient hand support when using a stylus, and different feedback between a stylus and a traditional pen. This study was supported by the Dongguk University Research Fund of 2015. Support for the University Jaume-I (UJI) Robotic Intelligence Laboratory is provided in part by Ministerio de Economía y Competitividad (DPI2011-27846), by Generalitat Valenciana (PROMETEOII/2014/028) and by Universitat Jaume I (P1-1B2014-52)

    Multi-cursor multi-user mobile interaction with a large shared display

    When using a mobile device to control a cursor on a large shared display, the interaction must be carefully planned to match the environment and purpose of the system's use. We describe a 'democratic jukebox' system that revealed five recommendations for designing this type of interaction: providing feedback to the user; how to represent users in a multi-cursor system; where people tend to look and how they expect to move their cursor; the orientation of screens and the social context; and the use of simulated users to give real users a sense that they are engaging with a greater audience

    The machine refinement of raw graphic data for translation into a low level data base for computer aided architectural design (CAAD).

    It is argued that a significant feature which acts as a disincentive against the adoption of CAAD systems by small private architectural practices is the awkwardness of communicating with computers when compared with traditional drawing-board techniques. This consideration, although perhaps not the dominant one, may be mitigated by the development of systems in which the onus of communicating is placed on the machine, through the medium of an architect's sketch plan drawing. In reaching this conclusion, a design morphology is suggested in which the creative generation of building designs is set in the context of the development of a 'data-base' of information which completely and consistently describes the architect's hypothetical building solution. This thesis covers research carried out by the author between 1981 and 1984, and describes the theory, development and application of algorithms to interpret architects' sketch plan drawings, and hence permit the encoding of building geometries for CAAD applications programs