    The nature and evaluation of commercial expert system building tools, revision 1

    This memorandum reviews the factors that constitute an Expert System Building Tool (ESBT) and evaluates current tools in terms of these factors. The evaluation is based on the tools' structure and their alternative forms of knowledge representation, inference mechanisms, and developer and end-user interfaces. Next, functional capabilities, such as diagnosis and design, are related to alternative forms of mechanization. The characteristics and capabilities of existing commercial tools are then reviewed in terms of these criteria.
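
    A minimal sketch of how such a criteria-based comparison might be organised is shown below; the criterion weights, tool names, and ratings are hypothetical placeholders rather than values from the memorandum.

```python
# A hedged sketch of a weighted-criteria comparison along the dimensions the
# memorandum names (knowledge representation, inference mechanism, interfaces,
# functional fit). All weights, tool names, and ratings are hypothetical
# placeholders, not values taken from the memorandum.

CRITERIA_WEIGHTS = {
    "knowledge_representation": 0.30,  # rules, frames, logic, hybrids
    "inference_mechanism": 0.30,       # forward/backward chaining, uncertainty handling
    "developer_interface": 0.20,       # editors, tracing, explanation facilities
    "end_user_interface": 0.10,        # dialogue and graphics support
    "functional_fit": 0.10,            # suitability for diagnosis, design, etc.
}

def score_tool(ratings: dict) -> float:
    """Weighted sum of per-criterion ratings, each on a 0-10 scale."""
    return sum(weight * ratings.get(criterion, 0.0)
               for criterion, weight in CRITERIA_WEIGHTS.items())

# Example usage with made-up ratings for two hypothetical shells.
tools = {
    "Shell A": {"knowledge_representation": 8, "inference_mechanism": 7,
                "developer_interface": 6, "end_user_interface": 5, "functional_fit": 7},
    "Shell B": {"knowledge_representation": 6, "inference_mechanism": 8,
                "developer_interface": 7, "end_user_interface": 6, "functional_fit": 5},
}
for name, ratings in tools.items():
    print(f"{name}: {score_tool(ratings):.2f}")
```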

    Statistical user interfaces

    A statistical user interface is an interface between a human user and a statistical software package. Whenever we use a statistical software package, we want to solve a specific statistical problem, but very often it is first necessary to learn specific things about the software package. Every one of us knows about the "religious wars" over which statistical software package or method is best for a certain task; see Marron (1996) and Cleveland and Loader (1996) and the related internet discussions. Experienced statisticians use a range of different statistical software packages rather than a single one, although all of the major companies (or at least their marketing departments) tell us that we need only their software package.

    KIPSE1: A Knowledge-based Interactive Problem Solving Environment for data estimation and pattern classification

    A knowledge-based interactive problem solving environment called KIPSE1 is presented. KIPSE1 is built on a commercial expert system shell, the KEE system. The environment gives the user the capability to carry out exploratory data analysis and pattern classification tasks. A good solution often consists of a sequence of steps, with a set of methods used at each step. In KIPSE1, a solution is represented in the form of a decision tree, and each node of the solution tree represents a partial solution to the problem. Several methodologies are provided to the user at each node, so that the user can interactively select the method and data sets to test and subsequently examine the results. In addition, users are allowed to make decisions at various stages of problem solving to subdivide the problem into smaller subproblems, so that a large problem can be handled and a better solution can be found.
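
    A minimal sketch of representing a solution as a tree of partial solutions, in the spirit described above, follows; the class and field names are illustrative assumptions, not the KEE-based implementation used in KIPSE1.

```python
# A hedged sketch of representing a solution as a tree of partial solutions,
# in the spirit of the description above. Class and field names are
# illustrative assumptions, not the KEE-based implementation used in KIPSE1.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SolutionNode:
    description: str                  # the (sub)problem this partial solution addresses
    method: Optional[str] = None      # analysis or classification method chosen at this step
    results: dict = field(default_factory=dict)   # outputs the user inspects before continuing
    children: list = field(default_factory=list)  # refinements into smaller subproblems

    def refine(self, description: str) -> "SolutionNode":
        """Subdivide this node into a smaller subproblem and return the new child node."""
        child = SolutionNode(description)
        self.children.append(child)
        return child

# Example: interactively growing a small solution tree.
root = SolutionNode("classify sensor patterns")
explore = root.refine("exploratory data analysis")
explore.method = "summary statistics"
classify = root.refine("pattern classification of cluster 1")
classify.method = "nearest-neighbour classifier"
```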

    Using non-speech sounds to provide navigation cues

    This article describes three experiments that investigate the possibility of using structured non-speech audio messages called earcons to provide navigational cues in a menu hierarchy. A hierarchy of 27 nodes and 4 levels was created, with an earcon for each node, and rules were defined for the creation of hierarchical earcons at each node. Participants had to identify their location in the hierarchy by listening to an earcon. Results of the first experiment showed that participants could identify their location with 81.5% accuracy, indicating that earcons are a powerful method of communicating hierarchy information. One proposed use for such navigation cues is in telephone-based interfaces (TBIs), where navigation is a problem. The first experiment did not address the particular problems of earcons in TBIs, such as "does the lower quality of sound over the telephone lower recall rates?", "can users remember earcons over a period of time?", and "what effect does training type have on recall?" An experiment was conducted, and results showed that sound quality did lower the recall of earcons. However, a redesign of the earcons overcame this problem, with 73% recalled correctly. Participants could still recall earcons at this level after a week had passed. Training type also affected recall: with personal training participants recalled 73% of the earcons, but with purely textual training results were significantly lower. These results show that earcons can provide good navigation cues for TBIs. The final experiment used compound, rather than hierarchical, earcons to represent the hierarchy from the first experiment. Results showed that with sounds constructed in this way participants could recall 97% of the earcons. These experiments have developed our general understanding of earcons: a hierarchy three times larger than any previously created was tested, and this was also the first test of the recall of earcons over time.
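
    A minimal sketch of how a hierarchical earcon could be derived from a node's path in the menu hierarchy is shown below; the specific mappings (level to timbre, sibling position to pitch) and the note durations are illustrative assumptions, not the rules used in these experiments.

```python
# A hedged sketch of deriving a hierarchical earcon from a node's path in the
# menu hierarchy: each level contributes one note, so a node's motif embeds its
# ancestors' motifs. The mappings (level -> timbre, sibling position -> pitch)
# and the durations are illustrative assumptions, not the rules used in the
# experiments described above.

TIMBRES = ["organ", "marimba", "trumpet", "strings"]        # one timbre per hierarchy level
PITCHES = ["C4", "E4", "G4", "B4", "D5", "F5", "A5", "C6"]  # one pitch per sibling position

def earcon_for(path):
    """Return the note sequence for the node addressed by `path`.

    `path` gives the sibling index chosen at each level, e.g. (1, 0, 2) is a
    level-3 node. Deeper nodes therefore sound longer motifs.
    """
    return [
        {
            "timbre": TIMBRES[level % len(TIMBRES)],
            "pitch": PITCHES[sibling % len(PITCHES)],
            "duration_ms": 150,
        }
        for level, sibling in enumerate(path)
    ]

# Example: the earcon for a level-3 node contains three notes.
print(earcon_for((1, 0, 2)))
```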

    A novel R-package graphic user interface for the analysis of metabonomic profiles

    Background: Analysis of the plethora of metabolites found in the NMR spectra of biological fluids or tissues requires the data complexity to be simplified. We present a graphical user interface (GUI) for NMR-based metabonomic analysis. The "Metabonomic Package" has been developed for metabonomics research as open-source software and uses the R statistical libraries. Results: The package offers the following options: processing of raw 1-dimensional spectra (phase correction, baseline correction, and normalization); importing of processed spectra; inclusion/exclusion of spectral ranges, optional binning and bucketing, and detection and alignment of peaks; sorting of metabolites based on their ability to discriminate, metabolite selection, and outlier identification; multivariate unsupervised analysis (principal component analysis, PCA); multivariate supervised analysis (partial least squares (PLS), linear discriminant analysis (LDA), and k-nearest neighbor classification); neural networks; and visualization and overlaying of spectra, including plots of the chemical shift positions for different samples. Furthermore, the "Metabonomic" GUI includes a console that enables other kinds of analyses and gives access to all of the R statistical tools. Conclusion: We have made complex multivariate analysis user-friendly for both experienced and novice users, which could help to expand the use of NMR-based metabonomics.
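
    The package itself is written in R; the sketch below is only a Python analogue of two of the steps it describes (uniform binning of processed 1D spectra followed by unsupervised PCA), with assumed array shapes, bin width, and normalization rather than the package's defaults.

```python
# The "Metabonomic" package itself is R-based; this is only a hedged Python
# analogue of two of the steps it describes: uniform binning of processed 1D
# spectra followed by unsupervised PCA. Array shapes, bin width, and the
# normalization choice are assumptions, not the package's defaults.

import numpy as np
from sklearn.decomposition import PCA

def bin_spectra(spectra: np.ndarray, bin_size: int = 16) -> np.ndarray:
    """Sum consecutive groups of `bin_size` points in each spectrum (rows are samples)."""
    n_samples, n_points = spectra.shape
    usable = n_points - (n_points % bin_size)
    return spectra[:, :usable].reshape(n_samples, -1, bin_size).sum(axis=2)

# Simulated stand-in for processed 1D spectra: 20 samples x 4096 data points.
rng = np.random.default_rng(0)
spectra = rng.random((20, 4096))

binned = bin_spectra(spectra)
binned /= binned.sum(axis=1, keepdims=True)         # total-area normalization per sample

scores = PCA(n_components=2).fit_transform(binned)  # unsupervised overview of the samples
print(scores.shape)                                 # (20, 2)
```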

    Investigating graphical user interface usability on task sequence and display structure dependencies

    Designing Graphical User Interfaces (GUIs) requires the consideration of the relationship between task sequence requirements (the sequence of operations arising from task structures and application constraints) and display structure (the layout of the elements of the interface). The basic purpose was to understand differences in interface usability in terms of efficiency, motor performance, and search performance. Thirty-two subjects performed experiments in four groups; the experiments differed in display structure and in the compatibility of task sequences. Subjects' mouse actions, mouse coordinates, and eye positions were recorded. The derived measures, click efficiency, mouse traversal, and eye visits to different areas of interest (namely the tool, object, and goal areas), were analyzed in a repeated-measures factorial design with compatibility and display structure as between-subjects factors and phase of learning as the within-subject factor. A significant interaction between compatibility and phase of learning (p < .01) was observed. Mouse traversal per unit time increased significantly (p < .05) across phases of learning. The phase of learning affected the number of eye visits for all groups. Compatibility had a significant (p < .005) effect on the average processing time during search. The results establish that the compatibility of the task sequence requirements with the display structure affects subject performance and hence the usability of the interface. However, through learning, subject performance improved considerably, and the effects of task sequence and display structure diminished in the final stages of user learning. Based on this evidence, a systemic structural activity approach was used to develop a model of human performance from the eye movement and mouse action data. This structural model of human performance is defined as an algorithm and can be used for estimating the complexity of task performance. In this study, only the assumptions underlying the model and its formulation are explained, as an application of the results of the study. The study thus served a dual purpose: understanding the compatibility of the task sequence with the interface display structure, and establishing eye and mouse movements as a viable tool for studying task performance at human-computer interfaces.
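
    A minimal sketch of how derived measures of the kind analysed here (click efficiency, mouse traversal per unit time, and eye visits per area of interest) might be computed from logged events is shown below; the log formats and the exact definitions are assumptions for illustration, not the study's operationalisation.

```python
# A hedged sketch of computing derived measures of the kind analysed in the
# study (click efficiency, mouse traversal per unit time, eye visits per area
# of interest) from logged events. The log formats and the exact definitions
# are assumptions for illustration, not the study's operationalisation.

import math
from collections import Counter

def click_efficiency(observed_clicks: int, minimum_clicks: int) -> float:
    """Ratio of the minimum clicks a task requires to the clicks actually made."""
    return minimum_clicks / observed_clicks if observed_clicks else 0.0

def mouse_traversal_per_second(track):
    """Total path length divided by elapsed time; `track` holds (t, x, y) samples."""
    dist = sum(math.hypot(x2 - x1, y2 - y1)
               for (_, x1, y1), (_, x2, y2) in zip(track, track[1:]))
    elapsed = track[-1][0] - track[0][0]
    return dist / elapsed if elapsed > 0 else 0.0

def eye_visits(fixation_aois):
    """Count entries into each area of interest (tool, object, goal), collapsing repeats."""
    visits = Counter()
    previous = None
    for aoi in fixation_aois:
        if aoi != previous:
            visits[aoi] += 1
        previous = aoi
    return visits

# Example with made-up data.
print(click_efficiency(observed_clicks=12, minimum_clicks=9))
print(mouse_traversal_per_second([(0.0, 10, 10), (0.5, 40, 50), (1.0, 70, 90)]))
print(eye_visits(["tool", "tool", "object", "goal", "object"]))
```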