
    Implicit Measures of Lostness and Success in Web Navigation

    In two studies, we investigated the ability of a variety of structural and temporal measures computed from a web navigation path to predict lostness and task success. The user’s task was to find requested target information on specified websites. The web navigation measures were based on counts of visits to web pages and other statistical properties of the web usage graph (such as compactness, stratum, and similarity to the optimal path). Subjective lostness was best predicted by similarity to the optimal path and time on task. The best overall predictor of success on individual tasks was similarity to the optimal path, but other predictors were sometimes superior depending on the particular web navigation task. These measures can be used to diagnose user navigational problems and to help identify problems in website design.
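
    The abstract names similarity to the optimal path as the strongest predictor but does not give a formula. As a minimal sketch, one plausible such measure compares the sequence of pages a user visited against the optimal path using a subsequence-matching ratio; the function name and the choice of difflib are illustrative assumptions, not the paper's definition.

    ```python
    from difflib import SequenceMatcher

    def path_similarity(visited, optimal):
        """Similarity in [0, 1] between the pages a user visited and the
        optimal path to the target; 1.0 means perfectly direct navigation.
        The matching-subsequence ratio is an illustrative choice, not the
        paper's exact definition.
        """
        return SequenceMatcher(None, visited, optimal).ratio()

    # A user who wanders via C and backtracks before reaching target E:
    visited = ["A", "B", "C", "B", "D", "E"]
    optimal = ["A", "B", "D", "E"]
    print(path_similarity(visited, optimal))  # 0.8 -> fairly direct
    ```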

    Hypermedia learning and prior knowledge: Domain expertise vs. system expertise

    Prior knowledge is often argued to be an important determinant in hypermedia learning, and may be thought of as including two important elements: domain expertise and system expertise. However, there has been a lack of research considering these issues together. In an attempt to address this shortcoming, this paper presents a study that examines how domain expertise and system expertise influence students’ learning performance in, and perceptions of, a hypermedia system. The results indicate that participants with lower domain knowledge show a greater improvement in their learning performance than those with higher domain knowledge. Furthermore, those who enjoy using the Web more are likely to have positive perceptions of non-linear interaction. Based on these results, the paper discusses how to accommodate the different needs of students with varying levels of prior knowledge.

    The Best Trail Algorithm for Assisted Navigation of Web Sites

    We present an algorithm called the Best Trail Algorithm, which helps solve the hypertext navigation problem by automating the construction of memex-like trails through the corpus. The algorithm performs a probabilistic best-first expansion of a set of navigation trees to find relevant and compact trails. We describe the implementation of the algorithm, scoring methods for trails, filtering algorithms, and a new metric called potential gain, which measures the potential of a page for future navigation opportunities.
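
    The abstract outlines the approach but includes no pseudocode. As a hedged sketch of the general best-first idea, the fragment below expands trails from a priority queue, scoring a trail by the mean relevance of its pages and forbidding revisits for compactness; both choices are assumptions, and the paper's actual scoring methods, filtering algorithms, and potential-gain metric are not reproduced here.

    ```python
    import heapq

    def best_trail(graph, relevance, start, max_len=5):
        """Best-first expansion of navigation trails rooted at `start`.

        A trail is scored by the mean relevance of its pages, and pages
        are never revisited within a trail (both illustrative choices).
        """
        def score(trail):
            return sum(relevance[p] for p in trail) / len(trail)

        frontier = [(-score([start]), [start])]  # max-heap via negation
        best = [start]
        while frontier:
            _, trail = heapq.heappop(frontier)
            if score(trail) > score(best):
                best = trail
            if len(trail) < max_len:
                for nxt in graph.get(trail[-1], []):
                    if nxt not in trail:  # keep trails compact
                        heapq.heappush(
                            frontier, (-score(trail + [nxt]), trail + [nxt]))
        return best

    # Toy web graph with per-page relevance scores for a user's query:
    web = {"home": ["news", "docs"], "docs": ["api", "faq"]}
    rel = {"home": 0.2, "news": 0.1, "docs": 0.6, "api": 0.9, "faq": 0.3}
    print(best_trail(web, rel, "home"))  # ['home', 'docs', 'api']
    ```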

    The Impact of Link Suggestions on User Navigation and User Perception

    The study reported in this paper explores the effects of providing web users with link suggestions that are relevant to their tasks. Results indicate that link suggestions were positively received. Furthermore, users perceived sites with link suggestions as more usable and themselves as less disoriented. The average task execution time was significantly lower than in the control condition, and users appeared to navigate in a more structured manner. Unexpectedly, men benefited more from the link suggestions than women.

    A fine grained heuristic to capture web navigation patterns

    In previous work we proposed a statistical model to capture user behaviour when browsing the web. The user navigation information obtained from web logs is modelled as a hypertext probabilistic grammar (HPG), which falls within the class of regular probabilistic grammars. The set of highest-probability strings generated by the grammar corresponds to the user’s preferred navigation trails. We previously conducted experiments with a Breadth-First Search (BFS) algorithm that exhaustively computes all strings with probability above a specified cut-point, which we call the rules. Although the algorithm’s running time varies linearly with the number of grammar states, it has the drawbacks of returning a large number of rules when the cut-point is small and a small set of very short rules when the cut-point is high. In this work, we present a new heuristic that implements an iterative deepening search in which the set of rules is incrementally augmented by exploring high-probability trails first. A stopping parameter measures the distance between the current rule-set and the corresponding maximal set obtained by the BFS algorithm. When the stopping parameter is zero the heuristic coincides with the BFS algorithm, and as the parameter approaches one the number of rules obtained decreases accordingly. Experiments were conducted with both real and synthetic data, and the results show that for a given cut-point the number of rules induced increases smoothly as the stopping criterion decreases. Therefore, by setting the value of the stopping criterion, the analyst can control the number and quality of the rules induced, where the quality of a rule is measured by both its length and its probability.
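
    As an illustration of the exhaustive BFS baseline the abstract describes, the sketch below enumerates trails whose probability of being generated stays above a cut-point. The flat first-order transition table and all names are assumptions; a real HPG also models start and end states and configurable history depth.

    ```python
    from collections import deque

    def hpg_rules(start_probs, trans, cut_point):
        """Breadth-first enumeration of navigation trails whose probability
        exceeds `cut_point`.  `start_probs[p]` is the probability a session
        starts at page `p`; `trans[p][q]` is the transition probability
        p -> q.  Structure and names are illustrative assumptions.
        """
        rules = []
        queue = deque((prob, [page]) for page, prob in start_probs.items()
                      if prob > cut_point)
        while queue:
            prob, trail = queue.popleft()
            extended = False
            for nxt, p in trans.get(trail[-1], {}).items():
                if prob * p > cut_point:
                    queue.append((prob * p, trail + [nxt]))
                    extended = True
            if not extended:
                rules.append((trail, prob))  # maximal trail above cut-point
        return rules

    starts = {"home": 1.0}
    T = {"home": {"docs": 0.7, "news": 0.3}, "docs": {"api": 0.5}}
    print(hpg_rules(starts, T, 0.25))
    # [(['home', 'news'], 0.3), (['home', 'docs', 'api'], 0.35)]
    ```

    The trade-off the abstract notes is visible here: lowering the cut-point lets many more trails survive the pruning test, while raising it cuts trails off after only a step or two.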

    The WEB Book experiments in electronic textbook design

    This paper describes a series of three evaluations of electronic textbooks on the Web, which focused on assessing how appearance and design can affect users' sense of engagement and directness with the material. The EBONI Project's methodology for evaluating electronic textbooks is outlined and each experiment is described, together with an analysis of results. Finally, some recommendations for successful design are suggested, based on an analysis of all experimental data. These recommendations underline the main findings of the evaluations: that users want some features of paper books to be preserved in the electronic medium, while also preferring electronic text to be written in a scannable style.

    Revisitation Patterns and Disorientation

    The non-linear structure of web sites may cause users to become disorientated. In this paper we describe the results of a pilot study to find measures of user revisitation patterns that help in predicting disorientation.
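
    The abstract does not define its revisitation measures, so the following is a hypothetical example of the kind of measure involved, not necessarily one used in the pilot study: the fraction of page visits that return to an already-seen page.

    ```python
    def revisitation_rate(path):
        """Fraction of page visits that are revisits: 0.0 for a straight
        path, approaching 1.0 for heavy back-and-forth.  A hypothetical
        illustration, not a measure taken from the pilot study.
        """
        return 1 - len(set(path)) / len(path)

    # Repeated returns to A and B hint at disorientation:
    print(revisitation_rate(["A", "B", "A", "C", "A", "B"]))  # 0.5
    ```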

    Mapping cyberspace: visualising, analysing and exploring virtual worlds

    In recent years, with the development of computer networks such as the Internet and the world wide web (WWW), cyberspace has been increasingly studied by researchers in disciplines such as computer science, sociology, geography, and cartography. Cyberspace is mainly rooted in two computer technologies: networks and virtual reality. Cybermaps, special maps of cyberspace, have been used as a tool for understanding its various aspects. Cyberspace, as a virtual space, can be distinguished from the earth we live on in many ways, and because of these distinctions, mapping it poses a considerable challenge for cartographers and their long tradition of mapping things in clear ways. By comparing cybermaps to traditional maps, this paper addresses issues such as visualising, analysing and exploring cyberspace from different perspectives.

    Seven ways to make a hypertext project fail

    Hypertext is an exciting concept, but designing and developing hypertext applications of practical scale is hard. To make a project feasible and successful, 'hypertext engineers' must overcome the following problems: (1) developing realistic expectations in the face of hypertext hype; (2) assembling a multidisciplinary project team; (3) establishing and following design guidelines; (4) dealing with installed-base constraints; (5) obtaining usable source files; (6) finding appropriate software technology and methods; and (7) overcoming legal uncertainties about intellectual property concerns.