    Supporting ‘Word-of-Mouth’ Social Networks through Collaborative Information Filtering

    Altered Vista is an instructional system that supports a form of ‘contextual’ collaborative learning. Its design incorporates an information filtering technique, called collaborative information filtering, which, through computational and statistical means, leverages the work of individuals to benefit a group of users. Altered Vista is designed to provide, upon request, personalized recommendations of Web sites. It can also recommend like-minded people, thus setting the stage for future collaboration and communication. We present results from an empirical study in which in-service and pre-service teachers used Altered Vista. The study examined the feasibility and utility of automating the well-known social practice of propagating word-of-mouth opinions within educational settings. It also examined the impact of Altered Vista’s ability to recommend a social network of potentially unknown people.
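
    The abstract does not detail Altered Vista's algorithm, so the following is a minimal Python sketch of user-based collaborative information filtering: correlate users' ratings of Web sites, then recommend both the sites favored by like-minded users and the like-minded users themselves. The ratings data, rating scale, and neighbourhood size are illustrative assumptions, not details from the paper.

```python
import math

# Toy ratings: user -> {site: rating on a 1-5 scale}. Illustrative data only.
ratings = {
    "alice": {"siteA": 5, "siteB": 3, "siteC": 4},
    "bob":   {"siteA": 4, "siteB": 1, "siteD": 5},
    "carol": {"siteB": 2, "siteC": 5, "siteD": 4},
}

def pearson(u, v):
    """Pearson correlation over the sites rated by both users."""
    common = set(ratings[u]) & set(ratings[v])
    if len(common) < 2:
        return 0.0
    ru = [ratings[u][s] for s in common]
    rv = [ratings[v][s] for s in common]
    mu, mv = sum(ru) / len(ru), sum(rv) / len(rv)
    num = sum((a - mu) * (b - mv) for a, b in zip(ru, rv))
    den = math.sqrt(sum((a - mu) ** 2 for a in ru) *
                    sum((b - mv) ** 2 for b in rv))
    return num / den if den else 0.0

def recommend(user, k=2):
    """Rank sites the user has not rated by similarity-weighted ratings
    of the k most like-minded users."""
    neighbours = sorted((u for u in ratings if u != user),
                        key=lambda u: pearson(user, u), reverse=True)[:k]
    scores = {}
    for n in neighbours:
        w = pearson(user, n)
        for site, r in ratings[n].items():
            if site not in ratings[user]:
                scores[site] = scores.get(site, 0.0) + w * r
    return sorted(scores.items(), key=lambda x: x[1], reverse=True)

print(recommend("alice"))                        # site recommendations
print(max((u for u in ratings if u != "alice"),  # like-minded person
          key=lambda u: pearson("alice", u)))
```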

    Implications of Computational Cognitive Models for Information Retrieval

    This dissertation explores the implications of computational cognitive modeling for information retrieval. The parallel between information retrieval and human memory is that the goal of an information retrieval system is to find the set of documents most relevant to a query, whereas the goal of the human memory system is to assess the relevance of items stored in memory given a memory probe (Steyvers & Griffiths, 2010). The two major topics of this dissertation are desirability and information scent. Desirability is the context-independent probability of an item receiving attention (Recker & Pitkow, 1996); it has been widely used to model the probability that a given memory item will be retrieved (Anderson, 2007). Information scent is a context-dependent measure defined as the utility of an information item (Pirolli & Card, 1996b); it has been widely used to predict which memory item will be retrieved given a probe (Anderson, 2007) and to predict human browsing behavior (Pirolli & Card, 1996b). In this dissertation, I proposed the theory that the desirability observed in human memory is caused by preferential attachment in networks. Additionally, I showed that document accesses in large repositories mirror the statistical properties observed in human memory and that these properties can be used to improve document ranking. Finally, I showed that combining information scent and desirability improves document ranking over well-established existing approaches.
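
    As a rough illustration of how the two quantities might combine in ranking, the sketch below scores each document by an ACT-R-style base-level activation (the frequency- and recency-driven quantity underlying desirability; Anderson, 2007) plus a simple term-overlap stand-in for information scent. The decay rate, weights, and data are illustrative assumptions, not values from the dissertation.

```python
import math

def base_level_activation(access_times, now, decay=0.5):
    """ACT-R base-level learning: B = ln(sum_j t_j^-d), where t_j is the
    age of the j-th past access and d is the decay rate. Used here as a
    context-independent 'desirability' score."""
    return math.log(sum((now - t) ** -decay for t in access_times))

def information_scent(query_terms, doc_terms):
    """Context-dependent score: fraction of query terms found in the
    document (a crude stand-in for spreading-activation scent)."""
    q, d = set(query_terms), set(doc_terms)
    return len(q & d) / len(q) if q else 0.0

def rank(docs, query, now, w_scent=1.0, w_desire=0.3):
    """Combine scent and desirability into a single ranking score."""
    scored = [(name,
               w_scent * information_scent(query, terms)
               + w_desire * base_level_activation(accesses, now))
              for name, (accesses, terms) in docs.items()]
    return sorted(scored, key=lambda x: x[1], reverse=True)

docs = {
    "doc1": ([1, 50, 90], ["cache", "web", "policy"]),
    "doc2": ([80, 95, 99], ["memory", "retrieval", "model"]),
}
print(rank(docs, ["memory", "model"], now=100))  # doc2 ranks first
```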

    A Model-Driven Simulation Study of World-Wide-Web Cache Policies.

    The World-Wide Web has experienced exponential growth in recent years. This growth has created a tremendous increase in network and server loads, which has in turn adversely affected user response times. Among the many viable approaches to reducing user response time, Web caching has recently received considerable attention. In this dissertation we explore a new approach to the study of Web cache policies, namely model-driven simulation. We present a model of Web-access user patterns grounded in theory and principles from the information sciences, and we validate it against empirical Web access data from several different Web sites. The importance of removal policies in improving cache performance motivates us to propose a dynamic and robust removal policy that incorporates the characteristics of user access patterns. We show that our proposed removal policy performs consistently well over a variety of parameters. We then use model-driven simulation to evaluate the impact of different factors and policies on cache performance. The results indicate that cache size, user access patterns, and removal policy are the major factors affecting cache performance; that continuous removal is a simple and effective method; and that increases in average document size, a low comfort level (less than 50% of cache size), and the threshold policy degrade Web cache performance. Finally, we discuss the limitations of the current research and suggest directions for future work.
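
    The abstract does not spell out the proposed removal policy, so the following is a speculative Python sketch in its spirit: a cache whose eviction score reflects both how often and how recently a document was accessed, with removal applied continuously whenever an insertion would exceed capacity. The class name, scoring rule, and parameters are invented for illustration.

```python
import time

class AccessAwareCache:
    """Toy Web cache whose removal policy scores documents by a mix of
    access frequency and recency; low-scoring documents are evicted first."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.entries = {}  # url -> {"size", "hits", "last_access"}

    def _score(self, e, now):
        # Frequently hit, recently touched documents score high and survive.
        age = max(now - e["last_access"], 1e-9)
        return e["hits"] / age

    def get(self, url):
        e = self.entries.get(url)
        if e:
            e["hits"] += 1
            e["last_access"] = time.monotonic()
        return e

    def put(self, url, size):
        now = time.monotonic()
        # Continuous removal: evict lowest-scoring documents until it fits.
        while self.used + size > self.capacity and self.entries:
            victim = min(self.entries,
                         key=lambda u: self._score(self.entries[u], now))
            self.used -= self.entries.pop(victim)["size"]
        if self.used + size <= self.capacity:
            self.entries[url] = {"size": size, "hits": 1, "last_access": now}
            self.used += size

cache = AccessAwareCache(capacity_bytes=1000)
cache.put("/index.html", 400)
cache.put("/logo.gif", 300)
cache.get("/index.html")            # boosts /index.html's score
cache.put("/big.pdf", 500)          # forces eviction; /logo.gif is the victim
print(sorted(cache.entries))        # ['/big.pdf', '/index.html']
```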

    A Theory and Practice of Website Engagibility

    This thesis explores the domain of website quality. It presents a new study of website quality - an abstraction and synthesis, a measurement methodology, and analysis - and proposes metrics which can be used to quantify it. The strategy employed involved revisiting software quality, modelling its broader perspectives, and identifying quality factors which are specific to the World Wide Web (WWW). This resulted in a detailed set of elements which constitute website quality, a method for quantifying a quality measure, and a demonstration of an approach to benchmarking eCommerce websites. The thesis has two dimensions. The first is a contribution to the theory of software quality - specifically website quality. The second focuses on two perspectives of website quality - quality-of-product and quality-of-use - and uses them to present a new theory and methodology which are important first steps towards understanding metrics and their use in quantifying website quality. Once quantified, websites can be benchmarked by evaluators and website owners for comparison with competitor sites. The thesis presents a study of five mature eCommerce websites. The study involved identifying, defining, and collecting data counts for 67 site-level criteria for each site. These counts are specific to website product quality and include criteria such as occurrences of hyperlinks and menus which underpin navigation, occurrences of activities which underpin interactivity, and counts relating to a site’s eCommerce maturity. The lack of automated count-collection tools necessitated visiting 537 HTML pages online and performing the counts manually. The thesis formulates a new approach to measuring website quality, named Metric Ratio Analysis (MRA), and demonstrates how one website quality factor - engagibility - can be quantified and used for comparing websites. It concludes by proposing a detailed theoretical and empirical validation procedure for MRA.
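
    The abstract does not give the MRA formula, so the following is a speculative sketch under one plausible reading: each metric ratio is a site-level criterion count normalised by the number of pages sampled, so that sites of different sizes can be benchmarked criterion by criterion. The criteria names and counts are invented for illustration, not data from the thesis.

```python
# Hypothetical per-site data counts: pages sampled plus a few of the
# 67 site-level criteria (navigation, interactivity, eCommerce maturity).
sites = {
    "shopA": {"pages": 120, "hyperlinks": 3400, "menus": 240, "activities": 96},
    "shopB": {"pages": 90,  "hyperlinks": 1900, "menus": 110, "activities": 150},
}

def metric_ratios(counts):
    """Normalise each criterion count by the number of pages sampled."""
    pages = counts["pages"]
    return {k: v / pages for k, v in counts.items() if k != "pages"}

# Benchmark the sites criterion by criterion on their metric ratios.
for name, counts in sites.items():
    print(name, {k: round(r, 2) for k, r in metric_ratios(counts).items()})
```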

    Predicting Document Access in Large Multimedia Repositories

    Network-accessible multimedia databases, repositories, and libraries are proliferating at a rapid rate. A crucial problem for these repositories remains timely and appropriate document access. In this article, we borrow a model from psychological research on human memory, which has long studied retrieval of memory items based on the frequency and recency of past item occurrences. Specifically, the model uses frequency and recency rates of prior document accesses to predict future document requests. The model is illustrated by analyzing the log file of document accesses to the Georgia Institute of Technology World Wide Web (WWW) repository, a large multimedia repository exhibiting high access rates. Results show that the model predicts document access rates with a reliable degree of accuracy. We describe extensions to the basic approach that combine the recency and frequency analyses and incorporate repository structure and document type. These results have implications for the formulation of descriptive user models of information access in large repositories. In addition, we sketch applications in the design of information systems and interfaces and their document-caching algorithms.
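
    A minimal sketch of this kind of model: score each document by the frequency of its past accesses and the recency of its latest one, with the log-odds of a future request rising in log frequency and falling in log recency. The coefficients and the toy log below are illustrative assumptions, not fitted values from the article.

```python
import math
from collections import defaultdict

def predict_access(log, now, b_freq=1.0, b_rec=-0.5):
    """Rank documents by predicted need from an access log of
    (timestamp, document) pairs; coefficients are illustrative."""
    accesses = defaultdict(list)
    for t, doc in log:
        accesses[doc].append(t)
    scores = {}
    for doc, times in accesses.items():
        freq = len(times)                        # how often it was accessed
        recency = max(now - max(times), 1e-9)    # time since last access
        scores[doc] = b_freq * math.log(freq) + b_rec * math.log(recency)
    return sorted(scores.items(), key=lambda x: x[1], reverse=True)

log = [(1, "/index.html"), (5, "/index.html"),
       (9, "/a.gif"), (9.5, "/index.html")]
print(predict_access(log, now=10))  # /index.html ranks first
```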
