
    Web Mining-Based Objective Metrics for Measuring Website Navigability

    Web site design is critical to the success of electronic commerce and digital government. Effective design requires appropriate evaluation methods and measurement metrics. The current research examines Web site navigability, a fundamental structural aspect of Web site design. We define Web site navigability as the extent to which a visitor can use a Web site’s hyperlink structure to locate target contents successfully in an easy and efficient manner. We propose a systematic Web site navigability evaluation method built on Web mining techniques. To complement the subjective self-reported metrics commonly used by previous research, we develop three objective metrics for measuring Web site navigability on the basis of the Law of Surfing. We illustrate the use of the proposed methods and measurement metrics with two large Web sites.
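    The abstract does not spell out the three metrics, but the Law of Surfing it builds on models the number of pages a visitor views in a session with an inverse Gaussian distribution. Below is a minimal, hypothetical sketch of one objective measurement in that spirit, assuming per-session click depths extracted from server logs and using SciPy's invgauss; it illustrates the underlying idea, not the paper's actual metrics.

```python
import numpy as np
from scipy import stats

# Hypothetical click-depth data: number of pages viewed per visitor session,
# as might be extracted from Web server logs by the mining step.
session_depths = np.array([1, 2, 2, 3, 3, 3, 4, 5, 5, 7, 8, 12, 15])

# The Law of Surfing models surfing depth with an inverse Gaussian distribution;
# fit it to the observed depths (location fixed at 0 for a one-sided fit).
mu, loc, scale = stats.invgauss.fit(session_depths, floc=0)

# One possible objective navigability proxy: the expected surfing depth under
# the fitted model. Deeper expected surfing before abandonment suggests the
# hyperlink structure sustains navigation toward target content.
expected_depth = stats.invgauss.mean(mu, loc=loc, scale=scale)
print(f"fitted mu={mu:.3f}, expected surfing depth={expected_depth:.2f} pages")
```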

    Detecting and Preventing SQL Injection and XSS Attack using Web Security Mechanisms

    In this paper we propose a system prototype tool to evaluate web application security mechanisms. The methodology is based on the idea that injecting realistic vulnerabilities into a web application and attacking them automatically can be used to support the assessment of existing security mechanisms and tools in custom setup scenarios. To provide true-to-life results, the proposed vulnerability and attack injection methodology relies on the study of a large number of vulnerabilities in real web applications. We implement a concrete Vulnerability & Attack Injector Tool (VAIT) to remove the vulnerabilities and secure web applications, and to prevent attacks such as SQL injection (SQLi), cross-site scripting (XSS), brute force attacks, shoulder surfing attacks, social attacks, and dictionary attacks.
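    The VAIT tool itself is not described here in enough detail to reproduce, but the two headline attack classes it targets have standard defenses. The sketch below (hypothetical table and function names, Python standard library only) illustrates parameterized queries against SQL injection and output escaping against cross-site scripting; it is an illustration of those defenses, not the paper's injection methodology.

```python
import html
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # Placeholders keep user input out of the SQL parse tree, so a payload
    # like "' OR '1'='1" is treated as data rather than as query syntax.
    cur = conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    )
    return cur.fetchone()

def render_comment(comment: str) -> str:
    # Escaping user-supplied text before embedding it in a page prevents an
    # injected <script> tag from executing in the visitor's browser.
    return f"<p>{html.escape(comment)}</p>"

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.execute("INSERT INTO users (username) VALUES ('alice')")
    print(find_user(conn, "' OR '1'='1"))                    # None: injection neutralized
    print(render_comment("<script>alert('xss')</script>"))   # tags rendered inert
```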

    The Best of the Best: Ranking and Rating Digital Reference Resources

    What makes a Web site the best? There are myriad answers. What makes a Web site the best for reference? Even though the question is more specific, there are still many answers. A high-quality site can be hard to define in generic terms. In describing the process of selecting the top reference titles for the year, Lawrence similarly asked, "As for the pertinent question, what constitutes an outstanding reference title? Ask ten people, or librarians anyway, and you will get as many answers." It has been said, in fact, that quality is like art—it's hard to define, but you know it when you see it. Increasing attempts are being made to provide evaluated, high-quality Web surfing. Some of this is done by meta-site creation. These resources imply that a site is "good" if it's in the guide. Many examples of these sorts of sites, created by and for libraries and their constituents, exist. However, some resources go beyond simple listing and provide actual ranking, rating, and evaluation of sites, which can lean toward either the subjective or the scientific and are hard to do well without selection and ranking criteria. This column examines various examples of Web site rankings or ratings and attempts to enumerate the vast possibilities of criteria for evaluation.

    Zipf's Law for web surfers

    One of the main activities of Web users, known as 'surfing', is to follow links. Lengthy navigation often leads to disorientation when users lose track of the context in which they are navigating and are unsure how to proceed in terms of the goal of their original query. Studying navigation patterns of Web users is thus important, since it can lead us to a better understanding of the problems users face when they are surfing. We derive Zipf's rank frequency law (i.e., an inverse power law) from an absorbing Markov chain model of surfers' behavior, assuming that less probable navigation trails are, on average, longer than more probable ones. In our model the probability of a trail is interpreted as the relevance (or 'value') of the trail. We apply our model to two scenarios: in the first, the probability of a user terminating the navigation session is independent of the number of links he has followed so far, and in the second, the probability of a user terminating the navigation session increases by a constant each time the user follows a link. We analyze these scenarios using two experimental data sets, showing that, although the first scenario is only a rough approximation of surfers' behavior, the data is consistent with the second scenario and can thus provide an explanation of surfers' behavior.
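    A toy simulation of the first scenario (a fixed termination probability at every step) illustrates the rank-frequency pattern without the authors' analytical derivation: trails over a page with a fixed number of outgoing links are generated at random, then counted and ranked. The constants below are assumptions chosen only to make the pattern visible.

```python
import random
from collections import Counter

random.seed(0)
STOP_PROB = 0.4      # assumed constant probability of ending the session per step
N_LINKS = 3          # assumed out-degree of every page
N_SURFERS = 200_000  # number of simulated navigation sessions

def random_trail() -> tuple:
    """One navigation trail: keep following a uniformly chosen link until stopping."""
    trail = []
    while random.random() > STOP_PROB:
        trail.append(random.randrange(N_LINKS))
    return tuple(trail)

# Count how often each distinct trail occurs and rank trails by frequency.
counts = Counter(random_trail() for _ in range(N_SURFERS))
ranked = counts.most_common()
for rank in (1, 10, 100, 1000):
    if rank <= len(ranked):
        print(f"rank {rank:>4}: frequency {ranked[rank - 1][1]}")
```

    Plotting rank against frequency on log-log axes shows the roughly straight-line (inverse power-law) relationship the abstract refers to, since the less probable trails are exactly the longer ones.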

    Shuffling a Stacked Deck: The Case for Partially Randomized Ranking of Search Engine Results

    In-degree, PageRank, number of visits and other measures of Web page popularity significantly influence the ranking of search results by modern search engines. The assumption is that popularity is closely correlated with quality, a more elusive concept that is difficult to measure directly. Unfortunately, the correlation between popularity and quality is very weak for newly-created pages that have yet to receive many visits and/or in-links. Worse, since discovery of new content is largely done by querying search engines, and because users usually focus their attention on the top few results, newly-created but high-quality pages are effectively "shut out," and it can take a very long time before they become popular. We propose a simple and elegant solution to this problem: the introduction of a controlled amount of randomness into search result ranking methods. Doing so offers new pages a chance to prove their worth, although clearly using too much randomness will degrade result quality and annul any benefits achieved. Hence there is a tradeoff between exploration to estimate the quality of new pages and exploitation of pages already known to be of high quality. We study this tradeoff both analytically and via simulation, in the context of an economic objective function based on aggregate result quality amortized over time. We show that a modest amount of randomness leads to improved search results.
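    The abstract does not specify how the randomness is injected; one plausible realization, sketched below with hypothetical names, is an epsilon-greedy style mix in which each result slot is usually filled by the next most popular page (exploitation) but occasionally by a page drawn uniformly from the remaining pool (exploration), giving new pages a chance to accumulate visits.

```python
import random

def randomized_ranking(pages_by_popularity, epsilon=0.1, k=10, rng=random):
    """Return k results, mixing a controlled amount of randomness into a
    popularity-ordered list (a sketch, not the paper's exact scheme)."""
    remaining = list(pages_by_popularity)   # assumed sorted, most popular first
    results = []
    while remaining and len(results) < k:
        if rng.random() < epsilon:
            # Explore: promote a uniformly chosen page from the remaining pool.
            pick = remaining.pop(rng.randrange(len(remaining)))
        else:
            # Exploit: take the most popular page not yet shown.
            pick = remaining.pop(0)
        results.append(pick)
    return results

if __name__ == "__main__":
    pages = [f"page{i:03d}" for i in range(100)]   # hypothetical popularity order
    print(randomized_ranking(pages, epsilon=0.2, k=10))
```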

    The egalitarian effect of search engines

    Search engines have become key media for our scientific, economic, and social activities by enabling people to access information on the Web in spite of its size and complexity. On the down side, search engines bias the traffic of users according to their page-ranking strategies, and some have argued that they create a vicious cycle that amplifies the dominance of established and already popular sites. We show that, contrary to these prior claims and our own intuition, the use of search engines actually has an egalitarian effect. We reconcile theoretical arguments with empirical evidence showing that the combination of retrieval by search engines and search behavior by users mitigates the attraction of popular pages, directing more traffic toward less popular sites, even in comparison to what would be expected from users randomly surfing the Web. (Comment: 9 pages, 8 figures, 2 appendices. The final version of this e-print has been published in Proc. Natl. Acad. Sci. USA 103(34), 12684-12689 (2006), http://www.pnas.org/cgi/content/abstract/103/34/1268)

    Spartan Daily, September 9, 2003

    Volume 121, Issue 8

    The Local TV News Experience: How to Win Viewers by Focusing on Engagement

    Offers television stations insights to help them engage their audiences, stimulate strategic thinking about their position and role in the market, and connect with viewers in ways that could lead to improved civic involvement.