
    The PageRank Axioms

    This talk introduces the first graph-theoretic, ordinal representation theorem for the PageRank algorithm, bridging the gap between page ranking algorithms and the formal theory of social choice.
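
    As a concrete reference point for what is being axiomatized, the sketch below runs the standard PageRank power iteration in Python. The graph encoding, damping factor, and tolerance are illustrative assumptions, not details from the talk; the ordinal theorem concerns the ranking the scores induce, not the scores themselves.

        # Minimal PageRank power iteration on a directed graph given as an
        # adjacency list {node: [out-neighbors]}. Parameters are illustrative.
        def pagerank(out_links, damping=0.85, tol=1e-9, max_iter=100):
            nodes = list(out_links)
            n = len(nodes)
            rank = {v: 1.0 / n for v in nodes}
            for _ in range(max_iter):
                new = {v: (1.0 - damping) / n for v in nodes}
                for v in nodes:
                    targets = out_links[v]
                    if targets:  # distribute rank over outgoing links
                        share = damping * rank[v] / len(targets)
                        for w in targets:
                            new[w] += share
                    else:        # dangling node: spread rank uniformly
                        for w in nodes:
                            new[w] += damping * rank[v] / n
                if max(abs(new[v] - rank[v]) for v in nodes) < tol:
                    return new
                rank = new
            return rank

        scores = pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]})
        ranking = sorted(scores, key=scores.get, reverse=True)  # the ordinal object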

    Ranking authors using fractional counting of citations: an axiomatic approach

    This paper analyzes, from an axiomatic point of view, a recent proposal for counting citations: the value of a citation given by a paper is inversely proportional to the total number of papers it cites. This way of fractionally counting citations was suggested as a possible way to normalize citation counts between fields of research with different citation cultures, and it belongs to the “citing-side” approach to normalization. We focus on the properties characterizing this way of counting citations when it comes to ranking authors. Our analysis is conducted within a formal framework that is more complex, but also more realistic, than the one usually adopted in most axiomatic analyses of this kind.
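
    The counting rule stated above is concrete enough for a small sketch: each citation given by a paper is worth 1 / (number of references in the citing paper), and an author's score aggregates the fractional citations received by their papers. The data shapes, and the choice to give each co-author full credit, are illustrative assumptions; the paper treats such choices axiomatically.

        # Fractional ("citing-side") counting: a citation from paper p is worth
        # 1 / len(references[p]). Author scores sum the fractional citations
        # received by their papers; each co-author gets full credit here.
        def fractional_scores(references, authors_of):
            received = {}  # paper -> fractional citations received
            for citing, refs in references.items():
                if not refs:
                    continue
                weight = 1.0 / len(refs)
                for cited in refs:
                    received[cited] = received.get(cited, 0.0) + weight
            scores = {}
            for paper, credit in received.items():
                for author in authors_of.get(paper, []):
                    scores[author] = scores.get(author, 0.0) + credit
            return scores

        refs = {"p1": ["p2", "p3"], "p2": ["p3"]}   # each citation by p1 is worth 0.5
        auth = {"p2": ["alice"], "p3": ["alice", "bob"]}
        print(fractional_scores(refs, auth))        # alice: 2.0, bob: 1.5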

    An Axiomatic Approach to Routing

    Information delivery in a network of agents is a key issue for large, complex systems that must deliver information in a predictable, efficient manner. The delivery of information in such multi-agent systems is typically implemented through routing protocols that determine how information flows through the network. Different routing protocols exist, each with its own benefits, but it is generally unclear which properties can be successfully combined within a given algorithm. We approach this problem from the axiomatic point of view, i.e., we try to establish which properties we would want such a system to satisfy, and examine the different properties which uniquely define common routing algorithms used today. We examine several desirable properties, such as robustness, which ensures that adding nodes and edges does not change the routing in radical, unpredictable ways; and properties that depend on the operating environment, such as an "economic model", where nodes choose their paths based on the cost they are charged to pass information to the next node. We proceed to fully characterize minimal spanning tree, shortest path, and weakest link routing algorithms, showing a tight set of axioms for each.
    (In Proceedings TARK 2015, arXiv:1606.0729)
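
    To make two of the characterized rules concrete, the sketch below implements shortest-path routing and a bottleneck ("widest path") reading of weakest-link routing on a weighted graph. Reading "weakest link" as maximizing the minimum edge weight on a path is an assumption of this sketch, not the paper's formal definition.

        import heapq

        # Shortest-path routing minimizes total edge weight; "weakest link" is
        # read here as bottleneck routing, maximizing the smallest edge weight
        # on the chosen path. Graphs are dicts {u: {v: weight}}.
        def shortest_path(graph, src, dst):
            heap, seen = [(0, src, [src])], set()
            while heap:
                cost, u, path = heapq.heappop(heap)
                if u == dst:
                    return cost, path
                if u in seen:
                    continue
                seen.add(u)
                for v, w in graph[u].items():
                    if v not in seen:
                        heapq.heappush(heap, (cost + w, v, path + [v]))

        def widest_path(graph, src, dst):
            # store negated bottlenecks so the min-heap pops the widest path first
            heap, seen = [(float("-inf"), src, [src])], set()
            while heap:
                neg_bw, u, path = heapq.heappop(heap)
                if u == dst:
                    return -neg_bw, path
                if u in seen:
                    continue
                seen.add(u)
                for v, w in graph[u].items():
                    if v not in seen:
                        heapq.heappush(heap, (max(neg_bw, -w), v, path + [v]))

        g = {"a": {"b": 1, "c": 4}, "b": {"a": 1, "c": 1}, "c": {}}
        print(shortest_path(g, "a", "c"))  # (2, ['a', 'b', 'c'])
        print(widest_path(g, "a", "c"))    # (4, ['a', 'c'])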

    A Clustering and Associativity Analysis Based Probabilistic Method for Web Page Prediction

    Today, information and resources are available online through websites and web pages. To obtain instant information about any product, institution, or organization, users can access the web pages available online. In this work, a three-stage model is presented for more intelligent web page prediction. The method combines clustering and associativity analysis with rule formulation to improve prediction results. In the first stage, C-Means clustering is applied to identify sessions with high and low usage of web pages. Once the clustering is done, a rule is defined to identify sessions whose page occurrence exceeds the average. In the final stage, a neuro-fuzzy model is applied to perform the web page prediction. The results show that the model provides an effective prediction of web page visits.
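
    The first two stages are simple enough to sketch. Below, a minimal NumPy fuzzy C-Means (assuming "CMeans" denotes fuzzy c-means; the parameters and data shapes are illustrative) splits session vectors into high- and low-usage clusters, and the rule stage keeps sessions whose total page occurrence exceeds the average. The neuro-fuzzy prediction stage is omitted.

        import numpy as np

        # Stage 1: fuzzy C-Means on session vectors (rows = sessions, columns =
        # per-page visit counts). Stage 2: keep sessions whose total page
        # occurrence exceeds the average.
        def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
            rng = np.random.default_rng(seed)
            U = rng.random((len(X), c))
            U /= U.sum(axis=1, keepdims=True)           # fuzzy memberships
            for _ in range(iters):
                W = U ** m
                centers = (W.T @ X) / W.sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
                U = 1.0 / d ** (2.0 / (m - 1.0))
                U /= U.sum(axis=1, keepdims=True)
            return U, centers

        sessions = np.array([[5, 0, 3], [4, 1, 2], [0, 1, 0], [1, 0, 1]], dtype=float)
        U, centers = fuzzy_cmeans(sessions)
        usage_cluster = U.argmax(axis=1)                # high- vs low-usage sessions
        above_avg = sessions.sum(axis=1) > sessions.sum(axis=1).mean()
        candidates = sessions[above_avg]                # input to the prediction stage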

    Measuring Tie Strength in Implicit Social Networks

    Given a set of people and the events they attend, we address the problem of measuring connectedness, or tie strength, between each pair of persons, since attendance at mutual events induces an implicit social network between people. We take an axiomatic approach to this problem. Starting from a list of axioms that a measure of tie strength must satisfy, we characterize functions that satisfy all the axioms and show that there is a range of measures satisfying this characterization. A measure of tie strength induces a ranking on the edges (and on the set of neighbors of every person). We show that for applications where the ranking, and not the absolute value of the tie strength, is what matters about the measure, the axioms are equivalent to a natural partial order. Moreover, to settle on a particular measure, we must make a non-obvious decision about extending this partial order to a total order, a decision best left to particular applications. We classify measures found in prior literature according to the axioms they satisfy. In our experiments, we measure tie strength and the coverage of our axioms on several datasets. For each dataset, we also use the partial order to bound the maximum Kendall's Tau divergence (which measures the number of pairwise disagreements between two ranked lists) between all measures that satisfy the axioms. This tells us whether a dataset is well behaved, so that we need not worry about which measure to choose, or whether we must be careful about the exact choice of measure.
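
    For a concrete sense of how different admissible measures can induce different edge rankings, the sketch below scores ties from a small person-event log with two illustrative measures (neither is claimed to be one the paper characterizes) and compares the induced rankings with Kendall's tau.

        from itertools import combinations
        from scipy.stats import kendalltau

        # A toy person-event log; attendance at mutual events induces an
        # implicit social network between the attendees.
        events = {"e1": {"ann", "bob", "cat"}, "e2": {"ann", "bob"}, "e3": {"bob", "cat"}}

        def common_events(u, v):
            return sum(1 for ps in events.values() if u in ps and v in ps)

        def small_event_bonus(u, v):
            # shared small events count more: weight each by 1 / (|event| - 1)
            return sum(1.0 / (len(ps) - 1) for ps in events.values()
                       if u in ps and v in ps and len(ps) > 1)

        people = sorted(set().union(*events.values()))
        pairs = list(combinations(people, 2))
        r1 = [common_events(u, v) for u, v in pairs]
        r2 = [small_event_bonus(u, v) for u, v in pairs]
        tau, _ = kendalltau(r1, r2)   # agreement between the two induced rankings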