
    Conceptual Linking: Ontology-based Open Hypermedia

    This paper describes the attempts of the COHSE project to define and deploy a Conceptual Open Hypermedia Service. The service consists of an ontological reasoning service, used to represent a sophisticated conceptual model of document terms and their relationships, and a Web-based open hypermedia link service that offers a range of link-providing facilities in a scalable and non-intrusive fashion. These are integrated to form a conceptual hypermedia system in which documents are linked via metadata describing their contents, improving the consistency and breadth of linking of WWW documents both at retrieval time (as readers browse documents) and at authoring time (as authors create them).
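
    To make the linking idea concrete, here is a minimal sketch of ontology-driven link insertion at retrieval time. The toy ontology, the terms, and the target URLs are invented for illustration; this is not the COHSE implementation.

```python
# Hypothetical sketch: annotate a retrieved document with links drawn from a
# concept ontology. The ONTOLOGY table and all URLs below are invented.
import re

ONTOLOGY = {
    "hypermedia": {
        "broader": ["hypertext"],
        "targets": ["http://example.org/docs/hypermedia"],
    },
    "ontology": {
        "broader": ["knowledge representation"],
        "targets": ["http://example.org/docs/ontology"],
    },
}

def add_links(html: str) -> str:
    """Wrap occurrences of known concept terms in anchor tags."""
    for term, entry in ONTOLOGY.items():
        href = entry["targets"][0]
        # Word-boundary, case-insensitive matching keeps insertion non-intrusive.
        html = re.sub(rf"\b({re.escape(term)})\b",
                      rf'<a href="{href}">\1</a>', html, flags=re.IGNORECASE)
    return html

print(add_links("An ontology can drive hypermedia linking."))
```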

    Data conversion and interoperability for FCA

    This paper proposes a tool that converts non-FCA-format data files into an FCA format, thereby making a wide range of public data sets, and data produced by non-FCA tools, interoperable with FCA tools. This will also offer the power of FCA to a wider community of data analysts. A repository of converted data is also proposed, as a consistent resource of public data for analysis and for the testing, evaluation and comparison of FCA tools and algorithms.
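
    As an illustration of the kind of conversion proposed, the sketch below turns a boolean CSV table into a formal context in the Burmeister .cxt format read by many FCA tools. The CSV layout assumed here (objects in the first column, attributes in the header row) is an illustrative assumption, not the paper's actual tool.

```python
# Hypothetical sketch: convert a boolean CSV table into a Burmeister .cxt
# formal context. Assumes objects in column 1 and attributes in the header.
import csv

def csv_to_cxt(csv_path: str, cxt_path: str) -> None:
    with open(csv_path, newline="") as f:
        rows = list(csv.reader(f))
    attributes = rows[0][1:]            # header row: first cell labels objects
    objects = [r[0] for r in rows[1:]]
    incidence = [["X" if cell.strip() in ("1", "x", "X") else "."
                  for cell in r[1:]] for r in rows[1:]]
    with open(cxt_path, "w") as out:
        out.write("B\n\n")              # .cxt magic line, then a blank line
        out.write(f"{len(objects)}\n{len(attributes)}\n\n")
        out.write("\n".join(objects) + "\n")
        out.write("\n".join(attributes) + "\n")
        out.write("\n".join("".join(row) for row in incidence) + "\n")

# Usage: csv_to_cxt("animals.csv", "animals.cxt")
```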

    The Partial Evaluation Approach to Information Personalization

    Information personalization refers to the automatic adjustment of information content, structure, and presentation tailored to an individual user. By reducing information overload and customizing information access, personalization systems have emerged as an important segment of the Internet economy. This paper presents a systematic modeling methodology - PIPE ('Personalization is Partial Evaluation') - for personalization. Personalization systems are designed and implemented in PIPE by modeling an information-seeking interaction in a programmatic representation. The representation supports the description of information-seeking activities as partial information and their subsequent realization by partial evaluation, a technique for specializing programs. We describe the modeling methodology at a conceptual level and outline representational choices. We present two application case studies that use PIPE for personalizing web sites and describe how PIPE suggests a novel evaluation criterion for information system designs. Finally, we mention several fundamental implications of adopting the PIPE model for personalization and when it is (and is not) applicable.
    Comment: Comprehensive overview of the PIPE model for personalization
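
    The core idea lends itself to a short sketch: represent the information space as a program that branches on user attributes, then specialize it by evaluating away every branch whose input is already known, leaving a personalized residual structure. The site tree and attribute names below are invented for illustration; PIPE itself applies general partial evaluators to programmatic representations.

```python
# Hypothetical sketch of partial-evaluation-style personalization. A site is a
# tree of ("branch", attribute, choices) and ("page", url) nodes; all names
# and pages below are invented.
SITE = ("branch", "expertise", {
    "novice": ("page", "tutorial.html"),
    "expert": ("branch", "topic", {
        "fca": ("page", "fca-advanced.html"),
        "hypermedia": ("page", "oh-advanced.html"),
    }),
})

def specialize(node, known):
    """Evaluate away branches on known attributes; keep the rest symbolic."""
    if node[0] == "page":
        return node
    _, attr, choices = node
    if attr in known:                    # static input: evaluate the branch now
        return specialize(choices[known[attr]], known)
    # dynamic input: keep the branch, but specialize each alternative
    return ("branch", attr, {k: specialize(v, known) for k, v in choices.items()})

print(specialize(SITE, {"expertise": "expert"}))
# -> ('branch', 'topic', {...}): the novice path has been evaluated away
```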

    Multifractal Network Generator

    We introduce a new approach to constructing networks with realistic features. Despite its conceptual simplicity (it has only two parameters), our method is capable of generating a wide variety of network types with prescribed statistical properties, e.g., with degree or clustering-coefficient distributions of very different forms. In turn, these graphs can be used to test hypotheses or to serve as models of actual data. The method is based on a mapping between suitably chosen singular measures defined on the unit square and sparse infinite networks. Such a mapping has great potential for obtaining graph-theoretical results for a variety of network topologies. The main idea of our approach is to take the infinite limit of the singular measure and the size of the corresponding graph simultaneously. A unique feature of this construction is that the complexity of the generated network increases with its size. We present analytic expressions, derived from the parameters of the initial generating measure (which is then iterated), for such major characteristics of graphs as their degree, clustering-coefficient and assortativity-coefficient distributions. The optimal parameters of the generating measure are determined by a simple simulated-annealing process. Thus, the present work provides a tool enabling researchers from a variety of fields (such as biology, computer science, or complex systems) to create a versatile model of their network data.
    Comment: Preprint. Final version appeared in PNAS
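
    The construction can be sketched in a few lines: choose a generating measure on the unit square (interval lengths l and link probabilities p), iterate it k times, draw node coordinates uniformly, and link each pair with the iterated probability at their coordinates. The parameter values below are arbitrary, and the simulated-annealing fit of the measure is omitted.

```python
# Sketch of a multifractal network generator; parameter values are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

l = np.array([0.4, 0.6])                  # interval lengths; must sum to 1
p = np.array([[0.9, 0.3],                 # symmetric link-probability measure
              [0.3, 0.6]])

def box_indices(x: float, k: int) -> list:
    """Indices of the nested generating intervals containing x, k levels deep."""
    idx, lo, hi = [], 0.0, 1.0
    for _ in range(k):
        bounds = lo + (hi - lo) * np.cumsum(l)
        i = min(int(np.searchsorted(bounds, x)), len(l) - 1)  # guard rounding
        idx.append(i)
        lo, hi = (bounds[i - 1] if i > 0 else lo), bounds[i]
    return idx

def link_prob(x: float, y: float, k: int) -> float:
    """Iterated measure at (x, y): product of p over the k nested box pairs."""
    return float(np.prod([p[i, j]
                          for i, j in zip(box_indices(x, k), box_indices(y, k))]))

def generate(n: int, k: int) -> np.ndarray:
    """Sample n node coordinates and link each pair with the iterated probability."""
    xs = rng.random(n)
    adj = np.zeros((n, n), dtype=int)
    for a in range(n):
        for b in range(a + 1, n):
            if rng.random() < link_prob(xs[a], xs[b], k):
                adj[a, b] = adj[b, a] = 1
    return adj

A = generate(200, k=3)
print("nodes:", len(A), "edges:", A.sum() // 2)
```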

    K-core decomposition of Internet graphs: hierarchies, self-similarity and measurement biases

    We consider the k-core decomposition of network models and of Internet graphs at the autonomous system (AS) level. The k-core analysis makes it possible to characterize networks beyond the degree distribution and to uncover structural properties and hierarchies due to the specific architecture of the system. We compare the k-core structure obtained for AS graphs with those of several network models and discuss the differences and similarities with the real Internet architecture. The presence of biases and the incompleteness of the real maps are discussed, and their effect on the k-core analysis is assessed with numerical experiments simulating biased exploration on a wide range of network models. We find that the k-core analysis provides an interesting characterization of the fluctuations and incompleteness of maps, as well as information that helps discriminate the original underlying structure.
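
    The decomposition itself is a simple peeling procedure, sketched below on a toy edge list: repeatedly remove the node of lowest remaining degree; a node's core number is the largest k for which it still belongs to a subgraph of minimum degree k.

```python
# Sketch of k-core peeling: compute the core number of every node.
# The toy edge list at the bottom is illustrative only.
import heapq
from collections import defaultdict

def core_numbers(edges):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    deg = {u: len(nbrs) for u, nbrs in adj.items()}
    heap = [(d, u) for u, d in deg.items()]
    heapq.heapify(heap)
    core, removed, k = {}, set(), 0
    while heap:
        d, u = heapq.heappop(heap)
        if u in removed or d != deg[u]:
            continue                      # stale heap entry, skip it
        k = max(k, d)                     # coreness never decreases in peel order
        core[u] = k
        removed.add(u)
        for v in adj[u]:
            if v not in removed:
                deg[v] -= 1
                heapq.heappush(heap, (deg[v], v))
    return core

print(core_numbers([(1, 2), (2, 3), (1, 3), (3, 4)]))
# -> {4: 1, 1: 2, 2: 2, 3: 2}: the triangle is a 2-core, node 4 hangs off it
```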