7,169 research outputs found

    A purely logic-based approach to approximate matching of Semantic Web Services

    Most current approaches to matchmaking of semantic Web services employ hybrid strategies that combine logic- and non-logic-based similarity measures (or dispense with logic-based similarity altogether). This is mainly because pure logic-based matchers achieve good precision but very low recall. We present a purely logic-based matcher implementation based on approximate subsumption and extend this approach to take additional information about the taxonomy of the background ontology into account. Our aim is to provide a purely logic-based matchmaker implementation that also achieves reasonable recall without a large impact on precision.
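
    To illustrate the general idea (a sketch of mine, not the authors' implementation), the following Python snippet grades the match between a requested and an offered concept using only subsumption information from a toy taxonomy; the concept names and numeric degrees are hypothetical:

```python
# Minimal sketch: relax strict subsumption into graded match degrees,
# using only the taxonomy of a background ontology.

def ancestors(taxonomy, concept):
    """Collect all transitive ancestors of a concept.
    taxonomy maps each concept to its list of direct parents."""
    seen, stack = set(), [concept]
    while stack:
        for parent in taxonomy.get(stack.pop(), []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def match_degree(taxonomy, requested, offered):
    """Grade how well an offered concept matches a requested one."""
    if requested == offered:
        return 1.0                          # exact match
    if requested in ancestors(taxonomy, offered):
        return 0.8                          # offered is subsumed by requested
    if offered in ancestors(taxonomy, requested):
        return 0.6                          # offered subsumes requested
    # approximate match: taxonomic overlap of the two ancestor sets
    a, b = ancestors(taxonomy, requested), ancestors(taxonomy, offered)
    return 0.4 * len(a & b) / len(a | b) if (a | b) else 0.0

# Hypothetical toy taxonomy: child -> list of direct parents.
taxonomy = {"Sedan": ["Car"], "SUV": ["Car"], "Car": ["Vehicle"], "Bike": ["Vehicle"]}
print(match_degree(taxonomy, "Car", "Sedan"))   # 0.8: Sedan is subsumed by Car
print(match_degree(taxonomy, "Sedan", "SUV"))   # 0.4: siblings, shared ancestors only
```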

    Term-Specific Eigenvector-Centrality in Multi-Relation Networks

    Fuzzy matching and ranking are two information retrieval techniques widely used in web search. Their application to structured data, however, remains an open problem. This article investigates how eigenvector-centrality can be used for approximate matching in multi-relation graphs, that is, graphs in which connections of many different types may exist. Based on an extension of the PageRank matrix, we compute eigenvectors representing the distribution of a term after propagating term weights between related data items. The result is an index that takes the document structure into account and can be used with standard document retrieval techniques. As the scheme takes the shape of an index transformation, all necessary calculations are performed at index time.
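
    The core mechanism can be sketched as follows (hypothetical graph, weights, and damping factor; the article's actual matrix extension may differ): a term's weights are propagated through the relation graph with a personalized-PageRank-style power iteration, and the resulting vector becomes that term's entry in the transformed index.

```python
import numpy as np

def term_centrality(adjacency, term_weights, damping=0.85, iters=100):
    """Distribute a term's weights over related items via power iteration.

    adjacency[i, j] > 0 means item j relates/links to item i (all relation
    types already combined into one nonnegative matrix).
    term_weights -- initial per-item weight of the term (teleport vector).
    """
    col_sums = adjacency.sum(axis=0)
    P = adjacency / np.where(col_sums > 0, col_sums, 1.0)  # column-normalized
    t = term_weights / term_weights.sum()
    x = t.copy()
    for _ in range(iters):
        x = damping * (P @ x) + (1 - damping) * t
    return x  # per-item score of the term after propagation

# Toy example: item 0 mentions the term and links to items 1 and 2,
# so part of its term weight flows to the related items.
A = np.array([[0., 0., 0.],
              [1., 0., 0.],
              [1., 0., 0.]])
w = np.array([1.0, 0.0, 0.0])
print(term_centrality(A, w))
```

    Because the propagation is a linear transformation of the index, it can be precomputed per term at indexing time, which is what makes the scheme compatible with standard retrieval machinery.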

    Fuzzy Content Mining for Targeted Advertisement

    Content-targeted advertising systems are becoming an increasingly important funding source for free web services. Highly efficient content analysis is the pivotal component of such a system. This project aims to establish a content analysis engine, based on fuzzy logic, that can automatically analyze real user-posted Web documents such as blog entries. Based on the analysis result, the system matches and retrieves the most appropriate Web advertisements. The focus and complexity lie in how to better estimate and acquire the keywords that represent a given Web document. Fuzzy Web mining concepts are applied to jointly consider multiple factors of Web content. A Fuzzy Ranking System is built from fuzzy (and some crisp) rules, fuzzy sets, and membership functions to select the best candidate keywords. Once it has obtained the keywords, the system retrieves corresponding advertisements from certain providers through Web services, much as one would retrieve a product list from Amazon.com. In 87% of the cases, the results of this system match the accuracy of the Google AdWords system. Furthermore, this expandable system provides a solid base for further research and development on this topic.
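
    The following Python sketch illustrates the general mechanism (the membership functions, rules, and numbers are invented for illustration, not the project's actual rule base): candidate keywords are scored by fuzzy rules over their normalized frequency and first-occurrence position.

```python
# Illustrative fuzzy keyword ranking: AND = min, rule aggregation = max.

def tri(x, a, b, c):
    """Triangular membership function peaking at b on the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_score(freq, pos):
    """freq, pos in [0, 1]; pos = relative position of first occurrence."""
    freq_high = tri(freq, 0.3, 1.0, 1.7)   # "frequency is high"
    pos_early = tri(pos, -0.7, 0.0, 0.5)   # "appears early in the document"
    r1 = min(freq_high, pos_early)         # IF freq high AND early THEN strong
    r2 = 0.5 * freq_high                   # IF freq high THEN moderate
    return max(r1, r2)                     # aggregate the rule outputs

# Hypothetical candidates: word -> (normalized frequency, first position).
candidates = {"fuzzy": (0.9, 0.05), "ranking": (0.6, 0.2), "web": (0.4, 0.7)}
ranked = sorted(candidates, key=lambda w: fuzzy_score(*candidates[w]), reverse=True)
print(ranked)  # ['fuzzy', 'ranking', 'web']
```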

    A platform for discovering and sharing confidential ballistic crime data.

    Criminal investigations generate large volumes of complex data that detectives have to analyse and understand. These data tend to be "siloed" within individual jurisdictions, and re-using them in other investigations can be difficult. Investigations into trans-national crimes are hampered by the problem of discovering relevant data held by agencies in other countries and of sharing those data. Gun crimes are one major type of incident that showcases this: guns are easily moved across borders and used in multiple crimes, but discovering that a weapon was used elsewhere in Europe is difficult. In this paper we report on the Odyssey Project, an EU-funded initiative to mine, manipulate and share data about weapons and crimes. The project demonstrates the automatic combining of data from disparate repositories for cross-correlation and automated analysis. The data arrive from different cultural domains, conform to multiple reference models, and come from both real-time data feeds and historical databases.

    Flexible provisioning of Web service workflows

    Web services promise to revolutionise the way computational resources and business processes are offered and invoked in open, distributed systems, such as the Internet. These services are described using machine-readable meta-data, which enables consumer applications to automatically discover and provision suitable services for their workflows at run-time. However, current approaches have typically assumed that service descriptions are accurate and deterministic, and so have neglected to account for the fact that services in these open systems are inherently unreliable and uncertain. Specifically, network failures, software bugs and competition for services may regularly lead to execution delays or even service failures. To address this problem, the process of provisioning services needs to be performed more flexibly than has so far been considered, in order to deal proactively with failures and to recover workflows that have partially failed. To this end, we devise and present a heuristic strategy that varies the provisioning of services according to their predicted performance. Using simulation, we then benchmark our algorithm and show that it leads to a 700% improvement in average utility, while successfully completing up to eight times as many workflows as approaches that do not consider service failures.
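
    To illustrate the flavour of such a strategy (a minimal sketch of mine with made-up numbers and a simplified utility model, not the authors' algorithm), the snippet below picks, for a single workflow task, the level of redundant provisioning that maximizes expected utility given a provider's predicted success rate and invocation cost:

```python
# Redundant provisioning as an expected-utility trade-off: more parallel
# providers raise the chance that at least one succeeds, but each costs.

def expected_utility(p_success, n, task_value, cost_per_provider):
    """Expected utility of invoking n redundant providers for one task."""
    p_any = 1 - (1 - p_success) ** n        # P(at least one succeeds)
    return p_any * task_value - n * cost_per_provider

def provision(p_success, task_value, cost_per_provider, max_n=10):
    """Pick the redundancy level n with the highest expected utility."""
    return max(range(1, max_n + 1),
               key=lambda n: expected_utility(p_success, n, task_value,
                                              cost_per_provider))

# Unreliable provider (40% predicted success): redundancy pays off (n = 5).
print(provision(p_success=0.4, task_value=100, cost_per_provider=5))
# Reliable provider (95% predicted success): one invocation suffices (n = 1).
print(provision(p_success=0.95, task_value=100, cost_per_provider=5))
```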

    Who Cares about Axiomatization? Representation, Invariance, and Formal Ontologies

    The philosophy of science of Patrick Suppes is centered on two important notions that are part of the title of his recent book (Suppes 2002): Representation and Invariance. Representation is important because when we embrace a theory we implicitly choose a way to represent the phenomenon we are studying. Invariance is important because invariants are the only things that are constant in a theory, and so, in a way, they give the “objective” meaning of that theory. Every scientific theory gives a representation of a class of structures and studies the invariant properties holding in that class of structures. In Suppes’ view, the best way to define this class of structures is via axiomatization. This is because a class of structures is given by a definition, and that same definition establishes which properties a single structure must possess in order to belong to the class. These properties correspond to the axioms of a logical theory. In Suppes’ view, the best way to characterize a scientific structure is by giving a representation theorem for its models and singling out the invariants in the structure. Thus, we can say that the philosophy of science of Patrick Suppes consists in the application of the axiomatic method to scientific disciplines. What I want to argue in this paper is that this application of the axiomatic method is also at the basis of a new approach that is being increasingly applied to the study of computer science and information systems, namely the approach of formal ontologies. The main task of an ontology is that of making explicit the conceptual structure underlying a certain domain. By “making explicit the conceptual structure” we mean singling out the most basic entities populating the domain and writing axioms expressing the main properties of these primitives and the relations holding among them. So, in both cases, axiomatization is the main tool used to characterize the object of inquiry, whether that object is a scientific theory (in Suppes’ approach) or an information system (in formal ontologies). In the following section I present Patrick Suppes’ view of the philosophy of science and the axiomatic method; in Section 3 I survey the theoretical issues underlying the work being done in formal ontologies; and in Section 4 I compare the two approaches and explore their similarities and differences.
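
    To make concrete what “writing axioms expressing the main properties of these primitives” amounts to, here is a standard example from formal ontology (my illustration, not drawn from the paper): the core axioms of ground mereology, which characterize the primitive parthood relation found in many foundational ontologies.

```latex
% Example (not from the paper): axiomatizing the primitive relation
% P(x, y), read "x is part of y".
\forall x \; P(x, x)                                                   % reflexivity
\forall x \forall y \; \bigl(P(x, y) \land P(y, x) \rightarrow x = y\bigr)          % antisymmetry
\forall x \forall y \forall z \; \bigl(P(x, y) \land P(y, z) \rightarrow P(x, z)\bigr) % transitivity
```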