
    Ontology-based specific and exhaustive user profiles for constraint information fusion for multi-agents

    Intelligent agents are an advanced technology utilized in Web Intelligence. When searching for information in a distributed Web environment, information is retrieved by multi-agents on the client site and fused on the broker site. Current information fusion techniques rely on the cooperation of agents to provide statistics. Such techniques are computationally expensive and unrealistic in the real world. In this paper, we introduce a model that uses a world ontology constructed from the Dewey Decimal Classification to acquire user profiles. By searching with specific and exhaustive user profiles, information fusion techniques no longer rely on the statistics provided by agents. The model has been successfully evaluated using the large INEX data set simulating the distributed Web environment.
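
    A minimal sketch of the idea, assuming a profile keyed by Dewey Decimal classes: profile weights, rather than agent-supplied statistics, drive the merge on the broker site. All names, the scoring rule, and the result format below are illustrative assumptions, not the paper's actual model.

        # Sketch: fuse per-agent result lists using only an ontology-based
        # user profile (Dewey class -> interest weight); hypothetical names.
        from collections import defaultdict

        user_profile = {
            "006.3": 0.9,   # Artificial intelligence
            "025.04": 0.7,  # Information storage and retrieval
            "330": 0.1,     # Economics (low interest)
        }

        def fuse(results_per_agent):
            """Merge ranked lists from several agents without agent statistics:
            each document is re-scored by its own relevance times the profile
            weight of its subject class."""
            scores = defaultdict(float)
            for results in results_per_agent:
                for doc_id, relevance, dewey_class in results:
                    scores[doc_id] += relevance * user_profile.get(dewey_class, 0.0)
            return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

        # Two agents return overlapping (doc, relevance, Dewey class) lists.
        agent_a = [("d1", 0.8, "006.3"), ("d2", 0.6, "330")]
        agent_b = [("d1", 0.5, "006.3"), ("d3", 0.7, "025.04")]
        print(fuse([agent_a, agent_b]))  # d1 ranks first: high relevance and weight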

    An artefact repository to support distributed software engineering

    The Open Source Component Artefact Repository (OSCAR) system is a component of the GENESIS platform designed to non-invasively inter-operate with work-flow management systems, development tools and existing repository systems to support a distributed software engineering team working collaboratively. Every artefact possesses a collection of associated meta-data, both standard and domain-specific presented as an XML document. Within OSCAR, artefacts are made aware of changes to related artefacts using notifications, allowing them to modify their own meta-data actively in contrast to other software repositories where users must perform all and any modifications, however trivial. This recording of events, including user interactions provides a complete picture of an artefact's life from creation to (eventual) retirement with the intention of supporting collaboration both amongst the members of the software engineering team and agents acting on their behalf
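
    A minimal observer-pattern sketch of how such "active" artefacts might behave; the class and method names are assumptions for illustration, not OSCAR's actual API.

        # Sketch: artefacts notify related artefacts of changes and update
        # their own event meta-data without user intervention.
        import datetime

        class Artefact:
            def __init__(self, name):
                self.name = name
                self.metadata = {"events": []}   # stands in for the XML meta-data
                self.watchers = []               # related artefacts to notify

            def relate(self, other):
                self.watchers.append(other)

            def change(self, description):
                self._record(f"changed: {description}")
                for w in self.watchers:          # push notifications to relations
                    w.on_related_change(self, description)

            def on_related_change(self, source, description):
                # The artefact updates its own meta-data; no user action needed.
                self._record(f"related {source.name} {description}")

            def _record(self, event):
                stamp = datetime.datetime.now().isoformat(timespec="seconds")
                self.metadata["events"].append((stamp, event))

        design = Artefact("design-doc")
        code = Artefact("module.c")
        design.relate(code)
        design.change("interface section rewritten")
        print(code.metadata["events"])  # the code artefact logged the design change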

    Peer-Mediated Distributed Knowledge Management

    Distributed Knowledge Management is an approach to knowledge management based on the principle that the multiplicity (and heterogeneity) of perspectives within complex organizations is not to be viewed as an obstacle to knowledge exploitation, but rather as an opportunity that can foster innovation and creativity. Despite wide agreement on this principle, most current KM systems are based on the idea that all perspectival aspects of knowledge should be eliminated in favor of an objective and general representation of knowledge. In this paper we propose a peer-to-peer architecture (called KEx) which embodies the principle above in a quite straightforward way: (i) each peer (called a K-peer) provides all the services needed to create and organize "local" knowledge from an individual's or a group's perspective, and (ii) social structures and protocols of meaning negotiation are introduced to achieve semantic coordination among autonomous peers (e.g., when searching documents from other K-peers). A first version of the system is implemented as a knowledge exchange level on top of JXTA.
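
    A toy sketch of the K-peer idea under stated assumptions: each peer keeps its own local categorization, and a one-step term translation stands in for the full meaning-negotiation protocol. All names are illustrative.

        # Sketch: two peers answer the same query from their own perspectives.
        class KPeer:
            def __init__(self, name, documents, concept_map):
                self.name = name
                self.documents = documents        # local category -> documents
                self.concept_map = concept_map    # foreign term -> local category

            def search(self, term):
                # Negotiate meaning: translate the requester's term into this
                # peer's own perspective before searching local knowledge.
                local = self.concept_map.get(term, term)
                return self.documents.get(local, [])

        marketing = KPeer(
            "marketing",
            documents={"campaigns": ["q3-launch.pdf"]},
            concept_map={"promotions": "campaigns"},   # their term -> our category
        )
        engineering = KPeer(
            "engineering",
            documents={"releases": ["v2-notes.md"]},
            concept_map={"promotions": "releases"},    # a different local reading
        )

        for peer in (marketing, engineering):
            print(peer.name, "->", peer.search("promotions"))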

    Formal representation and proof for cooperative games

    In this contribution we present work we have done on representing and proving theorems from the area of economics, and mainly work planned for a project in which we will apply mechanised theorem proving tools to a class of economic problems for which very few general tools currently exist. For mechanised theorem proving, the research introduces the field to a new application domain with a large user base; more specifically, the researchers are collaborating with developers working on state-of-the-art theorem provers. For economics, the research will provide tools for handling a hard class of problems; more generally, as the first application of mechanised theorem proving to centrally involve economic theorists, it aims to properly introduce mechanised theorem proving techniques to the discipline.
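
    For a flavour of the kind of property a prover might establish, here is an illustrative sketch; it uses exhaustive testing in Python over a small hypothetical game, not the paper's mechanised formalisation. It represents a cooperative game as a characteristic function and checks superadditivity, v(S ∪ T) >= v(S) + v(T) for disjoint coalitions S and T.

        # Sketch: machine-check superadditivity of a toy characteristic function.
        from itertools import chain, combinations

        players = frozenset({1, 2, 3})

        def v(coalition):
            # Hypothetical characteristic function: value grows with size.
            return len(coalition) ** 2

        def subsets(s):
            return [frozenset(c) for c in
                    chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

        def superadditive(players, v):
            for s in subsets(players):
                for t in subsets(players):
                    if s.isdisjoint(t) and v(s | t) < v(s) + v(t):
                        return False
            return True

        print(superadditive(players, v))  # True: len**2 is superadditive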

    Compensation methods to support cooperative applications: A case study in automated verification of schema requirements for an advanced transaction model

    Compensation plays an important role in advanced transaction models, cooperative work and workflow systems. A schema designer is typically required to supply, for each transaction, another transaction that semantically undoes its effects. Little attention has been paid to the verification of the desirable properties of such operations, however. This paper demonstrates the use of a higher-order logic theorem prover for verifying that compensating transactions return a database to its original state. It is shown how an OODB schema is translated to the language of the theorem prover so that proofs can be performed on the compensating transactions.
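
    The property being verified can be illustrated with a lightweight test; the paper establishes it by proof in higher-order logic for all states, whereas this sketch merely checks one hypothetical transaction pair on one example state. All names are illustrative.

        # Sketch: a transaction, its compensating transaction, and a check
        # that compensation restores the original database state.
        import copy

        def debit(db, account, amount):
            db["balances"][account] -= amount
            db["log"].append(("debit", account, amount))

        def compensate_debit(db, account, amount):
            # Semantic undo: restore the balance (the log keeps its history).
            db["balances"][account] += amount
            db["log"].append(("compensate-debit", account, amount))

        db = {"balances": {"alice": 100}, "log": []}
        before = copy.deepcopy(db["balances"])

        debit(db, "alice", 30)
        compensate_debit(db, "alice", 30)

        # The property a prover would establish for all states and arguments:
        assert db["balances"] == before, "compensation failed to restore state"
        print("restored:", db["balances"])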

    Past, present and future of information and knowledge sharing in the construction industry: Towards semantic service-based e-construction

    The paper reviews product data technology initiatives in the construction sector and provides a synthesis of related ICT industry needs. A comparison between (a) the data-centric characteristics of Product Data Technology (PDT) and (b) ontology, with a focus on semantics, is given, highlighting the pros and cons of each approach. The paper advocates the migration from data-centric application integration to ontology-based business process support, and proposes inter-enterprise collaboration architectures and frameworks based on semantic services, underpinned by ontology-based knowledge structures. The paper discusses the main reasons behind the low industry take-up of product data technology, and proposes a preliminary roadmap for wide industry diffusion of the proposed approach. In this respect, the paper stresses the value of adopting alliance-based modes of operation.

    Just an Update on PMING Distance for Web-based Semantic Similarity in Artificial Intelligence and Data Mining

    One of the main problems that emerges in the classic approach to semantics is the difficulty of acquiring and maintaining ontologies and semantic annotations. On the other hand, the Internet explosion and the massive diffusion of mobile smart devices have led to the creation of a worldwide system whose information is checked and fueled daily by the contributions of millions of users who interact in a collaborative way. Search engines, continually exploring the Web, are a natural source of information on which to base a modern approach to semantic annotation. A promising idea is that it is possible to generalize semantic similarity, under the assumption that semantically similar terms behave similarly, and to define collaborative proximity measures based on the indexing information returned by search engines. The PMING Distance is a proximity measure used in data mining and information retrieval whose collaborative information expresses the degree of relationship between two terms, using only the number of documents returned as results for a query on a search engine. In this work, the PMING Distance is updated with a novel formal algebraic definition, which corrects previous works. The novel point of view underlines the feature of the PMING of being a locally normalized linear combination of the Pointwise Mutual Information and the Normalized Google Distance. The analyzed measure dynamically reflects the collaborative changes made to web resources.
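
    The two ingredients named above can be sketched from hit counts. The exact PMING definition and its local normalization are given in the paper; the combination weight rho below, and the hit counts, are purely illustrative assumptions.

        # Sketch: PMI and NGD from search-engine hit counts, plus a stand-in
        # weighted combination (PMING itself normalizes both terms locally
        # over the terms in context before combining them).
        from math import log

        N = 1e10  # assumed total number of indexed pages

        def pmi(fx, fy, fxy, n=N):
            # PMI(x, y) = log( p(x, y) / (p(x) * p(y)) )
            return log((fxy / n) / ((fx / n) * (fy / n)))

        def ngd(fx, fy, fxy, n=N):
            # NGD(x, y) = (max(log fx, log fy) - log fxy)
            #             / (log n - min(log fx, log fy))
            lx, ly, lxy = log(fx), log(fy), log(fxy)
            return (max(lx, ly) - lxy) / (log(n) - min(lx, ly))

        def pming_like(fx, fy, fxy, rho=0.5, n=N):
            # Illustrative linear combination with assumed weight rho.
            return rho * pmi(fx, fy, fxy, n) + (1 - rho) * ngd(fx, fy, fxy, n)

        # Hypothetical hit counts for two terms and their co-occurrence.
        print(pming_like(fx=3_200_000, fy=1_500_000, fxy=400_000))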

    ADEPT2 - Next Generation Process Management Technology

    If current process management systems are to be applied to a broad spectrum of applications, they will have to be significantly improved with respect to their technological capabilities. In particular, in dynamic environments it must be possible to quickly implement and deploy new processes, to enable ad-hoc modifications of single process instances at runtime (e.g., to add, delete or shift process steps), and to support process schema evolution with instance migration, i.e., to propagate process schema changes to already running instances. These requirements must be met without affecting process consistency and while preserving the robustness of the process management system. In this paper we describe how these challenges have been addressed and solved in the ADEPT2 Process Management System. Our overall vision is to provide a next generation process management technology which can be used in a variety of application domains.
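
    A toy sketch of the two kinds of change the abstract names: an ad-hoc modification of a single running instance, and a schema change propagated only to instances that have not yet passed the change point. None of this reflects ADEPT2's actual algorithms or consistency criteria; the position check merely stands in for them.

        # Sketch: ad-hoc instance change vs. schema evolution with migration.
        class Instance:
            def __init__(self, schema):
                self.steps = list(schema)  # private copy: ad-hoc changes stay local
                self.position = 0          # index of the next step to execute

            def run_step(self):
                self.position += 1

            def insert_step(self, index, step):
                # Ad-hoc modification, allowed only in the not-yet-executed region.
                if index < self.position:
                    raise ValueError("cannot change the already-executed region")
                self.steps.insert(index, step)

        def evolve_schema(schema, index, step, instances):
            """Insert a step into the schema and migrate every compliant
            instance, i.e. every instance not yet past the change point."""
            schema.insert(index, step)
            migrated = [i for i in instances if i.position <= index]
            for inst in migrated:
                inst.steps.insert(index, step)
            return migrated

        schema = ["receive-order", "check-stock", "ship"]
        a, b = Instance(schema), Instance(schema)
        a.run_step(); a.run_step()            # instance a already passed check-stock
        a.insert_step(2, "notify-customer")   # ad-hoc change to instance a only

        migrated = evolve_schema(schema, 1, "fraud-check", [a, b])
        print(len(migrated))                  # 1: only instance b can be migrated
        print(b.steps)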