21 research outputs found

    Multi-representation Ontology in the Context of Enterprise Information Systems

    In the last decade, ontologies as shared common vocabularies have played a major role in many AI applications and in information integration for heterogeneous, distributed systems. The problems of integrating and developing information systems and databases in heterogeneous, distributed environments have been framed, from a technical perspective, as system interoperability. Ontologies are foreseen to play a key role in partially resolving the semantic conflicts and differences that exist among systems. Domain ontologies, however, are constructed by capturing a set of concepts and their links according to various criteria such as the abstraction paradigm, the granularity scale, the interests of user communities, and the perception of the ontology developer. Thus, different applications of the same domain end up having several representations of the same real-world phenomenon. A multi-representation ontology is an ontology (or set of ontologies) that characterizes an ontological concept by a variable set of properties (static and dynamic) or attributes in several contexts and/or at several scales of granularity. This paper introduces the formalism used for defining the paradigm of multi-representation ontology and shows the manifestation of this paradigm in Enterprise Information Systems.
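
    The core idea of a concept carrying a variable set of properties per context can be sketched as follows. This is a minimal illustration, not the paper's formalism; all class, context, and attribute names are invented for the example.

```python
# Minimal sketch (invented names): an ontology concept whose visible
# attributes depend on the representation context it is viewed in.
class MultiRepresentationConcept:
    def __init__(self, name):
        self.name = name
        # context -> set of attribute names exposed in that context
        self._representations = {}

    def add_representation(self, context, attributes):
        self._representations[context] = set(attributes)

    def attributes(self, context):
        """Return the attributes visible in the given context."""
        return self._representations.get(context, set())

    def shared_attributes(self):
        """Attributes common to every representation of the concept."""
        reps = list(self._representations.values())
        return set.intersection(*reps) if reps else set()

# The same real-world phenomenon, seen by two enterprise applications:
road = MultiRepresentationConcept("Road")
road.add_representation("cartography", {"geometry", "name", "width"})
road.add_representation("maintenance", {"name", "surface_type", "last_repair"})
print(sorted(road.shared_attributes()))  # only 'name' is shared
```

    The shared attributes act as the stable core of the concept, while each context adds its own view, which is the situation the abstract describes for different applications of the same domain.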

    Semantic Model Alignment for Business Process Integration

    Business process models describe an enterprise's way of conducting business and thus form the basis for shaping the organization and engineering the appropriate supporting, or even enabling, IT. A major task in working with models is therefore their analysis and comparison for the purpose of aligning them. As models can differ semantically not only in the modeling languages used, but even more so in the way the natural language for labeling the model elements has been applied, correctly identifying the intended meaning of a legacy model is a non-trivial task that so far has only been solved by humans. Particularly at times of reorganizations, the set-up of B2B collaborations, or mergers and acquisitions, the semantic analysis of models of different origin that need to be consolidated is a manual effort that is not only tedious and error-prone but also time-consuming, costly, and often even repetitive. To facilitate automation of this task by means of IT, this thesis presents the new method of Semantic Model Alignment. Its application enables extracting and formalizing the semantics of models in order to relate them based on the modeling language used, and determining similarities based on the natural language used in model element labels. The resulting alignment supports model-based semantic business process integration. The research conducted follows a design-science oriented approach, and the method was developed together with all its enabling artifacts. The results were published as the research progressed and are presented in this thesis as a selection of peer-reviewed publications comprehensively describing the various aspects.
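
    The label-comparison step described above can be approximated, very crudely, by token-overlap similarity between element labels. This is only a naive stand-in to make the problem concrete; the thesis's actual Semantic Model Alignment method is considerably more elaborate, and the model labels below are invented.

```python
# Naive token-overlap (Jaccard) similarity between model element labels,
# as one simple stand-in for the natural-language comparison step.
def label_similarity(label_a, label_b):
    tokens_a = set(label_a.lower().split())
    tokens_b = set(label_b.lower().split())
    if not tokens_a or not tokens_b:
        return 0.0
    return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)

def align(elements_a, elements_b, threshold=0.5):
    """Pair up elements from two process models whose labels are similar."""
    pairs = []
    for a in elements_a:
        for b in elements_b:
            score = label_similarity(a, b)
            if score >= threshold:
                pairs.append((a, b, round(score, 2)))
    return pairs

model_1 = ["check customer order", "ship goods"]
model_2 = ["order check", "send invoice"]
print(align(model_1, model_2))
```

    Even this toy version shows why pure string matching is insufficient: "ship goods" and "send invoice" share no tokens, and synonymy or word-order variation requires the deeper semantic analysis the thesis addresses.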

    E-business framework enabled B2B integration

    Standards for B2B integration help to facilitate interoperability between organisations. These standards, often called e-business frameworks, guide integration by specifying the details of business processes, business documents, and secure messaging. Extensible Markup Language (XML) is used in modern e-business frameworks instead of Electronic Data Interchange (EDI) formats. Having XML as the data format is not enough for integration; e-business frameworks are needed to guide how XML is used. This work analyses how the many partly competing and overlapping e-business frameworks differ in their support for business processes, documents, and secure messaging. In addition, the effect of the standardisation organisation on the outcome of the e-business framework is studied. In this work, one e-business framework, RosettaNet, is used to tackle the challenges of product development (PD) integrations. A proof-of-concept implementation of a RosettaNet integration is provided to support PD, and the lessons learned are discussed. The current specifications lack good processes for PD integrations and fail to specify the concepts needed for document management. Furthermore, there are interoperability problems due to the lack of expressivity of the schema languages used to encode the business documents, and the current setup of an integration takes a very long time. RosettaNet allows a lot of flexibility in its specifications, so merely supporting the same standard process is not enough for interoperability. With semantic technologies, many shortcomings of the current standards for B2B integration can be solved, as they make it possible to express constraints that the current technologies have problems with. This work presents a practical case of B2B integration with semantic technologies and describes the benefits of applying such technologies.

    Standards support system integration between organisations. Integration standards define inter-organisational business processes and documents, and specify a secure way to communicate. Modern standards are XML-based instead of the older EDI format. Using XML is not sufficient to guarantee successful integration; more precise agreement is needed on how XML is used in the integration, and a set of B2B integration standards defines this. This work analyses several partly competing B2B integration standards and studies how they support the definition of business processes, documents, and secure messaging, also taking into account the effect of the standardisation organisation on the outcome. In this work, the RosettaNet standard is applied to product development integrations. A prototype of product development data integration with RosettaNet is presented, and the lessons learned are discussed. The current specifications for product development processes are insufficient for the needs, because support for document management concepts is lacking. The limited expressiveness of the XML schema languages used by RosettaNet also causes document interoperability problems. In addition, setting up an integration is slow compared to the duration of a typical product development project. The RosettaNet specifications offer a great deal of flexibility, so supporting the same standard process does not mean that systems are interoperable. The shortcomings of the current specifications and of the expressiveness of their schema languages can be partly remedied by using semantic technologies. This work shows how better interoperability can be achieved in integrations by using semantic technologies.
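
    One concrete instance of the schema-expressivity gap mentioned above is that XML Schema 1.0 cannot express cross-field (co-occurrence) constraints, so such rules end up being checked in application code. The sketch below illustrates one such check; the element names and the rule are invented for illustration and are not taken from any RosettaNet PIP.

```python
import xml.etree.ElementTree as ET

# XML Schema 1.0 cannot express cross-field (co-occurrence) constraints,
# one kind of expressivity gap noted for B2B document schemas. The rule
# and element names below are illustrative only.
RULE = "a Document with Status 'Revised' must carry a RevisionNumber"

def check_revision_rule(xml_text):
    """Return the list of violated rules (empty when the document is fine)."""
    doc = ET.fromstring(xml_text)
    status = doc.findtext("Status")
    revision = doc.findtext("RevisionNumber")
    if status == "Revised" and revision is None:
        return [RULE]
    return []

valid = "<Document><Status>Revised</Status><RevisionNumber>2</RevisionNumber></Document>"
invalid = "<Document><Status>Revised</Status></Document>"
print(check_revision_rule(valid))    # no violations
print(check_revision_rule(invalid))  # the violated rule
```

    Semantic technologies (or schema languages with assertions) can express such constraints declaratively, which is the kind of benefit the abstract attributes to them.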

    Creation and extension of ontologies for describing communications in the context of organizations

    Thesis submitted to the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa, in partial fulfillment of the requirements for the degree of Master in Computer Science. The use of ontologies is nowadays a sufficiently mature and solid field of work to be considered an efficient alternative for knowledge representation. With the continuing growth of the Semantic Web, this alternative can be expected to become even more prominent in the near future. In the context of a collaboration established between FCT-UNL and the R&D department of a national software company, a new solution entitled ECC – Enterprise Communications Center was developed. This application provides a solution to manage the communications that enter, leave, or are made within an organization, and includes intelligent classification of communications and conceptual search techniques over a communications repository. As specificity may be the key to obtaining acceptable results with these processes, the use of ontologies becomes crucial to represent the existing knowledge about the specific domain of an organization. This work allowed us to deliver a core set of ontologies expressive enough to capture the general context of the communications made in an organization, together with a methodology, based on a series of concrete steps, that provides an effective capability of extending the ontologies to any business domain. Applying these steps minimizes the conceptualization and setup effort in new organizations and business domains. The adequacy of the chosen core set of ontologies and of the specified methodology is demonstrated in this thesis by their effective application to a real case study, which allowed us to work with the different types of sources considered in the methodology and the activities that support its construction and evolution.
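
    Conceptual search of the kind described can be sketched as query expansion over a small domain ontology: a query term is replaced by itself plus its narrower concepts before matching. The ontology and documents below are invented; the abstract does not publish ECC's actual techniques, so this is only an illustration of the general idea.

```python
# Hedged sketch of conceptual search over a communications repository:
# a query term is expanded with transitively narrower concepts from a
# small, invented domain ontology before matching documents.
NARROWER = {
    "communication": ["email", "phone call", "letter"],
    "email": ["complaint email"],
}

def expand(term):
    """Return the term plus all transitively narrower concepts."""
    result, stack = set(), [term]
    while stack:
        current = stack.pop()
        if current not in result:
            result.add(current)
            stack.extend(NARROWER.get(current, []))
    return result

def conceptual_search(query, documents):
    concepts = expand(query)
    return [d for d in documents if any(c in d.lower() for c in concepts)]

docs = ["Customer complaint email about billing", "Meeting notes", "Letter from supplier"]
print(conceptual_search("communication", docs))
```

    A plain keyword search for "communication" would match none of these documents; the ontology supplies the domain specificity the abstract identifies as the key to acceptable results.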

    Method for Reusing and Re-engineering Non-ontological Resources for Building Ontologies

    This thesis is focused on the reuse and possible subsequent re-engineering of knowledge resources, as opposed to custom-building new ontologies from scratch. A deep analysis of the state of the art has revealed that the literature offers some methods and tools for transforming non-ontological resources into ontologies, but with limitations:

    - Most of the methods are based on ad-hoc transformations for the resource type and the resource implementation.
    - Only a few take advantage of the resource data model, an important artifact for the re-engineering process [GGPSFVT08].
    - There is no integrated framework, method, or corresponding tool that considers the identified resource types, data models, and implementations in a unified way.
    - With regard to the transformation approach, the majority of the methods perform a TBox transformation, many others perform an ABox transformation, and some perform a population. However, no method offers the possibility to perform all three transformation approaches.
    - Regarding the degree of automation, almost all the methods perform a semi-automatic transformation of the resource.
    - Concerning the explicitation of the hidden semantics in the relations of the resource components, the methods that perform a TBox transformation do make these semantics explicit. Most of those methods identify subClassOf relations, others identify ad-hoc relations, and some identify partOf relations. However, only a few methods make all three types of relations explicit.
    - With respect to how the methods make the hidden semantics in the relations of the resource terms explicit, three methods rely on the domain expert, and two rely on an external resource, e.g., the DOLCE ontology. Moreover, two methods rely on external resources not for making the hidden semantics explicit, but for finding a proper ontology to populate.
    - Concerning the provision of methodological guidelines, almost all the methods provide guidelines for the transformation. However, these guidelines are not finely detailed; for instance, they do not state who is in charge of performing a particular activity/task, nor when that activity/task has to be carried out.
    - With regard to the techniques employed, most of the methods do not mention them at all. Only a few specify techniques such as transformation rules, lexico-syntactic patterns, mapping rules, and natural language techniques.

    In this thesis we have provided a method, and its technological support, that rely on re-engineering patterns in order to speed up the ontology development process by reusing and re-engineering available non-ontological resources as much as possible. To achieve this overall goal, we have decomposed it into the following objectives: (1) the definition of methodological aspects related to the reuse of non-ontological resources for building ontologies; (2) the definition of methodological aspects related to the re-engineering of non-ontological resources for building ontologies; (3) the creation of a library of patterns for re-engineering non-ontological resources into ontologies; and (4) the development of a software library that implements the suggestions given by the re-engineering patterns. With these goals in mind, in this chapter we present how the open research problems identified in Chapter 2 are solved by the main thesis contributions. We then discuss the verification of our hypotheses, and finally we provide an outlook on future work in these topics.
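
    A TBox transformation of a non-ontological resource can be illustrated by re-engineering a small classification scheme into explicit subClassOf axioms. The scheme below is invented; the thesis's pattern library also covers partOf and ad-hoc relations, which this sketch deliberately omits.

```python
# Sketch of a TBox transformation: a non-ontological resource (here a
# classification scheme given as parent -> children) is re-engineered
# into explicit subClassOf triples. Data is invented for illustration.
def classification_to_tbox(scheme):
    triples = []
    for parent, children in scheme.items():
        for child in children:
            triples.append((child, "subClassOf", parent))
    return triples

scheme = {
    "Product": ["Hardware", "Software"],
    "Hardware": ["Laptop", "Router"],
}
for triple in classification_to_tbox(scheme):
    print(triple)
```

    The hard part, which the sketch sidesteps, is exactly what the survey above discusses: deciding whether a link in the source resource really hides a subClassOf, a partOf, or an ad-hoc relation.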

    Philosophy of Logic

    The activity of integration and distribution pervades the whole of human language, thought, and action. With the help of a small number of operations (conjunction, negation, quantification) that constitute the logical constants, our practical and theoretical reason gathers and divides the variable elements of language, world, and thought into infinite finitudes (sets, classes, relations, attributes) in which real stimulations and virtual simulations are correlated; through their construction, reconstruction, and deconstruction, the "well-formed formulas" of linguistic-grammatical and mental-psychological structures are formed and transformed, which in the world of knowledge are named by the concept of world, the concept of language, the concept of mind. The world of concepts, taken in the quantified form of speech, "(x)(Fx)", as real or existent, and in the substitutional form, "{x : Fx}", as virtual or subsistent owing to the abstractness of classes and relations, constructs a network of multiplied logical generalities, their relations or lawful successions, which can be expressed, paraphrased, and translated from one notation into another salva veritate et salva congruentia.

    Wiktionary: The Metalexicographic and the Natural Language Processing Perspective

    Dictionaries are the main reference works for our understanding of language. They are used by humans and likewise by computational methods. So far, the compilation of dictionaries has almost exclusively been the profession of expert lexicographers. The ease of collaboration on the Web and the rising initiatives for collecting open-licensed knowledge, such as in Wikipedia, have given rise to a new type of dictionary that is voluntarily created by large communities of Web users. This collaborative construction approach presents a new paradigm for lexicography that poses new research questions to dictionary research on the one hand and provides a very valuable knowledge source for natural language processing applications on the other. The subject of our research is Wiktionary, which is currently the largest collaboratively constructed dictionary project. In the first part of this thesis, we study Wiktionary from the metalexicographic perspective. Metalexicography is the scientific study of lexicography, including the analysis and criticism of dictionaries and lexicographic processes. To this end, we discuss three contributions related to this area of research: (i) We first provide a detailed analysis of Wiktionary and its various language editions and dictionary structures. (ii) We then analyze the collaborative construction process of Wiktionary. Our results show that the traditional phases of the lexicographic process do not apply well to Wiktionary, which is why we propose a novel process description based on the frequent and continual revision and discussion of the dictionary articles and the lexicographic instructions. (iii) We perform a large-scale quantitative comparison of Wiktionary and a number of other dictionaries regarding the covered languages, lexical entries, word senses, pragmatic labels, lexical relations, and translations.
    We conclude the metalexicographic perspective by finding that the collaborative Wiktionary is not an appropriate replacement for expert-built dictionaries, due to its inconsistencies, quality flaws, one-size-fits-all approach, and strong dependence on expert-built dictionaries. However, Wiktionary's rapid and continual growth, its high coverage of languages, newly coined words, domain-specific vocabulary, and non-standard language varieties, as well as its kind of evidence based on the authors' intuition, provide promising opportunities for both lexicography and natural language processing. In particular, we find that Wiktionary and expert-built wordnets and thesauri contain largely complementary entries. In the second part of the thesis, we study Wiktionary from the natural language processing perspective with the aim of making its linguistic knowledge available for computational applications. Such applications require vast amounts of structured data of high quality. Expert-built resources have been found to suffer from insufficient coverage and high construction and maintenance costs, whereas fully automatic extraction from corpora or the Web often yields resources of limited quality. Collaboratively built encyclopedias present a viable solution, but do not adequately cover the linguistically oriented knowledge found in dictionaries. That is why we propose extracting linguistic knowledge from Wiktionary, which we achieve by the following three main contributions: (i) We propose the novel multilingual ontology OntoWiktionary, created by extracting and harmonizing the weakly structured dictionary articles in Wiktionary. A particular challenge in this process is the ambiguity of semantic relations and translations, which we resolve by automatic word sense disambiguation methods. (ii) We automatically align Wiktionary with WordNet 3.0 at the word sense level.
    The largely complementary information from the two dictionaries yields an aligned resource with higher coverage and an enriched representation of word senses. (iii) We represent Wiktionary according to the ISO standard Lexical Markup Framework, which we adapt to the peculiarities of collaborative dictionaries. This standardized representation is of great importance for fostering the interoperability of resources and hence the dissemination of Wiktionary-based research. To this end, our work presents a foundational step towards the large-scale integrated resource UBY, which facilitates unified access to a number of standardized dictionaries by means of a shared web interface for human users and an application programming interface for natural language processing applications. A user can, in particular, switch between and combine information from Wiktionary and other dictionaries without completely changing the software. Our final resource and the accompanying datasets and software are publicly available and can be employed for multiple different natural language processing applications. It particularly fills the gap between the small expert-built wordnets and the large amount of encyclopedic knowledge in Wikipedia. We provide a survey of previous works utilizing Wiktionary, and we exemplify the usefulness of our work in two case studies on measuring verb similarity and detecting cross-lingual marketing blunders, which make use of our Wiktionary-based resource and the results of our metalexicographic study. We conclude the thesis by emphasizing the usefulness of collaborative dictionaries when combined with expert-built resources, which holds much untapped potential.
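
    The word-sense-level alignment described above can be sketched, in very reduced form, as pairing each sense from one dictionary with the sense from the other whose gloss shares the most content words. The thesis's actual alignment uses proper word sense disambiguation; the sense inventories and glosses below are invented, with sense identifiers merely styled after WordNet's.

```python
# Simplified stand-in for sense alignment between two dictionaries:
# pair each sense in inventory A with the sense in inventory B whose
# gloss has the largest content-word overlap. Data is invented.
STOPWORDS = {"a", "an", "the", "of", "to", "or", "in", "that"}

def content_words(gloss):
    return {w for w in gloss.lower().split() if w not in STOPWORDS}

def align_senses(senses_a, senses_b):
    """Map each sense id in A to the best-overlapping sense id in B."""
    alignment = {}
    for id_a, gloss_a in senses_a.items():
        best, best_overlap = None, 0
        for id_b, gloss_b in senses_b.items():
            overlap = len(content_words(gloss_a) & content_words(gloss_b))
            if overlap > best_overlap:
                best, best_overlap = id_b, overlap
        alignment[id_a] = best
    return alignment

wiktionary = {"bank.1": "land alongside a river",
              "bank.2": "an institution holding money"}
wordnet = {"bank.n.01": "sloping land beside a body of water",
           "bank.n.02": "a financial institution that accepts money"}
print(align_senses(wiktionary, wordnet))
```

    Gloss overlap is a classic Lesk-style heuristic; it works here only because the toy glosses were chosen to overlap, which is precisely why the thesis resorts to stronger disambiguation methods.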

    Framework for collaborative knowledge management in organizations

    Nowadays, organizations are pushed to speed up the rate of industrial transformation towards high-value products and services. The capability to respond agilely to new market demands has become a strategic pillar for innovation, and knowledge management can support organizations in achieving that goal. However, current knowledge management approaches tend to be overly complex or too academic, with interfaces that are difficult to manage, even more so when cooperative handling is required. In an ideal framework, both tacit and explicit knowledge management should be addressed to achieve knowledge handling with precise and semantically meaningful definitions. Moreover, with the increase of Internet usage, the amount of available information has exploded, which has driven progress in mechanisms to retrieve useful knowledge from the huge number of existing information sources. However, the same knowledge representation of a thing can mean different things to different people and applications. Contributing in this direction, this thesis proposes a framework capable of gathering the knowledge held by domain experts and domain sources through a knowledge management system and transforming it into explicit ontologies. This enables building tools with advanced reasoning capabilities to support enterprises' decision-making processes. The author also addresses the problem of knowledge transfer within and among organizations, through a module (part of the proposed framework) for establishing a domain lexicon, whose purpose is to represent and unify the understanding of the semantics used in the domain.
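
    The domain-lexicon idea, i.e. mapping the variant terms used across organizations onto shared concepts so that exchanged knowledge keeps a unified meaning, can be sketched as follows. The class, concepts, and terms are invented for illustration and are not the framework's actual data model.

```python
# Sketch (invented names): a domain lexicon mapping variant terms used
# by different organizations onto shared concepts, so two terms can be
# recognized as denoting the same thing.
class DomainLexicon:
    def __init__(self):
        self._term_to_concept = {}

    def register(self, concept, terms):
        for term in terms:
            self._term_to_concept[term.lower()] = concept

    def concept_of(self, term):
        return self._term_to_concept.get(term.lower())

    def same_meaning(self, term_a, term_b):
        """True when the two terms denote one shared concept."""
        concept = self.concept_of(term_a)
        return concept is not None and concept == self.concept_of(term_b)

lexicon = DomainLexicon()
lexicon.register("PurchaseOrder", ["purchase order", "PO", "order form"])
lexicon.register("Invoice", ["invoice", "bill"])
print(lexicon.same_meaning("PO", "order form"))  # True
print(lexicon.same_meaning("PO", "bill"))        # False
```

    In the thesis's setting, the concepts on the left would come from the explicit ontologies built by the framework, so the lexicon grounds each organization's vocabulary in a shared, reasoned-over model.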

    History of Logic in Contemporary China
