11 research outputs found

    "May I Speak Freely?" : between templates and free choice in natural language generation ; workshop at the 23rd German Annual Conference for Artificial Intelligence (KI\u2799), Bonn14.-15. September 1999

    Domain ontology learning from the web

    Ontology Learning is defined as the set of methods used for building from scratch, enriching or adapting an existing ontology in a semi-automatic fashion using heterogeneous information sources. This data-driven procedure uses text, electronic dictionaries, linguistic ontologies, and structured and semi-structured information to acquire knowledge. Recently, with the enormous growth of the Information Society, the Web has become a valuable source of information for almost every possible domain of knowledge. This has motivated researchers to start considering the Web as a valid repository for Information Retrieval and Knowledge Acquisition. However, the Web suffers from problems that are not typically observed in classical information repositories: human-oriented presentation, noise, untrusted sources, high dynamicity and overwhelming size. Nevertheless, it also has characteristics that are attractive for knowledge acquisition: because of its huge size and heterogeneity, the Web is assumed to approximate the real distribution of information available to humankind.
    The present work introduces a novel approach to ontology learning, presenting new methods for knowledge acquisition from the Web. The proposal is distinguished from previous work mainly by the particular adaptation of several well-known learning techniques to the Web corpus and by the exploitation of characteristics of the Web environment to compose an automatic, unsupervised and domain-independent approach. With respect to the ontology-building process, the following methods have been developed: i) extraction and selection of domain-related terms, organising them taxonomically; ii) discovery and labelling of non-taxonomic relationships between concepts; iii) additional methods for improving the final structure, including the detection of named entities, class attributes, multiple inheritance and a certain degree of semantic disambiguation. The full learning methodology has been implemented in a distributed, agent-based fashion, providing a scalable solution, and it has been evaluated on several well-differentiated domains of knowledge, obtaining good-quality results. Finally, several direct applications have been developed, including the automatic structuring of digital libraries and Web resources, and ontology-based Web Information Retrieval.
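    As a rough illustration of step i) above, the sketch below (a minimal, hypothetical example, not the thesis's implementation) shows how web-scale co-occurrence statistics could suggest a taxonomic direction between two terms: if pages about the candidate term almost always mention the broader term but not vice versa, the candidate is placed underneath it. The counts, the thresholds and the names (WebStats, isSubclassOf) are illustrative assumptions.

        -- Minimal sketch of web-statistics-based taxonomy learning (illustrative only).
        -- Hit counts would normally come from a web search engine; here they are fixed.

        type Term = String

        -- Hypothetical hit counts: hits t = pages containing t,
        -- coHits a b = pages containing both a and b.
        data WebStats = WebStats
          { hits   :: Term -> Double
          , coHits :: Term -> Term -> Double
          }

        -- P(a | b): fraction of pages mentioning b that also mention a.
        condProb :: WebStats -> Term -> Term -> Double
        condProb ws a b
          | hits ws b == 0 = 0
          | otherwise      = coHits ws a b / hits ws b

        -- Heuristic: 'specific' goes under 'general' when pages about the specific
        -- term almost always mention the general one, but not the other way round.
        isSubclassOf :: WebStats -> Term -> Term -> Bool
        isSubclassOf ws specific general =
          condProb ws general specific > 0.8 &&
          condProb ws specific general < 0.3

        main :: IO ()
        main = do
          let ws = WebStats
                { hits   = \t -> case t of
                    "sensor"             -> 1.0e6
                    "temperature sensor" -> 5.0e4
                    _                    -> 0
                , coHits = \a b -> case (a, b) of
                    ("sensor", "temperature sensor") -> 4.5e4
                    ("temperature sensor", "sensor") -> 4.5e4
                    _                                -> 0
                }
          print (isSubclassOf ws "temperature sensor" "sensor")  -- True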

    Semi-Automated Development of Conceptual Models from Natural Language Text

    The process of converting natural language specifications into conceptual models requires detailed analysis of natural language text, and designers frequently make mistakes when undertaking this transformation manually. Although many approaches have been used to help designers translate natural language text into conceptual models, each approach has its limitations. One of the main limitations is the lack of a domain-independent ontology that can be used as a repository for entities and relationships, thus guiding the transition from natural language processing to a conceptual model. Such an ontology is not currently available because it would be very difficult and time-consuming to produce. In this thesis, a semi-automated system for mapping natural language text into conceptual models is proposed. The model, which is called SACMES, combines a linguistic approach with an ontological approach and human intervention to achieve the task. The model learns from the natural language specifications that it processes, and stores the information that is learnt in a conceptual model ontology and a user history knowledge database. It then uses the stored information to improve performance and reduce the need for human intervention. The evaluation conducted on SACMES demonstrates that (1) designers create better conceptual models when using the system than when working without it, and (2) the performance of the system improves as it processes more natural language requirements, thereby reducing the need for human intervention. These advantages could be improved further by developing the learning and retrieval techniques used by the system.
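    For a flavour of the linguistic side of such a mapping (not SACMES itself), the following sketch applies the common heuristic that nouns in a requirement sentence suggest candidate entities, while a verb between two nouns suggests a candidate relationship. The Tag type, the example sentence and the helper names are assumptions made purely for illustration.

        -- Illustrative noun/verb heuristic for deriving conceptual-model candidates
        -- from tagged requirements text; not the SACMES implementation.

        data Tag = Noun | Verb | Other deriving (Eq, Show)
        type Token = (String, Tag)

        -- Candidate entities: distinct nouns in the sentence.
        entities :: [Token] -> [String]
        entities toks = dedup [w | (w, Noun) <- toks]
          where
            dedup = foldr (\x acc -> if x `elem` acc then acc else x : acc) []

        -- Candidate relationships: a verb with the nearest noun before and after it.
        relationships :: [Token] -> [(String, String, String)]
        relationships toks =
          [ (subj, verb, obj)
          | (i, (verb, Verb)) <- zip [0 ..] toks
          , subj <- take 1 [w | (w, Noun) <- reverse (take i toks)]
          , obj  <- take 1 [w | (w, Noun) <- drop (i + 1) toks]
          ]

        main :: IO ()
        main = do
          let sentence = [ ("customer", Noun), ("places", Verb)
                         , ("an", Other), ("order", Noun) ]
          print (entities sentence)        -- ["customer","order"]
          print (relationships sentence)   -- [("customer","places","order")]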

    International Workshop on Description Logics : Bonn, May 28/29, 1994

    This collection of papers forms the permanent record of the 1994 Description Logic Workshop, held at the Gustav Stresemann Institut in Bonn, Germany on 28 and 29 May 1994, immediately after the Fourth International Conference on Principles of Knowledge Representation and Reasoning. The workshop was set up to be as informal as possible, so this collection cannot hope to capture the discussions associated with the workshop. However, we hope that it will serve to remind participants of their discussions at the workshop, and provide non-participants with an indication of the topics that were discussed. The workshop consisted of seven regular sessions and one panel session. Each regular session had about four short presentations on a single theme, but also had considerable time reserved for discussion. The themes of the sessions were Foundations of Description Logics, Architecture of Description Logics and Description Logic Systems, Language Extensions, Expanding Description Logics, General Applications of Description Logics, Natural Language Applications of Description Logics, Connections between Description Logics and Databases, and the Future of Description Logics and Description Logic Systems.
    The session on Foundations of Description Logics concentrated on computational properties of description logics, correspondences between description logics and other formalisms, and the semantics of description logics. There was also discussion of how to develop tractable description logics, for some notion of tractable, and of whether it is useful to worry about achieving tractability at all. Several of the participants argued in favour of a very expressive description logic, which obviously precludes tractability or even decidability of complete reasoning. Klaus Schild proposed that for some purposes one could employ "model checking" (i.e., a closed-world assumption) instead of "theorem proving", and has shown that this is still tractable for very large languages. Maurizio Lenzerini's opinion was that it is important to have decidable languages. Tractability cannot be achieved in several application areas because one needs very expressive constructs there: e.g., axioms, complex role constructors, and cycles with fixed-point semantics. For Bob MacGregor, not even decidability is an issue, since he claims that Loom's incomplete reasoner is sufficient for his applications. The discussion addressed the question of whether there is still a need for foundations, and whether the work on foundations done so far really solved the problems that the designers of early DL systems had. Both questions were mostly answered in the affirmative, with the caveat that new research on foundations should make sure that it is concerned with "real" problems and does not just generate new ones.
    In the session on Architecture of Description Logics and Description Logic Systems the participants considered different ways of putting together description logics and description logic systems. One way of doing this is to have a different kind of inference strategy for description logics, such as one based on intuitionistic logics or one based directly on rules of inference, thus allowing variant systems. Another way of modifying description logic systems is to divide them up in different ways, such as making a terminology consist of a schema portion and a view portion. Some discussion in this session concerned whether architectures should be influenced by application areas, or even by particular applications.
    There was considerable discussion at the workshop on how Description Logics should be extended or expanded to make them more useful. There are several methods to do this. The first is to extend the language of descriptions, e.g., to represent n-ary relations, temporal information, or whole-part relationships, all of which were discussed at the workshop. The second is to add another kind of reasoning, such as default reasoning, while still keeping the general framework of description logic reasoning. The third is to incorporate descriptions or description-like constructs in a larger reasoner, such as a first-order reasoner; this was the approach taken in OMEGA and is the approach being taken in the Loom project. There have been many extensions of the first two kinds proposed for description logics, including several presented at the workshop. One question discussed at the workshop was whether these extensions fit in well with the philosophy of description logics. Another was whether the presence of many proposals for extensions means that description logics are easy to expand, or that description logics are inadequate representation formalisms. The general consensus was that description logics adequately capture a certain kind of core reasoning and that they lend themselves to incorporation with other kinds of reasoning. Care must be taken, however, to keep the extended versions true to the goals of description logics.
    The sessions on Applications of Description Logics had presentations on applications of description logics in various areas, including configuration, tutoring, natural language processing, and domain modeling. Most of these applications are research applications, funded by government research programs. There was discussion of what is needed to have more fielded applications of description logics. The session on Connections between Description Logics and Databases considered three kinds of connections between the two: 1. using Description Logics for expressing database schemas, including local schemas, integrated schemas, views, integrity constraints, and queries; 2. using Description Logic reasoning for various database-related reasoning, including schema integration and validation, query optimization, and query validation and organization; and 3. making Description Logic reasoners more like Database Management Systems via optimization. All three of these connections are being actively investigated by the description logic community. The panel session on the Future of Description Logics and Description Logic Systems discussed where the future of description logics will lie. There seems to be a consensus that description logics must forge tighter connections with other formalisms, such as databases or object-oriented systems. In this way, perhaps, description logics will find more real applications.
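    To make the "model checking versus theorem proving" point above concrete, here is a minimal sketch, illustrating the general idea rather than any workshop system: an ALC-style concept language together with closed-world evaluation of concept membership over a fixed, finite interpretation. The data types, function names and the example interpretation are assumptions for illustration.

        -- Tiny ALC-style concept language with closed-world "model checking"
        -- (membership of an individual in a concept over a fixed interpretation).

        import qualified Data.Map as M
        import qualified Data.Set as S

        type Ind = String

        data Concept
          = Atom String                -- atomic concept, e.g. Atom "Person"
          | Not Concept
          | And Concept Concept
          | Or Concept Concept
          | Exists String Concept      -- ∃r.C
          | Forall String Concept      -- ∀r.C

        data Interp = Interp
          { atoms :: M.Map String (S.Set Ind)         -- atomic concept extensions
          , roles :: M.Map String (S.Set (Ind, Ind))  -- role extensions
          }

        -- Does individual x belong to concept c in interpretation i?
        member :: Interp -> Ind -> Concept -> Bool
        member i x c = case c of
          Atom a     -> S.member x (M.findWithDefault S.empty a (atoms i))
          Not d      -> not (member i x d)
          And d e    -> member i x d && member i x e
          Or d e     -> member i x d || member i x e
          Exists r d -> any (\y -> member i y d) (succs r)
          Forall r d -> all (\y -> member i y d) (succs r)
          where
            succs r = [ y | (x', y) <- S.toList (M.findWithDefault S.empty r (roles i))
                      , x' == x ]

        main :: IO ()
        main = do
          let i = Interp
                { atoms = M.fromList [("Person", S.fromList ["ann", "bob"])]
                , roles = M.fromList [("hasChild", S.fromList [("ann", "bob")])]
                }
              parent = And (Atom "Person") (Exists "hasChild" (Atom "Person"))
          print (member i "ann" parent)  -- True
          print (member i "bob" parent)  -- False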

    Spatial and temporal resolution of sensor observations

    Observation is a core concept of geoinformatics. Observations are used to monitor, model and simulate phenomena such as climate change, mass movements (e.g. landslides) and demographic change. Resolution is a central property of observations. Using observations of different resolutions leads to (potentially) different decisions, because the resolution of the observations influences the detection of patterns during the data analysis phase. The main contribution of this work is a theory of the spatial and temporal resolution of observations that is applicable both to technical sensors (e.g. a camera) and to human sensors. The consistency of the theory was evaluated using the Haskell language, and its practical applicability was illustrated using observations from the web portal Flickr.
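    Since the abstract reports that the consistency of the theory was evaluated in Haskell, a hypothetical flavour of such a formalisation is sketched below: an observation stands for a value over a spatial cell and a temporal window, and the sizes of these extents give its spatial and temporal resolution. The record fields, units and the comparison function are illustrative assumptions, not the thesis's actual types.

        -- Illustrative (not the thesis's) sketch of spatial and temporal resolution:
        -- each observation stands for a value over a spatial cell and a time window,
        -- and the sizes of those extents give its resolution.

        data Observation = Observation
          { value          :: Double
          , cellSizeMetres :: Double  -- edge length of the spatial unit observed
          , windowSeconds  :: Double  -- length of the temporal unit observed
          } deriving Show

        -- Resolution as the (spatial, temporal) extent of one observation value.
        resolution :: Observation -> (Double, Double)
        resolution o = (cellSizeMetres o, windowSeconds o)

        -- One observation is at least as fine-grained as another if both its
        -- spatial and temporal extents are no larger.
        finerOrEqual :: Observation -> Observation -> Bool
        finerOrEqual a b =
          cellSizeMetres a <= cellSizeMetres b && windowSeconds a <= windowSeconds b

        main :: IO ()
        main = do
          let photo  = Observation { value = 21.5, cellSizeMetres = 0.1,  windowSeconds = 0.01 }
              report = Observation { value = 22.0, cellSizeMetres = 1000, windowSeconds = 3600 }
          print (resolution photo)            -- (0.1,1.0e-2)
          print (photo `finerOrEqual` report) -- True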

    On the Way in Upper Mesopotamia
