
    On the Role of Conceptual and Linguistic Ontologies in Spoken Dialogue Systems

    We report on the role of well-formed conceptual and linguistic ontologies in empirically grounded spoken dialogue systems (SDS). In particular, we use empirical results from spatial dialogues in German to argue for a strict separation of linguistically motivated knowledge from non-linguistic domain concerns. We motivate our arguments with a number of examples relevant to the language generation task, and show how a well-defined separation of linguistic and domain concerns can be effected in a practical SDS.
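    As a rough illustration of what such a separation can look like in an implementation, the sketch below keeps linguistic knowledge (how spatial relations are lexicalised) and domain knowledge (what the world contains) in separate structures, connected only through an explicit mapping layer. All names are invented for this example and are not taken from the paper's system.

```python
# Illustrative sketch only: the structures and relation names below are
# invented for this example, not taken from the paper's actual system.

# Linguistic ontology: knowledge about how spatial relations are expressed.
LINGUISTIC_ONTOLOGY = {
    "Projective-Relation": {"lexicalisations": ["left of", "right of", "behind"]},
    "Topological-Relation": {"lexicalisations": ["in", "on", "at"]},
}

# Domain ontology: knowledge about the application world, with no
# commitment to how anything is verbalised.
DOMAIN_ONTOLOGY = {
    "office-123": {"type": "Room", "contains": ["desk-1"]},
    "desk-1": {"type": "Desk", "located-in": "office-123"},
}

# Explicit mapping layer: the only place where domain relations are linked
# to linguistic categories, keeping the two ontologies independent.
DOMAIN_TO_LINGUISTIC = {
    "located-in": "Topological-Relation",
}

def verbalise(entity: str, relation: str, target: str) -> str:
    """Generate a phrase for a domain fact via the mapping layer."""
    category = DOMAIN_TO_LINGUISTIC[relation]
    preposition = LINGUISTIC_ONTOLOGY[category]["lexicalisations"][0]
    return f"the {DOMAIN_ONTOLOGY[entity]['type'].lower()} is {preposition} {target}"

print(verbalise("desk-1", "located-in", "the office"))
# -> "the desk is in the office"
```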

    An Ontological Framework for the Semantic Structuring and Retrieval of Bibliographic Resources Using Natural Language Processing

    Abstract: The aim of the project is to create an ontological model that describes and relates the elements required for natural language processing in the domain of semantic bibliographic search. The proposal is framed as descriptive research with a mixed-methods approach, since it seeks to systematically characterise a model that addresses a very common problem which can be tackled from a technological perspective. Keywords: information retrieval, semantic web, natural language processing

    An Extended Semantic Interoperability Model for Distributed Electronic Health Record Based on Fuzzy Ontology Semantics

    Semantic interoperability of distributed electronic health record (EHR) systems is a crucial problem for querying EHRs and for machine learning projects. The main contribution of this paper is to propose and implement a fuzzy ontology-based semantic interoperability framework for distributed EHR systems. First, a separate standard ontology is created for each input source. Second, a unified ontology is created that merges the previously created ontologies. However, this crisp ontology cannot answer vague or uncertain queries. Third, to handle this limitation, we extend the integrated crisp ontology into a fuzzy ontology using a standard methodology and fuzzy logic. The dataset used includes identified data of 100 patients. The resulting fuzzy ontology includes 27 classes, 58 properties, 43 fuzzy datatypes, 451 instances, 8376 axioms, 5232 logical axioms, 1216 declaration axioms, 113 annotation axioms, and 3204 data property assertions. The resulting ontology is tested using real data from the MIMIC-III intensive care unit dataset and real archetypes from openEHR. This fuzzy ontology-based system helps physicians accurately query any required data about patients from distributed locations using near-natural-language queries. Domain specialists validated the accuracy and correctness of the obtained results. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (NRF-2021R1A2B5B02002599).
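    The fuzzy extension is what allows vague or uncertain queries (for example, "elderly patients with high blood pressure") to be answered against crisp EHR data. Below is a minimal sketch of that idea using trapezoidal membership functions; the term boundaries and function names are invented for illustration and are not the fuzzy datatypes defined in the paper.

```python
# Illustrative sketch: trapezoidal fuzzy membership functions of the kind a
# fuzzy ontology's fuzzy datatypes encode. The boundaries below are invented
# for this example, not taken from the paper.

def trapezoid(x: float, a: float, b: float, c: float, d: float) -> float:
    """Degree of membership of x in a trapezoidal fuzzy set (a <= b <= c <= d)."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical fuzzy datatypes for two vague query terms.
def is_elderly(age_years: float) -> float:
    return trapezoid(age_years, 55, 65, 120, 121)

def is_high_systolic_bp(mmhg: float) -> float:
    return trapezoid(mmhg, 130, 140, 250, 251)

# A vague query "elderly patients with high blood pressure" can then be
# answered by ranking patients by the minimum (fuzzy AND) of both degrees.
patients = [{"id": "p1", "age": 72, "sbp": 150}, {"id": "p2", "age": 40, "sbp": 135}]
for p in patients:
    degree = min(is_elderly(p["age"]), is_high_systolic_bp(p["sbp"]))
    print(p["id"], round(degree, 2))
```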

    User-System Dialogues and the Notion of Focus

    In recent years, the capabilities of knowledge-based systems to communicate with their users have evolved from simple interactions to complex dialogues. With this evolution comes a need to understand what makes a good dialogue. In this paper, we are concerned with dialogue coherence. We review the notion of focus, which partly explains this property, and its use for user-system communication. First, we examine the major theories dealing with this notion. We describe what their contribution is and how they differ. Then, we illustrate the benefits of using the notion of focus and especially the improvement in text coherence. We pay particular attention to how the notion can concretely be implemented. Its integration with other techniques and theories is described. We conclude the paper by pointing out remaining issues in the understanding of the notion of focus. The contribution of this paper is to provide a classification of the theories of focus and to show the improvements they offer in elaborate user-system dialogues.
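    One concrete way to implement the notion of focus, in the spirit of the attentional-state (focus space stack) theories such a review covers, is to keep a stack of focus spaces and to check that each new utterance mentions something currently in focus. The sketch below is an illustration of that general idea, with invented entity names; it is not a reconstruction of any specific theory discussed in the paper.

```python
# Illustrative sketch of a focusing mechanism: a stack of focus spaces, each
# holding the entities currently in the participants' attention. Entity names
# are invented for this example.

class FocusStack:
    def __init__(self):
        self.spaces = []  # each space is a set of salient entities

    def push(self, entities):
        """Open a new discourse segment with its own focus space."""
        self.spaces.append(set(entities))

    def pop(self):
        """Close the current segment, returning attention to the outer one."""
        self.spaces.pop()

    def is_coherent(self, mentioned) -> bool:
        """A next utterance is coherent if it mentions something in focus."""
        return bool(self.spaces) and bool(set(mentioned) & self.spaces[-1])

stack = FocusStack()
stack.push({"research group", "members"})       # talking about the group
stack.push({"web site", "home page"})           # sub-segment about its site
print(stack.is_coherent({"home page"}))         # True: stays in focus
print(stack.is_coherent({"holiday schedule"}))  # False: abrupt topic shift
```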

    Key steps for the construction of a glossary based on FunGramKB Term Extractor and referred to international cooperation against organised crime and terrorism

    The employment of new technological instruments for the processing of natural languages is crucial to improve the way humans interact with machines. The Functional Grammar Knowledge Base (FunGramKB henceforth) has been designed to cover Natural Language Processing (NLP henceforth) tasks in the area of Artificial Intelligence. The multipurpose lexical conceptual knowledge base FunGramKB is capable of combining linguistic knowledge and human cognitive abilities within its system as a whole. The conceptual module of FunGramKB contains common-sense knowledge (Ontology), procedural knowledge (Cognicon), and knowledge about named entities representing people, places, organisations or other entities (Onomasticon). The Onomasticon component is used to process the information from the perspective of specialised discourse. The definition in natural language of a consistent list of encyclopaedic terms in the GCTC, referring to the legislation and to the entities that fight against organised crime and terrorism, would be the stepping stone for the future development of the Onomasticon. The FunGramKB Term Extractor (FGKBTE henceforth) is used to process the information. To include the terms in the Onomasticon according to the Conceptual Representation Language (COREL henceforth) schemata, the DBpedia project has been of paramount importance for developing specific patterns for the structure of the definitions. Universidad de Granada. Departamento de Filologías Inglesa y Alemana. Máster en Lingüística y Literatura Inglesas, curso 2013-201
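    Since DBpedia is singled out as the source from which definition patterns are derived, the sketch below shows how encyclopaedic material for a named entity can be retrieved from the public DBpedia SPARQL endpoint. The choice of entity (Europol) and the use of the requests library are assumptions made for this example, not details taken from the project.

```python
# Illustrative sketch: pulling an encyclopaedic definition for a named entity
# from the public DBpedia SPARQL endpoint. The entity (Europol) and the use of
# the `requests` library are assumptions made for this example.
import requests

ENDPOINT = "https://dbpedia.org/sparql"
QUERY = """
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbr: <http://dbpedia.org/resource/>
SELECT ?abstract WHERE {
  dbr:Europol dbo:abstract ?abstract .
  FILTER (lang(?abstract) = "en")
}
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY, "format": "application/sparql-results+json"},
    timeout=30,
)
response.raise_for_status()
bindings = response.json()["results"]["bindings"]
if bindings:
    # The English abstract can then serve as raw material for a COREL-style
    # definition pattern for the corresponding Onomasticon entry.
    print(bindings[0]["abstract"]["value"][:300])
```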

    In defence of a linguistic-aware approach to natural language processing

    Although natural language processing can be deemed a discipline between applied linguistics and artificial intelligence, theoretical linguistics has played a remarkably minor role in this field of research. One of the goals of this paper is to portray the reasons for the failed symbiosis between linguists' research and that of computer scientists, where probabilistic approaches have been steadily overshadowing linguistic models. In spite of this discouraging scenario, FunGramKB, a knowledge base particularly designed for natural language understanding systems, serves to illustrate how a language-aware and cognitively plausible approach to human-like processing can contribute to the development of enhanced knowledge-engineering projects. This work forms part of two research projects funded by the Spanish Ministry of Science and Technology, grant codes FFI2011-29798-C02-01 and FFI2010-15983. The author also thanks Francisco Cortés Rodríguez, Carlos González Vergara and Ricardo Mairal Usón for their comments on the first draft of this article. Periñán Pascual, J. C. (2012). En defensa del procesamiento del lenguaje natural fundamentado en la lingüística teórica. Onomázein: Revista de Lingüística, Filología y Traducción, (26), 13-48. http://hdl.handle.net/10251/45752

    Formalisation and evaluation of focus theories for requirements elicitation dialogues in natural language

    Get PDF
    Requirements engineering is an important part of software engineering. It consists in defining the needs of users when building a new system. These needs may be functional, i.e., what services the system should be able to provide, as well as non-functional, i.e., under which constraints the system should operate. Errors in requirements may have disastrous effects on the rest of the software engineering process (Brooks 1995, p. 199), since they would lead to the construction of a system of little interest to its users or would require expensive modifications to correct. Because requirements documents may be very large, errors are usually hard to detect manually. Computer support is therefore often beneficial for their analysis. This is made easier if requirements are expressed formally. However, this support must also be adapted to, and usable by, the people who are expressing their requirements. These people are usually not computer specialists and are not accustomed to using formal languages. It is therefore necessary to help them express their requirements. Numerous approaches have been suggested as aids to the acquisition of requirements (Reubenstein 1990). Much less attention has been paid to the control of the dialogue taking place between the users and the system whilst using such frameworks (Bubenko et al. 1994). Frameworks for requirements acquisition are not normally accompanied by theories of the types of dialogue which they support. Our ability to develop sophisticated formal frameworks to analyse requirements makes this deficiency more acutely felt, since increases in formality are often accompanied by greater difficulty in understanding and using the frameworks (Robertson et al. 1989).
    Users write their requirements in more or less natural language. This is then translated into a formal language that can be interpreted by the elicitation module. This module works on the requirements and provides feedback. The translation process is then applied to convert the feedback back into more or less natural language. Different systems put different emphasis on the parts of that general architecture. Some are very good at natural language interpretation, while others put more emphasis on analysing the requirements and providing feedback.
    Natural language approaches to requirements elicitation put an emphasis on natural language interpretation (see section 1.2.1). In these approaches, users write their specification in a subset of natural language. The system then translates it into a formal notation. The main benefit provided by these approaches is the improvement in the ease of use of the system: natural language is the main means of communication for human beings and does not need to be learned. However, most of these approaches do not provide a dialogue well suited to the requirements elicitation process. Because they translate the natural language specification into a formal notation but do not provide guidance on how to write the specification in the first place, users are left in charge of writing correct requirements. If a mistake is made while writing the specification, it will simply be translated into the formal notation.
    In order to actively help users in the process of writing the requirements, the elicitation system must interact with them. The emphasis here is no longer on translating requirements, but on actively extracting them through a dialogue with users. This is useful, since the requirements elicitation process is complex, and offering guidance is a big help for users. Unfortunately, most of the approaches providing guidance expose their formal underlying frameworks directly to users (see section 1.2.2). In order to benefit from the guidance provided, users have to learn the idiosyncrasies of the system they use. The task of providing guidance is complicated by the fact that there are numerous ways of carrying out the requirements elicitation. Very little research has been done on how best to organise the elicitation process to provide effective guidance. An arbitrary choice could be made, but forcing users to adopt a predefined method is usually not possible, as it would make the elicitation process very difficult to follow and understand. The system must therefore be able to adapt itself to various elicitation methods. On the other hand, it is necessary for the system to make choices in order to provide active guidance. A "least-commitment" strategy, such as asking users at every choice point what to do next, is not a useful approach (Ferguson et al. 1996).
    One way of offering guidance without restricting users too much is by communicating with them in natural language, and by using natural language constraints to inform the choices made by the system to select a guidance strategy. These constraints ensure that the system adopts a strategy that will guide users in a natural and understandable manner, by taking into account the current state of the dialogue. In other words, the system takes into account the current state of the specification to help users complete it, but the current state of the dialogue is the principal factor constraining what will be spoken about next. Using such an approach reduces some of the problems discussed above. The specification does not need to be immediately correct, as it will be checked and reworked by the system. The formal framework is hidden from users but is still there to ensure the correctness of the specifications. Guidance is continuously offered through dialogue, which is influenced by, but does not directly follow, the steps of construction of the specification.
    The natural language constraints we use in this thesis are theories of dialogue coherence, called "focus" theories. They define what can be spoken about next in a dialogue based on what has already been discussed and the subject under discussion. The theories take into account what participants in a dialogue pay attention to and try to ensure that the rest of the dialogue is related to it. The system tries to help its users define what a research group's WWW site should look like. The way the dialogue evolves from discussing the research group, to discussing the site and its associated home page, to discussing the set of publications can quite easily be followed. The use of pronouns helps in making the text feel natural. It would have been difficult to achieve the same result without using focus rules.
    Other techniques for organising dialogues, such as those based on the intentions underlying the dialogue (Cohen et al. 1990), would require the dialogue manager to know what the elicitation system is trying to achieve and what its plan is. For some elicitation systems, this knowledge may not be available. Similarly, techniques based on the content of the communications exchanged and how they relate, e.g., based on RST (Mann and Thompson 1987), usually require a lot of domain knowledge. They are therefore time-consuming to code. Focus theories require less information from the elicitation module while still enabling the dialogue manager to structure the dialogue.
    However, in some cases, focus theories are not sufficient to organise a dialogue. We use a theory based on speech acts (see section 3.4.1) and some ideas from Grice's work on conversation (see section 5.2.1) to deal with these cases. More generally, although we tried to minimise the impact of other theories in order to study focus theories in detail, it would be interesting to know whether and how we can integrate them with the work presented in this thesis. In particular, the notion of dialogue act and its application to dialogue grammars could be of interest. General frameworks developed to study various aspects of dialogue, including dialogue acts and focus, have started to appear, but work is still at an early stage (C-Star Consortium 1998; Allen and Core 1997).
    Organising a dialogue based on attention requires a lot of domain knowledge in order to know how the things mentioned in the dialogue relate to each other. Therefore, the amount of knowledge engineering needed to build natural language applications is also an important issue. We have tried to limit the engineering difficulties by clearly separating the domain knowledge needed by our dialogue manager from its management capabilities, and by providing a way of re-using the existing domain knowledge as far as possible. This is done by using rules which enable us to re-use part of the domain knowledge already used by the elicitation module.
    The contribution of this thesis is therefore the formalisation and evaluation of focus theories for requirements elicitation dialogues in natural language. The main questions we deal with are the following:
    • Which focus theories should we use?
    • What are the relations between the constraints imposed by the focus theories and the constraints inherent to the requirements elicitation process?
    • Does this approach improve the perceived quality of the dialogue between the elicitation tool and its users?
    A prototype system has been developed. This system mainly operates in the WWW site design domain. It has also been applied in other domains as an initial demonstration of the range of problems that can be tackled by our approach.
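    As a rough illustration of how focus constraints can guide an elicitation dialogue, the sketch below prefers questions about entities related to the entity currently in focus, so that topic shifts stay gradual. The rule and the domain facts are invented for this example; they are not the formalisation developed in the thesis.

```python
# Illustrative sketch: choosing the next elicitation question under a simple
# focus constraint. The rule and the domain facts are invented for this
# example; they are not the thesis's actual formalisation.

# Domain knowledge re-used from a hypothetical elicitation module:
# which entities are directly related to which.
RELATED = {
    "research group": {"members", "web site"},
    "web site": {"home page", "publications page"},
}

# Questions the elicitation module would like answered, keyed by the
# entity they are about.
PENDING_QUESTIONS = {
    "members": "Who are the members of the group?",
    "home page": "What should the home page contain?",
    "publications page": "How should publications be listed?",
}

def next_question(current_focus: str) -> str:
    """Prefer a question about an entity related to the current focus,
    so the dialogue shifts topic gradually rather than abruptly."""
    in_focus = RELATED.get(current_focus, set())
    for entity, question in PENDING_QUESTIONS.items():
        if entity in in_focus:
            return question
    # Fall back to any pending question if nothing related is in focus.
    return next(iter(PENDING_QUESTIONS.values()))

print(next_question("web site"))        # -> "What should the home page contain?"
print(next_question("research group"))  # -> "Who are the members of the group?"
```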

    Creativity and the Design Process in Architecture: A Cognitivist Approach

    Following a cognitivist approach, the thesis develops an analysis of the design process in architecture and reflects on the cognitive processes applied to space and to the design project, in order to identify the role that creativity plays within the design process. It also considers the variations, and the constraints placed on creativity, in carrying out the architectural project.