
    ONTOGENERATION: Reusing Domain and Linguistic Ontologies for Spanish Text Generation

    A significant problem facing the reuse of ontologies is making their content more widely accessible to any potential user. Wording all the information represented in an ontology is the best way to ease the retrieval and understanding of its contents. This article proposes a general approach to reusing domain and linguistic ontologies with natural language generation technology, describing a practical system for the generation of Spanish texts in the domain of chemical substances. For this purpose the following steps have been taken: (a) an ontology in the chemicals domain, developed under the METHONTOLOGY framework and the Ontology Design Environment (ODE), has been taken as the knowledge source; (b) the linguistic ontology GUM (Generalized Upper Model), used for other languages, has been extended and modified for Spanish; (c) a Spanish grammar has been built following the systemic-functional model, using the KPML (Komet-Penman Multilingual) environment. As a result, the final system, named Ontogeneration, allows the user to consult and retrieve all the information in the ontology in Spanish.
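
    To make the pipeline concrete, the sketch below illustrates the underlying idea of verbalizing ontology content through Spanish sentence templates. It is only an illustration under assumed data: the chemical concepts, relation names, and templates are invented, and the actual Ontogeneration system builds sentences compositionally with the GUM ontology and the KPML grammar rather than by slot filling.

# Minimal sketch of ontology verbalization in the spirit of Ontogeneration.
# The triples, relation names, and Spanish templates are illustrative
# assumptions, not the actual METHONTOLOGY/GUM/KPML resources.
ONTOLOGY = {
    ("azufre", "es_un", "no metal"),
    ("azufre", "tiene_numero_atomico", "16"),
    ("cloro", "es_un", "halógeno"),
}

# Each relation is paired with a sentence template; a real systemic-functional
# grammar would build these sentences compositionally instead of filling slots.
TEMPLATES = {
    "es_un": "El {s} es un {o}.",
    "tiene_numero_atomico": "El {s} tiene número atómico {o}.",
}

def verbalize(subject):
    """Return Spanish sentences describing what the ontology says about a concept."""
    return [
        TEMPLATES[rel].format(s=s, o=o)
        for (s, rel, o) in sorted(ONTOLOGY)
        if s == subject and rel in TEMPLATES
    ]

for sentence in verbalize("azufre"):
    print(sentence)  # e.g. "El azufre es un no metal."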

    A comparison of languages which operationalise and formalise KADS models of expertise

    In the field of Knowledge Engineering, dissatisfaction with the rapid-prototyping approach has led to a number of more principled methodologies for the construction of knowledge-based systems. Instead of immediately implementing the gathered and interpreted knowledge in a given implementation formalism, as in the rapid-prototyping approach, many of these methodologies centre around the notion of a conceptual model: an abstract, implementation-independent description of the relevant problem-solving expertise. A conceptual model should describe the task which is solved by the system and the knowledge which it requires. Although such conceptual models have often been formulated informally, recent years have seen the advent of formal and operational languages to describe conceptual models more precisely and, by making them executable, to support model evaluation. In this paper, we study a number of such formal and operational languages for specifying conceptual models. To enable a meaningful comparison, we focus on languages which are all aimed at the same underlying conceptual model, namely that of the KADS method for building knowledge-based systems (KBS). We describe eight formal languages for KADS models of expertise and compare them with respect to their modelling primitives, their semantics, their implementations and their applications. Future research issues in the area of formal and operational specification languages for KBS are identified as a result of studying these languages. The paper also contains an extensive bibliography of research in this area.
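
    As a rough illustration of the kind of conceptual model these languages formalise, the sketch below encodes the layered KADS view of expertise (domain knowledge, inference structure, task structure) as plain data structures. The layer names follow the KADS literature; the diagnosis example content is an assumption made up for illustration, not taken from any of the eight surveyed languages.

# Illustrative sketch of the layered structure that KADS-style languages formalise.
# Layer names (domain, inference, task) follow the KADS literature; the diagnosis
# example content is an assumption made up for illustration.
from dataclasses import dataclass, field

@dataclass
class InferenceStep:
    name: str           # e.g. "abstract", "match"
    inputs: list        # knowledge roles consumed
    outputs: list       # knowledge roles produced

@dataclass
class ModelOfExpertise:
    domain_knowledge: dict = field(default_factory=dict)
    inference_structure: list = field(default_factory=list)
    task_structure: list = field(default_factory=list)  # control over the inferences

diagnosis = ModelOfExpertise(
    domain_knowledge={"causes": ["fault-A -> symptom-1", "fault-B -> symptom-2"]},
    inference_structure=[
        InferenceStep("abstract", ["observations"], ["findings"]),
        InferenceStep("match", ["findings", "causes"], ["hypotheses"]),
    ],
    task_structure=["abstract", "match"],
)
print(len(diagnosis.inference_structure), "inference steps")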

    Ontology-based question answering systems over knowledge bases: a survey

    Searching for relevant, specific information in large volumes of data is quite a challenging task. Despite the numerous strategies in the literature to tackle this problem, the task is usually carried out by resorting to Question Answering (QA) systems. There are many ways to build a QA system, such as heuristic approaches, machine learning, and ontologies. Recent research has focused on ontology-based methods, since the resulting QA systems can benefit from knowledge modeling. In this paper, we present a systematic literature survey of ontology-based QA systems over knowledge bases. We also detail the evaluation process carried out in these systems and discuss how each approach differs from the others in terms of the challenges faced and strategies employed. Finally, we present the most prominent research issues still open in the field.
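
    A minimal sketch of the pipeline that ontology-based QA systems share, namely mapping a natural-language question to a structured query over a knowledge base, is given below. The tiny RDF ontology, the naive question interpretation, and the SPARQL template are illustrative assumptions; real systems use far richer linguistic analysis and ontology lexicalisation.

# Minimal sketch of ontology-based question answering: interpret a question and
# translate it into a SPARQL query over a small RDF knowledge base. The ontology,
# the naive interpretation step, and the query template are illustrative assumptions.
from rdflib import Graph

TTL = """
@prefix ex: <http://example.org/> .
ex:Insulin   ex:treats ex:Diabetes ; ex:label "insulin" .
ex:Metformin ex:treats ex:Diabetes ; ex:label "metformin" .
"""

def question_to_sparql(question):
    # Toy interpretation: recognise the relation keyword and the entity mention.
    assert "treats" in question.lower() and "diabetes" in question.lower()
    return """
        PREFIX ex: <http://example.org/>
        SELECT ?name WHERE { ?drug ex:treats ex:Diabetes ; ex:label ?name . }
    """

graph = Graph()
graph.parse(data=TTL, format="turtle")
for row in graph.query(question_to_sparql("What treats diabetes?")):
    print(row.name)  # "insulin", "metformin"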

    Classification and representation of types in TDL

    TDL is a typed feature-based representation language and inference system, specifically designed to support highly lexicalized constraint-based grammar theories. Type definitions in TDL consist of type and feature constraints over the full Boolean connectives together with coreferences, making TDL Turing-complete. TDL provides open- and closed-world reasoning over types. Working with partially as well as with fully expanded types is possible. Efficient reasoning in TDL is accomplished through specialized modules. In this paper, we highlight the type/inheritance hierarchy module of TDL and show how we represent conjunctively and disjunctively defined types. Negated types and incompatible types are handled by specialized bottom symbols. Redefining a type only leads to the redefinition of the dependent types, not of the whole grammar/lexicon. Undefined types require no special treatment. Reasoning over the type hierarchy is partially realized by a bit-vector encoding of types, similar to the one used in Aït-Kaci's LOGIN. However, the underlying semantics does not harmonize with the open-world assumption of TDL, so we generalize the GLB/LUB operations to account for this fact. The system, as presented in the paper, has been fully implemented in Common Lisp and is an integrated part of a large NL system. It has been installed and successfully employed at other sites and runs on various platforms.
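
    The following sketch illustrates a bit-vector type encoding in the spirit of Aït-Kaci's LOGIN, where greatest lower bounds reduce to bitwise AND over the codes. The toy hierarchy and the tie-breaking are assumptions for illustration; TDL's actual module additionally handles open-world reasoning, negated types, and redefinition, which are not modelled here.

# Sketch of a bit-vector type encoding in the spirit of Aït-Kaci's LOGIN: each type's
# code is the bitset of all types at or below it, so greatest lower bounds reduce to
# bitwise AND. The toy hierarchy is an assumption; TDL's open-world reasoning,
# negated types, and redefinition are not modelled here.
HIERARCHY = {               # type -> direct subtypes
    "top":    ["sign", "list"],
    "sign":   ["word", "phrase"],
    "list":   [],
    "word":   [],
    "phrase": [],
}

BIT = {t: 1 << i for i, t in enumerate(HIERARCHY)}

def code(t):
    c = BIT[t]
    for sub in HIERARCHY[t]:
        c |= code(sub)
    return c

def glb(a, b):
    """Greatest lower bound, or None in place of the bottom symbol."""
    common = code(a) & code(b)
    candidates = [t for t in HIERARCHY if code(t) & ~common == 0]
    candidates.sort(key=lambda t: bin(code(t)).count("1"), reverse=True)
    return candidates[0] if candidates else None  # a unique maximum is assumed here

print(glb("sign", "top"))   # "sign"
print(glb("word", "list"))  # None: the types are incompatible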

    New IR & Ranking Algorithm for Top-K Keyword Search on Relational Databases ‘Smart Search’

    Database management systems are as old as computers, and research and development in databases continues on a large scale, attracting many database vendors and researchers. Much of this research aims at modules and frameworks for more efficient and effective information retrieval based on free-form search by users who have no knowledge of the structure of the database. Our work extends previous efforts by introducing new algorithms and components on top of existing databases, enabling the user to search for keywords with high performance and effective top-k results. The work introduces a new table structure for the indexing of keywords, which helps the algorithms capture the semantics of keywords and generate only the correct CNs (Candidate Networks) for fast retrieval of information, with results ranked according to the user's history, the semantics of the keywords, the distance between keywords, and the degree of keyword match. Three modules were developed for this purpose. We implemented the three proposed modules, created the necessary tables, and developed a web search interface called 'Smart Search' to test our work with different users. The interface records all user interaction with Smart Search for analysis, and the analysis of the results shows improvements in performance and in the effectiveness of the results returned to the user. We ran hundreds of randomly generated search queries of different sizes with multiple users; all results recorded and analysed by the system were based on different factors and parameters. We also compared our results with previous work by other researchers on the DBLP database, which we used in our research. The final analysis shows the importance of introducing new components to the database for top-k keyword search, and the strong performance and effectiveness of our proposed system.
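
    To make the ranking idea concrete, the sketch below scores candidate answers from keyword coverage, keyword proximity, and the user's search history, and keeps the top-k. The weights, the linear scoring formula, and the example candidates are assumptions for illustration, not the thesis' actual algorithm or data.

# Sketch of top-k ranking in the spirit of 'Smart Search': candidate answers are
# scored from keyword coverage, keyword proximity, and the user's search history.
# The weights, the linear formula, and the example candidates are assumptions,
# not the thesis' actual algorithm or data.
from dataclasses import dataclass
import heapq

@dataclass
class Candidate:
    text: str               # joined tuple produced by a candidate network (CN)
    matched_keywords: int   # how many query keywords the tuple covers
    keyword_span: int       # word distance between the first and last match
    history_hits: int       # how often the user previously clicked similar results

def score(c, w_match=1.0, w_span=0.2, w_history=0.5):
    return w_match * c.matched_keywords - w_span * c.keyword_span + w_history * c.history_hits

def top_k(candidates, k):
    return heapq.nlargest(k, candidates, key=score)

results = top_k([
    Candidate("Ullman: Principles of Database Systems", 2, 1, 3),
    Candidate("Ullman, Widom: A First Course in Database Systems", 2, 4, 0),
    Candidate("Widom: Trio project overview", 1, 0, 0),
], k=2)
for r in results:
    print(round(score(r), 2), r.text)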

    A two-level representation for spatial relations - Part I

    A model for representing spatial relations is presented. It is used for the definition of common-sense knowledge of rational agents in a multi-agent scenario. The main idea is that the model is structured in two levels: relations may be represented in terms of predicate logic at one level, or as expressions over Cartesian coordinates at the other. Hence reasoning is possible both with common rules of deduction and via exact calculations on the positions. Here we give an overview of the whole structure and then investigate the definition of a set of spatial relations at the "Logical Level". Finally, special features such as the handling of context and the problem of multiple views are discussed.
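
    The sketch below illustrates the two-level idea on a single relation: "left of" is available both as a numeric test over Cartesian coordinates and as a symbolic fact that ordinary deduction rules, such as transitivity, can operate on. Object names, coordinates, and the chosen semantics are illustrative assumptions, not the relation set defined in the paper.

# Sketch of the two-level representation: the same relation exists as a numeric test
# over Cartesian coordinates and as symbolic facts that deduction rules can use.
# Object names, coordinates, and the 'left_of' semantics are illustrative assumptions.
POSITIONS = {"cup": (2.0, 1.0), "plate": (5.0, 1.0), "jug": (8.0, 1.0)}

def left_of_coords(a, b):
    # Coordinate level: exact calculation on positions.
    return POSITIONS[a][0] < POSITIONS[b][0]

# Logical level: facts derived once from the coordinates, then used by ordinary
# deduction rules without touching the numbers again.
FACTS = {(a, b) for a in POSITIONS for b in POSITIONS if a != b and left_of_coords(a, b)}

def left_of_logical(a, b):
    if (a, b) in FACTS:
        return True
    # Transitivity rule: left_of(a, c) and left_of(c, b) implies left_of(a, b).
    return any((a, c) in FACTS and (c, b) in FACTS for c in POSITIONS)

print(left_of_coords("cup", "jug"))   # True, computed from coordinates
print(left_of_logical("cup", "jug"))  # True, derived by deduction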

    Mechanisms for structuring knowledge-based systems


    Verzeichnis von Softwarekomponenten für natürlichsprachliche Systeme: Ergebnisse einer Umfrage im Rahmen der VERBMOBIL-Vorbereitung (Directory of software components for natural language systems: results of a survey conducted in preparation for VERBMOBIL)

    The DFKI (Deutsches Forschungszentrum für Künstliche Intelligenz, the German Research Center for Artificial Intelligence) was commissioned by the BMFT (Bundesministerium für Forschung und Technologie, the Federal Ministry for Research and Technology) to conduct a survey of existing software components in the area of natural language processing (413 - 4001 - 01 IV 201). The aim of the survey was to compile an overview of software components available in Germany that could be relevant to the VERBMOBIL project in the area of natural language systems. The result of this survey is now available. To carry out the survey, a questionnaire was prepared and distributed in March 1992 via the newsgroup mod-ki, and it was also sent to roughly 400 addresses (members of the Gesellschaft für Informatik e. V., FA 1.3.1 "Natürliche Sprache", and members of the DGfS, Sektion Computerlinguistik). The directory is restricted to software developed in Germany and includes academic, commercial, and proprietary software, stating in each case the conditions under which the components are available.