7 research outputs found

    Artequakt: Generating tailored biographies from automatically annotated fragments from the web

    The Artequakt project seeks to automatically generate narrative biographies of artists from knowledge that has been extracted from the Web and maintained in a knowledge base. An overview of the system architecture is presented here, and the three key components of that architecture are explained in detail: knowledge extraction, information management and biography construction. Conclusions are drawn from the initial experiences of the project and future work is detailed.

    Artificial Intelligence: A Promised Land for Web Services

    6 page(s)

    Towards smart style: combining RDF semantics with XML document transformations

    The 'Document Web' has established itself through the creation of an impressive family of XML and related languages. In addition to this, the 'Semantic Web' is developing its own family of languages based primarily on RDF. Although these families were both developed specifically for 'the Web', each language family has been developed from different premises with specific goals in mind. The result is that combining both families in a single application is surprisingly difficult. This is unfortunate, since the combination of semantic processing with document processing provides advantages in both directions --- namely using semantic inferencing for more intelligent document processing and using document processing tools for presenting semantic representations to an end-user. In this paper, we investigate this integration problem, focusing on the role of (RDF) semantics in selecting, structuring and styling (XML) content. We analyze the approaches taken by two example architectures and use our analysis to derive a more integrated alternative.
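    The integration the abstract describes --- RDF semantics driving the selection of XML content --- can be sketched in a few lines. The following is a minimal illustration, not the paper's architecture: the triples, vocabulary terms, and element names are all invented for the example, and plain Python tuples stand in for a real RDF store.

    ```python
    # Illustrative sketch: RDF-style triples decide which XML elements to keep.
    # All data, predicates, and element names here are assumptions for the demo.
    from xml.etree import ElementTree as ET

    # (subject, predicate, object) triples describing two document sections;
    # a real system would hold these in an RDF store and could also infer them.
    triples = [
        ("doc#intro", "ex:audience", "novice"),
        ("doc#proof", "ex:audience", "expert"),
    ]

    # An XML document whose element ids match the RDF subjects.
    doc = ET.fromstring(
        '<article>'
        '<section id="intro">Overview text.</section>'
        '<section id="proof">Formal details.</section>'
        '</article>'
    )

    def select_for(audience: str) -> list[str]:
        """Return the text of sections whose semantic description matches."""
        keep = {s.split("#")[1] for s, p, o in triples
                if p == "ex:audience" and o == audience}
        return [sec.text for sec in doc.iter("section") if sec.get("id") in keep]

    print(select_for("novice"))  # ['Overview text.']
    ```

    The point of the sketch is the division of labour: the XML side only carries structure and ids, while the decision of *which* content to present lives entirely in the semantic layer.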

    Annotating the semantic web

    The web of today has evolved into a huge repository of rich multimedia content for human consumption. The exponential growth of the web made it possible for information size to reach astronomical proportions; far more than a mere human can manage, causing the problem of information overload. Because of this, the creators of the web (10) spoke of using computer agents in order to process the large amounts of data. To do this, they planned to extend the current web to make it understandable by computer programs. This new web is being referred to as the Semantic Web. Given the huge size of the web, a collective effort is necessary to extend it. For this to happen, tools easy enough for non-experts to use must be available. This thesis first proposes a methodology which semi-automatically labels semantic entities in web pages. The methodology first requires a user to provide some initial examples. The tool then learns how to reproduce the user's examples and generalises over them by making use of Adaptive Information Extraction (AIE) techniques. When its level of performance is good enough compared to the user's, it takes over the process and handles the remaining documents autonomously. The second methodology goes a step further and attempts to gather semantically typed information from web pages automatically. It starts from the assumption that semantics are already available all over the web, and by making use of a number of freely available resources (like databases) combined with AIE techniques, it is possible to extract most information automatically. These techniques will certainly not provide all the solutions for the problems brought about with the advent of the Semantic Web. They are intended to provide a step forward towards making the Semantic Web a reality.

    Ontologiebasierte Indexierung und Kontextualisierung multimedialer Dokumente für das persönliche Wissensmanagement (Ontology-based indexing and contextualisation of multimedia documents for personal knowledge management)

    Personal multimedia document management benefits from Semantic Web technologies and the application of ontologies. However, an ontology-based document management system has to meet a number of challenges regarding flexibility, soundness, and controllability of the semantic data model. The first part of the dissertation proposes the necessary mechanisms for the semi-automatic modeling and maintenance of semantic document descriptions. The second part introduces a component-based, application-independent architecture which forms the basis for the development of innovative, semantics-driven solutions for personal document and information management.

    Designing a Griotte for the Global Village: Increasing the Evidentiary Value of Oral Histories for Use in Digital Libraries

    A griotte in West African culture is a female professional storyteller, responsible for preserving a tribe's history and genealogy by relaying its folklore in oral and musical recitations. Similarly, Griotte is an interdisciplinary project that seeks to foster collaboration between tradition bearers, subject experts, and computer specialists in an effort to build high quality digital oral history collections. To accomplish this objective, the project preserves the primary strength of oral history, namely its ability to disclose "our" intangible culture, and addresses its primary criticism, namely its dubious reliability due to reliance on human memory and integrity. For a theoretical foundation and a systematic model, William Moss's work on the evidentiary value of historical sources is employed. Using his work as a conceptual framework, along with Semantic Web technologies (e.g. Topic Maps and ontologies), a demonstrator system is developed to provide digital oral history tools to a "sample" of the target audience(s). This demonstrator system is evaluated via two methods: 1) a case study in which the system is employed in the actual building of a digital oral history collection (this step also created sample data for the following assessment), and 2) a survey involving a task-based evaluation of the demonstrator system. The results of the survey indicate that integrating oral histories with documentary evidence increases the evidentiary value of oral histories. Furthermore, the results imply that individuals are more likely to use oral histories in their work if their evidentiary value is increased. The contributions of this research – primarily in the area of organizing metadata on the World Wide Web – and considerations for future research are also provided.

    A survey of the application of soft computing to investment and financial trading
