
    Combining knowledge discovery, ontologies, annotations, and semantic wikis

    Semantic wikis provide an original and operational infrastructure for efficiently combining semantic technologies and collaborative design activities. This text presents: a running example and its context (organization of the collections in a museum); concepts of wikis as a tool for computer-supported cooperative work (CSCW); concepts of semantic technologies and knowledge representation; concepts and examples of semantic wikis; the anatomy of a semantic wiki (reasoning tools, storage, querying); and research directions.
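
    As a rough illustration of that anatomy, the following Python sketch (deliberately simplified and not tied to any particular semantic wiki) shows how page annotations can be stored as subject-predicate-object triples, queried with wildcards, and extended by a one-rule reasoner.

        # Page annotations stored as subject-predicate-object triples.
        triples = set()

        def add(s, p, o):
            triples.add((s, p, o))

        def query(s=None, p=None, o=None):
            """Return all triples matching the pattern; None is a wildcard."""
            return [t for t in triples
                    if (s is None or t[0] == s)
                    and (p is None or t[1] == p)
                    and (o is None or t[2] == o)]

        # Facts contributed by two wiki pages about a museum collection:
        add("MonaLisa", "locatedIn", "Room12")
        add("Room12", "partOf", "DenonWing")

        # One-rule reasoner: X locatedIn Y and Y partOf Z => X locatedIn Z.
        for (x, _, y) in query(p="locatedIn"):
            for (_, _, z) in query(s=y, p="partOf"):
                add(x, "locatedIn", z)

        # Now includes ("MonaLisa", "locatedIn", "DenonWing").
        print(query(s="MonaLisa"))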

    Programming with keywords

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. By Greg Little. Includes bibliographical references (p. 105-108).

    Modern applications provide interfaces for scripting, but many users do not know how to write script commands. However, many users are familiar with the idea of entering keywords into a web search engine. Hence, if users are familiar with the vocabulary of an application domain, they may be able to write a set of keywords expressing a command in that domain. For instance, in the web browsing domain, a user might enter the keywords "click search button". This thesis presents several algorithms for translating keyword queries such as this directly into code. A prototype of this system in the web browsing domain translates "click search button" into the code click(findButton("search")), which can then be executed in the context of a web browser to carry out the effect. Another prototype in the Java domain translates "append message to log" into log.append(message), given an appropriate context of local variables and imported classes. The algorithms and prototypes are evaluated in several studies, which suggest that users can write keyword queries with little or no instruction, and that the resulting translations are often accurate. This is especially true in small domains like the web; in a large domain like Java, the accuracy is comparable to that of writing syntactically correct Java code without assistance.
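
    The following Python sketch gives a flavor of such a translation (the candidate set and the scoring are illustrative assumptions, not the thesis's actual algorithm): candidate commands are scored by keyword overlap with the query, and the leftover tokens fill the argument slot of the winning code template.

        import re

        # Each candidate pairs trigger words with a code template.
        CANDIDATES = [
            ({"click", "button"}, 'click(findButton("{arg}"))'),
            ({"enter", "type", "textbox"}, 'enter(findTextbox("{arg}"))'),
        ]

        def translate(query):
            tokens = re.findall(r"\w+", query.lower())
            # Pick the template sharing the most words with the query.
            words, template = max(CANDIDATES,
                                  key=lambda c: len(c[0] & set(tokens)))
            # Remaining tokens name the target widget (e.g. the button label).
            arg = " ".join(t for t in tokens if t not in words)
            return template.format(arg=arg)

        print(translate("click search button"))  # -> click(findButton("search"))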

    Natural Language Processing: Integration of Automatic and Manual Analysis

    There is a current trend to combine natural language analysis with research questions from the humanities. This requires an integration of automatic analysis with manual analysis, e.g. to develop a theory behind the analysis, to test the theory against a corpus, to generate training data for automatic analysis based on machine learning algorithms, and to evaluate the quality of the results from automatic analysis. Manual analysis is traditionally the domain of linguists, philosophers, and researchers from other humanities disciplines, who are often not expert programmers. Automatic analysis, on the other hand, is traditionally done by expert programmers, such as computer scientists and, more recently, computational linguists. It is important to bring these communities, their tools, and their data closer together, to produce analysis of higher quality with less effort. However, promising collaborations involving manual and automatic analysis, e.g. for the purpose of analyzing a large corpus, are hindered by many problems:

    - No comprehensive set of interoperable automatic analysis components is available.
    - Assembling automatic analysis components into workflows is too complex.
    - Automatic analysis tools, exploration tools, and annotation editors are not interoperable.
    - Workflows are not portable between computers.
    - Workflows are not easily deployable to a compute cluster.
    - There are no adequate tools for the selective annotation of large corpora.
    - In automatic analysis, annotation type systems are predefined, but manual annotation requires customizability.
    - Implementing new interoperable automatic analysis components is too complex.
    - Workflows and components are not sufficiently debuggable and refactorable.
    - Workflows that change dynamically via parametrization are not readily supported.
    - The user has no control over workflows that rely on expert skills from a different domain, undocumented knowledge, or third-party infrastructures, e.g. web services.

    In cooperation with researchers from the humanities, we develop innovative technical solutions and designs to facilitate the use of automatic analysis and to promote the integration of manual and automatic analysis. To address these issues, we lay foundations in four areas:

    - Usability is improved by reducing the complexity of the APIs for building workflows and creating custom components, improving the handling of resources required by such components, and setting up auto-configuration mechanisms.
    - Reproducibility is improved through a concept for self-contained, portable analysis components and workflows, combined with a declarative modeling approach for dynamic parametrized workflows, which helps avoid unnecessary auxiliary manual steps in automatic workflows.
    - Flexibility is achieved by providing an extensive collection of interoperable automatic analysis components. We also compare annotation type systems used by different automatic analysis components to locate design patterns that allow for customization when used in manual analysis tasks.
    - Interactivity is achieved through a novel "annotation-by-query" process combining corpus search with annotation in a multi-user scenario. The process is supported by a web-based tool.

    We demonstrate the adequacy of our concepts through examples which represent whole classes of research problems. Additionally, we integrated all our concepts into existing open-source projects, or implemented and published them within new open-source projects.
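
    As an illustration of the annotation-by-query idea, here is a minimal Python sketch (all names are hypothetical; the actual process is backed by a web-based, multi-user tool): a corpus query produces candidate spans, and each hit is recorded as a stand-off annotation that a human can then confirm or relabel.

        import re
        from dataclasses import dataclass

        @dataclass
        class Annotation:
            doc_id: str      # document containing the span
            begin: int       # character offsets of the matched span
            end: int
            label: str       # label assigned to the span
            annotator: str   # who produced the annotation

        def annotation_by_query(corpus, pattern, label, annotator):
            """Yield one stand-off annotation per regex match in the corpus."""
            for doc_id, text in corpus.items():
                for m in re.finditer(pattern, text):
                    yield Annotation(doc_id, m.start(), m.end(), label, annotator)

        corpus = {"doc1": "The museum opened in 1902 and closed in 1944."}
        for ann in annotation_by_query(corpus, r"\b\d{4}\b", "DATE", "user1"):
            print(ann)  # candidate annotations for manual review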