Collaborative editing of knowledge resources for cross-lingual text mining
The need to deal smoothly with textual documents expressed in different languages is an increasingly relevant issue in modern text mining environments. Recently, research in this field has been considerably fostered by the need for Web users to easily search and browse the growing amount of heterogeneous multilingual content available on-line, as well as by the related spread of the Semantic Web. A common approach to cross-lingual text mining relies on the exploitation of sets of properly structured multilingual knowledge resources. The involvement of large communities of users spread over different locations represents a valuable aid in creating, enriching, and refining these knowledge resources. Collaborative editing Web environments are usually exploited for this purpose.
This thesis analyzes the features of several knowledge editing tools, both semantic wikis and ontology editors, and discusses the main challenges in the design and development of this kind of tool. Subsequently, it presents the design, implementation, and evaluation of the Wikyoto Knowledge Editor (Wikyoto). Wikyoto is a collaborative editing Web environment that enables Web users without any knowledge engineering background to edit the multilingual network of knowledge resources exploited by KYOTO, a cross-lingual text mining system developed in the context of the KYOTO European Project.
To realize real benefits from the social editing of knowledge resources, it is important to provide common Web users with simplified and intuitive interfaces and interaction patterns. Users need to be motivated and properly guided so as to supply information useful for cross-lingual text mining. In addition, the management and coordination of their concurrent editing actions involve significant technical issues.
In the design of Wikyoto, all these requirements have been considered together with the structure and the set of knowledge resources exploited by KYOTO. Wikyoto aims to enable common Web users to formalize cross-lingual knowledge through simplified, language-driven interactions. At the same time, Wikyoto generates the set of complex knowledge structures needed by computers to mine information from textual contents. The learning curve of Wikyoto has been kept as shallow as possible by hiding the complexity of the knowledge structures from the users. This goal has been pursued both by enhancing the simplicity and interactivity of knowledge editing patterns and by using natural language interviews to carry out the most complex knowledge editing tasks. In this context, TMEKO, a methodology that supports users in easily formalizing cross-lingual information through natural language interviews, has been defined. The collaborative creation of knowledge resources has also been evaluated in Wikyoto.
Bridging the gap between folksonomies and the semantic web: an experience report
While folksonomies allow tagging of similar resources with a variety of tags, their content retrieval mechanisms are severely hampered by being agnostic to the relations that exist between these tags. To overcome this limitation, several methods have been proposed to find groups of implicitly inter-related tags. We believe that content retrieval can be further improved by making the relations between tags explicit. In this paper we propose the semantic enrichment of folksonomy tags with explicit relations by harvesting the Semantic Web, i.e., dynamically selecting and combining relevant bits of knowledge from online ontologies. Our experimental results show that, while semantic enrichment needs to be aware of the particular characteristics of folksonomies and the Semantic Web, it is beneficial for both.
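The core idea of making implicit tag relations explicit can be sketched in a few lines. In the sketch below, the hard-coded statement list stands in for knowledge harvested dynamically from online ontologies; it is purely illustrative and not the paper's actual method:

```python
# Minimal sketch of folksonomy tag enrichment: make the implicit
# relations between co-occurring tags explicit by looking them up in
# harvested ontology statements. The statement list is a stand-in for
# dynamic Semantic Web harvesting (hypothetical data, not the paper's).
ONTOLOGY_STATEMENTS = [
    ("jaguar", "subClassOf", "cat"),
    ("cat", "subClassOf", "mammal"),
    ("lion", "subClassOf", "cat"),
]


def explicit_relations(tags):
    """Return every harvested (subject, relation, object) statement
    whose subject and object both occur in the given tag set."""
    tagset = set(tags)
    return [
        (s, p, o)
        for s, p, o in ONTOLOGY_STATEMENTS
        if s in tagset and o in tagset
    ]
```

A retrieval system could then expand a query for "cat" to resources tagged "jaguar" because the subclass relation is now explicit rather than merely implied by co-occurrence.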
Git4Voc: Git-based Versioning for Collaborative Vocabulary Development
Collaborative vocabulary development in the context of data integration is the process of finding consensus among the experts of the different systems and domains. The complexity of this process increases with the number of people involved, the variety of the systems to be integrated, and the dynamics of their domains. In this paper we argue that a powerful version control system is at the heart of the problem. Driven by this idea and by the success of Git in software development, we investigate the applicability of Git to collaborative vocabulary development. Even though vocabulary development and software development share far more similarities than differences, the remaining differences are important and must be considered when building a successful versioning and collaboration system for vocabulary development. Therefore, this paper starts by presenting the challenges we faced during the collaborative creation of vocabularies and discusses how this activity differs from software development. Based on these insights we propose Git4Voc, which comprises guidelines on how Git can be adopted for vocabulary development. Finally, we demonstrate how Git hooks can be implemented to go beyond the plain functionality of Git by realizing vocabulary-specific features such as syntactic validation and semantic diffs.
Knowledge Representation with Ontologies: The Present and Future
Recently, we have seen an explosion of interest in ontologies as
artifacts to represent human knowledge and as critical components in
knowledge management, the semantic Web, business-to-business
applications, and several other application areas. Various research
communities commonly assume that ontologies are the appropriate modeling
structure for representing knowledge. However, little discussion has
occurred regarding the actual range of knowledge an ontology can
successfully represent.
A Survey of Volunteered Open Geo-Knowledge Bases in the Semantic Web
Over the past decade, rapid advances in web technologies, coupled with
innovative models of spatial data collection and consumption, have generated a
robust growth in geo-referenced information, resulting in spatial information
overload. Increasing 'geographic intelligence' in traditional text-based
information retrieval has become a prominent approach to respond to this issue
and to fulfill users' spatial information needs. Numerous efforts in the
Semantic Geospatial Web, Volunteered Geographic Information (VGI), and the
Linking Open Data initiative have converged in a constellation of open
knowledge bases, freely available online. In this article, we survey these open
knowledge bases, focusing on their geospatial dimension. Particular attention
is devoted to the crucial issue of the quality of geo-knowledge bases, as well
as of crowdsourced data. A new knowledge base, the OpenStreetMap Semantic
Network, is outlined as our contribution to this area. Research directions in
information integration and Geographic Information Retrieval (GIR) are then
reviewed, with a critical discussion of their current limitations and future
prospects.
Ontology Localization
Our main goal in this thesis is to propose a solution for building a multilingual ontology through the automatic localization of an ontology. The notion of localization comes from the area of software development, where it refers to the adaptation of a software product to a non-native environment. In ontology engineering, ontology localization can be considered a subtype of software localization in which the product is a shared model of a particular domain, for example an ontology, to be used by a certain application. Specifically, our work introduces a new proposal for the multilingualism problem, describing the methods, techniques, and tools for the localization of ontological resources and how multilingualism can be represented in ontologies. The goal of this work is not to advocate a single approach to ontology localization, but rather to show the variety of methods and techniques that can be adapted from other areas of knowledge to reduce the cost and effort of enriching an ontology with multilingual information. We are convinced that there is no single method for ontology localization; nevertheless, we concentrate on automatic solutions for localizing these resources. The proposal presented in this thesis provides global coverage of the localization activity for ontology practitioners. In particular, this work offers a formal explanation of our general localization process, defining its inputs, outputs, and the main steps identified. In addition, the proposal considers several dimensions for localizing an ontology. These dimensions allow us to establish a classification of translation techniques based on methods taken from the machine translation discipline.
To facilitate the analysis of these translation techniques, we introduce an evaluation framework covering their main aspects. Finally, we offer an intuitive view of the entire ontology localization life cycle and outline our approach to defining a system architecture that supports this activity. The proposed model comprises the system components, the visible properties of those components, and the relations among them, and provides a basis from which ontology localization systems can be developed. The main contributions of this work are summarized as follows:
- A characterization and definition of the problems of ontology localization, based on problems found in related areas. The proposed characterization takes into account three different localization problems: translation, information management, and representation of multilingual information.
- A prescriptive methodology to support the ontology localization activity, based on the localization methodologies used in software engineering and knowledge engineering, kept as general as possible so that it can cover a wide range of scenarios.
- A classification of ontology localization techniques, which can serve to compare (analytically) different ontology localization systems, as well as to design new ones, taking advantage of state-of-the-art solutions.
- An integrated method for building ontology localization systems in a distributed and collaborative environment, which takes into account the most appropriate methods and techniques depending on: i) the domain of the ontology to be localized, and ii) the amount of linguistic information required for the final ontology.
- A modular component to support the storage of the multilingual information associated with each ontology term. Our proposal follows the current trend in integrating multilingual information into ontologies, which suggests that the ontological knowledge and the (multilingual) linguistic information be kept separate and independent.
- A model based on collaborative workflows for representing the process normally followed in different organizations to coordinate the localization activity across different natural languages.
- An integrated infrastructure implemented within the NeOn Toolkit as a set of plug-ins and extensions that support the collaborative ontology localization process.
Challenges to knowledge representation in multilingual contexts
To meet the increasing demands of complex inter-organizational processes and the demand for continuous innovation and internationalization, it is evident that new forms of organisation are being adopted, fostering more intensive collaboration processes and sharing of resources, in what can be called collaborative networks (Camarinha-Matos, 2006:03). Information and knowledge are crucial resources in collaborative networks, and their management is a fundamental process to optimize.
Knowledge organisation and collaboration systems are thus important instruments for the success of collaborative networks of organisations, and have been researched over the last decade in the areas of computer science, information science, management sciences, terminology and linguistics. Nevertheless, research in this area has paid little attention to multilingual contexts of collaboration, which pose specific and challenging problems. It is thus clear that access to and representation of knowledge will increasingly happen in multilingual settings, which implies overcoming the difficulties inherent to the presence of multiple languages through processes such as the localization of ontologies.
Although localization, like other processes involving multilingualism, is a rather well-developed practice, and its methodologies and tools are fruitfully employed by the language industry in the development and adaptation of multilingual content, it has not yet been sufficiently explored as an element of support for the development of knowledge representations, in particular ontologies, expressed in more than one language. Multilingual knowledge representation is therefore an open research area calling for cross-contributions from knowledge engineering, terminology, ontology engineering, the cognitive sciences, computational linguistics, natural language processing, and the management sciences.
This workshop brought together researchers interested in multilingual knowledge representation, in a multidisciplinary environment, to debate the possibilities of cross-fertilization between these disciplines as applied to contexts where multilingualism continuously creates new and demanding challenges for current knowledge representation methods and techniques.
In this workshop six papers dealing with different approaches to multilingual knowledge
representation are presented, most of them describing tools, approaches and results obtained in the
development of ongoing projects.
In the first paper, Andrés Domínguez Burgos, Koen Kerremans and Rita Temmerman present a software module that is part of a workbench for terminological and ontological mining: Termontospider, a wiki crawler that aims to traverse Wikipedia optimally in search of domain-specific texts from which to extract terminological and ontological information. The crawler is part of a tool suite for automatically developing multilingual termontological databases, i.e. ontologically underpinned multilingual terminological databases. The authors describe the basic principles behind the crawler and summarize the research setting in which the tool is currently being tested.
In the second paper, Fumiko Kano presents work comparing four feature-based similarity measures derived from the cognitive sciences. The purpose of the comparative analysis is to identify the potentially most effective model for mapping independent ontologies in a culturally influenced domain. To that end, datasets based on standardized, pre-defined feature dimensions and values, obtainable from the UNESCO Institute for Statistics (UIS), have been used for the comparative analysis, so that the similarity measures are verified against objectively developed datasets. According to the author, the results demonstrate that the Bayesian Model of Generalization provides the most effective cognitive model for identifying the most similar corresponding concepts for a targeted socio-cultural community.
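A classic example of a feature-based similarity measure from the cognitive sciences is Tversky's ratio (feature-contrast) model; the generic sketch below illustrates the family of measures being compared, without claiming to be one of the four in the paper:

```python
def tversky_similarity(features_a, features_b, alpha=0.5, beta=0.5):
    """Tversky's ratio model: similarity of two concepts computed from
    their feature sets, with the distinctive features of each side
    weighted by alpha and beta respectively."""
    a, b = set(features_a), set(features_b)
    common = len(a & b)
    denominator = common + alpha * len(a - b) + beta * len(b - a)
    return common / denominator if denominator else 0.0
```

With alpha = beta = 0.5 this reduces to the Dice coefficient; unequal weights make the comparison directional, which is useful when one ontology's concepts are richer in features than the other's.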
In another presentation, Thierry Declerck, Hans-Ulrich Krieger and Dagmar Gromann present ongoing work and propose an approach to the automatic extraction of information from multilingual financial Web resources, to provide candidate terms for building ontology elements or instances of ontology concepts. The authors present an approach complementary to the direct localization/translation of ontology labels: acquiring terminologies by accessing and harvesting the multilingual Web presences of structured information providers in the field of finance. This leads to the detection of candidate terms in various multilingual sources in the financial domain that can be used not only as labels of ontology classes and properties, but also for the possible generation of (multilingual) domain ontologies themselves.
In the next paper, Manuel Silva, António Lucas Soares and Rute Costa claim that, despite the availability of tools, resources and techniques aimed at the construction of ontological artifacts, developing a shared conceptualization of a given reality still raises questions about the principles and methods that support the initial phases of conceptualization. These questions become, according to the authors, more complex when the conceptualization occurs in a multilingual setting. To tackle these issues, the authors present a collaborative platform, conceptME, where terminological and knowledge representation processes support domain experts throughout a conceptualization framework, allowing the inclusion of multilingual data as a way to promote knowledge sharing, enhance conceptualization, and support multilingual ontology specification.
In another presentation, Frieda Steurs and Hendrik J. Kockaert present TermWise, a large project dealing with legal terminology and phraseology for the Belgian public services, i.e. the translation office of the ministry of justice. The project aims at developing an advanced tool that embeds expert knowledge in the algorithms that extract specialized language from textual data (legal documents). Its outcome is a knowledge database of Dutch/French equivalents for legal concepts, enriched with the phraseology related to the terms under discussion.
Finally, Deborah Grbac, Luca Losito, Andrea Sada and Paolo Sirito report on the preliminary results of a pilot project currently ongoing at the UCSC Central Library, where they propose to adapt the model used by translators working within European Union institutions to subject librarians employed in large, multilingual academic institutions. The authors are using User Experience (UX) analysis to provide subject librarians with visual support, by means of "ontology tables" depicting the conceptual linking and connections of words with concepts, presented according to their semantic and linguistic meaning.
The organizers hope that the selection of papers presented here will be of interest to a broad audience and will serve as a starting point for further discussion and cooperation.
Supporting collaboration in multilingual ontology specification: the conceptME approach
Despite the availability of tools, resources and techniques aimed at
the construction of ontological artifacts, developing a shared conceptualization
of a given reality still raises questions about the principles and methods that
support the initial phases of conceptualization. These questions become more
complex when the conceptualization occurs in a multilingual setting. To tackle
these issues, a collaborative platform, conceptME, was developed, where terminological
and knowledge representation processes support domain experts
throughout a conceptualization framework, allowing the inclusion of multilingual
data to promote knowledge sharing and enhance conceptualization.
Web 2.0, language resources and standards to automatically build a multilingual named entity lexicon
This paper proposes to advance the current state of the art in automatic Language Resource (LR) building by taking three elements into consideration: (i) the knowledge available in existing LRs, (ii) the vast amount of information available through the collaborative paradigm that has emerged from the Web 2.0, and (iii) the use of standards to improve interoperability. We present a case study in which a set of LRs for different languages (WordNet for English and Spanish and Parole-Simple-Clips for Italian) are extended with Named Entities (NE) by exploiting Wikipedia and the aforementioned LRs. The practical result is a multilingual NE lexicon connected to these LRs and to two ontologies: SUMO and SIMPLE. Furthermore, the paper addresses interoperability, an important problem that currently affects the field of Computational Linguistics, by using the ISO LMF standard to encode this lexicon. The different steps of the procedure (mapping, disambiguation, extraction, NE identification and post-processing) are comprehensively explained and evaluated. The resulting resource contains 974,567, 137,583 and 125,806 NEs for English, Spanish and Italian respectively. Finally, in order to check the usefulness of the constructed resource, we apply it in a state-of-the-art Question Answering system and evaluate its impact; the NE lexicon improves the system's accuracy by 28.1%. Compared to previous approaches to building NE repositories, the current proposal represents a step forward in terms of automation, language independence, the number of NEs acquired, and the richness of the information represented.
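The five-step procedure lends itself to a pipeline organization. The skeleton below is purely illustrative: the function names and the toy heuristics (lemma-collision disambiguation, capitalization as an NE cue) are assumptions for the sketch, not the paper's actual algorithms:

```python
# Illustrative pipeline skeleton for the five steps named in the
# abstract (mapping, disambiguation, extraction/NE identification,
# post-processing). All heuristics are placeholders.

def mapping(wiki_titles, lexicon_lemmas):
    """Map Wikipedia article titles onto lexical-resource entries,
    flagging titles that collide with common-noun lemmas."""
    lemmas = set(lexicon_lemmas)
    return {t: (t.lower() in lemmas) for t in wiki_titles}


def disambiguation(mapped):
    """Keep only titles that do not collide with common-noun lemmas."""
    return [t for t, is_common in mapped.items() if not is_common]


def identify_named_entities(titles):
    """Toy NE cue: keep capitalized titles."""
    return [t for t in titles if t[:1].isupper()]


def postprocess(entities):
    """Normalize whitespace and deduplicate, preserving order."""
    seen, out = set(), []
    for e in (" ".join(t.split()) for t in entities):
        if e not in seen:
            seen.add(e)
            out.append(e)
    return out


def build_lexicon(wiki_titles, lexicon_lemmas):
    """Run the full pipeline over a batch of candidate titles."""
    mapped = mapping(wiki_titles, lexicon_lemmas)
    return postprocess(identify_named_entities(disambiguation(mapped)))
```

In the real system, each stage would consult the WordNet/Parole-Simple-Clips resources and encode its output in LMF rather than use these string heuristics.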
AgroPortal: a proposition for ontology-based services in the agronomic domain
Our project is to develop and support a reference ontology repository for the agronomic domain. By reusing the NCBO BioPortal technology, we have already designed and implemented a prototype ontology repository for plants and a few crops, and we plan to turn that prototype into a real service to the community. The AgroPortal project aims at reusing the scientific outcomes and experience of the biomedical domain in the context of the plant, agronomic and environmental sciences. We will offer an ontology portal featuring ontology hosting, search, versioning, visualization and commenting, but we will also offer services for semantically annotating data with the ontologies, as well as for storing and exploiting ontology alignments and data annotations, all within a fully Semantic Web compliant infrastructure. The main objective of this project is to enable straightforward use of agronomy-related ontologies, sparing data managers and researchers the burden of dealing with complex knowledge engineering issues when annotating research data. The AgroPortal project will pay particular attention to the requirements of the agronomic community and the specificities of the crop domain. We will first focus on the outputs of a few existing driving agronomic use cases related to rice and wheat, with the goal of generalizing to other Crop Ontology related use cases. AgroPortal will offer a robust and stable platform that we anticipate will be highly valued by the community.