Towards Best Practices for Crowdsourcing Ontology Alignment Benchmarks
Ontology alignment systems establish the semantic links between ontologies that enable knowledge from various sources and domains to be used by automated applications in many different ways. Unfortunately, these systems are not perfect. Currently, the results of even the best-performing automated alignment systems need to be manually verified in order to be fully trusted. Ontology alignment researchers have turned to crowdsourcing platforms such as Amazon's Mechanical Turk to accomplish this. However, there has been little systematic analysis of the accuracy of crowdsourcing for alignment verification, and few best practices have been established. In this work, we analyze how the presentation of the context of potential matches, and the way in which the question is posed to workers, affect the accuracy of crowdsourcing for alignment verification. Our overall recommendation is that users interested in high precision are likely to achieve the best results by presenting the definitions of the entity labels and allowing workers to respond true/false to the question of whether an equivalence relationship exists. Conversely, if the alignment researcher is interested in high recall, they are better off presenting workers with a graphical depiction of the entity relationships and a set of options about the type of relation that exists, if any.
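The precision/recall trade-off behind these recommendations can be made concrete with the standard definitions over verification outcomes; the counts below are hypothetical, not taken from the paper:

```python
def precision_recall(tp, fp, fn):
    """Standard precision and recall from verification counts.

    tp: proposed matches accepted by workers that are truly correct
    fp: proposed matches accepted by workers that are actually wrong
    fn: correct matches that workers rejected
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical outcome of one verification round:
p, r = precision_recall(tp=40, fp=10, fn=20)
# p = 0.8 (few false acceptances), r ~ 0.667 (a third of true matches missed)
```

A true/false question with definitions tends to push workers toward the first regime (fewer false acceptances); richer relation options tend to push toward the second (fewer missed matches).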
Is the crowd better as an assistant or a replacement in ontology engineering? An exploration through the lens of the Gene Ontology
Biomedical ontologies contain errors. Crowdsourcing, defined as taking a job traditionally performed by a designated agent and outsourcing it to an undefined, large group of people, provides scalable access to humans. The crowd therefore has the potential to overcome the limited accuracy and scalability of current ontology quality assurance approaches. Crowd-based methods have identified errors in SNOMED CT, a large clinical ontology, with an accuracy similar to that of experts, suggesting that crowdsourcing is indeed a feasible approach for identifying ontology errors. This work uses that same crowd-based methodology, as well as a panel of experts, to verify a subset of the Gene Ontology (200 relationships). Experts identified 16 errors, generally in relationships referencing acids and metals. The crowd performed poorly in identifying those errors, with an area under the receiver operating characteristic curve ranging from 0.44 to 0.73, depending on the method's configuration. However, when the crowd verified what experts considered to be easy relationships with useful definitions, they performed reasonably well. Notably, there are significantly fewer Google search results for Gene Ontology concepts than for SNOMED CT concepts. This disparity may account for the difference in performance: fewer search results indicate a more difficult task for the worker. The number of Internet search results could therefore serve as a method to assess which tasks are appropriate for the crowd. These results suggest that the crowd fits better as an expert assistant, completing the easy verification tasks and allowing experts to focus on the difficult ones, rather than as an expert replacement.
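The AUC figures quoted (0.44 to 0.73) can be read as the probability that a randomly chosen erroneous relationship receives a higher crowd error score than a randomly chosen correct one. A minimal rank-based (Mann-Whitney) computation, with made-up scores rather than data from the study:

```python
def roc_auc(pos_scores, neg_scores):
    # Probability that a random positive (a true error) outscores a
    # random negative (a correct relationship); ties count as half a win.
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical crowd error scores for known-erroneous vs. correct relations:
auc = roc_auc([0.9, 0.8, 0.4], [0.5, 0.3, 0.2])  # 8/9, about 0.89
```

An AUC of 0.5 corresponds to chance, so the reported 0.44 means one configuration ranked errors slightly worse than random guessing.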
Visual Affect Around the World: A Large-scale Multilingual Visual Sentiment Ontology
Every culture and language is unique. Our work expressly focuses on the uniqueness of culture and language in relation to human affect, specifically sentiment and emotion semantics, and how they manifest in social multimedia. We develop sets of sentiment- and emotion-polarized visual concepts by adapting semantic structures called adjective-noun pairs, originally introduced by Borth et al. (2013), but in a multilingual context. We propose a new language-dependent method for automatic discovery of these adjective-noun constructs. We show how this pipeline can be applied on a social multimedia platform for the creation of a large-scale multilingual visual sentiment concept ontology (MVSO). Unlike the flat structure in Borth et al. (2013), our unified ontology is organized hierarchically by multilingual clusters of visually detectable nouns and subclusters of emotionally biased versions of these nouns. In addition, we present an image-based prediction task to show how generalizable language-specific models are in a multilingual context. A new, publicly available dataset of >15.6K sentiment-biased visual concepts across 12 languages with language-specific detector banks, >7.36M images and their metadata is also released. Comment: 11 pages, to appear at ACM MM'1
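The adjective-noun pair (ANP) idea can be sketched in miniature: combine sentiment-polarized adjectives with visually detectable nouns and keep the pairs that actually co-occur in image tags. The vocabulary, counts, and threshold below are all hypothetical, not the authors' pipeline:

```python
# Illustrative sketch (not the MVSO method): form candidate
# adjective-noun pairs and filter by tag co-occurrence frequency.
adjectives = {"beautiful": +1, "sad": -1}   # adjective -> sentiment polarity
nouns = ["sky", "face"]                     # visually detectable nouns
cooccurrence = {                            # hypothetical tag co-occurrence counts
    ("beautiful", "sky"): 120,
    ("sad", "face"): 95,
    ("sad", "sky"): 3,
}

MIN_COUNT = 10  # assumed frequency threshold
anps = [(a, n) for a in adjectives for n in nouns
        if cooccurrence.get((a, n), 0) >= MIN_COUNT]
# keeps ("beautiful", "sky") and ("sad", "face"); drops the rare ("sad", "sky")
```

In MVSO this construction is additionally done per language, which is what makes the resulting concepts language-dependent rather than translations of a single English vocabulary.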
An ontology roadmap for crowdsourcing innovation intermediaries
Ontologies have proliferated in recent years, essentially justified by the need to achieve a consensus among the multiple representations of reality inside computers, and thereby to accomplish interoperability between machines and systems. Ontologies provide an explicit conceptualization that describes the semantics of the data. Crowdsourcing innovation intermediaries are organizations that mediate the communication and relationship between companies that aspire to solve some problem, or to take advantage of a business opportunity, and a crowd that is prone to contribute ideas based on its knowledge, experience and wisdom, taking advantage of Web 2.0 tools. Various ontologies have emerged but, to the best of our knowledge, there is no ontology that represents the entire process of intermediation of crowdsourcing innovation. In this paper we present an ontology roadmap for developing a crowdsourcing innovation ontology of the intermediation process. Over the years, several authors have proposed distinct methodologies, combining practices, activities and languages in different ways, according to the projects they were involved in. We start with a literature review on ontology building, and analyse and compare ontologies developed from scratch with those that reuse other ontologies. We also review the enterprise and innovation ontologies known in the literature. Finally, we present the criteria for selecting the methodology and the roadmap for building a crowdsourcing innovation intermediary ontology.