A Large Scale Dataset for the Evaluation of Ontology Matching Systems
Recently, the number of ontology matching techniques and systems has increased significantly, which makes the issue of their evaluation and comparison more pressing. One of the challenges of ontology matching evaluation is building large scale evaluation datasets. In fact, the number of possible correspondences between two ontologies grows quadratically with the number of entities in these ontologies. This often makes the manual construction of evaluation datasets demanding to the point of being infeasible for large scale matching tasks. In this paper we present an ontology matching evaluation dataset composed of thousands of matching tasks, called TaxME2. It was built semi-automatically out of the Google, Yahoo and Looksmart web directories. We evaluated TaxME2 by exploiting the results of almost two dozen state-of-the-art ontology matching systems. The experiments indicate that the dataset possesses the desired key properties, namely it is error-free, incremental, discriminative, monotonic, and hard for state-of-the-art ontology matching systems. The paper has been accepted for publication in "The Knowledge Engineering Review", Cambridge University Press (ISSN: 0269-8889, EISSN: 1469-8005).
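A rough illustration of why manual dataset construction becomes infeasible, as a minimal sketch (the ontology sizes below are hypothetical and not figures from the paper):

```python
# Hypothetical sizes for two web-directory ontologies; not figures from the paper.
entities_a = 50_000
entities_b = 80_000

# Every (entity_a, entity_b) pair is a candidate correspondence, so the
# search space grows quadratically with ontology size.
candidate_correspondences = entities_a * entities_b
print(f"{candidate_correspondences:,} candidate correspondences")
# 4,000,000,000 pairs -- far beyond what human annotators can validate manually.
```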
Using background knowledge for ontology evolution
One of the current bottlenecks for automating ontology evolution is resolving the right links between newly arising information and the existing knowledge in the ontology. Most existing approaches rely mainly on the user when it comes to capturing and representing new knowledge. Our ontology evolution framework intends to reduce or even eliminate user input through the use of background knowledge. In this paper, we show how various sources of background knowledge can be exploited for relation discovery. We perform a relation discovery experiment focusing on the use of WordNet and Semantic Web ontologies as sources of background knowledge. We back our experiment with a thorough analysis that highlights various issues concerning how to improve and validate relation discovery in the future, which will directly improve the task of automatically performing ontology changes during evolution.
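A minimal sketch of how WordNet could be used to propose a relation between a newly arising term and an existing concept, assuming NLTK with the WordNet corpus installed; the terms, the naive sense selection, and the similarity threshold are illustrative assumptions, not the framework described in the paper:

```python
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

def propose_relation(new_term, existing_concept):
    """Suggest a candidate relation between two terms via their WordNet synsets."""
    s1, s2 = wn.synsets(new_term), wn.synsets(existing_concept)
    if not s1 or not s2:
        return None  # no background knowledge available for one of the terms
    a, b = s1[0], s2[0]  # naively take the most common sense of each term
    # Subsumption check: if one synset is an ancestor of the other, propose is-a.
    common = a.lowest_common_hypernyms(b)
    if b in common:
        return (new_term, "is-a", existing_concept)
    if a in common:
        return (existing_concept, "is-a", new_term)
    # Otherwise fall back to a similarity score as weak evidence of relatedness.
    sim = a.path_similarity(b)
    return (new_term, "related-to", existing_concept) if sim and sim > 0.2 else None

print(propose_relation("dog", "animal"))  # likely: ('dog', 'is-a', 'animal')
```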
Computing word-of-mouth trust relationships in social networks from Semantic Web and Web 2.0 data sources
Social networks can serve as both a rich source of new information and as a filter to identify the information most relevant to our specific needs. In this paper we present a methodology and algorithms that, by exploiting existing Semantic Web and Web 2.0 data sources, help individuals identify who in their social network knows what, and who is the most trustworthy source of information on that topic. Our approach improves upon previous work in a number of ways, such as incorporating topic-specific rather than global trust metrics. This is achieved by generating topic experience profiles for each network member, based on data from Revyu and del.icio.us, to indicate who knows what. Identification of the most trustworthy sources is enabled by a rich trust model of information and recommendation seeking in social networks. Reviews and ratings created on Revyu provide source data for algorithms that generate topic expertise and person-to-person affinity metrics. Combining these metrics, we are implementing a user-oriented application for searching and automated ranking of information sources within social networks.
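A minimal sketch of the kind of ranking such metrics could support, using made-up scores rather than the Revyu/del.icio.us-derived profiles described in the paper; the people, scores, and combination weights are illustrative assumptions:

```python
# Hypothetical per-person scores on a topic; in the paper these would be
# derived from Revyu reviews/ratings and del.icio.us data.
candidates = {
    "alice": {"topic_expertise": 0.9, "affinity": 0.4},
    "bob":   {"topic_expertise": 0.6, "affinity": 0.8},
    "carol": {"topic_expertise": 0.3, "affinity": 0.9},
}

def trust_score(scores, w_expertise=0.7, w_affinity=0.3):
    # Weighted combination of topic-specific expertise and person-to-person
    # affinity; the weights are arbitrary illustrative values.
    return w_expertise * scores["topic_expertise"] + w_affinity * scores["affinity"]

ranked = sorted(candidates, key=lambda name: trust_score(candidates[name]), reverse=True)
print(ranked)  # ['alice', 'bob', 'carol'] with the weights above
```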
Learning Relatedness Measures for Entity Linking
Entity Linking is the task of detecting, in text documents, relevant mentions to entities of a given knowledge base. To this end, entity-linking algorithms use several signals and features extracted from the input text or from the knowledge base. The most important of such features is entity relatedness. Indeed, we argue that these algorithms benefit from maximizing the relatedness among the relevant entities selected for annotation, since this minimizes disambiguation errors.
The definition of an effective relatedness function is thus a crucial point in any entity-linking algorithm. In this paper we address the problem of learning high-quality entity relatedness functions. First, we formalize the problem of learning entity relatedness as a learning-to-rank problem. We propose a methodology to create reference datasets on the basis of manually annotated data. Finally, we show that our machine-learned entity relatedness function performs better than other relatedness functions previously proposed, and, more importantly, improves the overall performance of different state-of-the-art entity-linking algorithms.
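A minimal sketch of the pairwise learning-to-rank formulation, assuming precomputed relatedness features per entity pair; the feature values, entity names, and the linear ranker are illustrative assumptions, not the authors' feature set or model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: features for an (entity, candidate entity) pair, e.g. link overlap,
# category overlap, embedding similarity. Values here are made up.
features = {
    ("Obama", "United_States"): np.array([0.8, 0.6, 0.7]),
    ("Obama", "Kenya"):         np.array([0.3, 0.2, 0.4]),
    ("Java",  "Programming"):   np.array([0.9, 0.7, 0.8]),
    ("Java",  "Indonesia"):     np.array([0.4, 0.3, 0.2]),
}
# Annotators judged the first pair of each group more related than the second.
preferences = [(("Obama", "United_States"), ("Obama", "Kenya")),
               (("Java", "Programming"),    ("Java", "Indonesia"))]

# Pairwise transform: learn a linear scorer w such that w . (x_pos - x_neg) > 0.
X, y = [], []
for pos, neg in preferences:
    diff = features[pos] - features[neg]
    X.extend([diff, -diff])
    y.extend([1, 0])

ranker = LogisticRegression().fit(np.array(X), np.array(y))

def relatedness(pair):
    # The learned weights define the relatedness function used downstream.
    return float(ranker.decision_function(features[pair].reshape(1, -1)))

print(sorted(features, key=relatedness, reverse=True))
```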
Spoken content retrieval: A survey of techniques and technologies
Speech media, that is, digital audio and video containing spoken content, has blossomed in recent years. Large collections are accruing on the Internet as well as in private and enterprise settings. This growth has motivated extensive research on techniques and technologies that facilitate reliable indexing and retrieval. Spoken content retrieval (SCR) requires the combination of audio and speech processing technologies with methods from information retrieval (IR). SCR research initially investigated planned speech structured in document-like units, but has subsequently shifted focus to more informal spoken content produced spontaneously, outside of the studio and in conversational settings. This survey provides an overview of the field of SCR encompassing component technologies, the relationship of SCR to text IR and automatic speech recognition, and user interaction issues. It is aimed at researchers with backgrounds in speech technology or IR who are seeking deeper insight into how these fields are integrated to support research and development, thus addressing the core challenges of SCR.
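As a toy illustration of the ASR-plus-IR pipeline the survey covers, a minimal sketch that indexes hypothetical ASR transcript segments and retrieves them by keyword; the transcripts are made up, and real SCR systems must additionally handle recognition errors, lattices, and segmentation:

```python
from collections import defaultdict

# Hypothetical ASR output: (recording id, start time in seconds, transcript text).
segments = [
    ("lecture01", 0.0,  "welcome to the course on information retrieval"),
    ("lecture01", 42.5, "speech recognition errors affect retrieval quality"),
    ("podcast07", 12.0, "today we discuss spoken content retrieval"),
]

# Simple inverted index from term to the segments containing it.
index = defaultdict(list)
for rec, start, text in segments:
    for term in set(text.lower().split()):
        index[term].append((rec, start))

def search(query):
    """Return segments matching all query terms (conjunctive keyword search)."""
    postings = [set(index[t]) for t in query.lower().split()]
    return sorted(set.intersection(*postings)) if postings else []

print(search("retrieval"))         # all segments where the term was recognised
print(search("spoken retrieval"))  # [('podcast07', 12.0)]
```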
Joint Deep Modeling of Users and Items Using Reviews for Recommendation
A large amount of information exists in reviews written by users. This source of information has been ignored by most current recommender systems, while it can potentially alleviate the sparsity problem and improve the quality of recommendations. In this paper, we present a deep model to learn item properties and user behaviors jointly from review text. The proposed model, named Deep Cooperative Neural Networks (DeepCoNN), consists of two parallel neural networks coupled in the last layers. One of the networks focuses on learning user behaviors exploiting reviews written by the user, and the other one learns item properties from the reviews written for the item. A shared layer is introduced on top to couple these two networks together. The shared layer enables latent factors learned for users and items to interact with each other in a manner similar to factorization machine techniques. Experimental results demonstrate that DeepCoNN significantly outperforms all baseline recommender systems on a variety of datasets.
Comment: WSDM 201
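A minimal PyTorch-style sketch of the two-tower idea described above: parallel text encoders whose outputs interact in a shared layer, here a plain dot product standing in for the paper's factorization-machine coupling. The dimensions, the CNN configuration, and the class names are assumptions, not the published architecture:

```python
import torch
import torch.nn as nn

class ReviewTower(nn.Module):
    """Encodes the concatenated review text of one user (or one item)."""
    def __init__(self, vocab_size, emb_dim=64, n_filters=50, kernel_size=3, out_dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size)
        self.fc = nn.Linear(n_filters, out_dim)

    def forward(self, token_ids):                 # (batch, seq_len)
        x = self.emb(token_ids).transpose(1, 2)   # (batch, emb_dim, seq_len)
        x = torch.relu(self.conv(x))              # (batch, n_filters, seq_len - k + 1)
        x = x.max(dim=2).values                   # max-over-time pooling
        return self.fc(x)                         # (batch, out_dim)

class TwoTowerReviewModel(nn.Module):
    """Two parallel towers coupled at the top; the coupling here is a simple
    dot product standing in for DeepCoNN's factorization-machine layer."""
    def __init__(self, vocab_size):
        super().__init__()
        self.user_tower = ReviewTower(vocab_size)
        self.item_tower = ReviewTower(vocab_size)

    def forward(self, user_reviews, item_reviews):
        u = self.user_tower(user_reviews)
        i = self.item_tower(item_reviews)
        return (u * i).sum(dim=1)                 # predicted rating score

model = TwoTowerReviewModel(vocab_size=10_000)
user_reviews = torch.randint(0, 10_000, (4, 200))  # 4 users, 200 tokens each
item_reviews = torch.randint(0, 10_000, (4, 200))
print(model(user_reviews, item_reviews).shape)      # torch.Size([4])
```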
Robust Place Categorization With Deep Domain Generalization
Traditional place categorization approaches in robot vision assume that training and test images have similar visual appearance. Therefore, any seasonal, illumination, and environmental changes typically lead to severe degradation in performance. To cope with this problem, recent works have proposed adopting domain adaptation techniques. While effective, these methods assume that some prior information about the scenario where the robot will operate is available at training time. Unfortunately, in many cases, this assumption does not hold, as we often do not know where a robot will be deployed. To overcome this issue, in this paper, we present an approach that aims at learning classification models able to generalize to unseen scenarios. Specifically, we propose a novel deep learning framework for domain generalization. Our method develops from the intuition that, given a set of different classification models associated with known domains (e.g., corresponding to multiple environments, robots), the best model for a new sample in the novel domain can be computed directly at test time by optimally combining the known models. To implement our idea, we exploit recent advances in deep domain adaptation and design a convolutional neural network architecture with novel layers performing a weighted version of batch normalization. Our experiments, conducted on three common datasets for robot place categorization, confirm the validity of our contribution.
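A minimal sketch of the weighted-batch-normalization idea, assuming one BatchNorm branch per known source domain whose normalized outputs are combined with per-sample weights; the weighting network, dimensions, and class name are illustrative assumptions rather than the authors' exact layers:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedDomainBN(nn.Module):
    """Keeps one BatchNorm branch per known source domain and mixes their
    normalized outputs with per-sample weights predicted from the feature
    itself (an illustrative stand-in for the paper's layer)."""
    def __init__(self, num_features, num_domains):
        super().__init__()
        self.bns = nn.ModuleList([nn.BatchNorm2d(num_features) for _ in range(num_domains)])
        # Tiny head predicting how much each domain's statistics apply to a sample.
        self.domain_weights = nn.Linear(num_features, num_domains)

    def forward(self, x):                                   # x: (batch, C, H, W)
        pooled = F.adaptive_avg_pool2d(x, 1).flatten(1)     # (batch, C)
        w = torch.softmax(self.domain_weights(pooled), 1)   # (batch, num_domains)
        branches = torch.stack([bn(x) for bn in self.bns], dim=1)  # (batch, D, C, H, W)
        return (w[:, :, None, None, None] * branches).sum(dim=1)   # weighted mix

layer = WeightedDomainBN(num_features=16, num_domains=3)
features = torch.randn(8, 16, 32, 32)   # dummy activations from a conv layer
print(layer(features).shape)            # torch.Size([8, 16, 32, 32])
```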