
Optimize first, buy later: Analyzing metrics to ramp-up very large knowledge bases

By Paea LePendu, Natalya F. Noy, Clement Jonquet, Paul R. Alexander, Nigam H. Shah and Mark A. Musen

Abstract

As knowledge bases move into the landscape of larger ontologies with terabytes of related data, we must optimize the performance of our tools. It is tempting to buy bigger machines, or to fill rooms with armies of little ones, to address the scalability problem. Yet careful analysis and evaluation of the characteristics of our data, using metrics, often leads to dramatic improvements in performance. First, are current scalable systems scalable enough? It is hard to say: we found that current benchmarks obscure the load-time costs for large or deep ontologies (some as large as 500,000 classes). We have therefore synthesized a set of representative ontologies that helps expose those costs. Second, in designing for scalability, how do we squeeze out more performance? We found that optimizing for data distribution and ontology evolution reduces load time (including materialization of the transitive closure) for the NCBO Resource Index, a database of 16.4 billion annotations linking 2.4 million ontology terms to 3.5 million data elements, from one week to less than one hour on the same machine for one of our larger datasets.
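
The load-time cost the abstract singles out is materializing the transitive closure of the ontology's is-a hierarchy up front, so that ancestor lookups at query time become simple set-membership tests instead of recursive traversals. The sketch below is illustrative only, not the paper's implementation: the graph representation, class names, and the materialize_closure function are assumptions chosen for clarity.

```python
# A minimal sketch (not from the paper) of materializing the transitive
# closure of a subclass hierarchy, so "is-a" ancestor queries become lookups.
from collections import deque

def materialize_closure(subclass_of):
    """Given direct subclass edges {child: {parents}}, return the full
    ancestor set for every class (the transitive closure of is-a)."""
    closure = {}
    for cls in subclass_of:
        seen, queue = set(), deque(subclass_of[cls])
        while queue:                      # breadth-first walk up the hierarchy
            parent = queue.popleft()
            if parent not in seen:
                seen.add(parent)
                queue.extend(subclass_of.get(parent, ()))
        closure[cls] = seen
    return closure

# Toy hierarchy: Melanoma is-a SkinCancer is-a Cancer is-a Disease.
edges = {
    "Melanoma": {"SkinCancer"},
    "SkinCancer": {"Cancer"},
    "Cancer": {"Disease"},
    "Disease": set(),
}
print(materialize_closure(edges)["Melanoma"])
# {'SkinCancer', 'Cancer', 'Disease'}
```

This is the general space-for-time trade the abstract describes: the closure is computed once at load time, so that billions of annotation lookups against it avoid any recursive query.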

Publisher: Springer
Year: 2010
OAI identifier: oai:CiteSeerX.psu:10.1.1.352.5847
Provided by: CiteSeerX
Download PDF:
Sorry, we are unable to provide the full text but you may find it at the following location(s):
  • http://citeseerx.ist.psu.edu/v... (external link)
  • http://bmir.stanford.edu/file_... (external link)

