LODE: Linking Digital Humanities Content to the Web of Data
Numerous digital humanities projects maintain their data collections in the
form of text, images, and metadata. While data may be stored in many formats,
from plain text to XML to relational databases, the use of the resource
description framework (RDF) as a standardized representation has gained
considerable traction during the last five years. Almost every digital
humanities meeting now has at least one session devoted to RDF and linked
data. While most existing work in linked data has
focused on improving algorithms for entity matching, the aim of the
LinkedHumanities project is to build digital humanities tools that work "out of
the box," enabling their use by humanities scholars, computer scientists,
librarians, and information scientists alike. With this paper, we report on the
Linked Open Data Enhancer (LODE) framework developed as part of the
LinkedHumanities project. LODE supports non-technical users in enriching a
local RDF repository with high-quality data from the Linked Open Data cloud.
LODE links and enhances the local RDF repository without compromising the
quality of the data. In particular, LODE supports the user in the enhancement
and linking process by providing intuitive user-interfaces and by suggesting
high-quality linking candidates using tailored matching algorithms. We hope
that the LODE framework will prove useful to digital humanities scholars,
complementing other digital humanities tools.
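The link-suggestion step described above can be illustrated with a minimal sketch. This is not the LODE implementation (which uses tailored matching algorithms behind a user interface); it merely shows the general idea of proposing `owl:sameAs` candidates by fuzzy-matching labels in a local repository against labels from a remote LOD source, for a user to confirm. All URIs and labels here are invented for illustration.

```python
# Illustrative sketch of LOD link suggestion (not the LODE framework itself):
# propose owl:sameAs candidates by comparing entity labels.
from difflib import SequenceMatcher

# hypothetical local repository: URI -> label
local = {
    "http://example.org/person/goethe": "Johann Wolfgang von Goethe",
    "http://example.org/person/schiller": "Friedrich Schiller",
}
# hypothetical remote LOD source: URI -> label
remote = {
    "http://dbpedia.org/resource/Johann_Wolfgang_von_Goethe": "Johann Wolfgang von Goethe",
    "http://dbpedia.org/resource/Friedrich_Schiller": "Friedrich Schiller",
    "http://dbpedia.org/resource/Weimar": "Weimar",
}

def suggest_links(local, remote, threshold=0.9):
    """Yield (local_uri, remote_uri, score) for label pairs above the threshold."""
    for l_uri, l_label in local.items():
        for r_uri, r_label in remote.items():
            score = SequenceMatcher(None, l_label.lower(), r_label.lower()).ratio()
            if score >= threshold:
                yield (l_uri, r_uri, score)

for s, o, score in suggest_links(local, remote):
    # emit an N-Triples owl:sameAs statement as a candidate for confirmation
    print(f"<{s}> <http://www.w3.org/2002/07/owl#sameAs> <{o}> .  # {score:.2f}")
```

A real system would of course draw candidates from SPARQL endpoints and weigh more than surface labels; the point is only that suggested links stay suggestions until a user accepts them, which is how quality is preserved.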
Feature importance for machine learning redshifts applied to SDSS galaxies
We present an analysis of feature importance selection applied to photometric
redshift estimation using the machine learning architecture Decision Trees with
the ensemble learning routine Adaboost (hereafter RDF). We select a list of 85
easily measured (or derived) photometric quantities (or `features') and
spectroscopic redshifts for almost two million galaxies from the Sloan Digital
Sky Survey Data Release 10. After identifying which features have the most
predictive power, we use standard artificial Neural Networks (aNN) to show that
the addition of these features, in combination with the standard magnitudes and
colours, improves the machine learning redshift estimate by 18% and decreases
the catastrophic outlier rate by 32%. We further compare the redshift estimate
using RDF with those from two different aNNs, and with photometric redshifts
available from the SDSS. We find that the RDF requires orders of magnitude less
computation time than the aNNs to obtain a machine learning redshift while
reducing both the catastrophic outlier rate by up to 43%, and the redshift
error by up to 25%. When compared to the SDSS photometric redshifts, the RDF
machine learning redshifts both decrease the standard deviation of residuals
scaled by 1/(1+z) by 36%, from 0.066 to 0.041, and decrease the fraction of
catastrophic outliers by 57%, from 2.32% to 0.99%. Comment: 10 pages, 4 figures, updated to match the version accepted in MNRAS
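The core idea of tree-based feature importance can be sketched compactly. The toy below is not the paper's pipeline (no AdaBoost ensemble, no SDSS data); it only demonstrates the underlying principle that a feature's importance can be scored by how much a decision-tree split on it reduces squared error, here with a single decision stump on synthetic data where one feature drives the target.

```python
# Simplified proxy for tree-based feature importance (illustrative only):
# score each feature by the best single-split reduction in squared error.
import random

def stump_importance(X, y):
    """Return, per feature, the largest SSE reduction from one stump split."""
    n = len(y)
    mean_y = sum(y) / n
    base_sse = sum((v - mean_y) ** 2 for v in y)
    importances = []
    for j in range(len(X[0])):
        pairs = sorted(zip((row[j] for row in X), y))
        best = 0.0
        # try every split point between consecutive sorted feature values
        for i in range(1, n):
            left = [p[1] for p in pairs[:i]]
            right = [p[1] for p in pairs[i:]]
            ml, mr = sum(left) / len(left), sum(right) / len(right)
            sse = (sum((v - ml) ** 2 for v in left)
                   + sum((v - mr) ** 2 for v in right))
            best = max(best, base_sse - sse)
        importances.append(best)
    return importances

random.seed(0)
# synthetic "galaxies": feature 0 drives the redshift, feature 1 is pure noise
X = [[random.random(), random.random()] for _ in range(200)]
y = [2.0 * row[0] + 0.05 * random.gauss(0, 1) for row in X]
imp = stump_importance(X, y)
print(imp.index(max(imp)))  # feature 0 ranks highest
```

In practice one would use a boosted ensemble of deeper trees and aggregate the per-split gains, but the ranking principle is the same.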
BlogForever D2.4: Weblog spider prototype and associated methodology
The purpose of this document is to present the evaluation of different blog-capturing solutions and the established methodology, and to describe the developed blog spider prototype.
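One early step a blog spider typically performs is locating a blog's syndication feed. The sketch below is not the BlogForever prototype; it only illustrates standard RSS/Atom feed autodiscovery from a page's `<link rel="alternate">` tags, using Python's built-in HTML parser on an invented page.

```python
# Hedged sketch of feed autodiscovery, one common blog-capture step
# (not the BlogForever prototype itself).
from html.parser import HTMLParser

class FeedFinder(HTMLParser):
    """Collect hrefs of <link rel="alternate"> tags with a feed MIME type."""
    FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == "link" and a.get("rel") == "alternate"
                and a.get("type") in self.FEED_TYPES):
            self.feeds.append(a.get("href"))

# invented example page
html = """<html><head>
<link rel="alternate" type="application/rss+xml" href="/feed.xml">
<link rel="stylesheet" href="/style.css">
</head><body>A blog post.</body></html>"""

finder = FeedFinder()
finder.feed(html)
print(finder.feeds)  # ['/feed.xml']
```

A full spider would then fetch and archive the feed entries on a schedule; the deliverable's methodology covers those later stages.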
Mapping Large Scale Research Metadata to Linked Data: A Performance Comparison of HBase, CSV and XML
OpenAIRE, the Open Access Infrastructure for Research in Europe, comprises a
database of all EC FP7 and H2020 funded research projects, including metadata
of their results (publications and datasets). These data are stored in an HBase
NoSQL database, post-processed, and exposed as HTML for human consumption, and
as XML through a web service interface. As an intermediate format to facilitate
statistical computations, CSV is generated internally. To interlink the
OpenAIRE data with related data on the Web, we aim at exporting them as Linked
Open Data (LOD). The LOD export is required to integrate into the overall data
processing workflow, where derived data are regenerated from the base data
every day. We thus faced the challenge of identifying the best-performing
conversion approach. We evaluated the performance of creating LOD by a
MapReduce job on top of HBase, by mapping the intermediate CSV files, and by
mapping the XML output. Comment: Accepted at the Metadata and Semantics Research Conference
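The CSV-mapping route can be illustrated in miniature: each CSV row of project metadata becomes a handful of RDF triples in N-Triples syntax. The column names, base URI, and predicates below are invented for illustration and are not OpenAIRE's actual schema or vocabulary.

```python
# Hedged sketch of mapping intermediate CSV to RDF N-Triples
# (illustrative schema, not the OpenAIRE data model).
import csv
import io

CSV_DATA = """\
project_id,acronym,funding
grant-1,LODE,FP7
grant-2,OpenAIRE,H2020
"""

BASE = "http://example.org/openaire/project/"   # hypothetical namespace
DCT = "http://purl.org/dc/terms/"

def csv_to_ntriples(text):
    """Map each CSV row to three triples: identifier, title, funding stream."""
    triples = []
    for row in csv.DictReader(io.StringIO(text)):
        s = f"<{BASE}{row['project_id']}>"
        triples.append(f'{s} <{DCT}identifier> "{row["project_id"]}" .')
        triples.append(f'{s} <{DCT}title> "{row["acronym"]}" .')
        triples.append(f'{s} <{BASE.rstrip("/")}#fundedBy> "{row["funding"]}" .')
    return "\n".join(triples)

print(csv_to_ntriples(CSV_DATA))
```

The performance question the abstract raises is precisely where such a row-at-a-time mapping runs: over exported CSV files, inside a MapReduce job against HBase, or over the XML web-service output.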