Ontology-Based Recommendation of Editorial Products
Major academic publishers need to be able to analyse their vast catalogue of products and select the best items to be marketed in scientific venues. This is a complex exercise that requires characterising with high precision the topics of thousands of books and matching them with the interests of the relevant communities. In Springer Nature, this task has been traditionally handled manually by publishing editors. However, the rapid growth in the number of scientific publications and the dynamic nature of the Computer Science landscape have made this solution increasingly inefficient. We have addressed this issue by creating Smart Book Recommender (SBR), an ontology-based recommender system developed by The Open University (OU) in collaboration with Springer Nature, which supports their Computer Science editorial team in selecting the products to market at specific venues. SBR recommends books, journals, and conference proceedings relevant to a conference by taking advantage of a semantically enhanced representation of about 27K editorial products. This is based on the Computer Science Ontology, a very large-scale, automatically generated taxonomy of research areas. SBR also allows users to investigate why a certain publication was suggested by the system. It does so by means of an interactive graph view that displays the topic taxonomy of the recommended editorial product and compares it with the topic-centric characterization of the input conference. An evaluation carried out with seven Springer Nature editors and seven OU researchers has confirmed the effectiveness of the solution.
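The abstract does not spell out SBR's matching algorithm, but the core idea of comparing the topic set of a product with the topic-centric characterization of a conference can be sketched with a simple set-overlap measure. The function names, the Jaccard choice, and the toy catalogue below are illustrative assumptions, not SBR's actual method:

```python
def jaccard(a, b):
    """Jaccard similarity between two topic sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(conference_topics, products, top_k=3):
    """Rank editorial products by topic overlap with a conference."""
    scored = [(jaccard(conference_topics, topics), title)
              for title, topics in products.items()]
    scored.sort(reverse=True)
    return [title for score, title in scored[:top_k] if score > 0]

# Hypothetical catalogue: title -> topics drawn from a shared taxonomy
products = {
    "Semantic Web Primer":   {"semantic web", "ontologies", "linked data"},
    "Deep Learning Methods": {"neural networks", "machine learning"},
    "Ontology Engineering":  {"ontologies", "knowledge representation"},
}
print(recommend({"semantic web", "ontologies"}, products))
# → ['Semantic Web Primer', 'Ontology Engineering']
```

A real system would weight topics by their position in the taxonomy (a match on a specific research area counts more than a match on a broad one), which a flat Jaccard score ignores.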
A Multi-Relational Network to Support the Scholarly Communication Process
The general purpose of the scholarly communication process is to support the
creation and dissemination of ideas within the scientific community. At a finer
granularity, there exist multiple stages which, when confronted by a member of
the community, have different requirements and therefore different solutions.
In order to take a researcher's idea from an initial inspiration to a community
resource, the scholarly communication infrastructure may be required to 1)
provide a scientist with initial seed ideas; 2) form a team of well-suited
collaborators; 3) locate the most appropriate venue to publish the formalized
idea; 4) determine the most appropriate peers to review the manuscript; and 5)
disseminate the end product to the most interested members of the community.
Through the various delineations of this process, the requirements of each
stage are tied solely to the multi-functional resources of the community: its
researchers, its journals, and its manuscripts. It is within the collection of
these resources and their inherent relationships that the solutions to
scholarly communication are to be found. This paper describes an associative
network composed of multiple scholarly artifacts that can be used as a medium
for supporting the scholarly communication process.
Comment: keywords: digital libraries and scholarly communication
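The multi-relational network described above connects heterogeneous artifacts (researchers, journals, manuscripts) through typed relationships. A minimal sketch of such a typed-edge store, with hypothetical names and toy data not taken from the paper, might look like this:

```python
from collections import defaultdict

class MultiRelationalNetwork:
    """Typed-edge graph over scholarly artifacts (illustrative sketch)."""
    def __init__(self):
        self.edges = defaultdict(set)   # (source, relation) -> set of targets

    def add(self, source, relation, target):
        self.edges[(source, relation)].add(target)

    def neighbors(self, source, relation):
        return self.edges[(source, relation)]

net = MultiRelationalNetwork()
net.add("alice", "authored", "paper1")
net.add("bob",   "authored", "paper1")
net.add("paper1", "published_in", "journal_A")
net.add("paper1", "cites", "paper0")

# Stage 2 (team formation): authors reachable through shared manuscripts
coauthors = {a for (a, rel), papers in net.edges.items()
             if rel == "authored" and papers & net.neighbors("alice", "authored")}
print(sorted(coauthors))  # → ['alice', 'bob']
```

Each of the five stages listed in the abstract then becomes a different traversal over the same edge set: venue location follows `published_in` edges, reviewer selection follows `cites` and `authored` edges, and so on.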
Graph Signal Processing: Overview, Challenges and Applications
Research in Graph Signal Processing (GSP) aims to develop tools for
processing data defined on irregular graph domains. In this paper we first
provide an overview of core ideas in GSP and their connection to conventional
digital signal processing. We then summarize recent developments in developing
basic GSP tools, including methods for sampling, filtering or graph learning.
Next, we review progress in several application areas using GSP, including
processing and analysis of sensor network data, biological data, and
applications to image processing and machine learning. We finish by providing a
brief historical perspective to highlight how concepts recently developed in
GSP build on top of prior research in other areas.
Comment: To appear, Proceedings of the IEEE
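The connection to conventional digital signal processing mentioned above rests on the graph Fourier transform: the eigenvectors of the graph Laplacian play the role of the Fourier basis, and eigenvalues play the role of frequencies. A minimal NumPy sketch of low-pass filtering a signal on a 4-node path graph (the graph, signal, and cutoff are illustrative choices):

```python
import numpy as np

# Path graph on 4 nodes: adjacency A, combinatorial Laplacian L = D - A
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

# Graph Fourier basis: eigenvectors of L; eigenvalues act as frequencies
eigvals, U = np.linalg.eigh(L)

x = np.array([1.0, -1.0, 1.0, -1.0])   # an oscillating (high-frequency) signal
x_hat = U.T @ x                        # graph Fourier transform
x_hat[eigvals > 1.0] = 0.0             # ideal low-pass: zero out high frequencies
x_smooth = U @ x_hat                   # inverse graph Fourier transform

print(np.round(x_smooth, 3))
```

The quadratic form x.T @ L @ x measures how much a signal varies across edges; filtering out high-frequency components is guaranteed to reduce it, which is the graph analogue of smoothing.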
The distorted mirror of Wikipedia: a quantitative analysis of Wikipedia coverage of academics
Activity of modern scholarship creates online footprints galore. Along with
traditional metrics of research quality, such as citation counts, online images
of researchers and institutions increasingly matter in evaluating academic
impact, decisions about grant allocation, and promotion. We examined 400
biographical Wikipedia articles on academics from four scientific fields to
test if being featured in the world's largest online encyclopedia is correlated
with higher academic notability (assessed through citation counts). We found no
statistically significant correlation between Wikipedia article metrics
(length, number of edits, number of incoming links from other articles, etc.)
and academic notability of the mentioned researchers. We also did not find any
evidence that the scientists with better WP representation are necessarily more
prominent in their fields. In addition, we inspected the Wikipedia coverage of
notable scientists sampled from Thomson Reuters list of "highly cited
researchers". In each of the examined fields, Wikipedia failed to cover
notable scholars properly. Both findings imply that Wikipedia might be
producing an inaccurate image of academics on the front end of science. By
shedding light on how public perception of academic progress is formed, this
study alerts that a subjective element might have been introduced into the
hitherto structured system of academic evaluation.
Comment: To appear in EPJ Data Science. To have the Additional Files and
Datasets e-mail the corresponding author
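A correlation test of the kind described above typically uses a rank correlation, since citation counts and article metrics are heavily skewed. A self-contained sketch of Spearman's rho on hypothetical data (the numbers below are invented for illustration, not the study's data):

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation (no tie correction; for illustration)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))

# Hypothetical sample: Wikipedia article length (words) vs. citation count
article_length = [1200, 450, 3000, 800, 150]
citations      = [  90, 700,  120, 300,  40]
rho = spearman(article_length, citations)
print(round(rho, 3))  # → 0.1
```

A rho near zero, as in this toy sample, is what a "no significant correlation" finding looks like; a real analysis would also compute a p-value against the null of independence.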
Information visualization: conceptualizing new paths for filtering and navigating in scientific knowledge objects
More than 6,800 new research journal articles are published every day! Who has time to read every article or document that's relevant to their research? Access to the right and relevant information is paramount for scientific discoveries. Filtering relevant information has become a fundamental challenge in the current scientific deluge. As information glut grows ever worse, understanding and visualizing the social behavior of science may become our only hope for handling a growing flood of scientific information. It is therefore fundamental to analyze and interactively visualize the science social space. This paper theoretically conceptualizes an approach aimed at the filtering and navigation of relevant Scientific Knowledge Objects (SKOs) based on a symbiosis between different sub-disciplinary domains. We present two main contributions: a comparison among several projects that make relevant use of information visualization in scholarly scientific navigation, and an architecture in line with the most recent international standards and good practices for Open Data, especially those related to Linked Open Data, capable of performing an innovative information visualization of relevant SKOs. These contributions are relevant to the scholarly and practitioner communities and to anyone who wants to access and navigate relevant SKOs.
This work has been supported by COMPETE: POCI-01-0145-FEDER-007043 and FCT – Fundação para a Ciência e Tecnologia within the Project Scope: UID/CEC/00319/2013.
Encoding models for scholarly literature
We examine the issue of digital formats for document encoding, archiving and
publishing, through the specific example of "born-digital" scholarly journal
articles. We will begin by looking at the traditional workflow of journal
editing and publication, and how these practices have made the transition into
the online domain. We will examine the range of different file formats in which
electronic articles are currently stored and published. We will argue strongly
that, despite the prevalence of binary and proprietary formats such as PDF and
MS Word, XML is a far superior encoding choice for journal articles. Next, we
look at the range of XML document structures (DTDs, Schemas) which are in
common use for encoding journal articles, and consider some of their strengths
and weaknesses. We will suggest that, despite the existence of specialized
schemas intended specifically for journal articles (such as NLM), and more
broadly-used publication-oriented schemas such as DocBook, there are strong
arguments in favour of developing a subset or customization of the Text
Encoding Initiative (TEI) schema for the purpose of journal-article encoding;
TEI is already in use in a number of journal publication projects, and the
scale and precision of the TEI tagset makes it particularly appropriate for
encoding scholarly articles. We will outline the document structure of a
TEI-encoded journal article, and look in detail at suggested markup patterns
for specific features of journal articles.
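The TEI document structure outlined above can be illustrated with a minimal skeleton built programmatically. The element names (`teiHeader`, `fileDesc`, `div`, `head`, `p`) follow standard TEI conventions, but this tiny sketch omits the namespace declaration and the much richer header that a real TEI customization for journal articles would require:

```python
import xml.etree.ElementTree as ET

# Minimal TEI-style skeleton for a journal article (illustrative only)
tei = ET.Element("TEI")
header = ET.SubElement(tei, "teiHeader")
file_desc = ET.SubElement(header, "fileDesc")
title_stmt = ET.SubElement(file_desc, "titleStmt")
ET.SubElement(title_stmt, "title").text = "Encoding models for scholarly literature"

text = ET.SubElement(tei, "text")
body = ET.SubElement(text, "body")
section = ET.SubElement(body, "div", {"type": "section"})
ET.SubElement(section, "head").text = "Introduction"
ET.SubElement(section, "p").text = "Article body text goes here."

xml_str = ET.tostring(tei, encoding="unicode")
print(xml_str)
```

Because the structure is plain XML, the same document can be validated against a TEI schema subset and transformed to HTML or PDF with standard tooling, which is the workflow advantage the article argues for over binary formats.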
DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs
In this work we address the task of semantic image segmentation with Deep
Learning and make three main contributions that are experimentally shown to
have substantial practical merit. First, we highlight convolution with
upsampled filters, or 'atrous convolution', as a powerful tool in dense
prediction tasks. Atrous convolution allows us to explicitly control the
resolution at which feature responses are computed within Deep Convolutional
Neural Networks. It also allows us to effectively enlarge the field of view of
filters to incorporate larger context without increasing the number of
parameters or the amount of computation. Second, we propose atrous spatial
pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP
probes an incoming convolutional feature layer with filters at multiple
sampling rates and effective fields-of-view, thus capturing objects as well as
image context at multiple scales. Third, we improve the localization of object
boundaries by combining methods from DCNNs and probabilistic graphical models.
The commonly deployed combination of max-pooling and downsampling in DCNNs
achieves invariance but has a toll on localization accuracy. We overcome this
by combining the responses at the final DCNN layer with a fully connected
Conditional Random Field (CRF), which is shown both qualitatively and
quantitatively to improve localization performance. Our proposed "DeepLab"
system sets the new state-of-the-art on the PASCAL VOC-2012 semantic image
segmentation task, reaching 79.7% mIOU on the test set, and advances the
results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and
Cityscapes. All of our code is made publicly available online.
Comment: Accepted by TPAMI