Enriched biodiversity data as a resource and service
Background: Recent years have seen a surge in projects that produce large volumes of structured, machine-readable biodiversity data. To make these data amenable to processing by generic, open source “data enrichment” workflows, they are increasingly being represented in a variety of standards-compliant interchange formats. Here, we report on an initiative in which software developers and taxonomists came together to address the challenges and highlight the opportunities in the enrichment of such biodiversity data by engaging in intensive, collaborative software development: The Biodiversity Data Enrichment Hackathon.
Results: The hackathon brought together 37 participants (including developers and taxonomists, i.e. scientific professionals that gather, identify, name and classify species) from 10 countries: Belgium, Bulgaria, Canada, Finland, Germany, Italy, the Netherlands, New Zealand, the UK, and the US. The participants brought expertise in processing structured data, text mining, development of ontologies, digital identification keys, geographic information systems, niche modeling, natural language processing, provenance annotation, semantic integration, taxonomic name resolution, web service interfaces, workflow tools and visualisation. Most use cases and exemplar data were provided by taxonomists.
One goal of the meeting was to facilitate re-use and enhancement of biodiversity knowledge by a broad range of stakeholders, such as taxonomists, systematists, ecologists, niche modelers, informaticians and ontologists. The suggested use cases resulted in nine breakout groups addressing three main themes: i) mobilising heritage biodiversity knowledge; ii) formalising and linking concepts; and iii) addressing interoperability between service platforms. Another goal was to further foster a community of experts in biodiversity informatics and to build human links between research projects and institutions, in response to recent calls to further such integration in this research domain.
Conclusions: Beyond deriving prototype solutions for each use case, areas of inadequacy were discussed and are being pursued further. It was striking how many possible applications for biodiversity data there were and how quickly solutions could be put together when the normal constraints on collaboration were broken down for a week. Conversely, mobilising biodiversity knowledge from its silos in heritage literature and natural history collections will continue to require formalisation of the concepts (and the links between them) that define the research domain, as well as increased interoperability between the software platforms that operate on these concepts.
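One of the enrichment steps the hackathon themes revolve around is taxonomic name resolution: mapping a raw name string in a record to an accepted name in a taxonomic backbone. The sketch below illustrates the idea with a tiny in-memory lookup table; the table entries and record fields are illustrative stand-ins, not the hackathon's actual tooling or any real backbone service.

```python
# Hypothetical sketch of one enrichment step: taxonomic name resolution.
# BACKBONE is a toy stand-in for a taxonomic backbone; a real resolver
# would query a name-resolution service instead.
BACKBONE = {
    "puma concolor": {"acceptedName": "Puma concolor", "rank": "species",
                      "family": "Felidae"},
    "felis concolor": {"acceptedName": "Puma concolor", "rank": "species",
                       "family": "Felidae"},  # synonym maps to accepted name
}

def enrich_record(record: dict) -> dict:
    """Attach resolved-name fields to a Darwin Core-style occurrence record."""
    key = record.get("scientificName", "").strip().lower()
    match = BACKBONE.get(key)
    enriched = dict(record)
    if match:
        enriched.update(match)
        enriched["nameResolved"] = True
    else:
        enriched["nameResolved"] = False
    return enriched

occurrence = {"scientificName": "Felis concolor", "country": "US"}
print(enrich_record(occurrence)["acceptedName"])  # Puma concolor
```

The enriched record keeps the original fields, so downstream workflow steps (georeferencing, niche modelling) can still see the verbatim name alongside the resolved one.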
Client-server-based LBS architecture: A novel positioning module for improved positioning performance
This work presents a new, efficient positioning module that operates over client-server LBS architectures. The aim of the proposed module is to fulfil the position-information requirements of LBS pedestrian applications by ensuring the availability of reliable, highly accurate and precise position solutions based on the GPS single-frequency (L1) positioning service. The positioning module operates on both sides of the LBS architecture: the client (mobile device) and the server (positioning server). At the server side, the positioning module is responsible for correcting the user's location information based on WADGPS corrections. At the mobile side, the positioning module is continuously in charge of monitoring the integrity and availability of the position solutions, as well as managing the communication with the server. The integrity monitoring was based on EGNOS integrity methods. A prototype of the proposed module was developed and used in experimental trials to evaluate its efficiency in terms of the achieved positioning performance. The positioning module was capable of achieving a horizontal accuracy of less than 2 meters at a 95% confidence level, with an integrity improvement of more than 30% over existing GPS/EGNOS services.
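The server-side step can be pictured as removing a common-mode error from raw fixes and then summarising the residuals at the 95% level. The sketch below shows that idea only; the field layout, correction values and sample errors are our own assumptions, not the paper's actual WADGPS interface.

```python
import math

# Illustrative sketch: apply a differential (WADGPS-style) correction
# to raw horizontal position errors, then report the 95th-percentile
# horizontal error. All numbers below are invented for illustration.

def apply_correction(fix, correction):
    """Remove a broadcast common-mode error from an (east, north) residual."""
    return (fix[0] - correction[0], fix[1] - correction[1])

def horizontal_error_95(errors):
    """95th-percentile radial error in metres from (east, north) residuals."""
    radial = sorted(math.hypot(e, n) for e, n in errors)
    idx = max(0, math.ceil(0.95 * len(radial)) - 1)
    return radial[idx]

# Raw residuals sharing an assumed common bias of (2.0, -1.0) metres.
raw = [(2.3, -1.2), (1.8, -0.7), (2.5, -1.4), (1.6, -0.9)]
corr = (2.0, -1.0)  # assumed WADGPS correction for that bias
corrected = [apply_correction(f, corr) for f in raw]
print(round(horizontal_error_95(corrected), 2))  # 0.64
```

After the correction only the uncorrelated noise remains, which is why differential services can bring the 95% horizontal error under the 2-metre level reported above.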
EARLINET: towards an advanced sustainable European aerosol lidar network
The European Aerosol Research Lidar Network, EARLINET, was founded in 2000 as a research project for establishing a quantitative, comprehensive, and statistically significant database for the horizontal, vertical, and temporal distribution of aerosols on a continental scale. Since then EARLINET has continued to provide the most extensive collection of ground-based data for the aerosol vertical distribution over Europe.
This paper gives an overview of the network's main developments since 2000 and introduces the dedicated EARLINET special issue, which reports on the present innovative and comprehensive technical solutions and scientific results related to the use of advanced lidar remote sensing techniques for the study of aerosol properties as developed within the network in the last 13 years.
Since 2000, EARLINET has grown substantially in number of stations and spatial coverage: from 17 stations in 10 countries in 2000 to 27 stations in 16 countries in 2013. It has also advanced technologically, with the spread of advanced multiwavelength Raman lidar stations across Europe. Developments in the quality assurance strategy, the optimisation of instruments and data processing, and the dissemination of data have contributed to a significant improvement of the network towards a more sustainable observing system, with an increase in observing capability and a reduction in operational costs.
Consequently, EARLINET data have already been extensively used for many climatological studies, long-range transport events, Saharan dust outbreaks, plumes from volcanic eruptions, and for model evaluation and satellite data validation and integration.
Future plans are aimed at continuous measurements and near-real-time data delivery in close cooperation with other ground-based networks, such as ACTRIS (Aerosols, Clouds, and Trace gases Research InfraStructure Network, www.actris.net), and with the modelling and satellite communities, linking the research community with the operational world, with the aim of establishing the atmospheric part of the European component of the integrated global observing system.
Local Type Checking for Linked Data Consumers
The Web of Linked Data is the culmination of over a decade of work by the Web standards community in their effort to make data more Web-like. We provide an introduction to the Web of Linked Data from the perspective of a Web developer who would like to build an application using Linked Data. We identify a weakness in the development stack: a lack of domain-specific scripting languages for designing background processes that consume Linked Data. To address this weakness, we design a scripting language with a simple but appropriate type system. In our proposed architecture, some data is consumed from sources outside the control of the system and some data is held locally. Stronger type assumptions can be made about the local data than about the external data, hence our type system mixes static and dynamic typing. Throughout, we relate our work to the W3C recommendations that drive Linked Data, so our syntax is accessible to Web developers.
Comment: In Proceedings WWV 2013, arXiv:1308.026
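The static/dynamic split can be illustrated in a few lines: locally constructed data carries a known type, while data arriving from an external source is untyped until a runtime check admits it. The sketch below is a minimal Python illustration of that idea, not the scripting language the paper actually designs; the Triple shape and checker are our own.

```python
from dataclasses import dataclass

# Local data is schema-controlled, so its type is known "statically"
# (at construction time); external Linked Data arrives as untyped JSON
# and must pass a dynamic check before the rest of the program may
# assume anything about it.

@dataclass(frozen=True)
class Triple:
    subject: str
    predicate: str
    obj: str

def check_external(raw: dict) -> Triple:
    """Runtime ('dynamic') check applied to data from outside the system."""
    for field in ("subject", "predicate", "obj"):
        value = raw.get(field)
        if not isinstance(value, str) or not value:
            raise TypeError(f"external triple missing/invalid {field!r}")
    return Triple(raw["subject"], raw["predicate"], raw["obj"])

# Local data: constructed in-process, trusted without a runtime check.
local = Triple("ex:alice", "foaf:knows", "ex:bob")

# External data: must pass the dynamic check at the system boundary.
external = check_external(
    {"subject": "ex:bob", "predicate": "foaf:name", "obj": "Bob"})
print(external.obj)  # Bob
```

Once an external value has been checked at the boundary it can flow through the rest of the program under the same static assumptions as local data, which is the essence of the mixed discipline the abstract describes.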
Who watches the watchers: Validating the ProB Validation Tool
Over the years, ProB has moved from a tool that complemented proving to a development environment that is now sometimes used instead of proving for applications such as exhaustive model checking or data validation. This has led to much more stringent requirements on the integrity of ProB. In this paper we present a summary of our validation efforts for ProB, in particular within the context of the norm EN 50128 and safety-critical applications in the railway domain.
Comment: In Proceedings F-IDE 2014, arXiv:1404.578
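Exhaustive model checking, one of the ProB use cases named above, amounts to enumerating every reachable state of a transition system and checking an invariant in each. The toy sketch below shows that scheme with a small counter model; it is our own illustration, not ProB or a B-method artefact.

```python
from collections import deque

# Toy exhaustive model checker: breadth-first exploration of the full
# reachable state space, checking an invariant in every state.

def model_check(initial, successors, invariant):
    """Return a violating state, or None if the invariant holds everywhere."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            return state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None

# Model: a counter that wraps from 3 back to 0; invariant: counter < 4.
violation = model_check(0, lambda s: [(s + 1) % 4], lambda s: s < 4)
print(violation)  # None: the invariant holds on all 4 reachable states
```

Because the exploration is exhaustive, a `None` result is a genuine proof over the finite state space, which is exactly why the integrity of the checker itself becomes safety-relevant.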
Earthquake Size Distribution: Power-Law with Exponent Beta = 1/2?
We propose that the widely observed and universal Gutenberg-Richter relation is a mathematical consequence of the critical branching nature of the earthquake process in a brittle fracture environment. These arguments, though preliminary, are confirmed by recent investigations of the seismic moment distribution in global earthquake catalogs and by results on the distribution of dislocation avalanche sizes in crystals. We consider possible systematic and random errors in determining earthquake size, especially its seismic moment. These effects increase the estimate of the parameter beta of the power-law distribution of earthquake sizes. In particular, we find that estimated beta-values may be inflated by 1-3% because relative moment uncertainties decrease with increasing earthquake size. Moreover, earthquake clustering greatly influences the beta-parameter: if clusters (aftershock sequences) are taken as the entity to be studied, then the exponent value for their size distribution would decrease by 5-10%. The complexity of any earthquake source also inflates the estimated beta-value by at least 3-7%. The centroid depth distribution should also influence the beta-value; an approximate calculation suggests that the exponent value may be increased by 2-6%. Taking all these effects into account, we propose that the recently obtained beta-value of 0.63 could be reduced to about 0.52-0.56, near the universal constant value (1/2) predicted by theoretical arguments. We also consider possible consequences of the universal beta-value and its relevance for the theoretical and practical understanding of earthquake occurrence in various tectonic and Earth-structure environments. Using comparative crystal deformation results may help us understand the generation of seismic tremors and slow earthquakes and illuminate the transition from brittle fracture to plastic flow.
Comment: 46 pages, 2 tables, 11 figures; 53 pages, 2 tables, 12 figures
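The beta estimates discussed above are typically obtained by maximum likelihood. For a Pareto-distributed moment catalog the estimator has a closed form, the moment-domain analogue of the Aki-Utsu b-value estimator. The sketch below demonstrates it on a synthetic catalog; the completeness threshold and catalog size are illustrative assumptions, not the global catalogs the paper analyses.

```python
import math
import random

# MLE of the power-law (Pareto) exponent from a moment catalog:
#   beta_hat = n / sum(ln(M_i / M_min))
def beta_mle(moments, m_min):
    """Maximum-likelihood estimate of the Pareto exponent beta."""
    return len(moments) / sum(math.log(m / m_min) for m in moments)

random.seed(42)
m_min = 1e17          # assumed catalog completeness threshold (N*m)
true_beta = 0.5       # the universal value the abstract argues for
# Inverse-transform sampling: M = M_min * U^(-1/beta) is Pareto(beta).
catalog = [m_min * random.random() ** (-1.0 / true_beta)
           for _ in range(20000)]

print(round(beta_mle(catalog, m_min), 2))
```

On clean synthetic data the estimator recovers beta ≈ 0.5; the biases the abstract enumerates (moment uncertainty, clustering, source complexity, depth distribution) act on real catalogs before this estimation step, which is how an apparent 0.63 can correspond to an underlying 1/2.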