3,866 research outputs found
How FAIR can you get? Image Retrieval as a Use Case to calculate FAIR Metrics
A large number of services for research data management strive to adhere to
the FAIR guiding principles for scientific data management and stewardship. To
evaluate these services and to indicate possible improvements, use-case-centric
metrics are needed as an addendum to existing metric frameworks. The retrieval
of spatially and temporally annotated images can exemplify such a use case. The
prototypical implementation indicates that currently no research data
repository achieves the full score. Suggestions on how to increase the score
include automatic annotation based on the metadata inside the image file and
support for content negotiation to retrieve the images. These and other
insights can lead to an improvement of data integration workflows, resulting in
a better and more FAIR approach to managing research data.
Comment: This is a preprint for a paper accepted for the 2018 IEEE conference
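One of the abstract's suggestions, content negotiation, can be sketched with the Python standard library: the client states its preferred image formats in the HTTP Accept header and lets the repository choose a representation. The endpoint URL and quality weights below are hypothetical placeholders, not a real repository API.

```python
import urllib.request

def build_image_request(record_url, preferred="image/tiff;q=1.0, image/png;q=0.8"):
    """Build an HTTP request that asks the repository to negotiate
    the image representation via the Accept header."""
    req = urllib.request.Request(record_url)
    req.add_header("Accept", preferred)
    return req

# Hypothetical usage (illustrative URL, not a real endpoint):
req = build_image_request("https://repository.example.org/records/1234")
```

A repository supporting content negotiation would then serve TIFF if available and fall back to PNG, instead of requiring a format-specific download URL.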
Solving Quadratic Programs to High Precision using Scaled Iterative Refinement
Quadratic optimization problems (QPs) are ubiquitous, and solution algorithms
have matured to a reliable technology. However, the precision of solutions is
usually limited due to the underlying floating-point operations. This may cause
inconveniences when solutions are used for rigorous reasoning. We contribute on
three levels to overcome this issue. First, we present a novel refinement
algorithm to solve QPs to arbitrary precision. It iteratively solves refined
QPs, assuming a floating-point QP solver oracle. We prove linear convergence of
residuals and primal errors. Second, we provide an efficient implementation,
based on SoPlex and qpOASES that is publicly available in source code. Third,
we give precise reference solutions for the Maros and M\'esz\'aros benchmark
library.
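The refinement scheme can be illustrated on a plain linear system (a QP without active inequality constraints). The sketch below is a toy under stated assumptions, not the SoPlex/qpOASES implementation: a "low-precision oracle" rounds its answers to a few significant digits, while residuals are accumulated exactly with rationals, mirroring how repeated refined solves drive the error toward zero.

```python
from fractions import Fraction

def lowprec(x, digits=4):
    """Simulate a limited-precision solver by rounding to a few significant digits."""
    return float(f"{float(x):.{digits}e}")

def oracle_solve(A, b):
    """'Floating-point oracle': solve a 2x2 system by Cramer's rule,
    rounding every component of the result to low precision."""
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    x0 = (b[0]*A[1][1] - b[1]*A[0][1]) / det
    x1 = (A[0][0]*b[1] - A[1][0]*b[0]) / det
    return [lowprec(x0), lowprec(x1)]

def refined_solve(A, b, iterations=10):
    """Iterative refinement: residuals are computed exactly (as Fractions),
    while each correction comes from the low-precision oracle."""
    Aq = [[Fraction(v) for v in row] for row in A]
    x = [Fraction(v) for v in oracle_solve(A, b)]
    for _ in range(iterations):
        # exact residual r = b - A x
        r = [Fraction(b[i]) - sum(Aq[i][j] * x[j] for j in range(2))
             for i in range(2)]
        # oracle solves the correction system A d = r at low precision
        d = oracle_solve(A, [float(r[0]), float(r[1])])
        x = [x[i] + Fraction(d[i]) for i in range(2)]
    return x
```

Each pass shrinks the residual by roughly the oracle's relative accuracy, which is the linear convergence the paper proves for the QP setting.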
Can Computational Meta-Documentary Linguistics Provide for Accountability and Offer an Alternative to "Reproducibility" in Linguistics?
In answer to the need for accountability in linguistics, computational methodology and big-data approaches offer an interesting perspective on the field of meta-documentary linguistics. This paper focuses on the scientific practice of citing published data and the insight it gives into the workings of a discipline. The proposed methodology is intended to bring out the narratives of linguistic research within the literature. This can be seen as an alternative, philological approach to documentary linguistics.
Report on the 9th International Conference on Austroasiatic Linguistics (ICAAL9) at Lund University, Sweden, November 18–19, 2021
This is a report on the 9th International Conference on Austroasiatic Linguistics (ICAAL9) at Lund University, Sweden, November 18–19, 2021, as well as a summary of the history and future plans of ICAAL.
Numerical Field Simulations of Composite Material
In the following work, results are presented which improve the understanding of the properties of composite materials consisting of a non-magnetic matrix and ferromagnetic, spherical inclusions that fulfill the conditions of a homogeneous effective medium. In particular, we are interested in the shift of the ferromagnetic resonance frequency and the effective permeability tensor as a function of the material properties and the microstructure of the composite. To generate the data of interest, various numerical simulation methods are used, including calculation of the static orientation of the magnetic moments, modelling of waveguide-based transmission and reflection experiments, and corresponding evaluation methods. With the methods at hand, we are able to analyze both composite bulk material and finite samples, and to consider different kinds of inclusion arrangements, from simple cubic lattices to random insertion. One of the main tasks of this work was to find ways to produce results that remain valid in the large-system limit while using low inclusion numbers, to which we are restricted by the high memory consumption of the high-frequency simulations. Even though finite memory resources lead to artifacts, we identify and isolate several counteracting effects that cause a shift of the ferromagnetic resonance frequency.
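For orientation, the following is textbook background rather than a result of the thesis: for an isolated magnetized sphere the Kittel resonance condition simplifies because the sphere's demagnetizing factors (1/3 along each axis) cancel, leaving f = (γ/2π)·μ0·H0. The inclusion interactions studied in such composites shift the resonance away from this single-sphere baseline.

```python
# Gyromagnetic ratio of the electron divided by 2*pi, in Hz per tesla
# (approximate value; treated as a constant here).
GAMMA_OVER_2PI = 28.0e9

def kittel_sphere_fmr(mu0_H_tesla):
    """FMR frequency of an isolated magnetized sphere: the demagnetizing
    contributions cancel (N = 1/3 on all axes), so only the applied
    field mu0*H enters the Kittel condition."""
    return GAMMA_OVER_2PI * mu0_H_tesla

# e.g. an applied field of 0.1 T resonates near 2.8 GHz
```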
Motion Planning for Triple-Axis Spectrometers
We present the free and open source software TAS-Paths, a novel system which
calculates optimal, collision-free paths for the movement of triple-axis
spectrometers. The software features an easy-to-use graphical user interface,
but can also be scripted and used as a library. It allows the user to plan and
visualise the motion of the instrument before the experiment and can be used
during measurements to circumvent obstacles. The instrument path is calculated
in angular configuration space in order to keep a maximum angular distance from
any obstacle.
Comment: 6 pages, 4 figures
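The clearance-maximizing idea can be illustrated with a toy planner that is not the TAS-Paths algorithm: discretize the angular configuration space into a grid, weight each free cell by the inverse of its distance to the nearest obstacle, and run Dijkstra, so that cheap paths are those staying far from obstacles.

```python
import heapq

def clearance(grid, i, j):
    """Distance from cell (i, j) to the nearest obstacle cell (brute force)."""
    dists = [((i - a)**2 + (j - b)**2)**0.5
             for a, row in enumerate(grid)
             for b, cell in enumerate(row) if cell]
    return min(dists) if dists else float("inf")

def plan(grid, start, goal):
    """Dijkstra over free cells; a step cost of 1/clearance pushes the
    cheapest path away from obstacles (toy maximum-clearance planner)."""
    n, m = len(grid), len(grid[0])
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if (i, j) == goal:
            break
        if d > dist.get((i, j), float("inf")):
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < m and not grid[a][b]:
                nd = d + 1.0 / clearance(grid, a, b)
                if nd < dist.get((a, b), float("inf")):
                    dist[(a, b)], prev[(a, b)] = nd, (i, j)
                    heapq.heappush(heap, (nd, (a, b)))
    # walk predecessors back from the goal to recover the path
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

A production planner like the one described above would of course use a proper distance transform and retraction onto the medial axis rather than this brute-force weighting.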
Machine-actionable assessment of research data products
Research data management is a relevant topic in academic research, which is why many concepts and technologies are emerging to address the challenges involved, such as data growth, reproducibility, and the heterogeneity of tools, services, and standards. The basic concept of research data management is the research data product; it has three dimensions: the data, the metadata describing them, and the services providing both. Traditionally, the assessment of a research data product has been carried out either manually, via peer review by human experts, or automatically, by counting certain events. We present a novel mechanism to assess research data products.
The current state of the art in machine-actionable assessment of research data products rests on the assumption that the quality, impact, or relevance of a product is linked to the likelihood that peers or others will interact with it: event-based metrics include counting citations, social media interactions, or usage statistics. The shortcomings of event-based metrics are systematically discussed in this thesis; they include dependence on the date of publication and the influence of social effects.
In contrast to event-based metrics, benchmarks for research data products simulate technical interactions with a research data product and check its compliance with best practices. Benchmarks operate on the assumption that the effort invested in producing a research data product increases the chances that its quality, impact, or relevance is high. This idea is translated into a software architecture and a step-by-step approach for creating benchmarks based on it.
As a proof of concept, we run a prototypical benchmark on more than 795,000 research data products deposited in the Zenodo repository to showcase its effectiveness even at this scale. A comparison of the benchmark's scores with event-based metrics indicates that benchmarks have the potential to complement event-based metrics and that the two weakly correlate under certain circumstances. These findings provide the methodological basis for a new tool to answer scientometric questions and to support decision-making in the distribution of scarce resources. Future research can further explore those aspects of benchmarks that help improve the reproducibility of scientific findings.
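A benchmark in the sense described above can be pictured as a battery of automated checks against a research data product's metadata. The field names, checks, and equal weighting below are illustrative assumptions, not the checks of the thesis's prototype:

```python
def benchmark_score(record):
    """Score a research data product record (a plain dict) against a few
    best-practice checks; each passed check contributes equally."""
    checks = [
        bool(record.get("doi")),                    # persistent identifier present
        bool(record.get("license")),                # explicit usage license
        bool(record.get("description")),            # human-readable metadata
        any(f.endswith((".csv", ".json", ".xml"))   # machine-readable file format
            for f in record.get("files", [])),
    ]
    return sum(checks) / len(checks)

# Hypothetical record; the DOI is an illustrative placeholder.
record = {
    "doi": "10.5281/zenodo.0000000",
    "license": "CC-BY-4.0",
    "files": ["data.csv", "readme.txt"],
}
```

Unlike an event-based metric, such a score is available on the day of deposit and is unaffected by the social dynamics of citation and sharing.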