8 research outputs found

    Report on research data management interviews conducted for HMC Hub Energy in 2022

    The Energy Hub of the Helmholtz Metadata Collaboration (HMC) conducted interviews with various stakeholders from the Helmholtz Research Field Energy on the topic of research data management (RDM) in 2022. The intentions were to build and serve a metadata community in the energy research field and to extend the Helmholtz-wide survey conducted by HMC in 2021 (Arndt et al., 2022). Beyond gaining deeper insight into the current state of RDM and metadata handling at the Helmholtz sites relevant to the Energy Hub, the interviews focused on researchers' related needs and difficulties and on their satisfaction with the current state. Furthermore, we tried to identify existing workflows and software solutions, to establish contacts, and to make HMC better known.

    Helmholtz Metadata Collaboration (HMC) - FAIR Metadata for Energy = FAIRe Metadaten fĂŒr die Energieforschung

    The Helmholtz Metadata Collaboration (HMC) is part of the Helmholtz Incubator Information & Data Science. HMC is meant to advance the description of research data with metadata, so as to improve their findability, and to implement this both organisationally and technically. Metadata are essential information about research data that is required for finding and understanding the data as well as for linking and reusing them in the sense of the FAIR principles. To this end, the scientific expertise on metadata from the individual subject domains is brought together in so-called Metadata Hubs of the individual research fields, harmonised at an overarching level, and metadata platforms are provided with the help of centrally developed methods and tools. The HMC Hub Energy is responsible for the Research Field Energy. Its task is to record the existing standards for describing energy data and metadata, the established description and acquisition processes, and the associated software tools, to identify gaps, and to design scenarios for complementing and further developing them in the energy domain. The shared goals of HMC are the simple and FAIR access to and use of existing and future data collections of the research fields, as well as enabling researchers to create FAIR data (semi-)automatically. The poster describes the structure of HMC in general and of the Hub Energy in particular as well as the methods and tools developed, and uses application examples to give impulses for putting these methods and tools into practice towards FAIR metadata. Furthermore, links to HMC training and teaching materials are provided. The poster is an invitation to contact the HMC Hub Energy in order to benefit from HMC's work.

    Welche Macht darf es denn sein? Tracing ‘Power’ in German Foreign Policy Discourse

    The relationship between ‘Germany’ and ‘power’ remains a sensitive issue. While observers tend to agree that Germany has regained the status of the most powerful country in Europe, there is debate over whether this is to be welcomed or whether it is a problem. Underpinning this debate are views, both within Germany and amongst its neighbours, regarding the kind of power Germany has, or should (not) have. Against this backdrop, the article reviews the dominant role conceptions used in the expert discourse on German foreign policy since the Cold War that depict Germany as a particular type of ‘power’. Specifically, we sketch the evolution of three prominent conceptions (constrained power, civilian power, hegemonic power) and the recent emergence of a new one (shaping power). The article discusses how these labels have emerged to give meaning to Germany’s position in international relations, points to their normative and political function, and highlights the limited ability of such role images to tell us much about how Germany actually exercises power.

    Knowledge Graph Development as a Collaborative Process

    <p>Establishing semantic data and knowledge graphs in scientific working groups is no easy feat. In most cases there is neither a user friendly tool chain nor experience with ontologies for the respective research field. But without a start, said experience can never be gained. The same is true for individuals that want to start into the field.</p><p>We thus see knowledge graph development not as a task of expert individuals that already know everything, but as a collaborative (learning) process of working groups and organisations. At the start of this process the right ontologies are not known and the individuals do not yet have experience with expressing information in knowledge graphs. Thus, a tool chain must provide basic knowledge to help newcomers to get started. It must also support the learning process and the selection of terms and ontologies, while users are already working with their own data and metadata. Additionally, the tool chain must support cooperation and lateral transfer of knowledge within organisations and working groups as well as between working groups world wide.</p><p>We therefore propose to establish a data infrastructure in every research organisation consisting of the following elements: An organisational knowledge graph, integration of (global) ID services, links to FAIR ontologies, policies, and a graph editing tool. This editing tool must support simultaneously the input of graph data, the extension of ontologies, the development of data structures, and finding and reusing existing ontologies and data structures not only from other persons inside the organisation but also from globally emerging metadata standards. While searching for a fitting term from a predefined set of ontologies, the tool would also allow for the creation of an internal term, when no fitting one is found. While trying to create a new term, fitting ones are automatically searched and proposed. The here proposed graph editing tool would provide the possibility to refactor existing data to newly selected ontologies, e.g. through replacing terms or whole structures, while keeping the original history in a git+GitLab like structure. This would also allow for access control and cooperation within the organisation and beyond. Such refactoring translations would also be described in terms of graph data and be published, so that others considering the same transition could use them without much effort.</p><p>We think that in the presented infrastructure users could establish processes that would foster harmonization and convergence of ontologies and data structures, while not impeding the collection of data and learning processes of individuals before harmonization is achieved.</p&gt

    Benchmarking airborne laser scanning tree segmentation algorithms in broadleaf forests shows high accuracy only for canopy trees

    Individual tree segmentation from airborne laser scanning data is a longstanding and important challenge in forest remote sensing. Tree segmentation algorithms are widely available, but robust intercomparison studies are rare due to the difficulty of obtaining reliable reference data. Here we provide a benchmark data set for temperate and tropical broadleaf forests generated from labelled terrestrial laser scanning data. We compared the performance of four widely used tree segmentation algorithms against this benchmark data set. All algorithms performed reasonably well on the canopy trees. The point-cloud-based algorithm AMS3D (Adaptive Mean Shift 3D) had the highest overall accuracy, closely followed by the 2D raster-based region-growing algorithm Dalponte2016+. However, all algorithms failed to accurately segment the understory trees. This result was consistent across both forest types. This study emphasises the need to assess tree segmentation algorithms directly using benchmark data, rather than comparing with forest indices such as biomass or the number and size distribution of trees. We provide the first openly available benchmark data set for tropical forests, and we hope future studies will extend this work to other regions.
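    As a rough, hedged illustration of the benchmarking idea (not the authors' code), the Python sketch below scores detected tree positions against labelled reference positions by greedy nearest-neighbour matching and reports precision, recall, and F1. The 2 m matching tolerance and the toy coordinates are assumptions chosen for the example.

    # Minimal sketch: match detected tree positions to reference positions within
    # a distance tolerance and compute detection precision, recall, and F1 score.
    import math

    def score_segmentation(detected, reference, max_dist=2.0):
        unmatched_ref = list(reference)
        tp = 0
        for dx, dy in detected:
            best, best_d = None, max_dist
            for rx, ry in unmatched_ref:
                d = math.hypot(dx - rx, dy - ry)
                if d <= best_d:
                    best, best_d = (rx, ry), d
            if best is not None:          # detected tree has a reference partner
                unmatched_ref.remove(best)
                tp += 1
        fp = len(detected) - tp           # spurious detections
        fn = len(reference) - tp          # missed reference trees
        precision = tp / (tp + fp) if detected else 0.0
        recall = tp / (tp + fn) if reference else 0.0
        f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
        return precision, recall, f1

    # Toy example: three reference trees, two detected correctly, one false positive.
    reference = [(0.0, 0.0), (5.0, 5.0), (10.0, 0.0)]
    detected = [(0.4, -0.3), (5.2, 4.8), (20.0, 20.0)]
    print(score_segmentation(detected, reference))  # -> roughly (0.67, 0.67, 0.67)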

    A survey on research data management practices among researchers in the Helmholtz Association

    Annotation of research data with rich metadata is important to make those data findable, accessible, interoperable, and reusable (Wilkinson et al., 2016). This helps ensure that the research data produced remain durable. Within the Helmholtz Association, the Helmholtz Metadata Collaboration (HMC) coordinates the mission to enrich Helmholtz-based research data with metadata by providing (information about) technical solutions, giving advice, and ensuring uniform scientific standards for the use of metadata. In 2021, HMC conducted its first community survey to align its services with the needs of Helmholtz researchers. A question catalogue with 49 (sub-)questions was designed and disseminated among researchers in all six Helmholtz research fields. The conditional succession of the questions was aligned with predetermined expertise levels ("no prior knowledge", "intermediate prior knowledge", "high level of prior knowledge"). In total, 631 completed survey replies were obtained for analysis. The HMC Community Survey 2021 provides insight into the management of research data as well as the data publication practices of researchers in the Helmholtz Association. The characterization of research-field-dependent communities will enable HMC to further develop targeted, community-directed support for the documentation of research data with metadata.

    Structure Control of Polysaccharide Derivatives for Efficient Separation of Enantiomers by Chromatography
