
    Cactaceae at Caryophyllales.org – A dynamic online species-level taxonomic backbone for the family

    This data paper presents a largely phylogeny-based online taxonomic backbone for the Cactaceae, compiled from literature and online sources using the tools of the EDIT Platform for Cybertaxonomy. The data will form a contribution of the Caryophyllales Network to the World Flora Online and serve as the base for further integration of research results from the systematic research community. The final aim is to treat all effectively published scientific names in the family. The checklist includes 150 accepted genera, 1851 accepted species, 91 hybrids, 746 infraspecific taxa (458 heterotypic, 288 with autonyms), 17,932 synonyms of accepted taxa, 16 definitely excluded names, 389 names of uncertain application, 672 unresolved names and 454 names belonging to (probably artificial) named hybrids, totalling 22,275 names. The process of compiling this database is described and further editorial rules for the compilation of the taxonomic backbone for the Caryophyllales Network are proposed. A checklist depicting the current state of the taxonomic backbone is provided as supplemental material. All results are also available online on the website of the Caryophyllales Network and will be constantly updated and expanded in the future.

    Citation: Korotkova N., Aquino D., Arias S., Eggli U., Franck A., Gómez-Hinostrosa C., Guerrero P. C., Hernández H. M., Kohlbecker A., Köhler M., Luther K., Majure L. C., Müller A., Metzing D., Nyffeler R., Sánchez D., Schlumpberger B. & Berendsohn W. G. 2021: Cactaceae at Caryophyllales.org – A dynamic online species-level taxonomic backbone for the family. – Willdenowia 51: 251–270. Version of record first published online on 31 August 2021, ahead of inclusion in the August 2021 issue. Data published through: http://caryophyllales.org/cactaceae/Checklis
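
    The published counts can be tallied directly from the checklist download. Below is a minimal sketch, assuming a hypothetical CSV export named cactaceae_checklist.csv with a status column holding values such as "accepted" or "synonym"; the actual Caryophyllales.org export format and column names may differ.

```python
# Minimal sketch: tallying name statuses in a checklist export.
# The file name and the "status" column are assumptions; adjust them to
# the actual columns of the Caryophyllales.org checklist download.
import csv
from collections import Counter

counts = Counter()
with open("cactaceae_checklist.csv", newline="", encoding="utf-8") as fh:
    for row in csv.DictReader(fh):
        counts[row["status"]] += 1

# Report per-status counts and the grand total of names.
for status, n in counts.most_common():
    print(f"{status}: {n}")
print(f"total names: {sum(counts.values())}")
```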

    Prevalence of abnormal Alzheimer’s disease biomarkers in patients with subjective cognitive decline: cross-sectional comparison of three European memory clinic samples

    Introduction: Subjective cognitive decline (SCD) in cognitively unimpaired older individuals has been recognized as an early clinical at-risk state for Alzheimer's disease (AD) dementia and as a target population for future dementia prevention trials. Currently, however, SCD is heterogeneously defined across studies, potentially leading to variations in the prevalence of AD pathology. Here, we compared the prevalence and identified common determinants of abnormal AD biomarkers in SCD across three European memory clinics participating in the European initiative on harmonization of SCD in preclinical AD (Euro-SCD).

    Methods: We included three memory clinic SCD samples with available cerebrospinal fluid (CSF) biomaterial (IDIBAPS, Barcelona, Spain, n = 44; Amsterdam Dementia Cohort (ADC), The Netherlands, n = 50; DELCODE multicenter study, Germany, n = 42). CSF biomarkers (amyloid beta (Aβ)42, tau, and phosphorylated tau (ptau181)) were centrally analyzed in Amsterdam using prespecified cutoffs to define prevalence of pathological biomarker concentrations. We used logistic regression analysis in the combined sample across the three centers to investigate center effects with regard to likelihood of biomarker abnormality while taking potential common predictors (e.g., age, sex, apolipoprotein E (APOE) status, subtle cognitive deficits, depressive symptoms) into account.

    Results: The prevalence of abnormal Aβ42, but not tau or ptau181, levels was different across centers (64% DELCODE, 57% IDIBAPS, 22% ADC; p < 0.001). Logistic regression analysis revealed that the likelihood of abnormal Aβ42 (and also abnormal tau or ptau181) levels was predicted by age and APOE status. For Aβ42 abnormality, we additionally observed a center effect, indicating between-center heterogeneity not explained by age, APOE, or the other included covariates.

    Conclusions: While heterogeneous frequency of abnormal Aβ42 was partly explained by between-sample differences in age range and APOE status, the additional observation of center effects indicates between-center heterogeneity that may be attributed to different recruitment procedures. These findings highlight the need for the development of harmonized recruitment protocols for SCD case definition in multinational studies to achieve similar enrichment rates of preclinical AD.
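
    The analysis design (biomarker abnormality regressed on common predictors plus a categorical center term) can be sketched as follows. This is an illustrative example on synthetic data, not the study's code; all variable names, effect sizes and the generated data are invented.

```python
# Illustrative sketch of the type of analysis described: logistic regression
# of Abeta42 abnormality on age, sex, APOE-e4 status and center.
# All data below are synthetic placeholders, not study data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 136  # 44 + 50 + 42, matching the reported sample sizes
df = pd.DataFrame({
    "center": rng.choice(["IDIBAPS", "ADC", "DELCODE"], size=n),
    "age": rng.normal(68, 7, size=n).round(1),
    "female": rng.integers(0, 2, size=n),
    "apoe4": rng.integers(0, 2, size=n),
})
# Synthetic outcome: older age and APOE-e4 carriage raise the odds.
lin = -14 + 0.18 * df["age"] + 1.2 * df["apoe4"]
df["abeta_abnormal"] = rng.binomial(1, 1 / (1 + np.exp(-lin)))

# Center enters as a categorical covariate; a significant center term
# would indicate heterogeneity beyond age, sex and APOE status.
model = smf.logit("abeta_abnormal ~ age + female + apoe4 + C(center)",
                  data=df)
print(model.fit(disp=0).summary())
```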

    Qualitätsentwicklung an Ganztagsschulen

    Shifting decision-making authority to, or strengthening it at, the level of the individual school makes it possible to develop solutions and design approaches that can be tailored to local needs and circumstances. Critical engagement with the experiences of others, communicated at dedicated professional development events, gives rise to good-practice examples from which ideas for realizing one's own projects in the course of all-day school development can be derived. The third Bavarian all-day school congress, "Qualitätsentwicklung an Ganztagsschulen" (Quality Development at All-Day Schools), held on 1–2 March 2012 in Forchheim, offered participants the opportunity for discussion and exchange through a variety of lectures, workshops and school visits. The present volume documents the event.

    A cooperative AIS framework for intrusion detection

    We present a cooperative intrusion detection approach inspired by biological immune system principles and P2P communication techniques to develop a distributed anomaly detection scheme. We utilize dynamic collaboration between individual artificial immune system (AIS) agents to address the well-known false-positive problem in anomaly detection. The AIS agents use a set of detectors obtained through negative selection during a training phase and exchange status information and detectors on a periodic and event-driven basis, respectively. This cooperation scheme follows peer-to-peer communication principles in order to avoid a single point of failure and increase the robustness of the system. We illustrate our approach by means of two specific example scenarios in a novel network security simulator.
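
    The negative-selection step mentioned above can be illustrated with a minimal sketch: detector candidates are generated at random, those matching any "self" sample during training are discarded, and anything a surviving detector later matches is flagged as anomalous. The binary representation, the r-contiguous-bits matching rule and all parameters below are illustrative assumptions, not taken from the paper.

```python
# Minimal negative-selection sketch over binary strings, assuming an
# r-contiguous-bits matching rule; parameters are illustrative.
import random

L, R = 16, 7  # string length and contiguous-match threshold (assumed)

def matches(a: str, b: str, r: int = R) -> bool:
    """True if a and b agree on at least r contiguous positions."""
    run = best = 0
    for x, y in zip(a, b):
        run = run + 1 if x == y else 0
        best = max(best, run)
    return best >= r

def censor(self_set, n_detectors):
    """Keep only random candidates that match no self string."""
    detectors = []
    while len(detectors) < n_detectors:
        cand = "".join(random.choice("01") for _ in range(L))
        if not any(matches(cand, s) for s in self_set):
            detectors.append(cand)
    return detectors

self_set = {"0000111100001111", "0101010101010101"}
detectors = censor(self_set, 10)
# An observed string matched by any detector is flagged as anomalous;
# in the cooperative scheme, such detectors would be exchanged via P2P.
print(any(matches("1111000011110000", d) for d in detectors))
```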

    NFDI4Biodiversity: a German infrastructure for biodiversity data

    Digital data have become an indispensable basis for biodiversity research. Sustainable curation, archiving, accessibility and integrability according to the FAIR principles ("Findable, Accessible, Interoperable and Reusable", Wilkinson et al. 2016) are essential for re-use to answer pressing questions in a rapidly changing environment.

    As part of the German multidisciplinary National Research Data Infrastructure (NFDI), the NFDI4Biodiversity consortium, with 49 partners spanning a broad spectrum from academia to agencies, learned societies and citizen science, has set itself the goal of providing a sustainable data infrastructure for biodiversity research. NFDI4Biodiversity builds on the German Federation for Biological Data (GFBio) project (2014–2021) and the GFBio e.V. founded within it, both organisationally and in the provision of services. These include a data submission and archiving system, support for the creation of data management plans and certification, and portal functions with extensive data visualization and terminology services, flanked by helpdesk, support and outreach activities (Diepenbroek et al. 2014).

    Within the framework of NFDI4Biodiversity, these services will be expanded based on (and calibrated by) the requirements of 23 concrete use cases from manifold biodiversity research domains. A central new component is the development of a powerful multi-cloud platform, the "Research Data Commons" (RDC), where data can be aggregated, semantically linked and enriched with external services (Glöckner et al. 2020).

    Alongside the development of the services, the potential for joint use of standards and service components will be exploited through cooperation with existing data infrastructures. In addition to other NFDI consortia, international infrastructures and comparable national initiatives will play a special role; this process was already started with a symposium during the 2019 Biodiversity Next conference.

    EDIT Platform for Cybertaxonomy, TaxEditor, User manual, appendix

    The Common Data Model (CDM) is the underlying data structure of the EDIT Platform for Cybertaxonomy, representing a complete model of the data used in biological taxonomy and systematics. CDM-light is a set of relational tables produced by one of the export functions of the EDIT Platform. Compared to the CDM itself, the relational model is simplified and the data are partially aggregated. CDM-light may be used as a transfer format, to generate statistics about the data in a CDM database, to check data quality, or to produce document-type output from CDM databases.
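
    Used as a transfer format, a CDM-light export can be queried with ordinary SQL for statistics and quality checks. A minimal sketch follows, assuming the relational tables have been loaded into SQLite; the table and column names used here (TaxonName, rank, nomenclaturalStatus) are placeholders rather than the actual CDM-light schema.

```python
# Sketch: statistics and quality checks over a CDM-light export loaded
# into SQLite. Table/column names are assumptions, not the real schema.
import sqlite3

con = sqlite3.connect("cdm_light_export.db")

# Statistics: number of names per rank.
for rank, n in con.execute(
    "SELECT rank, COUNT(*) FROM TaxonName "
    "GROUP BY rank ORDER BY COUNT(*) DESC"
):
    print(f"{rank}: {n}")

# Quality check: names without a nomenclatural status entry.
(missing,) = con.execute(
    "SELECT COUNT(*) FROM TaxonName WHERE nomenclaturalStatus IS NULL"
).fetchone()
print(f"names without nomenclatural status: {missing}")
con.close()
```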

    A Comprehensive and Standards-Aware Common Data Model (CDM) for Taxonomic Research

    The EDIT Common Data Model (CDM) (FUB, BGBM 2008) is the centrepiece of the EDIT Platform for Cybertaxonomy (FUB, BGBM 2011, Ciardelli et al. 2009). Building on modelling efforts reaching back to the 1990s, it aims to combine existing standards relevant to the taxonomic domain (but often designed for data exchange) with the requirements of modern taxonomic tools. Modelled in the Unified Modelling Language (UML) (Booch et al. 2005), it offers an object-oriented view on the information domain managed by expert taxonomists that is implemented independently of the operating system and database management system (DBMS) used. Having been used in various national and international research projects with diverse foci over the past decade, the model has evolved and become the common base of a variety of taxonomic projects, such as floras, faunas and checklists (see FUB, BGBM 2016 for a number of data portals created and made publicly available by different projects).

    The CDM is strictly oriented towards the needs of the taxonomic expert community. Where requirements are complex, it tries to reflect them faithfully rather than introduce ambiguity or reduced functionality through (over-)simplification; where simplification is possible, it tries to stay or become simple. Simplification on the model level is achieved by implementing business rules via constraints rather than via typification and subclassing. Simplification on the user interface level is achieved by numerous options for customisation. Being used as a generic model for a variety of application types and use cases, it is adaptable and extendable by users and developers. It uses a combination of static and dynamic typification to allow efficient handling both of complex but well-defined data domains, such as taxonomic classifications and nomenclature, and of less well-defined, flexible domains such as factual and descriptive data. Additionally, it allows the creation of more than 30 types of user-defined vocabularies, such as those for taxonomic rank, nomenclatural status, name-to-name relationships, geographic area, presence status, etc.

    A strong focus is set on good scientific practice by making the source of almost all data citable in detail and offering data lineage to trace data back to its roots. It is also easy to reflect multiple opinions in parallel, e.g. differing taxonomic concepts (Berendsohn 1995, Berendsohn & al., this session) or several descriptive treatments obtained from different regional floras or faunas. The CDM attempts to comprehensively cover the data used in the taxonomic domain: nomenclature, taxonomy (including concepts), taxon distribution data, descriptive data of all kinds (including morphological data referring to taxa and/or specimens), images and multimedia data of various kinds, and a complex system covering specimens and specimen derivatives down to DNA samples and sequences (Kilian et al. 2015, Stöver and Müller 2015) that mirrors the complexity of knowledge accumulation in the taxonomic research process.

    In the context of the EDIT Platform, several applications have been developed based on the CDM and on the library that provides the API and web service interfaces for it (see Kohlbecker & al. and Güntsch & al., this session). In some areas the CDM is still evolving: although the basic structures are present, questions of application development feed back into modelling decisions. A "no-shortcuts" approach to modelling has at times delayed application development in the past, but it now pays off: the Platform can rapidly adapt to changing requirements from different projects and taxonomic specialists.
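
    The dynamic-typification and user-defined-vocabulary ideas can be illustrated in a few lines. This sketch is not CDM code; the class and attribute names are invented to show the principle that term types such as ranks are data-driven vocabulary entries rather than hard-coded subclasses.

```python
# Illustrative sketch (not actual CDM classes) of dynamic typification:
# term types live in extensible vocabularies instead of class hierarchies.
from dataclasses import dataclass, field

@dataclass
class DefinedTerm:
    label: str
    vocabulary: "TermVocabulary"

@dataclass
class TermVocabulary:
    name: str                        # e.g. "Rank", "NomenclaturalStatus"
    terms: dict = field(default_factory=dict)

    def add_term(self, label: str) -> DefinedTerm:
        term = DefinedTerm(label, self)
        self.terms[label] = term
        return term

# Users can extend a vocabulary at runtime instead of subclassing:
ranks = TermVocabulary("Rank")
species = ranks.add_term("species")
ranks.add_term("nothospecies")       # a user-defined addition
print(species.vocabulary.name, sorted(ranks.terms))
```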

    EDIT Platform Web Services in the Biodiversity Infrastructure Landscape

    The EDIT Platform for Cybertaxonomy is a standards-based suite of software components supporting the taxonomic research workflow from field work to publication in journals and dynamic web portals (FUB, BGBM 2011). The underlying Common Data Model (CDM) covers the main biodiversity informatics foci such as names, classifications, descriptions, literature and multimedia, as well as specimens and observations and their derived objects. Today, more than 30 instances of the platform are serving data to the international biodiversity research communities. An often overlooked feature of the platform is its well-defined web service layer, which provides powerful functions for machine access and integration into the growing service-based biodiversity informatics landscape (FUB, BGBM 2010).

    All platform instances have a pre-installed and open service layer serving three different use cases. The CDM REST API provides a platform-independent RESTful (read-only) interface to all resources represented in the CDM. In addition, a set of portal services has been designed to meet the special functional requirements of CDM data portals and their advanced navigation capabilities. While the "raw" REST API already provides all functions for searching and browsing the entire information space spanned by the CDM, the integration of CDM services into external infrastructures and workflows requires an additional set of streamlined service endpoints with a special focus on documentation and version stability. To this end, the platform provides a set of "catalogue services" with optimized functions for (fuzzy) name, taxon and occurrence data searches (FUB, BGBM 2013, FUB, BGBM 2014).

    A good example of the integration of EDIT Platform catalogue services into broader workflows is the "Taxonomic Data Refinement Workflow" implemented in the context of the EU 7th Framework Programme project BioVeL (Hardisty et al. 2016). The workflow uses the service layer of an EDIT Platform based instance of the Catalogue of Life (CoL) for resolving taxonomic discrepancies between specimen datasets (Mathew et al. 2014). The same service is also part of the Unified Taxonomic Information Service (UTIS), providing an easy-to-use interface for running simultaneous searches across multiple taxonomic checklists (FUB, BGBM 2016).
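
    Consuming such a catalogue service typically amounts to a single HTTP request. The following is a minimal sketch: the base URL is a placeholder, and the endpoint path and parameters are assumptions modelled on the catalogue-service pattern described above, so the actual interface of a given platform instance should be taken from its service documentation.

```python
# Sketch of querying a platform instance's name catalogue service.
# The base URL, endpoint path and parameter names are assumptions;
# consult the instance's service documentation for the real interface.
import requests

BASE = "https://example.org/cdmserver/my_instance"  # placeholder instance

resp = requests.get(
    f"{BASE}/name_catalogue.json",   # assumed catalogue endpoint
    params={"query": "Abies alba"},  # assumed query parameter
    timeout=30,
)
resp.raise_for_status()

# Print whatever name matches the service returns for inspection.
for hit in resp.json():
    print(hit)
```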