158 research outputs found

    Deriving Digital Energy Platform Archetypes for Manufacturing – A Data-Driven Clustering Approach

    External factors such as climate change and the current energy crisis driven by global conflicts are making energy consumption and energy procurement increasingly relevant in the manufacturing industry. In addition to the growing call for sustainability, companies are increasingly struggling with rising energy costs and the reliability of the power grid, which endangers the competitiveness of companies and regions affected by high energy prices. Appropriate measures for energy-efficient and, not least, energy-flexible production are necessary. In addition to innovations and optimizations of plants and processes, digital energy platforms for the visualization, analysis, optimization, and control of energy flows are becoming essential. Over time, several digital energy platforms have emerged on the market. The number of platforms and their differing functionalities make it challenging for classic manufacturing companies to keep track of them and select the right digital energy platform. In the literature, the characteristics and functionalities of digital energy platforms have already been identified and structured. However, a classification of existing platforms into archetypes would make it easier for companies to select a platform providing the missing functionality. To tackle this issue, we conducted an explorative, data-driven cluster analysis of 49 existing digital energy platforms to identify digital energy platform archetypes and derive implications for research and practice. The results show five archetypes that differ primarily in their energy market integration functionalities. The identified archetypes provide a well-founded overview of the similarities and differences of digital energy platforms. Decision makers in manufacturing companies will benefit from the archetypes in future analyses as decision support in the procurement and modification of digital energy platforms.
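
The abstract does not name the clustering algorithm; as one hedged illustration of explorative, data-driven clustering over platform functionality profiles, here is a k-means sketch on a hypothetical binary feature matrix (all feature names and values are invented for illustration, not taken from the study):

```python
import numpy as np

# Hypothetical binary functionality profiles (rows: platforms; columns:
# features such as visualization, analysis, optimization, market
# integration). The real study clustered 49 platforms.
X = np.array([
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 1, 1],
], dtype=float)

def kmeans(X, k, iters=100, seed=0):
    """Plain k-means: assign each row to its nearest center, then
    recompute centers as cluster means, repeating for a fixed budget."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

labels, centers = kmeans(X, k=2)
```

Each resulting cluster center is an "archetype" in the sense of the abstract: an average functionality profile shared by a group of platforms.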

    Streamlining Homogeneous Glycoprotein Production for Biophysical and Structural Applications by Targeted Cell Line Development

    Studying the biophysical characteristics of glycosylated proteins and solving their three-dimensional structures requires homogeneous recombinant protein of high quality. We introduce here a new approach to producing glycoproteins in homogeneous form with the well-established glycosylation mutant CHO Lec3.2.8.1 cells. Using preparative cell sorting, stable, high-expressing GFP ‘master’ cell lines were generated that can be converted quickly and reliably, by targeted integration via Flp recombinase-mediated cassette exchange (RMCE), to produce any glycoprotein. Small-scale transient transfection of HEK293 cells was used to identify genetically engineered constructs suitable for constructing stable cell lines. Stable cell lines expressing 10 different proteins were established. The system was validated by expression, purification, deglycosylation and crystallization of the heavily glycosylated luminal domains of lysosome-associated membrane proteins (LAMP).

    FAIR Data Commons / Essential Services and Tools for Metadata Management Supporting Science

    A sophisticated ensemble of services and tools enables high-level research data and research metadata management in science. On a technical level, research datasets need to be registered, preserved, and made interactively accessible using repositories that meet the specific requirements of scientists in terms of flexibility and performance. These requirements are fulfilled by the Base Repo and the MetaStore of the KIT Data Manager Framework. In our data management architecture, data and metadata are represented as FAIR Digital Objects that are machine actionable. The Typed PID Maker and the FAIR Digital Object Lab provide support for the creation and management of data objects. Other tools enable editing of metadata documents, annotation of data and metadata, building collections of data objects, and creating controlled vocabularies. Information systems such as the Metadata Standards Catalog and the Data Collections Explorer help researchers select domain-specific metadata standards and schemas and identify data collections of interest. Infrastructure developers search the Catalog of Repository Systems for information on modern repository systems, and the FAIR Digital Object Cookbook for recipes for creating FAIR Digital Objects. Existing knowledge about metadata management, services, tools, and information systems has been applied to create research data management architectures for a variety of fields, including digital humanities, materials science, biology, and nanoscience. For Scanning Electron Microscopy, Transmission Electron Microscopy and Magnetic Resonance Imaging, metadata schemas were developed in close cooperation with the domain specialists and incorporated in the research data management architectures. 
This research has been supported by the research program ‘Engineering Digital Futures’ of the Helmholtz Association of German Research Centers, the Helmholtz Metadata Collaboration (HMC) Platform, the German National Research Data Infrastructure (NFDI), the German Research Foundation (DFG) and the Joint Lab “Integrated Model and Data Driven Materials Characterization (MDMC)”. This project has also received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 101007417 within the framework of the NFFA-Europe Pilot (NEP) Joint Activities.
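
As a purely illustrative sketch of what a machine-actionable FAIR Digital Object record could look like, the following uses invented field names and identifiers; it is an assumption for illustration, not the actual KIT Data Manager or Typed PID Maker schema:

```python
import json

# Hypothetical FAIR Digital Object record: a persistent identifier, a
# type, a pointer to the metadata schema, and a pointer to the data.
# All names and URLs below are placeholders invented for this sketch.
fdo = {
    "pid": "demo-prefix/0000-example",                    # placeholder PID
    "type": "dataset",
    "metadata_schema": "https://example.org/schemas/sem",  # placeholder
    "data_location": "https://example.org/repo/objects/42" # placeholder
}

# Machine actionability here just means the record is structured and
# resolvable by software rather than only readable by humans.
record = json.dumps(fdo, indent=2)
```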

    In vivo alkaline comet assay: Statistical considerations on historical negative and positive control data

    The alkaline comet assay is frequently used as an in vivo follow-up test within different regulatory environments to characterize the DNA-damaging potential of test items. The corresponding OECD Test Guideline 489 highlights the importance of statistical analyses and historical control data (HCD) but does not provide detailed procedures. Therefore, the working group “Statistics” of the German-speaking Society for Environmental Mutation Research (GUM) collected HCD from five laboratories and more than 200 comet assay studies and performed several statistical analyses. Key results were that (I) the large inter-laboratory effects observed argue against the use of absolute quality thresholds, (II) more than 50% zero values on a slide are considered problematic due to their influence on slide or animal summary statistics, and (III) the choice of summarizing measure for single-cell data (e.g., median, arithmetic or geometric mean) may lead to extreme differences in the resulting animal tail intensities and study outcomes in the HCD. Such summarizing values increase the reliability of analysis results by better meeting statistical model assumptions, but at the cost of information loss. Furthermore, the relation between negative and positive control groups in the data set was always satisfactory (or at least sufficient), based on ratio, difference and quantile analyses.
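
The influence of the summarizing measure noted in point (III) can be seen in a standard-library sketch on right-skewed tail-intensity values; the numbers below are invented for illustration, not taken from the HCD:

```python
import statistics

# Hypothetical single-cell tail intensities (%) from one slide.
# Comet-assay data are typically right-skewed with many near-zero values.
tail_intensities = [0.2, 0.4, 0.5, 0.8, 1.1, 2.5, 6.0, 14.0, 35.0]

arith = statistics.fmean(tail_intensities)            # pulled up by outliers
median = statistics.median(tail_intensities)          # robust to outliers
geo = statistics.geometric_mean(tail_intensities)     # requires values > 0

# On skewed data the three summaries diverge strongly, which is how the
# choice of measure can change animal-level values and the study outcome.
print(f"arithmetic={arith:.2f} median={median:.2f} geometric={geo:.2f}")
```

Note that `geometric_mean` fails on zero values, one concrete reason the abstract flags slides with many zeros as problematic.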

    Automatic post-picking improves particle image detection from Cryo-EM micrographs

    Cryo-electron microscopy (cryo-EM) studies using single-particle reconstruction are extensively used to reveal structural information on macromolecular complexes. Aiming at the highest achievable resolution, state-of-the-art electron microscopes acquire thousands of high-quality images. Once these data have been collected, each single particle must be detected and windowed out. Several fully or semi-automated approaches have been developed for the selection of particle images from digitized micrographs. However, they still require laborious manual post-processing, which will become the major bottleneck for the next generation of electron microscopes. Instead of focusing on improvements in automated particle selection from micrographs, we propose a post-picking step for classifying the small windowed images output by common picking software. A supervised strategy for the classification of windowed micrograph images into particles and non-particles reduces the manual workload by orders of magnitude. The method builds on new, powerful image features and the proper training of an ensemble classifier. A few hundred training samples are enough to achieve human-like classification performance.
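
A minimal sketch of the idea, supervised classification of windowed images into particles and non-particles, using synthetic windows and a nearest-centroid stand-in; the paper's actual image features and ensemble classifier are considerably more sophisticated, so everything below is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_window(is_particle, size=16):
    """Synthetic windowed image: particles get a bright central blob
    added to Gaussian noise; non-particles are pure noise."""
    img = rng.normal(0.0, 1.0, (size, size))
    if is_particle:
        y, x = np.mgrid[:size, :size]
        img += 3.0 * np.exp(-((y - size / 2) ** 2 + (x - size / 2) ** 2) / 8.0)
    return img

def features(img):
    """Toy feature vector: global mean, global spread, central contrast."""
    c = img[4:12, 4:12]  # central patch
    return np.array([img.mean(), img.std(), c.mean() - img.mean()])

# Train on a few hundred labeled windows, as in the abstract's setting.
train = [(features(make_window(label)), label) for label in [0, 1] * 100]
cent = {l: np.mean([f for f, ll in train if ll == l], axis=0) for l in (0, 1)}

def classify(img):
    """Nearest-centroid decision in feature space (stand-in classifier)."""
    f = features(img)
    return min((0, 1), key=lambda l: np.linalg.norm(f - cent[l]))
```

The point mirrors the abstract: with informative features, a few hundred labeled examples already give a reliable particle/non-particle decision.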

    Properties of individual contrails: a compilation of observations and some comparisons

    Mean properties of individual contrails are characterized for a wide range of jet aircraft as a function of age during their life cycle from seconds to 11.5 h (7.4-18.7 km altitude, -88 to -31 degrees C ambient temperature), based on a compilation of about 230 previous in situ and remote sensing measurements. The airborne, satellite, and ground-based observations encompass exhaust contrails from jet aircraft from 1972 onwards, as well as a few older data for propeller aircraft. The contrails are characterized by mean ice particle sizes and concentrations, extinction, ice water content, optical depth, geometrical depth, and contrail width. Integral contrail properties include the cross-section area and total number of ice particles, total ice water content, and total extinction (area integral of extinction) per contrail length. When known, the contrail-causing aircraft and ambient conditions are characterized. The individual datasets are briefly described, including a few new analyses performed for this study, and compiled together to form a "contrail library" (COLI). The data are compared with results of the Contrail Cirrus Prediction (CoCiP) model. The observations confirm that the number of ice particles in contrails is controlled by the engine exhaust and the formation process in the jet phase, with some particle losses in the wake vortex phase, followed later by weak decreases with time. Contrail cross sections grow more quickly than expected from exhaust dilution. The cross-section-integrated extinction follows an algebraic approximation. The ratio of volume to effective mean radius decreases with time. The ice water content increases with increasing temperature, similar to non-contrail cirrus, while the equivalent relative humidity over ice saturation of the contrail ice mass increases at lower temperatures in the data. Several contrails were observed in warm air above the Schmidt-Appleman threshold temperature. 
The "emission index" of ice particles, i.e., the number of ice particles formed in the young contrail per burnt fuel mass, is estimated from the measured concentrations for estimated dilution; maximum values exceed 10^15 kg^-1. The dependence of the data on the observation methods is discussed. We find no obvious indication of significant contributions from spurious particles resulting from the shattering of ice crystals on the microphysical probes.
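
The order of magnitude of the emission index can be illustrated with back-of-envelope numbers, assuming the simple relation EI = n · D between number concentration n and plume dilution D (volume of plume air per kg of burnt fuel); the specific values below are illustrative assumptions, not measurements from the study:

```python
# Hedged back-of-envelope estimate of the ice-particle "emission index"
# EI (particles per kg of burnt fuel), assuming EI = n * D.

n = 100 * 1e6   # assumed concentration: 100 particles per cm^3, in m^-3
D = 1e7         # assumed dilution: m^3 of plume air per kg of burnt fuel

EI = n * D      # particles per kg of burnt fuel
print(f"EI = {EI:.1e} kg^-1")
```

With these assumed inputs the estimate lands at 10^15 kg^-1, the order of the maximum values quoted in the abstract.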

    Why Are Outcomes Different for Registry Patients Enrolled Prospectively and Retrospectively? Insights from the Global Anticoagulant Registry in the FIELD-Atrial Fibrillation (GARFIELD-AF).

    Background: Retrospective and prospective observational studies are designed to reflect real-world evidence on clinical practice but can yield conflicting results. The GARFIELD-AF Registry includes both methods of enrolment and allows analysis of the differences in patient characteristics and outcomes that may result. Methods and Results: Patients with atrial fibrillation (AF) and ≥1 risk factor for stroke at diagnosis of AF were recruited either retrospectively (n = 5069) or prospectively (n = 5501) from 19 countries and then followed prospectively. The retrospectively enrolled cohort comprised patients with established AF (for at least 6, and up to 24, months before enrolment), who were identified retrospectively (with baseline and partial follow-up data collected from the medical records) and then followed prospectively for 0-18 months (such that the total time of follow-up was 24 months; data collection between Dec-2009 and Oct-2010). In the prospectively enrolled cohort, patients with newly diagnosed AF (≤6 weeks after diagnosis) were recruited between Mar-2010 and Oct-2011 and were followed for 24 months after enrolment. Differences between the cohorts were observed in clinical characteristics, including type of AF, stroke prevention strategies, and event rates. More patients in the retrospectively identified cohort received vitamin K antagonists (62.1% vs. 53.2%) and fewer received non-vitamin K oral anticoagulants (1.8% vs. 4.2%). All-cause mortality rates per 100 person-years during the prospective follow-up (from the first study visit up to 1 year) were significantly lower in the retrospectively than in the prospectively identified cohort (3.04 [95% CI 2.51 to 3.67] vs. 4.05 [95% CI 3.53 to 4.63]; p = 0.016).
Conclusions: Interpretations of data from registries that aim to evaluate the characteristics and outcomes of patients with AF must take account of differences in registry design and the impact of the recall bias and survivorship bias incurred with retrospective enrolment. Clinical Trial Registration: URL: http://www.clinicaltrials.gov. Unique identifier for GARFIELD-AF: NCT01090362.
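
Rates per 100 person-years with a 95% CI, as quoted above, can be sketched under a Poisson assumption; the counts below are hypothetical, not the registry's data:

```python
import math

# Hypothetical counts for illustration (not GARFIELD-AF data).
deaths = 120
person_years = 3000.0

rate = deaths / person_years * 100  # events per 100 person-years

# Approximate 95% CI on the log scale: rate * exp(+/- 1.96 / sqrt(events)).
half_width = 1.96 / math.sqrt(deaths)
lo = rate * math.exp(-half_width)
hi = rate * math.exp(half_width)
print(f"{rate:.2f} (95% CI {lo:.2f} to {hi:.2f}) per 100 person-years")
```

This log-scale approximation is one common choice; the registry's own analysis may use a different interval method.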

    Search for dark matter produced in association with bottom or top quarks in √s = 13 TeV pp collisions with the ATLAS detector

    A search for weakly interacting massive particle dark matter produced in association with bottom or top quarks is presented. Final states containing third-generation quarks and missing transverse momentum are considered. The analysis uses 36.1 fb−1 of proton–proton collision data recorded by the ATLAS experiment at √s = 13 TeV in 2015 and 2016. No significant excess of events above the estimated backgrounds is observed. The results are interpreted in the framework of simplified models of spin-0 dark-matter mediators. For colour-neutral spin-0 mediators produced in association with top quarks and decaying into a pair of dark-matter particles, mediator masses below 50 GeV are excluded assuming a dark-matter candidate mass of 1 GeV and unitary couplings. For scalar and pseudoscalar mediators produced in association with bottom quarks, the search sets limits on the production cross-section of 300 times the predicted rate for mediators with masses between 10 and 50 GeV and assuming a dark-matter mass of 1 GeV and unitary coupling. Constraints on colour-charged scalar simplified models are also presented. Assuming a dark-matter particle mass of 35 GeV, mediator particles with mass below 1.1 TeV are excluded for couplings yielding a dark-matter relic density consistent with measurements.

    Risk profiles and one-year outcomes of patients with newly diagnosed atrial fibrillation in India: Insights from the GARFIELD-AF Registry.

    BACKGROUND: The Global Anticoagulant Registry in the FIELD-Atrial Fibrillation (GARFIELD-AF) is an ongoing prospective noninterventional registry, which is providing important information on the baseline characteristics, treatment patterns, and 1-year outcomes in patients with newly diagnosed non-valvular atrial fibrillation (NVAF). This report describes data from Indian patients recruited in this registry. METHODS AND RESULTS: A total of 52,014 patients with newly diagnosed AF were enrolled globally; of these, 1388 patients were recruited from 26 sites within India (2012-2016). In India, the mean age was 65.8 years at diagnosis of NVAF. Hypertension was the most prevalent risk factor for AF, present in 68.5% of patients from India and in 76.3% of patients globally (P < 0.001). Diabetes and coronary artery disease (CAD) were prevalent in 36.2% and 28.1% of patients as compared with global prevalence of 22.2% and 21.6%, respectively (P < 0.001 for both). Antiplatelet therapy was the most common antithrombotic treatment in India. With increasing stroke risk, however, patients were more likely to receive oral anticoagulant therapy [mainly vitamin K antagonist (VKA)], but average international normalized ratio (INR) was lower among Indian patients [median INR value 1.6 (interquartile range {IQR}: 1.3-2.3) versus 2.3 (IQR 1.8-2.8) (P < 0.001)]. Compared with other countries, patients from India had markedly higher rates of all-cause mortality [7.68 per 100 person-years (95% confidence interval 6.32-9.35) vs 4.34 (4.16-4.53), P < 0.0001], while rates of stroke/systemic embolism and major bleeding were lower after 1 year of follow-up. CONCLUSION: Compared to previously published registries from India, the GARFIELD-AF registry describes clinical profiles and outcomes in Indian patients with AF of a different etiology. The registry data show that compared to the rest of the world, Indian AF patients are younger in age and have more diabetes and CAD. 
Patients with a higher stroke risk are more likely to receive anticoagulation therapy with VKA but are underdosed compared with the global average in GARFIELD-AF. CLINICAL TRIAL REGISTRATION: URL: http://www.clinicaltrials.gov. Unique identifier: NCT01090362.

    Improved risk stratification of patients with atrial fibrillation: an integrated GARFIELD-AF tool for the prediction of mortality, stroke and bleed in patients with and without anticoagulation.

    OBJECTIVES: To provide an accurate, web-based tool for stratifying patients with atrial fibrillation to facilitate decisions on the potential benefits/risks of anticoagulation, based on mortality, stroke and bleeding risks. DESIGN: The new tool was developed, using stepwise regression, for all patients and then applied to lower-risk patients. C-statistics were compared with CHA2DS2-VASc using 30-fold cross-validation to control for overfitting. External validation was undertaken in an independent dataset, the Outcome Registry for Better Informed Treatment of Atrial Fibrillation (ORBIT-AF). PARTICIPANTS: Data from 39 898 patients enrolled in the prospective GARFIELD-AF registry provided the basis for deriving and validating an integrated risk tool to predict stroke risk, mortality and bleeding risk. RESULTS: The discriminatory value of the GARFIELD-AF risk model was superior to CHA2DS2-VASc for patients with or without anticoagulation. C-statistics (95% CI) for all-cause mortality, ischaemic stroke/systemic embolism and haemorrhagic stroke/major bleeding (treated patients) were 0.77 (0.76 to 0.78), 0.69 (0.67 to 0.71) and 0.66 (0.62 to 0.69), respectively, for the GARFIELD-AF risk models, and 0.66 (0.64 to 0.67), 0.64 (0.61 to 0.66) and 0.64 (0.61 to 0.68), respectively, for CHA2DS2-VASc (or HAS-BLED for bleeding). In very low to low risk patients (CHA2DS2-VASc 0 or 1 (men) and 1 or 2 (women)), the CHA2DS2-VASc and HAS-BLED (for bleeding) scores offered weak discriminatory value for mortality, stroke/systemic embolism and major bleeding. C-statistics for the GARFIELD-AF risk tool were 0.69 (0.64 to 0.75), 0.65 (0.56 to 0.73) and 0.60 (0.47 to 0.73) for each endpoint, respectively, versus 0.50 (0.45 to 0.55), 0.59 (0.50 to 0.67) and 0.55 (0.53 to 0.56) for CHA2DS2-VASc (or HAS-BLED for bleeding).
Upon validation in the ORBIT-AF population, C-statistics showed that the GARFIELD-AF risk tool was effective for predicting 1-year all-cause mortality using both the full and the simplified model (C-statistics 0.75 (0.73 to 0.77) for each) and for predicting any stroke or systemic embolism over 1 year (C-statistic 0.68 (0.62 to 0.74)). CONCLUSIONS: Performance of the GARFIELD-AF risk tool was superior to CHA2DS2-VASc in predicting stroke and mortality and superior to HAS-BLED for bleeding, overall and in lower-risk patients. The GARFIELD-AF tool has the potential for incorporation into routine electronic systems and, for the first time, permits simultaneous evaluation of ischaemic stroke, mortality and bleeding risks. CLINICAL TRIAL REGISTRATION: URL: http://www.clinicaltrials.gov. Unique identifiers: GARFIELD-AF (NCT01090362) and ORBIT-AF (NCT01165710).
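
The C-statistic used throughout is the concordance index: the probability that a randomly chosen patient who had the event was assigned a higher predicted risk than a randomly chosen patient who did not. A standard-library sketch with hypothetical risks and outcomes (not registry data):

```python
def c_statistic(risks, events):
    """Concordance index: fraction of (event, non-event) pairs in which
    the event patient has the higher predicted risk; ties count 0.5."""
    pos = [r for r, e in zip(risks, events) if e == 1]
    neg = [r for r, e in zip(risks, events) if e == 0]
    pairs = [(p, n) for p in pos for n in neg]
    score = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
    return score / len(pairs)

# Hypothetical predicted risks and observed outcomes.
risks = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
events = [1, 1, 0, 1, 0, 0]
print(round(c_statistic(risks, events), 3))
```

A value of 0.5 corresponds to no discrimination (as for CHA2DS2-VASc mortality prediction in the low-risk subgroup above) and 1.0 to perfect discrimination.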