80 research outputs found

    On the reproducibility of extrusion-based bioprinting: round robin study on standardization in the field

    The outcome of three-dimensional (3D) bioprinting heavily depends, amongst other factors, on the interaction between the developed bioink, the printing process, and the printing equipment. However, if this interplay is ensured, bioprinting promises unmatched possibilities in the health care area. To pave the way for comparing newly developed biomaterials, clinical studies, and medical applications (e.g. printed organs, patient-specific tissues), there is a great need for standardization of manufacturing methods in order to enable technology transfers. Despite the importance of such standardization, there is currently a tremendous lack of empirical data examining the reproducibility and robustness of production in more than one location at a time. In this work, we present data derived from a round robin test of extrusion-based 3D printing performance comprising 12 different academic laboratories throughout Germany and analyze the respective prints using automated image analysis (IA) in three independent academic groups. The fabrication of objects from polymer solutions was standardized as much as currently possible to allow studying the comparability of results from different laboratories. This study has led to the conclusion that current standardization conditions still leave room for the intervention of operators due to missing automation of the equipment. This significantly affects the reproducibility and comparability of bioprinting experiments in multiple laboratories. Nevertheless, automated IA proved to be a suitable methodology for quality assurance, as three independently developed workflows achieved similar results. Moreover, the extracted data describing geometric features showed how the function of the printers affects the quality of the printed object. A significant step toward standardization of the process was made, as an infrastructure for the distribution of material and methods, as well as for data transfer and storage, was successfully established.
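    As an illustration of what such an automated IA workflow might extract, the sketch below scores a printed object's binary mask against its target geometry using intersection-over-union. This is a hypothetical example, not the code of any of the three groups; the function name `fidelity_score` and the toy masks are invented.

```python
# Hypothetical sketch of one image-analysis step: comparing a printed
# object's binary mask against its target geometry to get a simple
# geometric-fidelity score. Not the study's actual workflow.
import numpy as np

def fidelity_score(printed: np.ndarray, target: np.ndarray) -> float:
    """Intersection-over-union of printed vs. target binary masks."""
    printed = printed.astype(bool)
    target = target.astype(bool)
    union = np.logical_or(printed, target).sum()
    if union == 0:
        return 1.0  # both masks empty: trivially identical
    return np.logical_and(printed, target).sum() / union

# toy 5x5 masks: a perfect print scores 1.0, a shifted print scores less
target = np.zeros((5, 5), dtype=bool)
target[1:4, 1:4] = True                 # 3x3 target square
shifted = np.roll(target, 1, axis=1)    # print misaligned by one column
```

    A score of 1.0 indicates pixel-perfect agreement; misalignment or over/under-extrusion lowers it, which makes such a scalar convenient for comparing prints across laboratories.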


    A many-analysts approach to the relation between religiosity and well-being

    The relation between religiosity and well-being is one of the most researched topics in the psychology of religion, yet the directionality and robustness of the effect remain debated. Here, we adopted a many-analysts approach to assess the robustness of this relation based on a new cross-cultural dataset (N=10,535 participants from 24 countries). We recruited 120 analysis teams to investigate (1) whether religious people self-report higher well-being, and (2) whether the relation between religiosity and self-reported well-being depends on perceived cultural norms of religion (i.e., whether it is considered normal and desirable to be religious in a given country). In a two-stage procedure, the teams first created an analysis plan and then executed their planned analysis on the data. For the first research question, all but 3 teams reported positive effect sizes with credible/confidence intervals excluding zero (median reported β=0.120). For the second research question, this was the case for 65% of the teams (median reported β=0.039). While most teams applied (multilevel) linear regression models, there was considerable variability in the choice of items used to construct the independent variables, the dependent variable, and the included covariates.
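    The aggregation of team-level results described above (median reported effect size, share of teams whose interval excludes zero) can be sketched as follows. The numbers are invented for illustration, not data from the study.

```python
# Illustrative aggregation of many-analysts results: each team reports a
# standardized effect size (beta) with a 95% interval; the study then
# summarizes the distribution of those estimates. Made-up numbers.
from statistics import median

# (beta, ci_lower, ci_upper) per analysis team -- hypothetical values
team_results = [
    (0.15, 0.08, 0.22),
    (0.12, 0.05, 0.19),
    (0.10, 0.02, 0.18),
    (-0.01, -0.06, 0.04),  # this interval includes zero
]

median_beta = median(b for b, _, _ in team_results)
n_excluding_zero = sum(1 for _, lo, hi in team_results if lo > 0 or hi < 0)
share_excl_zero = n_excluding_zero / len(team_results)
```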


    Progression of conventional cardiovascular risk factors and vascular disease risk in individuals: insights from the PROG-IMT consortium

    Aims: Averaged measurements, but not the progression based on multiple assessments, of carotid intima-media thickness (cIMT) are predictive of cardiovascular disease (CVD) events in individuals. Whether this is true for conventional risk factors is unclear. Methods and results: An individual participant meta-analysis was used to associate the annualised progression of systolic blood pressure, total cholesterol, low-density lipoprotein cholesterol, and high-density lipoprotein cholesterol with future cardiovascular disease risk in 13 prospective cohort studies of the PROG-IMT collaboration (n = 34,072). Follow-up data included information on a combined cardiovascular disease endpoint of myocardial infarction, stroke, or vascular death. In secondary analyses, annualised progression was replaced with averaged measurements. Log hazard ratios per standard deviation difference were pooled across studies by a random effects meta-analysis. In the primary analysis, the annualised progression of total cholesterol was marginally related to a higher cardiovascular disease risk (hazard ratio (HR) 1.04, 95% confidence interval (CI) 1.00 to 1.07). The annualised progression of systolic blood pressure, low-density lipoprotein cholesterol, and high-density lipoprotein cholesterol was not associated with future cardiovascular disease risk. In the secondary analysis, average systolic blood pressure (HR 1.20, 95% CI 1.11 to 1.29) and low-density lipoprotein cholesterol (HR 1.09, 95% CI 1.02 to 1.16) were related to a greater, while high-density lipoprotein cholesterol (HR 0.92, 95% CI 0.88 to 0.97) was related to a lower, risk of future cardiovascular disease events. Conclusion: Averaged measurements of systolic blood pressure, low-density lipoprotein cholesterol, and high-density lipoprotein cholesterol displayed significant linear relationships with the risk of future cardiovascular disease events. However, there was no clear association between the annualised progression of these conventional risk factors in individuals and the risk of future clinical endpoints.
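    The pooling step used here, log hazard ratios per standard deviation combined across cohorts by a random-effects meta-analysis, can be sketched with the standard DerSimonian-Laird estimator. The inputs below are illustrative values, not PROG-IMT data.

```python
# Minimal DerSimonian-Laird random-effects pooling of per-study log
# hazard ratios, as in the meta-analysis described above. The numbers
# are invented for illustration.
import math

log_hr = [0.05, 0.02, 0.08, 0.03]   # log HR per SD, one per cohort
se = [0.02, 0.03, 0.025, 0.04]      # corresponding standard errors

w = [1 / s**2 for s in se]          # inverse-variance (fixed-effect) weights
fixed = sum(wi * y for wi, y in zip(w, log_hr)) / sum(w)

# DerSimonian-Laird between-study variance tau^2
q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_hr))
df = len(log_hr) - 1
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# random-effects weights incorporate tau^2, then pool
w_re = [1 / (s**2 + tau2) for s in se]
pooled_log_hr = sum(wi * y for wi, y in zip(w_re, log_hr)) / sum(w_re)
pooled_hr = math.exp(pooled_log_hr)  # back-transform to the HR scale
```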

    Predictive value for cardiovascular events of common carotid intima media thickness and its rate of change in individuals at high cardiovascular risk - Results from the PROG-IMT collaboration.

    AIMS: Carotid intima media thickness (CIMT) predicts cardiovascular (CVD) events, but the predictive value of CIMT change is debated. We assessed the relation between CIMT change and events in individuals at high cardiovascular risk. METHODS AND RESULTS: From 31 cohorts with two CIMT scans (total n = 89,070) on average 3.6 years apart and clinical follow-up, subcohorts were drawn: (A) individuals with at least 3 cardiovascular risk factors without previous CVD events, (B) individuals with carotid plaques without previous CVD events, and (C) individuals with previous CVD events. Cox regression models were fit to estimate the hazard ratio (HR) of the combined endpoint (myocardial infarction, stroke or vascular death) per standard deviation (SD) of CIMT change, adjusted for CVD risk factors. These HRs were pooled across studies. In groups A, B and C we observed 3483, 2845 and 1165 endpoint events, respectively. Average common CIMT was 0.79 mm (SD 0.16 mm), and annual common CIMT change was 0.01 mm (SD 0.07 mm), both in group A. The pooled HR per SD of annual common CIMT change (0.02 to 0.43 mm) was 0.99 (95% confidence interval: 0.95-1.02) in group A, 0.98 (0.93-1.04) in group B, and 0.95 (0.89-1.04) in group C. The HR per SD of common CIMT (average of the first and the second CIMT scan, 0.09 to 0.75 mm) was 1.15 (1.07-1.23) in group A, 1.13 (1.05-1.22) in group B, and 1.12 (1.05-1.20) in group C. CONCLUSIONS: We confirm that common CIMT is associated with future CVD events in individuals at high risk. CIMT change does not relate to future event risk in high-risk individuals.

    Automatic identification of variables in epidemiological datasets using logic regression

    Background: For an individual participant data (IPD) meta-analysis, multiple datasets must be transformed into a consistent format, e.g. using uniform variable names. When large numbers of datasets have to be processed, this can be a time-consuming and error-prone task. Automated or semi-automated identification of variables can help to reduce the workload and improve data quality. For semi-automation, high sensitivity in the recognition of matching variables is particularly important, because it allows creating software that, for a target variable, presents a choice of source variables from which a user can choose the matching one, with only a low risk of having missed a correct source variable. Methods: For each variable in a set of target variables, a number of simple rules were manually created. With logic regression, an optimal Boolean combination of these rules was searched for each target variable, using a random subset of a large database of epidemiological and clinical cohort data (construction subset). In a second subset of this database (validation subset), these optimal rule combinations were validated. Results: In the construction sample, 41 target variables were allocated on average with a positive predictive value (PPV) of 34% and a negative predictive value (NPV) of 95%. In the validation sample, PPV was 33%, whereas NPV remained at 94%. In the construction sample, PPV was 50% or less for 63% of all variables; in the validation sample, for 71% of all variables. Conclusions: We demonstrated that the application of logic regression to a complex data management task in large epidemiological IPD meta-analyses is feasible. However, the performance of the algorithm is poor, which may require backup strategies.
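    The performance metrics used here follow directly from a confusion matrix. In the sketch below, the counts are invented, chosen only so that the resulting PPV and NPV match the construction-sample averages quoted above; the helper name `ppv_npv` is likewise hypothetical.

```python
# Positive and negative predictive value from confusion-matrix counts,
# as used to evaluate the variable-matching rules. Counts are invented.
def ppv_npv(tp: int, fp: int, tn: int, fn: int) -> tuple[float, float]:
    """PPV = TP/(TP+FP); NPV = TN/(TN+FN)."""
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    npv = tn / (tn + fn) if (tn + fn) else float("nan")
    return ppv, npv

# counts chosen to reproduce the quoted 34% PPV and 95% NPV
ppv, npv = ppv_npv(tp=34, fp=66, tn=950, fn=50)
```

    The asymmetry between the low PPV and high NPV reflects the design goal stated in the abstract: the rules favour sensitivity, accepting many false positive candidates so that the correct source variable is rarely missed.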

    Medicinal plants – prophylactic and therapeutic options for gastrointestinal and respiratory diseases in calves and piglets? A systematic review


    Measurement of the W boson polarisation in tt̄ events from pp collisions at √s = 8 TeV in the lepton + jets channel with ATLAS
