
    Dietitians’ Attitudes and Understanding of the Promotion of Grains, Whole Grains, and Ultra-Processed Foods

    NOVA is a food-classification system based on four levels of processing, from minimally processed to ultra-processed foods (UPFs). Whole-grain-containing commercial breads and ready-to-eat breakfast cereals are considered ultra-processed within NOVA, despite being considered core foods in the Australian Dietary Guidelines. These food categories contribute the greatest quantities of whole grain in the Australian diet, although consumption is less than half of the 48 g/day target intake. Dietitians are key to disseminating messages about nutrition and health; therefore, an accurate understanding of whole grains and the effects of processing is critical to avoid the unnecessary exclusion of nutritionally beneficial foods. The aim was to utilise an online structured questionnaire to investigate dietitians' attitudes to the promotion of grains and whole grains and to understand their level of knowledge about and attitudes towards NOVA and the classification of specific whole-grain foods. Whole-grain foods were perceived positively and were regularly promoted in dietetic practice (n = 150). The dietitians tended not to consider whole-grain breads and ready-to-eat breakfast cereals as excessively processed, although most generally agreed with the classification system based on the extent of processing. If dietitians intend to incorporate NOVA and concepts of UPFs in their counselling advice, the anomalies regarding the categorisation of whole-grain choices and optimum intakes should be addressed.

    Evaluating Modeling and Validation Strategies for Tooth Loss

    Prediction models learn patterns from available data (training) and are then validated on new data (testing). Prediction modeling is increasingly common in dental research. We aimed to evaluate how different model development and validation steps affect the predictive performance of tooth loss prediction models for patients with periodontitis. Two independent cohorts (627 patients, 11,651 teeth) were followed over a mean ± SD of 18.2 ± 5.6 y (Kiel cohort) and 6.6 ± 2.9 y (Greifswald cohort). Tooth loss and 10 patient- and tooth-level predictors were recorded. The impact of different model development and validation steps was evaluated: 1) model complexity (logistic regression, recursive partitioning, random forest, extreme gradient boosting), 2) sample size (full data set or 10%, 25%, or 75% of cases dropped at random), 3) prediction periods (maximum 10, 15, or 20 y or uncensored), and 4) validation schemes (internal or external by centers/time). Tooth loss was generally a rare event (880 teeth were lost). All models showed limited sensitivity but high specificity. Patients' age and tooth loss at baseline as well as probing pocket depths showed high variable importance. More complex models (random forest, extreme gradient boosting) had no consistent advantages over simpler ones (logistic regression, recursive partitioning). Internal validation (in sample) overestimated the predictive power (area under the curve up to 0.90), while external validation (out of sample) found lower areas under the curve (range 0.62 to 0.82). Reducing the sample size decreased the predictive power, particularly for more complex models. Censoring the prediction period had only limited impact. When the model was trained in one period and tested in another, model outcomes were similar to the base case, indicating that temporal validation is a valid option. No model showed higher accuracy than the no-information rate. In conclusion, none of the developed models would be useful in a clinical setting, despite high accuracy. During modeling, rigorous development and external validation should be applied and reported accordingly.
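    The in-sample versus out-of-sample gap reported above is easy to reproduce in miniature. Below is a minimal sketch using scikit-learn on synthetic data standing in for the Kiel/Greifswald cohorts (all names and numbers are illustrative, not the study's): a simple and a more complex model are scored on the training data and on a held-out block, and the more flexible model typically shows the larger optimism.

```python
# Minimal sketch of internal (in-sample) vs. external (out-of-sample)
# validation for a rare binary outcome; synthetic data, illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Rare event: ~7% positives, loosely mirroring 880 lost teeth of 11,651.
X, y = make_classification(n_samples=11651, n_features=10,
                           weights=[0.93], random_state=0)
# Hold out one block of cases, standing in for validation by center/time.
X_tr, X_te, y_tr, y_te = X[:8000], X[8000:], y[:8000], y[8000:]

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("random forest", RandomForestClassifier(random_state=0))]:
    model.fit(X_tr, y_tr)
    auc_in = roc_auc_score(y_tr, model.predict_proba(X_tr)[:, 1])
    auc_out = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: in-sample AUC {auc_in:.2f}, out-of-sample AUC {auc_out:.2f}")
```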

    Artificial Intelligence for Caries Detection: Value of Data and Information.

    By increasing practitioners' diagnostic accuracy, medical artificial intelligence (AI) may lead to better treatment decisions at lower costs, while uncertainty remains around the resulting cost-effectiveness. In the present study, we assessed how enlarging the data set used for training an AI for caries detection on bitewings affects cost-effectiveness and also determined the value of information gained by reducing the uncertainty around other input parameters (namely, the costs of AI and the population's caries risk profile). We employed a convolutional neural network and trained it on 10%, 25%, 50%, or 100% of a labeled data set containing 29,011 teeth without and 19,760 teeth with caries lesions stemming from bitewing radiographs. We employed an established health economic modeling and analytical framework to quantify cost-effectiveness and value of information. We adopted a mixed public-private payer perspective in German health care; the health outcome was tooth retention years. A Markov model, which followed posterior teeth over the lifetime of an initially 12-y-old individual, and Monte Carlo microsimulations were employed. As the amount of data used to train the AI increased, sensitivity and specificity increased nonlinearly; enlarging the data set from 10% to 25% had the largest impact on accuracy and, consequently, on cost-effectiveness. In the base-case scenario, AI was more effective (tooth retention for a mean [2.5%-97.5%] 62.8 [59.2-65.5] y) and less costly (378 [284-499] euros) than dentists without AI (60.4 [55.8-64.4] y; 419 [270-593] euros), with considerable uncertainty. The economic value of reducing the uncertainty around AI's accuracy or costs was limited, while information on the population's risk profile was more relevant. When developing dental AI, informed choices about the data set size may be recommended, and research toward individualized application of AI for caries detection seems warranted to optimize cost-effectiveness.
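    The headline finding that accuracy gains flatten as the training set grows is the classic learning-curve effect. The sketch below illustrates it with a generic classifier on synthetic data (a stand-in only, not the study's convolutional network or radiograph data): the same model is retrained on 10%, 25%, 50%, and 100% of the training split, and the early increments yield the largest gains.

```python
# Illustrative learning-curve sketch: accuracy gains from more training
# data diminish nonlinearly. Synthetic data; illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20000, n_features=20, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

for frac in (0.10, 0.25, 0.50, 1.00):
    n = int(frac * len(X_tr))  # train on a growing fraction of the data
    clf = GradientBoostingClassifier(random_state=1).fit(X_tr[:n], y_tr[:n])
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"{frac:>5.0%} of training data -> test accuracy {acc:.3f}")
```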

    Interaction of α-synuclein with vesicles that mimic mitochondrial membranes

    α-Synuclein, an intrinsically disordered protein associated with Parkinson's disease, interacts with mitochondria, but the details of this interaction are unknown. We probed the interaction of α-synuclein and its A30P variant with lipid vesicles by using fluorescence anisotropy and 19F nuclear magnetic resonance. Both proteins interact strongly with large unilamellar vesicles whose composition is similar to that of the inner mitochondrial membrane, which contains cardiolipin. However, the proteins have no affinity for vesicles mimicking the outer mitochondrial membrane, which lacks cardiolipin. The 19F data show that the interaction involves α-synuclein's N-terminal region. These data indicate that the middle of the N-terminal region, which contains the KAKEGVVAAAE repeats, is involved in binding, probably via electrostatic interactions between the lysines and cardiolipin. We also found that the strength of α-synuclein binding depends on the nature of the cardiolipin acyl side chains. Eliminating one double bond increases affinity, while complete saturation dramatically decreases affinity. Increasing the temperature increases the binding of the wild-type protein but not the A30P variant. The data are interpreted in terms of the properties of the protein, cardiolipin demixing within the vesicles upon binding of α-synuclein, and packing density. The results advance our understanding of α-synuclein's interaction with mitochondrial membranes.
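    For readers unfamiliar with how anisotropy titrations like these are turned into binding strengths: a common approach (not necessarily the authors' exact analysis) is to fit anisotropy versus lipid concentration to a single-site binding isotherm and extract an apparent dissociation constant. A minimal sketch with SciPy, on made-up numbers, follows.

```python
# Fit fluorescence anisotropy vs. lipid concentration to a single-site
# binding isotherm to extract an apparent Kd. Data below are made up.
import numpy as np
from scipy.optimize import curve_fit

def isotherm(lipid, r_free, r_bound, kd):
    """Anisotropy as a hyperbolic function of lipid concentration."""
    return r_free + (r_bound - r_free) * lipid / (kd + lipid)

lipid_uM = np.array([0, 25, 50, 100, 200, 400, 800])             # titration points
aniso = np.array([0.05, 0.08, 0.10, 0.13, 0.15, 0.17, 0.18])     # illustrative data

popt, pcov = curve_fit(isotherm, lipid_uM, aniso, p0=(0.05, 0.2, 100.0))
r_free, r_bound, kd = popt
print(f"apparent Kd ~ {kd:.0f} uM")
```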

    Benchmarking Deep Learning Models for Tooth Structure Segmentation.

    A wide range of deep learning (DL) architectures with varying depths are available, with developers usually choosing one or a few of them for their specific task in a nonsystematic way. Benchmarking (i.e., the systematic comparison of state-of-the-art architectures on a specific task) may provide guidance in the model development process and may allow developers to make better decisions. However, comprehensive benchmarking has not been performed in dentistry yet. We aimed to benchmark a range of architecture designs for 1 specific, exemplary case: tooth structure segmentation on dental bitewing radiographs. We built 72 models for tooth structure (enamel, dentin, pulp, fillings, crowns) segmentation by combining 6 different DL network architectures (U-Net, U-Net++, Feature Pyramid Networks, LinkNet, Pyramid Scene Parsing Network, Mask Attention Network) with 12 encoders from 3 different encoder families (ResNet, VGG, DenseNet) of varying depth (e.g., VGG13, VGG16, VGG19). On each model design, 3 initialization strategies (ImageNet, CheXpert, random initialization) were applied, resulting in 216 models overall, each trained for up to 200 epochs with the Adam optimizer (learning rate = 0.0001) and a batch size of 32. Our data set consisted of 1,625 human-annotated dental bitewing radiographs. We used a 5-fold cross-validation scheme and quantified model performances primarily by the F1-score. Initialization with ImageNet or CheXpert weights significantly outperformed random initialization (P < 0.05). Deeper and more complex models did not necessarily perform better than less complex alternatives. VGG-based models were more robust across model configurations, while more complex models (e.g., from the ResNet family) achieved peak performances. In conclusion, initializing models with pretrained weights may be recommended when training models for dental radiographic analysis. Less complex model architectures may be competitive alternatives if computational resources and training time are restricting factors. Models developed and found superior on nondental data sets may not show this behavior for dental domain-specific tasks.
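    A benchmarking grid like the one described maps naturally onto the segmentation_models_pytorch library, which provides all six architecture families and ImageNet-pretrained encoders. Below is a minimal sketch of how such a grid can be enumerated; it is an assumption-laden outline, not the study's code (the encoder subset is abbreviated, and the training loop, data loading, and CheXpert weights are omitted).

```python
# Sketch of an architecture-by-encoder-by-initialization benchmarking grid
# using segmentation_models_pytorch; illustrative, not the study's code.
import itertools
import segmentation_models_pytorch as smp

ARCHITECTURES = {
    "U-Net": smp.Unet,
    "U-Net++": smp.UnetPlusPlus,
    "FPN": smp.FPN,
    "LinkNet": smp.Linknet,
    "PSPNet": smp.PSPNet,
    "MA-Net": smp.MAnet,
}
# Subset of the 12 encoders from the 3 families used in the study.
ENCODERS = ["resnet18", "resnet50", "vgg13", "vgg16", "vgg19", "densenet121"]
INITS = ["imagenet", None]  # CheXpert weights would need manual loading

CLASSES = 5  # enamel, dentin, pulp, fillings, crowns

for (arch_name, arch), encoder, weights in itertools.product(
        ARCHITECTURES.items(), ENCODERS, INITS):
    model = arch(encoder_name=encoder, encoder_weights=weights, classes=CLASSES)
    # ... train with Adam (lr=1e-4), batch size 32, up to 200 epochs,
    # and evaluate the F1-score under 5-fold cross-validation ...
    print(arch_name, encoder, weights or "random")
```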

    Living on thin abstractions: more power/economic knowledge

    Debates over the role of knowledge and know-how as key economic assets in the contemporary economy, although far from new, are now increasingly couched in terms of a new-found economic immateriality which allows for their costless reproduction and widespread geographical dissemination. In the rush to tie down and reproduce economic know-how in abstract codifiable form, it has become almost baffling to argue that our stock of economic knowledge may rest upon affects as much as analysis, expressive symbolism as much as abstract symbolism. This paper is an attempt to think through how such 'elusive' economic knowledges may be grasped, yet neither formalized nor codified in abstract terms. It is also a plea to consider the geography of economic knowledge outside of the tacit-explicit distinction.

    Machine Learning for Health: Algorithm Auditing & Quality Control

    Developers proposing new machine learning for health (ML4H) tools often pledge to match or even surpass the performance of existing tools, yet the reality is usually more complicated. Reliable deployment of ML4H in the real world is challenging, as examples from diabetic retinopathy and Covid-19 screening show. We envision an integrated framework of algorithm auditing and quality control that provides a path towards the effective and reliable application of ML systems in healthcare. In this editorial, we give a summary of ongoing work towards that vision and announce a call for participation in the special issue Machine Learning for Health: Algorithm Auditing & Quality Control in this journal to advance the practice of ML4H auditing.

    Uncertain Health Insurance Coverage and Unmet Children’s Health Care Needs

    BACKGROUND AND OBJECTIVES: The State Children's Health Insurance Program (SCHIP) has improved insurance coverage rates. However, children's enrollment status in SCHIP frequently changes, which can leave families with uncertainty about their children's coverage status. We examined whether insurance uncertainty was associated with unmet health care needs. METHODS: We compared self-reported survey data from 2,681 low-income Oregon families to state administrative data and identified children with uncertain coverage. We conducted cross-sectional multivariate analyses using a series of logistic regression models to test the association between uncertain coverage and unmet health care needs. RESULTS: The health insurance status of 13.2% of children was uncertain. After adjustments, children in this uncertain gray zone had higher odds of reporting unmet medical (odds ratio [OR] = 1.73; 95% confidence interval [CI] = 1.07, 2.79), dental (OR = 2.41; 95% CI = 1.63, 3.56), prescription (OR = 1.64; 95% CI = 1.08, 2.48), and counseling needs (OR = 3.52; 95% CI = 1.56, 7.98) when compared with publicly insured children whose parents were certain about their enrollment status. CONCLUSIONS: Uncertainty about children's insurance coverage was associated with higher rates of unmet health care needs. Clinicians and educators can play a role in keeping patients out of insurance gray zones by (1) developing practice interventions to assist families in confirming enrollment and maintaining coverage and (2) advocating for policy changes that minimize insurance enrollment and retention barriers.
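    Adjusted odds ratios with 95% confidence intervals like those above are typically obtained by exponentiating logistic-regression coefficients. The sketch below shows the mechanics with statsmodels on synthetic data; the variable names (uncertain_coverage, unmet_dental, child_age) are hypothetical stand-ins, not the study's actual covariates.

```python
# Deriving an adjusted odds ratio and 95% CI from a logistic regression.
# Synthetic data; variable names are illustrative, not the study's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2681  # matches the survey sample size, for flavor only
df = pd.DataFrame({
    "uncertain_coverage": rng.integers(0, 2, n),
    "child_age": rng.integers(1, 18, n),
})
# Simulate an outcome with a true positive association to recover.
logit_p = -1.5 + 0.55 * df["uncertain_coverage"] + 0.01 * df["child_age"]
df["unmet_dental"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

fit = smf.logit("unmet_dental ~ uncertain_coverage + child_age", df).fit(disp=0)
or_ci = np.exp(fit.conf_int().loc["uncertain_coverage"])  # exp() gives the OR scale
print(f"OR = {np.exp(fit.params['uncertain_coverage']):.2f}, "
      f"95% CI = {or_ci[0]:.2f}, {or_ci[1]:.2f}")
```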