Happy to help? A systematic review and meta-analysis of the effects of performing acts of kindness on the well-being of the actor
Do acts of kindness improve the well-being of the actor? Recent advances in the behavioural sciences have provided a number of explanations of human social, cooperative and altruistic behaviour. These theories predict that people will be ‘happy to help’ family, friends, community members, spouses, and even strangers under some conditions. Here we conduct a systematic review and meta-analysis of the experimental evidence that kindness interventions (for example, performing ‘random acts of kindness’) boost subjective well-being. Our initial search of the literature identified 489 articles, of which 24 (27 studies) met the inclusion criteria (total N = 4045). These 27 studies, some of which included multiple control conditions and dependent measures, yielded 52 effect sizes. Multi-level modelling revealed that the overall effect of kindness on the well-being of the actor is small-to-medium (δ = 0.28). The effect was not moderated by sex, age, type of participant, intervention, control condition or outcome measure. There was no indication of publication bias. We discuss the limitations of the current literature, and recommend that future research test more specific theories of kindness: taking kindness-specific individual differences into account; distinguishing between the effects of kindness to specific categories of people; and considering a wider range of proximal and distal outcomes. Such research will advance our understanding of the causes and consequences of kindness, and help practitioners to maximise the effectiveness of kindness interventions to improve well-being.
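The pooling step behind an estimate such as δ = 0.28 can be illustrated with a standard DerSimonian-Laird random-effects model. This is a simplified sketch, not the multi-level model the abstract describes, and the effect sizes and variances below are invented for illustration:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes with a DerSimonian-Laird random-effects model."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                          # fixed-effect (inverse-variance) weights
    theta_fe = np.sum(w * effects) / np.sum(w)   # fixed-effect pooled estimate
    q = np.sum(w * (effects - theta_fe) ** 2)    # Cochran's Q heterogeneity statistic
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                # between-study variance, truncated at 0
    w_re = 1.0 / (variances + tau2)              # random-effects weights
    theta_re = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))             # standard error of the pooled estimate
    return theta_re, se, tau2

# Hypothetical effect sizes and sampling variances, purely illustrative:
theta, se, tau2 = dersimonian_laird([0.2, 0.35, 0.28, 0.15],
                                    [0.02, 0.03, 0.01, 0.05])
```

A multi-level model, as used in the paper, additionally accounts for the dependence of multiple effect sizes nested within the same study; the pooling logic above is the common single-level special case.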
Cytokine concentrations in people with eating disorders: A comprehensive updated systematic review and meta-analysis
Background: Prior research has found altered levels of cytokines in people with eating disorders (EDs). This study is an update of a previous meta-analysis, including longitudinal analyses and machine learning heterogeneity analyses (MetaForest). Methods: This pre-registered (https://osf.io/g6d3f) systematic review and meta-analysis following PRISMA guidelines assessed studies from four databases (PubMed, Web of Science, MEDLINE, PsycINFO) reporting cytokine concentrations in people with EDs until 10th November 2024. Random-effects models were utilised for all meta-analyses. Results: Twenty-four new studies are incorporated, resulting in a total of 43 studies included in meta-analyses. Interleukin (IL)-6 and IL-15 are higher, and IL-7 lower, in anorexia nervosa (AN) compared with controls. When controlling for outliers, tumour necrosis factor (TNF)-α, IL-1β, IL-4, IL-8, IL-10, interferon (IFN)-γ, monocyte chemoattractant protein (MCP) and transforming growth factor (TGF)-β are similar between AN and controls. Longitudinally, IL-6 is lower in AN at follow-up compared to baseline, although this may be an artefact of publication bias. TNF-α and IL-1β do not change longitudinally. There are largely no differences in IL-6 and TNF-α in bulimia nervosa (BN), and there are insufficient studies to perform meta-analyses for binge eating disorder or other EDs. Conclusions: In acute AN, concentrations of IL-6 and IL-15 are elevated and IL-7 is decreased, with preliminary but inconclusive evidence for small decreases in IL-6 over the course of weight restoration. Other cytokines considered to have broadly pro-inflammatory functions are not increased in AN. In BN, there is little evidence for increases in pro-inflammatory cytokines, but the evidence base is limited.
Deep neural networks excel in COVID-19 disease severity prediction - a meta-regression analysis
COVID-19 is a disease in which early prognosis of severity is critical for desired patient outcomes and for the management of limited resources like intensive care unit beds and ventilation equipment. Many prognostic statistical tools have been developed for the prediction of disease severity, but it is still unclear which ones should be used in practice. We aim to guide clinicians in choosing the best available tools for optimal decision-making, to assess the tools' role in resource management, and to ask what the COVID-19 scenario can teach us about developing prediction models for similar medical applications. Using five major medical databases (MEDLINE via PubMed, Embase, Cochrane Library (CENTRAL), Cochrane COVID-19 Study Register, and Scopus), we conducted a comprehensive systematic review of prediction tools published between January 2020 and April 2023 for hospitalized COVID-19 patients. We identified the relevant confounding factors of tool performance using the MetaForest algorithm, and identified the best tools (comparing linear, machine learning, and deep learning methods) with mixed-effects meta-regression models. The risk of bias was evaluated using the PROBAST tool. Our systematic search identified 27,312 studies, of which 290 were eligible for data extraction, reporting on 430 independent evaluations of severity prediction tools with roughly 2.8 million patients. Neural network-based tools had the highest performance, with a pooled AUC of 0.893 (0.748-1.000), a sensitivity of 0.752 (0.614-0.853), and a specificity of 0.914 (0.849-0.952), using clinical, laboratory, and imaging data. The relevant confounders of performance are the geographic region of patients, the rate of severe cases, and the use of C-reactive protein as input data. 88% of studies have a high risk of bias, mostly because of deficiencies in the data analysis.
All investigated tools in use aid decision-making for COVID-19 severity prediction, but machine learning tools, and neural networks in particular, clearly outperform other methods, especially when the basic characteristics of the severe and non-severe patient groups are similar, and they do so without requiring more data. When highly specific biomarkers are not available, as in the case of COVID-19, practitioners should abandon general clinical severity scores and turn to disease-specific machine learning tools.
Placebo response and its predictors in Attention Deficit Hyperactivity Disorder: a meta-analysis and comparison of meta-regression and MetaForest
BACKGROUND: High placebo response in attention deficit hyperactivity disorder (ADHD) can reduce medication-placebo differences, jeopardizing the development of new medicines. This research aims to (1) determine placebo response in ADHD, (2) compare the accuracy of meta-regression and MetaForest in predicting placebo response, and (3) determine the covariates associated with placebo response. METHODS: A systematic review with meta-analysis of randomized, placebo-controlled clinical trials investigating pharmacological interventions for ADHD was performed. Placebo response was defined as the change from baseline in ADHD symptom severity assessed according to the 18-item, clinician-rated, DSM-based rating scale. The effect of study design-, intervention-, and patient-related covariates in predicting placebo response was studied by means of meta-regression and MetaForest. RESULTS: Ninety-four studies including 6614 patients randomized to placebo were analyzed. Overall, placebo response was -8.9 points, representing a 23.1% reduction in the severity of ADHD symptoms. Cross-validated accuracy metrics were R² = 0.0012 and root mean squared error (RMSE) = 3.3219 for meta-regression, and R² = 0.0382 and RMSE = 3.2599 for MetaForest. Placebo response among ADHD patients increased by 63% between 2001 and 2020 and was larger in the United States than in other regions of the world. CONCLUSIONS: Strong placebo response was found in ADHD patients. Both meta-regression and MetaForest showed poor performance in predicting placebo response. ADHD symptom improvement with placebo has markedly increased over the last two decades and is greater in the United States than in the rest of the world.
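The model comparison in this study rests on cross-validated R² and RMSE. The sketch below reproduces that kind of comparison on synthetic data, using scikit-learn's unweighted RandomForestRegressor as a rough stand-in for the R metaforest package (which additionally weights studies by their precision); the covariates, coefficients, and resulting numbers are all invented for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
# Synthetic "studies": 94 rows of study-level covariates (e.g., year, region
# code) and a placebo response centred near -8.9, with a mild nonlinear signal.
X = rng.normal(size=(94, 5))
y = -8.9 + 1.5 * np.tanh(X[:, 0]) + rng.normal(scale=3.0, size=94)

cv = KFold(n_splits=5, shuffle=True, random_state=0)
for name, model in [("meta-regression (linear)", LinearRegression()),
                    ("random forest", RandomForestRegressor(random_state=0))]:
    r2 = cross_val_score(model, X, y, cv=cv, scoring="r2").mean()
    rmse = -cross_val_score(model, X, y, cv=cv,
                            scoring="neg_root_mean_squared_error").mean()
    print(f"{name}: cross-validated R^2 = {r2:.3f}, RMSE = {rmse:.3f}")
```

With mostly noise and little exploitable structure, both models score near-zero cross-validated R², which mirrors the pattern (poor predictive performance for both approaches) reported in the abstract.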
Small Sample Size Solutions: A Guide for Applied Researchers and Practitioners
Researchers often have difficulties collecting enough data to test their hypotheses, either because target groups are small (e.g., patients with severe burn injuries), data are sparse (e.g., rare diseases) or hard to access (e.g., infants of drug-dependent mothers), data collection entails prohibitive costs (e.g., fMRI, measuring phonological difficulties of babies), or the study participants come from a population that is prone to drop-out (e.g., because they are homeless or institutionalized). Such obstacles may result in data sets that are too small for the complexity of the statistical model needed to answer the research question. Researchers could reduce the required sample size for the analysis by simplifying their statistical models. However, this may leave the “true” research questions unanswered. As such, limitations associated with small data sets can restrict the usefulness of the scientific conclusions and might even hamper scientific breakthroughs.
Global systematic review with meta-analysis reveals yield advantage of legume-based rotations and its drivers
Funding Information: This study was funded by the National Natural Science Foundation of China (32101850, H.D.Z.; 32172125, Z.H.Z.), the Young Elite Scientists Sponsorship Program by CAST (2020QNRC001, H.D.Z.), the Joint Funds of the National Natural Science Foundation of China (U21A20218, Z.H.Z.) and the earmarked fund for China Agriculture Research System (CARS-07-B-5, Z.H.Z.). Contributions from Dr. Ji Chen are funded by H2020 Marie Skłodowska-Curie Actions (No. 839806), Aarhus University Research Foundation (AUFF-E-2019-7-1), Danish Independent Research Foundation (1127-00015B), and Nordic Committee of Agriculture and Food Research. We thank the authors whose work is included in this meta-analysis. We also thank Beibei Xin and Zhen Qin for their assistance on high-performance computing and the High-performance Computing Platform of China Agricultural University.
Changes on CRAN
In the past 7 months, 1178 new packages were added to the CRAN package repository. 18 packages were unarchived, 493 archived, and none removed. The following shows the growth of the number of active packages in the CRAN package repository.
Using machine learning to identify important predictors of COVID-19 infection prevention behaviors during the early phase of the pandemic
Before vaccines for coronavirus disease 2019 (COVID-19) became available, a set of infection-prevention behaviors constituted the primary means to mitigate the virus spread. Our study aimed to identify important predictors of this set of behaviors. Whereas social and health psychological theories suggest a limited set of predictors, machine-learning analyses can identify correlates from a larger pool of candidate predictors. We used random forests to rank 115 candidate correlates of infection-prevention behavior in 56,072 participants across 28 countries, surveyed from March to May 2020. The machine-learning model predicted 52% of the variance in infection-prevention behavior in a separate test sample, exceeding the performance of psychological models of health behavior. Results indicated that the two most important predictors were related to individual-level injunctive norms. Illustrating how data-driven methods can complement theory, some of the most important predictors were not derived from theories of health behavior, and some theoretically derived predictors were relatively unimportant.
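Ranking a large pool of candidate predictors with a random forest, as this study does, typically combines a held-out test score with a feature-importance measure. Below is a minimal sketch using scikit-learn's permutation importance, on synthetic data in which only two of ten predictors carry signal (a hypothetical stand-in for the 115 candidate correlates; none of these numbers come from the paper):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Synthetic stand-in: 1,000 respondents, 10 candidate predictors; only the
# first two (think "injunctive norms") actually drive the behaviour score.
X = rng.normal(size=(1000, 10))
y = 0.8 * X[:, 0] + 0.6 * X[:, 1] + rng.normal(scale=0.5, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Permutation importance on the held-out set: how much does shuffling each
# column degrade test performance?
imp = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)
ranking = np.argsort(imp.importances_mean)[::-1]
print("test R^2:", round(rf.score(X_te, y_te), 2))
print("top predictors:", ranking[:2])
```

Evaluating variance explained on a separate test sample, as the abstract reports, guards against the forest's tendency to overfit; permutation importance computed on that same held-out set then gives a ranking that reflects genuine out-of-sample predictive value.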
Small Sample Size Solutions
Researchers often have difficulties collecting enough data to test their hypotheses, either because target groups are small or hard to access, or because data collection entails prohibitive costs. Such obstacles may result in data sets that are too small for the complexity of the statistical model needed to answer the research question. This unique book provides guidelines and tools for implementing solutions to issues that arise in small sample research. Each chapter illustrates statistical methods that allow researchers to apply the optimal statistical model for their research question when the sample is too small. This essential book will enable social and behavioral science researchers to test their hypotheses even when the statistical model required for answering their research question is too complex for the sample sizes they can collect. The statistical models in the book range from the estimation of a population mean to models with latent variables and nested observations, and solutions include both classical and Bayesian methods. All proposed solutions are described in steps researchers can implement with their own data and are accompanied by annotated syntax in R. The methods described in this book will be useful for researchers across the social and behavioral sciences, ranging from medical sciences and epidemiology to psychology, marketing, and economics.
