    Effectiveness of digital-based interventions for children with mathematical learning difficulties: A meta-analysis

    The purpose of this work was to meta-analyze empirical evidence about the effectiveness of digital-based interventions for students with mathematical learning difficulties. Furthermore, we investigated whether the school level of the participants and the software instructional approach were decisive moderating factors. A systematic search of randomized controlled studies published between 2003 and 2019 was conducted. A total of 15 studies with 1073 participants met the study selection criteria. A random-effects meta-analysis indicated that digital-based interventions generally improved mathematical performance (mean ES = 0.55), though there was significant heterogeneity across studies. There was no evidence that videogames offer additional advantages over digital-based drilling and tutoring approaches. Moreover, the effect size was not moderated by whether interventions were delivered in primary school or in preschool.
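
    For readers unfamiliar with this kind of analysis, the sketch below shows how a random-effects meta-analysis with a moderator test can be run in R with the metafor package. The effect sizes, sampling variances, and the instructional-approach moderator are invented for illustration; they are not the data of the studies included in the paper.

        # Hypothetical random-effects meta-analysis sketch (not the paper's data).
        library(metafor)

        dat <- data.frame(
          yi = c(0.62, 0.41, 0.80, 0.35, 0.55),       # standardized effect per study
          vi = c(0.050, 0.030, 0.080, 0.040, 0.060),  # sampling variances
          approach = c("game", "drill", "game", "drill", "tutor")
        )

        res <- rma(yi, vi, data = dat, method = "REML")  # pooled ES, Q test, tau^2
        summary(res)

        # Moderator analysis: does instructional approach explain heterogeneity?
        rma(yi, vi, mods = ~ approach, data = dat, method = "REML")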

    Proprioceptive accuracy in Immersive Virtual Reality: A developmental perspective

    Proprioceptive development relies on a variety of sensory inputs, among which vision is hugely dominant. Focusing on the developmental trajectory underpinning the integration of vision and proprioception, the present research explores how this integration is involved in interactions with Immersive Virtual Reality (IVR) by examining how proprioceptive accuracy is affected by Age, Perception, and Environment. Individuals from 4 to 43 years old completed a self-turning task that asked them to manually return to a previous location with different sensory modalities available, in both IVR and reality. Results were interpreted from an exploratory perspective using Bayesian model comparison analysis, which allows the phenomena to be described using probabilistic statements rather than simplified reject/not-reject decisions. The most plausible model showed that 4- to 8-year-old children can generally be expected to make more proprioceptive errors than older children and adults. Across age groups, proprioceptive accuracy is higher when vision is available, and is disrupted in the visual environment provided by the IVR headset. We can conclude that proprioceptive accuracy mostly develops during the first eight years of life and that it relies largely on vision. Moreover, our findings indicate that this proprioceptive accuracy can be disrupted by the use of an IVR headset.
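
    The paper's Bayesian model comparison was considerably richer than this, but the core idea can be sketched with a simple BIC-based Bayes factor approximation (Wagenmakers, 2007), comparing a null model against an age-effect model. The data below are simulated placeholders, not the study's measurements.

        # Sketch: BIC-approximated Bayes factor, BF10 ~ exp((BIC0 - BIC1) / 2).
        # Simulated data: proprioceptive error shrinks with age up to ~8 years.
        set.seed(1)
        n   <- 120
        age <- runif(n, 4, 43)
        err <- 12 - 0.8 * pmin(age, 8) + rnorm(n, sd = 2)

        m0 <- lm(err ~ 1)    # null model: no age effect
        m1 <- lm(err ~ age)  # alternative: linear age effect
        exp((BIC(m0) - BIC(m1)) / 2)  # >1 favours the age model

    Unlike a p-value, a Bayes factor expresses the relative plausibility of the two models given the data, which is what permits the probabilistic statements mentioned in the abstract.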

    COVID-19 in rheumatic diseases in Italy: first results from the Italian registry of the Italian Society for Rheumatology (CONTROL-19)

    OBJECTIVES: Italy was one of the first countries significantly affected by the coronavirus disease 2019 (COVID-19) epidemic. The Italian Society for Rheumatology promptly launched a retrospective and anonymised data collection to monitor COVID-19 in patients with rheumatic and musculoskeletal diseases (RMDs): the CONTROL-19 surveillance database, which is part of the COVID-19 Global Rheumatology Alliance. METHODS: CONTROL-19 includes patients with RMDs and proven severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection, updated until May 3rd 2020. In this analysis, only molecular diagnoses were included. The data collection covered demographic data, medical history (general and RMD-related), treatments, COVID-19-related features, and outcome. In this paper, we report the first descriptive data from the CONTROL-19 registry. RESULTS: The population of the first 232 patients (36% males) consisted mainly of elderly patients (mean age 62.2 years) who used corticosteroids (51.7%) and suffered from multi-morbidity (median 2 comorbidities). Rheumatoid arthritis was the most frequent disease (34.1%), followed by spondyloarthritis (26.3%), connective tissue disease (21.1%) and vasculitis (11.2%). Most cases had active disease (69.4%). Clinical presentation of COVID-19 was typical, with systemic symptoms (fever and asthenia) and respiratory symptoms. The overall outcome was severe, with high frequencies of hospitalisation (69.8%), respiratory support (oxygen 55.7%, non-invasive ventilation 20.9%, mechanical ventilation 7.5%) and death (19%). Male patients typically had a worse prognosis. Immunomodulatory treatments were not significantly associated with an increased risk of intensive care unit admission, mechanical ventilation or death. CONCLUSIONS: Although the report mainly includes the most severe cases, its temporal and spatial trend supports the validity of the national surveillance system. More complete data are being acquired in order both to test the hypothesis that RMD patients may have a different outcome from that of the general population and to determine the safety of immunomodulatory treatments.

    Formalization of Research Hypotheses in Psychology: Design Analysis and Model Comparison

    The evaluation of research and theoretical hypotheses is one of the principal goals of empirical research. When conducting a study, researchers usually have expectations, based on hypotheses or theoretical perspectives, that they want to evaluate against the observed data. Different statistical approaches have been developed for this purpose, such as Null Hypothesis Significance Testing (NHST). In psychology, NHST is the dominant statistical approach for evaluating research hypotheses. However, NHST does not allow researchers to answer the question they are usually interested in: it does not quantify the evidence in favour of a hypothesis, but only the evidence against the null hypothesis. This can easily lead to misinterpretation of results, which, together with the mindless and mechanical application of NHST, is considered one of the causes of the ongoing replicability crisis.

    In the first part of the thesis, we introduce the Design Analysis framework, which allows us to evaluate the inferential risks related to effect size estimation when selecting for significance. In the case of underpowered studies evaluating complex multivariate phenomena with noisy data (all very common conditions in psychology), selecting for significance can easily lead to misleading and unreliable results. This aspect is often neglected in traditional power analysis; Design Analysis, instead, highlights this relevant issue. In the second part of the thesis, we move from NHST to the model comparison approach, which allows us to properly evaluate the relative evidence in favour of a hypothesis given the data. First, research hypotheses are formalized as different statistical models; these are then evaluated according to different possible criteria. We consider information criteria and the Bayes Factor with encompassing prior. Information criteria assess models' predictive ability while penalizing for model complexity. The Bayes Factor with encompassing prior, instead, allows researchers to easily evaluate informative hypotheses with equality and inequality constraints on the model parameters.
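
    As a concrete illustration of the model comparison workflow described in the second part, two competing research hypotheses can be formalized as regression models and compared with information criteria. This is only a minimal sketch on simulated data, not an analysis from the thesis.

        # Two hypotheses formalized as models and compared with AIC/BIC.
        set.seed(2)
        n  <- 80
        x1 <- rnorm(n); x2 <- rnorm(n)
        y  <- 0.4 * x1 + rnorm(n)        # only x1 truly matters here

        h1 <- lm(y ~ x1)                 # hypothesis 1: x1 predicts y
        h2 <- lm(y ~ x1 + x2)            # hypothesis 2: x1 and x2 both predict y

        AIC(h1, h2)  # lower value = better expected predictive accuracy,
        BIC(h1, h2)  # after penalizing for model complexity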

    Designing Studies and Evaluating Research Results: Type M and Type S Errors for Pearson Correlation Coefficient

    It is widely appreciated that many studies in psychological science suffer from low statistical power. One of the consequences of analyzing underpowered studies with thresholds of statistical significance is a high risk of finding exaggerated effect size estimates, in the right or the wrong direction. These inferential risks can be directly quantified in terms of Type M (magnitude) error and Type S (sign) error, which directly communicate the consequences of design choices on effect size estimation. Given a study design, Type M error is the factor by which a statistically significant effect is on average exaggerated. Type S error is the probability of finding a statistically significant result in the opposite direction to the plausible one. Ideally, these errors should be considered during a prospective design analysis in the design phase of a study to determine the appropriate sample size. However, they can also be considered when evaluating studies' results in a retrospective design analysis. In the present contribution, we aim to facilitate consideration of these errors in psychological research practice. For this reason, we illustrate how to consider Type M and Type S errors in a design analysis using one of the most common effect size measures in psychology: the Pearson correlation coefficient. We provide various examples and make the R functions freely available to enable researchers to perform design analysis for their research projects.
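
    The paper's own R functions are not reproduced here, but the two error definitions can be conveyed with a short stand-in simulation: generate many studies with a small true correlation, keep only the statistically significant ones, and inspect their average exaggeration and sign.

        # Hypothetical simulation of Type M and Type S errors for a Pearson
        # correlation (not the paper's published functions).
        set.seed(3)
        rho <- 0.15; n <- 30; nsim <- 10000; alpha <- 0.05
        r_sig <- replicate(nsim, {
          x <- rnorm(n)
          y <- rho * x + sqrt(1 - rho^2) * rnorm(n)  # cor(x, y) = rho
          ct <- cor.test(x, y)
          if (ct$p.value < alpha) unname(ct$estimate) else NA
        })
        r_sig <- r_sig[!is.na(r_sig)]

        mean(abs(r_sig)) / rho  # Type M: average exaggeration factor
        mean(r_sig < 0)         # Type S: P(significant result has wrong sign)

    With a true correlation of .15 and n = 30, the significant estimates overshoot the true effect by a factor well above 2, which is exactly the risk a prospective design analysis is meant to expose.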

    Effects of digital games on student motivation in mathematics: A meta-analysis in K-12

    Background: Motivation is an important factor in the learning process, and supporting students' motivation in mathematics is a significant challenge for educators. Educational technologies, such as digital games, offer potential for engagement in mathematics learning activities. Objectives: To counteract the general decline in student motivation in mathematics, a multilevel meta-analysis was carried out to synthesize the results of studies concerning the impact of digital games on K-12 student motivation in mathematics. Methods: A standardized measure of effect size (dppc2) for pre-post control group designs was used, and different sources of dependency among the effects were taken into account. Moreover, through meta-regressions, we examined whether specific characteristics of the participants, interventions and outcomes were associated with effect size differences across the studies. Results and Conclusions: A total of 20 primary studies (43 effect sizes) meeting the eligibility criteria were included. Results showed a significant overall effect (dppc2 = 0.27, 95% CI [0.14, 0.41]) and substantial heterogeneity between studies. Moderator analyses showed differences in effect size associated with the duration of the intervention and with the motivational construct in terms of expectancy and value. Implications: Overall, the findings indicate that digital games are effective tools compared to conventional teaching practices. The results are promising and could be useful for the design of digital educational interventions aimed at promoting motivation in mathematics.
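
    The dppc2 index mentioned above is the pre-post-control effect size of Morris (2008): the pre-to-post change in the intervention group minus the change in the control group, scaled by the pooled pretest standard deviation and bias-corrected. A small R function makes the formula concrete; the input summaries are invented, not taken from the included studies.

        # dppc2 (Morris, 2008) from group summary statistics.
        dppc2 <- function(m_t_pre, m_t_post, m_c_pre, m_c_post,
                          sd_t_pre, sd_c_pre, n_t, n_c) {
          sd_pool <- sqrt(((n_t - 1) * sd_t_pre^2 + (n_c - 1) * sd_c_pre^2) /
                          (n_t + n_c - 2))            # pooled pretest SD
          cp <- 1 - 3 / (4 * (n_t + n_c - 2) - 1)     # small-sample bias correction
          cp * ((m_t_post - m_t_pre) - (m_c_post - m_c_pre)) / sd_pool
        }

        # Hypothetical study: game group gains 8 points, controls gain 3.
        dppc2(m_t_pre = 50, m_t_post = 58, m_c_pre = 51, m_c_post = 54,
              sd_t_pre = 10, sd_c_pre = 9, n_t = 35, n_c = 33)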

    Incorporating Expert Knowledge in Structural Equation Models: Applications in Psychological Research

    Structural Equation Modeling (SEM) is used in psychology to model complex structures of data. However, sample sizes often cannot be as large as would be ideal for SEM, leading to a problem of insufficient power. Bayesian estimation with informed priors can be beneficial in this context. Our simulation study examines this issue using a realistic mediation model as a test case. Parameter recovery, power, and coverage were considered. The advantage of the Bayesian approach was evident for the smallest effects. The correct formalization of theoretical expectations is crucial, and it allows for increased collaboration between researchers in psychology and statistics.
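
    The mechanism at work, prior information stabilizing a noisy small-sample estimate, can be sketched outside full Bayesian SEM with a normal-approximation shortcut: the likelihood estimate of a structural path and an expert prior are combined by precision weighting. The prior values below are illustrative assumptions, not elicited expert knowledge from the study.

        # Informed prior + small-sample estimate, combined by precision weighting
        # (a conjugate normal-normal shortcut, not full Bayesian SEM).
        set.seed(4)
        n <- 25                              # deliberately small sample
        x <- rnorm(n)
        m <- 0.30 * x + rnorm(n)             # mediator; true a-path = 0.30
        fit    <- lm(m ~ x)
        b_hat  <- unname(coef(fit)["x"])     # noisy ML estimate of the path
        se_hat <- summary(fit)$coefficients["x", "Std. Error"]

        prior_mean <- 0.30; prior_sd <- 0.10 # hypothetical expert expectation
        w <- (1 / se_hat^2) / (1 / se_hat^2 + 1 / prior_sd^2)
        c(ml = b_hat, posterior = w * b_hat + (1 - w) * prior_mean)

    The posterior estimate is pulled toward the expert expectation in proportion to how noisy the data are, which is consistent with the study's finding that the Bayesian advantage was clearest for the smallest effects.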

    Evaluating Informative Hypotheses with Equality and Inequality Constraints: A Tutorial Using the Bayes Factor via the Encompassing Prior Approach

    When conducting a study, researchers usually have expectations based on hypotheses or theoretical perspectives they want to evaluate. Equality and inequality constraints on the model parameters are used to formalize researchers' expectations or theoretical perspectives into so-called informative hypotheses. However, traditional statistical approaches, such as Null Hypothesis Significance Testing (NHST) or model comparison using information criteria (e.g., AIC and BIC), are unsuitable for testing complex informative hypotheses. An alternative approach is to use the Bayes factor. In particular, the Bayes factor based on the encompassing prior approach allows researchers to easily evaluate complex informative hypotheses in a wide range of statistical models (e.g., generalized linear models). This paper provides a detailed introduction to the Bayes factor with encompassing prior. First, all steps and elements involved in the formalization of informative hypotheses and the computation of the Bayes factor with encompassing prior are described. Next, we apply this method to a real case scenario based on attachment theory. Specifically, we analyzed the relative influence of maternal and paternal attachment on children's social-emotional development by comparing the various theoretical perspectives debated in the literature.
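
    The encompassing prior idea reduces to a ratio of two proportions: how often the constraint holds among posterior draws of the unconstrained (encompassing) model versus among its prior draws. A minimal Monte Carlo sketch for the ordered hypothesis mu_A > mu_B > mu_C is given below, using large-sample normal approximations and simulated data rather than the attachment data analyzed in the paper.

        # Encompassing prior Bayes factor, BF = f / c, where f and c are the
        # proportions of posterior and prior draws satisfying the constraint.
        set.seed(5)
        g <- list(A = rnorm(30, 1.0), B = rnorm(30, 0.5), C = rnorm(30, 0.0))
        nsim <- 1e5

        # Posterior draws per group mean (large-sample normal approximation)
        post <- sapply(g, function(x) rnorm(nsim, mean(x), sd(x) / sqrt(length(x))))
        f <- mean(post[, "A"] > post[, "B"] & post[, "B"] > post[, "C"])

        # Prior draws: identical vague priors, so c is 1/3! = 1/6 by symmetry
        prior <- matrix(rnorm(nsim * 3, 0, 10), ncol = 3)
        c0 <- mean(prior[, 1] > prior[, 2] & prior[, 2] > prior[, 3])

        f / c0  # BF > 1: the data support the ordered informative hypothesis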