Randomization tests for ABAB designs: Comparing data-division-specific and common distributions
Monte Carlo simulations were used to generate data for ABAB designs of different lengths. The points of change in phase are randomly determined before gathering behaviour measurements, which allows the use of a randomization test as an analytic technique. Data simulation and analysis can be based either on data-division-specific or on common distributions. Following one method or another affects the results obtained after the randomization test has been applied. Therefore, the goal of the study was to examine these effects in more detail. The discrepancies in these approaches are obvious when data with zero treatment effect are considered and such approaches have implications for statistical power studies. Data-division-specific distributions provide more detailed information about the performance of the statistical technique
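The procedure described above can be sketched in a few lines of Python. This is not the authors' simulation code; the function names, the toy series, and the mean-difference statistic are illustrative assumptions. All admissible triples of phase-change points are enumerated, a statistic is computed for each resulting data division, and the p-value is the proportion of divisions whose statistic is at least as extreme as the one for the actually assigned change points.

```python
from itertools import combinations

def abab_divisions(n, min_len=3):
    """All admissible triples of phase-change points (c1, c2, c3) for an
    ABAB design of length n, with every phase at least min_len observations
    long. Phases are A = [0, c1) and [c2, c3), B = [c1, c2) and [c3, n)."""
    return [
        (c1, c2, c3)
        for c1, c2, c3 in combinations(range(min_len, n), 3)
        if c2 - c1 >= min_len and c3 - c2 >= min_len and n - c3 >= min_len
    ]

def phase_mean_difference(data, division):
    """|mean(B phases) - mean(A phases)| for one data division."""
    c1, c2, c3 = division
    a = data[:c1] + data[c2:c3]   # both A phases
    b = data[c1:c2] + data[c3:]   # both B phases
    return abs(sum(b) / len(b) - sum(a) / len(a))

# Toy series; suppose the randomly assigned change points were (5, 10, 15).
series = [3, 4, 3, 5, 4, 7, 8, 7, 9, 8, 4, 3, 4, 5, 4, 8, 9, 8, 7, 9]
divisions = abab_divisions(len(series))
observed = phase_mean_difference(series, (5, 10, 15))
p_value = sum(phase_mean_difference(series, d) >= observed
              for d in divisions) / len(divisions)
```

Under a data-division-specific approach, the values of the statistic would be tabulated separately for each admissible division rather than pooled into a single common distribution, which is the distinction the study examines.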
Assigning and combining probabilities in single-case designs: A second study
The present study builds on a previous proposal for assigning probabilities to the outcomes computed using different primary indicators in single-case studies. These probabilities are obtained by comparing the outcome to previously tabulated reference values, and they reflect the likelihood of the results if there were no intervention effect. The current study explores how well different metrics are translated into p values in the context of simulation data. Furthermore, two published multiple-baseline data sets are used to illustrate how well the probabilities reflect the intervention effectiveness as assessed by the original authors. Finally, the importance of which primary indicator is used in each data set to be integrated is explored; two ways of combining probabilities are used: a weighted average and a binomial test. The results indicate that the translation into p values works well for the two nonoverlap procedures, with the results for the regression-based procedure diverging due to some undesirable features of its performance. These p values, both individually and combined, were well aligned with the effectiveness judgments for the real-life data. The results suggest that assigning probabilities can be useful for translating the primary measure into a common metric and for using these probabilities as additional evidence on the importance of behavioral change, complementing visual analysis and professionals' judgments
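The two combination methods named above can be sketched as follows. This is a rough Python illustration, not the authors' specification: the significance threshold, the weights, and the example p-values are all invented for demonstration.

```python
from math import comb

def binomial_combination(p_values, alpha=0.05):
    """Binomial test for combining per-study p-values: count how many
    individual p-values fall at or below alpha, then compute the
    probability of at least that many 'significant' results out of
    len(p_values) when every null hypothesis is true."""
    k = len(p_values)
    r = sum(p <= alpha for p in p_values)
    return sum(comb(k, i) * alpha**i * (1 - alpha)**(k - i)
               for i in range(r, k + 1))

def weighted_average_combination(p_values, weights):
    """Weighted average of p-values; using series lengths as weights
    is an illustrative choice, not the authors' specification."""
    return sum(w * p for w, p in zip(weights, p_values)) / sum(weights)

# Hypothetical per-study p-values from four single-case studies
combined = binomial_combination([0.02, 0.04, 0.30, 0.01])
```

With three of four studies significant at the 0.05 level, the binomial tail probability is very small, signalling a consistent effect across studies even though the fourth study alone would not.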
Research techniques: applications of probability models & descriptive statistics
Apart from working with this document, we suggest that the recommended readings on statistical content and the R software be consulted. Moreover, attending the classes is also useful for learning and for discussing any doubts about the content. The current document contains a set of applications of discrete and continuous probability models and of univariate and bivariate statistics. The applications are presented in terms of numerical results and graphical representations, as is usually done for statistical content. The plots are enhanced using the capabilities of the R software in order to gain a better understanding of the data and of what is being done. The R code for obtaining both the numerical and the graphical results is provided so that readers can use it in their own analyses beyond this document. Finally, the applications can also be understood as a set of exercises for which the correct answers are provided
Case Study - Bulgaria, Sustainable Agriculture and Soil Conservation (SoCo Project)
This Technical Note 'Case Study - Bulgaria' is part of a series of case studies within the 'Sustainable Agriculture and Soil Conservation' (SoCo) project. Ten case studies were carried out in Belgium, Bulgaria, the Czech Republic, Denmark, France, Germany, Greece, Italy, Spain and the United Kingdom between spring and summer 2008. The selection of case study areas was designed to capture differences in soil degradation processes, soil types, climatic conditions, farm structures and farming practices, institutional settings and policy priorities. A harmonised methodological approach was pursued in order to gather insights from a range of contrasting conditions over a geographically diverse area. The case studies were carried out by local experts to reflect the specificities of the selected case studies. JRC.DDG.J.5-Agriculture and Life Sciences in the Economy
How can single-case data be analyzed? Software resources, tutorial, and reflections on analysis
The present article aims to present a series of software developments in the quantitative analysis of data obtained via single-case experimental designs (SCEDs), as well as the tutorial describing these developments. The tutorial focuses on software implementations based on freely available platforms such as R and aims to bring statistical advances closer to applied researchers and help them become autonomous agents in the data analysis stage of a study. The range of analyses dealt with in the tutorial is illustrated on a typical single-case dataset, relying heavily on graphical data representations. We illustrate how visual and quantitative analyses can be used jointly, giving complementary information and helping the researcher decide whether there is an intervention effect, how large it is, and whether it is practically significant. To help applied researchers in the use of the analyses, we have organized the data in the different ways required by the different analytical procedures and made these data available online. We also provide Internet links to all free software available, as well as all the main references to the analytical techniques. Finally, we suggest that appropriate and informative data analysis is likely to be a step forward in documenting and communicating results and also for increasing the scientific credibility of SCEDs
Assigning and combining probabilities in single-case studies
There is currently a considerable diversity of quantitative measures available for summarizing the results in single-case studies. Given that the interpretation of some of them is difficult due to the lack of established benchmarks, the current paper proposes an approach for obtaining further numerical evidence on the importance of the results, complementing the substantive criteria, visual analysis, and primary summary measures. This additional evidence consists of obtaining the statistical significance of the outcome when referred to the corresponding sampling distribution. This sampling distribution is formed by the values of the outcomes (expressed as data nonoverlap, R-squared, etc.) in case the intervention is ineffective. The approach proposed here is intended to offer the outcome's probability of being as extreme when there is no treatment effect, without the need for assumptions that cannot be checked with guarantees. Following this approach, researchers would compare their outcomes to reference values rather than constructing the sampling distributions themselves. The integration of single-case studies is problematic when different metrics are used across primary studies and not all raw data are available. Via the approach for assigning p values, it is possible to combine the results of similar studies regardless of the primary effect size indicator. The alternatives for combining probabilities are discussed in the context of single-case studies, pointing out two potentially useful methods: one based on a weighted average and the other on the binomial test
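The idea of referring an outcome to its sampling distribution under no intervention effect can be sketched as follows. This is a Monte Carlo illustration with an i.i.d. standard normal null model and a NAP-style nonoverlap index; the function names and null model are assumptions for demonstration, whereas the authors' tabulated reference values rest on their own data-generation settings.

```python
import random

def nonoverlap(a_phase, b_phase):
    """NAP-style nonoverlap index: the proportion of (A, B) pairs in
    which the B value exceeds the A value, counting ties as one half."""
    pairs = [(a, b) for a in a_phase for b in b_phase]
    wins = sum(1.0 if b > a else 0.5 if b == a else 0.0 for a, b in pairs)
    return wins / len(pairs)

def null_p_value(a_phase, b_phase, reps=5000, seed=1):
    """Build the sampling distribution of the nonoverlap index for series
    with no intervention effect (here i.i.d. normal noise, an illustrative
    null model) and return the observed index together with the proportion
    of null outcomes at least as large."""
    rng = random.Random(seed)
    observed = nonoverlap(a_phase, b_phase)
    n_a, n_b = len(a_phase), len(b_phase)
    count = 0
    for _ in range(reps):
        series = [rng.gauss(0, 1) for _ in range(n_a + n_b)]
        count += nonoverlap(series[:n_a], series[n_a:]) >= observed
    return observed, count / reps

# Complete nonoverlap between a toy baseline and intervention phase
observed, p = null_p_value([2, 3, 2, 4, 3], [7, 8, 9, 7, 8])
```

In practice a researcher would skip the simulation step and compare the observed index directly against the tabulated reference values, which is the point of the proposal.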
Seeing red: Relearning to read in a case of Balint's Syndrome
BACKGROUND AND AIMS: Balint's Syndrome is a rare condition, often associated with hypoxic brain damage. Its major characteristic is an inability to localise objects in space; another is simultanagnosia, frequently resulting in reading difficulties. We present RN, a 37-year-old woman whose major problem with reading was her inability to recognise individual letters correctly in either lower or upper case. We noted, however, that she was better if the letters were shown in red type. The aims were to determine whether RN could relearn the letters of the alphabet, to investigate whether colour affected her ability to learn, and to explore more specifically whether red type also helped her to read words. METHOD: Using a single-case experimental ABA design, we first determined that the optimal font size for RN was 16. In the baseline (A) phase, we assessed her ability to read all lower- and upper-case letters of the alphabet in black ink. In the intervention (B) phase we used font size 16 in red ink and an errorless learning approach to teaching the letters. Sessions ran 5 times per week (20 minutes per session). The intervention was then applied to picture recognition and word reading with four sets of 10 words and corresponding pictures. RESULTS: A consistent difference was noted between the initial baseline and the intervention. Improvement carried over when we returned to baseline. CONCLUSION: Using red type and an errorless learning approach enabled RN to re-learn letters of the alphabet and read words she was previously unable to read. This did not, however, generalise to her everyday life
Retention of principal components for discrete variables (Retención de componentes principales para variables discretas)
The present study discusses retention criteria for principal components analysis (PCA) applied to Likert-scale items typical of psychological questionnaires. The main aim is to recommend that applied researchers refrain from relying only on the eigenvalue-greater-than-one criterion; alternative procedures are suggested for adjusting for sampling error. An additional objective is to add evidence on the consequences of applying this rule when PCA is used with discrete variables. The experimental conditions were studied by means of Monte Carlo sampling, including several sample sizes, different numbers of variables and answer alternatives, and four non-normal distributions. The results suggest that even when all the items, and thus the underlying dimensions, are independent, eigenvalues greater than one are frequent and can explain up to 80% of the variance in the data, meeting the empirical criterion. The consequences of using Kaiser's rule are illustrated with a clinical psychology example. The size of the eigenvalues turned out to be a function of the sample size and the number of variables, which is also the case for parallel analysis, as previous research shows. To enhance the application of alternative criteria, an R package was developed for deciding the number of principal components to retain by means of confidence intervals constructed around the eigenvalues corresponding to a lack of relationship between discrete variables
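The core phenomenon is easy to see in the simplest case of two items: a 2x2 correlation matrix has eigenvalues 1 + |r| and 1 - |r|, so the largest sample eigenvalue exceeds Kaiser's cut-off of 1 whenever the sample correlation differs from zero, even for fully independent items. The sketch below (hypothetical Python code, not the authors' R package) illustrates the confidence-interval idea: a component is retained only if its eigenvalue exceeds an upper quantile of the null distribution obtained under independence.

```python
import random
import statistics

def corr(x, y):
    """Pearson correlation of two equal-length samples."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def largest_eigenvalue(x, y):
    """For two items the correlation matrix eigenvalues are 1 +/- |r|,
    so the largest one exceeds 1 whenever r is not exactly zero."""
    return 1 + abs(corr(x, y))

def null_upper_bound(n, n_levels=5, reps=2000, level=0.95, seed=0):
    """Monte Carlo version of the confidence-interval idea: simulate
    pairs of mutually independent items with uniform Likert responses,
    record the largest eigenvalue each time, and return the `level`
    quantile. A component would be retained only if its observed
    eigenvalue exceeds this null upper bound, not merely 1."""
    rng = random.Random(seed)
    draws = sorted(
        largest_eigenvalue(
            [rng.randint(1, n_levels) for _ in range(n)],
            [rng.randint(1, n_levels) for _ in range(n)],
        )
        for _ in range(reps)
    )
    return draws[int(level * reps)]

# With n = 100 respondents the null upper bound sits well above 1,
# so Kaiser's rule would often retain a component that is pure noise.
bound = null_upper_bound(n=100)
```

As the abstract notes, this bound depends on both the sample size and the number of variables, which is why a fixed cut-off of 1 cannot account for sampling error.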
Single-case experimental designs: Reflections on conduct and analysis
In this editorial discussion we reflect on the issues addressed by, and arising from, the papers in this Special Issue on Single Case Experimental Design (SCED) study methodology. We identify areas of consensus and disagreement regarding the conduct and analysis of SCED studies. Despite the long history of application of SCEDs in studies of interventions in clinical and educational settings, the field is still developing. There is an emerging consensus on methodological quality criteria for many aspects of SCEDs, but disagreement on what are the most appropriate methods of SCED data analysis. Our aim is to stimulate this ongoing debate and highlight issues requiring further attention from applied researchers and methodologists. In addition we offer tentative criteria to support decision making in relation to selection of analytical techniques in SCED studies. Finally, we stress that large-scale interdisciplinary collaborations, such as the current Special Issue, are necessary if SCEDs are going to play a significant role in the development of the evidence base for clinical practice
Random assignment of intervention points in two-phase single-case designs: data-division-specific distributions
The present study explores the statistical properties of a randomization test based on the random assignment of the intervention point in a two-phase (AB) single-case design. The focus is on randomization distributions constructed from the values of the test statistic for all possible random assignments and used to obtain p-values. The shape of those distributions is investigated for each specific data division defined by the moment at which the intervention is introduced. A further aim was to test the detection of nonexistent effects (i.e., the production of false alarms) in autocorrelated data series, in which the assumption of exchangeability between observations may be untenable. In this way, it was possible to compare nominal and empirical Type I error rates in order to obtain evidence on the statistical validity of the randomization test for each individual data division. The results suggest that when either of the two phases has considerably fewer measurement times, Type I errors may be too probable and, hence, the decision-making process carried out by applied researchers may be jeopardized
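A Type I error simulation of this kind can be sketched as follows. This is a hypothetical Python illustration, not the authors' code: the AR(1) parameter, series length, minimum phase length, and number of replicates are arbitrary choices. Because each replicate draws the intervention point at random, the estimated rate is marginal over data divisions and stays near its nominal bound; the study's concern is with the rates conditional on specific divisions, which can deviate from it.

```python
import random

def ab_randomization_p(data, change_point, min_len=3):
    """p-value of a two-phase (AB) randomization test: the observed
    |mean(B) - mean(A)| is referred to the randomization distribution
    built from the same statistic at every admissible intervention
    point (each phase at least min_len observations long)."""
    n = len(data)

    def stat(c):
        a, b = data[:c], data[c:]
        return abs(sum(b) / len(b) - sum(a) / len(a))

    points = range(min_len, n - min_len + 1)
    observed = stat(change_point)
    return sum(stat(c) >= observed for c in points) / len(points)

def type_i_error(n=20, phi=0.4, alpha=0.10, reps=1000, seed=7):
    """Empirical Type I error rate for lag-one autocorrelated (AR(1))
    series with no intervention effect; phi, n, and reps are
    illustrative choices. The intervention point is drawn at random
    among the admissible ones, mirroring the design described above."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(reps):
        y, prev = [], 0.0
        for _ in range(n):
            prev = phi * prev + rng.gauss(0, 1)  # AR(1) innovation
            y.append(prev)
        c = rng.choice(range(3, n - 2))  # admissible points for min_len=3
        rejections += ab_randomization_p(y, c) <= alpha
    return rejections / reps

rate = type_i_error()
```

Conditioning the same simulation on a single fixed change point (e.g. one very close to the start of the series) rather than a random one would give the per-division rates that the study finds can be inflated.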
