Randomization tests for ABAB designs: Comparing data-division-specific and common distributions
Monte Carlo simulations were used to generate data for ABAB designs of different lengths. The points of change in phase are randomly determined before gathering behaviour measurements, which allows the use of a randomization test as an analytic technique. Data simulation and analysis can be based either on data-division-specific or on common distributions, and following one method or the other affects the results obtained once the randomization test has been applied. Therefore, the goal of the study was to examine these effects in more detail. The discrepancies between the two approaches are evident when data with zero treatment effect are considered, and they have implications for statistical power studies. Data-division-specific distributions provide more detailed information about the performance of the statistical technique.
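As an illustration of the kind of test this abstract describes, the following is a minimal R sketch: it enumerates all admissible placements of the three phase-change points in an ABAB series, computes a B-minus-A mean difference for each, and locates the observed division in that randomization distribution. The data, the minimum phase length, and the choice of test statistic are assumptions made for illustration, not the simulation settings of the study.

    # Sketch of a randomization test for an ABAB design (hypothetical data).
    # Assumes three change points, a minimum phase length, and the
    # difference between B-phase and A-phase means as the test statistic.
    y <- c(5, 6, 5, 9, 10, 9, 10, 5, 4, 6, 9, 11, 10, 9)  # made-up series
    n <- length(y)
    min_len <- 3  # minimum observations per phase

    stat <- function(y, cp) {  # cp = the three change points
      phase <- rep(c("A", "B", "A", "B"),
                   times = diff(c(0, cp, length(y))))
      mean(y[phase == "B"]) - mean(y[phase == "A"])
    }

    # Enumerate all admissible triplets of change points
    divisions <- list()
    for (c1 in min_len:(n - 3 * min_len))
      for (c2 in (c1 + min_len):(n - 2 * min_len))
        for (c3 in (c2 + min_len):(n - min_len))
          divisions[[length(divisions) + 1]] <- c(c1, c2, c3)

    null_dist <- sapply(divisions, function(cp) stat(y, cp))
    observed <- stat(y, c(3, 7, 10))   # the division actually used
    p_value <- mean(null_dist >= observed)
    p_value

Because the change points were randomly selected before data collection, the observed division is one of the enumerated ones, and the proportion of divisions yielding a statistic at least as large gives a valid one-sided p value.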
Assigning and combining probabilities in single-case studies
There is currently a considerable diversity of quantitative measures available for summarizing the results of single-case studies. Given that some of them are difficult to interpret due to the lack of established benchmarks, the current paper proposes an approach for obtaining further numerical evidence on the importance of the results, complementing the substantive criteria, visual analysis, and primary summary measures. This additional evidence consists of the statistical significance of the outcome when referred to the corresponding sampling distribution, formed by the values of the outcome (expressed as data nonoverlap, R-squared, etc.) when the intervention is ineffective. The approach proposed here is intended to offer the outcome's probability of being as extreme as observed when there is no treatment effect, without the need for assumptions that cannot be checked with any guarantee. Following this approach, researchers would compare their outcomes to reference values rather than constructing the sampling distributions themselves. The integration of single-case studies is problematic when different metrics are used across primary studies and not all raw data are available. Via the approach for assigning p values it is possible to combine the results of similar studies regardless of the primary effect size indicator. The alternatives for combining probabilities are discussed in the context of single-case studies, pointing out two potentially useful methods: one based on a weighted average and the other on the binomial test.
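The core idea of assigning a probability to an outcome can be sketched in a few lines of R: simulate the summary measure under no intervention effect to form its sampling distribution, then read off how extreme the observed value is. The measure used here (a simple nonoverlap proportion), the phase lengths, and the data are illustrative assumptions; in the paper's approach, applied researchers would instead consult tabulated reference values.

    # Sketch: assigning a probability to a summary measure by comparing it
    # to its sampling distribution under zero intervention effect.
    set.seed(1)
    nonoverlap <- function(a, b) mean(outer(b, a, ">"))  # prop. of B > A pairs

    n_a <- 5; n_b <- 5
    null_values <- replicate(10000, nonoverlap(rnorm(n_a), rnorm(n_b)))

    observed <- nonoverlap(a = c(2, 3, 2, 4, 3), b = c(5, 6, 5, 7, 6))
    p_value <- mean(null_values >= observed)  # prob. of a result this extreme
    p_value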
Bootstrap: fundamentals and an introduction to its applications
The bootstrap technique provides estimates of statistical error while imposing few restrictions on the random variables analyzed, establishing itself as a general-purpose procedure regardless of the statistic considered. This paper presents the theoretical foundations of the bootstrap technique, from an expository rather than a strictly theoretical perspective, together with a brief study comparing the efficacy of the technique against more established alternatives. Keywords: bootstrap, statistical error, Monte Carlo resampling, Type I error rate, percentile technique.
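A minimal sketch of the percentile technique named in the keywords: resample the data with replacement, recompute the statistic each time, and take the spread and percentiles of those replicates as the standard error and confidence interval. The sample, the number of replicates, and the choice of the median as the statistic are arbitrary illustrations.

    # Percentile-bootstrap sketch: standard error and 95% CI for the median
    set.seed(42)
    x <- rexp(30, rate = 0.5)  # illustrative skewed sample
    B <- 5000
    boot_medians <- replicate(B, median(sample(x, replace = TRUE)))

    se_boot <- sd(boot_medians)                         # bootstrap standard error
    ci_perc <- quantile(boot_medians, c(0.025, 0.975))  # percentile 95% CI
    c(se = se_boot, ci_perc)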
Analytical options for single-case experimental designs: Review and application to brain impairment.
Single-case experimental designs meeting evidence standards are useful for identifying empirically supported practices. Part of the research process entails data analysis, which can be performed both visually and numerically. In the current text we discuss several statistical techniques, focusing on the descriptive quantifications that they provide of aspects such as overlap and differences in level and in slope. In each case, the numerical results are interpreted in light of the characteristics of the data as identified via visual inspection. Two previously published data sets from patients with traumatic brain injury are re-analyzed, illustrating several analytical options and the data patterns for which each of these analytical techniques is especially useful, considering their assumptions and limitations. In order to make the current review maximally informative for applied researchers, we point to free, user-friendly web applications of the analytical techniques. Moreover, we offer up-to-date references for potentially useful analytical techniques not illustrated in the article. Finally, we point to some analytical challenges and offer tentative recommendations about how to deal with them.
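The descriptive quantifications mentioned (difference in level and in slope) can be computed very simply, as in this R sketch for hypothetical AB data; this is an illustration of the general idea, not the specific procedures reviewed in the article.

    # Difference in level (phase means) and in slope (per-phase OLS trend)
    a <- c(4, 5, 4, 6, 5)    # baseline phase
    b <- c(7, 8, 9, 9, 10)   # intervention phase

    level_diff <- mean(b) - mean(a)

    slope <- function(y) coef(lm(y ~ seq_along(y)))[2]  # fitted trend per phase
    slope_diff <- slope(b) - slope(a)

    c(level = level_diff, slope = unname(slope_diff))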
Quantifying differences between conditions in single-case designs: Possible analysis and meta-analysis.
The current paper is a call for, and an illustration of, a way of closing the gap between basic research and professional practice in the field of neurorehabilitation. Methodologically, single-case experimental designs and the guidelines created regarding their conduct are highlighted. Statistically, we review two data analytical options: (a) indices quantifying the difference between pairs of conditions in the same metric as the target behavior, and (b) a formal statistical procedure offering a standardized overall quantification. The paper provides guidance on the analysis and suggests free software in order to illustrate, in the context of data from behavioral interventions with children with developmental disorders, that informative analyses are feasible. We also show how the results of individual studies can be made eligible for meta-analyses, which are useful for establishing the evidence base of interventions. Nevertheless, we also point to decisions that need to be made during the process of data analysis.
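The contrast between the two analytical options (a) and (b) can be sketched in R as follows. The data are invented, and the pooled-standard-deviation standardization shown is one common illustrative choice, not necessarily the formal procedure the paper reviews.

    # (a) a raw difference in the behavior's own metric, and
    # (b) a standardized overall difference
    baseline     <- c(12, 10, 11, 13, 12)
    intervention <- c(6, 5, 7, 5, 6)

    raw_diff <- mean(intervention) - mean(baseline)  # same metric as behavior

    pooled_sd <- sqrt(((length(baseline) - 1) * var(baseline) +
                       (length(intervention) - 1) * var(intervention)) /
                      (length(baseline) + length(intervention) - 2))
    std_diff <- raw_diff / pooled_sd                 # standardized quantification

    c(raw = raw_diff, standardized = std_diff)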
Assigning and combining probabilities in single-case designs: A second study
The present study builds on a previous proposal for assigning probabilities to the outcomes computed using different primary indicators in single-case studies. These probabilities are obtained by comparing the outcome to previously tabulated reference values, and they reflect the likelihood of the results in case there is no intervention effect. The current study explores how well different metrics are translated into p values in the context of simulated data. Furthermore, two published multiple-baseline data sets are used to illustrate how well the probabilities reflect the intervention effectiveness as assessed by the original authors. Finally, the importance of which primary indicator is used in each data set to be integrated is explored; two ways of combining probabilities are used: a weighted average and a binomial test. The results indicate that the translation into p values works well for the two nonoverlap procedures, with the results for the regression-based procedure diverging due to some undesirable features of its performance. These p values, both when taken individually and when combined, were well aligned with the effectiveness of the interventions in the real-life data. The results suggest that assigning probabilities can be useful for translating the primary measures into the same metric, using these probabilities as additional evidence on the importance of behavioral change, complementing visual analysis and professionals' judgments.
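The two combination methods named above can be sketched in R as below. The five p values are invented, the significance threshold and equal weights are illustrative assumptions, and the plain weighted average is shown as a computation only (turning it into a combined p value requires a reference distribution, as developed in the papers).

    p <- c(0.04, 0.01, 0.20, 0.03, 0.06)  # p values from five hypothetical studies
    alpha <- 0.05

    # Binomial test: is the count of p < alpha larger than expected by chance?
    k <- sum(p < alpha)
    binom.test(k, n = length(p), p = alpha, alternative = "greater")$p.value

    # Weighted average of p values (equal weights shown; weights could
    # instead reflect, e.g., series length in each study)
    weights <- rep(1 / length(p), length(p))
    sum(weights * p)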
Research techniques: applications of probability models & descriptive statistics
Apart from working with this document, we suggest that the recommended readings about statistical content and the R software be consulted. Moreover, attending the classes is also useful for learning and for discussing any doubts about the content. The current document contains a set of applications of discrete and continuous probability models and of univariate and bivariate statistics. The applications are presented in terms of numerical results and graphical representations, as is usually done for statistical content. The plots are enhanced using the capabilities of the R software in order to gain a better understanding of the data and of what is being done. The R code for obtaining both the numerical and the graphical results is provided so that readers can use it for their own analyses beyond this document. Finally, the applications can also be understood as a set of exercises for which the correct answers are provided.
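A small sample of the kind of applications described, using base R only; the particular models, parameter values, and data are illustrative choices, not exercises taken from the document itself.

    # Discrete model: P(X = 3) and P(X <= 3) for X ~ Binomial(n = 10, p = 0.4)
    dbinom(3, size = 10, prob = 0.4)
    pbinom(3, size = 10, prob = 0.4)

    # Continuous model: P(Z > 1.96) for a standard normal variable
    pnorm(1.96, lower.tail = FALSE)

    # Bivariate descriptives and an enhanced plot for made-up data
    x <- c(1, 2, 3, 4, 5); y <- c(2.1, 3.9, 6.2, 7.8, 10.1)
    cor(x, y)
    plot(x, y, pch = 19, main = "Illustrative scatterplot")
    abline(lm(y ~ x), lty = 2)  # overlay the fitted regression line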
R functions for quantifying interdependence in standard dyadic and SRM designs
Interdependence is the main feature of dyadic relationships and, in recent years, various statistical procedures have been proposed for quantifying and testing this social attribute in different dyadic designs. The purpose of this paper is to present several functions for these statistical tests in an R package, called nonindependence, for use by applied social researchers. A Graphical User Interface (GUI) has also been developed to facilitate the use of the functions included in this package. Examples drawn from psychological research and simulated data are used to illustrate how the software works.
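Without relying on the package's own functions (whose interface is not described here), the basic quantification of interdependence for indistinguishable dyad members can be sketched with the double-entry (pairwise) intraclass correlation; the data are invented for illustration.

    # Pairwise intraclass correlation via double entry: each dyad
    # contributes both orderings of its two members' scores.
    m1 <- c(4, 6, 5, 7, 3, 6)  # member 1 scores, one value per dyad
    m2 <- c(5, 6, 4, 8, 2, 7)  # member 2 scores

    x <- c(m1, m2)
    y <- c(m2, m1)
    r_pairwise <- cor(x, y)    # degree of interdependence within dyads
    r_pairwise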
Retention of principal components for discrete variables
The present study discusses retention criteria for principal components analysis (PCA) applied to Likert-scale items, which are typical of psychological questionnaires. The main aim is to recommend that applied researchers refrain from relying only on the eigenvalue-greater-than-one criterion; alternative procedures are suggested that adjust for sampling error. An additional objective is to add evidence on the consequences of applying this rule when PCA is used with discrete variables. The experimental conditions were studied by means of Monte Carlo sampling, including several sample sizes, different numbers of variables and answer alternatives, and four non-normal distributions. The results suggest that, even when all the items and thus the underlying dimensions are independent, eigenvalues greater than one are frequent and can explain up to 80% of the variance in the data, thus meeting the empirical criterion. The consequences of using Kaiser's rule are illustrated with an example from clinical psychology. The size of the eigenvalues turned out to be a function of the sample size and the number of variables, which is also the case for parallel analysis, as previous research shows. To facilitate the application of alternative criteria, an R package was developed for deciding the number of principal components to retain by means of confidence intervals constructed around the eigenvalues corresponding to a lack of relationship between the discrete variables.
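The simulation idea behind such retention criteria can be sketched in R: generate independent Likert-type items, collect the eigenvalues of their correlation matrices, and use upper percentiles as thresholds, in the style of parallel analysis. The sample size, number of items, response distribution, and percentile are arbitrary assumptions, and the "observed" data below are a random stand-in; this is not the package developed by the authors.

    set.seed(7)
    n <- 200; p <- 10; reps <- 500

    # Eigenvalues from repeated samples of independent 5-point items
    sim_eigs <- replicate(reps, {
      items <- matrix(sample(1:5, n * p, replace = TRUE), nrow = n)
      eigen(cor(items), only.values = TRUE)$values
    })
    thresholds <- apply(sim_eigs, 1, quantile, probs = 0.95)

    # Retain components whose observed eigenvalues exceed the thresholds
    obs <- eigen(cor(matrix(sample(1:5, n * p, replace = TRUE), nrow = n)),
                 only.values = TRUE)$values   # stand-in for real data
    sum(obs > thresholds)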