Approaches to Statistical Efficiency when comparing the embedded adaptive interventions in a SMART
Sequential, multiple assignment randomized trials (SMARTs), which assist in
the optimization of adaptive interventions, are growing in popularity in
education and behavioral sciences. This is unsurprising, as adaptive
interventions reflect the sequential, tailored nature of learning in a
classroom or school. Nonetheless, as is true elsewhere in education research,
observed effect sizes in education-based SMARTs are frequently small. As a
consequence, statistical efficiency is of paramount importance in their
analysis. The contributions of this manuscript are two-fold. First, we provide
an overview of adaptive interventions and SMART designs for researchers in
education science. Second, we propose four techniques that have the potential
to improve statistical efficiency in the analysis of SMARTs. We demonstrate the
benefits of these techniques in SMART settings both through the analysis of a
SMART designed to optimize an adaptive intervention for increasing cognitive
behavioral therapy delivery in school settings and through a comprehensive
simulation study. Each of the proposed techniques is easily implementable,
either with over-the-counter statistical software or through R code provided in
an online supplement. Comment: 36 pages, 2 figures
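The abstract does not name the four proposed techniques, so the sketch below only illustrates one generic efficiency device in this setting: adjusting for a baseline covariate in an inverse-probability-weighted comparison of two embedded adaptive interventions. It is a minimal sketch on simulated data; the SMART structure, variable names, weights, and effect sizes are hypothetical and not taken from the paper or its supplement.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400

# Hypothetical prototypical SMART: first-stage option a1 (+1/-1); responders (r = 1)
# continue unchanged, non-responders are re-randomized to a second-stage option a2 (+1/-1).
a1 = rng.choice([-1, 1], size=n)
x0 = rng.normal(size=n)                        # baseline covariate (e.g., a pre-test score)
r = rng.binomial(1, 0.4, size=n)
a2 = np.where(r == 1, 0, rng.choice([-1, 1], size=n))
y = 0.3 * a1 + 0.5 * x0 + rng.normal(size=n)   # end-of-study outcome
df = pd.DataFrame({"a1": a1, "a2": a2, "r": r, "x0": x0, "y": y})

# Compare the embedded adaptive interventions (start with a1 = +1, switch non-responders
# to a2 = +1) vs (start with a1 = -1, switch non-responders to a2 = +1): keep consistent
# individuals and weight by the inverse probability of their treatment path
# (2 for responders, 4 for re-randomized non-responders, given 0.5/0.5 randomization).
sub = df[(df.r == 1) | (df.a2 == 1)].copy()
sub["w"] = np.where(sub.r == 1, 2.0, 4.0)

unadjusted = smf.wls("y ~ a1", data=sub, weights=sub.w).fit(cov_type="HC1")
adjusted = smf.wls("y ~ a1 + x0", data=sub, weights=sub.w).fit(cov_type="HC1")
print("SE without / with baseline adjustment:",
      unadjusted.bse["a1"], adjusted.bse["a1"])  # adjustment typically shrinks the SE
```

Because the baseline covariate explains part of the outcome variance, the adjusted comparison usually yields a smaller standard error for the same data, which is the sense in which such devices buy statistical efficiency.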
SMART Binary: Sample Size Calculation for Comparing Adaptive Interventions in SMART studies with Longitudinal Binary Outcomes
Sequential Multiple-Assignment Randomized Trials (SMARTs) play an
increasingly important role in psychological and behavioral health research.
This experimental approach enables researchers to answer scientific questions
about how to sequence and match interventions to the unique, changing needs of
individuals. A variety of sample size planning resources for SMART studies have
been developed in recent years; these enable researchers to plan SMARTs for
addressing different types of scientific questions. However, relatively limited
attention has been given to planning SMARTs with binary (dichotomous) outcomes,
which often require higher sample sizes relative to continuous outcomes.
Existing resources for estimating sample size requirements for SMARTs with
binary outcomes do not consider the potential to improve power by including a
baseline measurement and/or multiple repeated outcome measurements. The current
paper addresses this issue by providing sample size simulation code and
approximate formulas for two-wave repeated measures binary outcomes (i.e., two
measurement times for the outcome variable, before and after receiving the
intervention). The simulation results agree well with the formulas. We also
discuss how to use simulations to calculate power for studies with more than
two outcome measurement occasions. The results show that having at least one
repeated measurement of the outcome can substantially improve power under
certain conditions. Comment: 73 pages, 2 figures, submitted to Multivariate Behavioral Research
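As a companion to the discussion of simulation-based power calculation, here is a minimal Monte Carlo sketch for a two-wave binary outcome, simplified to a plain two-group comparison with the baseline (wave-1) measurement used as a covariate. It is not the authors' simulation code; the sample sizes, probabilities, and the device used to induce pre/post correlation are all hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2024)

def simulate_trial(n_per_arm=150, p_base=0.30, p_ctrl=0.35, p_trt=0.50, rho=0.4):
    """One simulated two-group trial with a pre/post binary outcome."""
    n = 2 * n_per_arm
    arm = np.repeat([0, 1], n_per_arm)
    y0 = rng.binomial(1, p_base, n)                   # baseline (wave-1) binary outcome
    # shift the follow-up probability by the baseline value to induce correlation
    p1 = np.where(arm == 1, p_trt, p_ctrl) + rho * (y0 - p_base)
    y1 = rng.binomial(1, np.clip(p1, 0.01, 0.99))     # follow-up (wave-2) binary outcome
    return pd.DataFrame({"arm": arm, "y0": y0, "y1": y1})

def power(n_sims=500, alpha=0.05, **kw):
    """Monte Carlo power: share of simulated trials rejecting H0 for the arm effect."""
    hits = 0
    for _ in range(n_sims):
        df = simulate_trial(**kw)
        fit = smf.logit("y1 ~ arm + y0", data=df).fit(disp=0)  # adjust for the baseline wave
        hits += fit.pvalues["arm"] < alpha
    return hits / n_sims

print("approximate power:", power())
```

Dropping `y0` from the model formula gives the corresponding unadjusted power, which is one way to see how much the repeated baseline measurement contributes under a given data-generating scenario.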
Dietary Saturated Fat Intake Is Negatively Associated With Weight Maintenance Among the PREMIER Participants
Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/93652/1/oby.2011.17.pd
Gaussianization of LA-ICP-MS features to improve calibration in forensic glass comparison
The forensic comparison of glass aims to compare a glass sample of an unknown source with a control glass sample of a known source. In this work, we use multi-elemental features from Laser Ablation Inductively Coupled Plasma with Mass Spectrometry (LA-ICP-MS) to compute a likelihood ratio. This calculation is a complex procedure that generally requires a probabilistic model including the within-source and between-source variabilities of the features. Assuming the within-source variability to be normally distributed is a practical premise given the available data. However, the between-source variability is generally assumed to follow a much more complex distribution, typically described with a kernel density function. In this work, instead of modeling distributions with complex densities, we propose the use of simpler models and the introduction of a data pre-processing step consisting of the Gaussianization of the glass features. In this context, to obtain a better fit of the features to the Gaussian model assumptions, we explore different normalization techniques for the LA-ICP-MS glass features, namely marginal Gaussianization based on histogram matching, marginal Gaussianization based on the Yeo-Johnson transformation, and a more complex joint Gaussianization using normalizing flows. We report an improvement in the performance of the likelihood ratios computed with the previously Gaussianized feature vectors, particularly relevant in their calibration, which implies a more reliable forensic glass comparison.
This work has been supported by the Spanish Ministerio de Ciencia e Innovación through grant PID2021-125943OB-I0
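As an illustration of the pre-processing idea, the sketch below applies a marginal Yeo-Johnson Gaussianization (via scipy.stats.yeojohnson) to a single glass feature and then computes a deliberately simplified univariate Gaussian likelihood ratio. The data, the assumed within-source spread, and the score construction are hypothetical and much cruder than the models evaluated in the paper; the point is only that the transform is fitted on background data and then reused on casework measurements.

```python
import numpy as np
from scipy import stats

# Background (between-source) data: one elemental ratio per glass source (hypothetical values).
background = np.random.default_rng(7).lognormal(mean=0.0, sigma=0.6, size=300)

# Marginal Gaussianization via the Yeo-Johnson transform, fitted on the background data.
bg_gauss, lmbda = stats.yeojohnson(background)

def gaussianize(x):
    """Apply the background-fitted Yeo-Johnson parameter to new measurements."""
    return stats.yeojohnson(np.asarray(x, dtype=float), lmbda=lmbda)

# Deliberately simplified univariate Gaussian likelihood ratio:
# numerator  - recovered value under the control source (within-source spread sigma_w),
# denominator - recovered value under the background population.
sigma_w = 0.05 * np.std(bg_gauss)          # assumed within-source std (illustrative only)
mu_b, tau_b = bg_gauss.mean(), bg_gauss.std()

def likelihood_ratio(control, recovered):
    c = gaussianize(control).mean()
    r = gaussianize([recovered])[0]
    num = stats.norm.pdf(r, loc=c, scale=np.sqrt(2) * sigma_w)
    den = stats.norm.pdf(r, loc=mu_b, scale=np.sqrt(tau_b**2 + sigma_w**2))
    return num / den

print(likelihood_ratio(control=[1.02, 0.98, 1.01], recovered=1.00))
```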
Dissemination of temporary exhibitions as permanent web applications through the heritage survey
In the context of web applications for the dissemination of cultural heritage, this article develops a methodology for the optimization of point clouds obtained with Terrestrial Laser Scanner (TLS) technology, identifying the potential of TLS surveys as interactive models that allow cultural heritage to be perpetuated over time.
This point cloud optimization is carried out with free software, and its exploitation is focused on an interactive web application, which has made it possible to convert two temporary museum exhibitions into permanent exhibitions in virtual format, developed in conjunction with the Museu d’Història de la Ciutat de Barcelona (MUHBA). The case study focuses on the Gothic Palau Reial Major, formed by the chapel of Santa Àgata (built in 1302 on the Roman wall) and the Saló del Tinell (built between 1359 and 1370 on the Roman remains), located in the Plaça del Rei in the old town of Barcelona. Visual impact is very important in this application: it requires a faithful model of the interior of the building in terms of colour and lighting, avoiding transparencies in the model by means of a dense point cloud without occlusions, which in turn requires a large number of scanning positions. This calls for a clear methodology using different techniques, such as photographic projection, given the complexity of the building's lighting, both artificial and through the stained glass. In this process, 84 scanning positions provided the highest density of points, and these were optimized with free programs. The temporary exhibitions in the case study were produced by the MUHBA in the Saló del Tinell.
Postprint (published version)
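A minimal sketch of the kind of point-cloud optimization described above, assuming the Open3D library; the file names, voxel size, and outlier parameters are hypothetical and not the project's actual settings.

```python
import open3d as o3d

# Hypothetical merged TLS export of the interior ("tinell_merged.ply"), with per-point RGB.
pcd = o3d.io.read_point_cloud("tinell_merged.ply")

# Remove sparse stray points (instrument noise, people, furniture) before publishing.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Voxel downsampling thins the cloud to a density a browser-based viewer can handle
# while keeping the colour information that carries the lighting impression.
pcd_web = pcd.voxel_down_sample(voxel_size=0.01)   # roughly 1 cm point spacing

o3d.io.write_point_cloud("tinell_web.ply", pcd_web)
print(len(pcd.points), "->", len(pcd_web.points), "points")
```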
Filtering Surfaces in Surveys with Multiple Overlaps: Sagrada Familia
The heritage survey with the Terrestrial Laser Scanner (TLS) allows the geometry of a building to be documented and a 3D point cloud to be constituted as a register of its conservation state. When complex buildings with architectural and sculptural elements are scanned, a large amount of the captured data is not valid, because of instrumental error and elements foreign to the building. For that reason, the point cloud must be cleaned in order to obtain a final model from which different products can be created, such as plans, technical documents and printable 3D models. For this cleaning process, taking Antoni Gaudi's Sagrada Familia (Fachada del Nacimiento) as the case study, we propose a methodology based on applying several filters, considering that more than 3000 scanning positions were carried out, 750 of which belong to the same facade and contain a large amount of overlapping data. Consequently, the same zone of the building holds data scanned from multiple positions in different ways, so several kinds of error can be found there, such as noise from boundary effects, reflections on glass and moving objects, as well as scans taken from a scissor lift, which were previously validated.
Different point cloud filtering processes have been studied, both on the point cloud itself (position by position and as a single unified cloud) and by meshing it. Every process requires knowledge of how the scan was carried out, and the type of error that dominates in each zone is analysed. Therefore, each filtering option fulfils the requirements established after the analysis.
Postprint (author's final draft)
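A minimal sketch of filtering overlapping TLS positions, again assuming the Open3D library; the file names, thresholds, and the optional Poisson meshing step are illustrative and not the methodology's actual parameters.

```python
import open3d as o3d

# Each registered scan position contributes a cloud; overlapping zones are merged first.
# Hypothetical file names standing in for a few of the ~750 facade positions.
positions = [f"position_{i:03d}.ply" for i in range(1, 4)]
merged = o3d.geometry.PointCloud()
for path in positions:
    merged += o3d.io.read_point_cloud(path)

# Radius-based filtering removes isolated points (boundary noise, reflections on glass,
# moving objects) that are not supported by neighbouring data from other positions.
filtered, _ = merged.remove_radius_outlier(nb_points=16, radius=0.05)

# Optional: mesh the cleaned cloud for derived products (plans, printable 3D models).
filtered.estimate_normals()
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(filtered, depth=9)
o3d.io.write_triangle_mesh("facade_mesh.ply", mesh)
```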
Organizational culture and climate as moderators of enhanced outreach for persons with serious mental illness: results from a cluster-randomized trial of adaptive implementation strategies
Abstract
Background
Organizational culture and climate are considered key factors in implementation efforts but have not been examined as moderators of implementation strategy comparative effectiveness. We investigated organizational culture and climate as moderators of comparative effectiveness of two sequences of implementation strategies (Immediate vs. Delayed Enhanced Replicating Effective Programs [REP]) combining Standard REP and REP enhanced with facilitation on implementation of an outreach program for Veterans with serious mental illness lost to care at Veterans Health Administration (VA) facilities nationwide.
Methods
This study is a secondary analysis of the cluster-randomized Re-Engage implementation trial that assigned 3075 patients at 89 VA facilities to either the Immediate or Delayed Enhanced REP sequences. We hypothesized that sites with stronger entrepreneurial culture, task, or relational climate would benefit more from Enhanced REP than Standard REP. Veteran- and site-level data from the Re-Engage trial were combined with site-aggregated measures of entrepreneurial culture and task and relational climate from the 2012 VA All Employee Survey. Longitudinal mixed-effects logistic models examined whether the comparative effectiveness of the Immediate vs. Delayed Enhanced REP sequences was moderated by culture or climate measures at 6 and 12 months post-randomization. Three Veteran-level outcomes related to engagement with the VA system were assessed: updated documentation, attempted contact by coordinator, and completed contact.
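A minimal sketch of this kind of moderation analysis on clustered binary outcomes. The trial used longitudinal mixed-effects logistic models; as a simpler stand-in, the sketch fits a GEE logistic model with facility-level clustering, where the arm-by-climate interaction is the moderation term of interest. The data, variable names, and effect sizes are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_sites, n_per_site = 60, 30
site = np.repeat(np.arange(n_sites), n_per_site)
arm = np.repeat(rng.integers(0, 2, n_sites), n_per_site)      # site-level strategy assignment
climate = np.repeat(rng.normal(size=n_sites), n_per_site)     # site-aggregated climate (z-score)
lin_pred = -0.5 + 0.4 * arm + 0.2 * climate + 0.3 * arm * climate
contact = rng.binomial(1, 1 / (1 + np.exp(-lin_pred)))        # binary engagement outcome
df = pd.DataFrame({"site": site, "arm": arm, "climate": climate, "contact": contact})

# The arm:climate interaction coefficient captures whether climate moderates
# the comparative effectiveness of the two strategies.
model = smf.gee("contact ~ arm * climate", groups="site", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```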
Results
For updated documentation and attempted contact, Veterans at sites with higher entrepreneurial culture and task climate scores benefitted more from Enhanced REP compared to Standard REP than Veterans at sites with lower scores. Few culture or climate moderation effects were detected for the comparative effectiveness of the full sequences of implementation strategies.
Conclusions
Implementation strategy effectiveness is highly intertwined with contextual factors, and implementation practitioners may use knowledge of contextual moderation to tailor strategy deployment. We found that facilitation strategies provided with Enhanced REP were more effective at improving uptake of a mental health outreach program at sites with stronger entrepreneurial culture and task climate; Veterans at sites with lower levels of these measures saw more similar improvement under Standard and Enhanced REP. Within resource-constrained systems, practitioners may choose to target more intensive implementation strategies to sites that will most benefit from them.
Trial registration
ISRCTN: ISRCTN21059161. Date registered: April 11, 2013.
https://deepblue.lib.umich.edu/bitstream/2027.42/144775/1/13012_2018_Article_787.pd