Useful scientific theories are useful: A reply to Rouder, Pratte, and Morey (2010)
In a recognition memory experiment, Mickes, Wixted, and Wais (2007) asked a simple question: Would the same result, namely a higher mean and variance of the memory strengths for the targets as compared with the lures, be evident if one used a 20-point confidence scale and then simply computed the relevant distributional statistics from the ratings themselves instead of estimating them by fitting a Gaussian model to ROC data? And if an unequal-variance model were suggested by the ratings data, would the magnitude of the estimated ratio of the standard deviations based on the ratings (s_Lure / s_Target) be similar to the magnitude of the estimated ratio obtained by fitting a Gaussian model to ROC data (σ_Lure / σ_Target)? A priori, agreement between the two ratio estimates seems unlikely, because there are many reasons why they might disagree. For example, if the Gaussian assumption is not valid, then disagreement between the two estimates seems more likely than agreement. In addition, if the rating scale does not approximate an interval scale, or if it covers only a limited range of the memory strength dimension, then, again, disagreement seems more likely than agreement. Somewhat surprisingly, the two estimates closely agreed. From these results, we drew two conclusions: 1. The two experiments reported here support a conclusion that is commonly drawn from ROC analysis, namely, that the memory strengths of the targets are more variable than the memory strengths of the lures. (p. 864) 2. The close agreement between the model-based ROC analysis and the model-free ratings method supports not only an unequal-variance model, but also the idea that the memory strengths are distributed in such a way that fitting a specifically Gaussian model to the data yields accurate conclusions (even if the true underlying distributions are not strictly Gaussian). (p. 864)
Filler siphoning theory does not predict the effect of lineup fairness on the ability to discriminate innocent from guilty suspects: reply to Smith, Wells, Smalarz, and Lampinen
Smith, Wells, Smalarz, and Lampinen (2017) claim that we (Colloff, Wade, & Strange, 2016) were wrong to conclude that fair lineups enhanced people’s ability to discriminate between innocent and guilty suspects compared to unfair lineups. They argue that our results reflect differential filler siphoning, not diagnostic feature detection. But a manipulation that decreases identifications of innocent suspects more than guilty suspects (i.e., that increases filler siphoning or conservative responding) does not necessarily increase people’s ability to discriminate between innocent and guilty suspects. Unlike diagnostic feature detection, filler siphoning makes no prediction about people’s ability to discriminate between innocent and guilty suspects. Moreover, we replicated Colloff et al.’s results in the absence of filler siphoning (N = 2,078). Finally, a model is needed to measure the ability to discriminate between innocent and guilty suspects. Smith et al.’s model-based analysis contained several errors. Correcting those errors shows that our model was not faulty, and Smith et al.’s model supports our original conclusions.
Displacement Across a Fracture Gap with Axial Loading of Far Cortical Locking Constructs
Purpose: Far cortical locking has been proposed for reducing stiffness and promoting greater dynamic stability in locked plating constructs. Prior studies have shown reduced stiffness with axial loading of these constructs, leading to a theoretical increase in inter-fragmentary motion and secondary bone healing. The purpose of this study was to examine strain across a fracture gap using far cortical locking constructs in a biomechanical model of distal femoral fractures.
Methods: Fourth-generation sawbones were cut transversely along the distal diaphysis and plated with distal femoral buttress plates and cortical locking screws. Far cortical locking (FCL) specimens were predrilled in the lateral cortex, and control specimens were plated with a standard locked plating construct. The constructs were loaded sequentially with 100, 200, and 400 lbs of force on a mechanical test frame. Displacement across the fracture gap was measured in pixels using an optical system.
Results: Strain across the fracture gap increased with progressive loading from zero to 400 lbs in both groups. Strain also decreased in a linear fashion from medial to lateral across the fracture gap in both constructs (Figure 1). Standard locking constructs exhibited an average 28% greater strain than the far cortical locking constructs at all loading forces. Control specimens exhibited greater lateral displacement of the distal segment relative to the plate (Figure 2), consistent with higher shear forces compared to FCL specimens.
Conclusions: In all specimens, considerable strain was seen with loading, increasing in characteristic fashion from lateral to medial. Overall, FCL constructs exhibited both lower strain and, importantly, lower shear than controls. This biomechanical model suggests that FCL changes loading across the femoral diaphysis in complex ways, and that assumptions about strain approaching zero on the lateral side of the distal femur, with either conventional locking or FCL, may be incorrect.
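The methods above report gap displacement in pixels; converting that to engineering strain only requires the optical calibration and the initial gap width. A minimal sketch, where both PIXELS_PER_MM and GAP_MM are hypothetical illustrative values, not figures from the study:

```python
# Hypothetical conversion from optical pixel displacement to fracture-gap strain.
# PIXELS_PER_MM and GAP_MM are assumed calibration values, not from the study.
PIXELS_PER_MM = 40.0   # optical system calibration (assumed)
GAP_MM = 10.0          # initial fracture-gap width (assumed)

def gap_strain(displacement_px: float) -> float:
    """Engineering strain across the gap for a measured pixel displacement."""
    displacement_mm = displacement_px / PIXELS_PER_MM
    return displacement_mm / GAP_MM

# e.g., a 120 px change across an assumed 10 mm gap is 3 mm, i.e. 0.30 strain
print(gap_strain(120.0))
```

With per-point calibration, the same conversion applied at several positions across the gap would reproduce the medial-to-lateral strain profile described in the results.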
A direct test of the unequal-variance signal detection model of recognition memory
Analyses of the receiver operating characteristic (ROC) almost invariably suggest that, on a recognition memory test, the standard deviation of memory strengths associated with the lures (σ_lure) is smaller than that of the targets (σ_target); often, σ_lure/σ_target ≈ 0.80. However, that conclusion is based on a model that assumes that the memory strength distributions are Gaussian in form. In two experiments, we investigated this issue in a more direct way by asking subjects simply to rate the memory strengths of targets and lures using a 20-point or a 99-point strength scale. The results showed that the standard deviation of the ratings made to the targets (S_target) was, indeed, larger than the standard deviation of the ratings made to the lures (S_lure). Moreover, across subjects, the ratio S_lure/S_target correlated highly with the estimate of σ_lure/σ_target obtained from ROC analysis, and both estimates were, on average, approximately equal to 0.80.
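The correspondence between the direct ratings-based estimate and the ROC-based estimate can be illustrated with a small simulation. This is a sketch with assumed parameters (σ_lure = 1.0, σ_target = 1.25, so a true ratio of 0.80, and arbitrary response criteria); it relies on the standard signal-detection result that, under the Gaussian model, the slope of the z-ROC (z-transformed hit rate against z-transformed false-alarm rate) estimates σ_lure/σ_target:

```python
import random
import statistics
from statistics import NormalDist

random.seed(0)
N = 100_000
SIGMA_LURE, SIGMA_TARGET, D = 1.0, 1.25, 1.0  # assumed: true ratio = 0.80

# Simulated "memory strengths" (stand-ins for fine-grained strength ratings)
lures   = [random.gauss(0.0, SIGMA_LURE) for _ in range(N)]
targets = [random.gauss(D, SIGMA_TARGET) for _ in range(N)]

# Direct estimate: ratio of the sample standard deviations
direct_ratio = statistics.stdev(lures) / statistics.stdev(targets)

# ROC-based estimate: slope of the z-ROC across several criteria
nd = NormalDist()
criteria = [-0.5, 0.0, 0.5, 1.0, 1.5]           # arbitrary decision criteria
z_fa = [nd.inv_cdf(sum(x > c for x in lures) / N) for c in criteria]
z_hr = [nd.inv_cdf(sum(x > c for x in targets) / N) for c in criteria]

mx, my = statistics.mean(z_fa), statistics.mean(z_hr)
slope = (sum((x - mx) * (y - my) for x, y in zip(z_fa, z_hr))
         / sum((x - mx) ** 2 for x in z_fa))   # least-squares z-ROC slope

print(direct_ratio, slope)  # both ≈ 0.80 under the generating model
```

When the generating model really is unequal-variance Gaussian, the two estimates converge on the same ratio, which is the pattern of agreement the abstract reports.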
Apparatus for a Search for T-violating Muon Polarization in Stopped-Kaon Decays
The detector built at KEK to search for T-violating transverse muon polarization in K+ --> pi0 mu+ nu (Kmu3) decay of stopped kaons is described. Sensitivity to the transverse polarization component is obtained from reconstruction of the decay plane by tracking the mu+ through a toroidal spectrometer and detecting the pi0 in a segmented CsI(Tl) photon calorimeter. The muon polarization was obtained from the decay positron asymmetry of muons stopped in a polarimeter. The detector included features which minimized systematic errors while maintaining high acceptance.
Comment: 56 pages, 30 figures, submitted to NI
Useful scientific theories are useful: A reply to Rouder, Pratte, and Morey (2010)
In a recognition memory experiment, Mickes, Wixted, and Wais (2007) reported that distributional statistics computed from ratings made using a 20-point confidence scale (which showed that the standard deviation of the ratings made to lures was approximately 0.80 times that of the targets) essentially matched the distributional statistics estimated indirectly by fitting a Gaussian signal-detection model to the receiver operating characteristic (ROC). We argued that the parallel results serve to increase confidence in the Gaussian unequal-variance model of recognition memory. Rouder, Pratte, and Morey (2010) argue that the results are instead uninformative. In their view, parametric models of latent memory strength are not empirically distinguishable. As such, they argue, our conclusions are arbitrary, and parametric ROC analysis should be abandoned. In an attempt to demonstrate the inherent untestability of parametric models, they describe a non-Gaussian equal-variance model that purportedly accounts for our findings just as well as the Gaussian unequal-variance model does. However, we show that their new model, despite being contrived after the fact and in full view of the to-be-explained data, does not account for the results as well as the unequal-variance Gaussian model does. This outcome manifestly demonstrates that parametric models are, in fact, testable. Moreover, the results differentially favor the Gaussian account over the probit model and over several other reasonable distributional forms (such as the Weibull and the lognormal).
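The claim that parametric models are empirically testable can be illustrated with a toy model comparison. In this sketch (all parameters, sample sizes, and confidence-category boundaries are assumed for illustration), strengths simulated from an unequal-variance Gaussian model are binned into rating categories, and an equal-variance Gaussian fit is compared against an unequal-variance Gaussian fit by multinomial log-likelihood over a coarse parameter grid:

```python
import math
import random
from statistics import NormalDist

random.seed(1)
nd = NormalDist()
N = 50_000
CRITERIA = [-0.5, 0.0, 0.5, 1.0, 1.5]  # assumed confidence-category boundaries

# "True" data: unequal-variance Gaussian model (sigma ratio 0.80)
lures   = [random.gauss(0.0, 1.0) for _ in range(N)]
targets = [random.gauss(1.0, 1.25) for _ in range(N)]

def bin_counts(xs):
    """Count observations falling into each confidence category."""
    counts = [0] * (len(CRITERIA) + 1)
    for x in xs:
        counts[sum(x > c for c in CRITERIA)] += 1
    return counts

def loglik(counts, mu, sigma):
    """Multinomial log-likelihood of binned strengths under N(mu, sigma)."""
    edges = [-math.inf] + CRITERIA + [math.inf]
    ll = 0.0
    for k, n in enumerate(counts):
        p = nd.cdf((edges[k + 1] - mu) / sigma) - nd.cdf((edges[k] - mu) / sigma)
        ll += n * math.log(max(p, 1e-300))
    return ll

lc, tc = bin_counts(lures), bin_counts(targets)
grid = [i / 20 for i in range(0, 41)]  # coarse grid: 0.00 .. 2.00

# Equal-variance model: shared sigma = 1, fit only the target mean
ll_equal = loglik(lc, 0.0, 1.0) + max(loglik(tc, mu, 1.0) for mu in grid)

# Unequal-variance model: also fit the target sigma
ll_unequal = loglik(lc, 0.0, 1.0) + max(
    loglik(tc, mu, s) for mu in grid for s in grid if s >= 0.5)

print(ll_unequal > ll_equal)  # the generating model fits the data better
```

The point of the toy comparison is simply that binned rating data carry enough information to rank parametric accounts by fit; it is not a reconstruction of the actual analyses in the paper.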