
    Past trauma and future choices: Differences in discounting in low-income, urban African Americans

    Background: Exposure to traumatic events is surprisingly common, yet little is known about its effect on decision making beyond the fact that those with post-traumatic stress disorder are more likely to have substance-abuse problems. We examined the effects of exposure to severe trauma on decision making in low-income, urban African Americans, a group especially likely to have had such traumatic experiences.

    Method: Participants completed three decision-making tasks that assessed the subjective value of delayed monetary rewards and payments and of probabilistic rewards. Trauma-exposed cases and controls were propensity-matched on demographic measures, treatment for psychological problems, and substance dependence.

    Results: Trauma-exposed cases discounted the value of delayed rewards and delayed payments, but not probabilistic rewards, more steeply than controls. Surprisingly, given previous findings suggesting that women are more affected by trauma, when female and male participants’ data were analyzed separately, only the male cases showed steeper delay discounting. Compared with nonalcoholic males who were not exposed to trauma, both severe trauma and alcohol dependence produced significantly steeper discounting of delayed rewards.

    Conclusions: The current study shows that exposure to severe trauma selectively affects fundamental decision-making processes. Only males were affected, and effects were observed only on discounting delayed outcomes (i.e., intertemporal choice) and not on discounting probabilistic outcomes (i.e., risky choice). These findings are the first to show significant differences in the effects of trauma on men’s and women’s decision making, and the selectivity of these effects has potentially important implications for treatment and also provides clues as to underlying mechanisms.
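
    The abstract does not state the discounting model, but delay-discounting studies of this kind typically summarize steepness with a hyperbolic function, V = A / (1 + kD), where V is the subjective value of an amount A delayed by D and a larger k means steeper discounting. A minimal sketch of fitting k to indifference-point data follows; the data values and the use of scipy's curve_fit are illustrative assumptions, not details from the paper.

    ```python
    # Minimal sketch: fit a hyperbolic delay-discounting curve V = A / (1 + k*D).
    # The indifference points below are illustrative, not the study's data.
    import numpy as np
    from scipy.optimize import curve_fit

    def hyperbolic(delay_days, k):
        """Subjective value of $1 delayed by delay_days."""
        return 1.0 / (1.0 + k * delay_days)

    delays = np.array([7, 30, 90, 180, 365])           # days until receipt
    indiff = np.array([0.95, 0.80, 0.60, 0.45, 0.30])  # value relative to an immediate $1

    (k_hat,), _ = curve_fit(hyperbolic, delays, indiff, p0=[0.01])
    print(f"estimated discounting rate k = {k_hat:.4f} per day")
    # Steeper discounting (here, the trauma-exposed male cases) -> larger k.
    ```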

    Statistical Inference After Model Selection

    Conventional statistical inference requires that a model of how the data were generated be known before the data are analyzed. Yet in criminology, and in the social sciences more broadly, a variety of model selection procedures are routinely undertaken, followed by statistical tests and confidence intervals computed for a “final” model. In this paper, we examine such practices and show how they are typically misguided. The parameters being estimated are no longer well defined, and post-model-selection sampling distributions are mixtures with properties that are very different from what is conventionally assumed. Confidence intervals and statistical tests do not perform as they should. We examine in some detail the specific mechanisms responsible. We also offer some suggestions for better practice and show, through a criminal justice example using real data, how proper statistical inference in principle may be obtained.
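
    The mixture problem described above is easy to exhibit by simulation: select a regressor with a preliminary t-test, then report a conventional 95% confidence interval for it in the “final” model, and coverage conditional on selection falls below the nominal level. The sketch below is our illustration, not the paper's example; the design constants are arbitrary.

    ```python
    # Minimal sketch: nominal 95% intervals computed after model selection undercover.
    # Design: y = 0.15*x + noise; keep x only if its t-test rejects, then report a
    # conventional 95% CI for its coefficient in the selected model.
    import numpy as np

    rng = np.random.default_rng(0)
    n, beta, reps = 100, 0.15, 5000
    covered, selected = 0, 0

    for _ in range(reps):
        x = rng.normal(size=n)
        y = beta * x + rng.normal(size=n)
        bhat = (x @ y) / (x @ x)                  # OLS slope (no intercept)
        resid = y - bhat * x
        se = np.sqrt(resid @ resid / (n - 1)) / np.sqrt(x @ x)
        if abs(bhat / se) > 1.96:                 # the model selection step
            selected += 1
            covered += (bhat - 1.96 * se <= beta <= bhat + 1.96 * se)

    print(f"coverage of the 'final model' 95% CI: {covered / selected:.1%}")
    # Conditional on selection, coverage falls well below the nominal 95%.
    ```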

    Developing a Practical Forecasting Screener for Domestic Violence Incidents

    In this paper, we report on the development of a short screening tool that deputies in the Los Angeles Sheriff's Department could use in the field to help forecast domestic violence incidents in particular households. The data come from over 500 households to which sheriff's deputies were dispatched in the fall of 2003. Information on potential predictors was collected at the scene. Outcomes were measured during a three-month follow-up. The data were analyzed with modern data-mining procedures in which true forecasts were evaluated. A screening instrument was then developed based on a small fraction of the information collected. Making the screening instrument more complicated did not improve forecasting skill. Taking the relative costs of false positives and false negatives into account, the instrument correctly forecast future calls for service about 60% of the time. Future calls involving domestic violence misdemeanors and felonies were correctly forecast about 50% of the time. The 50% figure is especially important because such calls require a law enforcement response and yet are a relatively small fraction of all domestic violence calls for service. A number of broader policy implications follow. It is feasible to construct a quick-response domestic violence screener that is practical to deploy and that can forecast with useful skill. More informed decisions by police officers in the field can follow. Although the same kinds of predictors are likely to be effective in a wide variety of jurisdictions, the particular indicators selected will vary in response to local demographics and the local costs of forecasting errors. It is also feasible to evaluate such quick-response threat assessment tools for their forecasting accuracy. But the costs of forecasting errors must be taken into account. Also, when the data used to build the forecasting instrument are also used to evaluate its accuracy, inflated estimates of forecasting skill are likely.
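
    One standard way to fold the relative costs of false positives and false negatives into data-mining procedures is to weight the outcome classes when the classifier is trained. The sketch below illustrates the idea with a random forest; the synthetic features, the 10:1 cost ratio, and the scikit-learn calls are assumptions for illustration, not the actual screener.

    ```python
    # Minimal sketch: build asymmetric forecasting-error costs into a classifier
    # by weighting the classes. Features and the 10:1 cost ratio are illustrative.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n = 500
    X = rng.normal(size=(n, 4))   # e.g., prior calls, threats, weapons, children present
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 1.2).astype(int)  # future incident?

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    # Treat a missed future incident (false negative) as ten times as costly
    # as an unnecessary flag (false positive).
    clf = RandomForestClassifier(n_estimators=500, class_weight={0: 1, 1: 10},
                                 random_state=0)
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print(f"flagged {pred.mean():.0%} of households; "
          f"hit rate among actual incidents: {pred[y_te == 1].mean():.0%}")
    ```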

    The Legalization of Abortion and Subsequent Youth Homicide: A Time Series Analysis

    In this article, we examine the association between the legalization of abortion with the 1973 Roe v. Wade decision and youth homicide in the 1980s and 1990s. An interrupted time series design was used to examine the deaths of all U.S. 15- to 24-year-olds that were classified as homicides according to the International Classification of Diseases (codes E960-969) from 1970 to 1998. The legalization of abortion is associated over a decade later with a gradual reduction in the homicides of White and non-White young men. The effect on the homicides of young women is minimal. We conclude that the 1990s decline in the homicide of young men is statistically associated with the legalization of abortion. Findings are not consistent with several alternative explanations, such as changes in the crack cocaine drug market. It is almost inconceivable that in the United States of today, policies affecting the choice to have children would be justified as a means to control crime. Yet, if the legalization of abortion had this unintended effect, the full range of policy implications needs to be discussed.
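
    An interrupted time series design of this kind is commonly implemented as a segmented regression in which the level and slope of the series are allowed to change after the break point. The sketch below illustrates that structure on simulated yearly rates; the break year, the simulated series, and the statsmodels calls are our assumptions, not the article's data or model.

    ```python
    # Minimal sketch: segmented (interrupted time series) regression with a
    # delayed, gradual effect of the kind the article describes. Simulated data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    years = np.arange(1970, 1999)
    t = years - years[0]
    post = (years >= 1990).astype(float)   # cohorts born after 1973 reach ages 15-24
    t_post = post * (years - 1990)         # slope change after the break

    rate = 20 + 0.3 * t - 1.5 * t_post + rng.normal(0, 1.5, len(years))
    X = sm.add_constant(np.column_stack([t, post, t_post]))
    fit = sm.OLS(rate, X).fit()
    print(fit.params)  # [level, pre-existing trend, level shift, slope change]
    # A significantly negative slope-change term corresponds to the gradual
    # reduction in homicides reported for young men.
    ```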

    Characteristic QSO Accretion Disk Temperatures from Spectroscopic Continuum Variability

    Using Sloan Digital Sky Survey (SDSS) quasar spectra taken at multiple epochs, we find that the composite flux density differences in the rest-frame wavelength range 1300–6000 Å can be fit by a standard thermal accretion disk model in which the accretion rate has changed from one epoch to the next (without considering additional continuum emission components). The fit to the composite residual has two free parameters: a normalizing constant and the average characteristic temperature $\bar{T}^*$. In turn, the characteristic temperature depends on the ratio of the mass accretion rate to the square of the black hole mass. We therefore conclude that most of the UV/optical variability may be due to processes involving the disk, and thus that a significant fraction of the UV/optical spectrum may come directly from the disk.
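
    The stated dependence follows from standard thin-disk theory: the characteristic temperature scales as $T^* = [3 G M \dot{M} / (8 \pi \sigma R_{\rm in}^3)]^{1/4}$, and with the inner radius $R_{\rm in}$ proportional to the black hole mass $M$, this gives $T^* \propto (\dot{M}/M^2)^{1/4}$. A minimal sketch of the arithmetic follows; the fiducial mass and accretion rate are assumed values, not the paper's fit.

    ```python
    # Minimal sketch: thin-disk characteristic temperature with Rin = 6GM/c^2,
    # so that T* scales as (Mdot / M^2)^(1/4). Fiducial values are assumptions.
    import numpy as np

    G, c, sigma_sb = 6.674e-11, 2.998e8, 5.670e-8   # SI units
    M_sun = 1.989e30

    M = 1e8 * M_sun               # black hole mass (assumed)
    Mdot = M_sun / 3.156e7        # one solar mass per year (assumed)

    R_in = 6 * G * M / c**2       # innermost stable orbit, non-spinning hole
    T_star = (3 * G * M * Mdot / (8 * np.pi * sigma_sb * R_in**3)) ** 0.25
    print(f"T* = {T_star:.2e} K") # of order 1e5 K for these fiducial values

    # Doubling Mdot at fixed M raises T* by 2**0.25 (about 19%): the kind of
    # epoch-to-epoch change the composite flux differences constrain.
    ```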

    Misspecified Mean Function Regression: Making Good Use of Regression Models That Are Wrong

    There are over three decades of largely unrebutted criticism of regression analysis as practiced in the social sciences. Yet, regression analysis broadly construed remains for many the method of choice for characterizing conditional relationships. One possible explanation is that the existing alternatives sometimes can be seen by researchers as unsatisfying. In this article, we provide a different formulation. We allow the regression model to be incorrect and consider what can be learned nevertheless. To this end, the search for a correct model is abandoned. We offer instead a rigorous way to learn from regression approximations. These approximations, not “the truth,” are the estimation targets. There exist estimators that are asymptotically unbiased and standard errors that are asymptotically correct even when there are important specification errors. Both can be obtained easily from popular statistical packages.
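
    In practice, the estimates and standard errors referred to at the end of the abstract amount to fitting ordinary least squares as usual and replacing the classical covariance matrix with a heteroskedasticity-consistent sandwich covariance. The sketch below is our illustration of that recipe; the simulated data and the statsmodels calls are assumptions, not the article's example.

    ```python
    # Minimal sketch: treat OLS as the best *linear approximation* to a nonlinear
    # truth, and report sandwich (heteroskedasticity-consistent) standard errors,
    # which remain asymptotically correct despite the specification error.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 2000
    x = rng.uniform(-2, 2, n)
    y = np.sin(1.5 * x) + rng.normal(0, 0.3, n)  # the truth is nonlinear in x

    fit = sm.OLS(y, sm.add_constant(x)).fit()
    print("classical SE of slope:", fit.bse[1])
    print("sandwich  SE of slope:", fit.get_robustcov_results(cov_type="HC1").bse[1])
    # The slope estimates the population *linear approximation* to sin(1.5x);
    # only the sandwich standard error is trustworthy for that target.
    ```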

    Models as Approximations - A Conspiracy of Random Regressors and Model Deviations Against Classical Inference in Regression

    More than thirty years ago Halbert White inaugurated a “model-robust” form of statistical inference based on the “sandwich estimator” of standard error. It is asymptotically correct even under “model misspecification,” that is, when models are approximations rather than generative truths. It is well known to be “heteroskedasticity-consistent,” but it is less well known to be “nonlinearity-consistent” as well. Nonlinearity, however, raises fundamental issues: when fitted models are approximations, conditioning on the regressor is no longer permitted because the ancillarity argument that justifies it breaks down. Two effects occur: (1) parameters become dependent on the regressor distribution; (2) the sampling variability of parameter estimates no longer derives from the conditional distribution of the response alone. Additional sampling variability arises when the nonlinearity conspires with the randomness of the regressors to generate a 1/√N contribution to standard errors. Asymptotically, standard errors from “model-trusting” fixed-regressor theories can deviate from those of “model-robust” random-regressor theories by arbitrary magnitudes. In the case of linear models, a test will be proposed for comparing the two types of standard error.
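
    The extra 1/√N contribution can be seen directly by simulation: under a nonlinear truth, slope estimates vary more when the regressors are redrawn each time (the random-regressor world) than when the same design is held fixed (the model-trusting world). The data-generating process below is our illustration, not the paper's.

    ```python
    # Minimal sketch: under the nonlinear truth y = x^2 + noise, the OLS slope
    # varies more across samples with random regressors than with a fixed design.
    import numpy as np

    rng = np.random.default_rng(4)
    n, reps = 200, 4000

    def ols_slope(x, y):
        xc = x - x.mean()
        return (xc @ (y - y.mean())) / (xc @ xc)

    x_fixed = rng.uniform(0, 1, n)   # one design, frozen for the fixed-X world

    fixed_sl, random_sl = [], []
    for _ in range(reps):
        eps = rng.normal(0, 0.1, n)
        fixed_sl.append(ols_slope(x_fixed, x_fixed**2 + eps))  # fixed design
        x = rng.uniform(0, 1, n)
        random_sl.append(ols_slope(x, x**2 + eps))             # fresh regressors

    print(f"SD of slope, fixed X : {np.std(fixed_sl):.4f}")
    print(f"SD of slope, random X: {np.std(random_sl):.4f}")   # noticeably larger
    ```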
