49 research outputs found

    Terry Stops and Frisks: The Troubling Use of Common Sense in a World of Empirical Data

    Get PDF
    The investigative detention doctrine first announced in Terry v. Ohio and amplified over the past fifty years has been much analyzed, praised, and criticized from a number of perspectives. Significantly, however, over this time period commentators have only occasionally questioned the Supreme Court’s “common sense” judgments regarding the factors sufficient to establish reasonable suspicion for stops and frisks. For years, the Court has provided no empirical basis for its judgments, due in large part to the lack of reliable data. Now, with the emergence of comprehensive data on these police practices, much can be learned about the predictive power of suspect conduct and other predicates for law enforcement interventions. And what has been learned calls into question a number of factors that have been credited over many years.

    No observer of the legal system can fail to notice the growing role of data and empirical analysis in the courts. A disparate set of cases has turned in large part on rigorously analyzed data. Yet this trend has not taken root in an important set of cases involving the widely used practice of stop-and-frisk. When stop-and-frisk practices become the subject of litigation, courts generally either have no data to review or have failed to engage in empirical analysis of the data that are available and that could be used to test claims of reasonable suspicion. Rather, the courts invoke the conventional wisdom that, as a matter of common sense, certain conduct (for example, furtive movement, flight, bulges in clothing, and suspect location) indicates criminal conduct. We have no argument with common sense propositions; we have no aversion to clear, straightforward thinking. But what this phrase often reflects is a set of unexamined (even if widely held) assumptions. The proliferation of data on these basic questions provides the means for empirical analysis, and it is our argument that courts should apply such analysis when assessing reasonable suspicion factors, just as they have made empirical judgments, using both big and targeted data, in other areas.
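
    The empirical test the authors call for is, at bottom, a comparison of hit rates. The following Python sketch (not from the article) shows one way a court or researcher might check whether a commonly credited factor actually predicts criminality; the file name and column names are hypothetical placeholders, not a real dataset.

    # A minimal sketch, assuming a hypothetical file of stop records with
    # boolean columns 'furtive_movement' (factor cited by the officer) and
    # 'contraband_found' (whether the stop produced evidence of a crime).
    import pandas as pd

    stops = pd.read_csv("stop_records.csv")  # hypothetical: one row per stop

    for cited, group in stops.groupby("furtive_movement"):
        hit_rate = group["contraband_found"].mean()
        print(f"furtive movement cited={cited}: "
              f"hit rate {hit_rate:.1%} over {len(group)} stops")

    # If stops citing the factor yield contraband no more often than stops
    # that do not, the factor adds little predictive power to reasonable
    # suspicion, whatever common sense suggests.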

    Short communication: Goat colostrum quality: Litter size and lactation number effects

    Full text link
    The quality of colostrum of Murciano-Granadina goats was studied to establish the transition period and the time when milk can be marketed. Forty-three dairy goats were used: 19 primiparous (15 single births; 4 multiple births) and 24 multiparous (10 single births; 14 multiple births). Samples were collected every 12 h during the first week postpartum, and physicochemical parameters and somatic cell count were determined. Analysis of variance with repeated measures was used to study the effects of postpartum time, litter size, lactation number, their interactions, and production level on colostrum composition. Postpartum time had a significant effect on all parameters studied: most decreased over the first week of lactation, whereas lactose, pH, and conductivity increased. Based on these results, colostrum secretion takes place until 36 h postpartum (hpp). Among the other factors studied, lactation number influenced most colostrum components, whereas litter size affected only pH, protein, and lactose content. Production level influenced only the protein and dry matter contents, with an inverse relationship. Milk produced between 36 and 96 hpp is considered transition milk and should not be commercialized. Milk collected after 4 d postpartum (96 hpp) could be marketed, ensuring that its composition does not present a risk to the dairy industry.

    This work was part of the AGL-2009-11524 Project funded by the Spanish Ministry of Science and Innovation (Madrid, Spain). The authors are grateful to ZEU-Inmunotec (Zaragoza, Spain) for their support.

    Romero Rueda, T.; Beltrán Martínez, MC.; Rodríguez Garcia, M.; Marti-De Olives, A.; Molina Pons, MP. (2013). Short communication: Goat colostrum quality: Litter size and lactation number effects. Journal of Dairy Science, 96(12), 7526-7531. https://doi.org/10.3168/jds.2013-6900
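
    As a rough illustration of the repeated-measures analysis described above, the sketch below fits a linear mixed model with goat as the grouping effect, a common substitute for repeated-measures ANOVA; the file and column names are hypothetical placeholders, not the study's actual data.

    # A minimal sketch, assuming hypothetical long-format data with one row
    # per goat per sampling time.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("colostrum_samples.csv")  # hypothetical placeholder

    # Fixed effects: postpartum time (h), litter size, lactation number;
    # grouping by goat accounts for the repeated measures on each animal.
    model = smf.mixedlm(
        "protein ~ hours_pp + C(litter_size) + C(lactation_number)",
        data=df,
        groups=df["goat_id"],
    )
    result = model.fit()
    print(result.summary())  # per-factor effects on protein content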

    Magnetic resonance imaging (MRI) contrast agents for tumor diagnosis

    Get PDF
    Journal of Healthcare Engineering, 4(1), 23-4. https://doi.org/10.1260/2040-2295.4.1.23

    Measurement of immunoglobulin concentration in goat colostrum

    No full text
    Failure of transfer of passive immunity is a major cause of increased susceptibility to infectious agents in newborn kids, and feeding high-quality colostrum is the most effective way to provide sufficient immunoglobulin. The aims of the present study are (1) to evaluate density measurement using a hydrometer to estimate the immunoglobulin concentration in caprine colostrum and (2) to measure the effect of colostrum temperature on density and subsequently on immunoglobulin estimates. First colostrum of 30 multiparous goats was studied. Colostrum had a dry matter content of 29.0 ± 6.3%; fat concentration was 94.5 ± 39.9 g/L and protein concentration was 148.4 ± 28.9 g/L. Mean total immunoglobulin concentration, measured by ELISA as the reference method, was 54.4 ± 26.4 g/L. Total immunoglobulin was subdivided into subclasses: immunoglobulin G (1 and 2), 49.1 ± 25.7 g/L (90.3%); immunoglobulin M, 3.19 ± 1.66 g/L (6.0%); and immunoglobulin A, 2.00 ± 1.03 g/L (3.7%). Density measurements (1044.3 ± 7.3 g/L) using a hydrometer devised for cow colostrum were compared with density measured by a pycnometer (1044.6 ± 8.3 g/L), the reference method. Colostrum density measured with the hydrometer correlated strongly with the reference method (r = 0.99, P < 0.01). As in the colostrum of several other species, density is temperature dependent; therefore, readings must be corrected to the temperature for which the hydrometer is designed. Regression analysis between density and immunoglobulin concentration revealed only a moderate R2 value (0.44). Therefore, the value of density to predict immunoglobulin concentration is limited.
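
    To make the final step concrete, here is a minimal sketch of regressing immunoglobulin concentration on temperature-corrected hydrometer density, as the abstract describes; the paired measurements below are made-up placeholders, not the study's data.

    # A minimal sketch, assuming hypothetical paired measurements of
    # temperature-corrected density (g/L) and ELISA immunoglobulin (g/L).
    from scipy import stats

    density_g_per_l = [1036.0, 1040.0, 1044.0, 1048.0, 1052.0, 1056.0]
    ig_g_per_l = [30.0, 25.0, 60.0, 48.0, 85.0, 70.0]

    fit = stats.linregress(density_g_per_l, ig_g_per_l)
    r_squared = fit.rvalue ** 2
    print(f"Ig ~ {fit.slope:.2f} * density {fit.intercept:+.1f}, "
          f"R^2 = {r_squared:.2f}")

    # On the study's real data, R^2 was only 0.44: density explains less
    # than half the variance in immunoglobulin concentration, so it serves
    # as a rough screen rather than a reliable quantitative predictor.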