1,216 research outputs found

    Runaway Feedback Loops in Predictive Policing

    Predictive policing systems are increasingly used to determine how to allocate police across a city in order to best prevent crime. Discovered crime data (e.g., arrest counts) are used to help update the model, and the process is repeated. Such systems have been empirically shown to be susceptible to runaway feedback loops, where police are repeatedly sent back to the same neighborhoods regardless of the true crime rate. In response, we develop a mathematical model of predictive policing that proves why this feedback loop occurs, show empirically that this model exhibits such problems, and demonstrate how to change the inputs to a predictive policing system (in a black-box manner) so the runaway feedback loop does not occur, allowing the true crime rate to be learned. Our results are quantitative: we can establish a link (in our model) between the degree to which runaway feedback causes problems and the disparity in crime rates between areas. Moreover, we can also demonstrate the way in which "reported" incidents of crime (those reported by residents) and "discovered" incidents of crime (i.e., those directly observed by police officers dispatched as a result of the predictive policing algorithm) interact: in brief, while reported incidents can attenuate the degree of runaway feedback, they cannot entirely remove it without the interventions we suggest.

    Comment: Extended version accepted to the 1st Conference on Fairness, Accountability and Transparency, 2018. Adds further treatment of reported as well as discovered incidents.
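    The feedback mechanism this abstract describes can be illustrated with a deliberately simplified simulation (a hypothetical sketch, not the paper's actual urn model): one patrol per day is allocated to whichever district has the larger historical discovered-crime count, and crime can only be discovered where the patrol goes.

```python
import random

def patrol_simulation(true_rates, days=1000, seed=0):
    """Toy runaway-feedback loop: allocate one patrol per day to the
    district with the higher discovered-crime count; crime is only
    'discovered' in the patrolled district."""
    rng = random.Random(seed)
    counts = [1, 1]  # prior discovered-crime counts for districts 0 and 1
    for _ in range(days):
        # the "predictive" allocation: raw counts, with ties going to district 0
        target = 0 if counts[0] >= counts[1] else 1
        if rng.random() < true_rates[target]:
            counts[target] += 1  # discovery happens only where the patrol is
    return counts

# District 1 has twice the true crime rate, but district 0 wins the day-one
# tie, accumulates every discovery, and is patrolled forever after; the
# system never observes district 1, so its true rate is never learned.
print(patrol_simulation([0.3, 0.6]))
```

    The paper's proposed black-box fix (roughly, down-weighting discovered incidents according to how heavily a district is already patrolled) breaks exactly this lock-in; see the paper for the precise scheme.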

    Text-based recall and extra-textual generations resulting from simplified and authentic texts

    This study uses a moving windows self-paced reading task to assess text comprehension of beginning and intermediate-level simplified texts and authentic texts by L2 learners engaged in a text-retelling task. Linear mixed effects (LME) models revealed statistically significant main effects for reading proficiency and text level on the number of text-based propositions recalled: More proficient readers recalled more propositions. However, text level was a stronger predictor of propositional recall than reading proficiency. LME models also revealed main effects for language proficiency and text level on the number of extra-textual propositions produced. Text level, however, emerged as a stronger predictor than language proficiency. Post-hoc analyses indicated that authentic texts elicited more irrelevant elaborations, and that intermediate and authentic texts led to a greater number of relevant elaborations than beginning texts.

    Deciphering the bacterial glycocode: Recent advances in bacterial glycoproteomics

    Bacterial glycoproteins represent an attractive target for new antibacterial treatments, as they are frequently linked to pathogenesis and contain distinctive glycans that are absent in humans. Despite their potential therapeutic importance, many bacterial glycoproteins remain uncharacterized. This review focuses on recent advances in deciphering the bacterial glycocode, including metabolic glycan labeling to discover and characterize bacterial glycoproteins, lectin-based microarrays to monitor bacterial glycoprotein dynamics, crosslinking sugars to assess the roles of bacterial glycoproteins, and harnessing bacterial glycosylation systems for the efficient production of industrially important glycoproteins. © 2012 Elsevier Ltd

    Robust Detection of Dynamic Community Structure in Networks

    We describe techniques for the robust detection of community structure in some classes of time-dependent networks. Specifically, we consider the use of statistical null models for facilitating the principled identification of structural modules in semi-decomposable systems. Null models play an important role both in the optimization of quality functions such as modularity and in the subsequent assessment of the statistical validity of identified community structure. We examine the sensitivity of such methods to model parameters and show how comparisons to null models can help identify system scales. By considering a large number of optimizations, we quantify the variance of network diagnostics over optimizations (`optimization variance') and over randomizations of network structure (`randomization variance'). Because the modularity quality function typically has a large number of nearly-degenerate local optima for networks constructed using real data, we develop a method to construct representative partitions that uses a null model to correct for statistical noise in sets of partitions. To illustrate our results, we employ ensembles of time-dependent networks extracted from both nonlinear oscillators and empirical neuroscience data.

    Comment: 18 pages, 11 figures.
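    The null-model comparison this abstract describes can be sketched in self-contained Python (a hypothetical illustration, not the authors' code): Newman modularity of a planted two-community graph is scored against degree-preserving randomizations of its edge list. The graph, the stub-matching null, and the fixed partition are all made-up inputs; the authors additionally re-optimize partitions and account for optimization variance, which this sketch omits.

```python
import itertools
import random

def modularity(edges, community):
    """Newman modularity Q = sum over communities c of e_c/m - (d_c/2m)^2,
    for an undirected edge list and a node -> community label map."""
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    q = 0.0
    for c in set(community.values()):
        e_c = sum(1 for u, v in edges if community[u] == c == community[v])
        d_c = sum(d for n, d in degree.items() if community[n] == c)
        q += e_c / m - (d_c / (2.0 * m)) ** 2
    return q

def degree_preserving_null(edges, rng):
    """Stub matching: shuffle edge endpoints so every node keeps its degree
    (self-loops and multi-edges are tolerated in this sketch)."""
    stubs = [n for edge in edges for n in edge]
    rng.shuffle(stubs)
    return list(zip(stubs[::2], stubs[1::2]))

# Planted structure: two 10-node cliques joined by 3 bridge edges.
community = {n: n // 10 for n in range(20)}
edges = [(u, v) for block in (range(10), range(10, 20))
         for u, v in itertools.combinations(block, 2)]
edges += [(0, 10), (1, 11), (2, 12)]

rng = random.Random(42)
q_obs = modularity(edges, community)
q_null = [modularity(degree_preserving_null(edges, rng), community)
          for _ in range(50)]
# q_obs sits far above the null distribution, so the planted structure is
# unlikely to be an artifact of the degree sequence alone.
```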

    Text readability and intuitive simplification: A comparison of readability formulas

    Texts are routinely simplified for language learners with authors relying on a variety of approaches and materials to assist them in making the texts more comprehensible. Readability measures are one such tool that authors can use when evaluating text comprehensibility. This study compares the Coh-Metrix Second Language (L2) Reading Index, a readability formula based on psycholinguistic and cognitive models of reading, to traditional readability formulas on a large corpus of texts intuitively simplified for language learners. The goal of this study is to determine which formula best classifies text level (advanced, intermediate, beginner) with the prediction that text classification relates to the formulas’ capacity to measure text comprehensibility. The results demonstrate that the Coh-Metrix L2 Reading Index performs significantly better than traditional readability formulas, suggesting that the variables used in this index are more closely aligned to the intuitive text processing employed by authors when simplifying texts.
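    For context on what a traditional readability formula computes, here is the widely used Flesch-Kincaid Grade Level formula (the coefficients are the standard published ones; the counts in the example are made-up inputs, and whether this exact formula was among those evaluated is specified only in the full paper):

```python
def flesch_kincaid_grade(words, sentences, syllables):
    """Flesch-Kincaid Grade Level, a traditional surface-feature formula.
    It depends only on average sentence length and average syllables per
    word, which is why it can miss the cohesion and psycholinguistic
    factors that indices such as the Coh-Metrix L2 Reading Index target."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Hypothetical counts: 100 words in 5 sentences, 130 syllables.
print(round(flesch_kincaid_grade(100, 5, 130), 2))  # 7.55
```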

    Stable Carbon Isotope Fractionation in Chlorinated Ethene Degradation by Bacteria Expressing Three Toluene Oxygenases

    One difficulty in using bioremediation at a contaminated site is demonstrating that biodegradation is actually occurring in situ. The stable isotope composition of contaminants may help with this, since it can serve as an indicator of biological activity. To use this approach it is necessary to establish how a particular biodegradation pathway affects the isotopic composition of a contaminant. This study examined bacterial strains expressing three aerobic enzymes for their effect on the 13C/12C ratio when degrading both trichloroethene (TCE) and cis-1,2-dichloroethene (c-DCE): toluene 3-monooxygenase, toluene 4-monooxygenase, and toluene 2,3-dioxygenase. We found no significant differences in fractionation among the three enzymes for either compound. Aerobic degradation of c-DCE occurred with low fractionation, producing δ13C enrichment factors of −0.9 ± 0.5‰ to −1.2 ± 0.5‰, in contrast to reported anaerobic degradation δ13C enrichment factors of −14.1 to −20.4‰. Aerobic degradation of TCE resulted in δ13C enrichment factors of −11.6 ± 4.1 to −14.7 ± 3.0‰, which overlap reported δ13C enrichment factors for anaerobic TCE degradation of −2.5 to −13.8‰. The data from this study suggest that stable isotopes could serve as a diagnostic for detecting aerobic biodegradation of TCE by toluene oxygenases at contaminated sites.
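    In practice, enrichment factors like these are applied through the standard Rayleigh fractionation model to relate field isotope measurements to the extent of degradation. A minimal sketch (the textbook Rayleigh equation with an ε value from the aerobic TCE range reported above; the initial δ13C is a made-up input):

```python
import math

def rayleigh_delta13c(delta0, epsilon, fraction_remaining):
    """Rayleigh model for the residual substrate:
    delta = delta0 + epsilon * ln(f), with delta0 and epsilon in per mil."""
    return delta0 + epsilon * math.log(fraction_remaining)

# With epsilon = -13.0 per mil (within the aerobic TCE range above) and a
# hypothetical initial delta13C of -28.0 per mil, 90% degradation (f = 0.1)
# shifts the residual TCE by roughly +30 per mil:
print(round(rayleigh_delta13c(-28.0, -13.0, 0.1), 2))  # 1.93
```

    A measurable 13C enrichment of this size in residual TCE is the kind of field signal that would distinguish active biodegradation from non-fractionating losses such as dilution.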

    A Call to Action: Introducing the Initiative for Eradicating Racism

    In June of 2020, several faculty and staff members in the School of Education and Human Services at Oakland University in Rochester, Michigan began developing plans to launch a new project entitled The Initiative for Eradicating Racism. This article begins by defining key terms used throughout the article, followed by underscoring the purpose and need for this type of academic initiative. Next, the frameworks used to guide the development of this initiative are highlighted, along with a brief introduction of the current diversity, inclusion, and social justice efforts in progress in the School of Education and Human Services. Lastly, we share our plans to move from our current initiative status to a self-sustaining center in the near future.

    What's so simple about simplified texts? A computational and psycholinguistic investigation of text comprehension and text processing

    This study uses a moving windows self-paced reading task to assess both text comprehension and processing time of authentic texts and these same texts simplified to beginning and intermediate levels. Forty-eight second language learners each read 9 texts (3 texts each at the authentic, beginning, and intermediate levels). Repeated measures ANOVAs revealed linear effects of text type on reading time (normalized for text length) and true/false comprehension scores, indicating that beginning level texts were processed faster and were more comprehensible than intermediate level and authentic texts. The linear effect of text type on comprehension remained significant within an ANCOVA controlling for language proficiency (i.e., TOEFL scores), reading proficiency (i.e., Gates-MacGinitie scores), and background knowledge, but not for reading time. Implications of these findings for materials design, reading pedagogy, and text processing and comprehension are discussed.

    L2 Writing Practice: Game Enjoyment as a Key to Engagement
