79 research outputs found

    Crime, Control and Complexity: On the ‘Crime and Security Complex’ in Modern Western Society

    The dominant scientific methodology utilised by social scientists to study problems of crime and disorder is a macroscopic perspective that focuses on order and control: the molar. It assumes the ‘outside’ position of the researcher, who focuses on functionality. Researchers construct their object of research as a distinct phenomenon and try to find links between it and its environment: the research object is assumed to be goal-driven. However, social reality is far more complex than this dominant perspective is able to capture. This thesis argues that the molar cannot be fully understood without the molecular, a concept that expresses the idea of the unpredictable: sentiments such as misunderstandings, fears and aspirations are key. However, the molar and the molecular are inextricably connected and emerge at the same time. Consequently, small changes at the molecular level can have large and unpredictable effects at the molar level. It then becomes essential to study the emergence of systems of control, such as law and partnerships, in relation to these molecular liquidities. Such an approach might teach us how crime policies deviate from their intended goals and start to produce undesirable side-effects. The thesis explores an alternative epistemology for examining issues of criminological concern, one that centres the molecular, and presents three case studies to illustrate the way the two levels are interconnected. The first is concerned with the messiness and unpredictability of everyday relations and interactions in a criminal network. The second explores two Dutch police partnerships; molecular elements such as personal preferences, frustrations and tensions are found to have a significant impact on the outcome of these partnerships. The third examines a measure introduced to prevent anti-social behaviour in the Netherlands, which made shopkeepers and security personnel co-responsible for detecting and punishing acts such as shoplifting and fraud. The measure is embedded in civil, not criminal, law, and it is the diffuse nature of this quasi-criminal law that leads shopkeepers to refer to internal rules to justify their own actions. The cases show that the molecular is crucial to understanding crime problems and possible solutions, and the thesis concludes that the molecular should form the basis of a new epistemology for criminological research.

    Who Is in Charge of the Local Police? Balancing Community Policing and the Use of Intrusive Policing Techniques

    This Cahier addresses two themes: relations of authority within the local police, and the balance between community policing and the use of intrusive policing techniques. Each topic was the subject of a study day organised by the Centrum voor Politiestudies (Centre for Police Studies), and the texts of the various speakers are collected here. Within police decision-making, both the mayor (or the police college) and the public prosecutor (procureur des Konings) are responsible for local police policy and play a prominent role in shaping it. For local police governance this means that the chief of police, when implementing policy, must take account of these two authorities, whose interests sometimes conflict. The first part of this publication therefore examines these authorities and their relationship to the local police organisation, with the central question: "Who is in charge of the local police?". The second part focuses on how the police function. The discourse on police action is often caught in a dichotomy between hard and soft policing, in which community-oriented policing and police repression are set against one another. This tension originates in the fact that a democratic society simultaneously demands protection by the police and protection against the police. This makes the police an ambiguous organisation, and those who run it have repeatedly tried to balance the dilemma between intrusive policing techniques and methods, which often go hand in hand with repressive police action, and community-oriented policing.

    New Psychoactive Substances in the Homeless Population: A Cross-Sectional Study in the United Kingdom

    The last few years have seen the emergence of new psychoactive substances among the homeless population, specifically synthetic cannabinoid receptor agonists. The purpose of this study is to investigate the knowledge and experiences of new psychoactive substances amongst users from the homeless population. An explanatory research design was applied using a semi-structured questionnaire, with a focus on gaining insights into prevalence, motivations and effects. Participants were recruited through convenience sampling from support organisations and charities UK-wide. Descriptive statistics and logistic regression were applied to analyse the data obtained from the participant surveys. A total of 105 participants met the inclusion criteria, ranging in age from 18 to 64 years. Almost 70% had consumed new psychoactive substance products, of which “Spice” was the most prevalent. Homeless users had consumed new psychoactive substances to escape reality and to self-medicate, and stopped consumption because of the adverse effects. Adverse events were reported by the majority of participants and led to more than 20% of them requiring medical treatment following hospitalisation. Findings from this study can contribute to the development of guidelines and policies that specifically address the needs of the homeless population who use new psychoactive substances.

    A Mathematical Model for Interpretable Clinical Decision Support with Applications in Gynecology

    Over time, methods for the development of clinical decision support (CDS) systems have evolved from interpretable and easy-to-use scoring systems to very complex and non-interpretable mathematical models. To accomplish effective decision support, CDS systems should provide information on how the model arrives at a certain decision. To address the incompatibility between performance, interpretability and applicability of CDS systems, this paper proposes an innovative model structure that automatically leads to interpretable and easily applicable models. The resulting models can be used to guide clinicians when deciding upon the appropriate treatment, to estimate patient-specific risks, and to improve communication with patients. We propose the interval coded scoring (ICS) system, which constrains the effect of each variable on the estimated risk to be constant within consecutive intervals. The number and position of the intervals are obtained automatically by solving an optimization problem, which additionally performs variable selection. The resulting model can be visualised by means of appealing scoring tables and colour bars. ICS models can be used within software packages, in smartphone applications, or on paper, which is particularly useful for bedside medicine and home-monitoring. The ICS approach is illustrated on two gynecological problems: diagnosis of malignancy of ovarian tumors using a dataset containing 3,511 patients, and prediction of first-trimester viability of pregnancies using a dataset of 1,435 women. Comparison of the ICS approach with a range of prediction models proposed in the literature illustrates its ability to combine optimal performance with the interpretability of simple scoring systems. The ICS approach can improve patient-clinician communication and provides additional insights into the importance and influence of the available variables. Future challenges include extending the proposed methodology towards automated detection of interaction effects, multi-class decision support systems, prognosis, and high-dimensional data.
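    To make the interval idea concrete, here is a minimal Python sketch of how an ICS-style scoring table could be evaluated. The variable names, interval edges, point values and intercept are invented placeholders, not values from the paper; in the actual ICS method they result from the optimization problem described above.

        # Minimal sketch of evaluating an interval-coded scoring table.
        # All numbers below are illustrative placeholders; ICS obtains
        # them by solving an optimization problem.
        import bisect
        import math

        # Hypothetical scoring table: per variable, interval edges and the
        # constant points assigned within each consecutive interval.
        SCORING_TABLE = {
            "age":   {"edges": [40, 60],  "points": [0, 1, 2]},  # <40, 40-60, >=60
            "ca125": {"edges": [35, 200], "points": [0, 2, 4]},  # marker level (U/mL)
        }
        INTERCEPT = -4.0  # illustrative offset for the risk mapping

        def score(patient):
            """Sum per-variable points: each variable contributes a constant
            amount within its interval, which is what lets the model be
            printed as a paper scoring table."""
            total = 0
            for var, spec in SCORING_TABLE.items():
                idx = bisect.bisect_right(spec["edges"], patient[var])
                total += spec["points"][idx]
            return total

        def risk(patient):
            """Map the total score to a probability via a logistic link."""
            return 1.0 / (1.0 + math.exp(-(INTERCEPT + score(patient))))

        print(risk({"age": 67, "ca125": 250}))  # high score -> high estimated risk
        print(risk({"age": 31, "ca125": 12}))   # low score -> low estimated risk

    Because every contribution is constant within its interval, the entire fitted model can be read off a table and applied on paper or in a smartphone application, which is the applicability property the abstract emphasises.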

    Responses of competitive understorey species to spatial environmental gradients inaccurately explain temporal changes

    Understorey plant communities play a key role in the functioning of forest ecosystems. Under favourable environmental conditions, competitive understorey species may develop high abundances and influence important ecosystem processes such as tree regeneration. Understanding and predicting the response of competitive understorey species to changing environmental conditions is therefore important for forest managers. In the absence of sufficient temporal data to quantify actual vegetation changes, space-for-time (SFT) substitution is often used, i.e. studies use environmental gradients across space to infer vegetation responses to environmental change over time. Here we assessed the validity of such SFT approaches by analysing 36 resurvey studies from ancient forests with low levels of recent disturbance across temperate Europe, examining how six competitive understorey plant species respond to gradients of overstorey cover, soil conditions, atmospheric N deposition and climatic conditions over space and time. The combination of historical and contemporary surveys allows us (i) to test whether contemporary patterns observed across space were already present at the time of the historical survey, and, crucially, (ii) to assess whether changes in abundance over time, given the recorded environmental change, match expectations from patterns recorded along environmental gradients in space. We found consistent spatial relationships at the two periods: local variation in soil variables and overstorey cover were the best predictors of individual species’ cover, while interregional variation in coarse-scale variables, i.e. N deposition and climate, was less important. However, the SFT approach could not accurately explain the large variation in abundance changes over time. We therefore recommend caution when using SFT substitution to infer species responses to temporal changes.
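    As a toy illustration of the SFT logic being tested, the Python sketch below fits a species’ cover against an environmental gradient across plots at the historical survey and then checks whether the recorded environmental change predicts the observed change in cover at resurvey. All numbers are invented; the study itself analysed 36 resurvey data sets with several predictors.

        # Toy sketch of space-for-time (SFT) substitution: does the
        # spatial cover-environment relationship at survey 1 predict
        # the temporal change in cover between surveys? (invented data)
        from statistics import mean

        n_dep_t1 = [5, 10, 15, 20, 25]    # N deposition across plots, survey 1
        cover_t1 = [4, 8, 13, 19, 24]     # % cover of a competitive species

        # Spatial relationship at survey 1: ordinary least-squares slope.
        mx, my = mean(n_dep_t1), mean(cover_t1)
        slope = (sum((x - mx) * (y - my) for x, y in zip(n_dep_t1, cover_t1))
                 / sum((x - mx) ** 2 for x in n_dep_t1))

        delta_n = [4, 3, 5, 4, 2]                # recorded change in N deposition
        observed_delta_cover = [1, -2, 9, 0, 4]  # observed cover change at resurvey

        for d, obs in zip(delta_n, observed_delta_cover):
            print(f"SFT predicts {slope * d:+.1f}, observed {obs:+d}")

    Mismatches of the kind this loop prints, observed systematically across many plots and species, are what led the authors to advise caution with SFT substitution.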

    Common Limitations of Image Processing Metrics: A Picture Story

    While the importance of automatic image analysis is continuously increasing, recent meta-research has revealed major flaws with respect to algorithm validation. Performance metrics are particularly key for meaningful, objective, and transparent performance assessment and validation of automatic algorithms, but relatively little attention has been given to the practical pitfalls of using specific metrics for a given image analysis task. These are typically related to (1) disregard of inherent metric properties, such as behaviour in the presence of class imbalance or small target structures, (2) disregard of inherent data set properties, such as non-independence of the test cases, and (3) disregard of the actual biomedical domain interest that the metrics should reflect. This living, dynamically updated document illustrates important limitations of performance metrics commonly applied in the field of image analysis. It focuses on biomedical image analysis problems that can be phrased as image-level classification, semantic segmentation, instance segmentation, or object detection tasks. The current version is based on a Delphi process on metrics conducted by an international consortium of image analysis experts from more than 60 institutions worldwide.

    Comment: This is a dynamic paper on the limitations of commonly used metrics. The current version discusses metrics for image-level classification, semantic segmentation, object detection and instance segmentation. For missing use cases, comments or questions, please contact [email protected] or [email protected]. Substantial contributions to this document will be acknowledged with a co-authorship.
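    As a minimal illustration of pitfall (1), the Python snippet below shows how accuracy can look excellent under class imbalance even for a classifier that never finds the rare class; the class sizes are invented for illustration.

        # Pitfall: accuracy under class imbalance (invented class sizes).
        y_true = [0] * 95 + [1] * 5   # 95 negatives, 5 positives (rare target)
        y_pred = [0] * 100            # degenerate model: always predicts negative

        accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
        recall = (sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
                  / sum(y_true))

        print(f"accuracy = {accuracy:.2f}")  # 0.95 -- looks excellent
        print(f"recall   = {recall:.2f}")    # 0.00 -- the rare class is always missed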

    Understanding metric-related pitfalls in image analysis validation

    Validation metrics are key for the reliable tracking of scientific progress and for bridging the current chasm between artificial intelligence (AI) research and its translation into practice. However, increasing evidence shows that, particularly in image analysis, metrics are often chosen inadequately in relation to the underlying research problem. This can be attributed to a lack of accessibility of metric-related knowledge: while taking into account the individual strengths, weaknesses, and limitations of validation metrics is a critical prerequisite for making educated choices, the relevant knowledge is currently scattered and poorly accessible to individual researchers. Based on a multi-stage Delphi process conducted by a multidisciplinary expert consortium as well as extensive community feedback, the present work provides the first reliable and comprehensive common point of access to information on pitfalls related to validation metrics in image analysis. Focusing on biomedical image analysis but with the potential of transfer to other fields, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy. To facilitate comprehension, illustrations and specific examples accompany each pitfall. As a structured body of information accessible to researchers of all levels of expertise, this work enhances global comprehension of a key topic in image analysis validation.

    Comment: Shared first authors: Annika Reinke, Minu D. Tizabi; shared senior authors: Paul F. Jäger, Lena Maier-Hein.
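    One pitfall of this kind can be sketched in a few lines: the sensitivity of overlap metrics to target size, where the same one-voxel error barely moves the Dice score for a large structure but heavily penalises a small one. The voxel counts below are invented for illustration.

        # Pitfall: the Dice score of small structures is dominated by
        # single-voxel errors (invented voxel counts).
        def dice(n_pred, n_true, n_overlap):
            return 2 * n_overlap / (n_pred + n_true)

        # Large structure: 1,000 true voxels, prediction misses one.
        print(dice(999, 1000, 999))  # ~0.9995 -- error is invisible
        # Small structure: 3 true voxels, prediction misses one.
        print(dice(2, 3, 2))         # 0.80 -- same absolute error, heavy penalty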

    Erratum to: Methods for evaluating medical tests and biomarkers

    [This corrects the article DOI: 10.1186/s41512-016-0001-y.]

    Machine learning algorithms performed no better than regression models for prognostication in traumatic brain injury

    Objective: We aimed to explore the added value of common machine learning (ML) algorithms for prediction of outcome after moderate and severe traumatic brain injury. Study Design and Setting: We performed logistic regression (LR), lasso regression, and ridge regression with key baseline predictors in the IMPACT-II database (15 studies, n = 11,022). ML algorithms, which included support vector machines, random forests, gradient boosting machines, and artificial neural networks, were trained using the same predictors. To assess the generalizability of predictions, we performed internal, internal-external, and external validation on the recent CENTER-TBI study (patients with Glasgow Coma Scale <13, n = 1,554). Both calibration (calibration slope/intercept) and discrimination (area under the curve) were quantified. Results: In the IMPACT-II database, 3,332/11,022 (30%) died and 5,233 (48%) had unfavorable outcome (Glasgow Outcome Scale less than 4). In the CENTER-TBI study, 348/1,554 (29%) died and 651 (54%) had unfavorable outcome. Discrimination and calibration varied widely between the studies and less so between the studied algorithms. The mean area under the curve was 0.82 for mortality and 0.77 for unfavorable outcome in the CENTER-TBI study. Conclusion: ML algorithms may not outperform traditional regression approaches in a low-dimensional setting for outcome prediction after moderate or severe traumatic brain injury. Like regression-based prediction models, ML algorithms should be rigorously validated to ensure applicability to new populations.
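    The two validation measures named in the abstract can be sketched in Python as follows. The outcomes and predicted risks below are a tiny invented example, and the calibration-slope refit assumes scikit-learn >= 1.2 for penalty=None.

        # Sketch of discrimination (AUC) and calibration slope
        # on invented outcomes and predicted risks.
        import math
        from sklearn.linear_model import LogisticRegression

        y     = [0, 0, 1, 0, 1, 1, 0, 1]                  # observed outcomes
        p_hat = [0.1, 0.5, 0.4, 0.2, 0.8, 0.7, 0.6, 0.3]  # predicted risks

        # AUC: probability that a random case with the outcome receives a
        # higher predicted risk than a random case without it.
        pos = [p for p, t in zip(p_hat, y) if t == 1]
        neg = [p for p, t in zip(p_hat, y) if t == 0]
        auc = sum(1.0 if pp > pn else 0.5 if pp == pn else 0.0
                  for pp in pos for pn in neg) / (len(pos) * len(neg))
        print(f"AUC = {auc:.2f}")

        # Calibration slope: refit the outcome on the model's linear
        # predictor (logit of predicted risk); a slope near 1 means the
        # predicted risks are neither too extreme nor too moderate.
        logits = [[math.log(p / (1 - p))] for p in p_hat]
        refit = LogisticRegression(penalty=None).fit(logits, y)
        print(f"calibration slope = {refit.coef_[0][0]:.2f}")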
