
    Frameworks for peace in Northern Ireland: Analysis of the 1998 Belfast Agreement

    The 1998 Belfast Agreement brought to an end over three decades of armed conflict in Northern Ireland. This paper summarizes the role of actors within and outside Northern Ireland, and the processes and mechanics of the Agreement itself. The Agreement is placed in the context of previous unsuccessful peace initiatives in the region, and elements within the political and economic environment at the time that facilitated agreement are identified. The consociational nature of the Agreement is set alongside concern about continuing sectarian division. It is argued that the Agreement was as much a product of previous failed attempts and the changed economic and political environment as it was a product of the negotiations. The Belfast Agreement is evaluated and tentative lessons for the Arab–Israeli and other peace processes are delineated.

    Hierarchies of Pain and Responsibility: Victims and War by Other Means in Northern Ireland

    This article develops an earlier analysis of definitions and disqualifications of victimhood during armed conflict, claims of responsibility, and apologies for harm, based on the Northern Ireland case. The significance of political structures is examined through the consociational nature of the 1998 Belfast/Good Friday Agreement, which established two parallel political dynasties, allowing the parties to the Northern Ireland conflict to ‘agree to disagree’. The nature of this agreement makes ‘reconciliation’ between the parties optional, and therefore unlikely without some intervention to address the grievances of the past; proposals for such an intervention were the responsibility of the Committee on Managing the Past, whose report caused controversy.

    Identifying potential terrorists: visuality, security and the channel project

    This article analyses how British counter-radicalization policy in general, and the Channel project in particular, constitute individuals who are vulnerable to radicalization as visible, producing them as subjects of intervention. It thus asks how potential terrorists can be identified and made knowable. The article first argues that, to understand Channel, it is crucial to develop a conceptual account of the security politics of (in)visibilization, which draws attention to the ways in which security regimes can, at times, function primarily through the production of regimes of (in)visibility. Using this approach, the article focusses on the role of ‘indicators’ as a technology of (in)visibilization. This role is central to the functioning of Channel, visibilizing certain subjects as threatening. Yet such a production is political. In bringing together a politics of care and a politics of identity, it is a regime of (in)visibility that produces new sites of intervention, contains significant potential consequences for the expression of certain identities, and raises new and troubling possibilities for how contemporary life may be secured.

    Interpretation, judgement and dialogue: a hermeneutical recollection of causal analysis in critical terrorism studies

    This article problematises Critical Terrorism Studies' (CTS) seeming reluctance to engage in causal explanation. An analysis of the meta-theoretical assumptions on causation in both orthodox and critical terrorism studies reveals that the latter's refusal to incorporate causal analysis in its broader research agenda reproduces – despite its commitment to epistemological pluralism – the former's understanding of causation as the only sustainable one. Elemental to this understanding is the idea that causation refers to the regular observation of constant conjunction. Due to the positivist leanings of such a conception, CTS is quick to dismiss it as consolidating Orthodox Terrorism Studies' lack of critical self-reflexivity, responsibility of the researcher, and dedication towards informing state-led policies of counterterrorism. Drawing on recent work in the philosophy of science and International Relations, this article advances an alternative understanding of causation that emphasises its interpretative, normative and dialogical fabric. It is therefore argued that CTS should reclaim causal analysis as an essential element of its research agenda. This not only facilitates a more robust challenge to Orthodox Terrorism Studies' conventional understanding of causation but also consolidates CTS's endeavour of deepening and broadening our understanding that (re)embeds terrorist violence in its historical and social context.

    Comparative performances of machine learning methods for classifying Crohn Disease patients using genome-wide genotyping data

    Crohn Disease (CD) is a complex genetic disorder for which more than 140 genes have been identified using genome-wide association studies (GWAS). However, the genetic architecture of the trait remains largely unknown. The recent development of machine learning (ML) approaches prompted us to apply them to classify healthy and diseased people according to their genomic information. The Immunochip dataset, containing 18,227 CD patients and 34,050 healthy controls enrolled and genotyped by the international Inflammatory Bowel Disease genetic consortium (IIBDGC), has been re-analyzed using a set of ML methods: penalized logistic regression (LR), gradient boosted trees (GBT) and artificial neural networks (NN). The main score used to compare the methods was the Area Under the ROC Curve (AUC) statistic. The impact of quality control (QC), imputation and coding methods on LR results showed that QC methods and imputation of missing genotypes may artificially increase the scores. By contrast, neither the patient/control ratio, marker preselection nor coding strategy significantly affected the results. LR methods, including Lasso, Ridge and ElasticNet, provided similar results with a maximum AUC of 0.80. GBT methods such as XGBoost, LightGBM and CatBoost, together with dense NN with one or more hidden layers, provided similar AUC values, suggesting limited epistatic effects in the genetic architecture of the trait. The ML methods detected nearly all the genetic variants previously identified by GWAS among the best predictors, plus additional predictors with smaller effects. The robustness and complementarity of the different methods are also studied. Compared to LR, non-linear models such as GBT or NN may provide robust complementary approaches to identify and classify genetic markers.
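    The comparison described above lends itself to a compact illustration. The Python sketch below trains a penalized logistic regression and a gradient boosted tree classifier on synthetic 0/1/2-coded genotypes and compares them by AUC; the sample sizes, marker count and model settings are illustrative assumptions, not the IIBDGC pipeline (which used XGBoost, LightGBM, CatBoost and dense NNs at far larger scale).

```python
# Minimal sketch (not the authors' pipeline): compare a penalized logistic
# regression and a gradient boosted tree classifier by AUC on synthetic
# genotype data coded 0/1/2 (copies of the minor allele).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_samples, n_markers = 2000, 200            # toy scale; Immunochip is far larger
X = rng.integers(0, 3, size=(n_samples, n_markers)).astype(float)
beta = np.zeros(n_markers)
beta[:20] = rng.normal(0.0, 0.3, size=20)   # a few truly associated markers
logit = X @ beta - (X @ beta).mean()
y = rng.random(n_samples) < 1.0 / (1.0 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "penalized LR (L2)": LogisticRegression(penalty="l2", C=0.1, max_iter=2000),
    "gradient boosted trees": GradientBoostingClassifier(n_estimators=200, max_depth=3),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```

    On mostly additive simulated data like this, the two model families tend to reach similar AUC values, mirroring the paper's observation that non-linear models add little when epistatic effects are limited.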

    Localization of type 1 diabetes susceptibility to the MHC class I genes HLA-B and HLA-A

    The major histocompatibility complex (MHC) on chromosome 6 is associated with susceptibility to more common diseases than any other region of the human genome, including almost all disorders classified as autoimmune. In type 1 diabetes the major genetic susceptibility determinants have been mapped to the MHC class II genes HLA-DQB1 and HLA-DRB1 (refs 1-3), but these genes cannot completely explain the association between type 1 diabetes and the MHC region. Owing to the region's extreme gene density, the multiplicity of disease-associated alleles, strong associations between alleles, limited genotyping capability, and inadequate statistical approaches and sample sizes, which, and how many, loci within the MHC determine susceptibility remains unclear. Here, in several large type 1 diabetes data sets, we analyse a combined total of 1,729 polymorphisms, and apply statistical methods – recursive partitioning and regression – to pinpoint disease susceptibility to the MHC class I genes HLA-B and HLA-A (risk ratios > 1.5; P_combined = 2.01 × 10⁻¹⁹ and 2.35 × 10⁻¹³, respectively) in addition to the established associations of the MHC class II genes. Other loci with smaller and/or rarer effects might also be involved, but to find these, future searches must take into account both the HLA class II and class I genes and use even larger samples. Taken together with previous studies, we conclude that MHC-class-I-mediated events, principally involving HLA-B*39, contribute to the aetiology of type 1 diabetes.
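    As a rough illustration of the kind of conditional mapping described above, the sketch below tests whether a candidate class I locus adds signal beyond established class II loci using a likelihood-ratio test between nested logistic regressions. This is a hedged stand-in, not the paper's exact recursive partitioning and regression procedure; all data and variable names are simulated assumptions.

```python
# Minimal sketch (an assumption, not the paper's exact method): likelihood-ratio
# test of whether a candidate locus improves on established loci, using nested
# logistic regressions on simulated allele-dosage data.
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(1)
n = 3000
class_ii = rng.integers(0, 3, size=(n, 2)).astype(float)  # established loci (e.g. DRB1/DQB1 dosages)
candidate = rng.integers(0, 3, size=n).astype(float)      # candidate class I locus (e.g. HLA-B)
logit = 0.8 * class_ii[:, 0] + 0.5 * class_ii[:, 1] + 0.4 * candidate - 2.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_null = sm.add_constant(class_ii)                            # class II only
X_full = sm.add_constant(np.column_stack([class_ii, candidate]))
ll_null = sm.Logit(y, X_null).fit(disp=0).llf
ll_full = sm.Logit(y, X_full).fit(disp=0).llf

lr_stat = 2 * (ll_full - ll_null)   # likelihood-ratio statistic, 1 degree of freedom
p_value = chi2.sf(lr_stat, df=1)
print(f"LRT = {lr_stat:.1f}, p = {p_value:.2e}")
```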

    Effect of angiotensin-converting enzyme inhibitor and angiotensin receptor blocker initiation on organ support-free days in patients hospitalized with COVID-19

    IMPORTANCE Overactivation of the renin-angiotensin system (RAS) may contribute to poor clinical outcomes in patients with COVID-19. OBJECTIVE To determine whether angiotensin-converting enzyme (ACE) inhibitor or angiotensin receptor blocker (ARB) initiation improves outcomes in patients hospitalized for COVID-19. DESIGN, SETTING, AND PARTICIPANTS In an ongoing, adaptive platform randomized clinical trial, 721 critically ill and 58 non–critically ill hospitalized adults were randomized to receive an RAS inhibitor or control between March 16, 2021, and February 25, 2022, at 69 sites in 7 countries (final follow-up on June 1, 2022). INTERVENTIONS Patients were randomized to receive open-label initiation of an ACE inhibitor (n = 257), ARB (n = 248), ARB in combination with DMX-200 (a chemokine receptor-2 inhibitor; n = 10), or no RAS inhibitor (control; n = 264) for up to 10 days. MAIN OUTCOMES AND MEASURES The primary outcome was organ support–free days, a composite of hospital survival and days alive without cardiovascular or respiratory organ support through 21 days. The primary analysis was a bayesian cumulative logistic model. Odds ratios (ORs) greater than 1 represent improved outcomes. RESULTS On February 25, 2022, enrollment was discontinued due to safety concerns. Among 679 critically ill patients with available primary outcome data, the median age was 56 years and 239 participants (35.2%) were women. Median (IQR) organ support–free days among critically ill patients was 10 (–1 to 16) in the ACE inhibitor group (n = 231), 8 (–1 to 17) in the ARB group (n = 217), and 12 (0 to 17) in the control group (n = 231) (median adjusted odds ratios of 0.77 [95% bayesian credible interval, 0.58-1.06] for improvement for ACE inhibitor and 0.76 [95% credible interval, 0.56-1.05] for ARB compared with control). The posterior probabilities that ACE inhibitors and ARBs worsened organ support–free days compared with control were 94.9% and 95.4%, respectively. Hospital survival occurred in 166 of 231 critically ill participants (71.9%) in the ACE inhibitor group, 152 of 217 (70.0%) in the ARB group, and 182 of 231 (78.8%) in the control group (posterior probabilities that ACE inhibitor and ARB worsened hospital survival compared with control were 95.3% and 98.1%, respectively). CONCLUSIONS AND RELEVANCE In this trial, among critically ill adults with COVID-19, initiation of an ACE inhibitor or ARB did not improve, and likely worsened, clinical outcomes. TRIAL REGISTRATION ClinicalTrials.gov Identifier: NCT0273570
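    The reported posterior probabilities can be illustrated with a back-of-the-envelope reconstruction. The sketch below approximates the posterior of the log odds ratio by a normal distribution matched to the reported median OR and 95% credible interval (an assumption made purely for illustration, not the trial's actual bayesian cumulative logistic model) and computes the probability that the OR falls below 1, which by the trial's convention means worsened outcomes.

```python
# Minimal sketch: recover an approximate posterior probability of harm from the
# reported median OR and 95% credible interval, assuming a normal posterior on
# the log-odds scale (an illustrative assumption, not the trial's model).
import numpy as np

or_median, ci_low, ci_high = 0.77, 0.58, 1.06             # ACE inhibitor vs control
mu = np.log(or_median)
sigma = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)   # normal approximation

draws = np.random.default_rng(2).normal(mu, sigma, size=1_000_000)
p_harm = (np.exp(draws) < 1).mean()   # OR < 1 represents worse outcomes in this trial
print(f"Posterior P(OR < 1) ≈ {p_harm:.3f}")              # close to the reported 94.9%
```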
