Frameworks for peace in Northern Ireland: Analysis of the 1998 Belfast Agreement
The 1998 Belfast Agreement brought to an end over three decades of armed conflict in Northern Ireland. This paper summarizes the role of actors within and outside Northern Ireland, and the processes and mechanics of the Agreement itself. The Agreement is placed in the context of previous unsuccessful peace initiatives in the region, and elements within the political and economic environment at the time that facilitated agreement are identified. The consociational nature of the Agreement is set alongside concern about continuing sectarian division. It is argued that the Agreement was as much a product of previous failed attempts and the changed economic and political environment as it was a product of the negotiations. The Belfast Agreement is evaluated and tentative lessons for the Arab-Israeli and other peace processes are delineated.
Hierarchies of Pain and Responsibility: Victims and War by Other Means in Northern Ireland
This article develops an earlier analysis of definitions and disqualifications of victimhood during armed conflict, claims of responsibility and apologies for harm, based on the Northern Ireland case. The significance of political structures is considered by examining the consociational nature of the 1998 Belfast/Good Friday Agreement, which established two parallel political dynasties, allowing the parties to the Northern Ireland conflict to 'agree to disagree'. The nature of this agreement makes a 'reconciliation' between the parties optional and therefore unlikely without some intervention to address the grievances of the past, proposals for which were the responsibility of the Committee on Managing the Past, whose report caused controversy.
Financial considerations in the conduct of multi-centre randomised controlled trials: evidence from a qualitative study.
National Coordinating Centre for Research Methodology; Medical Research Council; UK Department of Health; Chief Scientist Office. Not peer reviewed. Publisher PDF.
Identifying potential terrorists: visuality, security and the channel project
This article analyses how British counter-radicalization policy in general, and the Channel project in particular, constitute individuals who are vulnerable to radicalization as visible, producing them as subjects of intervention. It thus asks, how can potential terrorists be identified and made knowable? The article first argues that to understand Channel, it is crucial to develop a conceptual account of the security politics of (in)visibilization that draws attention to the ways in which security regimes can, at times, function primarily through the production of regimes of (in)visibility. Using this approach, the article focusses on the role of 'indicators' as a technology of (in)visibilization. This role is central to the functioning of Channel, visibilizing certain subjects as threatening. Yet such a production is political. In bringing together a politics of care and a politics of identity, it is a regime of (in)visibility that produces new sites of intervention, contains significant potential consequences for the expression of certain identities, and raises new and troubling possibilities for how contemporary life may be secured.
Interpretation, judgement and dialogue: a hermeneutical recollection of causal analysis in critical terrorism studies
This article problematises Critical Terrorism Studies's (CTS) seeming reluctance to engage in causal explanation. An analysis of the meta-theoretical assumptions on causation in both orthodox and critical terrorism studies reveals that the latter's refusal to incorporate causal analysis in its broader research agenda reproduces, despite its commitment to epistemological pluralism, the former's understanding of causation as the only sustainable one. Elemental to this understanding is the idea that causation refers to the regular observation of constant conjunction. Due to the positivist leanings of such a conception, CTS is quick to dismiss it as consolidating Orthodox Terrorism Studies's lack of critical self-reflexivity, responsibility of the researcher, and dedication towards informing state-led policies of counterterrorism. Drawing on recent work in the philosophy of science and International Relations, this article advances an alternative understanding of causation that emphasises its interpretative, normative and dialogical fabric. It is therefore argued that CTS should reclaim causal analysis as an essential element of its research agenda. This not only facilitates a more robust challenge against Orthodox Terrorism Studies' conventional understanding of causation but also consolidates CTS's endeavour of deepening and broadening our understanding that (re)embeds terrorist violence in its historical and social context.
Comparative performances of machine learning methods for classifying Crohn Disease patients using genome-wide genotyping data
Abstract: Crohn Disease (CD) is a complex genetic disorder for which more than 140 genes have been identified using genome wide association studies (GWAS). However, the genetic architecture of the trait remains largely unknown. The recent development of machine learning (ML) approaches prompted us to apply them to classify healthy and diseased people according to their genomic information. The Immunochip dataset containing 18,227 CD patients and 34,050 healthy controls enrolled and genotyped by the international Inflammatory Bowel Disease genetic consortium (IIBDGC) has been re-analyzed using a set of ML methods: penalized logistic regression (LR), gradient boosted trees (GBT) and artificial neural networks (NN). The main score used to compare the methods was the Area Under the ROC Curve (AUC) statistic. The impact of quality control (QC), imputation and coding methods on LR results showed that QC methods and imputation of missing genotypes may artificially increase the scores. Conversely, neither the patient/control ratio nor marker preselection or coding strategies significantly affected the results. LR methods, including Lasso, Ridge and ElasticNet, provided similar results with a maximum AUC of 0.80. GBT methods like XGBoost, LightGBM and CatBoost, together with dense NN with one or more hidden layers, provided similar AUC values, suggesting limited epistatic effects in the genetic architecture of the trait. ML methods detected nearly all the genetic variants previously identified by GWAS among the best predictors, plus additional predictors with lower effects. The robustness and complementarity of the different methods are also studied. Compared to LR, non-linear models such as GBT or NN may provide robust complementary approaches to identify and classify genetic markers.
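The comparison described above, penalized logistic regression versus gradient-boosted trees scored by AUC, can be sketched on synthetic genotype-style data. This is an illustrative reconstruction, not the authors' pipeline: the simulated SNP dosages, effect sizes, and model settings are assumptions, and scikit-learn's `GradientBoostingClassifier` stands in for XGBoost/LightGBM/CatBoost.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_samples, n_snps = 2000, 200

# Simulated SNP dosages (0/1/2 minor-allele counts); first 10 markers "causal".
X = rng.integers(0, 3, size=(n_samples, n_snps)).astype(float)
beta = np.zeros(n_snps)
beta[:10] = 0.5  # assumed additive effect sizes
logits = X @ beta
probs = 1.0 / (1.0 + np.exp(-(logits - logits.mean())))
y = (rng.random(n_samples) < probs).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Penalized LR (ElasticNet-style, one of the linear models the paper compares).
lr = LogisticRegression(penalty="elasticnet", l1_ratio=0.5, C=0.1,
                        solver="saga", max_iter=5000)
lr.fit(X_tr, y_tr)
auc_lr = roc_auc_score(y_te, lr.predict_proba(X_te)[:, 1])

# Gradient-boosted trees (illustrative stand-in for XGBoost/LightGBM/CatBoost).
gbt = GradientBoostingClassifier(random_state=0)
gbt.fit(X_tr, y_tr)
auc_gbt = roc_auc_score(y_te, gbt.predict_proba(X_te)[:, 1])

print(f"AUC LR:  {auc_lr:.3f}")
print(f"AUC GBT: {auc_gbt:.3f}")
```

Under a purely additive simulated architecture like this one, the two model families tend to score similarly, which mirrors the paper's observation that similar LR and GBT/NN AUCs suggest limited epistasis.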
Localization of type 1 diabetes susceptibility to the MHC class I genes HLA-B and HLA-A
The major histocompatibility complex (MHC) on chromosome 6 is associated with susceptibility to more common diseases than any other region of the human genome, including almost all disorders classified as autoimmune. In type 1 diabetes the major genetic susceptibility determinants have been mapped to the MHC class II genes HLA-DQB1 and HLA-DRB1 (refs 1-3), but these genes cannot completely explain the association between type 1 diabetes and the MHC region. Owing to the region's extreme gene density, the multiplicity of disease-associated alleles, strong associations between alleles, limited genotyping capability, and inadequate statistical approaches and sample sizes, which, and how many, loci within the MHC determine susceptibility remains unclear. Here, in several large type 1 diabetes data sets, we analyse a combined total of 1,729 polymorphisms, and apply statistical methods - recursive partitioning and regression - to pinpoint disease susceptibility to the MHC class I genes HLA-B and HLA-A (risk ratios >1.5; Pcombined = 2.01 × 10^-19 and 2.35 × 10^-13, respectively) in addition to the established associations of the MHC class II genes. Other loci with smaller and/or rarer effects might also be involved, but to find these, future searches must take into account both the HLA class II and class I genes and use even larger samples. Taken together with previous studies, we conclude that MHC-class-I-mediated events, principally involving HLA-B*39, contribute to the aetiology of type 1 diabetes.
Effect of Hydrocortisone on Mortality and Organ Support in Patients With Severe COVID-19: The REMAP-CAP COVID-19 Corticosteroid Domain Randomized Clinical Trial.
Importance: Evidence regarding corticosteroid use for severe coronavirus disease 2019 (COVID-19) is limited. Objective: To determine whether hydrocortisone improves outcome for patients with severe COVID-19. Design, Setting, and Participants: An ongoing adaptive platform trial testing multiple interventions within multiple therapeutic domains, for example, antiviral agents, corticosteroids, or immunoglobulin. Between March 9 and June 17, 2020, 614 adult patients with suspected or confirmed COVID-19 were enrolled and randomized within at least 1 domain following admission to an intensive care unit (ICU) for respiratory or cardiovascular organ support at 121 sites in 8 countries. Of these, 403 were randomized to open-label interventions within the corticosteroid domain. The domain was halted after results from another trial were released. Follow-up ended August 12, 2020. Interventions: The corticosteroid domain randomized participants to a fixed 7-day course of intravenous hydrocortisone (50 mg or 100 mg every 6 hours) (n = 143), a shock-dependent course (50 mg every 6 hours when shock was clinically evident) (n = 152), or no hydrocortisone (n = 108). Main Outcomes and Measures: The primary end point was organ support-free days (days alive and free of ICU-based respiratory or cardiovascular support) within 21 days, where patients who died were assigned -1 day. The primary analysis was a bayesian cumulative logistic model that included all patients enrolled with severe COVID-19, adjusting for age, sex, site, region, time, assignment to interventions within other domains, and domain and intervention eligibility. Superiority was defined as the posterior probability of an odds ratio greater than 1 (threshold for trial conclusion of superiority >99%).
Results: After excluding 19 participants who withdrew consent, there were 384 patients (mean age, 60 years; 29% female) randomized to the fixed-dose (n = 137), shock-dependent (n = 146), and no (n = 101) hydrocortisone groups; 379 (99%) completed the study and were included in the analysis. The mean age for the 3 groups ranged between 59.5 and 60.4 years; most patients were male (range, 70.6%-71.5%); mean body mass index ranged between 29.7 and 30.9; and patients receiving mechanical ventilation ranged between 50.0% and 63.5%. For the fixed-dose, shock-dependent, and no hydrocortisone groups, respectively, the median organ support-free days were 0 (IQR, -1 to 15), 0 (IQR, -1 to 13), and 0 (IQR, -1 to 11) days (composed of 30%, 26%, and 33% mortality rates and 11.5, 9.5, and 6 median organ support-free days among survivors). The median adjusted odds ratio and bayesian probability of superiority were 1.43 (95% credible interval, 0.91-2.27) and 93% for fixed-dose hydrocortisone, respectively, and were 1.22 (95% credible interval, 0.76-1.94) and 80% for shock-dependent hydrocortisone compared with no hydrocortisone. Serious adverse events were reported in 4 (3%), 5 (3%), and 1 (1%) patients in the fixed-dose, shock-dependent, and no hydrocortisone groups, respectively. Conclusions and Relevance: Among patients with severe COVID-19, treatment with a 7-day fixed-dose course of hydrocortisone or shock-dependent dosing of hydrocortisone, compared with no hydrocortisone, resulted in 93% and 80% probabilities of superiority with regard to the odds of improvement in organ support-free days within 21 days. However, the trial was stopped early and no treatment strategy met prespecified criteria for statistical superiority, precluding definitive conclusions. Trial Registration: ClinicalTrials.gov Identifier: NCT02735707
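The primary end point above folds mortality and time on organ support into a single ordinal score: survivors get their support-free days within 21 days, and patients who died get -1. A minimal sketch of that scoring rule (the function name and the exact tallying of in-window support days are assumptions for illustration; the trial's statistical analysis plan defines the authoritative rule):

```python
def organ_support_free_days(died: bool, support_days: int, window: int = 21) -> int:
    """Organ support-free days: days alive and free of ICU-based respiratory
    or cardiovascular support within the window. Patients who died within
    the window are assigned -1, i.e. worse than surviving on full support.
    `support_days` is assumed to be the count of in-window days on support."""
    if died:
        return -1
    return max(0, window - min(support_days, window))

print(organ_support_free_days(died=True, support_days=0))    # death scores -1
print(organ_support_free_days(died=False, support_days=6))   # 15 support-free days
print(organ_support_free_days(died=False, support_days=30))  # 0: support the whole window
```

Treating death as -1 keeps the outcome ordinal, which is why the primary analysis uses a bayesian cumulative logistic model rather than a linear one.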
Effect of angiotensin-converting enzyme inhibitor and angiotensin receptor blocker initiation on organ support-free days in patients hospitalized with COVID-19
IMPORTANCE Overactivation of the renin-angiotensin system (RAS) may contribute to poor clinical outcomes in patients with COVID-19.
OBJECTIVE To determine whether angiotensin-converting enzyme (ACE) inhibitor or angiotensin receptor blocker (ARB) initiation improves outcomes in patients hospitalized for COVID-19.
DESIGN, SETTING, AND PARTICIPANTS In an ongoing, adaptive platform randomized clinical trial, 721 critically ill and 58 non-critically ill hospitalized adults were randomized to receive an RAS inhibitor or control between March 16, 2021, and February 25, 2022, at 69 sites in 7 countries (final follow-up on June 1, 2022).
INTERVENTIONS Patients were randomized to receive open-label initiation of an ACE inhibitor (n = 257), ARB (n = 248), ARB in combination with DMX-200 (a chemokine receptor-2 inhibitor; n = 10), or no RAS inhibitor (control; n = 264) for up to 10 days.
MAIN OUTCOMES AND MEASURES The primary outcome was organ support-free days, a composite of hospital survival and days alive without cardiovascular or respiratory organ support through 21 days. The primary analysis was a bayesian cumulative logistic model. Odds ratios (ORs) greater than 1 represent improved outcomes.
RESULTS On February 25, 2022, enrollment was discontinued due to safety concerns. Among 679 critically ill patients with available primary outcome data, the median age was 56 years and 239 participants (35.2%) were women. Median (IQR) organ support-free days among critically ill patients was 10 (-1 to 16) in the ACE inhibitor group (n = 231), 8 (-1 to 17) in the ARB group (n = 217), and 12 (0 to 17) in the control group (n = 231) (median adjusted odds ratios of 0.77 [95% bayesian credible interval, 0.58-1.06] for improvement for ACE inhibitor and 0.76 [95% credible interval, 0.56-1.05] for ARB compared with control). The posterior probabilities that ACE inhibitors and ARBs worsened organ support-free days compared with control were 94.9% and 95.4%, respectively. Hospital survival occurred in 166 of 231 critically ill participants (71.9%) in the ACE inhibitor group, 152 of 217 (70.0%) in the ARB group, and 182 of 231 (78.8%) in the control group (posterior probabilities that ACE inhibitor and ARB worsened hospital survival compared with control were 95.3% and 98.1%, respectively).
CONCLUSIONS AND RELEVANCE In this trial, among critically ill adults with COVID-19, initiation of an ACE inhibitor or ARB did not improve, and likely worsened, clinical outcomes.
TRIAL REGISTRATION ClinicalTrials.gov Identifier: NCT02735707