
    Lack of significant association of an insertion/deletion polymorphism in the angiotensin converting enzyme (ACE) gene with tropical calcific pancreatitis

    BACKGROUND: The genetic basis of tropical calcific pancreatitis (TCP) is distinct and is largely explained by mutations in the pancreatic secretory trypsin inhibitor (SPINK1) gene. However, mutated SPINK1 does not account for the disease in all patients, nor does it explain the phenotypic heterogeneity between TCP and fibro-calculous pancreatic diabetes (FCPD). Recent studies suggest a crucial role for the pancreatic renin-angiotensin system during chronic hypoxia in acute pancreatitis and for angiotensin converting enzyme (ACE) inhibitors in reducing pancreatic fibrosis in experimental models. We investigated the association of the ACE gene insertion/deletion (I/D) polymorphism with TCP using a case-control approach. Since SPINK1 mutations are proposed to play a modifier role, we also investigated their interaction with the ACE gene variant. METHODS: We analyzed the I/D polymorphism in the ACE gene (g.11417_11704del287) in 171 subjects, comprising 91 TCP and 80 FCPD patients, and compared their allelic and genotypic frequencies with those of 99 healthy, ethnically matched control subjects. RESULTS: We found 46% and 21% of TCP patients, 56% and 19.6% of FCPD patients, and 54.5% and 19.2% of healthy controls carrying the I/D and D/D genotypes, respectively (P>0.05). No significant difference in the clinical picture was observed between patients with and without the deletion allele of the ACE I/D polymorphism in either category. No association was observed with the presence or absence of the N34S SPINK1 mutation in these patients. CONCLUSION: The ACE insertion/deletion variant shows no significant association with the pathogenesis, fibrosis, or progression of tropical calcific pancreatitis or fibro-calculous pancreatic diabetes.
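
    The genotype comparison above is a standard case-control contingency test. Below is a minimal sketch of how such a comparison could be run, assuming genotype counts reconstructed from the reported percentages (91 TCP patients vs. 99 controls); the counts and the use of scipy are illustrative assumptions, not the authors' actual analysis.

```python
# Minimal sketch: chi-square test of genotype frequencies in a case-control
# design. Counts are reconstructed from the percentages in the abstract
# (91 TCP patients, 99 controls) and are illustrative only.
from scipy.stats import chi2_contingency

#              I/I  I/D  D/D
tcp_counts  = [30,  42,  19]   # ~33%, 46%, 21% of 91 TCP patients
ctrl_counts = [26,  54,  19]   # ~26.3%, 54.5%, 19.2% of 99 controls

chi2, p, dof, expected = chi2_contingency([tcp_counts, ctrl_counts])
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# A p-value above 0.05, consistent with the reported P>0.05, would indicate no
# significant difference in genotype distribution between patients and controls.
```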

    Reaction rates and transport in neutron stars

    Understanding signals from neutron stars requires knowledge about the transport inside the star. We review the transport properties and the underlying reaction rates of dense hadronic and quark matter in the crust and the core of neutron stars and point out open problems and future directions. Comment: 74 pages; commissioned for the book "Physics and Astrophysics of Neutron Stars", NewCompStar COST Action MP1304; version 3: minor changes, references updated, overview graphic added in the introduction, improvements in Sec. IV.A.

    Task-Specific Effects of tDCS-Induced Cortical Excitability Changes on Cognitive and Motor Sequence Set Shifting Performance

    In this study, we tested the effects of transcranial Direct Current Stimulation (tDCS) on two set shifting tasks. Set shifting ability is defined as the capacity to switch between mental sets or actions and requires the activation of a distributed neural network. Thirty healthy subjects (fifteen per stimulation site) received anodal, cathodal, and sham stimulation of the dorsolateral prefrontal cortex (DLPFC) or the primary motor cortex (M1). We measured set shifting in both a cognitive and a motor task. The results show that both anodal and cathodal single-session tDCS can modulate performance on cognitive and motor tasks. However, an interaction was found between task and type of stimulation: anodal tDCS of the DLPFC and M1 increased performance in the cognitive task, while cathodal tDCS of the DLPFC and M1 had the opposite effect on the motor task. Additionally, tDCS effects seem to be most evident on the speed of changing sets, rather than on reducing the number of errors or increasing the efficacy of irrelevant set filtering.

    Dispelling urban myths about default uncertainty factors in chemical risk assessment - Sufficient protection against mixture effects?

    Assessing the detrimental health effects of chemicals requires the extrapolation of experimental data in animals to human populations. This is achieved by applying a default uncertainty factor of 100 to doses not found to be associated with observable effects in laboratory animals. It is commonly assumed that the toxicokinetic and toxicodynamic sub-components of this default uncertainty factor represent worst-case scenarios and that the multiplication of those components yields conservative estimates of safe levels for humans. It is sometimes claimed that this conservatism also offers adequate protection from mixture effects. By analysing the evolution of uncertainty factors from a historical perspective, we show that the default factor and its sub-components are intended to represent adequate rather than worst-case scenarios. The intention of using assessment factors for mixture effects was abandoned thirty years ago. It is also often ignored that the conservatism (or otherwise) of uncertainty factors can only be considered in relation to a defined level of protection. A protection equivalent to an effect magnitude of 0.001-0.0001% over background incidence is generally considered acceptable. However, it is impossible to say whether this level of protection is in fact realised with the tolerable doses that are derived by employing uncertainty factors. Accordingly, it is difficult to assess whether uncertainty factors overestimate or underestimate the sensitivity differences in human populations. It is also often not appreciated that the outcome of probabilistic approaches to the multiplication of sub-factors depends on the choice of probability distributions. Therefore, the idea that default uncertainty factors are overly conservative worst-case scenarios which can both account for the lack of statistical power in animal experiments and protect against potential mixture effects is ill-founded. We contend that precautionary regulation should provide an incentive to generate better data and recommend adopting a pragmatic, but scientifically better founded, approach to mixture risk assessment.
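
    A brief numerical sketch of the point about probabilistic combination of sub-factors: the combined factor implied at a given percentile depends on the distributions assumed for the toxicokinetic and toxicodynamic components. The distributions, spreads, and percentiles below are illustrative assumptions, not values from any regulatory guidance.

```python
# Illustrative only: multiplying "conservative" percentiles of sub-factors is
# not the same as taking that percentile of the combined factor, and the
# result depends on the assumed distributions.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Assumption: each sub-factor (toxicokinetic, toxicodynamic) is lognormal with
# a 95th percentile of ~10, echoing the conventional 10 x 10 = 100 default.
sigma = 0.70
median = 10 / np.exp(1.645 * sigma)      # chosen so that P95 of each sub-factor ~ 10
tk = rng.lognormal(np.log(median), sigma, n)
td = rng.lognormal(np.log(median), sigma, n)

p95_each = np.percentile(tk, 95)
p95_product = np.percentile(tk * td, 95)
print(f"P95 of one sub-factor:       ~{p95_each:.1f}")
print(f"P95 x P95 (default logic):   ~{p95_each**2:.0f}")
print(f"P95 of the combined factor:  ~{p95_product:.0f}")
# With these assumed lognormals the combined 95th percentile falls well below
# 100; a different distributional choice would give a different answer, which
# is the dependence the abstract points to.
```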

    Regulation of Budding Yeast Mating-Type Switching Donor Preference by the FHA Domain of Fkh1

    During Saccharomyces cerevisiae mating-type switching, an HO endonuclease-induced double-strand break (DSB) at MAT is repaired by recombining with one of two donors, HMLα or HMRa, located at opposite ends of chromosome III. MATa cells preferentially recombine with HMLα; this decision depends on the Recombination Enhancer (RE), located about 17 kb to the right of HML. In MATα cells, HML is rarely used and RE is bound by the MATα2-Mcm1 corepressor, which prevents the binding of other proteins to RE. In contrast, in MATa cells, RE is bound by multiple copies of Fkh1 and a single copy of Swi4/Swi6. We report here that, when RE is replaced with four LexA operators in MATa cells, 95% of cells use HMR for repair, but expression of a LexA-Fkh1 fusion protein strongly increases HML usage. A LexA-Fkh1 truncation, containing only Fkh1's phosphothreonine-binding FHA domain, restores HML usage to 90%. A LexA-FHA-R80A mutant lacking phosphothreonine binding fails to increase HML usage. The LexA-FHA fusion protein associates with chromatin in a 10-kb interval surrounding the HO cleavage site at MAT, but only after DSB induction. This association occurs even in a donorless strain lacking HML. We propose that the FHA domain of Fkh1 regulates donor preference by physically interacting with phosphorylated threonine residues created on proteins bound near the DSB, thus positioning HML close to the DSB at MAT. Donor preference is independent of the Mec1/ATR and Tel1/ATM checkpoint protein kinases but partially depends on casein kinase II. RE stimulates the strand invasion step of interchromosomal recombination even for non-MAT sequences. We also find that, when RE binds to the region near the DSB at MATa, the Mec1 and Tel1 checkpoint kinases not only phosphorylate histone H2A (γ-H2AX) around the DSB but also promote γ-H2AX spreading around the RE region.

    A comprehensive 1000 Genomes-based genome-wide association meta-analysis of coronary artery disease

    Existing knowledge of genetic variants affecting risk of coronary artery disease (CAD) is largely based on genome-wide association study (GWAS) analysis of common SNPs. Leveraging phased haplotypes from the 1000 Genomes Project, we report a GWAS meta-analysis of 185 thousand CAD cases and controls, interrogating 6.7 million common (MAF>0.05) as well as 2.7 million low-frequency (0.005<MAF<0.05) variants. In addition to confirming most known CAD loci, we identified 10 novel loci, eight additive and two recessive, that contain candidate genes that newly implicate biological processes in vessel walls. We observed intra-locus allelic heterogeneity but little evidence of low-frequency variants with larger effects and no evidence of synthetic association. Our analysis provides a comprehensive survey of the fine genetic architecture of CAD, showing that genetic susceptibility to this common disease is largely determined by common SNPs of small effect size.
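
    The additive/recessive distinction above reflects how genotypes enter the association model. The sketch below shows the two codings for a biallelic SNP in a simple logistic fit; the simulated data, the effect size, and the use of statsmodels are illustrative assumptions, not part of the consortium's actual pipeline.

```python
# Illustrative sketch: additive vs. recessive coding of a biallelic SNP in a
# case-control logistic model. Data are simulated purely for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
maf = 0.2
genotype = rng.binomial(2, maf, size=n)        # 0, 1, or 2 copies of the risk allele

# Simulate a truly recessive effect: risk is raised only in homozygous carriers.
logit = -1.0 + 0.8 * (genotype == 2)
case = rng.binomial(1, 1 / (1 + np.exp(-logit)))

def fit(coding):
    X = sm.add_constant(coding.astype(float))
    return sm.Logit(case, X).fit(disp=0)

additive  = fit(genotype)                      # dosage coding: 0/1/2
recessive = fit((genotype == 2).astype(int))   # recessive coding: 0/0/1

print("additive  OR per allele:   ", float(np.exp(additive.params[1])))
print("recessive OR (hom vs rest):", float(np.exp(recessive.params[1])))
# Under a truly recessive architecture, the recessive coding recovers the
# simulated effect more directly than the per-allele (additive) coding.
```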

    Search for a Technicolor omega_T Particle in Events with a Photon and a b-quark Jet at CDF

    If the Technicolor omega_T particle exists, a likely decay mode is omega_T -> gamma pi_T, followed by pi_T -> bb-bar, yielding the signature gamma bb-bar. We have searched 85 pb^-1 of data collected by the CDF experiment at the Fermilab Tevatron for events with a photon and two jets, where one of the jets must contain a secondary vertex implying the presence of a b quark. We find no excess of events above standard model expectations. We express the result as an exclusion region in the M_omega_T - M_pi_T mass plane. Comment: 14 pages, 2 figures. Available from the CDF server (PS with figs): http://www-cdf.fnal.gov/physics/pub98/cdf4674_omega_t_prl_4.ps FERMILAB-PUB-98/321-

    All-cause and liver-related mortality risk factors in excessive drinkers: Analysis of data from the UK biobank

    Background: High alcohol intake is associated with increased mortality. We aimed to identify factors affecting mortality in people drinking extreme amounts of alcohol. Methods: We obtained information from the UK Biobank on approximately 500,000 participants aged 40–70 years at baseline assessment in 2006–2010. Habitual alcohol intake, lifestyle and physiological data, laboratory test results, and hospital diagnoses and death certificate data (to June 2020) for 5136 men (2.20% of male participants) and 1504 women (0.60%) who reported consuming ≥80 or ≥50 g/day of alcohol, respectively, were used in survival analysis. Results: Mortality hazard ratios for these excessive drinkers, compared to all other participants, were 2.02 (95% CI 1.89–2.17) for all causes, 1.89 (1.69–2.12) for any cancer, 1.87 (1.61–2.17) for any circulatory disease, and 9.40 (7.00–12.64) for any liver disease. Liver disease diagnosis or abnormal liver function tests predicted not only deaths attributed to liver disease but also those from cancers or circulatory diseases. Mortality among excessive drinkers was also associated with quantitative alcohol intake; diagnosed alcohol dependence, harmful use, or withdrawal syndrome; and current smoking at assessment. Conclusions: People with chronic excessive alcohol intake experience decreased average survival, but there is substantial variation in their mortality, with liver abnormality and alcohol dependence or other alcohol use disorders associated with a worse prognosis. Clinically, patients with these risk factors and high alcohol intake should be considered for early or intensive management. Research can usefully focus on the factors predisposing to dependence or liver abnormality.
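
    The hazard ratios quoted above come from survival models of time to death with an exposure indicator for excessive drinking. A minimal sketch of that kind of analysis using the lifelines package is shown below; the toy data, column names, and follow-up times are invented for illustration and are not UK Biobank values.

```python
# Minimal sketch of a Cox proportional hazards fit of the kind behind the
# reported mortality hazard ratios. All data here are invented toy values.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    # follow-up time in years, death indicator (1 = died), excessive-drinker flag
    "time":   [2.0, 5.5, 8.0, 10.0, 1.5, 3.0, 7.0, 10.0],
    "event":  [1,   1,   0,   0,    1,   1,   1,   0],
    "excess": [0,   0,   0,   0,    1,   1,   1,   1],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
# exp(coef) for "excess" is the all-cause mortality hazard ratio analogous to
# the 2.02 reported in the abstract (here it is only illustrative).
print(cph.summary[["exp(coef)", "p"]])
```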

    Measurement of the B0 anti-B0 oscillation frequency using l- D*+ pairs and lepton flavor tags

    The oscillation frequency Delta-md of B0 anti-B0 mixing is measured using the partially reconstructed semileptonic decay anti-B0 -> l- nubar D*+ X. The data sample was collected with the CDF detector at the Fermilab Tevatron collider during 1992-1995 by triggering on the existence of two lepton candidates in an event, and corresponds to about 110 pb^-1 of pbar p collisions at sqrt(s) = 1.8 TeV. We estimate the proper decay time of the anti-B0 meson from the measured decay length and reconstructed momentum of the l- D*+ system. The charge of the lepton in the final state identifies the flavor of the anti-B0 meson at its decay. The second lepton in the event is used to infer the flavor of the anti-B0 meson at production. We measure the oscillation frequency to be Delta-md = 0.516 +/- 0.099 +0.029/-0.035 ps^-1, where the first uncertainty is statistical and the second is systematic. Comment: 30 pages, 7 figures. Submitted to Physical Review
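
    The proper-time estimate described above amounts to t = L*m/(p*c), using the measured decay length and the reconstructed momentum of the l- D*+ system (the full analysis also corrects for momentum carried by unobserved particles such as the neutrino). A minimal numerical sketch, with illustrative input values, is:

```python
# Hedged sketch of the quantities described in the abstract. The simple
# t = L*m/(p*c) estimate and the example numbers are illustrative; they are
# not the collaboration's full reconstruction, which corrects for the
# unmeasured neutrino momentum.
import math

M_B0 = 5.2797           # GeV/c^2, approximate B0 meson mass
DELTA_MD = 0.516        # ps^-1, oscillation frequency reported in the abstract
C_CM_PER_PS = 0.029979  # speed of light in cm/ps

def proper_time_ps(decay_length_cm: float, momentum_gev: float) -> float:
    """Proper decay time t = L * m / (p * c), in picoseconds."""
    return decay_length_cm * M_B0 / (momentum_gev * C_CM_PER_PS)

def mixed_fraction(t_ps: float) -> float:
    """Probability that a B0 produced at t = 0 has oscillated into its
    antiparticle by proper time t, neglecting CP violation and width differences."""
    return 0.5 * (1.0 - math.cos(DELTA_MD * t_ps))

t = proper_time_ps(decay_length_cm=0.10, momentum_gev=30.0)
print(f"t = {t:.2f} ps, mixed fraction = {mixed_fraction(t):.3f}")
```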

    Search for New Particles Decaying to top-antitop in proton-antiproton collisions at squareroot(s)=1.8 TeV

    We use 106 pb^-1 of data collected with the Collider Detector at Fermilab to search for narrow-width vector particles decaying to a top and an anti-top quark. Model-independent upper limits on the cross section for narrow vector resonances decaying to t t-bar are presented. At the 95% confidence level, we exclude the existence of a leptophobic Z' boson in a model of topcolor-assisted technicolor with mass M_Z' < 480 GeV/c^2 for natural width Gamma = 0.012 M_Z', and M_Z' < 780 GeV/c^2 for Gamma = 0.04 M_Z'. Comment: The CDF Collaboration, submitted to PRL 25-Feb-200