
    Serotypes, virulence genes and intimin types of Shiga toxin (verocytotoxin)-producing Escherichia coli isolates from minced beef in Lugo (Spain) from 1995 through 2003

    BACKGROUND: Shiga toxin-producing Escherichia coli (STEC) have emerged as pathogens that can cause food-borne infections and severe, potentially fatal illnesses in humans, such as haemorrhagic colitis (HC) and haemolytic uraemic syndrome (HUS). In Spain, as in many other countries, STEC strains have been frequently isolated from ruminants and represent a significant cause of sporadic human infection. In view of the lack of data on STEC isolated from food in Spain, the objectives of this study were to determine the level of microbiological contamination and the prevalence of STEC O157:H7 and non-O157 STEC in a large sampling of minced beef collected from 30 local stores in Lugo city between 1995 and 2003, and to establish whether the STEC isolated from food possessed the same virulence profiles as STEC strains causing human infections. RESULTS: STEC were detected in 95 (12%) of the 785 minced beef samples tested. STEC O157:H7 was isolated from eight (1.0%) samples and non-O157 STEC from 90 (11%) samples. Ninety-six STEC isolates were further characterized by PCR and serotyping. PCR showed that 28 (29%) isolates carried stx1 genes, 49 (51%) possessed stx2 genes, and 19 (20%) carried both stx1 and stx2. Enterohemolysin (ehxA) and intimin (eae) virulence genes were detected in 43 (45%) and 25 (26%) of the isolates, respectively. Typing of the eae variants detected four types: γ1 (nine isolates), β1 (eight isolates), ε1 (three isolates), and θ (two isolates). The majority (68%) of STEC isolates belonged to serotypes previously detected in human STEC, and 38% to serotypes associated with STEC isolated from patients with HUS. Ten new serotypes not previously described in raw beef products were also detected. The highly virulent seropathotypes O26:H11 stx1 eae-β1, O157:H7 stx1 stx2 eae-γ1 and O157:H7 stx2 eae-γ1, which are the most frequently observed among STEC causing human infections in Spain, were detected in 10 of the 96 STEC isolates.
Furthermore, phage typing of STEC O157:H7 isolates showed that the majority (seven of eight isolates) belonged to the main phage types previously detected in STEC O157:H7 strains associated with severe human illness. CONCLUSION: The results of this study do not differ greatly from those reported in other countries with regard to the prevalence of O157 and non-O157 STEC in minced beef. As suspected, serotypes other than O157:H7 also play an important role in food contamination in Spain, including the highly virulent seropathotype O26:H11 stx1 eae-β1. Thus, our data confirm minced beef in the city of Lugo as a vehicle of highly pathogenic STEC, and control measures should be introduced and implemented to increase the safety of minced beef.

    Genetic Mapping of Social Interaction Behavior in B6/MSM Consomic Mouse Strains

    Genetic studies are indispensable for understanding the mechanisms by which individuals develop differences in social behavior. We report genetic mapping of social interaction behavior using inter-subspecific consomic strains established from MSM/Ms (MSM) and C57BL/6J (B6) mice. Two animals of the same strain and sex, aged 10 weeks, were introduced into a novel open field for 10 min. Social contact was detected by an automated system when the distance between the centers of the two animals became less than ~12 cm. In addition, detailed behavioral observations were made of the males. The wild-derived mouse strain MSM showed significantly longer social contact than B6. Analysis of the consomic panel identified two chromosomes (Chr 6 and Chr 17) with quantitative trait loci (QTL) responsible for lengthened social contact in MSM mice and two chromosomes (Chr 9 and Chr X) with QTL that inhibited social contact. Detailed behavioral analysis of males identified four additional chromosomes associated with social interaction behavior. B6 mice that contained Chr 13 from MSM showed more genital grooming and following than the parental B6 strain, whereas the presence of Chr 8 and Chr 12 from MSM resulted in a reduction of those behaviors. Longer social sniffing was observed in the Chr 4 consomic strain than in B6 mice. Although the frequency was low, aggressive behavior was observed in a few pairs from consomic strains for Chrs 4, 13, 15 and 17, as well as from MSM. The social interaction test has been used as a model to measure anxiety, but genetic correlation analysis suggested that social interaction involves different aspects of anxiety than are measured by the open-field test.

    Azimuthal Anisotropy of Photon and Charged Particle Emission in Pb+Pb Collisions at 158 A GeV/c

    The azimuthal distributions of photons and charged particles with respect to the event plane are investigated as a function of centrality in Pb + Pb collisions at 158 A GeV/c in the WA98 experiment at the CERN SPS. The anisotropy of the azimuthal distributions is characterized using a Fourier analysis. For both the photon and charged particle distributions the first two Fourier coefficients are observed to decrease with increasing centrality. The observed anisotropies of the photon distributions compare well with the expectations from the charged particle measurements for all centralities. Comment: 8 pages and 6 figures. The manuscript has undergone a major revision. The unwanted correlations were enhanced in the random subdivision method used in the earlier version. The present version uses the more established method of division into subevents separated in rapidity to minimise short range correlations. The observed results for charged particles are in agreement with results from the other experiments. The observed anisotropy in photons is explained using flow results of pions and the correlations arising due to the decay of the neutral pion
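    The Fourier characterization used above amounts to measuring coefficients v_n = ⟨cos n(φ − Ψ)⟩ of the azimuthal distribution relative to the event plane Ψ. A minimal numerical sketch, with illustrative (assumed) coefficient values and the event-plane angle taken as known rather than reconstructed from subevents as in the analysis:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed anisotropy parameters, for illustration only:
    # dN/dphi ∝ 1 + 2 v1 cos(phi - psi) + 2 v2 cos(2 (phi - psi))
    true_v1, true_v2, psi = 0.05, 0.10, 0.3

    # Sample particle azimuthal angles by accept-reject against the
    # known envelope of the distribution.
    phi = rng.uniform(0.0, 2.0 * np.pi, 200_000)
    w = 1 + 2 * true_v1 * np.cos(phi - psi) + 2 * true_v2 * np.cos(2 * (phi - psi))
    envelope = 1 + 2 * (true_v1 + true_v2)          # maximum of w
    phi = phi[rng.uniform(0.0, envelope, phi.size) < w]

    # Recover the first two Fourier coefficients relative to the event plane.
    v1 = np.cos(phi - psi).mean()
    v2 = np.cos(2 * (phi - psi)).mean()
    ```

    With ~150,000 accepted particles the estimates reproduce the input coefficients to a few parts per thousand, illustrating why v1 and v2 are statistically robust observables even for a single (large) event sample.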

    Global Diversity Hotspots and Conservation Priorities for Sharks

    Sharks are one of the most threatened groups of marine animals, as high exploitation rates coupled with low resilience to fishing pressure have resulted in population declines worldwide. Designing conservation strategies for this group depends on basic knowledge of the geographic distribution and diversity of known species. So far, this information has been fragmented and incomplete. Here, we have synthesized the first global shark diversity pattern from a new database of published sources, including all 507 species described at present, and have identified hotspots of shark species richness, functional diversity and endemicity from these data. We have evaluated the congruence of these diversity measures and demonstrate their potential use in setting priority areas for shark conservation. Our results show that shark diversity across all species peaks on the continental shelves and at mid-latitudes (30–40 degrees N and S). Global hotspots of species richness, functional diversity and endemicity were found off Japan, Taiwan, the East and West coasts of Australia, Southeast Africa, Southeast Brazil and the Southeast USA. Moreover, some areas with low to moderate species richness, such as Southern Australia, Angola, North Chile and Western Continental Europe, stood out as places of high functional diversity. Finally, species affected by shark finning showed different patterns of diversity, with peaks closer to the Equator and a more oceanic distribution overall. Our results show that the global pattern of shark diversity differs markedly from that of terrestrial taxa and other well-studied marine taxa, and may provide guidance for spatial approaches to shark conservation. However, similar to terrestrial ecosystems, protected areas based on hotspots of diversity and endemism alone would provide insufficient means for safeguarding the diverse functional roles that sharks play in marine ecosystems.

    A posteriori error estimates for the virtual element method

    An a posteriori error analysis for the virtual element method (VEM) applied to general elliptic problems is presented. The resulting error estimator is of residual type and applies to very general polygonal/polyhedral meshes. The estimator is fully computable as it relies only on quantities available from the VEM solution, namely its degrees of freedom and element-wise polynomial projection. Upper and lower bounds of the error estimator with respect to the VEM approximation error are proven. The error estimator is used to drive adaptive mesh refinement in a number of test problems. Mesh adaptation is particularly simple to implement since elements with consecutive co-planar edges/faces are allowed and, therefore, locally adapted meshes do not require any local mesh post-processing.
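    Estimator-driven adaptivity of this kind follows the usual solve–estimate–mark–refine cycle. The marking step can be sketched with Dörfler (bulk) marking; the indicator values below are invented for illustration, and nothing here is specific to the VEM estimator of this work:

    ```python
    import numpy as np

    def dorfler_mark(eta, theta=0.5):
        """Dörfler (bulk) marking: pick a minimal set of elements whose
        squared error indicators sum to at least theta * total squared error.
        eta: per-element estimator values; returns indices of marked elements."""
        eta2 = np.asarray(eta, dtype=float) ** 2
        order = np.argsort(eta2)[::-1]              # largest indicators first
        csum = np.cumsum(eta2[order])
        k = int(np.searchsorted(csum, theta * eta2.sum())) + 1
        return order[:k]

    # Hypothetical element-wise indicators, as a VEM estimator might produce:
    eta = np.array([0.5, 0.1, 0.3, 0.05, 0.4])
    marked = dorfler_mark(eta, theta=0.6)           # elements to refine
    ```

    The marked elements are then refined; since VEM tolerates hanging nodes as co-planar edges, no closure post-processing is needed after refinement, which is the simplification the abstract highlights.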

    Estimating the Impact of Adding C-Reactive Protein as a Criterion for Lipid Lowering Treatment in the United States

    BACKGROUND: There is growing interest in using C-reactive protein (CRP) levels to help select patients for lipid lowering therapy—although this practice is not yet supported by evidence of benefit in a randomized trial. OBJECTIVE: To estimate the number of Americans potentially affected if a CRP criterion were adopted as an additional indication for lipid lowering therapy. To provide context, we also determined how well current lipid lowering guidelines are being implemented. METHODS: We analyzed nationally representative data to determine how many Americans age 35 and older meet current National Cholesterol Education Program (NCEP) treatment criteria (a combination of risk factors and their Framingham risk score). We then determined how many of the remaining individuals would meet criteria for treatment using two different CRP-based strategies: (1) narrow: treat individuals at intermediate risk (i.e., 2 or more risk factors and an estimated 10–20% risk of coronary artery disease over the next 10 years) with CRP > 3 mg/L and (2) broad: treat all individuals with CRP > 3 mg/L. DATA SOURCE: Analyses are based on the 2,778 individuals participating in the 1999–2002 National Health and Nutrition Examination Survey with complete data on cardiac risk factors, fasting lipid levels, CRP, and use of lipid lowering agents. MAIN MEASURES: The estimated number and proportion of American adults meeting NCEP criteria who take lipid-lowering drugs, and the additional number who would be eligible based on CRP testing. RESULTS: About 53 of the 153 million Americans aged 35 and older meet current NCEP criteria (that do not involve CRP) for lipid-lowering treatment. Sixty-five percent, however, are not currently being treated; even among those at highest risk (i.e., patients with established heart disease or its risk equivalent), 62% are untreated.
Adopting the narrow and broad CRP strategies would make an additional 2.1 and 25.3 million Americans eligible for treatment, respectively. The latter strategy would make over half the adults age 35 and older eligible for lipid-lowering therapy, with most of the additionally eligible (57%) coming from the lowest NCEP heart risk category (i.e., 0–1 risk factors). CONCLUSION: There is substantial underuse of lipid lowering therapy for American adults at high risk for coronary disease. Rather than adopting CRP-based strategies, which would make millions more lower risk patients eligible for treatment (and for whom treatment benefit has not yet been demonstrated in a randomized trial), we should ensure the treatment of currently defined high-risk patients for whom the benefit of therapy is established.
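    The two CRP-based expansion strategies compared above reduce to a simple decision rule. A hypothetical sketch of that logic (the function and parameter names are our own shorthand, not the study's variables):

    ```python
    def eligible_for_lipid_lowering(meets_ncep, intermediate_risk,
                                    crp_mg_per_l, strategy="narrow"):
        """Sketch of treatment eligibility under the two CRP strategies.
        meets_ncep: already eligible under current NCEP criteria.
        intermediate_risk: 2+ risk factors and 10-20% 10-year CAD risk.
        strategy: "narrow" or "broad" CRP-based expansion."""
        if meets_ncep:
            return True                    # current criteria already apply
        if crp_mg_per_l <= 3.0:
            return False                   # both strategies require CRP > 3 mg/L
        if strategy == "broad":
            return True                    # broad: treat everyone with CRP > 3 mg/L
        return intermediate_risk           # narrow: only intermediate-risk patients
    ```

    Applied to the NHANES sample, the narrow rule adds only the intermediate-risk slice with elevated CRP (2.1 million people), while the broad rule sweeps in every elevated-CRP adult regardless of risk category (25.3 million), which is why most of the additionally eligible fall in the lowest NCEP risk group.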

    Extracorporeal membrane oxygenator as a bridge to successful surgical repair of bronchopleural fistula following bilateral sequential lung transplantation: a case report and review of literature

    BACKGROUND: Lung transplantation (LTx) is widely accepted as a therapeutic option for end-stage respiratory failure in cystic fibrosis. However, airway complications remain a major cause of morbidity and mortality in these patients. Serious airway complications such as bronchopleural fistula (BPF) are rare, and their management is very difficult. CASE PRESENTATION: A 47-year-old man with end-stage respiratory failure due to cystic fibrosis underwent bilateral sequential lung transplantation. Severe post-operative bleeding occurred due to dense intrapleural adhesions of the native lungs. He was re-explored and packed, leading to satisfactory haemostasis. He developed a bronchopleural fistula on the 14th post-operative day. The fistula was successfully repaired using pericardial and intercostal vascular flaps with veno-venous extracorporeal membrane oxygenator (VV-ECMO) support. Subsequently his recovery was uneventful. CONCLUSION: The combination of pedicled intercostal and pericardial flaps provides adequate vascular tissue for sealing a large BPF following LTx. Veno-venous ECMO provides a feasible bridge to recovery.

    Resource-sharing in multiple component working memory

    Working memory research often focuses on measuring the capacity of the system and how it relates to other cognitive abilities. However, research into the structure of working memory is less concerned with an overall capacity measure than with the intricacies of underlying components and their contribution to different tasks. A number of models of working memory structure have been proposed, each with different assumptions and predictions, but none of which adequately accounts for the full range of data in the working memory literature. We report two experiments that investigated the effects of load manipulations on dual-task verbal temporary memory and spatial processing. Crucially, we manipulated cognitive load around the measured memory span of each individual participant. We report a clear effect of increasing memory load on processing accuracy, but only when memory load is increased above each participant’s measured memory span. However, increasing processing load did not affect memory performance. We argue that immediate verbal memory may rely both on a temporary phonological store and on activated traces in long-term memory, with the latter deployed to support memory performance for supraspan lists and when a high memory load is coupled with a processing task. We propose that future research should tailor load manipulations to the capacities of individual participants and suggest that contrasts between models of working memory may be more apparent than real.

    Rapidity and Centrality Dependence of Proton and Anti-proton Production from Au+Au Collisions at sqrt(sNN) = 130GeV

    We report on the rapidity and centrality dependence of proton and anti-proton transverse mass distributions from Au+Au collisions at sqrt(sNN) = 130 GeV as measured by the STAR experiment at RHIC. Our results are from the rapidity and transverse momentum range of |y| < 0.5 and 0.35 < p_T < 1.00 GeV/c. For both protons and anti-protons, transverse mass distributions become more convex from peripheral to central collisions, demonstrating characteristics of collective expansion. The measured rapidity distributions and the mean transverse momenta versus rapidity are flat within |y| < 0.5. Comparisons of our data with results from model calculations indicate that, in order to obtain a consistent picture of the proton (anti-proton) yields and transverse mass distributions, the possibility of pre-hadronic collective expansion may have to be taken into account. Comment: 4 pages, 3 figures, 1 table, submitted to PR

    Jet energy measurement with the ATLAS detector in proton-proton collisions at root s=7 TeV

    The jet energy scale and its systematic uncertainty are determined for jets measured with the ATLAS detector at the LHC in proton-proton collision data at a centre-of-mass energy of √s = 7 TeV corresponding to an integrated luminosity of 38 pb⁻¹. Jets are reconstructed with the anti-kt algorithm with distance parameters R=0.4 or R=0.6. Jet energy and angle corrections are determined from Monte Carlo simulations to calibrate jets with transverse momenta pT ≥ 20 GeV and pseudorapidities |η| < 4.5. The jet energy systematic uncertainty is estimated using the single isolated hadron response measured in situ and in test-beams, exploiting the transverse momentum balance between central and forward jets in events with dijet topologies and studying systematic variations in Monte Carlo simulations. The jet energy uncertainty is less than 2.5% in the central calorimeter region (|η| < 0.8) for jets with 60 ≤ pT < 800 GeV, and is maximally 14% for pT < 30 GeV in the most forward region 3.2 ≤ |η| < 4.5. The jet energy is validated for jet transverse momenta up to 1 TeV to the level of a few percent using several in situ techniques by comparing a well-known reference such as the recoiling photon pT, the sum of the transverse momenta of tracks associated to the jet, or a system of low-pT jets recoiling against a high-pT jet. More sophisticated jet calibration schemes are presented based on calorimeter cell energy density weighting or hadronic properties of jets, aiming for an improved jet energy resolution and a reduced flavour dependence of the jet response. The systematic uncertainty of the jet energy determined from a combination of in situ techniques is consistent with the one derived from single hadron response measurements over a wide kinematic range. The nominal corrections and uncertainties are derived for isolated jets in an inclusive sample of high-pT jets.
Special cases such as event topologies with close-by jets, or selections of samples with an enhanced content of jets originating from light quarks, heavy quarks or gluons are also discussed and the corresponding uncertainties are determined. © 2013 CERN for the benefit of the ATLAS collaboration
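    The anti-kt algorithm used for jet reconstruction repeatedly merges the pair of pseudojets with the smallest distance d_ij = min(pT,i⁻², pT,j⁻²) ΔR²/R², promoting a pseudojet to a jet when its beam distance d_iB = pT,i⁻² is the smallest measure. A minimal illustrative sketch (not the ATLAS/FastJet implementation; the pT-weighted recombination below is a simplification of true four-momentum addition):

    ```python
    import math

    def _dij(a, b, R):
        """Anti-kt pairwise distance for pseudojets a, b given as (pt, y, phi)."""
        dy = a[1] - b[1]
        dphi = abs(a[2] - b[2])
        if dphi > math.pi:
            dphi = 2 * math.pi - dphi
        return min(a[0] ** -2, b[0] ** -2) * (dy * dy + dphi * dphi) / (R * R)

    def antikt(particles, R=0.4):
        """Cluster (pt, y, phi) particles; returns the jets as (pt, y, phi)."""
        parts = [tuple(p) for p in particles]
        jets = []
        while parts:
            n = len(parts)
            # smallest beam distance d_iB = pt^-2
            ib, dmin = min(((i, parts[i][0] ** -2) for i in range(n)),
                           key=lambda t: t[1])
            # smallest pairwise distance, if below the beam distance
            pair = None
            for i in range(n):
                for j in range(i + 1, n):
                    d = _dij(parts[i], parts[j], R)
                    if d < dmin:
                        dmin, pair = d, (i, j)
            if pair is None:
                jets.append(parts.pop(ib))          # promote to a final jet
            else:
                i, j = pair
                a, b = parts[i], parts[j]
                pt = a[0] + b[0]
                # pt-weighted averages; a real implementation adds four-momenta
                y = (a[0] * a[1] + b[0] * b[1]) / pt
                phi = (a[0] * a[2] + b[0] * b[2]) / pt
                parts.pop(j); parts.pop(i)          # j > i, so pop j first
                parts.append((pt, y, phi))
        return jets

    # usage: two collinear hard particles cluster into one jet,
    # a well-separated soft particle forms its own jet
    jets = antikt([(50.0, 0.0, 0.0), (30.0, 0.1, 0.1), (20.0, 2.0, 2.0)], R=0.4)
    ```

    Because the distance is weighted by the inverse squared pT of the harder particle, soft particles cluster around hard ones first, which gives anti-kt its characteristic regular, cone-like jets and makes the R=0.4/0.6 calibrations above well defined.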