
    Introduction of Solid Food to Young Infants

    Timing of the first introduction of solid food during infancy may affect life-long health. This study aimed to identify the characteristics associated with the timing of infants’ initial exposure to solid foods. The 2000 National Survey of Early Childhood Health (NSECH) was a nationally representative telephone survey of 2,068 parents of children aged 4–35 months, which profiled the content and quality of health care for young children. African-American and Latino families were over-sampled. Analyses in this report include bivariate tests and logistic regressions. Sixty-two percent of parents reported introducing solids to their child at 4–6 months of age. African-American mothers (OR = 0.5 [0.3, 0.9]), English-speaking Latino mothers (OR = 0.4 [0.2, 0.7]), White mothers with more than high school education (OR = 0.5 [0.2, 1.0]), and mothers who breastfed for 4 months or longer (OR = 0.4 [0.3, 0.7]) were less likely to introduce solids early. Most parents (92%) of children 4–9 months of age reported that their pediatric provider had discussed introduction of solids with them since the child’s birth, and provider discussion of feeding was not associated with the timing of introduction of solids. Although most parents recall discussing the introduction of solid foods with their child’s physician, several subgroups of mothers introduce solid foods earlier than the AAP recommendation of 4–6 months. More effective discussion of solid food introduction, linked to counseling and support of breastfeeding by the primary health care provider, may reduce early introduction of solids.
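    As a rough illustration of the analysis described above, the sketch below fits a logistic regression for early introduction of solids and converts the coefficients to odds ratios with 95% confidence intervals. The file name and column names are hypothetical, not the NSECH codebook, and the actual survey analysis would additionally apply sampling weights.

```python
# Hypothetical sketch of the reported analysis: a logistic regression for
# early introduction of solids, summarised as odds ratios with 95% CIs.
# The input file and column names are illustrative, and survey weights
# are omitted for brevity.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nsech_subset.csv")  # hypothetical extract of the 2000 NSECH data

model = smf.logit(
    "early_solids ~ C(race_ethnicity) + C(maternal_education) + breastfed_4mo",
    data=df,
).fit()

# Exponentiate coefficients to obtain odds ratios and their 95% confidence intervals
conf = model.conf_int()
odds_ratios = pd.DataFrame({
    "OR": np.exp(model.params),
    "2.5%": np.exp(conf[0]),
    "97.5%": np.exp(conf[1]),
})
print(odds_ratios.round(2))
```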

    The Surgical Infection Society revised guidelines on the management of intra-abdominal infection

    Background: Previous evidence-based guidelines on the management of intra-abdominal infection (IAI) were published by the Surgical Infection Society (SIS) in 1992, 2002, and 2010. At the time the most recent guideline was released, the plan was to update the guideline every five years to ensure the timeliness and appropriateness of the recommendations. Methods: Based on the previous guidelines, the task force outlined a number of topics related to the treatment of patients with IAI and then developed key questions on these various topics. All questions were approached using general and specific literature searches, focusing on articles and other information published since 2008. These publications and additional materials published before 2008 were reviewed by the task force as a whole or by individual subgroups for their relevance to the individual questions. Recommendations were developed by a process of iterative consensus, with all task force members voting to accept or reject each recommendation. Grading was based on the GRADE (Grades of Recommendation Assessment, Development, and Evaluation) system; the quality of the evidence was graded as high, moderate, or weak, and the strength of the recommendation was graded as strong or weak. Review of the document was performed by members of the SIS who were not on the task force. After responses were made to all critiques, the document was approved as an official guideline of the SIS by the Executive Council. Results: This guideline summarizes the current recommendations developed by the task force on the treatment of patients who have IAI. Evidence-based recommendations have been made regarding risk assessment in individual patients; source control; the timing, selection, and duration of antimicrobial therapy; and suggested approaches to patients who fail initial therapy. Additional recommendations related to the treatment of pediatric patients with IAI have been included. Summary: The current recommendations of the SIS regarding the treatment of patients with IAI are provided in this guideline.

    Utilisation of an operative difficulty grading scale for laparoscopic cholecystectomy

    Background A reliable system for grading operative difficulty of laparoscopic cholecystectomy would standardise description of findings and reporting of outcomes. The aim of this study was to validate a difficulty grading system (Nassar scale), testing its applicability and consistency in two large prospective datasets. Methods Patient and disease-related variables and 30-day outcomes were identified in two prospective cholecystectomy databases: the multi-centre prospective cohort of 8820 patients from the recent CholeS Study and the single-surgeon series containing 4089 patients. Operative data and patient outcomes were correlated with the Nassar operative difficulty scale, using Kendall’s tau for dichotomous variables or Jonckheere–Terpstra tests for continuous variables. A ROC curve analysis was performed to quantify the predictive accuracy of the scale for each outcome, with continuous outcomes dichotomised prior to analysis. Results A higher operative difficulty grade was consistently associated with worse outcomes for the patients in both the reference and CholeS cohorts. The median length of stay increased from 0 to 4 days, and the 30-day complication rate from 7.6 to 24.4%, as the difficulty grade increased from 1 to 4/5 (both p < 0.001). In the CholeS cohort, a higher difficulty grade was found to be most strongly associated with conversion to open surgery and 30-day mortality (AUROC = 0.903 and 0.822, respectively). On multivariable analysis, the Nassar operative difficulty scale was found to be a significant independent predictor of operative duration, conversion to open surgery, 30-day complications and 30-day reintervention (all p < 0.001). Conclusion We have shown that an operative difficulty scale can standardise the description of operative findings by multiple grades of surgeons to facilitate audit, training assessment and research. It provides a tool for reporting operative findings, disease severity and technical difficulty and can be utilised in future research to reliably compare outcomes according to case mix and intra-operative difficulty.
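    The sketch below illustrates, with invented column names and a hypothetical data file, how the association and discrimination statistics mentioned above (Kendall’s tau and the AUROC for conversion to open surgery) can be computed for an ordinal difficulty grade; it is not the CholeS analysis code.

```python
# Minimal sketch, under assumed column names, of quantifying how well an
# ordinal difficulty grade (1-5) relates to and discriminates a binary
# outcome such as conversion to open surgery.
import pandas as pd
from scipy.stats import kendalltau
from sklearn.metrics import roc_auc_score

df = pd.read_csv("choles_cohort.csv")  # hypothetical dataset

# Rank association between difficulty grade and the dichotomous outcome
tau, p_value = kendalltau(df["nassar_grade"], df["conversion_to_open"])
print(f"Kendall's tau = {tau:.3f} (p = {p_value:.4f})")

# AUROC: how well the grade alone separates converted from non-converted cases
auroc = roc_auc_score(df["conversion_to_open"], df["nassar_grade"])
print(f"AUROC for conversion to open surgery = {auroc:.3f}")
```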

    Measurement of the Bottom-Strange Meson Mixing Phase in the Full CDF Data Set

    We report a measurement of the bottom-strange meson mixing phase β_s using the time evolution of B⁰_s → J/ψ(→μ⁺μ⁻) φ(→K⁺K⁻) decays in which the quark-flavor content of the bottom-strange meson is identified at production. This measurement uses the full data set of proton-antiproton collisions at √s = 1.96 TeV collected by the Collider Detector experiment at the Fermilab Tevatron, corresponding to 9.6 fb⁻¹ of integrated luminosity. We report confidence regions in the two-dimensional space of β_s and the B⁰_s decay-width difference ΔΓ_s, and measure β_s in [-π/2, -1.51] ∪ [-0.06, 0.30] ∪ [1.26, π/2] at the 68% confidence level, in agreement with the standard model expectation. Assuming the standard model value of β_s, we also determine ΔΓ_s = 0.068 ± 0.026 (stat) ± 0.009 (syst) ps⁻¹ and the mean B⁰_s lifetime, τ_s = 1.528 ± 0.019 (stat) ± 0.009 (syst) ps, which are consistent and competitive with determinations by other experiments. Comment: 8 pages, 2 figures, Phys. Rev. Lett. 109, 171802 (2012).
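    The toy sketch below is not the CDF analysis, which requires a full time-dependent angular fit with flavour tagging; it only illustrates the simplest ingredient, an unbinned maximum-likelihood fit of a mean lifetime to proper decay times, using simulated data with no resolution or acceptance effects.

```python
# Illustrative toy: unbinned maximum-likelihood fit of a mean lifetime
# to simulated proper decay times, ignoring detector resolution,
# acceptance and the CP-even/odd admixture of the real analysis.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(seed=1)
true_tau = 1.528  # ps, used only to generate toy data
decay_times = rng.exponential(true_tau, size=10_000)

def nll(tau):
    """Negative log-likelihood of an exponential decay with mean lifetime tau."""
    return np.sum(decay_times / tau + np.log(tau))

result = minimize_scalar(nll, bounds=(0.5, 3.0), method="bounded")
print(f"fitted mean lifetime = {result.x:.3f} ps")
```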

    Effects of poling and crystallinity on the dielectric properties of Pb(In1/2Nb1/2)O3-Pb(Mg1/3Nb2/3)O3-PbTiO3 at cryogenic temperatures

    The mechanisms underlying the anomalously large room-temperature piezoelectric activity of relaxor-PbTiO3 type single crystals have previously been linked to low-temperature relaxations in the piezoelectric and dielectric properties. We investigate the properties of Pb(In1/2Nb1/2)O3-Pb(Mg1/3Nb2/3)O3-PbTiO3 between 10 and 300 K using dielectric permittivity measurements. We compare results on single-crystal plates measured in the [001] and [111] directions with a polycrystalline ceramic of the same composition. Poled crystals have very different behaviour to unpoled crystals, whereas the dielectric spectrum of the polycrystalline ceramic changes very little on poling. A large, frequency-dependent dielectric relaxation is seen in the poled [001] crystal around 100 K. The relaxation is much less prominent in the [111]-cut crystal, and is not present in the polycrystalline ceramic. The unique presence of the large relaxation in poled, [001]-oriented crystals indicates that the phenomenon is not due to their relaxor nature alone. We propose that heterophase dynamics such as the motion of phase domain boundaries are responsible for both the anomalous electromechanical and dielectric behaviour.
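    As background for the frequency-dependent relaxation described above, the sketch below evaluates the single-relaxation-time (Debye) model, a standard first approximation for such behaviour; the parameter values are arbitrary illustrations and are not taken from the paper.

```python
# Minimal sketch of the Debye model of a frequency-dependent dielectric
# relaxation; parameter values are arbitrary and purely illustrative.
import numpy as np

def debye_permittivity(freq_hz, eps_inf, delta_eps, tau_s):
    """Complex relative permittivity eps(omega) = eps_inf + delta_eps / (1 + i*omega*tau)."""
    omega = 2 * np.pi * freq_hz
    return eps_inf + delta_eps / (1 + 1j * omega * tau_s)

freqs = np.logspace(2, 6, 200)  # 100 Hz to 1 MHz
eps = debye_permittivity(freqs, eps_inf=500.0, delta_eps=2000.0, tau_s=1e-4)

# With this sign convention the dielectric loss is -Im(eps); its peak sits where omega*tau = 1
loss_peak_freq = freqs[np.argmax(-eps.imag)]
print(f"dielectric loss peak near {loss_peak_freq:.0f} Hz")
```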

    Relationships Linking Amplification Level to Gene Over-Expression in Gliomas

    Background: Gene amplification is thought to promote over-expression of genes favouring tumour development. Because amplified regions are usually megabase-long, amplification often concerns numerous syntenic or non-syntenic genes, among which only a subset is over-expressed. The rationale for these differences remains poorly understood. Methodology/Principal Findings: To address this question, we used quantitative RT-PCR to determine the expression level of a series of co-amplified genes in five xenografted human gliomas and one fresh human glioma. These gliomas were chosen because we have previously characterised in detail the genetic content of their amplicons. In all cases, the amplified sequences lie on extra-chromosomal DNA molecules, as commonly observed in gliomas. We show here that genes transcribed in non-amplified gliomas are over-expressed when amplified, roughly in proportion to their copy number, while non-expressed genes remain inactive. When specific antibodies were available, we also compared protein expression in amplified and non-amplified tumours. We found that protein accumulation barely correlates with the level of mRNA expression in some of these tumours. Conclusions/Significance: Here we show that the tissue-specific pattern of gene expression is maintained upon amplification in gliomas. Our study relies on a single type of tumour and a limited number of cases. However, it strongly suggests that, even when amplified, genes that are normally silent in a given cell type play no role in tumour progression.
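    The sketch below shows, with invented gene names and numbers, the kind of comparison described above: relative expression from qRT-PCR (via the 2^-ΔΔCt method) set against DNA copy number for a handful of co-amplified genes.

```python
# Hedged sketch with invented data: relative expression from qRT-PCR
# (2^-ddCt) compared against DNA copy number for co-amplified genes.
import numpy as np
from scipy.stats import spearmanr

genes = ["geneA", "geneB", "geneC", "geneD", "geneE"]     # hypothetical genes
copy_number = np.array([2, 8, 15, 22, 30])                # estimated amplicon copies
ddct = np.array([0.1, -2.8, -3.7, -4.3, -4.9])            # delta-delta Ct vs non-amplified tumour

relative_expression = 2.0 ** (-ddct)                      # fold change over the non-amplified reference

rho, p = spearmanr(copy_number, relative_expression)
for gene, cn, fold in zip(genes, copy_number, relative_expression):
    print(f"{gene}: {cn} copies, {fold:.1f}-fold expression")
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```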

    Mycobacterium tuberculosis Rv3586 (DacA) Is a Diadenylate Cyclase That Converts ATP or ADP into c-di-AMP

    Cyclic diguanosine monophosphate (c-di-GMP) and cyclic diadenosine monophosphate (c-di-AMP) are recently identified signaling molecules. c-di-GMP has been shown to play important roles in bacterial pathogenesis, whereas information about c-di-AMP remains very limited. Mycobacterium tuberculosis Rv3586 (DacA), which is an ortholog of Bacillus subtilis DisA, is a putative diadenylate cyclase. In this study, we determined the enzymatic activity of DacA in vitro using high-performance liquid chromatography (HPLC), mass spectrometry (MS) and thin-layer chromatography (TLC). Our results showed that DacA was mainly a diadenylate cyclase, resembling DisA. In addition, DacA also exhibited residual ATPase and ADPase activity in vitro. Among the potential substrates tested, DacA was able to utilize both ATP and ADP, but not AMP, pApA, c-di-AMP or GTP. By using gel filtration and analytical ultracentrifugation, we further demonstrated that DacA existed as an octamer, with the N-terminal domain contributing to tetramerization and the C-terminal domain providing additional dimerization. Both the N-terminal and the C-terminal domains were essential for DacA's enzymatically active conformation. The diadenylate cyclase activity of DacA was dependent on divalent metal ions such as Mg2+, Mn2+ or Co2+. DacA was more active at basic pH than at acidic pH. The conserved RHR motif in DacA was essential for interacting with ATP, and mutation of this motif to AAA completely abolished DacA's diadenylate cyclase activity. These results provide the molecular basis for designating DacA as a diadenylate cyclase. Our future studies will explore the biological function of this enzyme in M. tuberculosis.

    Modeling Magnification and Anisotropy in the Primate Foveal Confluence

    A basic organizational principle of the primate visual system is that it maps the visual environment repeatedly and retinotopically onto cortex. Simple algebraic models can be used to describe the projection from visual space to cortical space not only for V1, but also for the complex of areas V1, V2 and V3. Typically, a conformal (angle-preserving) projection ensuring local isotropy is regarded as ideal, and primate visual cortex is often treated as an approximation of this ideal. However, empirical data show systematic deviations from this ideal that are especially relevant in the foveal projection. The aims of this study were to map the nature of anisotropy predicted by existing models, to investigate the optimization targets faced by different types of retino-cortical maps, and finally to propose a novel map that better models empirical data than other candidates. The retino-cortical map can be optimized towards a space-conserving homogeneous representation or a quasi-conformal mapping. The latter would require a significantly enlarged representation of specific parts of the cortical maps. In particular, it would require significant enlargement of parafoveal V2 and V3, which is not supported by empirical data. Further, the recently published principal layout of the foveal singularity cannot be explained by existing models. We suggest a new model that accurately describes foveal data, minimizing cortical surface area in the periphery but suggesting that local isotropy dominates the most foveal part at the expense of additional cortical surface. The foveal confluence is an important example of the detailed trade-offs required for the mapping of environmental space to a complex of neighboring cortical areas. Our models demonstrate that the organization follows clear morphogenetic principles that are essential for our understanding of foveal vision in daily life.
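    For concreteness, the sketch below evaluates the classic complex-logarithm ("monopole") model of the V1 retino-cortical map, one of the simple algebraic models referred to above; the constants k and a are illustrative values, and this is a single-area map, not the V1/V2/V3 complex or the new model proposed in the paper.

```python
# Sketch of the classic monopole model of the retino-cortical map,
# w = k * ln(z + a); constants are illustrative, V1 only.
import numpy as np

def retino_cortical_map(eccentricity_deg, polar_angle_rad, k=15.0, a=0.7):
    """Map a visual-field location (eccentricity in deg, polar angle in rad)
    to cortical coordinates in mm using w = k * ln(z + a)."""
    z = eccentricity_deg * np.exp(1j * polar_angle_rad)  # visual field as a complex number
    w = k * np.log(z + a)
    return w.real, w.imag

# Cortical magnification falls with eccentricity: positions along the horizontal meridian
for ecc in (0.5, 1.0, 5.0, 10.0):
    x, y = retino_cortical_map(ecc, 0.0)
    print(f"{ecc:5.1f} deg eccentricity -> {x:6.2f} mm along the cortical meridian")
```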