
    Automated annotation of chemical names in the literature with tunable accuracy

    Background: A significant portion of the biomedical and chemical literature refers to small molecules. The accurate identification and annotation of compound names that are relevant to the topic of a given publication can establish links between scientific publications and various chemical and life science databases. Manual annotation is the preferred method for this work because well-trained indexers can understand the topic of a paper as well as recognize key terms. However, considering the hundreds of thousands of new papers published annually, an automatic annotation system with high precision and relevance can be a useful complement to manual annotation. Results: An automated chemical name annotation system, MeSH Automated Annotations (MAA), was developed to annotate small molecule names in scientific abstracts with tunable accuracy. The system aims to reproduce the MeSH term annotations that indexers would create for biomedical and chemical literature. When automated free-text matching was compared with the manual indexing of 26 thousand MEDLINE abstracts, more than 40% of the automated annotations were false positives (FPs). To reduce the FP rate, MAA incorporates several filters that remove incorrect annotations caused by nonspecific, partial, and low-relevance chemical names. In part, relevance was measured by the position of the chemical name in the text. Tunable accuracy was obtained by adding or restricting the sections of the text scanned for chemical names. The best precision obtained was 96%, with a 28% recall rate. The best overall performance of MAA, as measured with the F statistic, was 66%, which compares favorably with other chemical name annotation systems. Conclusions: Accurate chemical name annotation can help researchers not only identify important chemical names in abstracts but also match unindexed and unstructured abstracts to chemical records. The current work is tested against MEDLINE, but the algorithm is not specific to this corpus and could be applied to papers from chemical physics, materials, polymer and environmental science, as well as patents, biological assay descriptions and other textual data.
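The trade-off between the high-precision operating point (96% precision, 28% recall) and the best F statistic (66%) quoted above can be made concrete with a short sketch. This is an illustration of the standard F-score formula, not code from the MAA system itself:

```python
def f_score(precision, recall):
    """F statistic: the harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# The abstract's high-precision operating point (96% precision,
# 28% recall) trades recall away, so its F score lands well below
# the best F statistic of 66% reported for MAA.
print(round(f_score(0.96, 0.28), 2))  # -> 0.43
```

This illustrates why "tunable accuracy" matters: restricting the scanned text sections raises precision but pushes the harmonic mean down as recall falls.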

    Clinical practice: The bleeding child. Part II: Disorders of secondary hemostasis and fibrinolysis

    Bleeding complications in children may be caused by disorders of secondary hemostasis or fibrinolysis. Characteristic features in the medical history and physical examination, especially in hemophilia, are palpable deep hematomas, bleeding into joints and muscles, and recurrent bleeding. A detailed medical and family history combined with a thorough physical examination is essential to distinguish abnormal from normal bleeding and to decide whether diagnostic laboratory evaluation is necessary. Initial laboratory tests include the prothrombin time and activated partial thromboplastin time. Knowledge of the classical coagulation cascade, with its intrinsic, extrinsic, and common pathways, is useful for identifying potential coagulation defects and deciding which additional coagulation tests should be performed.

    Investigating the effects of particle shape on normal compression and overconsolidation using DEM

    Normal compression has been simulated using discrete element modelling on a sample of breakable two-ball clumps and compared with that of spheres. In both cases the size effect on strength is assumed to be that of real silica sand. The slopes of the normal compression lines are compared and found to be consistent with the proposed equation of the normal compression line. The values of the coefficient of earth pressure at rest K0,nc are also compared and related to the critical state friction angles for the two materials. The breakable samples have then been unloaded to establish the stress ratios on unloading. At low overconsolidation ratios the values of K0 follow a well-established empirical relationship and realistic Poisson's ratios are observed. On progressive unloading both samples head towards passive failure, and the values of the critical state lines in extension in q-p' space are found to be consistent with the critical state angles deduced from the values of K0 during normal compression. The paper highlights the important role of particle shape in governing the stress ratio during both normal compression and subsequent overconsolidation.
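The "well-established empirical relationship" for K0 at low overconsolidation ratios is presumably the Jaky formula and its Mayne-Kulhawy extension to overconsolidated states; a minimal sketch under that assumption (the function names are illustrative, not from the paper):

```python
import math

def k0_nc(phi_deg):
    """Jaky's empirical coefficient of earth pressure at rest for a
    normally consolidated material: K0,nc = 1 - sin(phi)."""
    return 1.0 - math.sin(math.radians(phi_deg))

def k0_oc(phi_deg, ocr):
    """Mayne-Kulhawy extension for overconsolidated states:
    K0 = (1 - sin(phi)) * OCR ** sin(phi)."""
    s = math.sin(math.radians(phi_deg))
    return (1.0 - s) * ocr ** s

# For a critical state friction angle of 30 degrees, K0,nc = 0.5,
# and K0 rises towards passive values as OCR grows on unloading.
print(k0_nc(30.0))       # -> 0.5 (within floating-point rounding)
print(k0_oc(30.0, 4.0))  # ~1.0
```

The DEM study's observation that K0 follows this form only at low OCR, then heads towards passive failure, marks where the empirical power law stops applying.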

    Consensus Paper—ICIS Expert Meeting Basel 2009 treatment milestones in immune thrombocytopenia

    The rarity of severe complications of immune thrombocytopenia (ITP) in children makes randomized clinical trials unfeasible. Therefore, the current management recommendations for ITP are largely dependent on clinical expertise and observations. As part of its discussions during the Intercontinental Cooperative ITP Study Group Expert Meeting in Basel, the Management working group recommended that the decision to treat an ITP patient be individualized and based mainly on bleeding symptoms rather than on the actual platelet count, and that it should be supported by bleeding scores using a validated assessment tool. The group stressed the need to develop a uniform validated bleeding score system and to explore new measures to evaluate bleeding risk in thrombocytopenic patients. The role of rituximab as a splenectomy-sparing agent in resistant disease was also discussed. Given the apparently high recurrence rate after rituximab therapy in children and the drug's possible toxicity, the group felt that until more data are available, a conservative approach may be considered, reserving rituximab for patients who have failed splenectomy. More studies of the effectiveness and side effects of drugs to treat refractory patients, such as TPO mimetics, cyclosporine, mycophenolate mofetil, and cytotoxic agents, are required, as are long-term data on post-splenectomy complications. In patients with either acute or chronic ITP, a more personalized approach to treatment based on bleeding symptoms rather than platelet count should result in less toxicity and empower both physicians and families to focus on quality of life.

    Utilisation of an operative difficulty grading scale for laparoscopic cholecystectomy

    Background: A reliable system for grading the operative difficulty of laparoscopic cholecystectomy would standardise the description of findings and the reporting of outcomes. The aim of this study was to validate a difficulty grading system (the Nassar scale), testing its applicability and consistency in two large prospective datasets. Methods: Patient and disease-related variables and 30-day outcomes were identified in two prospective cholecystectomy databases: the multi-centre prospective cohort of 8820 patients from the recent CholeS Study and a single-surgeon series containing 4089 patients. Operative data and patient outcomes were correlated with the Nassar operative difficulty scale using Kendall's tau for dichotomous variables or Jonckheere-Terpstra tests for continuous variables. A ROC curve analysis was performed to quantify the predictive accuracy of the scale for each outcome, with continuous outcomes dichotomised prior to analysis. Results: A higher operative difficulty grade was consistently associated with worse outcomes for the patients in both the reference and CholeS cohorts. The median length of stay increased from 0 to 4 days, and the 30-day complication rate from 7.6% to 24.4%, as the difficulty grade increased from 1 to 4/5 (both p < 0.001). In the CholeS cohort, a higher difficulty grade was found to be most strongly associated with conversion to open surgery and 30-day mortality (AUROC = 0.903 and 0.822, respectively). On multivariable analysis, the Nassar operative difficulty scale was found to be a significant independent predictor of operative duration, conversion to open surgery, 30-day complications and 30-day reintervention (all p < 0.001). Conclusion: We have shown that an operative difficulty scale can standardise the description of operative findings by multiple grades of surgeons to facilitate audit, training assessment and research. It provides a tool for reporting operative findings, disease severity and technical difficulty, and can be utilised in future research to reliably compare outcomes according to case mix and intra-operative difficulty.
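The AUROC figures quoted for the difficulty scale summarise how well the ordinal grade separates a binary outcome; the statistic can be computed directly as a rank comparison (the Mann-Whitney formulation). The grades below are made up for illustration, not study data:

```python
def auroc(scores_pos, scores_neg):
    """AUROC as the probability that a randomly chosen positive case
    outranks a randomly chosen negative one; ties count as half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical Nassar grades (1-5) for cases that did and did not
# convert to open surgery.
converted = [4, 5, 3, 5]
not_converted = [1, 2, 1, 3, 2]
print(auroc(converted, not_converted))  # -> 0.975
```

An AUROC near 0.9, as reported for conversion to open surgery, means a case drawn from the worse-outcome group carries the higher grade about nine times out of ten.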

    Membrane connectivity estimated by digital image analysis of HER2 immunohistochemistry is concordant with visual scoring and fluorescence in situ hybridization results: algorithm evaluation on breast cancer tissue microarrays

    Introduction: The human epidermal growth factor receptor 2 (HER2) is an established biomarker for the management of patients with breast cancer. While conventional testing of HER2 protein expression is based on semi-quantitative visual scoring of the immunohistochemistry (IHC) result, efforts to reduce inter-observer variation and to produce continuous estimates of the IHC data are potentiated by digital image analysis technologies. Methods: HER2 IHC was performed on tissue microarrays (TMAs) of 195 patients with early ductal carcinoma of the breast. Digital images of the IHC slides were obtained with an Aperio ScanScope GL Slide Scanner. A membrane connectivity algorithm (HER2-CONNECT™, Visiopharm) was used for digital image analysis (DA). A pathologist evaluated the images on screen twice (visual evaluations: VE1 and VE2). HER2 fluorescence in situ hybridization (FISH) was performed on the corresponding sections of the TMAs. The agreement between the IHC HER2 scores obtained by VE1, VE2, and DA was tested for individual TMA spots and for each patient's maximum TMA spot value (VE1max, VE2max, DAmax). The latter were compared with the FISH data. Correlation of the continuous membrane connectivity estimate with the FISH data was also tested. Results: The pathologist's intra-observer agreement (VE1 and VE2) on the HER2 IHC score was almost perfect: kappa 0.91 (by spot) and 0.88 (by patient). The agreement between visual evaluation and digital image analysis was almost perfect at the spot level (kappa 0.86 and 0.87 with VE1 and VE2, respectively) and at the patient level (kappa 0.80 and 0.86 with VE1max and VE2max, respectively). The DA was more accurate than VE in detecting FISH-positive patients, recruiting 3 or 2 additional FISH-positive patients to the IHC score 2+ category from the IHC 0/1+ category relative to VE1max or VE2max, respectively. The DA continuous variable of membrane connectivity correlated with the FISH data (HER2 and CEP17 copy numbers, and the HER2/CEP17 ratio). Conclusion: HER2 IHC digital image analysis based on the membrane connectivity estimate was in almost perfect agreement with the pathologist's visual evaluation and was more accurate in detecting HER2 FISH-positive patients. The most immediate benefit of integrating the DA algorithm into routine pathology HER2 testing may be obtained by alerting/reassuring pathologists about potentially misinterpreted IHC 0/1+ versus 2+ cases.
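The kappa values reported above measure agreement beyond chance between two scorings. A minimal sketch of Cohen's kappa, using made-up HER2 IHC scores (0/1+/2+/3+) rather than the study's data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance from each rater's marginal score frequencies."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical IHC scores from two readings of the same eight spots.
ve1 = [0, 1, 2, 3, 2, 0, 1, 3]
ve2 = [0, 1, 2, 3, 3, 0, 1, 3]
print(round(cohens_kappa(ve1, ve2), 3))  # -> 0.833
```

By the conventional Landis-Koch scale, kappa above roughly 0.8 is "almost perfect" agreement, which is the band all of the study's reported values fall into.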

    Performance of CMS muon reconstruction in pp collision events at sqrt(s) = 7 TeV

    The performance of muon reconstruction, identification, and triggering in CMS has been studied using 40 inverse picobarns of data collected in pp collisions at sqrt(s) = 7 TeV at the LHC in 2010. A few benchmark sets of selection criteria covering a wide range of physics analysis needs have been examined. For all considered selections, the efficiency to reconstruct and identify a muon with a transverse momentum pT larger than a few GeV is above 95% over the whole region of pseudorapidity covered by the CMS muon system, abs(eta) < 2.4, while the probability to misidentify a hadron as a muon is well below 1%. The efficiency to trigger on single muons with pT above a few GeV is higher than 90% over the full eta range, and typically substantially better. The overall momentum scale is measured to a precision of 0.2% with muons from Z decays. The transverse momentum resolution varies from 1% to 6% depending on pseudorapidity for muons with pT below 100 GeV and, using cosmic rays, it is shown to be better than 10% in the central region up to pT = 1 TeV. Observed distributions of all quantities are well reproduced by the Monte Carlo simulation.

    Measurement of the Forward-Backward Asymmetry in the B -> K(*) mu+ mu- Decay and First Observation of the Bs -> phi mu+ mu- Decay

    We reconstruct the rare decays B+ -> K+ mu+ mu-, B0 -> K*(892)0 mu+ mu-, and Bs -> phi(1020) mu+ mu- in a data sample corresponding to 4.4 fb^-1 collected in p-pbar collisions at sqrt(s) = 1.96 TeV by the CDF II detector at the Fermilab Tevatron Collider. Using 121 +/- 16 B+ -> K+ mu+ mu- and 101 +/- 12 B0 -> K*0 mu+ mu- decays, we report the branching ratios. In addition, we report the measurement of the differential branching ratio and the muon forward-backward asymmetry in the B+ and B0 decay modes, and the K*0 longitudinal polarization in the B0 decay mode, with respect to the squared dimuon mass. These are consistent with the theoretical predictions from the standard model and with the most recent determinations from other experiments, and are of comparable accuracy. We also report the first observation of the Bs -> phi mu+ mu- decay and measure its branching ratio B(Bs -> phi mu+ mu-) = [1.44 +/- 0.33 +/- 0.46] x 10^-6 using 27 +/- 6 signal events. This is currently the rarest Bs decay observed. Submitted to Phys. Rev. Lett.

    X-ray emission from the Sombrero galaxy: discrete sources

    We present a study of discrete X-ray sources in and around the bulge-dominated, massive Sa galaxy Sombrero (M104), based on new and archival Chandra observations with a total exposure of ~200 ks. With a detection limit of L_X = 1E37 erg/s and a field of view covering a galactocentric radius of ~30 kpc (11.5 arcminutes), 383 sources are detected. Cross-correlation with Spitler et al.'s catalogue of Sombrero globular clusters (GCs), identified from HST/ACS observations, reveals 41 X-ray sources in GCs, presumably low-mass X-ray binaries (LMXBs). We quantify the differential luminosity functions (LFs) for both the detected GC and field LMXBs, whose power-law indices (~1.1 for the GC LF and ~1.6 for the field LF) are consistent with previous studies of elliptical galaxies. With precise sky positions of the GCs without a detected X-ray source, we further quantify, through a fluctuation analysis, the GC LF at fainter luminosities down to 1E35 erg/s. The derived index rules out a faint-end slope flatter than 1.1 at a 2 sigma significance, contrary to recent findings in several elliptical galaxies and the bulge of M31. On the other hand, the 2-6 keV unresolved emission places a tight constraint on the field LF, implying a flattened index of ~1.0 below 1E37 erg/s. We also detect 101 sources in the halo of Sombrero. The presence of these sources cannot be interpreted as galactic LMXBs whose spatial distribution empirically follows the starlight. Their number is also higher than the expected number of cosmic AGNs (52+/-11 [1 sigma]) whose surface density is constrained by deep X-ray surveys. We suggest that either the cosmic X-ray background is unusually high in the direction of Sombrero, or a distinct population of X-ray sources is present in the halo of Sombrero. ApJ, in press.