141 research outputs found

    Protein C Inhibitor—A Novel Antimicrobial Agent

    Protein C inhibitor (PCI) is a heparin-binding serine proteinase inhibitor belonging to the serpin family of proteins. Here we show that PCI exerts broad antimicrobial activity against bacterial pathogens. This ability is mediated by the interaction of PCI with lipid membranes, which subsequently leads to their permeabilization. As shown by negative staining electron microscopy, treatment of Escherichia coli or Streptococcus pyogenes bacteria with PCI triggers membrane disruption followed by the efflux of bacterial cytosolic contents and bacterial killing. The antimicrobial activity of PCI is localized to the heparin-binding site of the protein, and a peptide spanning this region was found to mimic the antimicrobial activity of PCI without causing lysis or membrane destruction of eukaryotic cells. Finally, we show that platelets can assemble PCI on their surface upon activation. As platelets are recruited to the site of a bacterial infection, these results may explain our finding that PCI levels are increased in tissue biopsies from patients suffering from necrotizing fasciitis caused by S. pyogenes. Taken together, our data describe a new function for PCI in innate immunity.

    Being user-oriented: convergences, divergences, and the potentials for systematic dialogue between disciplines and between researchers, designers, and providers

    The challenge this panel addresses is drawn from intersecting literature reviews and critical commentaries focusing on: 1) user studies in multiple fields; and 2) the difficulties of bringing different disciplines and perspectives to bear on user‐oriented research, design, and practice.[1] The challenge is that while we have made some progress in collaborative work, we have some distance to go to become user‐oriented in inter‐disciplinary and inter‐perspective ways. The varieties of our approaches and solutions are, as some observers suggest, an increasing cacophony. One major difficulty is that most discussions are solution‐oriented, offering arguments of this sort ‐‐ if only we addressed users in this way… Each solution becomes yet another addition to the cacophony. This panel implements a central approach documented for its utility by communication researchers and long used by communication mediators and negotiators ‐‐ that of focusing not on communication but rather on meta‐communication: communicating about communication. The intent in the context of this panel is to help us refocus attention from too-frequent polarizations between alternative solutions to the possibility of coming to understand what is behind the alternatives and where they point to experientially based convergences and divergences, both of which might potentially contribute to synergies. The background project for this panel comes from a series of in‐depth interviews with expert researchers, designers, and providers in three field groupings ‐‐ library and information science; human‐computer interaction/information technology; and communication and media studies. One set of interviews involved 5‐hour focus groups with directors of academic and public libraries serving 44 colleges and universities in central Ohio; the second involved one‐on‐one interviews averaging 50 minutes with 81 nationally and internationally known experts in the three fields, 25‐27 interviews per field. Using Dervin's Sense‐Making Methodological approach to interviewing, the expert interviews of both kinds asked each interviewee: what he/she considered to be the big unanswered questions about users and what explained why the questions have not been answered; and what he/she saw as hindering versus helping in attempts to communicate about users across disciplinary and perspective gaps.[2] The panel consists of six teams, two from each field. Prior to the panel presentation at ASIST, each team will have read the set of interviews and completed impressionistic essays on what patterns and themes they saw as emerging. At this stage, team members will purposively not homogenize their differences, and most will write solo‐authored essays that will be placed on a website accessible to ASIST members prior to the November meeting. In addition, at least one systematic analysis will be completed and available online.[3] At the ASIST panel, each team's leader will present a brief and intentionally provocative impressionist account of what his/her team came to understand about our struggles communicating across fields and perspectives about users. Again, each team will purposively not homogenize its own differences in viewpoints, but rather highlight them as fodder for discussion. A major purpose will be to invite audience members to join the panel in discussion. At least 20 minutes will be left open for this purpose.

    Exploring and linking biomedical resources through multidimensional semantic spaces

    Background The semantic integration of biomedical resources, which is required for effective information processing and data analysis, remains a challenging issue. The availability of comprehensive knowledge resources such as biomedical ontologies and integrated thesauri greatly facilitates this integration effort by means of semantic annotation, which allows disparate data formats and contents to be expressed under a common semantic space. In this paper, we propose a multidimensional representation for such a semantic space, where dimensions correspond to the different perspectives in biomedical research (e.g., population, disease, anatomy and protein/genes). Results This paper presents a novel method for building multidimensional semantic spaces from semantically annotated biomedical data collections. This method consists of two main processes: knowledge normalization and data normalization. The former arranges the concepts provided by a reference knowledge resource (e.g., biomedical ontologies and thesauri) into a set of hierarchical dimensions for analysis purposes. The latter reduces the annotation set associated with each collection item to a set of points in the multidimensional space. Additionally, we have developed a visual tool, called 3D-Browser, which implements OLAP-like operators over the generated multidimensional space. The method and the tool have been tested and evaluated in the context of the Health-e-Child (HeC) project. Automatic semantic annotation was applied to tag three collections of abstracts taken from PubMed, one for each target disease of the project, the Uniprot database, and the HeC patient record database. We adopted the UMLS Metathesaurus 2010AA as the reference knowledge resource. Conclusions Current knowledge resources and semantic-aware technology make the integration of biomedical resources possible. This integration is performed through semantic annotation of the intended biomedical data resources. This paper shows how these annotations can be exploited for integration, exploration, and analysis tasks. Results over a real scenario demonstrate the viability and usefulness of the approach, as well as the quality of the generated multidimensional semantic spaces.
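
    As a rough illustration of the data-normalization step described above (a sketch, not the authors' implementation), the snippet below reduces a hypothetical item's annotation set to one point per analysis dimension; the dimension hierarchies and concept identifiers are invented placeholders rather than UMLS content.

        # Minimal sketch of data normalization: project an annotated item onto
        # analysis dimensions. Dimension mappings below are illustrative, not UMLS.
        from typing import Dict, List, Set

        DIMENSIONS: Dict[str, Dict[str, str]] = {
            # concept -> ancestor concept at the chosen analysis level
            "disease": {"C_epilepsy": "neurological disorder",
                        "C_cardiomyopathy": "heart disease"},
            "anatomy": {"C_hippocampus": "brain", "C_left_ventricle": "heart"},
        }

        def normalize(annotations: Set[str]) -> Dict[str, List[str]]:
            """Map an item's annotation set to the analysis-level concepts it
            hits in each dimension, yielding its coordinates in the space."""
            point: Dict[str, List[str]] = {}
            for dim, mapping in DIMENSIONS.items():
                hits = sorted({mapping[c] for c in annotations if c in mapping})
                if hits:
                    point[dim] = hits
            return point

        # e.g. a PubMed abstract annotated with two concepts
        print(normalize({"C_epilepsy", "C_hippocampus"}))
        # {'anatomy': ['brain'], 'disease': ['neurological disorder']}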

    Heterozygous Yeast Deletion Collection Screens Reveal Essential Targets of Hsp90

    Hsp90 is an essential eukaryotic chaperone with a role in folding specific “client” proteins such as kinases and hormone receptors. Previously performed homozygous diploid yeast deletion collection screens uncovered broad requirements for Hsp90 in cellular transport and cell cycle progression. These screens also revealed that the requisite cellular functions of Hsp90 change with growth temperature. We present here for the first time the results of heterozygous deletion collection screens conducted at the hypothermic stress temperature of 15°C. Extensive bioinformatic analyses were performed on the resulting data in combination with data from homozygous and heterozygous screens previously conducted at normal (30°C) and hyperthermic stress (37°C) growth temperatures. Our resulting meta-analysis uncovered extensive connections between Hsp90 and (1) general transcription, (2) ribosome biogenesis and (3) GTP binding proteins. Predictions from bioinformatic analyses were tested experimentally, supporting a role for Hsp90 in ribosome stability. Importantly, the integrated analysis of the 15°C heterozygous deletion pool screen with the previously conducted 30°C and 37°C screens allows essential genetic targets of Hsp90 to emerge. Altogether, these novel contributions enable a more complete picture of essential Hsp90 functions.

    Planck 2013 results. IX. HFI spectral response

    The Planck High Frequency Instrument (HFI) spectral response was determined through a series of ground-based tests conducted with the HFI focal plane in a cryogenic environment prior to launch. The main goal of the spectral transmission tests was to measure the relative spectral response (including out-of-band signal rejection) of all HFI detectors. This was determined by measuring the output of a continuously scanned Fourier transform spectrometer coupled with all HFI detectors. As there is no on-board spectrometer within HFI, the ground-based spectral response experiments provide the definitive data set for the relative spectral calibration of the HFI. The spectral response of the HFI is used in Planck data analysis and component separation; this includes extraction of CO emission observed within Planck bands, dust emission, Sunyaev-Zeldovich sources, and intensity-to-polarization leakage. The HFI spectral response data have also been used to provide unit conversion and colour correction analysis tools. Verifications of the HFI spectral response data are provided through comparisons with photometric HFI flight data. This validation includes use of HFI zodiacal emission observations to demonstrate out-of-band spectral signal rejection better than 10^8. The accuracy of the HFI relative spectral response data is verified through comparison with complementary flight-data-based unit conversion coefficients and colour correction coefficients. These coefficients include those based upon HFI observations of CO, dust, and Sunyaev-Zeldovich emission. General agreement is observed between the ground-based spectral characterization of HFI and corresponding in-flight observations, within the quoted uncertainty of each; explanations are provided for any discrepancies. Comment: 27 pages, 28 figures; one of the papers associated with the 2013 Planck data release.
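
    To make the colour-correction idea concrete, here is a small numerical sketch under stated assumptions: the passband is a toy Gaussian rather than measured HFI transmission, and the reference spectrum (IRAS convention, I_nu proportional to nu^-1) and the direction of the correction are assumptions chosen for illustration, not the paper's exact definitions.

        # Toy colour-correction coefficient from a band transmission curve tau(nu).
        # The passband, reference convention, and correction direction are
        # illustrative assumptions only, not HFI data or the paper's formulas.
        import numpy as np

        def colour_correction(nu, tau, nu_c, alpha_source, alpha_ref=-1.0):
            """Ratio of the band-integrated reference power law (nu^alpha_ref)
            to the band-integrated source power law (nu^alpha_source)."""
            dnu = np.gradient(nu)  # bin widths; handles non-uniform grids
            ref = np.sum(tau * (nu / nu_c) ** alpha_ref * dnu)
            src = np.sum(tau * (nu / nu_c) ** alpha_source * dnu)
            return ref / src

        nu = np.linspace(80e9, 130e9, 2000)             # Hz, a 100 GHz-like band
        tau = np.exp(-0.5 * ((nu - 100e9) / 8e9) ** 2)  # toy transmission profile
        print(colour_correction(nu, tau, nu_c=100e9, alpha_source=2.0))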

    Homoplastic microinversions and the avian tree of life

    Background: Microinversions are cytologically undetectable inversions of DNA sequences that accumulate slowly in genomes. Like many other rare genomic changes (RGCs), microinversions are thought to be virtually homoplasy-free evolutionary characters, suggesting that they may be very useful for difficult phylogenetic problems such as the avian tree of life. However, few detailed surveys of these genomic rearrangements have been conducted, making it difficult to assess this hypothesis or understand the impact of microinversions upon genome evolution. Results: We surveyed non-coding sequence data from a recent avian phylogenetic study and found substantially more microinversions than expected based upon prior information about vertebrate inversion rates, although this is likely due to underestimation of these rates in previous studies. Most microinversions were lineage-specific or united well-accepted groups. However, some homoplastic microinversions were evident among the informative characters. Hemiplasy, which reflects differences between gene trees and the species tree, did not explain the observed homoplasy. Two specific loci were microinversion hotspots, with high numbers of inversions that included the homoplastic microinversions as well as some overlapping microinversions. Neither stem-loop structures nor detectable sequence motifs were associated with microinversions in the hotspots. Conclusions: Microinversions can provide valuable phylogenetic information, although power analyses indicate

    Finding Diagnostically Useful Patterns in Quantitative Phenotypic Data.

    Trio-based whole-exome sequence (WES) data have established confident genetic diagnoses in ∼40% of previously undiagnosed individuals recruited to the Deciphering Developmental Disorders (DDD) study. Here we aim to use the breadth of phenotypic information recorded in DDD to augment diagnosis and disease variant discovery in probands. Median Euclidean distances (mEuD) were employed as a simple measure of similarity of quantitative phenotypic data within sets of ≥10 individuals with plausibly causative de novo mutations (DNM) in 28 different developmental disorder genes. 13/28 (46.4%) showed significant similarity for growth or developmental milestone metrics, 10/28 (35.7%) showed similarity in HPO term usage, and 12/28 (43%) showed no phenotypic similarity. Pairwise comparisons of individuals with high-impact inherited variants to the 32 individuals with causative DNM in ANKRD11 using only growth z-scores highlighted 5 likely causative inherited variants and two unrecognized DNM, resulting in an 18% diagnostic uplift for this gene. Using an independent approach, naive Bayes classification of growth and developmental data produced reasonably discriminative models for the 24 DNM genes with sufficiently complete data. An unsupervised naive Bayes classification of 6,993 probands with WES data and sufficient phenotypic information defined 23 in silico syndromes (ISSs) and was used to test a "phenotype first" approach to the discovery of causative genotypes using WES variants strictly filtered on allele frequency, mutation consequence, and evidence of constraint in humans. This highlighted heterozygous de novo nonsynonymous variants in SPTBN2 as causative in three DDD probands.
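
    The median-Euclidean-distance idea lends itself to a short sketch; the growth z-score vectors below are synthetic placeholders rather than DDD data, and the exact DDD computation may differ in detail.

        # Median pairwise Euclidean distance between individuals' z-score vectors,
        # used here as a simple phenotypic-similarity measure. Data are synthetic.
        from itertools import combinations
        import numpy as np

        def median_euclidean_distance(z_scores: np.ndarray) -> float:
            """Median of all pairwise Euclidean distances between rows."""
            dists = [np.linalg.norm(a - b) for a, b in combinations(z_scores, 2)]
            return float(np.median(dists))

        # e.g. height, weight and head-circumference z-scores for five probands
        cohort = np.array([[-2.1, -1.8, -2.5],
                           [-1.9, -2.2, -2.3],
                           [-2.4, -1.5, -2.8],
                           [ 0.3,  0.1, -0.2],
                           [-2.0, -1.9, -2.6]])
        print(median_euclidean_distance(cohort))
        # a low median relative to random pairs suggests phenotypic similarity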

    Multi-messenger observations of a binary neutron star merger

    On 2017 August 17 a binary neutron star coalescence candidate (later designated GW170817) with merger time 12:41:04 UTC was observed through gravitational waves by the Advanced LIGO and Advanced Virgo detectors. The Fermi Gamma-ray Burst Monitor independently detected a gamma-ray burst (GRB 170817A) with a time delay of ~1.7 s with respect to the merger time. From the gravitational-wave signal, the source was initially localized to a sky region of 31 deg² at a luminosity distance of 40 (+8/−8) Mpc and with component masses consistent with neutron stars. The component masses were later measured to be in the range 0.86 to 2.26 M⊙. An extensive observing campaign was launched across the electromagnetic spectrum leading to the discovery of a bright optical transient (SSS17a, now with the IAU identification of AT 2017gfo) in NGC 4993 (at ~40 Mpc) less than 11 hours after the merger by the One-Meter, Two Hemisphere (1M2H) team using the 1 m Swope Telescope. The optical transient was independently detected by multiple teams within an hour. Subsequent observations targeted the object and its environment. Early ultraviolet observations revealed a blue transient that faded within 48 hours. Optical and infrared observations showed a redward evolution over ~10 days. Following early non-detections, X-ray and radio emission were discovered at the transient’s position ~9 and ~16 days, respectively, after the merger. Both the X-ray and radio emission likely arise from a physical process that is distinct from the one that generates the UV/optical/near-infrared emission. No ultra-high-energy gamma-rays and no neutrino candidates consistent with the source were found in follow-up searches. These observations support the hypothesis that GW170817 was produced by the merger of two neutron stars in NGC 4993 followed by a short gamma-ray burst (GRB 170817A) and a kilonova/macronova powered by the radioactive decay of r-process nuclei synthesized in the ejecta.