86 research outputs found

    Pilus distribution among lineages of group B Streptococcus: an evolutionary and clinical perspective

    Background: Group B Streptococcus (GBS) is an opportunistic pathogen in both humans and bovines. Epidemiological and phylogenetic analyses have found strains belonging to certain phylogenetic lineages to be more frequently associated with invasive newborn disease, asymptomatic maternal colonization, and subclinical bovine mastitis. Pilus structures in GBS facilitate colonization and invasion of host tissues and play a role in biofilm formation, though few large-scale studies have estimated the frequency and diversity of the three pilus islands (PIs) across diverse genotypes. Here, we examined the distribution of pilus islands (PI) 1, 2a and 2b among 295 GBS strains representing 73 multilocus sequence types (STs) belonging to eight clonal complexes. PCR-based RFLP was also used to evaluate variation in the genes encoding the pilus backbone proteins of PI-2a and PI-2b. Results: All 295 strains harbored one of the PI-2 variants, and most human-derived strains contained PI-1. Bovine-derived strains lacked PI-1 and possessed a unique PI-2b backbone protein allele. Neonatal strains more frequently had PI-1 and a PI-2 variant than maternal colonizing strains, and most CC-17 strains had PI-1 and PI-2b with a distinct backbone protein allele. Furthermore, we present evidence for the frequent gain and loss of genes encoding certain pilus types. Conclusions: These data suggest that pilus combinations impact host specificity and disease presentation, and that diversification often involves the loss or acquisition of PIs. Such findings have implications for the development of GBS vaccines that target the three pilus islands.

    Anticoagulant rodenticides on our public and community lands: spatial distribution of exposure and poisoning of a rare forest carnivore.

    Anticoagulant rodenticide (AR) poisoning has emerged as a significant concern for the conservation and management of non-target wildlife. These toxicants are used to suppress pest populations in agricultural or urban settings. The potential for direct and indirect exposure and the illicit use of ARs on public and community forest lands have recently raised concern for fishers (Martes pennanti), a candidate for listing under the federal Endangered Species Act in the Pacific states. To investigate the magnitude of this previously undocumented threat to fisher population persistence in the two isolated California populations, we tested 58 carcasses for the presence and quantity of ARs, conducted spatial analysis of exposed fishers in an effort to identify potential point sources of AR, and identified fishers that died directly of AR poisoning. We found that 46 of 58 (79%) fishers had been exposed to an AR, with 96% of those individuals having been exposed to one or more second-generation AR compounds. No spatial clustering of AR exposure was detected, and the spatial distribution of exposure suggests that AR contamination is widespread within the fisher's range in California, which encompasses mostly public forest and park lands. Additionally, we diagnosed four fisher deaths, including that of a lactating female, directly attributed to AR toxicosis, and documented the first neonatal or milk transfer of an AR to an altricial fisher kit. These ARs, some of which are acutely toxic, pose both a direct mortality or fitness risk to fishers and a significant indirect risk to these isolated populations. Future research should be directed towards investigating risks to the prey populations on which fishers depend, exposure in other rare forest carnivores, and potential AR point sources such as illegal marijuana cultivation in the range of fishers on California public lands.

    Patterns of Natural and Human-Caused Mortality Factors of a Rare Forest Carnivore, the Fisher (Pekania pennanti) in California.

    Wildlife populations of conservation concern are limited in distribution, population size, and persistence by various factors, including mortality. The fisher (Pekania pennanti), a North American mid-sized carnivore whose range in the western Pacific United States has contracted considerably in the past century, was proposed for threatened status in late 2014 under the United States Endangered Species Act by the United States Fish and Wildlife Service for its West Coast Distinct Population Segment. We investigated mortality in 167 fishers from two genetically and geographically distinct sub-populations in California within this West Coast Distinct Population Segment using a combination of gross necropsy, histology, toxicology, and molecular methods. Overall, the observed causes of mortality were predation (70%), natural disease (16%), toxicant poisoning (10%) and, less commonly, vehicular strike (2%) and other anthropogenic causes (2%). We documented increases in both mortality from (57% increase) and exposure to (6% increase) pesticides in fishers in just the past three years, further highlighting that toxicants from marijuana cultivation still pose a threat. Additionally, exposure to multiple rodenticides significantly increased the likelihood of mortality from rodenticide poisoning. Poisoning was significantly more common in male than in female fishers and was 7 times more likely than disease to kill males. Compared with necropsy findings, suspected causes of mortality based on field evidence alone tended to underestimate the frequency of disease-related mortalities. This study is the first comprehensive investigation of the causes of mortality in fishers and provides essential information to assist in the conservation of this species.

    Mapping Materials and Molecules

    The visualization of data is indispensable in scientific research, from the early stages when human insight forms to the final step of communicating results. In computational physics, chemistry and materials science, it can be as simple as making a scatter plot or as straightforward as looking through the snapshots of atomic positions manually. However, as a result of the “big data” revolution, these conventional approaches are often inadequate. The widespread adoption of high-throughput computation for materials discovery and the associated community-wide repositories have given rise to data sets that contain an enormous number of compounds and atomic configurations. A typical data set contains thousands to millions of atomic structures, along with a diverse range of properties such as formation energies, band gaps, or bioactivities. It would thus be desirable to have a data-driven and automated framework for visualizing and analyzing such structural data sets. The key idea is to construct a low-dimensional representation of the data, which facilitates navigation, reveals underlying patterns, and helps to identify data points with unusual attributes. Such data-intensive maps, often employing machine learning methods, are appearing more and more frequently in the literature. However, to the wider community, it is not always transparent how these maps are made and how they should be interpreted. Furthermore, while these maps undoubtedly serve a decorative purpose in academic publications, it is not always apparent what extra information can be garnered from reading or making them. This Account attempts to answer such questions. We start with a concise summary of the theory of representing chemical environments, followed by the introduction of a simple yet practical conceptual approach for generating structure maps in a generic and automated manner. Such analysis and mapping is made nearly effortless by employing the newly developed software tool ASAP. 
To showcase the applicability to a wide variety of systems in chemistry and materials science, we provide several illustrative examples, including crystalline and amorphous materials, interfaces, and organic molecules. In these examples, the maps not only help to sift through large data sets but also reveal hidden patterns that could be easily missed using conventional analyses. The explosion in the amount of computed information in chemistry and materials science has made visualization into a science in itself. Not only have we benefited from exploiting these visualization methods in previous works, but we also believe that the automated mapping of data sets will in turn stimulate further creativity and exploration, as well as ultimately feed back into future advances in the respective fields.
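
The core idea of the abstract, projecting high-dimensional structure descriptors onto a low-dimensional map, can be illustrated with a minimal sketch. This is not the ASAP implementation; random toy descriptors stand in for real chemical-environment representations, and plain principal component analysis stands in for the paper's mapping methods.

```python
import numpy as np

def pca_map(descriptors, n_components=2):
    """Project high-dimensional structure descriptors onto a low-dimensional map via PCA."""
    X = descriptors - descriptors.mean(axis=0)   # center the data
    cov = X.T @ X / (len(X) - 1)                 # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenpairs, ascending order
    top = eigvecs[:, ::-1][:, :n_components]     # leading principal directions
    return X @ top                               # 2D coordinates for plotting

# Toy data: 100 "structures", each described by a 10-dimensional feature vector
rng = np.random.default_rng(0)
descriptors = rng.normal(size=(100, 10))
coords = pca_map(descriptors)
print(coords.shape)  # (100, 2)
```

Each row of `coords` is a point on the map; coloring the points by a property such as formation energy is what turns the scatter into the kind of structure map the Account describes.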

    Development and evaluation of machine learning in whole-body magnetic resonance imaging for detecting metastases in patients with lung or colon cancer: a diagnostic test accuracy study.

    OBJECTIVES: Whole-body magnetic resonance imaging (WB-MRI) has been demonstrated to be efficient and cost-effective for cancer staging. The study aim was to develop a machine learning (ML) algorithm to improve radiologists' sensitivity and specificity for metastasis detection and reduce reading times. MATERIALS AND METHODS: A retrospective analysis of 438 prospectively collected WB-MRI scans from the multicenter Streamline studies (February 2013-September 2016) was undertaken. Disease sites were manually labeled using the Streamline reference standard. Whole-body MRI scans were randomly allocated to training and testing sets. A model for malignant lesion detection was developed based on convolutional neural networks and a 2-stage training strategy. The final algorithm generated lesion probability heat maps. Using a concurrent reader paradigm, 25 radiologists (18 experienced, 7 inexperienced in WB-MRI) were randomly allocated WB-MRI scans with or without ML support to detect malignant lesions over 2 or 3 reading rounds. Reads were undertaken in the setting of a diagnostic radiology reading room between November 2019 and March 2020. Reading times were recorded by a scribe. Prespecified analysis included sensitivity, specificity, interobserver agreement, and reading time of radiology readers to detect metastases with or without ML support. Reader performance for detection of the primary tumor was also evaluated. RESULTS: Four hundred thirty-three evaluable WB-MRI scans were allocated to algorithm training (245) or radiology testing (50 patients with metastases, from primary colon [n = 117] or lung [n = 71] cancer). Among a total of 562 reads by experienced radiologists over 2 reading rounds, per-patient specificity was 86.2% (ML) and 87.7% (non-ML) (-1.5% difference; 95% confidence interval [CI], -6.4%, 3.5%; P = 0.39). Sensitivity was 66.0% (ML) and 70.0% (non-ML) (-4.0% difference; 95% CI, -13.5%, 5.5%; P = 0.344). 
Among 161 reads by inexperienced readers, per-patient specificity in both groups was 76.3% (0% difference; 95% CI, -15.0%, 15.0%; P = 0.613), with sensitivity of 73.3% (ML) and 60.0% (non-ML) (13.3% difference; 95% CI, -7.9%, 34.5%; P = 0.313). Per-site specificity was high (>90%) for all metastatic sites and experience levels. There was high sensitivity for the detection of primary tumors (lung cancer detection rate of 98.6% with and without ML [0.0% difference; 95% CI, -2.0%, 2.0%; P = 1.00]; colon cancer detection rate of 89.0% with and 90.6% without ML [-1.7% difference; 95% CI, -5.6%, 2.2%; P = 0.65]). When combining all reads from rounds 1 and 2, reading times fell by 6.2% (95% CI, -22.8%, 10.0%) when using ML. Round 2 read times fell by 32% (95% CI, 20.8%, 42.8%) compared with round 1. Within round 2, there was a significant decrease in read time when using ML support, estimated as 286 seconds (or 11%) quicker (P = 0.0281), using regression analysis to account for reader experience, read round, and tumor type. Interobserver variance suggested moderate agreement: Cohen κ = 0.64 (95% CI, 0.47, 0.81) with ML, and Cohen κ = 0.66 (95% CI, 0.47, 0.81) without ML. CONCLUSIONS: There was no evidence of a significant difference in per-patient sensitivity and specificity for detecting metastases or the primary tumor using concurrent ML compared with standard WB-MRI. Radiology read times with or without ML support fell for round 2 reads compared with round 1, suggesting that readers familiarized themselves with the study reading method. During the second reading round, there was a significant reduction in reading time when using ML support.
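
For context, the per-patient sensitivity and specificity figures above follow the standard confusion-matrix definitions. The sketch below uses hypothetical counts (chosen only so the output matches the reported experienced-reader ML figures of 66.0% and 86.2%; they are not the study's actual tallies):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Compute sensitivity (true-positive rate) and specificity (true-negative rate)."""
    sensitivity = tp / (tp + fn)  # fraction of metastatic patients correctly flagged
    specificity = tn / (tn + fp)  # fraction of metastasis-free patients correctly cleared
    return sensitivity, specificity

# Hypothetical per-patient counts, for illustration only
sens, spec = sensitivity_specificity(tp=33, fn=17, tn=50, fp=8)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")  # sensitivity=66.0%, specificity=86.2%
```

The study's comparisons are then differences between such proportions computed with and without ML support, with confidence intervals on the difference.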

    Optimization of the dynamical properties of high-precision objective lens systems for the reduction of dynamic aberrations

    In high-performance optical systems, even small disturbances can be sufficient to put the projected image out of focus. Small stochastic excitations, for example, are a serious problem in these extremely precise opto-mechanical systems. Several approaches are conceivable to avoid, or at least reduce, this problem. One of them is the modification of the system's dynamical behavior: masses and stiffnesses are redistributed to decrease the aberrations caused by dynamical excitations. This requires a multidisciplinary optimization process, for which the basics of coupling dynamical and optical simulation methods are introduced. The optimization is based on a method for efficiently coupling the two types of simulation. In a concluding example, the rigid-body dynamics of a lithography objective is optimized with respect to its dynamical-optical behavior.
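
The mass-redistribution idea can be caricatured with a toy optimization: treat an aberration metric as a function of a single mass-distribution parameter and search for the value that minimizes it. Everything here is an assumption for illustration; the quadratic penalty and single-oscillator model below are a deliberately simplified stand-in for the paper's coupled dynamical-optical simulation, not its actual cost function.

```python
import numpy as np

def aberration_metric(mass_shift, stiffness=1.0):
    """Toy dynamical-optical cost: penalize drift of the mount's natural
    frequency away from a detuned target (hypothetical model)."""
    natural_freq = np.sqrt(stiffness / (1.0 + mass_shift))  # simple 1-DOF oscillator
    return (natural_freq - 0.8) ** 2                        # quadratic penalty vs. target

# Brute-force search over admissible mass redistributions
shifts = np.linspace(0.0, 1.0, 1001)
costs = [aberration_metric(s) for s in shifts]
best = shifts[int(np.argmin(costs))]
```

In the real problem the scalar `mass_shift` becomes many structural design variables, and evaluating the cost requires running the coupled dynamical and optical simulations rather than a closed-form formula, which is why efficient coupling of the two simulation types matters.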
