
    Mechanism of Action of Surface Immobilized Antimicrobial Peptides Against Pseudomonas aeruginosa

    Bacterial colonization and biofilm development on medical devices can lead to infection. Antimicrobial peptide-coated surfaces may prevent such infections. Melimine and Mel4 are chimeric cationic peptides that show broad-spectrum antimicrobial activity when attached to biomaterials; they are highly biocompatible in animal models and have been tested in Phase I and II/III human clinical trials. These peptides were covalently attached to glass using an azidobenzoic acid linker, and attachment was confirmed by X-ray photoelectron spectroscopy and amino acid analysis. When bound to glass, Mel4 adopted a more ordered structure in the presence of bacterial membrane-mimetic lipids. The ability of the surface-bound peptides to neutralize endotoxin was measured, and their interactions with the bacterial cytoplasmic membrane were analyzed using DiSC(3)-5, Sytox Green, Syto-9, and PI dyes with fluorescence microscopy. Leakage of ATP and nucleic acids from cells was determined by analyzing the surrounding fluid. Attachment of the peptides increased the percentage of nitrogen by 3.0% and 2.4%, and amino acid concentrations to 0.237 nmol and 0.298 nmol per coverslip, on melimine- and Mel4-coated surfaces, respectively. The immobilized peptides bound lipopolysaccharide and disrupted the cytoplasmic membrane potential of Pseudomonas aeruginosa within 15 min. Membrane depolarization was associated with reductions in bacterial viability of 82% and 63% for melimine and Mel4 coatings, respectively (p < 0.001). Disruption of membrane potential was followed by leakage of ATP from melimine- (1.5 ± 0.4 nM) and Mel4-coated (1.3 ± 0.2 nM) surfaces compared with uncoated glass after 2 h (p < 0.001). Sytox Green influx started after 3 h of incubation with either peptide. After 4 h, melimine coatings yielded 59% and Mel4 coatings 36% PI-stained cells. Release of the larger molecules (DNA/RNA) commenced after 4 h for melimine (1.8 ± 0.9 times more than control; p = 0.008) and after 6 h for Mel4 (2.1 ± 0.2 times more than control; p < 0.001). The mechanism of action of surface-bound melimine and Mel4 was similar to that of the peptides in solution; however, immobilization resulted in much slower (approximately 30-fold) kinetics.

    Single and double qubit gates by manipulating degeneracy

    A novel mechanism is proposed for single- and double-qubit state manipulations in quantum computation with four-fold degenerate energy levels. The principle is to start with a four-fold degeneracy, lift it stepwise and adiabatically with a set of control parameters, and perform the quantum gate operations on the resulting non-degenerate states. A particular realization of the proposed mechanism is suggested using inductively coupled rf-SQUID loops in the macroscopic quantum tunnelling regime, where the energy eigenlevels are directly connected with the measurable flux states. One-qubit and two-qubit controlled operations are demonstrated explicitly. The appearance of the flux states also allows precise read-in and read-out operations by measurement of the flux.
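The stepwise adiabatic lifting of a degeneracy can be sketched numerically. The following is a minimal toy model, not the paper's rf-SQUID Hamiltonian: four levels that are degenerate at control parameter lam = 0 are split as lam grows, and a slow ramp keeps the system in an instantaneous eigenstate, where gate operations could then act.

```python
import numpy as np

# Toy illustration of the principle, not the paper's rf-SQUID Hamiltonian:
# four levels, degenerate on the diagonal at lam = 0, are split stepwise as
# lam grows; a slow ramp keeps the state in an instantaneous eigenstate.

def hamiltonian(lam):
    splitting = np.diag([0.0, 1.0, 2.0, 3.0])    # degeneracy lifted by lam
    coupling = 0.2 * (np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1))
    return lam * splitting + coupling

def step(psi, H, dt):
    # Exact one-step propagator exp(-i H dt) via eigendecomposition of H
    w, v = np.linalg.eigh(H)
    return v @ (np.exp(-1j * w * dt) * (v.conj().T @ psi))

T, n_steps = 500.0, 5000                          # total ramp time, resolution
psi = np.linalg.eigh(hamiltonian(0.0))[1][:, 0].astype(complex)  # ground state
for k in range(n_steps):
    lam = (k + 0.5) / n_steps                     # slow ramp of the control, 0 -> 1
    psi = step(psi, hamiltonian(lam), T / n_steps)

ground = np.linalg.eigh(hamiltonian(1.0))[1][:, 0]
fidelity = abs(np.vdot(ground, psi)) ** 2
print(f"overlap with final instantaneous ground state: {fidelity:.3f}")
```

Shortening T degrades the overlap: the tradeoff between ramp speed and the size of the energy gaps is what any adiabatic gate scheme must respect.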

    Probe-configuration dependent dephasing in a mesoscopic interferometer

    Dephasing in a ballistic four-terminal Aharonov-Bohm geometry due to charge and voltage fluctuations is investigated. Treating two terminals as voltage probes, we find a strong dependence of the dephasing rate on the probe configuration in agreement with a recent experiment by Kobayashi et al. (J. Phys. Soc. Jpn. 71, 2094 (2002)). Voltage fluctuations in the measurement circuit are shown to be the source of the configuration dependence.

    Chapter 6 - Assessing transformation pathways

    Stabilizing greenhouse gas (GHG) concentrations at any level will require deep reductions in GHG emissions. Net global CO2 emissions, in particular, must eventually be brought to or below zero. Emissions reductions of this magnitude will require large-scale transformations in human societies, from the way that we produce and consume energy to how we use the land surface. The more ambitious the stabilization goal, the more rapidly this transformation must occur. A natural question in this context is what will be the transformation pathway toward stabilization; that is, how do we get from here to there? The topic of this chapter is transformation pathways. The chapter is motivated primarily by three questions. First, what are the near-term and future choices that define transformation pathways including, for example, the goal itself, the emissions pathway to the goal, the technologies used for and sectors contributing to mitigation, the nature of international coordination, and mitigation policies? Second, what are the key decision-making outcomes of different transformation pathways, including the magnitude and international distribution of economic costs and the implications for other policy objectives such as those associated with sustainable development? Third, how will actions taken today influence the options that might be available in the future? Two concepts are particularly important for framing any answers to these questions. The first is that there is no single pathway to stabilization of GHG concentrations at any level. Instead, the literature elucidates a wide range of transformation pathways. Choices will govern which pathway is followed. 
These choices include, among other things, the long-term stabilization goal, the emissions pathway to meet that goal, the degree to which concentrations might temporarily overshoot the goal, the technologies that will be deployed to reduce emissions, the degree to which mitigation is coordinated across countries, the policy approaches used to achieve these goals within and across countries, the treatment of land use, and the manner in which mitigation is meshed with other policy objectives such as sustainable development. The second concept is that transformation pathways can be distinguished from one another in important ways. Weighing the characteristics of different pathways is the way in which deliberative decisions about transformation pathways would be made. Although measures of aggregate economic implications have often been put forward as key deliberative decision-making factors, these are far from the only characteristics that matter for making good decisions. Transformation pathways inherently involve a range of tradeoffs that link to other national and policy objectives such as energy and food security, the distribution of economic costs, local air pollution, other environmental factors associated with different technology solutions (e.g., nuclear power, coal-fired carbon dioxide capture and storage (CCS)), and economic competitiveness. Many of these fall under the umbrella of sustainable development. A question that is often raised about particular stabilization goals and transformation pathways to those goals is whether the goals or pathways are "feasible". In many circumstances, there are clear physical constraints that can render particular long-term goals physically impossible. For example, if additional mitigation beyond that of today is delayed to a large enough degree and carbon dioxide removal (CDR) options are not available (see Section 6.9), a goal of reaching 450 ppm CO2eq by the end of the 21st century can be physically impossible. 
However, in many cases, statements about feasibility are bound up in subjective assessments of the degree to which other characteristics of particular transformation pathways might influence the ability or desire of human societies to follow them. Important characteristics include economic implications, social acceptance of new technologies that underpin particular transformation pathways, the rapidity at which social and technological systems would need to change to follow particular pathways, political feasibility, and linkages to other national objectives. A primary goal of this chapter is to illuminate these characteristics of transformation pathways.

    The Gain Reduction Method for manual tracking of radio-tagged fish in streams

    Background: Manual tracking has been used since the 1970s as an effective radio telemetry approach for evaluating habitat use of fish in fluvial systems. Radio tags are often located by continually reducing the receiver gain when approaching a tag along a watercourse to estimate its location, termed here the 'Gain Reduction Method'. However, to our knowledge the accuracy of this method has not been empirically evaluated and reported in the literature. Here, the longitudinal and lateral positional errors of radio tags (i.e. the differences between the estimated and true tag positions along and across the stream) are assessed when applying the Gain Reduction Method in a small stream environment.
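The longitudinal/lateral decomposition of positional error can be sketched as follows. This is a minimal example with hypothetical coordinates and function names, not the study's code: the vector from the true tag position to the estimated position is projected onto the local stream axis and its perpendicular.

```python
import math

# Minimal sketch (hypothetical coordinates, not the study's code):
# decompose the error between a tag's true and estimated positions into
# longitudinal (along-stream) and lateral (across-stream) components.

def positional_errors(true_xy, est_xy, stream_bearing_deg):
    """Return (longitudinal, lateral) error in metres.

    true_xy, est_xy: (x, y) positions in a local grid.
    stream_bearing_deg: direction of stream flow, degrees from the x-axis.
    """
    dx = est_xy[0] - true_xy[0]
    dy = est_xy[1] - true_xy[1]
    theta = math.radians(stream_bearing_deg)
    ux, uy = math.cos(theta), math.sin(theta)   # unit vector along the stream
    longitudinal = dx * ux + dy * uy            # projection onto stream axis
    lateral = -dx * uy + dy * ux                # perpendicular component
    return longitudinal, lateral

# Tag estimated 3 m downstream and 1 m toward the bank of its true position
lon_err, lat_err = positional_errors((0.0, 0.0), (3.0, 1.0), stream_bearing_deg=0.0)
print(lon_err, lat_err)   # 3.0 1.0 when the stream runs along the x-axis
```

Averaging the absolute values of these two components over repeated trials gives the kind of accuracy summary the study reports.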

    Debris disk size distributions: steady state collisional evolution with P-R drag and other loss processes

    We present a new scheme for determining the shape of the size distribution, and its evolution, for collisional cascades of planetesimals undergoing destructive collisions and loss processes like Poynting-Robertson (P-R) drag. The scheme treats the steady-state portion of the cascade by equating mass loss and gain in each size bin; the smallest particles are expected to reach steady state on their collision timescale, while larger particles retain their primordial distribution. For collision-dominated disks, steady state means that mass loss rates in logarithmic size bins are independent of size. This prescription reproduces the expected two-phase size distribution, with ripples above the blow-out size and above the transition to gravity-dominated planetesimal strength. The scheme also reproduces the expected evolution of disk mass, and of dust mass, but is computationally much faster than evolving distributions forward in time. For low-mass disks, P-R drag causes a turnover at small sizes to a size distribution that is set by the redistribution function (the mass distribution of fragments produced in collisions). Thus information about the redistribution function may be recovered by measuring the size distribution of particles undergoing loss by P-R drag, such as that traced by particles accreted onto Earth. Although cross-sectional area drops as 1/age^2 in the P-R-dominated regime, dust mass falls as 1/age^2.8, underlining the importance of understanding which particle sizes contribute to an observation when considering how disk detectability evolves. Other loss processes are readily incorporated; we also discuss generalised power-law loss rates, dynamical depletion, realistic radiation forces, and stellar wind drag. Accepted for publication in Celestial Mechanics and Dynamical Astronomy (special issue on exoplanets).

    Theory of Two-Dimensional Quantum Heisenberg Antiferromagnets with a Nearly Critical Ground State

    We present the general theory of clean, two-dimensional quantum Heisenberg antiferromagnets which are close to the zero-temperature quantum transition between ground states with and without long-range Néel order. For Néel-ordered states, 'nearly critical' means that the ground-state spin stiffness $\rho_s$ satisfies $\rho_s \ll J$, where $J$ is the nearest-neighbor exchange constant, while 'nearly critical' quantum-disordered ground states have an energy gap $\Delta$ towards excitations with spin 1, which satisfies $\Delta \ll J$. Under these circumstances, we show that the wavevector/frequency-dependent uniform and staggered spin susceptibilities, and the specific heat, are completely universal functions of just three thermodynamic parameters. Explicit results for the universal scaling functions are obtained by a $1/N$ expansion on the $O(N)$ quantum non-linear sigma model, and by Monte Carlo simulations. These calculations lead to a variety of testable predictions for neutron scattering, NMR, and magnetization measurements. Our results are in good agreement with a number of numerical simulations and experiments on undoped and lightly doped $La_{2-\delta}Sr_{\delta}CuO_4$.

    Development of a polygenic risk score to improve screening for fracture risk: A genetic risk prediction study

    Background: Since screening programs identify only a small proportion of the population as eligible for an intervention, genomic prediction of heritable risk factors could decrease the number needing to be screened by removing individuals at low genetic risk. We therefore tested whether a polygenic risk score for heel quantitative ultrasound speed of sound (SOS), a heritable risk factor for osteoporotic fracture, can identify low-risk individuals who can safely be excluded from a fracture risk screening program.
    Methods and findings: A polygenic risk score for SOS was trained and selected in 2 separate subsets of UK Biobank (comprising 341,449 and 5,335 individuals). The top-performing prediction model was termed "gSOS", and its utility in fracture risk screening was tested in 5 validation cohorts using the National Osteoporosis Guideline Group clinical guidelines (N = 10,522 eligible participants). All individuals were genome-wide genotyped and had measured fracture risk factors. Across the 5 cohorts, the average age ranged from 57 to 75 years, and 54% of studied individuals were women. The main outcomes were the sensitivity and specificity to correctly identify individuals requiring treatment with and without genetic prescreening. The reference standard was a bone mineral density (BMD)-based Fracture Risk Assessment Tool (FRAX) score. The secondary outcomes were the proportions of the screened population requiring clinical-risk-factor-based FRAX (CRF-FRAX) screening and BMD-based FRAX (BMD-FRAX) screening. gSOS was strongly correlated with measured SOS (r^2 = 23.2%, 95% CI 22.7% to 23.7%). Without genetic prescreening, guideline recommendations achieved a sensitivity and specificity for correct treatment assignment of 99.6% and 97.1%, respectively, in the validation cohorts. However, 81% of the population required CRF-FRAX tests, and 37% required BMD-FRAX tests, to achieve this accuracy. Using gSOS in prescreening and limiting further assessment to those with a low gSOS resulted in small changes to the sensitivity and specificity (93.4% and 98.5%, respectively), but the proportions of individuals requiring CRF-FRAX tests and BMD-FRAX tests were reduced by 37% and 41%, respectively. Study limitations include a reliance on cohorts of predominantly European ethnicity and the use of a proxy of fracture risk.
    Conclusions: Our results suggest that the use of a polygenic risk score in fracture risk screening could decrease the number of individuals requiring screening tests, including BMD measurement, while maintaining high sensitivity and specificity to identify individuals who should be recommended an intervention.
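The two-stage screening arithmetic can be illustrated with a synthetic toy simulation, not UK Biobank data: a bivariate-normal model in which a genetic predictor explains roughly 23% of the variance of a latent risk trait (as reported for gSOS), and "needs treatment" is defined as the trait's bottom decile. The threshold and the assumption of a perfect reference assessment are hypothetical simplifications.

```python
import numpy as np

# Synthetic toy, not UK Biobank data: a latent fracture-risk trait, a
# genetic predictor explaining ~23% of its variance (as reported for
# gSOS), and "needs treatment" defined as the trait's bottom decile.
rng = np.random.default_rng(0)
n = 100_000
r = 0.48                                            # so r^2 ~ 0.23
latent = rng.normal(size=n)
genetic = r * latent + np.sqrt(1 - r ** 2) * rng.normal(size=n)
needs_treatment = latent < np.quantile(latent, 0.10)

# Prescreen: only individuals with a low genetic score (here, below the
# median, a hypothetical cut) proceed to the full reference assessment,
# which is assumed perfect for simplicity.
assessed = genetic < np.quantile(genetic, 0.50)
treated = assessed & needs_treatment

sensitivity = treated[needs_treatment].mean()       # high-risk people caught
followup_fraction = assessed.mean()                 # tests actually performed
print(f"sensitivity {sensitivity:.2f} with follow-up tests on {followup_fraction:.0%}")
```

Lowering the prescreen threshold trades fewer follow-up tests against lower sensitivity; that tradeoff is exactly what the abstract quantifies for gSOS.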

    'Drowning in here in his bloody sea' : exploring TV cop drama's representations of the impact of stress in modern policing

    The Criminal Justice System is a part of society that is both familiar and hidden. It is familiar in that a large part of daily news and television drama is devoted to it (Carrabine, 2008; Jewkes, 2011). It is hidden in the sense that the majority of the population have little, if any, direct contact with the Criminal Justice System, meaning that the media may be a major force in shaping their views on crime and policing (Carrabine, 2008). As Reiner (2000) notes, the debate about the relationship between the media, policing, and crime has been a key feature of wider societal concerns about crime since the establishment of the modern police force. He outlines the recurring themes in post-war debates in this field. For Conservatives there has been an ongoing concern that the media is criminogenic, as it serves to undermine traditional institutions, including the police. From the viewpoint of radical criminology, the impact of the media is two-fold: it exaggerates legitimate concerns about crime and emphasises the bureaucratic and other restrictions under which the police operate (Reiner, 2000). This is seen as undermining due process and legitimising what can be termed a 'maverick' approach to policing. An early example of this can be seen in Clint Eastwood's Dirty Harry movies (Siegel, 1971), where Harry Callahan acts as a one-man law enforcement system outside of the formal legal process, a process portrayed as corrupt, inefficient, and concerned with offenders' rights rather than protecting victims. From a policing perspective, Reiner (2000) argues that film and TV drama creates a simplistic narrative of crime solving that is almost completely divorced from the reality of modern police work, a finding consistent with more recent work by Cummins et al. (2014).