Molecular bases determining daptomycin resistance-mediated re-sensitization to β-lactams ("see-saw effect") in MRSA
Antimicrobial resistance is recognized as one of the principal threats to public health worldwide, and the problem continues to grow. Methicillin-resistant Staphylococcus aureus (MRSA) infections are among the most difficult to treat in clinical settings because of resistance to nearly all available antibiotics. The cyclic anionic lipopeptide antibiotic daptomycin (DAP) is the clinical mainstay of anti-MRSA therapy. Decreased susceptibility to DAP (DAP-R) reported in MRSA is frequently accompanied by a paradoxical decrease in β-lactam resistance, a process known as the "see-saw" effect. Despite the observed discordance in resistance phenotypes, the combination of DAP and β-lactams has proven clinically effective for the prevention and treatment of infections due to DAP-R MRSA strains. However, the mechanisms underlying the interactions between DAP and β-lactams are largely unknown. Herein, we studied the role of DAP-induced mutated mprF in β-lactam sensitization and its involvement in the effective killing by the DAP/OXA combination. DAP/OXA-mediated effects resulted in cell-wall perturbations, including changes in peptidoglycan (PG) insertion, penicillin-binding protein 2 (PBP2) delocalization, and reduced membrane amounts of penicillin-binding protein 2a (PBP2a), despite increased transcription of mecA through the mec regulatory elements. We found that the VraSR sensor-regulator is a key component of DAP resistance, triggering mutated mprF-mediated cell membrane (CM) modifications that impair PrsA localization and chaperone functions, both essential for PBP2a maturation, the key determinant of β-lactam resistance. These observations provide the first evidence that the synergistic effects between DAP and β-lactams involve PrsA post-transcriptional regulation of CM-associated PBP2a.
Nonlinear Integer Programming
Research efforts of the past fifty years have led to the development of linear integer programming as a mature discipline of mathematical optimization. That level of maturity has not been reached for nonlinear systems subject to integrality requirements on the variables. This chapter is dedicated to this topic.
The primary goal is a study of a simple version of general nonlinear integer problems, in which all constraints are still linear. Our focus is on the computational complexity of the problem, which varies significantly with the type of nonlinear objective function in combination with the underlying combinatorial structure. Numerous boundary cases of complexity emerge, some of which, perhaps surprisingly, even admit polynomial-time algorithms.
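For reference, the problem class described above can be written schematically as follows (the symbols f, A, b and n are generic placeholders, not the chapter's own notation):

\[
\max \ f(x) \quad \text{subject to} \quad A x \le b, \qquad x \in \mathbb{Z}^n,
\]

where f is a nonlinear objective function and Ax ≤ b is a system of linear constraints; the complexity results mentioned above concern how hard this problem is for different choices of f and of the underlying combinatorial structure.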
We also cover recent successful approaches for more general classes of problems. Although no positive theoretical efficiency results are available, nor are they ever likely to be, these currently appear to be the most successful and interesting approaches for solving practical problems.
It is our belief that the study of algorithms motivated by theoretical considerations and those motivated by the desire to solve practical instances should, and do, inform one another. It is with this viewpoint that we present the subject, and it is in this direction that we hope to spark further research.
Comment: 57 pages. To appear in: M. Jünger, T. Liebling, D. Naddef, G. Nemhauser, W. Pulleyblank, G. Reinelt, G. Rinaldi, and L. Wolsey (eds.), 50 Years of Integer Programming 1958-2008: The Early Years and State-of-the-Art Surveys, Springer-Verlag, 2009, ISBN 354068274
An Integrated TCGA Pan-Cancer Clinical Data Resource to Drive High-Quality Survival Outcome Analytics
For a decade, The Cancer Genome Atlas (TCGA) program collected clinicopathologic annotation data along with multi-platform molecular profiles of more than 11,000 human tumors across 33 different cancer types. TCGA clinical data contain key features reflecting the democratized nature of the data collection process. To ensure proper use of this large clinical dataset associated with genomic features, we developed a standardized dataset named the TCGA Pan-Cancer Clinical Data Resource (TCGA-CDR), which includes four major clinical outcome endpoints. In addition to detailing major challenges and statistical limitations encountered during the effort of integrating the acquired clinical data, we present a summary that includes endpoint usage recommendations for each cancer type. These TCGA-CDR findings appear to be consistent with cancer genomics studies independent of the TCGA effort and provide opportunities for investigating cancer biology using clinical correlates at an unprecedented scale. Analysis of clinicopathologic annotations for over 11,000 cancer patients in the TCGA program led to the generation of the TCGA Clinical Data Resource, which provides recommendations on clinical outcome endpoint usage for 33 cancer types.
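To give a flavor of the survival outcome analytics the TCGA-CDR is intended to support, here is a minimal sketch of a Kaplan-Meier estimate; the column names PFI and PFI.time are assumptions about how an endpoint might be stored, and the by-hand estimator simply stands in for whatever survival-analysis package one would normally use.

```python
import numpy as np

def kaplan_meier(time, event):
    """Product-limit (Kaplan-Meier) estimate of the survival function.

    time  : follow-up times (e.g. an endpoint time such as a hypothetical PFI.time column)
    event : 0/1 indicators (1 = event observed, 0 = censored)
    Returns the distinct event times and the estimated survival probabilities.
    """
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=int)
    order = np.argsort(time)
    time, event = time[order], event[order]

    surv = 1.0
    times, probs = [], []
    n_at_risk = len(time)
    for t in np.unique(time):
        at_t = time == t
        d = event[at_t].sum()            # events occurring at time t
        if d > 0:
            surv *= 1.0 - d / n_at_risk  # multiply in the conditional survival factor
            times.append(t)
            probs.append(surv)
        n_at_risk -= at_t.sum()          # remove events and censored subjects after t
    return np.array(times), np.array(probs)

# Hypothetical usage with a TCGA-CDR-like table loaded into two arrays:
# t, s = kaplan_meier(df["PFI.time"], df["PFI"])
```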
The exposure of the hybrid detector of the Pierre Auger Observatory
The Pierre Auger Observatory is a detector for ultra-high energy cosmic rays.
It consists of a surface array to measure secondary particles at ground level
and a fluorescence detector to measure the development of air showers in the
atmosphere above the array. The "hybrid" detection mode combines the
information from the two subsystems. We describe the determination of the
hybrid exposure for events observed by the fluorescence telescopes in
coincidence with at least one water-Cherenkov detector of the surface array. A
detailed knowledge of the time dependence of the detection operations is
crucial for an accurate evaluation of the exposure. We discuss the relevance of
monitoring data collected during operations, such as the status of the
fluorescence detector, background light and atmospheric conditions, that are
used in both simulation and reconstruction.
Comment: Paper accepted by Astroparticle Physics
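Schematically, the hybrid exposure evaluated in this way is the integral of the instantaneous detection efficiency over area, solid angle and the data-taking time; a generic form (the notation below is illustrative, not taken from the paper) is

\[
\mathcal{E}(E) \;=\; \int_{T}\!\int_{\Omega}\!\int_{S} \varepsilon(E, t, \theta, \phi, \vec{r}) \,\cos\theta \;\mathrm{d}S \,\mathrm{d}\Omega \,\mathrm{d}t ,
\]

where the efficiency ε folds in the fluorescence detector status, background light and atmospheric conditions, which is why the time dependence of the detector operations and the monitoring data enter directly into the exposure calculation.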
Ocean turbulence, III: new GISS vertical mixing scheme
Author Posting. © The Author(s), 2010. This is the author's version of the work. It is posted here by permission of Elsevier B.V. for personal use, not for redistribution. The definitive version was published in Ocean Modelling 34 (2010): 70-91, doi:10.1016/j.ocemod.2010.04.006.
We have found a new way to express the solutions of the RSM (Reynolds Stress Model) equations that allows us to present the turbulent diffusivities for heat, salt and momentum in a way that is considerably simpler, and thus easier to implement, than in previous work. The RSM provides the dimensionless mixing efficiencies Γα (α stands for heat, salt and momentum). However, to compute the diffusivities, one needs additional information, specifically the dissipation ε. Since a dynamic equation for the latter that includes the physical processes relevant to the ocean is still not available, one must resort to different sources of information outside the RSM to obtain a complete mixing scheme usable in OGCMs.
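Concretely, the mixing efficiencies alone do not determine the diffusivities: in the standard Osborn-type relation (written here schematically, not necessarily in the paper's exact notation), the diffusivity for a field α is

\[
K_\alpha \;=\; \Gamma_\alpha \, \frac{\varepsilon}{N^2},
\]

where N is the buoyancy frequency, so the RSM mixing efficiencies Γα must be supplemented by a representation of the dissipation ε, which is what the mixed-layer, thermocline and bottom parameterizations described below provide.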
As for the RSM results, we show that the Γα are functions of both Ri and Rρ (Richardson number and density ratio representing double diffusion, DD); the Γα are different for heat, salt and momentum. In the case of heat, the traditional value Γh = 0.2 is valid only in the presence of strong shear (when DD is inoperative), while when shear subsides, NATRE data show that Γh can be three times as large, a result that we reproduce. The salt Γs is given in terms of Γh. The momentum Γm has thus far been guessed with different prescriptions, while the RSM provides a well-defined expression for Γm(Ri, Rρ). Having tested Γh, we then test the momentum Γm by showing that the turbulent Prandtl number Γm/Γh vs. Ri reproduces the available data quite well.
As for the dissipation ε, we use different representations: one for the mixed layer (ML), one for the thermocline and one for the ocean's bottom. For the ML, we adopt a procedure analogous to the one successfully used in PBL (planetary boundary layer) studies; for the thermocline, we employ an expression for the variable εN^-2 from studies of the internal gravity wave spectra, which includes a latitude dependence; for the ocean bottom, we adopt the enhanced bottom diffusivity expression used by previous authors, but with a state-of-the-art internal tidal energy formulation, and replace the fixed Γα = 0.2 with the RSM result, which brings into the problem the Ri, Rρ dependence of the Γα. The unresolved bottom drag, which has thus far been either ignored or modeled with heuristic relations, is modeled using a formalism we previously developed and tested in PBL studies.
We carried out several tests without an OGCM: (a) Prandtl and flux Richardson numbers vs. Ri: the RSM reproduces both types of data satisfactorily; (b) DD and mixing efficiency Γh(Ri, Rρ): the RSM reproduces the NATRE data well; (c) bimodal ε-distribution: NATRE data show that ε differs markedly between the Ri < 1 and Ri > 1 regimes, which our model reproduces; (d) heat-to-salt flux ratio: in the Ri >> 1 regime, the RSM predictions reproduce the data satisfactorily; (e) NATRE mass diffusivity: the z-profile of the mass diffusivity reproduces the measurements at NATRE well. The local form of the mixing scheme is algebraic, with one cubic equation to solve.
Search for a W' boson decaying to a bottom quark and a top quark in pp collisions at sqrt(s) = 7 TeV
Results are presented from a search for a W' boson using a dataset
corresponding to 5.0 inverse femtobarns of integrated luminosity collected
during 2011 by the CMS experiment at the LHC in pp collisions at sqrt(s)=7 TeV.
The W' boson is modeled as a heavy W boson, but different scenarios for the
couplings to fermions are considered, involving both left-handed and
right-handed chiral projections of the fermions, as well as an arbitrary
mixture of the two. The search is performed in the decay channel W' to t b,
leading to a final state signature with a single lepton (e, mu), missing
transverse energy, and jets, at least one of which is tagged as a b-jet. A W'
boson that couples to fermions with the same coupling constant as the W, but to
the right-handed rather than left-handed chiral projections, is excluded for
masses below 1.85 TeV at the 95% confidence level. For the first time with LHC data, constraints are placed on the W' gauge coupling for a set of left- and right-handed coupling combinations. These results represent a significant improvement over previously published limits.
Comment: Submitted to Physics Letters B. Replaced with the published version
Search for the standard model Higgs boson decaying into two photons in pp collisions at sqrt(s)=7 TeV
A search for a Higgs boson decaying into two photons is described. The
analysis is performed using a dataset recorded by the CMS experiment at the LHC
from pp collisions at a centre-of-mass energy of 7 TeV, which corresponds to an
integrated luminosity of 4.8 inverse femtobarns. Limits are set on the cross
section of the standard model Higgs boson decaying to two photons. The expected
exclusion limit at 95% confidence level is between 1.4 and 2.4 times the
standard model cross section in the mass range between 110 and 150 GeV. The
analysis of the data excludes, at 95% confidence level, the standard model
Higgs boson decaying into two photons in the mass range 128 to 132 GeV. The
largest excess of events above the expected standard model background is
observed for a Higgs boson mass hypothesis of 124 GeV with a local significance
of 3.1 sigma. The global significance of observing an excess with a local
significance greater than 3.1 sigma anywhere in the search range 110-150 GeV is
estimated to be 1.8 sigma. More data are required to ascertain the origin of
this excess.
Comment: Submitted to Physics Letters
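As a back-of-the-envelope illustration of how the quoted local and global significances relate, the sketch below converts them to one-sided p-values; only the two numbers from the abstract are used, and the implied trials factor is a rough by-product of that conversion, not a quantity reported by the analysis.

```python
from scipy.stats import norm

z_local, z_global = 3.1, 1.8        # significances quoted in the abstract

p_local = norm.sf(z_local)          # one-sided p-value of the local excess (~1e-3)
p_global = norm.sf(z_global)        # one-sided p-value after the look-elsewhere correction (~4e-2)

# Rough effective number of independent mass hypotheses implied by
# scanning the 110-150 GeV range (an illustration, not a CMS result).
print(f"p_local  = {p_local:.2e}")
print(f"p_global = {p_global:.2e}")
print(f"implied trials factor ~ {p_global / p_local:.0f}")
```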
Measurement of the Lambda(b) cross section and the anti-Lambda(b) to Lambda(b) ratio with Lambda(b) to J/Psi Lambda decays in pp collisions at sqrt(s) = 7 TeV
The Lambda(b) differential production cross section and the cross section
ratio anti-Lambda(b)/Lambda(b) are measured as functions of transverse momentum
pt(Lambda(b)) and rapidity abs(y(Lambda(b))) in pp collisions at sqrt(s) = 7
TeV using data collected by the CMS experiment at the LHC. The measurements are
based on Lambda(b) decays reconstructed in the exclusive final state J/Psi
Lambda, with the subsequent decays J/Psi to an opposite-sign muon pair and
Lambda to proton pion, using a data sample corresponding to an integrated
luminosity of 1.9 inverse femtobarns. The product of the cross section and the branching ratio for Lambda(b) to J/Psi Lambda versus pt(Lambda(b)) falls
faster than that of b mesons. The measured value of the cross section times the
branching ratio for pt(Lambda(b)) > 10 GeV and abs(y(Lambda(b))) < 2.0 is 1.06
+/- 0.06 +/- 0.12 nb, and the integrated cross section ratio for
anti-Lambda(b)/Lambda(b) is 1.02 +/- 0.07 +/- 0.09, where the uncertainties are
statistical and systematic, respectively.
Comment: Submitted to Physics Letters
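If one chooses to combine the statistical and systematic uncertainties in quadrature (a common convention, though not something the abstract itself does), the quoted numbers correspond roughly to

\[
\sigma \times \mathcal{B} = 1.06 \pm \sqrt{0.06^2 + 0.12^2}\ \text{nb} \approx 1.06 \pm 0.13\ \text{nb}, \qquad
\frac{\sigma(\overline{\Lambda}_b)}{\sigma(\Lambda_b)} = 1.02 \pm \sqrt{0.07^2 + 0.09^2} \approx 1.02 \pm 0.11 .
\]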