Cofactor Fingerprinting with STD NMR to Characterize Proteins of Unknown Function: Identification of a Rare cCMP Cofactor Preference
Proteomics efforts have created a need for better strategies to functionally categorize newly discovered proteins. To this end, we have employed saturation transfer difference (STD) NMR with pools of closely related cofactors to determine cofactor preferences. This approach works well for dehydrogenases and has also been applied to cyclic nucleotide-binding proteins. In the latter application, a protein (radial spoke protein-2, RSP2) that plays a central role in forming the radial spoke of Chlamydomonas reinhardtii flagella was shown to bind cCMP. cCMP-binding proteins are rare, although previous reports of their presence in sperm and flagella suggest that cCMP may have a more general role in flagellar function. 31P NMR was used to monitor the preferential hydrolysis of ATP versus GTP, suggesting that RSP2 is a kinase.
Delay versus Stickiness Violation Trade-offs for Load Balancing in Large-Scale Data Centers
Most load balancing techniques implemented in current data centers tend to
rely on a mapping from packets to server IP addresses through a hash value
calculated from the flow five-tuple. The hash calculation allows extremely fast
packet forwarding and provides flow "stickiness", meaning that all packets
belonging to the same flow get dispatched to the same server. Unfortunately,
such static hashing may not yield an optimal degree of load balancing, e.g.,
due to variations in server processing speeds or traffic patterns. On the other
hand, dynamic schemes, such as the Join-the-Shortest-Queue (JSQ) scheme,
provide a natural way to mitigate load imbalances, but at the expense of
stickiness violation.
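To make the contrast concrete, here is a minimal illustrative sketch of the two dispatch rules described above; the function names, the SHA-256 choice, and the toy parameters are assumptions for illustration, not details from the paper.

    import hashlib
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class FiveTuple:
        src_ip: str
        dst_ip: str
        src_port: int
        dst_port: int
        protocol: str

    def hash_dispatch(flow: FiveTuple, num_servers: int) -> int:
        """Static hashing: every packet of a flow maps to the same server (perfect stickiness)."""
        key = f"{flow.src_ip}|{flow.dst_ip}|{flow.src_port}|{flow.dst_port}|{flow.protocol}"
        digest = hashlib.sha256(key.encode()).hexdigest()
        return int(digest, 16) % num_servers

    def jsq_dispatch(queue_lengths: list[int]) -> int:
        """Join-the-Shortest-Queue: pick the least-loaded server, ignoring flow identity."""
        return min(range(len(queue_lengths)), key=lambda i: queue_lengths[i])

    # Example: the same flow always hashes to one server, while JSQ follows the load.
    flow = FiveTuple("10.0.0.1", "10.0.0.2", 12345, 443, "TCP")
    print(hash_dispatch(flow, num_servers=8))      # identical for every packet of this flow
    print(jsq_dispatch([3, 0, 5, 2, 1, 4, 2, 6]))  # 1, the shortest queue right now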
In the present paper we examine the fundamental trade-off between stickiness
violation and packet-level latency performance in large-scale data centers. We
establish that stringent flow stickiness carries a significant performance
penalty in terms of packet-level delay. Moreover, relaxing the stickiness
requirement by a minuscule amount is highly effective in clipping the tail of
the latency distribution. We further propose a bin-based load balancing scheme
that achieves a good balance among scalability, stickiness violation and
packet-level delay performance. Extensive simulation experiments corroborate
the analytical results and validate the effectiveness of the bin-based load
balancing scheme.
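The abstract does not spell out the bin-based scheme, so the following is only a plausible sketch under assumed details (the class name, bin count, and remapping threshold are invented for illustration): flows are hashed into many bins, each bin is pinned to a server, and a bin is remapped only when the queue imbalance grows large, so stickiness is violated only for the flows hashed to remapped bins.

    import hashlib

    NUM_BINS = 64              # assumption: many more bins than servers
    IMBALANCE_THRESHOLD = 10   # assumption: remap a bin only when queues diverge this much

    class BinBasedBalancer:
        """Hypothetical bin-based dispatcher: hash flows to bins, pin bins to servers,
        and remap a bin only when server queues become badly imbalanced."""

        def __init__(self, num_servers: int):
            self.num_servers = num_servers
            self.bin_to_server = [b % num_servers for b in range(NUM_BINS)]
            self.queue_lengths = [0] * num_servers

        def dispatch(self, flow_key: str) -> int:
            b = int(hashlib.sha256(flow_key.encode()).hexdigest(), 16) % NUM_BINS
            current = self.bin_to_server[b]
            shortest = min(range(self.num_servers), key=self.queue_lengths.__getitem__)
            # Relax stickiness only slightly: remap this bin if the imbalance is large,
            # so only the flows hashed to this bin ever change servers.
            if self.queue_lengths[current] - self.queue_lengths[shortest] > IMBALANCE_THRESHOLD:
                self.bin_to_server[b] = shortest
            server = self.bin_to_server[b]
            self.queue_lengths[server] += 1    # the packet joins the chosen server's queue
            return server

        def complete(self, server: int) -> None:
            # Call when a packet finishes service at a server.
            self.queue_lengths[server] = max(0, self.queue_lengths[server] - 1)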
A Chemical Proteomic Probe for Detecting Dehydrogenases: Catechol Rhodanine
The inherent complexity of the proteome often demands that it be studied as manageable subsets, termed subproteomes. A subproteome can be defined in a number of ways, although a pragmatic approach is to define it based on common features in an active site that lead to binding of a common small molecule ligand (e.g., a cofactor or a cross-reactive drug lead). The subproteome so defined can be purified by affinity chromatography, using that common ligand tethered to a resin. Affinity purification of a subproteome is described in the next chapter. That subproteome can then be analyzed using a common ligand probe, such as a fluorescent common ligand that can be used to stain members of the subproteome in a native gel. Here, we describe such a fluorescent probe, based on a catechol rhodanine acetic acid (CRAA) ligand that binds to dehydrogenases. The CRAA ligand is fluorescent and binds to dehydrogenases at pH > 7, and hence can be used effectively to stain dehydrogenases in native gels to identify which subset of proteins in a mixture are dehydrogenases. Furthermore, if one is designing inhibitors to target one or more of these dehydrogenases, the CRAA staining can be performed in a competitive assay format, with or without inhibitor, to assess the selectivity of the inhibitor for the targeted dehydrogenase. Finally, the CRAA probe is a privileged scaffold for dehydrogenases, and hence can easily be modified to increase affinity for a given dehydrogenase.
An In Vitro Spectroscopic Analysis to Determine Whether Para-Chloroaniline Is Produced from Mixing Sodium Hypochlorite and Chlorhexidine
Introduction: The purpose of this in vitro study was to determine whether para-chloroaniline (PCA) is formed through the reaction of mixing sodium hypochlorite (NaOCl) and chlorhexidine (CHX).
Methods: Initially, commercially available samples of chlorhexidine acetate (CHXa) and PCA were analyzed with 1H nuclear magnetic resonance (NMR) spectroscopy. Two solutions, NaOCl and CHXa, were warmed to 37°C and, when mixed, produced a brown precipitate. This precipitate was divided in half, and pure PCA was added to one of the samples for comparison before each was analyzed with 1H NMR spectroscopy.
Results: The peaks in the 1H NMR spectra of CHXa and PCA were assigned to specific protons of the molecules, and the location of the aromatic peaks in the PCA spectrum defined the PCA doublet region. Although the spectrum of the precipitate alone showed a complex combination of peaks, on magnification there were no peaks in the PCA doublet region intense enough to be quantified. In the spectrum of the precipitate to which PCA was added, two peaks do appear in the PCA doublet region; comparison with the spectrum of the precipitate alone shows that these peaks are not visible before the addition of PCA.
Conclusions: On the basis of this in vitro study, the reaction mixture of NaOCl and CHXa does not produce PCA in any measurable quantity, and further investigation is needed to determine the chemical composition of the brown precipitate.
An In Vitro Spectroscopic Analysis to Determine the Chemical Composition of the Precipitate Formed by Mixing Sodium Hypochlorite and Chlorhexidine
Introduction: The purpose of this in vitro study was to determine the chemical composition of the precipitate formed by mixing sodium hypochlorite (NaOCl) and chlorhexidine (CHX), and the relative molecular weights of its components.
Methods: A 2% solution of commercially available chlorhexidine gluconate (CHXg) was prepared and mixed in a 1:1 ratio with commercially available NaOCl, producing a brown precipitate. The precipitate, as well as a mixture of the precipitate and pure chlorhexidine diacetate (CHXa), was then analyzed using 1D and 2D NMR spectroscopy.
Results: The 1D and 2D NMR spectra were fully assigned in terms of the chemical shifts of all proton and carbon atoms in intact CHX. This permitted identification of CHX breakdown products with and without the aliphatic linker present, including lower molecular weight components of CHX that contained a para-substituted benzene that was not para-chloroaniline (PCA).
Conclusions: Based on this in vitro study, the precipitate formed by NaOCl and CHX is composed of at least two separate molecules, all of which are smaller than CHX. Along with native CHX, the precipitate contains two chemical fragments derived from CHX, neither of which is PCA.
Nucleosynthesis Predictions and High-Precision Deuterium Measurements
Two new high-precision measurements of the deuterium abundance from absorbers along the line of sight to the quasar PKS1937-1009 were presented. The absorbers have lower neutral hydrogen column densities (N(HI) < 10^18 cm^-2) than for previous high-precision measurements, boding well for further extensions of the sample due to the plenitude of low column density absorbers. The total high-precision sample now consists of 12 measurements with a weighted average deuterium abundance of D/H = . The sample does not favour a dipole similar to the one detected for the fine structure constant. The increased precision also calls for improved nucleosynthesis predictions. For that purpose we have updated the public AlterBBN code, including new reactions, updated nuclear reaction rates, and the possibility of adding new physics such as dark matter. The standard Big Bang Nucleosynthesis prediction of D/H = is consistent with the observed value within 1.7 standard deviations. Comment: 10 pages, 5 figures, conference proceedings from VarCosmoFun 201
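For reference, the "weighted average" and the consistency "within 1.7 standard deviations" quoted above correspond to the standard formulas below (a generic recap added here, not equations taken from the proceedings; sigma_i denotes the uncertainty of the i-th measurement):

\[
\overline{D/H} \;=\; \frac{\sum_{i=1}^{12} (D/H)_i / \sigma_i^{2}}{\sum_{i=1}^{12} 1/\sigma_i^{2}},
\qquad
\sigma_{\overline{D/H}} \;=\; \Bigl(\sum_{i=1}^{12} \sigma_i^{-2}\Bigr)^{-1/2},
\qquad
\frac{\bigl|\,\overline{D/H} - (D/H)_{\mathrm{BBN}}\bigr|}{\sqrt{\sigma_{\overline{D/H}}^{2} + \sigma_{\mathrm{BBN}}^{2}}} \;\approx\; 1.7 .
\]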
Exact asymptotics for fluid queues fed by multiple heavy-tailed on-off flows
We consider a fluid queue fed by multiple On-Off flows with heavy-tailed
(regularly varying) On periods. Under fairly mild assumptions, we prove that
the workload distribution is asymptotically equivalent to that in a reduced
system. The reduced system consists of a "dominant" subset of the flows, with
the original service rate reduced by the aggregate mean rate of the other flows. We
describe how a dominant set may be determined from a simple knapsack
formulation. The dominant set consists of a "minimally critical" set of
On-Off flows with regularly varying On periods. In case the dominant set
contains just a single On-Off flow, the exact asymptotics for the reduced
system follow from known results. For the case of several
On-Off flows, we exploit a powerful intuitive argument to obtain the exact
asymptotics. Combined with the reduced-load equivalence, the results for the
reduced system provide a characterization of the tail of the workload
distribution for a wide range of traffic scenarios.
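As a rough illustration of the dominant-set idea, the sketch below enumerates "minimally critical" subsets of flows: sets whose peak rates, together with the mean rates of all other flows, exceed the service capacity, while no proper subset does. This enumeration is an assumption about how such sets could be searched for, not the paper's knapsack formulation, and it ignores the regular-variation indices that the actual dominance criterion involves.

    from itertools import combinations

    def is_critical(subset, peak, mean, capacity):
        """A set S is 'critical' if, with the flows in S sending at peak rate and all
        other flows contributing only their mean rates, the server is overloaded."""
        others = [i for i in range(len(peak)) if i not in subset]
        return sum(peak[i] for i in subset) + sum(mean[i] for i in others) > capacity

    def minimally_critical_sets(peak, mean, capacity):
        """Enumerate critical sets none of whose proper subsets are critical."""
        n = len(peak)
        result = []
        for k in range(1, n + 1):
            for subset in combinations(range(n), k):
                if is_critical(subset, peak, mean, capacity) and \
                   not any(is_critical(sub, peak, mean, capacity)
                           for r in range(1, k) for sub in combinations(subset, r)):
                    result.append(subset)
        return result

    # Toy example: three On-Off flows with assumed (peak, mean) rates and capacity 2.0.
    peak = [1.5, 1.0, 0.8]
    mean = [0.5, 0.4, 0.3]
    print(minimally_critical_sets(peak, mean, capacity=2.0))   # [(0,), (1, 2)]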
Queue-Based Random-Access Algorithms: Fluid Limits and Stability Issues
We use fluid limits to explore the (in)stability properties of wireless
networks with queue-based random-access algorithms. Queue-based random-access
schemes are simple and inherently distributed in nature, yet provide the
capability to match the optimal throughput performance of centralized
scheduling mechanisms in a wide range of scenarios. Unfortunately, the type of
activation rules for which throughput optimality has been established may
result in excessive queue lengths and delays. The use of more
aggressive/persistent access schemes can improve the delay performance, but
does not offer any universal maximum-stability guarantees. In order to gain
qualitative insight and investigate the (in)stability properties of more
aggressive/persistent activation rules, we examine fluid limits where the
dynamics are scaled in space and time. In some situations, the fluid limits
have smooth deterministic features and maximum stability is maintained, while
in other scenarios they exhibit random oscillatory characteristics, giving rise
to major technical challenges. In the latter regime, more aggressive access
schemes continue to provide maximum stability in some networks, but may cause
instability in others. Simulation experiments are conducted to illustrate and
validate the analytical results.
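A toy simulation can illustrate what a queue-based random-access scheme looks like; the conflict graph, arrival process, and activation function below are all assumptions chosen for illustration rather than the model analyzed in the paper.

    import random

    # Toy conflict graph: a node cannot transmit while any neighbor is transmitting.
    # A 4-cycle is used purely for illustration.
    NEIGHBORS = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    ARRIVAL_PROB = 0.2          # assumption: Bernoulli arrivals per node per slot
    SLOTS = 100_000

    def activation_prob(q):
        """Queue-based activation: more aggressive for longer queues.
        The specific form q / (1 + q) is an illustrative choice only."""
        return q / (1.0 + q)

    def simulate():
        queues = [0, 0, 0, 0]
        active = [False] * 4
        total_queue = 0
        for _ in range(SLOTS):
            # Arrivals.
            for i in range(4):
                if random.random() < ARRIVAL_PROB:
                    queues[i] += 1
            # Nodes with no active neighbor may (de)activate based on their queue length.
            for i in random.sample(range(4), 4):
                if any(active[j] for j in NEIGHBORS[i]):
                    continue
                active[i] = queues[i] > 0 and random.random() < activation_prob(queues[i])
            # Active nodes serve one packet per slot.
            for i in range(4):
                if active[i] and queues[i] > 0:
                    queues[i] -= 1
            total_queue += sum(queues)
        return total_queue / SLOTS

    print("average total queue length:", simulate())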
Lingering Issues in Distributed Scheduling
Recent advances have resulted in queue-based algorithms for medium access
control which operate in a distributed fashion, and yet achieve the optimal
throughput performance of centralized scheduling algorithms. However,
fundamental performance bounds reveal that the "cautious" activation rules
involved in establishing throughput optimality tend to produce extremely large
delays, typically growing exponentially in 1/(1-r), with r the load of the
system, in contrast to the usual linear growth.
Motivated by that issue, we explore to what extent more "aggressive" schemes
can improve the delay performance. Our main finding is that aggressive
activation rules induce a lingering effect, where individual nodes retain
possession of a shared resource for excessive lengths of time even while a
majority of other nodes idle. Using central limit theorem type arguments, we
prove that the idleness induced by the lingering effect may cause the delays to
grow with 1/(1-r) at a quadratic rate. To the best of our knowledge, these are
the first mathematical results illuminating the lingering effect and
quantifying the performance impact.
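In symbols, the scaling claims above can be paraphrased as follows, where W denotes a typical delay, r the load, and c, c', c'' generic constants introduced here only for illustration:

\[
\mathbb{E}[W_{\mathrm{cautious}}] \;\sim\; e^{\,c/(1-r)},
\qquad
\mathbb{E}[W_{\mathrm{benchmark}}] \;\sim\; \frac{c'}{1-r},
\qquad
\mathbb{E}[W_{\mathrm{aggressive}}] \;\gtrsim\; \frac{c''}{(1-r)^{2}},
\qquad r \uparrow 1 .
\]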
In addition, extensive simulation experiments are conducted to illustrate and
validate the various analytical results.