Fair Streaming Principal Component Analysis: Statistical and Algorithmic Viewpoint
Fair Principal Component Analysis (PCA) is a problem setting where we aim to
perform PCA while making the resulting representation fair in that the
projected distributions, conditional on the sensitive attributes, match one
another. However, existing approaches to fair PCA have two main problems:
theoretically, fair PCA lacks a statistical foundation in terms of
learnability; practically, memory limitations prevent the use of existing
approaches, as they explicitly require full access to the entire dataset. On the
theoretical side, we rigorously formulate fair PCA using a new notion called
\emph{probably approximately fair and optimal} (PAFO) learnability. On the
practical side, motivated by recent advances in streaming algorithms for
addressing memory limitation, we propose a new setting called \emph{fair
streaming PCA} along with a memory-efficient algorithm, fair noisy power method
(FNPM). We then provide its \emph{statistical} guarantee in terms of
PAFO-learnability, the first of its kind in the fair PCA literature.
Lastly, we verify the efficacy and memory efficiency of our algorithm on
real-world datasets.
Comment: 42 pages, 5 figures, 4 tables. Accepted to the 37th Conference on Neural Information Processing Systems (NeurIPS 2023).
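The abstract does not spell out the fair noisy power method itself; as a rough illustration of its streaming ingredient only, below is a minimal NumPy sketch of a streaming (noisy) power method for the top-k principal subspace. The function name, the batch interface, and the noise handling are assumptions made here for illustration, and the fairness constraints of FNPM are not modeled.

```python
import numpy as np

def streaming_noisy_power_method(make_batches, k, n_iters=20, noise_scale=0.0, seed=0):
    """Top-k principal subspace via a streaming (noisy) power method.

    make_batches: zero-argument callable returning an iterable of
        (n_b x d) mini-batches of pre-centered data, so only one batch
        plus a d x k iterate is ever held in memory.
    noise_scale: optional Gaussian perturbation of each iterate, as in
        noisy power methods (0 gives the plain power method).
    NOTE: this sketch does NOT implement the fairness constraints of FNPM.
    """
    rng = np.random.default_rng(seed)
    d = next(iter(make_batches())).shape[1]
    Q, _ = np.linalg.qr(rng.standard_normal((d, k)))   # random orthonormal start
    for _ in range(n_iters):
        S, n = np.zeros((d, k)), 0
        for X in make_batches():            # one streaming pass over the data
            S += X.T @ (X @ Q)              # accumulate (X^T X) Q without forming X^T X
            n += X.shape[0]
        S = S / n + noise_scale * rng.standard_normal((d, k))
        Q, _ = np.linalg.qr(S)              # re-orthonormalize the iterate
    return Q                                # d x k orthonormal basis

# Illustrative usage on synthetic, pre-centered data (hypothetical):
if __name__ == "__main__":
    data = np.random.default_rng(1).standard_normal((10_000, 50))
    data -= data.mean(axis=0)
    batches = lambda: (data[i:i + 500] for i in range(0, len(data), 500))
    print(streaming_noisy_power_method(batches, k=5).shape)  # (50, 5)
```

The batch-wise accumulation keeps memory at O(dk) plus one mini-batch, which is the point of the streaming setting; everything else about FNPM (noise calibration, fairness projection) is left to the paper.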
Electrochemical detection of mismatched DNA using a MutS probe
A direct and label-free electrochemical biosensor for detecting the protein–mismatched DNA interaction was designed using immobilized N-terminal histidine-tagged Escherichia coli MutS on a Ni-NTA-coated Au electrode. General electrochemical methods, cyclic voltammetry (CV), electrochemical quartz crystal microbalance (EQCM), and impedance spectroscopy, were used to ascertain the binding affinity of mismatched DNAs to the MutS probe. The direct results of CV and impedance clearly reveal that the interaction of MutS with the CC heteroduplex was much stronger than that with the AT homoduplex, a difference not resolved in previous gel mobility shift assay results (GT > CT > CC ≈ AT). The EQCM technique was also able to quantitatively analyze MutS affinity to heteroduplexes.
Efficacy of inducible protein 10 as a biomarker for the diagnosis of tuberculosis
Objective: This study evaluated inducible protein 10 (IP-10) as a diagnostic biomarker for tuberculosis (TB) infection and assessed its ability to distinguish between active TB and latent TB infection (LTBI).
Methods: Forty-six patients with active pulmonary TB, 22 participants with LTBI, and 32 non-TB controls were enrolled separately. We measured IP-10 in serum and in supernatants from whole blood stimulated with TB-specific antigens.
Results: TB antigen-dependent IP-10 secretion was significantly increased in the active TB patients and LTBI subjects compared with controls, but did not differ significantly between the active TB patients and LTBI subjects. Serum IP-10 levels were higher in active TB than in LTBI (174.9 vs. 102.7 pg/ml, p = 0.002). The rates of positive responders for TB antigen-dependent IP-10 were 97.8%, 90.9%, and 12.5% in active TB, LTBI, and non-TB controls, respectively; for serum IP-10, the corresponding rates were 87.5%, 45.5%, and 9.5%.
Conclusions: The IP-10 response to TB antigen may constitute a specific biomarker for TB infection, but does not by itself distinguish between active TB and LTBI. Serum IP-10 may enhance diagnostic performance when used in combination with another marker.
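As a quick, hedged illustration of how the reported positivity rates translate into diagnostic terms, the snippet below treats the active-TB rate as sensitivity and one minus the control rate as specificity; this framing is an assumption made here for illustration, not a calculation from the original study.

```python
# Back-of-the-envelope summary of the positivity rates reported in the abstract.
# Treating the active-TB rate as sensitivity and (1 - control rate) as
# specificity is an assumption for illustration only.
rates = {
    # assay: (active TB, LTBI, non-TB controls) positive-responder rates
    "TB antigen-dependent IP-10": (0.978, 0.909, 0.125),
    "Serum IP-10":                (0.875, 0.455, 0.095),
}

for assay, (active_tb, ltbi, controls) in rates.items():
    sensitivity = active_tb              # positivity among active TB
    specificity = 1.0 - controls         # negativity among non-TB controls
    ltbi_gap = active_tb - ltbi          # separation of active TB from LTBI
    print(f"{assay}: sensitivity~{sensitivity:.1%}, "
          f"specificity~{specificity:.1%}, active-TB-vs-LTBI gap={ltbi_gap:.1%}")
```

The gap column makes the abstract's conclusion concrete: the antigen-dependent assay is sensitive and fairly specific for infection but barely separates active TB from LTBI, whereas serum IP-10 shows a larger active-vs-latent gap.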
PLASTIC: Improving Input and Label Plasticity for Sample Efficient Reinforcement Learning
In Reinforcement Learning (RL), enhancing sample efficiency is crucial,
particularly in scenarios where data acquisition is costly and risky. In
principle, off-policy RL algorithms can improve sample efficiency by allowing
multiple updates per environment interaction. However, these multiple updates
often lead the model to overfit to earlier interactions, which is referred to
as the loss of plasticity. Our study investigates the underlying causes of this
phenomenon by dividing plasticity into two aspects: input plasticity, which
denotes the model's adaptability to changing input data, and label plasticity,
which denotes the model's adaptability to evolving input-output relationships.
Synthetic experiments on the CIFAR-10 dataset reveal that finding smoother
minima of loss landscape enhances input plasticity, whereas refined gradient
propagation improves label plasticity. Leveraging these findings, we introduce
the PLASTIC algorithm, which harmoniously combines techniques to address both
concerns. With minimal architectural modifications, PLASTIC achieves
competitive performance on benchmarks including Atari-100k and the DeepMind Control
Suite. This result emphasizes the importance of preserving the model's
plasticity to improve sample efficiency in RL. The code is available at
https://github.com/dojeon-ai/plastic.
Comment: 26 pages, 6 figures, accepted to NeurIPS 2023.
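The abstract does not enumerate PLASTIC's component techniques; purely as a hedged illustration of the "smoother minima of the loss landscape" idea it invokes, the sketch below shows a generic sharpness-aware-style update in PyTorch. The function `sam_style_update`, the `loss_fn(model, batch)` signature, and the radius `rho` are hypothetical names chosen here, and this is not the authors' implementation (see the linked repository for that).

```python
import torch

def sam_style_update(model, loss_fn, batch, optimizer, rho=0.05):
    """One sharpness-aware-style step: perturb weights toward the locally
    worst-case direction within an L2 ball of radius rho, then descend on the
    perturbed loss. A generic way to bias training toward flatter minima; it
    is NOT the specific combination of techniques used by PLASTIC.
    """
    optimizer.zero_grad()

    # First pass: gradient at the current weights.
    loss = loss_fn(model, batch)
    loss.backward()

    # Climb to the (approximate) worst-case point within the rho-ball.
    with torch.no_grad():
        grads = [p.grad for p in model.parameters() if p.grad is not None]
        grad_norm = torch.norm(torch.stack([g.norm(2) for g in grads]), 2) + 1e-12
        eps = []
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / grad_norm
            p.add_(e)                      # w <- w + eps
            eps.append(e)

    # Second pass: gradient at the perturbed weights drives the actual step.
    optimizer.zero_grad()
    loss_fn(model, batch).backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)                  # restore the original weights
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

The two-pass structure doubles the gradient cost per update, which is one reason flat-minima methods are usually paired with other lightweight changes when many updates per environment step are performed.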
Strong ferromagnetism in Pt-coated ZnCoO: The role of interstitial hydrogen
We observed strong ferromagnetism in ZnCoO as a result of high-concentration hydrogen absorption. Coating ZnCoO with a Pt layer and subsequent hydrogen treatment under high isostatic pressure resulted in a greatly increased carrier concentration of 10²¹/cm³. This hydrogen treatment induced strong ferromagnetism at low temperature that turned to superparamagnetism at about 140 K. We performed density-functional calculations and found that interstitial H dopants promote ferromagnetic ordering between scattered Co dopants. On the other hand, interstitial hydrogen can decrease the magnetic exchange energy of Co-H-Co complexes, leading to a reduction in the blocking temperature.