75 research outputs found
Low-Sensitivity Functions from Unambiguous Certificates
We provide new query complexity separations against sensitivity for total
Boolean functions: a power separation between deterministic (and even
randomized or quantum) query complexity and sensitivity, and a power
separation between certificate complexity and sensitivity. We get these
separations by using a new connection between sensitivity and a seemingly
unrelated measure called one-sided unambiguous certificate complexity. We also
show that this one-sided measure is lower-bounded by fractional block
sensitivity, which means these techniques cannot be pushed to obtain a
super-quadratic separation. We also provide a
quadratic separation between the tree-sensitivity and decision tree complexity
of Boolean functions, disproving a conjecture of Gopalan, Servedio, Tal, and
Wigderson (CCC 2016).
Along the way, we give a power separation between certificate
complexity and one-sided unambiguous certificate complexity, improving on the
previous power separation due to Göös (FOCS 2015). As a consequence, we
obtain an improved lower bound on the
co-nondeterministic communication complexity of the Clique vs. Independent Set
problem.
Comment: 25 pages. This version expands the results and adds Pooya Hatami and
Avishay Tal as authors.
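For orientation, the following are the standard definitions behind these
statements (sketched here for the reader; they are not quoted from the paper).
For a total Boolean function $f\colon\{0,1\}^n\to\{0,1\}$, sensitivity and
certificate complexity are
\[
  s(f) = \max_{x}\bigl|\{\, i \in [n] : f(x) \neq f(x \oplus e_i) \,\}\bigr|,
  \qquad
  C(f) = \max_{x}\,\min\bigl\{\, |S| : \text{fixing } x_S \text{ forces the value of } f \,\bigr\},
\]
and a power-$k$ separation between measures $M_1$ and $M_2$ means a family of
total functions with $M_1(f) \geq M_2(f)^{\,k - o(1)}$.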
Spectral Sparsification and Regret Minimization Beyond Matrix Multiplicative Updates
In this paper, we provide a novel construction of the linear-sized spectral
sparsifiers of Batson, Spielman and Srivastava [BSS14]. While previous
constructions [BSS14, Zou12] required considerably higher running time, our
sparsification routine can be implemented in almost-quadratic running time.
The fundamental conceptual novelty of our work is the leveraging of a strong
connection between sparsification and a regret minimization problem over
density matrices. This connection was known to provide an interpretation of the
randomized sparsifiers of Spielman and Srivastava [SS11] via the application of
matrix multiplicative weight updates (MWU) [CHS11, Vis14]. In this paper, we
explain how matrix MWU naturally arises as an instance of the
Follow-the-Regularized-Leader framework and generalize this approach to yield a
larger class of updates. This new class allows us to accelerate the
construction of linear-sized spectral sparsifiers, and give novel insights on
the motivation behind Batson, Spielman and Srivastava [BSS14].
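As a rough illustration of the Follow-the-Regularized-Leader viewpoint (a
standard derivation, sketched here for orientation rather than quoted from the
paper): over density matrices, following the regularized leader on the
cumulative linear losses $\langle F_s, X\rangle$ with the negative von Neumann
entropy as regularizer recovers exactly the matrix MWU iterate,
\[
  X_{t+1}
  = \operatorname*{arg\,min}_{X \succeq 0,\ \operatorname{tr} X = 1}
    \Bigl\{ \eta \sum_{s \le t} \langle F_s, X \rangle
            + \operatorname{tr}\bigl(X \log X\bigr) \Bigr\}
  = \frac{\exp\bigl(-\eta \sum_{s \le t} F_s\bigr)}
         {\operatorname{tr}\exp\bigl(-\eta \sum_{s \le t} F_s\bigr)},
\]
and swapping the entropy for other strongly convex regularizers is what yields
the larger class of updates referred to above.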
SubTuning: Efficient Finetuning for Multi-Task Learning
Finetuning a pretrained model has become a standard approach for training
neural networks on novel tasks, resulting in fast convergence and improved
performance. In this work, we study an alternative finetuning method, where
instead of finetuning all the weights of the network, we only train a carefully
chosen subset of layers, keeping the rest of the weights frozen at their
initial (pretrained) values. We demonstrate that subset finetuning (or
SubTuning) often achieves accuracy comparable to full finetuning of the model,
and even surpasses the performance of full finetuning when training data is
scarce. Therefore, SubTuning allows deploying new tasks at minimal
computational cost, while enjoying the benefits of finetuning the entire model.
This yields a simple and effective method for multi-task learning, where
different tasks do not interfere with one another, and yet share most of the
resources at inference time. We demonstrate the efficiency of SubTuning across
multiple tasks, using different network architectures and pretraining methods.
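A minimal code sketch of the idea described above (not the authors'
implementation; it assumes PyTorch and torchvision, and the layer names
"layer3" and "fc" are hypothetical examples of the chosen subset):

# Sketch: finetune only a chosen subset of layers of a pretrained network,
# keeping all other weights frozen at their pretrained values.
import torch
import torchvision

model = torchvision.models.resnet18(weights="DEFAULT")  # pretrained backbone

# New task head (hypothetical 10-class task).
model.fc = torch.nn.Linear(model.fc.in_features, 10)

# Freeze everything, then unfreeze only the selected subset of layers.
for p in model.parameters():
    p.requires_grad = False
for name, module in model.named_children():
    if name in {"layer3", "fc"}:  # the carefully chosen subset
        for p in module.parameters():
            p.requires_grad = True

# The optimizer only updates the unfrozen parameters, so per-task training
# cost and per-task storage scale with the subset, not the full network.
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3, momentum=0.9
)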
Gifsy-1 Prophage IsrK with Dual Function as Small and Messenger RNA Modulates Vital Bacterial Machineries
While an increasing number of conserved small regulatory RNAs (sRNAs) are known to function in general bacterial physiology, the roles and modes of action of sRNAs from horizontally acquired genomic regions remain poorly understood. The IsrK sRNA of the Gifsy-1 prophage of Salmonella belongs to the latter class. This regulatory RNA exists in two isoforms. The first isoform forms when a portion of the transcripts originating from the isrK promoter reads through the IsrK transcription terminator, producing a translationally inactive mRNA target. Acting in trans, the second isoform, the short IsrK RNA, binds the inactive transcript, rendering it translationally active. By switching on translation of the first isoform, short IsrK indirectly activates the production of AntQ, an antiterminator protein encoded upstream of isrK. Expression of antQ globally interferes with transcription termination, resulting in bacterial growth arrest and ultimately cell death. Escherichia coli and Salmonella cells expressing AntQ display condensed chromatin morphology and localization of UvrD to the nucleoid. The toxic phenotype of AntQ can be rescued by co-expression of the transcription termination factor Rho, or of RNase H, which protects genomic DNA from breaks by resolving R-loops. We propose that AntQ causes conflicts between the transcription and replication machineries and thus promotes DNA damage. The isrK locus represents a unique example of an island-encoded sRNA that exerts a highly complex regulatory mechanism to tune the expression of a toxic protein.
Robustness and Generalization
We derive generalization bounds for learning algorithms based on their
robustness: the property that if a testing sample is "similar" to a training
sample, then the testing error is close to the training error. This provides a
novel approach, different from the complexity or stability arguments, to study
generalization of learning algorithms. We further show that a weak notion of
robustness is both sufficient and necessary for generalizability, which implies
that robustness is a fundamental property for learning algorithms to work.
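Schematically (a paraphrase of the property stated above, not the paper's exact
theorem): robustness of the learned hypothesis $A_{\mathcal S}$ asks that
\[
  z \text{ similar to some } s \in \mathcal{S}
  \;\Longrightarrow\;
  \bigl| \ell(A_{\mathcal S}, z) - \ell(A_{\mathcal S}, s) \bigr|
  \le \varepsilon(\mathcal S),
\]
so that, up to a deviation term that shrinks with the sample size, the test
error of $A_{\mathcal S}$ is within roughly $\varepsilon(\mathcal S)$ of its
training error.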
- …