Non Fourier Heat Transfer Across a Gun Barrel
In this project, hyperbolic (non-Fourier) heat transfer across a gun barrel is investigated. First, a comparison is made among several research works on parabolic and hyperbolic heat equations in this field. The governing non-Fourier heat equations are then formulated and normalized, yielding a characteristic Peclet number that depends on the local sound speed and the thermal diffusion speed. A code was written in MATLAB for three different Peclet numbers, using the MacCormack predictor-corrector algorithm to integrate the equations and plot the results. As the Peclet number is varied, a transition is seen from hyperbolic wave-like behavior to the parabolic diffusion curve.
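The abstract's numerical scheme can be sketched as follows. This is a hedged illustration, not the project's MATLAB code: the non-Fourier (Cattaneo) heat equation is written as a first-order hyperbolic system and advanced with MacCormack's predictor-corrector method; all parameter values, the relaxation time `tau`, and the boundary conditions are illustrative assumptions.

```python
import numpy as np

# Nondimensional Cattaneo system (tau = thermal relaxation time):
#   dT/dt = -dq/dx
#   dq/dt = -(q + dT/dx) / tau
# Characteristic (thermal wave) speed: c = 1 / sqrt(tau).

def maccormack_cattaneo(nx=201, nt=200, tau=0.05, length=1.0, cfl=0.5):
    dx = length / (nx - 1)
    c = 1.0 / np.sqrt(tau)          # thermal wave speed
    dt = cfl * dx / c               # CFL-limited time step
    T = np.zeros(nx)                # temperature
    q = np.zeros(nx)                # heat flux
    T[0] = 1.0                      # suddenly heated inner wall
    for _ in range(nt):
        # predictor step: forward differences
        Tp, qp = T.copy(), q.copy()
        Tp[:-1] = T[:-1] - dt / dx * (q[1:] - q[:-1])
        qp[:-1] = q[:-1] - dt / tau * (q[:-1] + (T[1:] - T[:-1]) / dx)
        # corrector step: backward differences on the predicted values
        Tn, qn = T.copy(), q.copy()
        Tn[1:] = 0.5 * (T[1:] + Tp[1:] - dt / dx * (qp[1:] - qp[:-1]))
        qn[1:] = 0.5 * (q[1:] + qp[1:]
                        - dt / tau * (qp[1:] + (Tp[1:] - Tp[:-1]) / dx))
        T, q = Tn, qn
        T[0], q[-1] = 1.0, 0.0      # fixed-temperature / insulated ends
    return np.linspace(0.0, length, nx), T
```

For small `tau` the wave front steepens and the solution approaches the parabolic (Fourier) profile, which is the hyperbolic-to-parabolic transition the abstract reports as the Peclet number is varied.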
Optimal Gossip Algorithms for Exact and Approximate Quantile Computations
This paper gives drastically faster gossip algorithms to compute exact and approximate quantiles.

Gossip algorithms, which allow each node to contact a uniformly random other node in each round, have been intensely studied and adopted in many applications due to their fast convergence and their robustness to failures. Kempe et al. [FOCS'03] gave gossip algorithms to compute important aggregate statistics if every node is given a value. In particular, they gave a beautiful O(log n)-round algorithm to ε-approximate the sum of all values and an O(log² n)-round algorithm to compute the exact φ-quantile, i.e., the ⌈φn⌉-th smallest value.

We give a quadratically faster and in fact optimal gossip algorithm for the exact φ-quantile problem, which runs in O(log n) rounds. We furthermore show that one can achieve an exponential speedup if one allows for an ε-approximation: we give an O(log log n + log(1/ε))-round gossip algorithm which computes a value of rank between (φ−ε)n and (φ+ε)n at every node. Our algorithms are extremely simple and very robust: they can be operated with the same running times even if every transmission fails with a, potentially different, constant probability. We also give a matching Ω(log log n + log(1/ε)) lower bound which shows that our algorithm is optimal for all values of ε.
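The gossip communication model the abstract relies on is easy to simulate. The toy sketch below is an illustration of that model only, not the paper's quantile algorithm: every node contacts one uniformly random node per round, and with a simple keep-the-minimum rule all nodes learn the global minimum (the 0-quantile) in O(log n) rounds with high probability.

```python
import math
import random

def gossip_min(values, rounds):
    """Push-pull gossip: each node contacts a uniformly random node
    each round and both keep the smaller of their two values."""
    vals = list(values)
    n = len(vals)
    for _ in range(rounds):
        for i in range(n):
            j = random.randrange(n)       # uniformly random contact
            m = min(vals[i], vals[j])
            vals[i] = vals[j] = m         # both nodes keep the minimum
    return vals

random.seed(0)
n = 1024
out = gossip_min(range(n), rounds=2 * int(math.log2(n)))
# after O(log n) rounds, (nearly) every node holds the global minimum 0
```

The paper's contribution is that general φ-quantiles, not just the extremes, can be computed in this model in O(log n) rounds exactly, and even exponentially faster approximately.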
SynBench: Task-Agnostic Benchmarking of Pretrained Representations using Synthetic Data
Recent success in fine-tuning large models that are pretrained on broad data at scale has led to a significant paradigm shift in deep learning, from task-centric model design to task-agnostic representation learning followed by task-specific fine-tuning. Since the representations of pretrained models serve as a foundation for many downstream tasks, this paper proposes a new task-agnostic framework, SynBench, to measure the quality of pretrained representations using synthetic data. We establish a reference from the theoretically derived robustness-accuracy tradeoff of a class-conditional Gaussian mixture. Given a pretrained model, the representations of data synthesized from this Gaussian mixture are compared against the reference to infer representation quality. By taking the ratio of the area under the robustness-accuracy curve between the raw data and their representations, SynBench offers a quantifiable score for robustness-accuracy benchmarking. Our framework applies to a wide range of pretrained models taking continuous data inputs and is independent of the downstream tasks and datasets. Evaluated with several pretrained vision transformer models, the experimental results show that the SynBench score matches the actual linear probing performance of the pretrained models on downstream tasks well. Moreover, our framework can be used to inform the design of robust linear probing on pretrained representations to mitigate the robustness-accuracy tradeoff in downstream tasks.
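The scoring idea above can be sketched in a few lines. This is a hedged illustration of the procedure as the abstract describes it: synthesize data from a class-conditional Gaussian mixture, compare the robustness-accuracy curve of a model's representations against that of the raw synthetic data, and report the ratio of the areas under the two curves. The "encoder" here is a stand-in random linear map, and the estimator details are illustrative assumptions, not the paper's exact procedure.

```python
from math import sqrt

import numpy as np

def robust_accuracy_curve(X, y, radii):
    # near-optimal linear direction for a symmetric Gaussian mixture:
    # the (normalized) difference of the class means
    w = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)
    w /= np.linalg.norm(w)
    signed_margin = (X @ w) * np.where(y == 1, 1.0, -1.0)
    # accuracy if an l2-bounded adversary may move each input by radius r
    return np.array([np.mean(signed_margin > r) for r in radii])

rng = np.random.default_rng(0)
d, n, s = 32, 4000, 2.0
mu = np.zeros(d)
mu[0] = s                                   # class means at +mu and -mu
y = rng.integers(0, 2, size=n)
X = rng.standard_normal((n, d)) + np.where(y[:, None] == 1, mu, -mu)

radii = np.linspace(0.0, 2.0 * s, 50)
encoder = rng.standard_normal((d, d)) / sqrt(d)   # stand-in "pretrained" encoder
step = radii[1] - radii[0]
auc_raw = robust_accuracy_curve(X, y, radii).sum() * step
auc_rep = robust_accuracy_curve(X @ encoder, y, radii).sum() * step
score = auc_rep / auc_raw                   # SynBench-style quality score
```

A score near 1 means the representations preserve the robustness-accuracy behavior of the raw synthetic data; a much smaller score flags a lossy representation.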
GENERALIZING ROBUSTNESS VERIFICATION FOR MACHINE LEARNING
Verifying the robustness of neural networks under a specified threat model is a fundamental yet challenging task. Although much work has been done to quantify the robustness of DNNs to ℓₚ-norm-bounded adversarial attacks, there are still gaps between the available guarantees and those needed in practice. In this thesis we focus on resolving two of these limitations. 1) While current verification methods mainly focus on the ℓₚ-norm threat model on the input instances, robustness verification against semantic adversarial attacks that induce large ℓₚ-norm perturbations, such as color shifting and lighting adjustment, is beyond their capacity. To bridge this gap, we propose a framework, Semantify-NN, to extend ℓₚ-norm verification to semantic verification. 2) Randomized smoothing is a recently proposed defense against adversarial attacks that has achieved state-of-the-art provable robustness against ℓ₂ perturbations. A number of publications have extended its guarantees to other metrics, such as ℓ₁ or ℓ∞, by using different smoothing measures. Although the current framework has been shown to yield near-optimal ℓₚ radii, the total safety region it certifies can be arbitrarily small compared to the optimal one. We provide Higher Order Verification, a general framework that improves the certified safety region of these smoothed classifiers without changing the underlying smoothing scheme, allowing the resulting classifier to be provably robust to multiple threat models at once. M.Eng. thesis.
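The randomized-smoothing certificate the thesis builds on can be sketched as follows, in the style of Cohen et al.: the smoothed classifier returns the class the base classifier predicts most often under Gaussian noise, and a bound pA on the top-class probability gives a certified ℓ₂ radius R = σ·Φ⁻¹(pA). The base classifier below is a toy linear rule, and pA is taken as the plain empirical frequency rather than a proper lower confidence bound, so this illustrates the mechanics only.

```python
from statistics import NormalDist

import numpy as np

def base_classifier(x):
    return int(x[0] + x[1] > 1.0)   # toy two-class decision rule

def certify(x, sigma=0.5, n=10000, seed=0):
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n, x.size)) * sigma
    preds = np.array([base_classifier(x + e) for e in noise])
    top = int(np.bincount(preds, minlength=2).argmax())
    p_a = min(np.mean(preds == top), 1.0 - 1e-6)   # cap to keep R finite
    if p_a <= 0.5:
        return top, 0.0                             # abstain: no certificate
    radius = sigma * NormalDist().inv_cdf(p_a)      # certified l2 radius
    return top, radius

cls, radius = certify(np.array([2.0, 2.0]))
```

The thesis's point 2) is that the single ℓ₂ ball this procedure certifies can be far smaller than the true safe region, which is what Higher Order Verification addresses without changing the smoothing scheme itself.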
Towards verifying robustness of neural networks against a family of semantic perturbations
Verifying the robustness of neural networks under a specified threat model is a fundamental yet challenging task. While current verification methods mainly focus on the ℓₚ-norm threat model on the input instances, robustness verification against semantic adversarial attacks that induce large ℓₚ-norm perturbations, such as color shifting and lighting adjustment, is beyond their capacity. To bridge this gap, we propose Semantify-NN, a model-agnostic and generic robustness verification approach against semantic perturbations for neural networks. By simply inserting our proposed semantic perturbation layers (SP-layers) before the input layer of any given model, Semantify-NN remains model-agnostic, and any ℓₚ-norm-based verification tool can be used to verify the model's robustness against semantic perturbations. We illustrate the principles of designing the SP-layers and provide examples of semantic perturbations for image classification in the space of hue, saturation, lightness, brightness, contrast, and rotation, respectively. In addition, an efficient refinement technique is proposed to further improve the semantic certificates significantly. Experiments on various network architectures and different datasets demonstrate the superior verification performance of Semantify-NN over ℓₚ-norm-based verification frameworks that naively convert semantic perturbations into ℓₚ-norm bounds. The results show that Semantify-NN can support robustness verification against a wide range of semantic perturbations.
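The SP-layer idea can be illustrated with a one-parameter example. In this hedged sketch (the names and the toy model are illustrative, not the paper's code), a brightness shift θ is exposed as an extra network input: a semantic perturbation that is large in ℓₚ pixel space becomes a small interval on θ, which an existing interval-style verifier can bound.

```python
import numpy as np

def sp_layer_brightness(x, theta):
    # semantic perturbation layer: theta shifts every pixel uniformly
    return np.clip(x + theta, 0.0, 1.0)

def toy_model(x):
    # stand-in classifier: mean-intensity threshold
    return int(x.mean() > 0.5)

def interval_verify_brightness(x, theta_lo, theta_hi):
    # crude one-parameter "verifier": the mean after the SP-layer is
    # monotone in theta, so checking the two endpoints certifies the
    # prediction over the whole interval [theta_lo, theta_hi]
    lo = toy_model(sp_layer_brightness(x, theta_lo))
    hi = toy_model(sp_layer_brightness(x, theta_hi))
    return lo == hi            # True => prediction provably constant

x = np.full((8, 8), 0.7)
robust = interval_verify_brightness(x, -0.1, 0.1)    # small range: certified
fragile = interval_verify_brightness(x, -0.3, 0.3)   # large range: not certified
```

Real SP-layers for hue, contrast, or rotation are multi-dimensional and are verified with full ℓₚ-norm toolchains rather than this endpoint check, but the reparameterization principle is the same.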
Hidden Cost of Randomized Smoothing
The fragility of modern machine learning models has drawn a considerable amount of attention from both academia and the public. While immense interest has gone into either crafting adversarial attacks as a way to measure the robustness of neural networks or devising worst-case analytical robustness verification with guarantees, few methods enjoy both scalability and robustness guarantees at the same time. As an alternative to these attempts, randomized smoothing adopts a different prediction rule that enables statistical robustness arguments which easily scale to large networks. However, in this paper, we point out the side effects of current randomized smoothing workflows. Specifically, we articulate and prove two major points: 1) the decision boundaries of smoothed classifiers will shrink, resulting in disparity in class-wise accuracy; 2) applying noise augmentation in the training process does not necessarily resolve the shrinking issue, due to the inconsistent learning objectives.
Comment: Jeet Mohapatra and Ching-Yun Ko contributed equally. To appear in AISTATS 202
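The shrinking effect in point 1) has a simple one-dimensional illustration (a hedged sketch, not the paper's construction): suppose the base classifier assigns the minority class only on a narrow interval [a, b]. Under Gaussian smoothing, the top-class probability at x is the mass of N(x, σ²) inside [a, b], which peaks at the midpoint with value 2Φ(w/(2σ)) − 1 for width w = b − a. Once σ is large relative to w, that peak drops below 1/2 and the smoothed classifier never predicts the minority class anywhere: its decision region shrinks away entirely.

```python
from statistics import NormalDist

Phi = NormalDist().cdf

def p_class1(x, a, b, sigma):
    # probability that a Gaussian perturbation of x lands inside [a, b],
    # i.e. the smoothed classifier's vote share for class 1 at x
    return Phi((b - x) / sigma) - Phi((a - x) / sigma)

a, b = 0.0, 0.5            # narrow minority-class region of width 0.5
mid = (a + b) / 2
p_class1(mid, a, b, sigma=0.1)   # ~0.988 > 0.5: region survives smoothing
p_class1(mid, a, b, sigma=0.5)   # ~0.383 < 0.5: region shrinks to nothing
```

This is exactly the class-wise disparity the abstract describes: classes occupying small or thin regions lose accuracy under smoothing even when the base classifier is perfect on them.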