Second-Order Stochastic Optimization for Machine Learning in Linear Time
First-order stochastic methods are the state of the art in large-scale
machine learning optimization owing to their efficient per-iteration complexity.
Second-order methods, while able to provide faster convergence, have been much
less explored due to the high cost of computing the second-order information.
In this paper we develop second-order stochastic methods for optimization
problems in machine learning that match the per-iteration cost of gradient-based
methods and, in certain settings, improve upon the overall running time
of popular first-order methods. Furthermore, our algorithm has the desirable
property of being implementable in time linear in the sparsity of the input
data.
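One standard way to obtain a second-order step at first-order per-iteration cost is to estimate the inverse-Hessian-vector product with a truncated Neumann series built from sampled Hessian-vector products, each of which touches only a single data row. The sketch below illustrates this idea for a ridge-regression objective; the objective, function names, and parameter values are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def hvp(X, lam, v, i):
    # Unbiased stochastic Hessian-vector product for the ridge objective
    # f(w) = ||Xw - y||^2 / (2n) + lam * ||w||^2 / 2, whose Hessian is
    # H = X^T X / n + lam * I: sample one row x_i and return x_i (x_i . v) + lam * v.
    x = X[i]
    return x * (x @ v) + lam * v

def newton_direction(X, lam, grad, depth=300, rng=None):
    # Neumann-series estimator of H^{-1} grad:
    #   H^{-1} g = sum_{j >= 0} (I - H)^j g,  valid when 0 < H < I
    # (scale the data so this holds), computed by the recursion
    #   u <- g + u - H u
    # with a freshly sampled Hessian-vector product at each step.
    rng = rng or np.random.default_rng(0)
    n = X.shape[0]
    u = grad.copy()
    for _ in range(depth):
        u = grad + u - hvp(X, lam, u, rng.integers(n))
    return u
```

Each iteration costs time proportional to the sparsity of the sampled row, so a step is no more expensive than a stochastic gradient step; averaging several independent estimates reduces the variance of the resulting direction.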
On the Expansion of Group-Based Lifts
A k-lift of an n-vertex base graph G is a graph H on k·n
vertices, where each vertex u of G is replaced by k vertices u_1, ..., u_k
and each edge (u, v) in G is replaced by a matching
representing a bijection π_{uv}, so that the edges of H are of the form
(u_i, v_{π_{uv}(i)}). Lifts have been studied as a means to efficiently
construct expanders. In this work, we study lifts obtained from groups and
group actions. We derive the spectrum of such lifts via the representation
theory of the underlying group. Our main results are:
(1) There is a constant c1 such that for every k ≥ 2^{c1·n·d}, there
does not exist an abelian k-lift H of any n-vertex d-regular base graph
with H being almost Ramanujan (nontrivial eigenvalues of the adjacency matrix
at most O(√d) in magnitude). This can be viewed as an analogue of the
well-known no-expansion result for abelian Cayley graphs.
(2) A uniform random lift in a cyclic group of order k of any n-vertex
d-regular base graph G, with the nontrivial eigenvalues of the adjacency
matrix of G bounded by λ in magnitude, has the new nontrivial
eigenvalues also bounded by λ + O(√d) in magnitude with probability
1 − k·e^{−Ω(n/d²)}. In particular, there is a constant c2 such that for
every k ≤ 2^{c2·n/d²}, there exists a lift H of every Ramanujan graph in
a cyclic group of order k with H being almost Ramanujan. We use this to
design a quasi-polynomial time algorithm to construct almost Ramanujan
expanders deterministically.
The existence of expanding lifts in cyclic groups of order k = 2^{Ω(n/d²)}
can be viewed as a lower bound on the order of the largest abelian group
that produces expanding lifts. Our results show that this lower bound matches
the upper bound of 2^{O(n·d)} (up to d³ in the exponent).
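To make the object under study concrete, the sketch below builds a random lift of a base graph in the cyclic group Z_k: each base edge gets a uniform shift s, and the corresponding block of the lifted adjacency matrix is the cyclic-shift permutation matrix P^s. It also checks that the base spectrum survives in the lift (the "new" eigenvalues are the remaining ones). This is a toy illustration of the construction, not the paper's derandomized algorithm.

```python
import numpy as np

def cyclic_lift(A, k, rng=None):
    # Random lift of a base graph (symmetric 0/1 adjacency A) in Z_k.
    # Vertex u becomes copies (u, 0), ..., (u, k-1); for each base edge
    # (u, v) a uniform shift s joins (u, i) to (v, (i + s) mod k),
    # i.e. the (u, v) block of the lifted adjacency matrix is P^s.
    rng = rng or np.random.default_rng(0)
    n = A.shape[0]
    P = np.roll(np.eye(k, dtype=int), 1, axis=1)  # generator of Z_k
    L = np.zeros((n * k, n * k), dtype=int)
    for u in range(n):
        for v in range(u + 1, n):
            if A[u, v]:
                B = np.linalg.matrix_power(P, rng.integers(k))
                L[u*k:(u+1)*k, v*k:(v+1)*k] = B
                L[v*k:(v+1)*k, u*k:(u+1)*k] = B.T
    return L
```

Because the shift matrices commute, the lift's spectrum splits across the k characters of Z_k; the trivial character reproduces the base graph's spectrum, which is why only the nontrivial ("new") eigenvalues are at issue in the results above.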
GenGradAttack: Efficient and Robust Targeted Adversarial Attacks Using Genetic Algorithms and Gradient-Based Fine-Tuning
Adversarial attacks pose a critical threat to the reliability of machine learning models, potentially undermining trust in practical applications. As machine learning models find deployment in vital domains like autonomous vehicles, healthcare, and finance, they become susceptible to adversarial examples—crafted inputs that induce erroneous high-confidence predictions. These attacks fall into two main categories: white-box, with full knowledge of model architecture, and black-box, with limited or no access to internal details. This paper introduces a novel approach for targeted adversarial attacks in black-box scenarios. By combining genetic algorithms and gradient-based fine-tuning, our method efficiently explores the input space for perturbations without requiring access to internal model details. Subsequently, gradient-based fine-tuning optimizes these perturbations, aligning them with the target model’s decision boundary. This dual strategy aims to evolve perturbations that effectively mislead target models while minimizing queries, ensuring stealthy attacks. Results demonstrate the efficacy of GenGradAttack, achieving a remarkable 95.06% Adversarial Success Rate (ASR) on MNIST with a median query count of 556. In contrast, conventional GenAttack achieved 100% ASR but required significantly more queries. When applied to InceptionV3 and Ens4AdvInceptionV3 on ImageNet, GenGradAttack outperformed GenAttack with 100% and 96% ASR, respectively, and fewer median queries. These results highlight the efficiency and effectiveness of our approach in generating adversarial examples with reduced query counts, advancing our understanding of adversarial vulnerabilities in practical contexts.
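The genetic stage of such an attack can be sketched as follows: a population of norm-bounded perturbations is scored using only black-box probability queries, and selection, crossover, and mutation evolve it toward the target class. The gradient-based fine-tuning stage that the paper layers on top is omitted here; the toy `predict` model, helper names, and parameter values are illustrative assumptions, not GenGradAttack's actual implementation.

```python
import numpy as np

def genetic_attack(predict, x, target, eps=0.5, pop=8, gens=60, rng=None):
    # Evolve an L_inf-bounded perturbation that raises the black-box
    # model's probability for class `target`. The only model access is
    # `predict`, which maps a batch of inputs to class-probability rows.
    rng = rng or np.random.default_rng(0)
    P = rng.uniform(-eps, eps, size=(pop,) + x.shape)        # population
    for _ in range(gens):
        fit = predict(np.clip(x + P, 0.0, 1.0))[:, target]   # fitness
        elite = P[np.argsort(fit)[::-1][: pop // 2]]         # selection
        # crossover: average two random elite parents, then mutate
        pa = elite[rng.integers(len(elite), size=pop - len(elite))]
        pb = elite[rng.integers(len(elite), size=pop - len(elite))]
        kids = (pa + pb) / 2 + rng.normal(0.0, eps / 10, pa.shape)
        P = np.clip(np.concatenate([elite, kids]), -eps, eps)
    fit = predict(np.clip(x + P, 0.0, 1.0))[:, target]
    return np.clip(x + P[np.argmax(fit)], 0.0, 1.0)
```

Each generation issues one batched query, so the query budget is roughly pop × gens; the fine-tuning stage would then refine the winning perturbation, e.g. with estimated or surrogate gradients, to cut the remaining queries.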
Electric Field Effect on the Skyrmion Phase in the Chiral Lattice Ferrimagnet Cu2OSeO3
Insulator materials are of recent research interest in spintronics because of the absence of Joule heating. There has been continued interest in the insulating material Cu2OSeO3 because it possesses a novel Skyrmion phase [1], a specific type of spin vortex in the lattice characterized by a quantized topological number, near the paramagnetic phase boundary [2]. This Skyrmion phase can be harnessed for switching applications in data storage devices [3–6]. We study the effect of positive and negative DC electric fields on the Skyrmion phase boundary of this material by magnetoelectric susceptibility (dM/dE) measurements, a sensitive technique for scanning phase boundaries [7,8]. We map out the phase diagram for positive, negative, and zero DC electric fields and find that the electric field effect is maximum at the phase boundaries and decreases inside the phase; there is no cut-off for the electric field effect. The electric field effect is also largest at the upper Skyrmion boundary compared to the other phase boundaries. Finally, we show a small expansion (contraction) of the Skyrmion phase along the temperature axis with the application of a positive (negative) DC electric field.