60 research outputs found

    Second-Order Stochastic Optimization for Machine Learning in Linear Time

    First-order stochastic methods are the state of the art in large-scale machine learning optimization owing to their efficient per-iteration complexity. Second-order methods, while able to provide faster convergence, have been much less explored because of the high cost of computing second-order information. In this paper we develop second-order stochastic methods for optimization problems in machine learning that match the per-iteration cost of gradient-based methods and, in certain settings, improve upon the overall running time of popular first-order methods. Furthermore, our algorithm has the desirable property of being implementable in time linear in the sparsity of the input data.
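
    The abstract does not spell out the update rule, but the general flavor of such linear-time second-order methods can be sketched: estimate the inverse-Hessian-vector product H^{-1} grad with a truncated Neumann series built from per-example Hessian-vector products, each of which costs time linear in the sparsity of the sampled example. The sketch below is a minimal illustration for L2-regularized logistic regression, not the paper's algorithm; the estimator depth S and the scaling alpha are my own assumptions.

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def hvp_single(w, x, v, lam=1e-3):
            # Hessian-vector product of the regularized logistic loss on ONE example:
            # H v = s(1-s) (x.v) x + lam v with s = sigmoid(x.w); costs O(nnz(x)) time.
            s = sigmoid(x @ w)
            return s * (1.0 - s) * (x @ v) * x + lam * v

        def estimate_inv_hvp(w, X, grad, S=50, alpha=0.1, lam=1e-3):
            # Truncated Neumann series with one random example per term:
            # v <- grad + v - alpha * H_i v converges to (alpha H)^{-1} grad when the
            # spectrum of alpha*H lies in (0, 1], so alpha*v estimates H^{-1} grad.
            v = grad.copy()
            for _ in range(S):
                i = np.random.randint(X.shape[0])
                v = grad + v - alpha * hvp_single(w, X[i], v, lam)
            return alpha * v

        def newton_type_step(w, X, y, lam=1e-3):
            # Full gradient of the regularized logistic loss, followed by an
            # approximate Newton step using the stochastic inverse-Hessian estimate.
            g = X.T @ (sigmoid(X @ w) - y) / len(y) + lam * w
            return w - estimate_inv_hvp(w, X, g, lam=lam)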

    On the Expansion of Group-Based Lifts

    A $k$-lift of an $n$-vertex base graph $G$ is a graph $H$ on $n \times k$ vertices, where each vertex $v$ of $G$ is replaced by $k$ vertices $v_1, \cdots, v_k$ and each edge $(u,v)$ in $G$ is replaced by a matching representing a bijection $\pi_{uv}$, so that the edges of $H$ are of the form $(u_i, v_{\pi_{uv}(i)})$. Lifts have been studied as a means to efficiently construct expanders. In this work, we study lifts obtained from groups and group actions. We derive the spectrum of such lifts via the representation theory principles of the underlying group. Our main results are: (1) There is a constant $c_1$ such that for every $k \geq 2^{c_1 n d}$, there does not exist an abelian $k$-lift $H$ of any $n$-vertex $d$-regular base graph with $H$ being almost Ramanujan (nontrivial eigenvalues of the adjacency matrix at most $O(\sqrt{d})$ in magnitude). This can be viewed as an analogue of the well-known no-expansion result for abelian Cayley graphs. (2) A uniform random lift in a cyclic group of order $k$ of any $n$-vertex $d$-regular base graph $G$, with the nontrivial eigenvalues of the adjacency matrix of $G$ bounded by $\lambda$ in magnitude, has the new nontrivial eigenvalues also bounded by $\lambda + O(\sqrt{d})$ in magnitude with probability $1 - k e^{-\Omega(n/d^2)}$. In particular, there is a constant $c_2$ such that for every $k \leq 2^{c_2 n/d^2}$, there exists a lift $H$ of every Ramanujan graph in a cyclic group of order $k$ with $H$ being almost Ramanujan. We use this to design a quasi-polynomial time algorithm to construct almost Ramanujan expanders deterministically. The existence of expanding lifts in cyclic groups of order $k = 2^{O(n/d^2)}$ can be viewed as a lower bound on the order $k_0$ of the largest abelian group that produces expanding lifts. Our results show that the lower bound matches the upper bound for $k_0$ (up to $d^3$ in the exponent).
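
    As a concrete illustration of the cyclic (shift) lifts in result (2), each base edge can be assigned a random shift in $\mathbb{Z}_k$ and lifted to a cyclic matching, after which the new eigenvalues of the lift can be inspected numerically. The sketch below is my own illustration, not code from the paper.

        import numpy as np

        def random_cyclic_lift(A, k, seed=None):
            # Adjacency matrix of a random k-lift of the base graph with adjacency A,
            # where each base edge (u, v) gets a shift s in Z_k and is replaced by the
            # cyclic matching (u_i, v_{(i+s) mod k}).
            rng = np.random.default_rng(seed)
            n = A.shape[0]
            H = np.zeros((n * k, n * k))
            P = np.roll(np.eye(k), 1, axis=1)      # generator of the cyclic group Z_k
            for u in range(n):
                for v in range(u + 1, n):
                    if A[u, v]:
                        B = np.linalg.matrix_power(P, rng.integers(k))
                        H[u*k:(u+1)*k, v*k:(v+1)*k] = B
                        H[v*k:(v+1)*k, u*k:(u+1)*k] = B.T
            return H

        # Example: lift the complete graph K_6 (d = 5) by k = 4 and compare the largest
        # eigenvalues of the base graph and of its lift; the base spectrum is contained
        # in the lift spectrum, and the new eigenvalues should stay small.
        A = np.ones((6, 6)) - np.eye(6)
        H = random_cyclic_lift(A, k=4, seed=0)
        print(np.sort(np.linalg.eigvalsh(A))[::-1][:3])
        print(np.sort(np.linalg.eigvalsh(H))[::-1][:3])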

    GenGradAttack: Efficient and Robust Targeted Adversarial Attacks Using Genetic Algorithms and Gradient-Based Fine-Tuning

    Adversarial attacks pose a critical threat to the reliability of machine learning models, potentially undermining trust in practical applications. As machine learning models are deployed in vital domains such as autonomous vehicles, healthcare, and finance, they become susceptible to adversarial examples: crafted inputs that induce erroneous high-confidence predictions. These attacks fall into two main categories: white-box, with full knowledge of the model architecture, and black-box, with limited or no access to internal details. This paper introduces a novel approach for targeted adversarial attacks in black-box scenarios. By combining genetic algorithms with gradient-based fine-tuning, our method efficiently explores the input space for perturbations without requiring access to internal model details. Gradient-based fine-tuning then optimizes these perturbations, aligning them with the target model's decision boundary. This dual strategy aims to evolve perturbations that effectively mislead target models while minimizing queries, ensuring stealthy attacks. Results demonstrate the efficacy of GenGradAttack, which achieves a 95.06% Adversarial Success Rate (ASR) on MNIST with a median query count of 556. In contrast, conventional GenAttack achieved 100% ASR but required significantly more queries. When applied to InceptionV3 and Ens4AdvInceptionV3 on ImageNet, GenGradAttack outperformed GenAttack with 100% and 96% ASR, respectively, and fewer median queries. These results highlight the efficiency and effectiveness of our approach in generating adversarial examples with reduced query counts, advancing our understanding of adversarial vulnerabilities in practical contexts.
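
    A rough sketch of the genetic-search half of such a targeted black-box attack is given below; it is my own illustration, not the authors' implementation, and it omits the gradient-based fine-tuning stage. The victim model is represented by a hypothetical query_probs function returning class probabilities, and the population size, mutation rate, and L_inf budget eps are illustrative assumptions.

        import numpy as np

        def genetic_targeted_search(x, target, query_probs, eps=0.1, pop_size=20,
                                    generations=200, mut_rate=0.05, seed=None):
            # Evolve a population of L_inf-bounded perturbations of input x; the
            # fitness of a candidate is the black-box probability of the target class.
            rng = np.random.default_rng(seed)
            pop = rng.uniform(-eps, eps, size=(pop_size,) + x.shape)
            for _ in range(generations):
                candidates = np.clip(x + pop, 0.0, 1.0)
                all_probs = [query_probs(c) for c in candidates]   # one query each
                fitness = np.array([p[target] for p in all_probs])
                best = int(np.argmax(fitness))
                if int(np.argmax(all_probs[best])) == target:
                    return candidates[best]                        # targeted success
                # fitness-proportional selection, uniform crossover, sparse mutation
                probs = fitness / fitness.sum() if fitness.sum() > 0 else None
                parents = rng.choice(pop_size, size=(pop_size, 2), p=probs)
                mask = rng.random((pop_size,) + x.shape) < 0.5
                children = np.where(mask, pop[parents[:, 0]], pop[parents[:, 1]])
                mutate = rng.random(children.shape) < mut_rate
                children = children + mutate * rng.uniform(-eps, eps, size=children.shape)
                pop = np.clip(children, -eps, eps)
                pop[0] = candidates[best] - x                      # keep the current elite
            return np.clip(x + pop[0], 0.0, 1.0)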

    Electric Field Effect on Skyrmion Phase in Chiral Lattice Ferrimagnet Cu2OSeO3

    Insulating materials are of recent research interest in spintronics because of the absence of Joule heating. There has been continued interest in the insulating material Cu2OSeO3 because it possesses a novel skyrmion phase [1], a specific type of spin vortex in the lattice characterized by a quantized topological number, near the paramagnetic phase boundary [2]. This skyrmion phase can be harnessed for switching applications in data storage devices [3–6]. We study the effect of positive and negative DC electric fields on the skyrmion phase boundary of this material by magnetoelectric susceptibility (dM/dE) measurements, a sensitive technique for scanning the phase boundary [7,8]. We map out the phase diagram for positive, negative, and zero DC electric fields and find that the electric field effect is largest at the phase boundaries and decreases inside the phase; there is no cut-off for the electric field effect. The effect is also largest at the upper skyrmion boundary compared with the other phase boundaries. We further show a small expansion (contraction) of the skyrmion phase along the temperature axis upon application of a positive (negative) DC electric field.