    Revealing a double-inversion mechanism for the F- + CH3Cl SN2 reaction

    Stereo-specific reaction mechanisms play a fundamental role in chemistry. The back-side attack inversion and front-side attack retention pathways of bimolecular nucleophilic substitution (SN2) reactions are textbook examples of stereo-specific chemical processes. Here, we report an accurate global analytic potential energy surface (PES) for the F- + CH3Cl SN2 reaction, which describes both the back-side and front-side attack substitution pathways as well as the proton-abstraction channel. Moreover, reaction dynamics simulations on this surface reveal a novel double-inversion mechanism, in which an abstraction-induced inversion via a FH···CH2Cl- transition state is followed by a second inversion via the usual [F···CH3···Cl]- saddle point, thereby opening a lower-energy reaction path for retention than the front-side attack. Quasi-classical trajectory computations for the F- + CH3Cl(ν1 = 0, 1) reactions show that the front-side attack is a fast direct process, whereas the double inversion is a slow indirect process.
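
    For orientation, the quasi-classical trajectory (QCT) computations mentioned above propagate classical nuclear motion on the analytic PES, with the reactant vibrational state (here ν1 = 0 or 1) entering only through quasi-classical sampling of the initial conditions. A generic form of the propagated equations (not the paper's specific coordinates or sampling scheme) is:

```latex
% Generic QCT propagation on an analytic potential energy surface V(q);
% q_i, p_i, m_i are nuclear coordinates, momenta and masses. Illustrative only.
\begin{aligned}
H(\mathbf{q},\mathbf{p}) &= \sum_i \frac{p_i^2}{2m_i} + V(\mathbf{q}),\\
\dot{q}_i &= \frac{\partial H}{\partial p_i} = \frac{p_i}{m_i},
\qquad
\dot{p}_i = -\frac{\partial H}{\partial q_i} = -\frac{\partial V}{\partial q_i}.
\end{aligned}
```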

    Reinforcement Learning-Based Black-Box Model Inversion Attacks

    Model inversion attacks are a type of privacy attack that reconstructs the private data used to train a machine learning model, solely by accessing the model. Recently, white-box model inversion attacks that leverage Generative Adversarial Networks (GANs) to distill knowledge from public datasets have received great attention because of their excellent attack performance. In contrast, current black-box model inversion attacks that utilize GANs suffer from issues such as being unable to guarantee completion of the attack within a predetermined number of query accesses, or failing to reach the same level of performance as white-box attacks. To overcome these limitations, we propose a reinforcement learning-based black-box model inversion attack. We formulate the latent space search as a Markov Decision Process (MDP) problem and solve it with reinforcement learning. Our method uses the confidence scores of the generated images to provide rewards to an agent. Finally, the private data can be reconstructed using the latent vectors found by the agent trained in the MDP. Experimental results on various datasets and models demonstrate that our attack successfully recovers the private information of the target model, achieving state-of-the-art attack performance. We emphasize the importance of studies on privacy-preserving machine learning by proposing a more advanced black-box model inversion attack. Comment: CVPR 2023, Accepted.
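
    To make the setup concrete, here is a minimal sketch of the black-box search this abstract describes: a latent vector for a public-data generator is optimized using only the target model's confidence score as reward. The generator, target model, dimensions, and the simple elite-selection update are all invented placeholders standing in for the paper's GAN and RL agent, not the authors' implementation.

```python
# Minimal illustrative sketch (placeholder components, not the paper's code):
# black-box model inversion as a search over a generator's latent space,
# rewarded only by the target model's confidence score for the attacked class.
# The paper trains an RL agent on this MDP; a simple elite-selection update
# stands in for that agent here.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 64                          # assumed latent size of the generator

def generator(z):
    """Stand-in for a GAN generator pretrained on public data."""
    return np.tanh(z)

def target_confidence(image, target_class):
    """Stand-in for the black-box target model's confidence score,
    the only signal the attacker observes per query."""
    logits = np.array([image.sum(), image.std(), image.max()])
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return probs[target_class]

def invert(target_class=0, iters=200, pop=32, elite=4, sigma=1.0):
    mean = np.zeros(LATENT_DIM)
    for _ in range(iters):
        # "Actions" perturb the latent vector; the reward is the confidence score.
        candidates = mean + sigma * rng.standard_normal((pop, LATENT_DIM))
        rewards = np.array([target_confidence(generator(z), target_class)
                            for z in candidates])
        mean = candidates[np.argsort(rewards)[-elite:]].mean(axis=0)
        sigma *= 0.99                    # anneal exploration over time
    return generator(mean)               # reconstruction resembling private data

reconstruction = invert(target_class=0)
```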

    Hard isogeny problems over RSA moduli and groups with infeasible inversion

    We initiate the study of computational problems on elliptic curve isogeny graphs defined over RSA moduli. We conjecture that several variants of the neighbor-search problem over these graphs are hard, and provide a comprehensive list of cryptanalytic attempts on these problems. Moreover, based on the hardness of these problems, we provide a construction of groups with infeasible inversion, where the underlying groups are the ideal class groups of imaginary quadratic orders. Recall that in a group with infeasible inversion, computing the inverse of a group element is required to be hard, while performing the group operation is easy. Motivated by the potential cryptographic application of building a directed transitive signature scheme, the search for a group with infeasible inversion was initiated in the theses of Hohenberger and Molnar (2003). It was later also shown by Irrer et al. (2004) to yield a broadcast encryption scheme. However, to date the only case of a group with infeasible inversion is implied by the much stronger primitive of a self-bilinear map constructed by Yamakawa et al. (2014) based on the hardness of factoring and indistinguishability obfuscation (iO). Our construction gives a candidate without using iO. Comment: Significant revision of the article previously titled "A Candidate Group with Infeasible Inversion" (arXiv:1810.00022v1). Cleared up the constructions by giving toy examples, added "The Parallelogram Attack" (Sec 5.3.2). 54 pages, 8 figures.
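
    As a point of contrast for the "infeasible inversion" requirement described above: in familiar groups such as Z_N^* modulo an RSA number, inverting an element is just as easy as multiplying, which is exactly the asymmetry the primitive must avoid. The toy snippet below only illustrates this easy case; it is unrelated to the paper's isogeny-graph construction.

```python
# In Z_N^* both the group operation and inversion are efficient, so ordinary
# RSA-style groups cannot serve as groups with infeasible inversion. Toy
# parameters only; a real RSA modulus would be ~2048 bits.
p, q = 1009, 2003                 # small primes standing in for RSA primes
N = p * q

a, b = 5, 7
product = (a * b) % N             # group operation: easy
a_inv = pow(a, -1, N)             # modular inversion (extended Euclid): also easy
assert (a * a_inv) % N == 1       # inversion succeeds trivially here
```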

    Model Inversion Attack via Dynamic Memory Learning

    Model Inversion (MI) attacks aim to recover the private training data from the target model, which has raised security concerns about the deployment of DNNs in practice. Recent advances in generative adversarial models have made them particularly effective in MI attacks, primarily due to their ability to generate high-fidelity and perceptually realistic images that closely resemble the target data. In this work, we propose a novel Dynamic Memory Model Inversion Attack (DMMIA) that leverages historically learned knowledge, which interacts with samples during training to induce diverse generations. DMMIA constructs two types of prototypes to inject information about this historically learned knowledge: an Intra-class Multicentric Representation (IMR), which represents target-related concepts by multiple learnable prototypes, and an Inter-class Discriminative Representation (IDR), which characterizes the memorized samples as learned prototypes to capture more privacy-related information. As a result, DMMIA yields a more informative representation, which brings more diverse and discriminative generated results. Experiments on multiple benchmarks show that DMMIA performs better than state-of-the-art MI attack methods.
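
    As a rough illustration of the prototype idea sketched in this abstract (the shapes, feature extractor and weights below are invented placeholders, not DMMIA's actual formulation): an attraction term toward several per-target-class prototypes plays the IMR-like role, a repulsion term away from prototypes memorized for other classes plays the IDR-like role, and such a score could be added to a GAN-based inversion objective.

```python
# Rough sketch of a prototype-based "memory" term for a GAN-based MI attack.
# All names, shapes and weights are placeholders, not the paper's formulation.
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def memory_bonus(feat, target_prototypes, other_prototypes, alpha=1.0, beta=0.5):
    """feat: feature vector of a generated sample.
    target_prototypes: (K, D) multiple prototypes for the target class (IMR-like).
    other_prototypes:  (M, D) prototypes memorized for other classes (IDR-like)."""
    attract = max(cosine(feat, p) for p in target_prototypes)  # pull toward target concepts
    repel = max(cosine(feat, p) for p in other_prototypes)     # push away from other classes
    return alpha * attract - beta * repel   # bonus added to the inversion objective

rng = np.random.default_rng(1)
bonus = memory_bonus(rng.standard_normal(128),
                     target_prototypes=rng.standard_normal((4, 128)),
                     other_prototypes=rng.standard_normal((8, 128)))
```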

    Privacy Attacks Against Biometric Models with Fewer Samples: Incorporating the Output of Multiple Models

    Authentication systems are vulnerable to model inversion attacks, in which an adversary is able to approximate the inverse of a target machine learning model. Biometric models are a prime candidate for this type of attack, because inverting a biometric model allows the attacker to produce a realistic biometric input with which to spoof biometric authentication systems. One of the main constraints in conducting a successful model inversion attack is the amount of training data required. In this work, we focus on iris and facial biometric systems and propose a new technique that drastically reduces the amount of training data necessary. By leveraging the output of multiple models, we are able to conduct model inversion attacks with 1/10th the training set size of Ahmad and Fuller (IJCB 2020) for iris data and 1/1000th the training set size of Mai et al. (Pattern Analysis and Machine Intelligence 2019) for facial data. We denote our new attack technique as structured random with alignment loss. Our attacks are black-box, requiring no knowledge of the weights of the target neural network, only the dimension and values of the output vector. To show the versatility of the alignment loss, we apply our attack framework to the task of membership inference (Shokri et al., IEEE S&P 2017) on biometric data. For iris data, the membership inference attack against classification networks improves from 52% to 62% accuracy. Comment: This is a major revision of a paper titled "Inverting Biometric Models with Fewer Samples: Incorporating the Output of Multiple Models" by the same authors that appears at IJCB 202
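
    To give a feel for the alignment-loss idea in a black-box setting (everything below is a hypothetical stand-in, not the authors' structured-random procedure): the attacker can only query the target model and observe output vectors, so candidate inputs are scored by how well their outputs align with the victim's leaked output vector.

```python
# Hedged sketch of an alignment-style objective for black-box biometric
# inversion; the target model, dimensions and the naive random search are
# placeholders, not the authors' method.
import numpy as np

rng = np.random.default_rng(0)

def target_model(x):
    """Stand-in for the black-box biometric embedding network (query-only)."""
    W = np.random.default_rng(42).standard_normal((16, x.size)) / np.sqrt(x.size)
    return W @ x

def alignment_loss(candidate, observed_output):
    out = target_model(candidate)
    cos = out @ observed_output / (np.linalg.norm(out) * np.linalg.norm(observed_output) + 1e-8)
    return 1.0 - cos                      # zero when the outputs align perfectly

observed = target_model(rng.standard_normal(256))   # victim's leaked output vector
best, best_loss = None, np.inf
for _ in range(500):                                 # naive black-box search
    cand = rng.standard_normal(256)
    loss = alignment_loss(cand, observed)
    if loss < best_loss:
        best, best_loss = cand, loss
```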