
    Exploiting the Symmetry of the Resonator Mode to Enhance PELDOR Sensitivity.

    Pulsed electron paramagnetic resonance (EPR) spectroscopy using microwaves at two frequencies can be employed to measure distances between pairs of paramagnets separated by up to 10 nm. The method, combined with site-directed mutagenesis, has become increasingly popular in structural biology for both its selectivity and its capability of providing information not accessible through more standard methods such as nuclear magnetic resonance and X-ray crystallography. Despite these advantages, EPR distance measurements suffer from poor sensitivity. One contributing factor is technical: since the pump and detection frequencies are typically separated by 65 MHz, they cannot both be located at the center of the pseudo-Lorentzian microwave resonance of a single-mode resonator. To maximize the inversion efficiency, the pump pulse is usually placed at the center of the resonance, while the observer frequency is placed in the wing, with a consequent reduction in sensitivity. Here, we consider an alternative configuration: by spacing the pump and observer frequencies symmetrically with respect to the microwave resonance and by increasing the quality factor, a valuable improvement in the signal-to-noise ratio can be obtained.
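    As a rough illustration of the frequency-placement trade-off described in this abstract (not taken from the paper itself), the Python sketch below compares a pseudo-Lorentzian resonator response at the pump and observer frequencies for the conventional placement (pump at the resonance center, observer 65 MHz away in the wing) and for a symmetric placement; the center frequency and quality factor are assumed values chosen only for illustration.

```python
import numpy as np

def resonator_response(f, f0, Q):
    """Normalized pseudo-Lorentzian magnitude of a single-mode resonator."""
    return 1.0 / np.sqrt(1.0 + (2.0 * Q * (f - f0) / f0) ** 2)

f0 = 9.5e9          # resonator center frequency (X-band, illustrative value)
offset = 65e6       # pump-observer separation quoted in the abstract
Q = 200             # loaded quality factor (assumed value)

# Conventional placement: pump at the center, observer in the wing.
pump_conv = resonator_response(f0, f0, Q)
obs_conv = resonator_response(f0 - offset, f0, Q)

# Symmetric placement: both frequencies offset/2 from the center.
pump_sym = resonator_response(f0 + offset / 2, f0, Q)
obs_sym = resonator_response(f0 - offset / 2, f0, Q)

print(f"conventional: pump={pump_conv:.3f}, observer={obs_conv:.3f}")
print(f"symmetric:    pump={pump_sym:.3f}, observer={obs_sym:.3f}")
```

    With these assumed numbers, the symmetric placement trades some pump efficiency for a markedly stronger observer response, which is the qualitative effect the abstract exploits.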

    Privacy Risks of Securing Machine Learning Models against Adversarial Examples

    The arms race between attacks and defenses for machine learning models has come to the forefront in recent years, in both the security community and the privacy community. However, one big limitation of previous research is that the security domain and the privacy domain have typically been considered separately. It is thus unclear whether the defense methods in one domain will have any unexpected impact on the other domain. In this paper, we take a step towards resolving this limitation by combining the two domains. In particular, we measure the success of membership inference attacks against six state-of-the-art defense methods that mitigate the risk of adversarial examples (i.e., evasion attacks). Membership inference attacks determine whether or not an individual data record has been part of a model's training set. The accuracy of such attacks reflects the information leakage of training algorithms about individual members of the training set. Defense methods against adversarial examples influence the model's decision boundaries such that model predictions remain unchanged in a small neighborhood around each input. However, this objective is optimized on the training data, so individual data records in the training set have a significant influence on robust models, which makes the models more vulnerable to membership inference attacks. To perform the membership inference attacks, we leverage existing inference methods that exploit model predictions. We also propose two new inference methods that exploit structural properties of robust models on adversarially perturbed data. Our experimental evaluation demonstrates that, compared with the natural training (undefended) approach, adversarial defense methods can indeed increase the target model's risk against membership inference attacks. (ACM CCS 2019; code is available at https://github.com/inspire-group/privacy-vs-robustnes)
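    As a hedged illustration of the prediction-based inference methods this abstract refers to (not the authors' own code, which is linked above), the sketch below implements a simple confidence-thresholding membership inference attack; the classifier, threshold value, and member/non-member arrays are placeholders.

```python
import numpy as np

def confidence_attack(model, x, threshold=0.9):
    """Guess 'member' when the model's top-class confidence exceeds a threshold.

    `model` is any classifier exposing predict_proba (a placeholder here);
    overconfidence on training points is what the attack exploits.
    """
    probs = model.predict_proba(x)
    top_confidence = probs.max(axis=1)
    return top_confidence > threshold  # True -> predicted training member

def attack_accuracy(model, members, non_members, threshold=0.9):
    """Balanced attack accuracy on samples with known membership ground truth."""
    tp = confidence_attack(model, members, threshold).mean()
    tn = (~confidence_attack(model, non_members, threshold)).mean()
    return 0.5 * (tp + tn)
```

    An accuracy well above 0.5 indicates that the model leaks membership information, which is the quantity the paper compares between undefended and adversarially robust models.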

    Multitraining support vector machine for image retrieval

    Relevance feedback (RF) schemes based on support vector machines (SVMs) have been widely used in content-based image retrieval (CBIR). However, the performance of SVM-based RF approaches is often poor when the number of labeled feedback samples is small. This is mainly due to 1) the SVM classifier being unstable for small training sets, because its optimal hyperplane is too sensitive to the training examples; and 2) the kernel method being ineffective, because the feature dimension is much greater than the number of training samples. In this paper, we develop a new machine learning technique, multitraining SVM (MTSVM), which combines the merits of the co-training technique and a random sampling method in the feature space. The proposed MTSVM algorithm mitigates both of the above problems. Experiments are carried out on a large image set of some 20,000 images, and the preliminary results demonstrate that the developed method consistently improves the performance over conventional SVM-based RFs in terms of precision and standard deviation, which are used to evaluate the effectiveness and robustness of an RF algorithm, respectively.
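    The following sketch illustrates only the general idea of combining random feature-subspace sampling with multiple SVM "views" for relevance feedback, in the spirit of the MTSVM described above; it is not the authors' algorithm, scikit-learn is assumed, and the data arrays and parameters are placeholders.

```python
import numpy as np
from sklearn.svm import SVC

def random_subspace_svms(X_labeled, y_labeled, n_views=3, subspace_ratio=0.5,
                         random_state=0):
    """Train one SVM per random feature subspace (co-training-style views)."""
    rng = np.random.default_rng(random_state)
    n_features = X_labeled.shape[1]
    k = max(1, int(subspace_ratio * n_features))
    views = []
    for _ in range(n_views):
        idx = rng.choice(n_features, size=k, replace=False)
        clf = SVC(kernel="rbf", probability=True).fit(X_labeled[:, idx], y_labeled)
        views.append((idx, clf))
    return views

def ensemble_relevance(views, X_database):
    """Average per-view relevance probabilities to rank database images."""
    scores = [clf.predict_proba(X_database[:, idx])[:, 1] for idx, clf in views]
    return np.mean(scores, axis=0)
```

    Averaging several subspace views reduces the sensitivity of any single hyperplane to the few labeled feedback samples, which is the instability the abstract identifies.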

    Security Evaluation of Support Vector Machines in Adversarial Environments

    Support Vector Machines (SVMs) are among the most popular classification techniques adopted in security applications like malware detection, intrusion detection, and spam filtering. However, if SVMs are to be incorporated in real-world security systems, they must be able to cope with attack patterns that can either mislead the learning algorithm (poisoning), evade detection (evasion), or gain information about their internal parameters (privacy breaches). The main contributions of this chapter are twofold. First, we introduce a formal general framework for the empirical evaluation of the security of machine-learning systems. Second, according to our framework, we demonstrate the feasibility of evasion, poisoning, and privacy attacks against SVMs in real-world security problems. For each attack technique, we evaluate its impact and discuss whether (and how) it can be countered through an adversary-aware design of SVMs. Our experiments are easily reproducible thanks to open-source code that we have made available, together with all the employed datasets, on a public repository. (47 pages, 9 figures; chapter accepted into the book 'Support Vector Machine Applications'.)
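    As an illustrative sketch of one attack class discussed in the chapter, the code below performs a minimal gradient-style evasion attack against a linear SVM by moving a sample against the weight vector until it crosses the decision boundary; it is not the chapter's exact procedure, and the fitted classifier and input sample are placeholders assumed to come from scikit-learn.

```python
import numpy as np
from sklearn.svm import LinearSVC

def evade_linear_svm(clf, x, step=0.1, max_iter=50):
    """Push a malicious sample `x` along the direction that most decreases the
    linear SVM decision value, until it is classified as benign
    (decision_function < 0) or the iteration budget is exhausted."""
    w = clf.coef_.ravel()
    direction = -w / np.linalg.norm(w)   # steepest descent of w.x + b
    x_adv = x.astype(float).copy()
    for _ in range(max_iter):
        if clf.decision_function(x_adv.reshape(1, -1))[0] < 0:
            break
        x_adv += step * direction
    return x_adv
```

    The number of steps needed before the sample evades detection gives a crude measure of how much an attacker must perturb a malicious input, the kind of impact assessment the chapter performs more systematically.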