
    Development of Field-deployable Nucleic Acid Testing Platforms

    This thesis focuses on the development of field-deployable nucleic acid testing platforms that allow rapid detection and quantification of nucleic acids. Two distinct platforms suitable for nucleic acid testing in resource-limited settings were developed. First, a paper-based diagnostic device was developed, whose principle rests on the unique interfacial interaction of DNA and a DNA-intercalating dye with cellulose on chromatographic paper. Second, a colorimetric reader was developed, whose principle is to measure the absorbance change of a chromogenic substrate triggered by DNA and DNA-intercalating dyes under light illumination. The performance of both devices was tested using synthetic DNA, nucleic acid amplicons, and parasite nucleic acid samples collected from school-age children in rural areas of Honduras.
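
    As an illustration of the readout principle, here is a minimal sketch of the absorbance calculation such a colorimetric reader relies on, assuming the standard Beer-Lambert relation; the intensity readings below are hypothetical values, not data from the thesis:

    ```python
    import math

    def absorbance(i_transmitted: float, i_incident: float) -> float:
        """Beer-Lambert absorbance: A = -log10(I / I0)."""
        return -math.log10(i_transmitted / i_incident)

    # Hypothetical photodiode readings before and after the DNA-triggered
    # color change of the chromogenic substrate under illumination:
    a_blank  = absorbance(980.0, 1000.0)   # blank (no target DNA)
    a_sample = absorbance(620.0, 1000.0)   # substrate after color change
    delta_a  = a_sample - a_blank          # signal used for quantification
    print(f"delta A = {delta_a:.3f}")
    ```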

    Defense against Adversarial Attacks Using High-Level Representation Guided Denoiser

    Neural networks are vulnerable to adversarial examples, which poses a threat to their application in security-sensitive systems. We propose the high-level representation guided denoiser (HGD) as a defense for image classification. Standard denoisers suffer from an error amplification effect, in which small residual adversarial noise is progressively amplified and leads to wrong classifications. HGD overcomes this problem by using a loss function defined as the difference between the target model's outputs activated by the clean image and by the denoised image. Compared with ensemble adversarial training, the state-of-the-art defense on large images, HGD has three advantages. First, with HGD as a defense, the target model is more robust to both white-box and black-box adversarial attacks. Second, HGD can be trained on a small subset of the images and generalizes well to other images and unseen classes. Third, HGD can be transferred to defend models other than the one guiding it. In the NIPS competition on defense against adversarial attacks, our HGD solution won first place and outperformed other models by a large margin.
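
    A minimal sketch of the guided-denoising objective described above, assuming a PyTorch-style setup; `denoiser`, `target_model`, and the L1 distance are illustrative stand-ins rather than the authors' exact architecture:

    ```python
    import torch

    def hgd_loss(denoiser, target_model, x_clean, x_adv):
        """Penalize the distance between the target model's outputs on the
        clean image and on the denoised adversarial image, instead of a
        pixel-space reconstruction error (which amplifies residual noise)."""
        x_denoised = denoiser(x_adv)
        with torch.no_grad():
            out_clean = target_model(x_clean)       # reference activations
        out_denoised = target_model(x_denoised)     # gradients reach denoiser
        return torch.norm(out_denoised - out_clean, p=1)
    ```

    Because the supervision signal lives in the target model's representation space, small pixel-level residuals that would not change the classification are no longer penalized, which is what suppresses the error amplification effect.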

    Free electron emission in vacuum assisted by photonic time crystals

    Cerenkov radiation and the Smith-Purcell effect hold that free-electron emission occurs only in dielectrics, when the velocity of the particles exceeds the speed of light in the medium, or in vacuum in the close vicinity of periodic gratings. We demonstrate that free electrons in a vacuum can also emit highly directional monochromatic waves when they are in close proximity to a medium that is periodically modulated in time, suggesting the existence of a temporal Smith-Purcell effect. The momentum band gaps of time-varying media, such as photonic time crystals (PTCs), create new pathways for the injection of external energy, allowing the frequency, intensity, and spatial distribution of the electromagnetic fields to be controlled. Moreover, the PTC substrate enables the conversion of localized evanescent fields into amplified, highly directional propagating plane waves that are sensitive only to the velocity of the particles and the modulation frequency, which allows us to observe and utilize Cerenkov-like radiation in free space. Our work opens significant opportunities for the use of time-varying structures in various fields, including particle identification, ultraweak-signal detection, and improved radiation source design.
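
    For context, the two classical conditions the abstract contrasts with, in their standard textbook form (the symbols are ours, not the paper's): the Cerenkov threshold in a dielectric of refractive index n_r, and the Smith-Purcell wavelength for a grating of period d, diffraction order n, normalized velocity \beta = v/c, and emission angle \theta:

    ```latex
    % Cerenkov threshold in a dielectric of refractive index n_r:
    v > \frac{c}{n_r}
    % Classical Smith-Purcell relation near a periodic grating:
    \lambda_n = \frac{d}{n}\left(\frac{1}{\beta} - \cos\theta\right)
    ```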

    Boosting Transferability of Targeted Adversarial Examples via Hierarchical Generative Networks

    Transfer-based adversarial attacks can effectively evaluate model robustness in the black-box setting. Although several methods have demonstrated impressive transferability of untargeted adversarial examples, targeted adversarial transferability remains challenging. Existing methods either have low targeted transferability or sacrifice computational efficiency. In this paper, we develop a simple yet practical framework to efficiently craft targeted transfer-based adversarial examples. Specifically, we propose a conditional generative attacking model that can generate adversarial examples targeted at different classes by simply altering the class embedding, while sharing a single backbone. Extensive experiments demonstrate that our method improves the success rates of targeted black-box attacks by a significant margin over existing methods: it reaches an average success rate of 29.6% against six diverse models based on only one substitute white-box model in the standard testing of the NeurIPS 2017 competition, outperforming the state-of-the-art gradient-based attack methods (average success rate < 2%) by a large margin. Moreover, the proposed method is more than an order of magnitude more efficient than gradient-based methods.
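
    A minimal sketch of the class-conditional generator idea described above: one shared backbone, with the target class injected through a learned embedding so that a single model covers all target classes. The layer sizes and the perturbation budget `eps` are hypothetical choices, not the paper's architecture:

    ```python
    import torch
    import torch.nn as nn

    class ConditionalPerturbationGenerator(nn.Module):
        """One shared backbone; the target class enters via an embedding
        broadcast over the image plane, so switching targets only means
        switching the class embedding."""
        def __init__(self, num_classes: int, embed_dim: int = 64):
            super().__init__()
            self.class_embed = nn.Embedding(num_classes, embed_dim)
            self.backbone = nn.Sequential(  # shared across all target classes
                nn.Conv2d(3 + embed_dim, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
            )

        def forward(self, images, target_class, eps=16 / 255):
            b, _, h, w = images.shape
            emb = self.class_embed(target_class)             # (b, embed_dim)
            emb_map = emb[:, :, None, None].expand(b, -1, h, w)
            delta = self.backbone(torch.cat([images, emb_map], dim=1))
            return (images + eps * delta).clamp(0, 1)        # L-inf bounded
    ```

    Crafting an example for a new target class then costs a single forward pass with a different `target_class` index, which is where the claimed efficiency gain over iterative gradient-based attacks comes from.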