    Compact broadband circularly-polarised antenna with a backed cavity for UHF RFID applications

    Decompiling x86 Deep Neural Network Executables

    Due to their widespread use on heterogeneous hardware devices, deep learning (DL) models are compiled into executables by DL compilers to fully leverage low-level hardware primitives. This approach allows DL computations to be performed at low cost across a variety of computing platforms, including CPUs, GPUs, and various hardware accelerators. We present BTD (Bin to DNN), a decompiler for deep neural network (DNN) executables. BTD takes DNN executables and outputs full model specifications, including types of DNN operators, network topology, dimensions, and parameters that are (nearly) identical to those of the input models. BTD delivers a practical framework for processing DNN executables compiled by different DL compilers with full optimizations enabled on x86 platforms. It employs learning-based techniques to infer DNN operators, dynamic analysis to reveal network architectures, and symbolic execution to facilitate inferring the dimensions and parameters of DNN operators. Our evaluation reveals that BTD enables accurate recovery of the full specifications of complex DNNs with millions of parameters (e.g., ResNet). The recovered DNN specifications can be re-compiled into a new DNN executable that exhibits behavior identical to that of the input executable. We show that BTD can boost two representative attacks, adversarial example generation and knowledge stealing, against DNN executables. We also demonstrate cross-architecture legacy code reuse using BTD, and envision BTD being used for other critical downstream tasks like DNN security hardening and patching. Comment: Extended version of a paper to appear in the Proceedings of the 32nd USENIX Security Symposium (USENIX Security '23), 2023; 25 pages.
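
    The decompilation pipeline above ends with a behavioral-equivalence check: the recovered specification is re-compiled and its outputs are compared against the original executable's. The following is a minimal, hypothetical sketch of that last step, not BTD's actual code; the `spec` layout, operator names, and toy two-layer network are illustrative assumptions.

```python
# Illustrative sketch (not BTD): rebuild a recovered DNN specification into
# a runnable model and compare its outputs against the original executable.
# Assumes operators, dimensions, and weights have already been recovered.
import numpy as np

def rebuild_forward(spec, x):
    """spec: ordered list of (operator_name, params) recovered from the binary."""
    for op, params in spec:
        if op == "dense":                 # fully connected: y = xW + b
            x = x @ params["W"] + params["b"]
        elif op == "relu":
            x = np.maximum(x, 0.0)
        else:
            raise ValueError(f"unrecovered operator: {op}")
    return x

# Hypothetical recovered specification for a small 2-layer MLP.
rng = np.random.default_rng(0)
spec = [
    ("dense", {"W": rng.normal(size=(4, 8)).astype(np.float32),
               "b": np.zeros(8, dtype=np.float32)}),
    ("relu", {}),
    ("dense", {"W": rng.normal(size=(8, 2)).astype(np.float32),
               "b": np.zeros(2, dtype=np.float32)}),
]
x = rng.normal(size=(1, 4)).astype(np.float32)
print(rebuild_forward(spec, x))  # would be diffed against the executable's output
```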

    Carnosol Modulates Th17 Cell Differentiation and Microglial Switch in Experimental Autoimmune Encephalomyelitis

    Medicinal plants have served for thousands of years as a rich pool for developing novel small-molecule therapeutics. Carnosol is a bioactive diterpene derived from Rosmarinus officinalis (rosemary) and Salvia officinalis (sage), herbs extensively applied in traditional medicine for the treatment of multiple autoimmune diseases (1). In this study, we investigated the therapeutic effects and molecular mechanism of carnosol in experimental autoimmune encephalomyelitis (EAE), an animal model of multiple sclerosis (MS). Carnosol treatment significantly alleviated clinical disease development in the myelin oligodendrocyte glycoprotein (MOG35–55) peptide-induced EAE model, markedly decreased inflammatory cell infiltration into the central nervous system, and reduced demyelination. Further, carnosol inhibited Th17 cell differentiation and signal transducer and activator of transcription 3 (STAT3) phosphorylation, and blocked nuclear translocation of the transcription factor NF-κB. In the passive-EAE model, carnosol treatment also significantly prevented Th17 cell pathogenicity. Moreover, carnosol exerted its therapeutic effects in the chronic stage of EAE and, remarkably, switched the phenotypes of infiltrating macrophages/microglia. Taken together, our results show that carnosol has great potential for development as a therapeutic agent for autoimmune diseases such as MS.

    Unveiling Single-Bit-Flip Attacks on DNN Executables

    Recent research has shown that bit-flip attacks (BFAs) can manipulate deep neural networks (DNNs) via DRAM Rowhammer exploitation. Existing attacks are primarily launched over high-level DNN frameworks like PyTorch and flip bits in model weight files. Nevertheless, DNNs are frequently compiled into low-level executables by deep learning (DL) compilers to fully leverage low-level hardware primitives. The compiled code runs at high speed but manifests execution paradigms dramatically different from those of high-level DNN frameworks. In this paper, we launch the first systematic study of the BFA attack surface specific to DNN executables compiled by DL compilers. We design an automated search tool to locate vulnerable bits in DNN executables and identify practical attack vectors that exploit the model structure in DNN executables with BFAs (whereas prior works make arguably strong assumptions to attack model weights). DNN executables appear more "opaque" than models in high-level DNN frameworks. Nevertheless, we find that DNN executables contain extensive, severe (e.g., single-bit-flip), and transferable attack surfaces that are not present in high-level DNN models and can be exploited to deplete full model intelligence and control output labels. Our findings call for incorporating security mechanisms into future DNN compilation toolchains.
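
    To make the severity of a single flipped bit concrete: in the IEEE-754 float32 encoding, one exponent-bit flip changes a weight's magnitude by orders of magnitude. The snippet below is a toy illustration of that arithmetic effect, not the paper's search tool; the chosen bit positions are merely examples.

```python
# Toy illustration (not the paper's tool): flip one bit in the IEEE-754
# encoding of a float32 weight and observe the change in magnitude.
import numpy as np

def flip_bit(value: float, bit: int) -> np.float32:
    """Flip one bit (0..31) of a float32's binary representation."""
    raw = np.array(value, dtype=np.float32).view(np.uint32)
    raw ^= np.uint32(1 << bit)
    return raw.view(np.float32)[()]

w = np.float32(0.5)
print("clean:          ", w)                # 0.5
print("bit 23 flipped: ", flip_bit(w, 23))  # 1.0      (lowest exponent bit doubles it)
print("bit 30 flipped: ", flip_bit(w, 30))  # ~1.7e38  (highest exponent bit)
```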

    MAMO: Masked Multimodal Modeling for Fine-Grained Vision-Language Representation Learning

    Multimodal representation learning has shown promising improvements on various vision-language tasks. Most existing methods excel at building global-level alignment between vision and language but lack effective fine-grained image-text interaction. In this paper, we propose a jointly masked multimodal modeling method to learn fine-grained multimodal representations. Our method performs joint masking on image-text input and integrates both implicit and explicit targets for the masked signals to recover. The implicit target provides a unified and debiased objective for vision and language, where the model predicts latent multimodal representations of the unmasked input. The explicit target further enriches the multimodal representations by recovering high-level and semantically meaningful information: momentum visual features of image patches and concepts of word tokens. Through such a masked modeling process, our model not only learns fine-grained multimodal interaction, but also avoids the semantic gap between high-level representations and low- or mid-level prediction targets (e.g., image pixels), thus producing semantically rich multimodal representations that perform well in both zero-shot and fine-tuned settings. Our pre-trained model (named MAMO) achieves state-of-the-art performance on various downstream vision-language tasks, including image-text retrieval, visual question answering, visual reasoning, and weakly-supervised visual grounding.
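
    The "implicit target" described above, where a student sees the masked input and regresses the latent representation a momentum (EMA) teacher produces from the unmasked input, can be sketched compactly. This is an illustrative toy in PyTorch, not MAMO's released code; the encoder shape, the 40% masking ratio, and the EMA momentum are assumptions made for the example.

```python
# Illustrative sketch (not MAMO): masked prediction of a momentum teacher's
# latent representations, the "implicit target" of joint masked modeling.
import copy
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(32, 64), nn.GELU(), nn.Linear(64, 64))
teacher = copy.deepcopy(encoder)            # momentum copy; never backprop into it
for p in teacher.parameters():
    p.requires_grad_(False)

def ema_update(student, teacher, m=0.995):
    """Exponential moving average of student weights into the teacher."""
    with torch.no_grad():
        for ps, pt in zip(student.parameters(), teacher.parameters()):
            pt.mul_(m).add_(ps, alpha=1 - m)

tokens = torch.randn(8, 16, 32)             # (batch, tokens, dim): image+text tokens
mask = torch.rand(8, 16, 1) < 0.4           # randomly mask ~40% of token slots
student_in = tokens.masked_fill(mask, 0.0)  # student only sees the masked view

pred = encoder(student_in)                  # student predicts latents
with torch.no_grad():
    target = teacher(tokens)                # teacher encodes the unmasked input
loss = ((pred - target) ** 2 * mask).sum() / mask.sum()  # loss on masked slots only
loss.backward()                             # an optimizer step would follow here
ema_update(encoder, teacher)
```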

    A Wideband Single-Fed, Circularly-Polarized Patch Antenna with Enhanced Axial Ratio Bandwidth for UHF RFID Reader Applications
