Ultra-broad band perfect absorption realized by phonon-photon resonance in periodic polar dielectric material based pyramid structure
In this research, a mid-infrared, wide-angle, ultra-broadband perfect absorber
composed of a pyramid grating structure is comprehensively studied. The
structure operates in the reststrahlen band of SiC and, with the excitation of
surface phonon resonance (SPhR), perfect absorption is observed in the region
between 10.25 and 10.85. We explain the mechanism of this structure with the
help of an LC circuit model, owing to the independence of the magnetic
polaritons. Moreover, by studying the resonance behavior at different
wavelengths, we bridge the continuous perfect-absorption band and the discrete
peak at 11.05 (merging two close absorption bands) through modification of the
geometry, which substantially broadens the absorption band. Both 1-D and 2-D
periodic structures are considered, and the responses at different incidence
and polarization angles are studied; an omnidirectional and
polarization-insensitive structure can be realized, which may be a candidate
for several sensor applications in meteorology. The simulations were conducted
with the Rigorous Coupled-Wave Analysis (RCWA) method.
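The reststrahlen band referred to above is the spectral window between a polar crystal's transverse (TO) and longitudinal (LO) optical phonon frequencies, where the real part of the permittivity is negative and surface phonon resonances become possible. A minimal sketch using the standard single-oscillator phonon model; the SiC parameter values below are commonly quoted illustrative figures, not taken from the paper:

```python
# Single-oscillator phonon model for the permittivity of a polar crystal
# such as SiC; illustrative parameters in wavenumbers (cm^-1).
EPS_INF = 6.7      # high-frequency permittivity
OMEGA_TO = 793.0   # transverse optical phonon frequency
OMEGA_LO = 969.0   # longitudinal optical phonon frequency
GAMMA = 4.76       # phonon damping

def permittivity(omega):
    """Complex permittivity eps(omega) of the phonon oscillator model."""
    num = OMEGA_LO**2 - omega**2 - 1j * GAMMA * omega
    den = OMEGA_TO**2 - omega**2 - 1j * GAMMA * omega
    return EPS_INF * num / den

# Inside the reststrahlen band (TO < omega < LO) Re(eps) < 0, so the
# crystal is highly reflective and can host surface phonon resonances.
print(permittivity(900.0).real)   # negative: inside the band
print(permittivity(400.0).real)   # positive: well below the band
```

Scanning such a model across the band is one way to locate where a grating structure can couple light into surface phonon modes.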
Communication-Efficient Decentralized Federated Learning via One-Bit Compressive Sensing
Decentralized federated learning (DFL) has gained popularity due to its
practicality across various applications. Compared to the centralized version,
training a shared model among a large number of nodes in DFL is more
challenging, as there is no central server to coordinate the training process.
Especially when distributed nodes suffer from limitations in communication or
computational resources, DFL will experience extremely inefficient and unstable
training. Motivated by these challenges, in this paper, we develop a novel
algorithm based on the framework of the inexact alternating direction method
(iADM). On one hand, our goal is to train a shared model with a sparsity
constraint. This constraint enables us to leverage one-bit compressive sensing
(1BCS), allowing transmission of one-bit information among neighbour nodes. On
the other hand, communication between neighbour nodes occurs only at certain
steps, reducing the number of communication rounds. Therefore, the algorithm
exhibits notable communication efficiency. Additionally, as each node selects
only a subset of neighbours to participate in the training, the algorithm is
robust against stragglers. Moreover, complex terms are computed only once over
several consecutive steps, and subproblems are solved inexactly using
closed-form solutions, resulting in high computational efficiency. Finally,
numerical experiments showcase the algorithm's effectiveness in both
communication and computation.
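The one-bit exchange described in the abstract can be illustrated in a few lines: a node projects its local update onto an s-sparse set, then transmits only the sign of each coordinate to its neighbours, i.e., one bit per entry. This is a toy sketch of the idea with hypothetical function names, not the paper's algorithm:

```python
def hard_threshold(v, s):
    # Sparsity projection: keep the s largest-magnitude entries, zero the rest.
    keep = set(sorted(range(len(v)), key=lambda i: abs(v[i]), reverse=True)[:s])
    return [v[i] if i in keep else 0.0 for i in range(len(v))]

def one_bit_encode(v):
    # A neighbour receives only the sign pattern: one bit per coordinate.
    return [1 if x >= 0 else -1 for x in v]

def one_bit_decode(bits, scale):
    # Magnitudes are lost under one-bit quantization, so the receiver
    # rescales the sign pattern by a shared step size.
    return [scale * b for b in bits]

update = hard_threshold([0.9, -0.1, 0.02, -1.3], s=2)  # sparse local update
bits = one_bit_encode(update)                          # 1 bit per coordinate
```

The sparsity constraint is what makes one-bit recovery feasible on the receiving side, which is the link to one-bit compressive sensing drawn in the abstract.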
Real Time Scanning-Modeling System for Architecture Design and Construction
The disconnection between architectural form and materiality has become an important issue in recent years. Architectural form is mainly decided by the designer, while material data are often treated as an afterthought that does not factor directly into decision-making. This study proposes a new real-time scanning-modeling system for computational design and autonomous robotic construction. Cameras scan the raw materials, and the system collects the related data and builds 3D models in real time. These data are then used to compute rational outcomes and help a robot make decisions about its construction paths and methods. The result of a pavilion application shows that data on raw materials, architectural design, and robotic construction can be integrated into a single digital chain. The method and benefits of the material-oriented design approach are discussed, and future research on using different source materials is laid out.
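The scan-model-plan chain described above can be reduced to a toy illustration: each scanned raw piece is summarized by one measurement (here, a length), and a planner assigns the closest-fitting piece to each target segment of the design. Everything here is a hypothetical sketch, not the paper's system:

```python
def plan_placements(scanned_lengths, target_segments):
    # Toy scan -> model -> plan chain: greedily assign the scanned piece
    # whose length best fits each target segment, consuming pieces as we go.
    pieces = list(scanned_lengths)
    plan = []
    for seg in target_segments:
        best = min(range(len(pieces)), key=lambda i: abs(pieces[i] - seg))
        plan.append((seg, pieces.pop(best)))
    return plan

# Three scanned pieces, two segments to build: the planner picks 2.9 for
# the 3.0 segment and 1.0 for the 1.1 segment.
print(plan_placements([1.0, 2.1, 2.9], [3.0, 1.1]))
```

The point of the digital chain is that this selection happens while scanning, so material irregularities feed directly into the construction decisions rather than being corrected afterwards.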
Stable Unlearnable Example: Enhancing the Robustness of Unlearnable Examples via Stable Error-Minimizing Noise
The open-sourcing of large amounts of image data promotes the development of
deep learning techniques. Along with this comes the privacy risk that these
open-source image datasets will be exploited by unauthorized third parties to
train deep learning models for commercial or illegal purposes. To avoid the
abuse of public data, a poisoning-based technique, the unlearnable example, has
been proposed to significantly degrade the generalization performance of models by
adding a kind of imperceptible noise to the data. To further enhance its
robustness against adversarial training, existing works leverage iterative
adversarial training on both the defensive noise and the surrogate model.
However, it still remains unknown whether the robustness of unlearnable
examples primarily comes from the effect of enhancement in the surrogate model
or the defensive noise. Observing that simply removing the adversarial
perturbation from the training process of the defensive noise can improve the
performance of robust unlearnable examples, we identify that the surrogate
model's robustness alone contributes to the performance. Furthermore, we find
that a negative correlation exists between the robustness of the defensive
noise and the protection performance, indicating an instability issue in the
defensive noise. Motivated by this,
to further boost the robust unlearnable example, we introduce stable
error-minimizing noise (SEM), which trains the defensive noise against random
perturbation instead of the time-consuming adversarial perturbation to improve
the stability of defensive noise. Through extensive experiments, we demonstrate
that SEM achieves a new state-of-the-art performance on CIFAR-10, CIFAR-100,
and ImageNet Subset in terms of both effectiveness and efficiency. The code is
available at https://github.com/liuyixin-louis/Stable-Unlearnable-Example.
Comment: Accepted to AAAI 202
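The key change the abstract describes, training the defensive noise against random rather than adversarial perturbations, can be shown with a one-dimensional toy. The quadratic "surrogate error" and all names below are illustrative assumptions, not the paper's procedure:

```python
import random

def train_sem_noise(x, grad_error, radius=0.1, steps=500, lr=0.05):
    # Stable error-minimizing noise (toy): optimize the defensive noise
    # delta against *random* perturbations u, instead of the costly
    # adversarial perturbations used by prior robust unlearnable examples.
    delta = 0.0
    for _ in range(steps):
        u = random.uniform(-radius, radius)   # random, not adversarial
        delta -= lr * grad_error(x + delta + u)
    return delta

random.seed(0)
target = 1.5                                  # toy low-error point of the surrogate
grad = lambda z: 2.0 * (z - target)           # gradient of (z - target)**2
delta = train_sem_noise(0.0, grad)
print(round(delta, 2))                        # converges near 1.5
```

Because the random perturbations average out, the noise converges stably to a low-error point, which is the stability property the abstract contrasts with adversarially trained noise.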
Audit and Improve Robustness of Private Neural Networks on Encrypted Data
Performing neural network inference on encrypted data without decryption is
one popular method to enable privacy-preserving neural networks (PNet) as a
service. Compared with regular neural networks deployed for
machine-learning-as-a-service, PNet requires additional encoding, e.g.,
quantized-precision numbers, and polynomial activation. Encrypted input also
introduces novel challenges such as adversarial robustness and security. To the
best of our knowledge, we are the first to study questions including: (i) Is
PNet more robust against adversarial inputs than regular neural networks? (ii)
How can a robust PNet be designed given encrypted input without decryption? We
propose PNet-Attack to generate black-box adversarial examples that can
successfully attack PNet in both targeted and untargeted manners. The
attack results show that PNet robustness against adversarial inputs needs to be
improved. This is not a trivial task because the PNet model owner does not have
access to the plaintext of the input values, which prevents the application of
existing detection and defense methods such as input tuning, model
normalization, and adversarial training. To tackle this challenge, we propose a
new fast and accurate noise insertion method, called RPNet, to design Robust
and Private Neural Networks. Our comprehensive experiments show that
PNet-Attack requires fewer queries than prior works. We theoretically analyze
our RPNet methods and demonstrate that RPNet can decrease the attack success
rate.
Comment: 10 pages, 10 figures
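Two ingredients the abstract mentions can be sketched concretely: a degree-2 polynomial activation (a common stand-in for ReLU under homomorphic encryption, where comparisons are unavailable) and output-noise insertion in the spirit of RPNet. The coefficients and function names are illustrative assumptions, not the paper's exact design:

```python
import random

def poly_act(x):
    # Degree-2 polynomial activation: smooth, encryption-friendly
    # replacement for ReLU (no comparison needed on ciphertexts).
    return 0.25 * x * x + 0.5 * x

def noisy_logits(logits, sigma=0.05):
    # RPNet-style noise insertion (sketch): perturb each logit slightly so
    # repeated black-box queries receive unstable score feedback, while
    # the top class is preserved with high probability.
    return [z + random.gauss(0.0, sigma) for z in logits]

random.seed(0)
defended = noisy_logits([1.0, 0.0])   # prediction unchanged, scores jittered
```

The trade-off is the usual one for randomized defenses: larger sigma degrades query-based attacks more but risks flipping predictions whose logit margin is small.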
Are Diffusion Models Vulnerable to Membership Inference Attacks?
Diffusion-based generative models have shown great potential for image
synthesis, but there is a lack of research on the security and privacy risks
they may pose. In this paper, we investigate the vulnerability of diffusion
models to Membership Inference Attacks (MIAs), a common privacy concern. Our
results indicate that existing MIAs designed for GANs or VAEs are largely
ineffective on diffusion models, either due to inapplicable scenarios (e.g.,
requiring the discriminator of GANs) or inappropriate assumptions (e.g., closer
distances between synthetic samples and member samples). To address this gap,
we propose Step-wise Error Comparing Membership Inference (SecMI), a
query-based MIA that infers memberships by assessing the matching of forward
process posterior estimation at each timestep. SecMI follows the common
overfitting assumption in MIA, where member samples normally have smaller
estimation errors than hold-out samples. We consider both standard diffusion
models, e.g., DDPM, and text-to-image diffusion models, e.g., Latent Diffusion
Models and Stable Diffusion. Experimental results demonstrate that our methods
precisely infer membership with high confidence in both scenarios across
multiple datasets. Code is available at https://github.com/jinhaoduan/SecMI.
Comment: To appear in ICML 202
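The step-wise error comparison can be illustrated with a scalar toy: at each timestep with cumulative noise level alpha-bar, form the forward sample x_t from x_0, ask the denoiser for its noise estimate, and accumulate the squared error; members of the training set should score lower. The memorized denoiser and all names below are an illustrative sketch, not the paper's implementation:

```python
import random

def t_error(denoiser, x0, t, alpha_bar, noise):
    # Form the forward-process sample x_t, then measure how well the
    # model's noise estimate matches the noise actually injected.
    xt = (alpha_bar ** 0.5) * x0 + ((1 - alpha_bar) ** 0.5) * noise
    return (denoiser(xt, t) - noise) ** 2

def secmi_score(denoiser, x0, schedule, seed=0):
    # SecMI-style score (toy): sum estimation errors over timesteps; under
    # the overfitting assumption, member samples get smaller scores.
    rng = random.Random(seed)
    return sum(t_error(denoiser, x0, t, ab, rng.gauss(0.0, 1.0))
               for t, ab in enumerate(schedule))

# Toy denoiser that has perfectly "memorized" the member sample x0 = 1.0:
# it recovers the injected noise exactly for that sample, but not others.
schedule = [0.9, 0.5]
def memorized(xt, t):
    ab = schedule[t]
    return (xt - ab ** 0.5 * 1.0) / ((1 - ab) ** 0.5)
```

Thresholding the score then separates members (near-zero error) from hold-out samples (strictly positive error), which is the membership signal the attack exploits.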