Transpose Attack: Stealing Datasets with Bidirectional Training
Deep neural networks are normally executed in the forward direction. However,
in this work, we identify a vulnerability that enables models to be trained in
both directions and on different tasks. Adversaries can exploit this capability
to hide rogue models within seemingly legitimate models. In addition, in this
work we show that neural networks can be taught to systematically memorize and
retrieve specific samples from datasets. Together, these findings expose a
novel method in which adversaries can exfiltrate datasets from protected
learning environments under the guise of legitimate models. We focus on the
data exfiltration attack and show that modern architectures can be used to
secretly exfiltrate tens of thousands of samples with high fidelity, high
enough to compromise data privacy and even train new models. Moreover, to
mitigate this threat, we propose a novel approach for detecting infected models.
Comment: NDSS24 paper
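The core structural idea of bidirectional execution can be sketched in a few lines: the same weight matrices define one function front-to-back and a second, independent function back-to-front via their transposes. This is a minimal illustrative sketch with hypothetical layer sizes, not the paper's actual training procedure or attack.

```python
import numpy as np

rng = np.random.default_rng(0)

# Weights of a small two-layer network (hypothetical dimensions).
W1 = rng.standard_normal((16, 8))   # layer 1 maps 8 -> 16
W2 = rng.standard_normal((4, 16))   # layer 2 maps 16 -> 4

def forward(x):
    """Primary task: run the network front-to-back (8 -> 4)."""
    return np.tanh(W2 @ np.tanh(W1 @ x))

def transposed(y):
    """Covert second task: reuse the SAME weights back-to-front (4 -> 8)
    by transposing each matrix and reversing the layer order."""
    return np.tanh(W1.T @ np.tanh(W2.T @ y))

x = rng.standard_normal(8)
y = rng.standard_normal(4)
print(forward(x).shape, transposed(y).shape)  # (4,) (8,)
```

Because both directions share one parameter set, training losses on both tasks simultaneously would let a single checkpoint carry a hidden backward model inside an ordinary forward one.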
Human Motion Diffusion as a Generative Prior
Recent work has demonstrated the significant potential of denoising diffusion
models for generating human motion, including text-to-motion capabilities.
However, these methods are restricted by the paucity of annotated motion data,
a focus on single-person motions, and a lack of detailed control. In this
paper, we introduce three forms of composition based on diffusion priors:
sequential, parallel, and model composition. Using sequential composition, we
tackle the challenge of long sequence generation. We introduce DoubleTake, an
inference-time method with which we generate long animations consisting of
sequences of prompted intervals and their transitions, using a prior trained
only for short clips. Using parallel composition, we show promising steps
toward two-person generation. Beginning with two fixed priors as well as a few
two-person training examples, we learn a slim communication block, ComMDM, to
coordinate interaction between the two resulting motions. Lastly, using model
composition, we first train individual priors to complete motions that realize
a prescribed motion for a given joint. We then introduce DiffusionBlending, an
interpolation mechanism to effectively blend several such models to enable
flexible and efficient fine-grained joint and trajectory-level control and
editing. We evaluate the composition methods using an off-the-shelf motion
diffusion model, and further compare the results to dedicated models trained
for these specific tasks.
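The sequential-composition idea, generating a long sequence from a prior trained only on short clips, amounts to producing overlapping short segments and blending the overlaps so transitions stay smooth. The sketch below shows only that generic stitching step with a linear cross-fade; the function name and blending scheme are assumptions for illustration, not the DoubleTake method itself.

```python
import numpy as np

def stitch(segments, overlap):
    """Concatenate short motion clips (frames x joints arrays),
    linearly cross-fading each overlap region between consecutive
    clips so the long sequence has no hard cuts."""
    out = segments[0]
    w = np.linspace(0.0, 1.0, overlap)[:, None]  # ramp of blend weights
    for seg in segments[1:]:
        # Blend the tail of what we have with the head of the next clip.
        blended = (1 - w) * out[-overlap:] + w * seg[:overlap]
        out = np.concatenate([out[:-overlap], blended, seg[overlap:]])
    return out

# Three hypothetical 10-frame clips over 3 channels, 4-frame overlaps:
clips = [np.full((10, 3), float(i)) for i in range(3)]
long_seq = stitch(clips, overlap=4)
print(long_seq.shape)  # (22, 3): 10 + 2 * (10 - 4) frames
```

In the paper's setting the overlapping segments would come from a short-clip diffusion prior conditioned on successive prompts; here plain constant arrays stand in to show the bookkeeping.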
Constructive sampling for patch-based embedding
Publication in the conference proceedings of SampTA, Bremen, Germany, 201