Disentangling Geometric Deformation Spaces in Generative Latent Shape Models
A complete representation of 3D objects requires characterizing the space of
deformations in an interpretable manner, from articulations of a single
instance to changes in shape across categories. In this work, we improve on a
prior generative model of geometric disentanglement for 3D shapes, wherein the
space of object geometry is factorized into rigid orientation, non-rigid pose,
and intrinsic shape. The resulting model can be trained from raw 3D shapes,
without correspondences, labels, or even rigid alignment, using a combination
of classical spectral geometry and probabilistic disentanglement of a
structured latent representation space. Our improvements include more
sophisticated handling of rotational invariance and the use of a diffeomorphic
flow network to bridge latent and spectral space. The geometric structuring of
the latent space imparts an interpretable characterization of the deformation
space of an object. Furthermore, it enables tasks like pose transfer and
pose-aware retrieval without requiring supervision. We evaluate our model on
its generative modelling, representation learning, and disentanglement
performance, showing improved rotation invariance and intrinsic-extrinsic
factorization quality over the prior model.
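The factorized latent space described above can be illustrated with a toy sketch. Everything here is hypothetical (the slice layout, dimensions, and helper names are not from the paper): the point is only that once a code is split into rotation, pose, and intrinsic-shape factors, pose transfer reduces to recombining slices from two codes.

```python
import numpy as np

LATENT_DIM = 12
# Hypothetical split of the latent code into the three factors.
ROT, POSE, INTR = slice(0, 3), slice(3, 8), slice(8, 12)

def split(z):
    """Split a latent code into (rotation, pose, intrinsic) factors."""
    return z[ROT], z[POSE], z[INTR]

def pose_transfer(z_source, z_target):
    """Give the target shape the source's non-rigid pose, while keeping
    the target's own rigid orientation and intrinsic shape."""
    r_t, _, s_t = split(z_target)
    _, p_s, _ = split(z_source)
    return np.concatenate([r_t, p_s, s_t])

z_a = np.arange(LATENT_DIM, dtype=float)   # toy codes, not real embeddings
z_b = -np.arange(LATENT_DIM, dtype=float)
z_c = pose_transfer(z_a, z_b)
assert np.allclose(z_c[POSE], z_a[POSE])   # pose comes from A
assert np.allclose(z_c[INTR], z_b[INTR])   # intrinsic shape stays B's
```

Decoding `z_c` with the shape decoder would then yield B's geometry in A's pose; no correspondences or labels are needed at transfer time, since the factorization was learned without them.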
Equivariance with Learned Canonicalization Functions
Symmetry-based neural networks often constrain the architecture in order to
achieve invariance or equivariance to a group of transformations. In this
paper, we propose an alternative that avoids this architectural constraint by
learning to produce canonical representations of the data. These
canonicalization functions can readily be plugged into non-equivariant backbone
architectures. We offer explicit ways to implement them for some groups of
interest. We show that this approach enjoys universality while providing
interpretable insights. Our main hypothesis, supported by our empirical
results, is that learning a small neural network to perform canonicalization is
better than using predefined heuristics. Our experiments show that learning the
canonicalization function is competitive with existing techniques for learning
equivariant functions across many tasks, including image classification,
N-body dynamics prediction, point cloud classification and part segmentation,
while being faster across the board.
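The mechanism can be sketched in a few lines. In this toy 2D version a fixed rule (align the farthest-from-origin point with the +x axis) stands in for the paper's small learned canonicalization network; the rule, the data, and the "backbone" are all hypothetical. Because the canonical frame transforms with the input, any non-equivariant backbone applied after canonicalization becomes rotation-invariant.

```python
import numpy as np

def canonicalize(points):
    """Rotate a 2D point cloud into a canonical frame.

    Stand-in for a learned canonicalization function: a fixed rule
    (put the farthest-from-origin point on the +x axis) plays the
    role of the predicted group element.
    """
    far = points[np.argmax(np.linalg.norm(points, axis=1))]
    theta = np.arctan2(far[1], far[0])
    c, s = np.cos(-theta), np.sin(-theta)
    R_inv = np.array([[c, -s], [s, c]])   # rotation by -theta
    return points @ R_inv.T

def backbone(points):
    """Deliberately non-equivariant function; invariance is supplied
    entirely by the canonicalization step."""
    return float(np.sum(points[:, 0] ** 3))

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 2))
phi = 1.2
R = np.array([[np.cos(phi), -np.sin(phi)], [np.sin(phi), np.cos(phi)]])

out_a = backbone(canonicalize(x))
out_b = backbone(canonicalize(x @ R.T))   # same cloud, rotated
assert np.isclose(out_a, out_b)           # output is rotation-invariant
```

The appeal over constrained architectures is visible even here: `backbone` is arbitrary, and only the small canonicalization function has to respect the group.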
Generative Methods, Meta-learning, and Meta-heuristics for Robust Cyber Defense
Cyberspace is the digital communications network that supports the internet of battlefield things (IoBT), the model by which defense-centric sensors, computers, actuators and humans are digitally connected. A secure IoBT infrastructure facilitates real-time implementation of the observe, orient, decide, act (OODA) loop across distributed subsystems. Successful hacking efforts by cyber criminals and strategic adversaries suggest that cyber systems such as the IoBT are not secure. Three lines of effort demonstrate a path towards a more robust IoBT. First, a baseline data set of enterprise cyber network traffic was collected and modelled with generative methods, allowing the generation of realistic, synthetic cyber data. Next, adversarial examples of cyber packets were algorithmically crafted to fool network intrusion detection systems while maintaining packet functionality. Finally, a framework is presented that uses meta-learning to combine the predictive power of various weak models. This resulted in a meta-model that outperforms all baseline classifiers with respect to overall packet classification accuracy and adversarial example detection rate. The National Defense Strategy underscores cybersecurity as an imperative to defend the homeland and maintain a military advantage in the information age. This research provides both academic perspective and applied techniques to further the cybersecurity posture of the Department of Defense into the information age.
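The meta-learning step can be illustrated with a generic stacking sketch. The data, weak learners, and weighting rule below are all hypothetical toys, not the enterprise dataset or models from the abstract: each weak classifier votes, and a meta-model weights those votes by held-out accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for network-traffic features; label 1 = "malicious".
X = rng.normal(size=(400, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
fit, test = slice(0, 300), slice(300, 400)

def weak_predict(X):
    """Three weak one-feature threshold classifiers: one vote per column."""
    return (X > 0).astype(int)

# Meta-model: weight each weak learner by its accuracy on held-out data,
# then take a weighted majority vote (a minimal stacking scheme).
weights = (weak_predict(X[fit]) == y[fit, None]).mean(axis=0)

def meta_predict(X):
    return (weak_predict(X) @ weights > weights.sum() / 2).astype(int)

acc_weak = (weak_predict(X[test]) == y[test, None]).mean(axis=0)
acc_meta = (meta_predict(X[test]) == y[test]).mean()
print(f"weak accuracies: {acc_weak.round(2)}  meta accuracy: {acc_meta:.2f}")
```

In practice the second level would itself be a trained model (e.g. a logistic regression over the weak models' outputs) rather than a fixed weighted vote, but the structure — weak predictions become the meta-model's features — is the same.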