Disaster-Resilient Control Plane Design and Mapping in Software-Defined Networks
Communication networks, such as core optical networks, heavily depend on
their physical infrastructure, and hence they are vulnerable to man-made
disasters, such as Electromagnetic Pulse (EMP) or Weapons of Mass Destruction
(WMD) attacks, as well as to natural disasters. Large-scale disasters may cause
huge data loss and connectivity disruption in these networks. As our dependence
on network services increases, the need for novel survivability methods to
mitigate the effects of disasters on communication networks becomes a major
concern. Software-Defined Networking (SDN), by centralizing control logic and
separating it from physical equipment, facilitates network programmability and
opens up new ways to design disaster-resilient networks. On the other hand, to
fully exploit the potential of SDN, along with data-plane survivability, we
also need to design the control plane to be resilient enough to survive network
failures caused by disasters. Several distributed SDN controller architectures
have been proposed to mitigate the risks of overload and failure, but they are
optimized for limited faults without addressing the extent of large-scale
disaster failures. For disaster resiliency of the control plane, we propose to
design it as a virtual network, so that the design problem can be solved using
Virtual Network Mapping techniques. We select an appropriate mapping of the
controllers over the physical network such that the connectivity among the
controllers (controller-to-controller) and between the switches and the
controllers (switch-to-controller) is not compromised by physical infrastructure failures
caused by disasters. We formally model this disaster-aware control-plane design
and mapping problem, and demonstrate a significant reduction in the disruption
of controller-to-controller and switch-to-controller communication channels
using our approach.
Comment: 6 pages
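The mapping problem sketched in this abstract can be pictured as placing controllers on physical nodes so that no single disaster zone severs switch-to-controller or controller-to-controller connectivity. The snippet below is only an illustrative greedy/exhaustive heuristic over a toy graph, not the paper's formal Virtual Network Mapping model; the graph, candidate sites, disaster zones, and scoring are hypothetical.

```python
# Illustrative disaster-aware controller placement (NOT the paper's formal model).
# A "disaster zone" is a set of physical nodes assumed to fail together.
import itertools
import networkx as nx

def survives(g, controllers, zone):
    """Connectivity check after all nodes in a disaster zone fail."""
    h = g.copy()
    h.remove_nodes_from(zone)
    alive = [c for c in controllers if c in h]
    if not alive:
        return False
    # Every surviving switch must still reach at least one controller.
    for s in h.nodes:
        if not any(nx.has_path(h, s, c) for c in alive):
            return False
    # Surviving controllers must still reach each other.
    return all(nx.has_path(h, a, b) for a, b in itertools.combinations(alive, 2))

def place_controllers(g, candidates, zones, k):
    """Pick k controller sites maximizing the number of survived disaster zones."""
    best, best_score = None, -1
    for placement in itertools.combinations(candidates, k):
        score = sum(survives(g, placement, z) for z in zones)
        if score > best_score:
            best, best_score = placement, score
    return best, best_score

# Toy example: a 6-node ring with two hypothetical disaster zones.
g = nx.cycle_graph(6)
zones = [{0, 1}, {3}]
print(place_controllers(g, candidates=list(g.nodes), zones=zones, k=2))
```

In the paper's setting this selection is formulated as an optimization over the physical topology rather than the brute-force search shown here; the sketch only conveys the survivability criterion.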
DeepMarks: A Digital Fingerprinting Framework for Deep Neural Networks
This paper proposes DeepMarks, a novel end-to-end framework for systematic
fingerprinting in the context of Deep Learning (DL). Remarkable progress has
been made in the area of deep learning. Sharing the trained DL models has
become a trend that is ubiquitous in various fields ranging from biomedical
diagnosis to stock prediction. As the availability and popularity of
pre-trained models are increasing, it is critical to protect the Intellectual
Property (IP) of the model owner. DeepMarks introduces the first fingerprinting
methodology that enables the model owner to embed unique fingerprints within
the parameters (weights) of her model and later identify undesired usages of
her distributed models. The proposed framework embeds the fingerprints in the
Probability Density Function (pdf) of trainable weights by leveraging the extra
capacity available in contemporary DL models. DeepMarks is robust against
fingerprint collusion as well as network transformation attacks, including
model compression and model fine-tuning. Extensive proof-of-concept evaluations
on the MNIST and CIFAR10 datasets, as well as a wide variety of deep neural
network architectures such as Wide Residual Networks (WRNs) and Convolutional
Neural Networks (CNNs), corroborate the effectiveness and robustness of the
DeepMarks framework.
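The abstract states that fingerprints are embedded in the probability density function of trainable weights by exploiting the model's extra capacity. As a rough illustration of how a weight-space fingerprint regularizer might look, here is a minimal PyTorch-style sketch; the random projection, loss form, and strength coefficient are assumptions for illustration and not the paper's exact construction.

```python
# Minimal sketch of a weight-space fingerprint regularizer (illustrative only;
# the projection, loss form, and hyperparameters are assumptions, not the
# exact DeepMarks formulation).
import torch
import torch.nn.functional as F

def fingerprint_loss(weights, secret_proj, fingerprint, strength=0.01):
    """Penalize mismatch between a secret projection of the chosen weights
    and the owner's binary fingerprint, nudging the weights to carry it."""
    logits = secret_proj @ weights.flatten()
    return strength * F.binary_cross_entropy_with_logits(logits, fingerprint)

def extract_bits(weights, secret_proj):
    """Recover the embedded bits for later ownership / misuse checks."""
    return (secret_proj @ weights.flatten() > 0).float()

if __name__ == "__main__":
    torch.manual_seed(0)
    w = torch.randn(64, requires_grad=True)   # stand-in for a weight tensor
    proj = torch.randn(16, 64)                # secret projection matrix
    fp = torch.randint(0, 2, (16,)).float()   # hypothetical 16-bit fingerprint
    # In training, this term would be added to the task loss:
    # total_loss = task_loss + fingerprint_loss(w, proj, fp)
    print(fingerprint_loss(w, proj, fp).item())
```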
SuperNeurons: Dynamic GPU Memory Management for Training Deep Neural Networks
Going deeper and wider in neural architectures improves accuracy, while
the limited GPU DRAM places an undesired restriction on the network design
domain. Deep Learning (DL) practitioners either need to change to less desirable
network architectures or nontrivially dissect a network across multiple GPUs.
These workarounds distract DL practitioners from concentrating on their original
machine learning tasks. We present SuperNeurons: a dynamic GPU memory scheduling
runtime that enables network training far beyond the GPU DRAM capacity.
SuperNeurons features three memory optimizations, \textit{Liveness Analysis},
\textit{Unified Tensor Pool}, and \textit{Cost-Aware Recomputation}; together
they effectively reduce the network-wide peak memory usage down to the
maximal memory usage among layers. We also address the performance issues in
those memory-saving techniques. Given the limited GPU DRAM, SuperNeurons not
only provisions the necessary memory for training, but also dynamically
allocates the memory for convolution workspaces to achieve high
performance. Evaluations against Caffe, Torch, MXNet and TensorFlow have
demonstrated that SuperNeurons trains at least 3.2432x deeper networks than
current ones with leading performance. In particular, SuperNeurons can train
ResNet2500, which has $10^4$ basic network layers, on a 12GB K40c.
Comment: PPoPP 2018: 23rd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming
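The three optimizations described above essentially trade memory for recomputation and allocation reuse: release feature maps that are no longer live, pool tensor allocations, and recompute cheap layers during the backward pass instead of keeping their outputs resident. The toy policy below illustrates only the cost-aware recomputation idea on a hypothetical layer list; the cost model, budget, and greedy ordering are assumptions, not the SuperNeurons runtime.

```python
# Toy cost-aware recomputation policy: drop the outputs that are cheapest to
# recompute (per MB saved) until the resident footprint fits the memory budget.
# Layer sizes, costs, and the budget are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    output_mb: int        # memory held by the layer's output feature map
    recompute_ms: float   # cost to recompute that output in the backward pass

def plan_recomputation(layers, budget_mb):
    """Return the layers whose outputs are dropped, and the resulting footprint."""
    resident = sum(l.output_mb for l in layers)
    dropped = []
    # Prefer dropping outputs that are cheap to recompute per MB saved.
    for l in sorted(layers, key=lambda l: l.recompute_ms / l.output_mb):
        if resident <= budget_mb:
            break
        dropped.append(l.name)
        resident -= l.output_mb
    return dropped, resident

layers = [
    Layer("conv1", 400, 5.0), Layer("relu1", 400, 0.5),
    Layer("conv2", 300, 6.0), Layer("pool1", 100, 0.2),
]
print(plan_recomputation(layers, budget_mb=700))
```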