RobCaps: Evaluating the Robustness of Capsule Networks against Affine Transformations and Adversarial Attacks
Capsule Networks (CapsNets) are able to hierarchically preserve the pose
relationships between multiple objects for image classification tasks. Besides
achieving high accuracy, another relevant factor in deploying CapsNets in
safety-critical applications is their robustness against input transformations
and malicious adversarial attacks.
In this paper, we systematically analyze and evaluate different factors
affecting the robustness of CapsNets, compared to traditional Convolutional
Neural Networks (CNNs). Towards a comprehensive comparison, we test two CapsNet
models and two CNN models on the MNIST, GTSRB, and CIFAR10 datasets, as well as
on the affine-transformed versions of such datasets. With a thorough analysis,
we show which properties of these architectures better contribute to increasing
the robustness and their limitations. Overall, CapsNets achieve better
robustness against adversarial examples and affine transformations, compared to
a traditional CNN with a similar number of parameters. Similar conclusions have
been derived for deeper versions of CapsNets and CNNs. Moreover, our results
reveal a key finding: dynamic routing does not contribute much to improving
the CapsNets' robustness. Indeed, the main generalization contribution is due
to the hierarchical feature learning through capsules.

Comment: To appear at the 2023 International Joint Conference on Neural
Networks (IJCNN), Queensland, Australia, June 2023
A Survey and Empirical Evaluation of Parallel Deep Learning Frameworks
The field of deep learning has witnessed a remarkable shift towards extremely
compute- and memory-intensive neural networks. These newer larger models have
enabled researchers to advance state-of-the-art tools across a variety of
fields. This phenomenon has spurred the development of algorithms for
distributed training of neural networks over a larger number of hardware
accelerators. In this paper, we discuss and compare current state-of-the-art
frameworks for large-scale distributed deep learning. First, we survey current
practices in distributed learning and identify the different types of
parallelism used. Then, we present empirical results comparing their
performance on large image and language training tasks. Additionally, we
address their statistical efficiency and memory consumption behavior. Based on
our results, we discuss the algorithmic and implementation aspects of each
framework that hinder performance.
Using machine learning to improve dense and sparse matrix multiplication kernels
This work comprises two projects in numerical linear algebra. The first project uses machine learning to speed up dense matrix-matrix multiplication on a shared-memory computer architecture. We found that basic loop-based matrix-matrix multiplication algorithms, tied to a decision-tree algorithm selector, were competitive with Intel's Math Kernel Library for the same computation. The second project is a preliminary report on re-implementing an encoding format for sparse matrix-vector multiplication called Compressed Sparse eXtended (CSX). The goal of the second project is to use machine learning to aid in encoding matrix substructures in the CSX format without using exhaustive search and a Just-In-Time compiler.
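The idea of a decision-tree algorithm selector over loop-based kernels can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the thesis trains a decision tree on benchmark data, whereas here a single hand-written threshold stands in for the learned tree, and the function names and the cutoff value are assumptions.

```python
def matmul_ijk(A, B):
    """Naive i-j-k loop order: dot product per output element."""
    n, k, m = len(A), len(B), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            s = 0.0
            for p in range(k):
                s += A[i][p] * B[p][j]
            C[i][j] = s
    return C

def matmul_ikj(A, B):
    """i-k-j loop order: streams rows of B, often friendlier to caches."""
    n, k, m = len(A), len(B), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for p in range(k):
            a, row, Ci = A[i][p], B[p], C[i]
            for j in range(m):
                Ci[j] += a * row[j]
    return C

def select_kernel(n, k, m):
    """Stand-in for the trained decision tree: pick a variant by shape.
    A real selector would branch on features learned from timing runs."""
    if k >= 64:  # illustrative threshold, not from the thesis
        return matmul_ikj
    return matmul_ijk

# usage: the selector picks a kernel, then the kernel computes the product
A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
C = select_kernel(2, 2, 2)(A, B)  # [[19.0, 22.0], [43.0, 50.0]]
```

Both loop orders compute the same product; the selector's value lies in matching the loop order (and, in the thesis, blocking parameters) to the problem shape and the machine's memory hierarchy.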
Parallel Asynchronous Matrix Multiplication for a Distributed Pipelined Neural Network
Machine learning is an approach to devising algorithms that compute an output without a given rule set, based instead on a self-learning concept. This approach is of great importance for several fields of application in science and industry where traditional programming methods are not sufficient. In neural networks, a popular subclass of machine learning algorithms, previous experience is commonly used to train the network so that it produces good outputs for newly introduced inputs. Increasing the size of the network allows more complex problems to be solved, which in turn relies on a huge amount of training data. The increased complexity also leads to higher computational demand and storage requirements, and hence to the need for parallelization.
Several parallelization approaches for neural networks have already been considered. Most approaches use special-purpose hardware, whilst other work focuses on standard hardware. Often these approaches target the problem by parallelizing over the training data. In this work a new parallelization method named poadSGD is proposed for fully-connected, large-scale feedforward networks on a compute cluster with standard hardware. poadSGD is based on the stochastic gradient descent algorithm. A block-wise distribution of the network's layers to groups of processes and a pipelining scheme for batches of the training samples are used. The network is updated asynchronously without interrupting ongoing computations of subsequent batches. For this task a one-sided communication scheme is used. A main algorithmic part of the batch-wise pipelined version consists of matrix multiplications which occur in a special distributed setup, where each matrix is held by a different process group.
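The pipelining scheme described above can be illustrated with a small schedule sketch: layer blocks are assigned to process groups (stages), and successive batches occupy successive stages, so one group works on batch b while the previous group has already moved on to batch b+1. This is an assumed simplification for illustration; the function name and schedule shape are not taken from the thesis.

```python
def pipeline_schedule(num_stages, num_batches):
    """Return, per time step, the list of active (stage, batch) pairs
    in a simple fill/run/drain pipeline over layer groups."""
    steps = []
    for t in range(num_stages + num_batches - 1):
        active = []
        for stage in range(num_stages):
            batch = t - stage  # batch b reaches stage s at time b + s
            if 0 <= batch < num_batches:
                active.append((stage, batch))
        steps.append(active)
    return steps

# 3 layer groups, 4 batches: the pipeline fills, runs fully occupied,
# then drains; at step 2 all three stages work on different batches.
for t, active in enumerate(pipeline_schedule(3, 4)):
    print(t, active)
```

In poadSGD the asynchronous, one-sided updates mean the stages need not advance in lockstep as in this idealized schedule; the sketch only shows why pipelining keeps all process groups busy on different batches at once.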
GASPI, a parallel programming model from the field of Partitioned Global Address Space (PGAS) models, is introduced and compared to other models from this class. As it mainly relies on one-sided and asynchronous communication, it is a perfect candidate for the asynchronous update task in the poadSGD algorithm. Therefore, the matrix multiplication is also implemented based on GASPI. In order to efficiently handle upcoming synchronizations within the process groups and achieve a good workload distribution, a two-dimensional block-cyclic data distribution is applied to the matrices. Based on this distribution, the multiplication algorithm iterates diagonally over the sub-blocks of the resulting matrix and computes the sub-blocks in subgroups of the processes. The sub-blocks are computed by sharing the workload between the process groups and communicating mostly in pairs or in subgroups. The pairwise communication is set up to be overlapped by other ongoing computations. The implementation poses a special challenge, since the asynchronous communication routines must be handled with care: it must be clear which processor is working with which data at what point in time, in order to prevent an unintentional dual use of data.
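The two-dimensional block-cyclic distribution mentioned above can be sketched with its owner mapping: on a P x Q process grid with square blocks of size bs, block (bi, bj) of the global matrix is owned by process (bi mod P, bj mod Q). This is the standard block-cyclic rule, shown here as a minimal sketch; the grid shape, block size, and names are illustrative and not the thesis's actual parameters.

```python
def block_owner(i, j, bs, P, Q):
    """Owner (p, q) in a P x Q process grid of global element (i, j)
    under a 2D block-cyclic distribution with block size bs."""
    bi, bj = i // bs, j // bs   # block coordinates of the element
    return (bi % P, bj % Q)     # cyclic assignment along both dimensions

# 2 x 2 process grid, block size 2: elements (0,0) and (4,4) share an
# owner because their block coordinates differ by a full grid period,
# while (2,0) falls in the next block row and moves one process down.
print(block_owner(0, 0, 2, 2, 2))  # (0, 0)
print(block_owner(4, 4, 2, 2, 2))  # (0, 0)
print(block_owner(2, 0, 2, 2, 2))  # (1, 0)
```

Cycling blocks over the grid in both dimensions is what gives the balanced workload the abstract refers to: every process owns blocks from all regions of the matrix, so the diagonal iteration over result sub-blocks keeps all subgroups comparably busy.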
The theoretical analysis shows the matrix multiplication to be superior to a naive implementation when the dimension of the sub-blocks of the matrices exceeds 382. The performance achieved in the test runs did not meet the expectations raised by the theoretical analysis. The algorithm was executed on up to 512 cores and for matrices up to a size of 131,072 x 131,072.
The implementation using the GASPI API was found not to be straightforward, but to offer good potential for overlapping communication with computations whenever the data dependencies of an application allow for it. The matrix multiplication was successfully implemented and can be used within a future implementation of the poadSGD method. The poadSGD method seems very promising, especially since, with today's larger data volumes and increasingly complex applications, approaches to parallelizing neural networks are of growing interest.