3 research outputs found
Robustness Against Adversarial Attacks via Learning Confined Adversarial Polytopes
Deep neural networks (DNNs) can be deceived by adding human-imperceptible
perturbations to clean samples. Therefore, enhancing the robustness of DNNs
against adversarial attacks is a crucial task. In this
paper, we aim to train robust DNNs by limiting the set of outputs reachable via
a norm-bounded perturbation added to a clean sample. We refer to this set as
the adversarial polytope; each clean sample has its own adversarial polytope.
Indeed, if the polytopes of all samples are confined such that they do not
intersect the decision boundaries of the DNN, then the DNN is robust against
adversarial samples. Hence, our algorithm is based on learning
\textbf{c}onfined \textbf{a}dversarial \textbf{p}olytopes (CAP). Through a
thorough set of experiments, we
demonstrate the effectiveness of CAP over existing adversarial robustness
methods in improving the robustness of models against state-of-the-art attacks
including AutoAttack. Comment: The paper has been accepted at ICASSP 202
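The core idea of checking whether a sample's adversarial polytope touches a decision boundary can be illustrated for a linear classifier, where the extreme outputs under an L-infinity-bounded perturbation occur at the corners of the perturbation box. This is a toy sketch of the notion of a confined polytope, not the paper's CAP training algorithm; the classifier, inputs, and radius below are all hypothetical.

```python
import itertools
import numpy as np

def polytope_crosses_boundary(W, b, x, eps):
    """For a linear classifier with logits = W @ x + b, check whether any
    point in the L-infinity ball of radius eps around x changes the
    predicted class. For a linear model the extreme logits occur at the
    corners of the box, so enumerating corners is exact (illustrative
    only: the corner count is exponential in the input dimension)."""
    base_class = int(np.argmax(W @ x + b))
    for signs in itertools.product([-1.0, 1.0], repeat=len(x)):
        x_pert = x + eps * np.array(signs)
        if int(np.argmax(W @ x_pert + b)) != base_class:
            return True  # the adversarial polytope intersects a boundary
    return False

# Hypothetical two-class linear model and clean sample.
W = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.zeros(2)
x = np.array([1.0, 0.0])
print(polytope_crosses_boundary(W, b, x, 0.1))  # False: small ball stays in class 0
print(polytope_crosses_boundary(W, b, x, 2.0))  # True: large ball crosses the boundary
```

In this language, CAP's goal would correspond to training the network so that the first case (no crossing) holds at the prescribed perturbation radius for every training sample.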
New Methods for Channel Allocation Schemes in Wireless Cellular Networks
Due to the fast-growing number of cell-phone users, channel allocation schemes play
an important role in cellular networks. Given that in downlink transmission the
base station transmits signals over a specific channel to different users,
broadcasting enhancement has gained momentum in cellular networks. Consider
a set-up in which the base station is equipped with M time/frequency resources and M
clients are served, each with its own rate requirement. The optimum solution in a
degraded broadcast channel is to send signals to all users over all channels. However,
this enlarges the codebook, which leads to an intricate coding/decoding system.
Taking this into account,
the idea of grouping the clients into smaller subsets has been proposed in this research.
The required rate for each client must be satisfied within its subset, which
determines the size of the groups. Since the highest gain within each subset is
obtained by broadcasting, one can focus solely on finding the best method of
grouping; thereafter, each group follows the broadcasting scenario.
Similarly, the same scenario applies to uplink transmission in which clients transmit
signals to a single base station.
Intuitively, the optimum grouping can be found using linear programming. Moreover,
it turns out that the linear-programming solution satisfies the zero-one constraint,
so the problem can be solved efficiently by the Hungarian method. It has been shown
that this practical method conspicuously outperforms the traditional
frequency-division method.
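A zero-one assignment of M channels to M clients, solved by the Hungarian method, can be sketched with SciPy's `linear_sum_assignment` (which implements that algorithm). The rate matrix below is a hypothetical example, not data from the research.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical per-(client, channel) achievable rates. The zero-one
# assignment maximizing the total rate is found by the Hungarian method;
# linear_sum_assignment minimizes cost, so we negate the rates.
rates = np.array([
    [5.0, 1.0, 1.0],
    [1.0, 5.0, 1.0],
    [1.0, 1.0, 5.0],
])
clients, channels = linear_sum_assignment(-rates)
assignment = dict(zip(clients.tolist(), channels.tolist()))
total_rate = rates[clients, channels].sum()
print(assignment, total_rate)  # {0: 0, 1: 1, 2: 2} 15.0
```

Here each client is matched to the channel on which it achieves its highest rate; a traditional fixed frequency-division scheme would ignore these per-client gains.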
Conditional Mutual Information Constrained Deep Learning for Classification
The concepts of conditional mutual information (CMI) and normalized
conditional mutual information (NCMI) are introduced to measure the
concentration and separation performance of a classification deep neural
network (DNN) in the output probability distribution space of the DNN, where
CMI and the ratio between CMI and NCMI represent the intra-class concentration
and inter-class separation of the DNN, respectively. By using NCMI to evaluate
popular DNNs pretrained over ImageNet in the literature, it is shown that their
validation accuracies over ImageNet validation data set are more or less
inversely proportional to their NCMI values. Based on this observation, the
standard deep learning (DL) framework is further modified to minimize the
standard cross entropy function subject to an NCMI constraint, yielding CMI
constrained deep learning (CMIC-DL). A novel alternating learning algorithm is
proposed to solve such a constrained optimization problem. Extensive experimental
results show that DNNs trained within CMIC-DL outperform the state-of-the-art
models trained within the standard DL and other loss functions in the
literature in terms of both accuracy and robustness against adversarial
attacks. In addition, visualizing the evolution of the learning process through
the lens of CMI and NCMI is also advocated.
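The concentration/separation reading of CMI and NCMI can be illustrated with a toy empirical proxy: intra-class concentration as the mean divergence of each output distribution from its class mean, and inter-class separation as the mean pairwise divergence between class means. The estimators and example outputs below are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def kl(p, q):
    """KL divergence between two discrete probability distributions."""
    return float(np.sum(p * np.log(p / q)))

def ncmi_proxy(probs, labels):
    """Toy proxy for NCMI: concentration (mean KL from each output to its
    class mean) divided by separation (mean pairwise KL between class
    means). Illustrative only, not the paper's estimator."""
    classes = sorted(set(labels))
    means = {c: np.mean([p for p, y in zip(probs, labels) if y == c], axis=0)
             for c in classes}
    concentration = np.mean([kl(p, means[y]) for p, y in zip(probs, labels)])
    separation = np.mean([kl(means[a], means[b])
                          for a in classes for b in classes if a != b])
    return concentration / separation

# Concentrated, well-separated outputs: every output equals its class mean,
# so the concentration term (and hence the proxy NCMI) is zero.
good = ncmi_proxy([np.array([0.9, 0.1]), np.array([0.1, 0.9])], [0, 1])
# Diffuse, overlapping outputs: nonzero concentration, smaller separation.
bad = ncmi_proxy([np.array([0.7, 0.3]), np.array([0.5, 0.5]),
                  np.array([0.3, 0.7]), np.array([0.5, 0.5])], [0, 0, 1, 1])
print(good < bad)  # True
```

This mirrors the abstract's observation: better-performing networks (tighter clusters, wider margins in output-distribution space) should exhibit smaller NCMI values.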