Improved Practical Matrix Sketching with Guarantees
Matrices have become essential data representations for many large-scale
problems in data analytics, and hence matrix sketching is a critical task.
Although much research has focused on improving the error/size trade-off under
various sketching paradigms, the many forms of error bounds make these
approaches hard to compare in theory and in practice. This paper attempts to
categorize and compare most known methods under row-wise streaming updates with
provable guarantees, and then to tweak some of these methods to gain practical
improvements while retaining guarantees.
For instance, we observe that a simple heuristic, iSVD, with no guarantees,
tends to outperform all known approaches in terms of the size/error trade-off.
We modify FrequentDirections, the best-performing method with guarantees under
the size/error trade-off, to match the performance of iSVD while retaining its
guarantees. We also demonstrate some adversarial datasets on which iSVD
performs quite poorly. When comparing techniques under the time/error
trade-off, those based on hashing or sampling tend to perform better. In this
setting, we modify the most studied sampling regime so that it retains its
error guarantee while obtaining dramatic improvements in the time/error
trade-off.
Finally, we provide easy replication of our studies on APT, a new testbed
which makes available not only code and datasets, but also a computing platform
with fixed environmental settings.
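As a hedged illustration of the kind of method the abstract compares (this is the simple one-buffer variant of FrequentDirections, not the authors' tweaked implementation; names are mine), the sketch can be prototyped in a few lines of NumPy. Each time the buffer fills, it rotates to the singular-vector basis and subtracts the smallest retained squared singular value, freeing one row:

```python
import numpy as np

def frequent_directions(A, ell):
    """Maintain an ell-row sketch B of the rows of A (FrequentDirections).

    Assumes ell <= A.shape[1]. The classic guarantee for this variant is
    0 <= x^T (A^T A - B^T B) x for all x, and
    ||A^T A - B^T B||_2 <= 2 ||A||_F^2 / ell.
    """
    _, d = A.shape
    B = np.zeros((ell, d))
    for row in A:
        zero_rows = np.flatnonzero(~B.any(axis=1))
        if zero_rows.size == 0:
            # Buffer full: shrink all squared singular values by the
            # smallest one, which zeroes out the last direction.
            _, s, Vt = np.linalg.svd(B, full_matrices=False)
            s = np.sqrt(np.maximum(s**2 - s[-1] ** 2, 0.0))
            B = s[:, None] * Vt
            zero_rows = np.flatnonzero(~B.any(axis=1))
        B[zero_rows[0]] = row  # insert the new streamed row into a free slot
    return B
```

Note that this naive version recomputes an SVD whenever the buffer fills; the doubled-buffer variant amortizes that cost, which is one of the practical levers the paper studies.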
Stochastically Rank-Regularized Tensor Regression Networks
Over-parametrization of deep neural networks has recently been shown to be key to their successful training. However, it also renders them prone to overfitting and makes them expensive to store and train. Tensor regression networks significantly reduce the number of effective parameters in deep neural networks while retaining accuracy and the ease of training. They replace the flattening and fully-connected layers with a tensor regression layer, where the regression weights are expressed through the factors of a low-rank tensor decomposition. In this paper, to further improve tensor regression networks, we propose a novel stochastic rank-regularization. It consists of a novel randomized tensor sketching method to approximate the weights of tensor regression layers. We theoretically and empirically establish the link between our proposed stochastic rank-regularization and dropout on low-rank tensor regression. Extensive experimental results with both synthetic data and real-world datasets (i.e., CIFAR-100 and the UK Biobank brain MRI dataset) show that the proposed approach i) improves performance in both classification and regression tasks, ii) decreases overfitting, iii) leads to more stable training and iv) improves robustness to adversarial attacks and random noise.
On Security and Sparsity of Linear Classifiers for Adversarial Settings
Machine-learning techniques are widely used in security-related applications,
like spam and malware detection. However, in such settings, they have been
shown to be vulnerable to adversarial attacks, including the deliberate
manipulation of data at test time to evade detection. In this work, we focus on
the vulnerability of linear classifiers to evasion attacks. This can be
considered a relevant problem, as linear classifiers have been increasingly
used in embedded systems and mobile devices for their low processing time and
memory requirements. We exploit recent findings in robust optimization to
investigate the link between regularization and security of linear classifiers,
depending on the type of attack. We also analyze the relationship between the
sparsity of feature weights, which is desirable for reducing processing cost,
and the security of linear classifiers. We further propose a novel octagonal
regularizer that allows us to achieve a proper trade-off between them. Finally,
we empirically show how this regularizer can improve classifier security and
sparsity in real-world application examples including spam and malware
detection.
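As a hedged sketch of the idea (function names and the training loop are illustrative, not the paper's implementation), an octagonal regularizer can be formed as a convex combination of the ℓ1 norm, which promotes sparsity, and the ℓ∞ norm, which promotes evenly spread weights and is known to harden linear classifiers against evasion:

```python
import numpy as np

def octagonal_penalty(w, rho):
    """Convex combination of l1 (sparsity) and l-infinity (even weights):
    rho * ||w||_1 + (1 - rho) * ||w||_inf, with rho in [0, 1]."""
    return rho * np.abs(w).sum() + (1.0 - rho) * np.abs(w).max()

def octagonal_subgradient(w, rho):
    """One valid subgradient of the non-smooth octagonal penalty."""
    g = rho * np.sign(w)
    i = np.argmax(np.abs(w))       # the l-inf term only touches the max entry
    g[i] += (1.0 - rho) * np.sign(w[i])
    return g

def train_logreg_octagonal(X, y, rho=0.5, lam=0.1, lr=0.05, iters=500):
    """Logistic regression (labels in {-1,+1}) with octagonal regularization,
    trained by plain subgradient descent (illustrative only)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        margins = y * (X @ w)
        # Gradient of mean log(1 + exp(-margin)) w.r.t. w.
        grad_loss = -(X.T @ (y / (1.0 + np.exp(margins)))) / len(y)
        w -= lr * (grad_loss + lam * octagonal_subgradient(w, rho))
    return w
```

Sweeping `rho` from 1 toward 0 trades pure-ℓ1 sparsity against the more evenly spread, harder-to-evade weight profiles that ℓ∞ induces, which is the trade-off the abstract describes.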
Towards Adversarial Malware Detection: Lessons Learned from PDF-based Attacks
Malware still constitutes a major threat in the cybersecurity landscape, in
part due to the widespread use of infection vectors such as documents. These
infection vectors hide embedded malicious code from victim users,
facilitating the use of social engineering techniques to infect their machines.
Research showed that machine-learning algorithms provide effective detection
mechanisms against such threats, but the existence of an arms race in
adversarial settings has recently challenged such systems. In this work, we
focus on malware embedded in PDF files as a representative case of such an arms
race. We start by providing a comprehensive taxonomy of the different
approaches used to generate PDF malware, and of the corresponding
learning-based detection systems. We then categorize threats specifically
targeted against learning-based PDF malware detectors, using a well-established
framework in the field of adversarial machine learning. This framework allows
us to categorize known vulnerabilities of learning-based PDF malware detectors
and to identify novel attacks that may threaten such systems, along with the
potential defense mechanisms that can mitigate the impact of such threats. We
conclude the paper by discussing how such findings highlight promising research
directions towards tackling the more general challenge of designing robust
malware detectors in adversarial settings.
Robust Deep Networks with Randomized Tensor Regression Layers
In this paper, we propose a novel randomized tensor decomposition for tensor regression. It allows us to stochastically approximate the weights of tensor regression layers by randomly sampling in the low-rank subspace. We theoretically and empirically establish the link between our proposed stochastic rank-regularization and dropout on low-rank tensor regression. This acts as an additional stochastic regularization on the regression weight, which, combined with the deterministic regularization imposed by the low-rank constraint, improves both the performance and robustness of neural networks augmented with it. In particular, it makes the model more robust to adversarial attacks and random noise, without requiring any adversarial training. We perform a thorough study of our method on synthetic data, object classification on the CIFAR100 and ImageNet datasets, and large-scale brain-age prediction on the UK Biobank brain MRI dataset. We demonstrate superior performance in all cases, as well as significant improvement in robustness to adversarial attacks and random noise.
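A hedged, matrix-valued sketch of the core idea (the paper works with full tensor decompositions; the class, parameter names, and initialization here are illustrative): factor the regression weight as W = U Vᵀ and, at training time, drop random rank-one components and rescale, exactly as dropout does for units:

```python
import numpy as np

class RandomizedLowRankLayer:
    """Regression layer with weight W = U @ V.T; training-time dropout over
    the rank-one components is the matrix analogue of stochastic
    rank-regularization (illustrative sketch, not the authors' code)."""

    def __init__(self, d_in, d_out, rank, keep=0.5, seed=0):
        self.rng = np.random.default_rng(seed)
        self.U = self.rng.standard_normal((d_in, rank)) / np.sqrt(d_in)
        self.V = self.rng.standard_normal((d_out, rank)) / np.sqrt(rank)
        self.keep = keep  # probability of keeping each rank-one component

    def forward(self, x, train=True):
        if not train:
            # Deterministic low-rank forward pass at evaluation time.
            return x @ self.U @ self.V.T
        rank = self.U.shape[1]
        mask = self.rng.random(rank) < self.keep
        # Keep a random subset of rank-one components and rescale so the
        # expected output matches the deterministic forward pass.
        return (x @ self.U[:, mask]) @ self.V[:, mask].T / self.keep
```

Because each minibatch sees a randomly thinned rank, the effective rank of the layer is stochastically regularized, which is the mechanism the abstract links to dropout.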
Designing for Irrelevance
My job title is ‘designer’ but I’m reluctant to describe myself as a designer for a number of reasons: first, because the practice has a lot to answer for; and second, because I don’t do a whole lot of design. I help groups of people to collaborate and converse their way through problems towards solutions—activating a latent capability for design in people as they think and work differently, together. The sense of agency that accompanies this is intoxicating. This work can produce strategies, systems, and services, as well as spaces, objects, and graphics. The awareness that design can shape both our (intangible) experiences and our (tangible) environments—and that, as a mode of thinking, it can be accessible, inclusive, and participatory—shifts it from a practice to a stance. In this sense, is design a choice that we make to perceive and move through the world in a contextual and intentional way? What does this mean for the practice of design? I respond to these questions by reflecting on my experience of participating in the Indonesia Australia Design Futures project.