Classifying Sequences of Extreme Length with Constant Memory Applied to Malware Detection
Recent works within machine learning have been tackling inputs of
ever-increasing size, with cybersecurity presenting sequence classification
problems of particularly extreme lengths. In the case of Windows executable
malware detection, inputs may exceed 100 MB, which corresponds to a time
series with T = 100,000,000 steps. To date, the closest approach to handling
such a task is MalConv, a convolutional neural network capable of processing up
to T = 2,000,000 steps. The memory requirements of CNNs have prevented
their further application to malware. In this work, we develop a new approach
to temporal max pooling that makes the required memory invariant to the
sequence length T. This makes MalConv substantially more memory efficient and
faster to train on its original dataset, while removing the
input length restrictions of MalConv. We re-invest these gains into improving
the MalConv architecture by developing a new Global Channel Gating design,
giving us an attention mechanism capable of learning feature interactions
across 100 million time steps in an efficient manner, a capability that
the original MalConv CNN lacked. Our implementation can be found at
https://github.com/NeuromorphicComputationResearchProgram/MalConv2
Comment: To appear in AAAI 2021
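The constant-memory property described above can be illustrated with a minimal sketch. Because max pooling over time is associative, per-channel activations can be pooled chunk by chunk, keeping only a running maximum whose size depends on the number of channels, never on the sequence length T. (This is an illustration of the general principle only, not the paper's actual implementation; `chunked_temporal_max` and its parameters are invented for the example, and the paper's contribution also covers the backward pass, which this sketch omits.)

```python
import numpy as np

def chunked_temporal_max(activations, chunk_size=4096):
    """Temporal max pooling over a (T, C) activation matrix.

    Processes the time axis in fixed-size chunks, so peak memory is
    O(chunk_size * C) regardless of how large T grows.
    """
    n_steps, n_channels = activations.shape
    # Running per-channel maximum, initialized to -inf.
    running = np.full(n_channels, -np.inf)
    for start in range(0, n_steps, chunk_size):
        chunk = activations[start:start + chunk_size]
        # Fold this chunk's per-channel max into the running max.
        running = np.maximum(running, chunk.max(axis=0))
    return running

# The chunked result matches pooling over the full sequence at once:
x = np.random.default_rng(0).normal(size=(10_000, 8))
print(np.allclose(chunked_temporal_max(x, chunk_size=37), x.max(axis=0)))
```

In a real model the convolution producing each chunk of activations would also be computed on the fly, so the full (T, C) matrix is never materialized at once.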
The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation
This report surveys the landscape of potential security threats from
malicious uses of AI, and proposes ways to better forecast, prevent, and
mitigate these threats. After analyzing the ways in which AI may influence the
threat landscape in the digital, physical, and political domains, we make four
high-level recommendations for AI researchers and other stakeholders. We also
suggest several promising areas for further research that could expand the
portfolio of defenses, or make attacks less effective or harder to execute.
Finally, we discuss, but do not conclusively resolve, the long-term equilibrium
of attackers and defenders.
Future of Humanity Institute, University of Oxford; Centre for the Study of Existential Risk, University of Cambridge; Center for a New American Security; Electronic Frontier Foundation; OpenAI. The Future of Life Institute is acknowledged as a funder.