The New Testament Period
If we are to approach the New Testament as part of the action of God in history, then we are committed to studying it historically: examining the situation at the start of the New Testament period, observing the changes and change agents (including the writing of the New Testament itself) that a historian may identify as operating during the period, and then describing the situation as it existed at its end. To do this we need some way of marking off the beginning and the end of the New Testament period.
Lane Compression: A Lightweight Lossless Compression Method for Machine Learning on Embedded Systems
This article presents Lane Compression, a lightweight lossless compression technique for machine learning, based on a detailed study of the statistical properties of machine learning data. The proposed technique profiles machine learning data gathered ahead of run-time and partitions values bit-wise into different lanes with more distinctive statistical characteristics. The most appropriate compression technique is then chosen for each lane from a small menu of low-cost compression techniques. Lane Compression's compute and memory requirements are very low, yet it achieves a compression rate comparable to or better than Huffman coding. We evaluate and analyse Lane Compression on a wide range of machine learning networks for both inference and re-training. We also demonstrate that profiling prior to run-time, together with the ability to configure the hardware based on that profiling, guarantees robust performance across different models and datasets. Hardware implementations are described, and the scheme's simplicity makes it suitable for compressing both on-chip and off-chip traffic.
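The lane idea can be illustrated with a minimal sketch: split each 16-bit value bit-wise into a high-byte and a low-byte lane, profile each lane, and pick the cheapest of a small menu of compressors per lane. The lane boundaries and the compressor menu here are assumptions for illustration, not the paper's actual configuration.

```python
def split_lanes(values):
    """Split each 16-bit value into a high-byte lane and a low-byte lane."""
    hi = [(v >> 8) & 0xFF for v in values]
    lo = [v & 0xFF for v in values]
    return hi, lo

def rle_size(lane):
    """Compressed size (bytes) under a simple run-length encoding."""
    runs = 1
    for a, b in zip(lane, lane[1:]):
        if a != b:
            runs += 1
    return 2 * runs  # one (value, run-length) pair per run

def zero_mask_size(lane):
    """Size under a scheme storing a zero-bitmask plus the non-zero bytes."""
    nonzero = sum(1 for v in lane if v != 0)
    return (len(lane) + 7) // 8 + nonzero

def choose_compressor(lane):
    """Profile the lane and pick the cheapest of a small compressor menu."""
    candidates = {"raw": len(lane), "rle": rle_size(lane),
                  "mask": zero_mask_size(lane)}
    return min(candidates, key=candidates.get)

# Typical ML data has small magnitudes, so the high-byte lane is mostly zero
# and compresses far better than the values would as a whole.
data = [3, 0, 7, 0, 0, 1, 255, 0, 2, 0]
hi, lo = split_lanes(data)
print(choose_compressor(hi), choose_compressor(lo))
```

The point of the per-lane choice is that the two lanes have very different statistics: here the high-byte lane is all zeros and suits run-length coding, while the sparse low-byte lane suits a bitmask scheme.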
Samsung Advanced Institute of Technology (SAIT)
Scheduling aircraft landings - the static case
In this paper, we consider the problem of scheduling aircraft (plane) landings at an airport. This problem is one of deciding a landing time for each plane such that each plane lands within a predetermined time window and that separation criteria between the landing of a plane and the landing of all successive planes are respected. We present a mixed-integer zero-one formulation of the problem for the single runway case and extend it to the multiple runway case. We strengthen the linear programming relaxations of these formulations by introducing additional constraints. Throughout, we discuss how our formulations can be used to model a number of issues (choice of objective function, precedence restrictions, restricting the number of landings in a given time period, runway workload balancing) commonly encountered in practice. The problem is solved optimally using linear programming-based tree search. We also present an effective heuristic algorithm for the problem. Computational results for both the heuristic and the optimal algorithm are presented for a number of test problems involving up to 50 planes and four runways. J.E. Beasley would like to acknowledge the financial support of the Commonwealth Scientific and Industrial Research Organization, Australia.
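The single-runway problem can be sketched with a toy brute-force solver: each plane has a window [E, L], a target time, linear earliness/lateness costs, and a pairwise separation. Enumerating landing orders is only feasible for a handful of planes, and landing each plane as close to its target as the order allows is only a heuristic; the paper instead uses a 0-1 mixed-integer formulation with LP-based tree search. All numbers below are invented.

```python
from itertools import permutations

# (earliest, target, latest, cost per unit early, cost per unit late)
planes = [
    (0, 10, 30, 1.0, 2.0),
    (5, 12, 40, 1.0, 2.0),
    (8, 20, 50, 1.0, 2.0),
]
SEP = 4  # minimum separation between consecutive landings

def order_cost(order):
    """Cost of landing planes in this order, each as close to target as allowed."""
    t_prev, total = None, 0.0
    for i in order:
        e, tgt, l, ce, cl = planes[i]
        earliest = e if t_prev is None else max(e, t_prev + SEP)
        if earliest > l:
            return None  # separation pushes this plane past its window
        t = min(max(earliest, tgt), l)  # hit the target if the window allows
        total += ce * (tgt - t) if t < tgt else cl * (t - tgt)
        t_prev = t
    return total

costs = [c for p in permutations(range(len(planes)))
         if (c := order_cost(p)) is not None]
best = min(costs)
print(best)
```

Even this tiny instance shows why ordering matters: forcing the separation gap onto a plane with a tight target incurs lateness cost, which is exactly what the MIP's objective and separation constraints trade off at scale.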
Characterizing Sources of Ineffectual Computations in Deep Learning Networks
Hardware accelerators for inference with neural networks can take advantage of the properties of the data they process. Performance gains and reduced memory bandwidth during inference have been demonstrated by using narrower data types [1], [2] and by exploiting the ability to skip and compress values that are zero [3]-[6]. Similarly useful properties have been identified at a lower level, such as varying precision requirements [7] and bit-level sparsity [8], [9]. To date, the analysis of these potential sources of superfluous computation and communication has been constrained to a small number of older Convolutional Neural Networks (CNNs) used for image classification; it is an open question whether they exist more broadly. This paper aims to determine whether these properties persist in: (1) more recent and thus more accurate and better-performing image classification networks; (2) models for image applications other than classification, such as image segmentation and low-level computational imaging; (3) Long Short-Term Memory (LSTM) models for non-image applications such as natural language processing; and (4) quantized image classification models. We demonstrate that such properties persist, and discuss the implications and opportunities for future accelerator designs.
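Two of the properties surveyed above, value sparsity (the fraction of zero values) and bit-level sparsity (the fraction of zero bits), can be measured with a few lines of code. The 8-bit toy weights below are invented for illustration.

```python
def value_sparsity(weights):
    """Fraction of values that are exactly zero (skippable work)."""
    return sum(1 for w in weights if w == 0) / len(weights)

def bit_sparsity(weights, bits=8):
    """Fraction of zero bits across all fixed-point values."""
    zero_bits = sum(bits - bin(w & ((1 << bits) - 1)).count("1")
                    for w in weights)
    return zero_bits / (bits * len(weights))

w = [0, 3, 0, 0, 16, 1, 0, 64]
print(value_sparsity(w))  # half the values are zero
print(bit_sparsity(w))    # the non-zero values are themselves bit-sparse
```

Note that bit-level sparsity exceeds value sparsity here: even the non-zero weights contribute mostly zero bits, which is the extra opportunity bit-serial accelerators exploit.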
Focused quantization for sparse CNNs
Deep convolutional neural networks (CNNs) are powerful tools for a wide range of vision tasks, but the enormous amount of memory and compute resources required by CNNs poses a challenge in deploying them on constrained devices. Existing compression techniques, while excelling at reducing model sizes, struggle to be computationally friendly. In this paper, we attend to the statistical properties of sparse CNNs and present focused quantization, a novel quantization strategy based on power-of-two values, which exploits the weight distributions after fine-grained pruning. The proposed method dynamically discovers the most effective numerical representation for weights in layers with varying sparsities, significantly reducing model sizes. Multiplications in quantized CNNs are replaced with much cheaper bit-shift operations for efficient inference. Coupled with lossless encoding, we built a compression pipeline that provides CNNs with high compression ratios (CR), low computation cost and minimal loss in accuracy. In ResNet-50, we achieved an 18.08x CR with only 0.24% loss in top-5 accuracy, outperforming existing compression methods. We fully compressed a ResNet-18 and found that it is not only higher in CR and top-5 accuracy, but also more hardware efficient, as it requires fewer logic gates to implement than other state-of-the-art quantization methods at the same throughput.

This work is supported in part by the National Key R&D Program of China (No. 2018YFB1004804) and the National Natural Science Foundation of China (No. 61806192). We thank EPSRC for providing Yiren Zhao his doctoral scholarship.
Blackbox Attacks on Reinforcement Learning Agents Using Approximated Temporal Information
Recent research on reinforcement learning (RL) has suggested that trained agents are vulnerable to maliciously crafted adversarial samples. In this work, we show how such samples can be generalised from White-box and Grey-box attacks to a strong Black-box case, where the attacker has no knowledge of the agents, their training parameters or their training methods. We use sequence-to-sequence models to predict a single action or a sequence of future actions that a trained agent will make. First, we show that our approximation model, based on time-series information from the agent, consistently predicts RL agents' future actions with high accuracy in a Black-box setup on a wide range of games and RL algorithms. Second, we find that although adversarial samples are transferable from the target model to our RL agents, they often outperform random Gaussian noise only marginally. This highlights a serious methodological deficiency in previous work on such agents; random jamming should have been taken as the baseline for evaluation. Third, we propose a novel use for adversarial samples in Black-box attacks on RL agents: they can be used to trigger a trained agent to misbehave after a specific time delay. This appears to be a genuinely new type of attack; it potentially enables an attacker to use devices controlled by RL agents as time bombs.
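The prediction step can be approximated very crudely to see why it works: given observed action traces, learn a frequency table mapping the last k actions to the most common next action. The paper uses sequence-to-sequence models; this n-gram stand-in only illustrates that a near-deterministic policy is predictable from time-series information alone, with no access to the agent's internals. The toy traces are invented.

```python
from collections import Counter, defaultdict

def fit_ngram(traces, k=2):
    """Count, for each length-k context of actions, which action follows."""
    table = defaultdict(Counter)
    for trace in traces:
        for i in range(len(trace) - k):
            table[tuple(trace[i:i + k])][trace[i + k]] += 1
    return table

def predict(table, context):
    """Most frequent next action for this context, or None if unseen."""
    counts = table.get(tuple(context))
    return counts.most_common(1)[0][0] if counts else None

# A toy agent that cycles LEFT, LEFT, RIGHT produces fully predictable traces.
traces = [["L", "L", "R"] * 5, ["L", "L", "R"] * 3]
model = fit_ngram(traces, k=2)
print(predict(model, ["L", "L"]))
print(predict(model, ["L", "R"]))
```

Once future actions can be predicted this reliably, the attacker can craft perturbations offline against the predicted trajectory, which is what makes the Black-box setting viable.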