On the efficient representation and execution of deep acoustic models
In this paper we present a simple and computationally efficient quantization
scheme that enables us to reduce the resolution of the parameters of a neural
network from 32-bit floating point values to 8-bit integer values. The proposed
quantization scheme leads to significant memory savings and enables the use of
optimized hardware instructions for integer arithmetic, thus significantly
reducing the cost of inference. Finally, we propose a "quantization aware"
training process that applies the proposed scheme during network training and
find that it allows us to recover most of the loss in accuracy introduced by
quantization. We validate the proposed techniques by applying them to a long
short-term memory-based acoustic model on an open-ended large vocabulary speech
recognition task.
Comment: Accepted conference paper: The Annual Conference of the International Speech Communication Association (Interspeech), 2016.
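As an illustration of the kind of scheme the abstract describes, here is a minimal sketch of symmetric linear quantization of 32-bit floating point weights to 8-bit integers. The per-tensor max-abs scale, function names, and error check are illustrative assumptions, not the paper's exact method.

```python
# Sketch: symmetric linear quantization of float32 weights to int8.
# Scale choice (per-tensor max-abs) is an assumption, not the paper's scheme.
import numpy as np

def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Map float32 weights to int8 with a single per-tensor scale."""
    scale = np.max(np.abs(weights)) / 127.0   # largest magnitude maps to +/-127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 values, e.g. to measure quantization error."""
    return q.astype(np.float32) * scale

# Usage: quantize a weight matrix and check the error introduced.
w = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(w)
err = np.max(np.abs(w - dequantize(q, scale)))
print(f"scale={scale:.6f}, max abs error={err:.6f}")
```

Quantization-aware training, as proposed in the paper, would apply such a mapping during the forward pass of training so the network can compensate for the rounding error.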
Language Models as Knowledge Bases?
Recent progress in pretraining language models on large textual corpora led
to a surge of improvements for downstream NLP tasks. Whilst learning linguistic
knowledge, these models may also be storing relational knowledge present in the
training data, and may be able to answer queries structured as
"fill-in-the-blank" cloze statements. Language models have many advantages over
structured knowledge bases: they require no schema engineering, allow
practitioners to query about an open class of relations, are easy to extend to
more data, and require no human supervision to train. We present an in-depth
analysis of the relational knowledge already present (without fine-tuning) in a
wide range of state-of-the-art pretrained language models. We find that (i)
without fine-tuning, BERT contains relational knowledge competitive with
traditional NLP methods that have some access to oracle knowledge, (ii) BERT
also does remarkably well on open-domain question answering against a
supervised baseline, and (iii) certain types of factual knowledge are learned
much more readily than others by standard language model pretraining
approaches. The surprisingly strong ability of these models to recall factual
knowledge without any fine-tuning demonstrates their potential as unsupervised
open-domain QA systems. The code to reproduce our analysis is available at
https://github.com/facebookresearch/LAMA.
Comment: Accepted at EMNLP 2019.
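A cloze query of the kind the abstract describes can be posed to a pretrained masked language model as follows. This is a minimal sketch using the Hugging Face transformers fill-mask pipeline; the model choice and prompt are illustrative, and this is not the LAMA probe code itself.

```python
# Sketch: query a pretrained masked LM with a "fill-in-the-blank" cloze statement.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-cased")

# Relational fact posed as a cloze statement: (Dante, born-in, ?)
for candidate in fill_mask("Dante was born in [MASK]."):
    print(f"{candidate['token_str']:>12}  p={candidate['score']:.3f}")
```

The ranked completions serve as the model's answer to the relational query, with no fine-tuning or schema engineering involved.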
Geodynamic monitoring and its maintenance using modeling by numerical and similar materials methods
The paper describes the fundamental issues of deformation monitoring systems and instrumental methods for measuring the stress-strain state of a rock mass, based on three-component strain sensors developed by specialists from the University of Mines and Avangard OJSC.
One of the main tasks of the developed systems is the prediction and prevention of possible dynamic manifestations of rock pressure in rockburst-hazardous deposits.
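For context on how strain readings relate to the stress state being monitored, here is a minimal sketch that converts three orthogonal strain readings into normal stresses via isotropic linear elasticity (generalized Hooke's law). The elastic constants, sensor values, and interface are assumptions for illustration only and do not describe the sensor hardware or processing discussed in the paper.

```python
# Sketch: normal stresses from three orthogonal strain readings (isotropic elasticity).
import numpy as np

def stresses_from_strains(eps: np.ndarray, E: float, nu: float) -> np.ndarray:
    """eps: strains along three orthogonal axes; returns stresses in the units of E."""
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))   # Lame's first parameter
    mu = E / (2 * (1 + nu))                    # shear modulus
    return lam * eps.sum() + 2 * mu * eps      # sigma_i = lam*tr(eps) + 2*mu*eps_i

# Usage with assumed hard-rock constants and example microstrain readings.
eps = np.array([120e-6, 80e-6, -40e-6])             # three-component strain reading
sigma = stresses_from_strains(eps, E=60e9, nu=0.25)  # E in Pa
print(sigma / 1e6, "MPa")
```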