
    Recurrent Neural Network Training with Dark Knowledge Transfer

    Recurrent neural networks (RNNs), particularly long short-term memory (LSTM) networks, have gained much attention in automatic speech recognition (ASR). Although some success stories have been reported, training RNNs remains highly challenging, especially with limited training data. Recent research found that a well-trained model can be used as a teacher to train other child models, by using the predictions generated by the teacher model as supervision. This knowledge transfer learning has been employed to train simple neural nets with a complex one, so that the final performance can reach a level that is infeasible to obtain by regular training. In this paper, we employ the knowledge transfer learning approach to train RNNs (specifically LSTMs) using a deep neural network (DNN) model as the teacher. This differs from most existing research on knowledge transfer learning, since the teacher (DNN) is assumed to be weaker than the child (RNN); however, our experiments on an ASR task showed that it works fairly well: without applying any tricks to the learning scheme, this approach can train RNNs successfully even with limited training data.
    Comment: ICASSP 201
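
    The teacher-supervised training described above is commonly implemented as knowledge distillation: the student is trained against the teacher's softened output distribution, optionally blended with the hard labels. Below is a minimal PyTorch sketch of this idea; the temperature T, the blend weight alpha, and the toy model shapes are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend a soft-target (dark knowledge) loss with the usual hard-label loss.

    T     -- softmax temperature; T > 1 softens the teacher posteriors (assumed value)
    alpha -- weight on the distillation term versus the hard-label term (assumed value)
    """
    # KL divergence between the softened teacher and student distributions.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients match the hard-label loss magnitude
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

# Illustrative usage: a DNN teacher supervising an LSTM student on frame posteriors.
teacher = torch.nn.Sequential(
    torch.nn.Linear(40, 128), torch.nn.ReLU(), torch.nn.Linear(128, 10)
)
student_lstm = torch.nn.LSTM(input_size=40, hidden_size=64, batch_first=True)
student_out = torch.nn.Linear(64, 10)

frames = torch.randn(8, 50, 40)          # (batch, time, feature) -- toy data
labels = torch.randint(0, 10, (8, 50))   # toy frame labels

with torch.no_grad():                    # the teacher stays fixed during transfer
    teacher_logits = teacher(frames)

hidden, _ = student_lstm(frames)
student_logits = student_out(hidden)

loss = distillation_loss(
    student_logits.reshape(-1, 10), teacher_logits.reshape(-1, 10), labels.reshape(-1)
)
loss.backward()
```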

    RORS: Enhanced Rule-based OWL Reasoning on Spark

    Rule-based OWL reasoning computes the deductive closure of an ontology by applying RDF/RDFS and OWL entailment rules. The performance of rule-based OWL reasoning is often sensitive to the rule execution order. In this paper, we present an approach to enhancing the performance of rule-based OWL reasoning on Spark based on a locally optimal executable strategy. Firstly, we divide all rules (27 in total) into four main classes, namely SPO rules (5 rules), type rules (7 rules), sameAs rules (7 rules), and schema rules (8 rules), since, as we investigated, the triples corresponding to the first three classes of rules are overwhelming in practice (e.g., over 99% in the LUBM dataset). Secondly, based on the interdependence among the entailment rules in each class, we pick out a locally optimal executable order for each class and then combine them into a new execution order over all rules. Finally, we implement the new rule execution order on Spark in a prototype called RORS. The experimental results show that the running time of RORS is about 30% lower than that of Kim & Park's algorithm (2015) on LUBM200 (27.6 million triples).
    Comment: 12 page
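
    To make the class-ordered execution strategy concrete, the sketch below applies one class of entailment rules at a time, iterating each class to a local fixpoint before moving to the next. This is a plain-Python illustration, not the RORS Spark implementation; the two sample rules (rdfs:subClassOf type propagation and owl:sameAs symmetry) and the chosen class order are assumptions for demonstration only.

```python
# Minimal fixpoint sketch of class-ordered rule execution (not the actual RORS code).
# Triples are (subject, predicate, object) tuples held in a Python set; the real
# system holds them in Spark datasets and applies rules as distributed joins.

def type_rule(triples):
    """rdfs9-style rule: (x rdf:type C) + (C rdfs:subClassOf D) => (x rdf:type D)."""
    sub = {(s, o) for s, p, o in triples if p == "rdfs:subClassOf"}
    return {(x, "rdf:type", d)
            for x, p, c in triples if p == "rdf:type"
            for c2, d in sub if c2 == c}

def sameas_symmetry_rule(triples):
    """(x owl:sameAs y) => (y owl:sameAs x)."""
    return {(o, "owl:sameAs", s) for s, p, o in triples if p == "owl:sameAs"}

def run_class_to_fixpoint(triples, rules):
    """Apply one class of rules repeatedly until no new triples are derived."""
    while True:
        new = set().union(*(rule(triples) for rule in rules)) - triples
        if not new:
            return triples
        triples |= new

# Assumed execution order: type rules before sameAs rules (illustrative only).
rule_classes = [[type_rule], [sameas_symmetry_rule]]

triples = {
    ("alice", "rdf:type", "Student"),
    ("Student", "rdfs:subClassOf", "Person"),
    ("alice", "owl:sameAs", "a.smith"),
}
for rule_class in rule_classes:
    triples = run_class_to_fixpoint(triples, rule_class)

assert ("alice", "rdf:type", "Person") in triples
assert ("a.smith", "owl:sameAs", "alice") in triples
```

    Running each class to a fixpoint before moving on is what makes the ordering matter: a good order avoids re-deriving triples that a later class would have produced anyway.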

    Simplified Multiuser Detection for SCMA with Sum-Product Algorithm

    Sparse code multiple access (SCMA) is a novel non-orthogonal multiple access technique that fully exploits the shaping gain of multi-dimensional codewords. However, the lack of a simplified multiuser detection algorithm has prevented further implementation due to the inherently high computational complexity. In this paper, general SCMA detection algorithms based on the sum-product algorithm are elaborated. Two improved algorithms are then proposed, which simplify the detection structure and quantitatively curtail the number of exponent operations by working in the logarithm domain. Furthermore, to compare these detection algorithms fairly, we derive a theoretical expression for the average mutual information (AMI) of SCMA (SCMA-AMI) and employ a statistical method to calculate SCMA-AMI under a specific detection algorithm. Simulation results show that the performance is almost as good as that of the original message passing algorithm in terms of both BER and AMI, while the complexity is significantly decreased compared to the traditional Max-Log approximation method.
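
    The logarithm-domain simplification referred to above typically replaces the exact log-sum-exp in a sum-product message update with a maximum (the Max-Log, or Jacobian, approximation), which removes exponent operations entirely. The NumPy sketch below contrasts the two operations on an arbitrary vector of log-domain branch metrics; the numbers are illustrative, and this is a generic demonstration of the approximation rather than the paper's specific detector.

```python
import numpy as np

def log_sum_exp(metrics):
    """Exact log-domain combination: log(sum_i exp(m_i)), computed stably."""
    m = np.max(metrics)
    return m + np.log(np.sum(np.exp(metrics - m)))

def max_log(metrics):
    """Max-Log approximation: log(sum_i exp(m_i)) ~= max_i m_i (no exp calls)."""
    return np.max(metrics)

# Toy log-domain branch metrics for the candidate codewords at one resource node
# (illustrative numbers, not taken from the paper).
metrics = np.array([-1.2, -3.5, -0.4, -2.8])

exact = log_sum_exp(metrics)
approx = max_log(metrics)
print(f"exact log-sum-exp: {exact:.4f}")
print(f"max-log approx:    {approx:.4f}")
print(f"approximation gap: {exact - approx:.4f}")
```

    The exact value always exceeds the approximation by at most log(N) for N candidates, which is why Max-Log trades a small, bounded accuracy loss for a large reduction in per-message computation.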