Some Novel Applications of Explanation-Based Learning to Parsing Lexicalized Tree-Adjoining Grammars
In this paper we present some novel applications of the Explanation-Based Learning (EBL) technique to parsing Lexicalized Tree-Adjoining Grammars. The novel aspects are (a) immediate generalization of parses in the training set, (b) generalization over recursive structures, and (c) representation of generalized parses as Finite State Transducers. A highly impoverished parser called a "stapler" is also introduced. We present experimental results using EBL for different corpora and architectures to show the effectiveness of our approach.
Comment: uuencoded postscript file
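To make the EBL idea concrete, here is a minimal Python sketch of the general scheme this line of work builds on: cache a parse generalized to its part-of-speech signature and reuse it for new sentences with the same signature. All names (`EBLParser`, `full_parser`) are illustrative assumptions, not the paper's code, and the sketch omits the recursive-structure generalization and FST representation that are the paper's novel contributions.

```python
# Minimal sketch of the EBL-for-parsing idea: cache a parse generalized
# to its part-of-speech sequence, then reuse it for new sentences with
# the same POS signature, falling back to a full parser on a miss.
# All names here are illustrative, not the paper's actual code.

def pos_signature(tagged_sentence):
    """Key a sentence by its POS tags, abstracting away the words."""
    return tuple(tag for _word, tag in tagged_sentence)

class EBLParser:
    def __init__(self, full_parser):
        self.full_parser = full_parser      # expensive fallback parser
        self.cache = {}                     # POS signature -> generalized parse

    def parse(self, tagged_sentence):
        key = pos_signature(tagged_sentence)
        if key not in self.cache:           # miss: parse fully and generalize
            self.cache[key] = self.full_parser(tagged_sentence)
        return self.cache[key]              # hit: reuse the generalized parse
```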
Learning Moore Machines from Input-Output Traces
The problem of learning automata from example traces (but no equivalence or
membership queries) is fundamental in automata learning theory and practice. In
this paper we study this problem for finite state machines with inputs and
outputs, and in particular for Moore machines. We develop three algorithms for
solving this problem: (1) the PTAP algorithm, which transforms a set of
input-output traces into an incomplete Moore machine and then completes the
machine with self-loops; (2) the PRPNI algorithm, which uses the well-known
RPNI algorithm for automata learning to learn a product of automata encoding a
Moore machine; and (3) the MooreMI algorithm, which directly learns a Moore
machine using PTAP extended with state merging. We prove that MooreMI has the
fundamental identification in the limit property. We also compare the
algorithms experimentally in terms of the size of the learned machine and
several notions of accuracy, introduced in this paper. Finally, we compare with
OSTIA, an algorithm that learns a more general class of transducers, and find
that OSTIA generally does not learn a Moore machine, even when fed with a
characteristic sample.
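As a rough illustration of step (1), here is a sketch of a PTAP-style construction under assumed conventions: a trace pairs an input word with an output word one symbol longer, since a Moore machine also emits an output in its initial state. The data layout is an assumption, not the paper's code.

```python
# Hedged sketch of the PTAP idea from the abstract: fold input-output
# traces into a prefix-tree Moore machine, then complete it with
# self-loops. Data layout and names are assumptions.

class MooreMachine:
    def __init__(self, alphabet):
        self.alphabet = alphabet
        self.delta = {}      # (state, input) -> state
        self.output = {}     # state -> output symbol
        self.n_states = 1    # state 0 is the initial state

def ptap(traces, alphabet):
    """traces: list of (inputs, outputs) with len(outputs) == len(inputs) + 1."""
    m = MooreMachine(alphabet)
    for inputs, outputs in traces:
        state = 0
        m.output[state] = outputs[0]          # Moore output of the initial state
        for sym, out in zip(inputs, outputs[1:]):
            nxt = m.delta.get((state, sym))
            if nxt is None:                   # extend the prefix tree
                nxt = m.n_states
                m.n_states += 1
                m.delta[(state, sym)] = nxt
            m.output[nxt] = out               # assumes traces are consistent
            state = nxt
    for s in range(m.n_states):               # completion: add self-loops
        for sym in alphabet:
            m.delta.setdefault((s, sym), s)
    return m
```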
VQ-T: RNN Transducers using Vector-Quantized Prediction Network States
Beam search, which is the dominant ASR decoding algorithm for end-to-end
models, generates tree-structured hypotheses. However, recent studies have
shown that decoding with hypothesis merging can achieve a more efficient search
with comparable or better performance. The full context maintained by recurrent networks, however, is not compatible with hypothesis merging. We propose to use
vector-quantized long short-term memory units (VQ-LSTM) in the prediction
network of RNN transducers. By training the discrete representation jointly
with the ASR network, hypotheses can be actively merged for lattice generation.
Our experiments on the Switchboard corpus show that the proposed VQ RNN
transducers improve ASR performance over transducers with regular prediction
networks while also producing denser lattices with a very low oracle word error
rate (WER) for the same beam size. Additional language model rescoring
experiments also demonstrate the effectiveness of the proposed lattice
generation scheme.
Comment: Interspeech 2022 accepted paper
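A hedged sketch of the core quantization step that enables exact merging: snap a prediction-network state to its nearest codebook vector, so hypotheses whose states collapse to the same code can share a lattice node. Shapes and the codebook are illustrative; in the paper the codebook is trained jointly with the ASR network.

```python
import numpy as np

# Illustrative VQ step: quantize a prediction-network hidden state to its
# nearest codebook entry. Two beam hypotheses whose states map to the
# same code can then be merged exactly. Codebook and shapes are made up.

def quantize(state, codebook):
    """state: (d,) hidden vector; codebook: (K, d) learned code vectors."""
    idx = int(np.argmin(np.sum((codebook - state) ** 2, axis=1)))
    return idx, codebook[idx]          # discrete code + quantized state

# Hypotheses sharing the same discrete code can share a lattice node.
```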
Querying XML data streams from wireless sensor networks: an evaluation of query engines
As the deployment of wireless sensor networks increases and their application domains widen, the opportunity for effective use of XML filtering and streaming query engines is ever more present. XML filtering engines aim to provide efficient real-time querying of streaming XML-encoded data. This paper provides a detailed analysis of several such engines, focusing on the technology involved, their capabilities, their support for XPath, and their performance. Our experimental evaluation identifies which filtering engine is best suited to process a given query based on its properties. Such metrics are important in establishing the best approach to filtering XML streams on the fly.
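For a flavour of what on-the-fly filtering looks like, here is a small Python sketch using the standard library's streaming parser; the element names and predicate are invented for the example, and the evaluated engines support far richer XPath than this.

```python
import xml.etree.ElementTree as ET

# Toy stream filter in the spirit of the engines evaluated here: match
# elements against a predicate during a streaming parse, discarding each
# element as soon as it has been seen. Names are illustrative only.

def stream_filter(source, tag, predicate):
    """Yield attributes of matching elements from a streaming XML parse."""
    for _event, elem in ET.iterparse(source, events=("end",)):
        if elem.tag == tag and predicate(elem):
            yield elem.attrib.copy()
        elem.clear()                 # keep memory bounded on long streams

# e.g. sensor readings above a threshold (hypothetical schema):
# hot = stream_filter("readings.xml", "reading",
#                     lambda e: float(e.get("temp", "0")) > 30.0)
```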
Graph-to-Sequence Learning using Gated Graph Neural Networks
Many NLP applications can be framed as a graph-to-sequence learning problem.
Previous work proposing neural architectures in this setting obtained promising results compared to grammar-based approaches but still relies on linearisation heuristics and/or standard recurrent networks to achieve the best performance.
In this work, we propose a new model that encodes the full structural
information contained in the graph. Our architecture couples the recently
proposed Gated Graph Neural Networks with an input transformation that allows
nodes and edges to have their own hidden representations, while tackling the
parameter explosion problem present in previous work. Experimental results show
that our model outperforms strong baselines in generation from AMR graphs and
syntax-based neural machine translation.
Comment: ACL 2018
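A minimal numpy sketch of one GGNN propagation step, for orientation: nodes aggregate neighbour messages and update their states through a GRU-style gate. The weights here are placeholders, and the sketch omits this paper's contribution of giving edges their own hidden representations.

```python
import numpy as np

# One Gated Graph Neural Network propagation step (simplified): message
# passing over the adjacency matrix followed by a GRU-style node update.
# All weight matrices are random placeholders; the real model learns them.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ggnn_step(H, A, Wz, Uz, Wr, Ur, Wh, Uh):
    """H: (n, d) node states; A: (n, n) adjacency (row-normalised)."""
    M = A @ H                                   # aggregate neighbour messages
    z = sigmoid(M @ Wz + H @ Uz)                # update gate
    r = sigmoid(M @ Wr + H @ Ur)                # reset gate
    h = np.tanh(M @ Wh + (r * H) @ Uh)          # candidate state
    return (1 - z) * H + z * h                  # gated node update
```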
FPGA-Based Low-Power Speech Recognition with Recurrent Neural Networks
In this paper, a neural network based real-time speech recognition (SR)
system is developed using an FPGA for very low-power operation. The implemented
system employs two recurrent neural networks (RNNs); one is a
speech-to-character RNN for acoustic modeling (AM) and the other is for
character-level language modeling (LM). The system also employs a statistical
word-level LM to improve the recognition accuracy. The results of the AM, the
character-level LM, and the word-level LM are combined using a fairly simple
N-best search algorithm instead of the hidden Markov model (HMM) based network.
The RNNs are implemented using massively parallel processing elements (PEs) for
low latency and high throughput. The weights are quantized to 6 bits to store
all of them in the on-chip memory of an FPGA. The proposed algorithm is
implemented on a Xilinx XC7Z045, and the system can operate much faster than
real-time.
Comment: Accepted to SiPS 2016
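As an illustration of the 6-bit weight quantization mentioned in the abstract, here is a sketch of a symmetric uniform quantizer; the symmetric scheme is an assumption, and the paper's exact quantizer may differ.

```python
import numpy as np

# Hedged sketch of uniform 6-bit weight quantization: map each weight to
# one of 2**6 levels so the whole model fits in on-chip FPGA memory.
# The symmetric uniform scheme is an assumption about the paper's method.

def quantize_weights(w, bits=6):
    levels = 2 ** (bits - 1) - 1               # symmetric range, e.g. +/-31
    scale = np.max(np.abs(w)) / levels
    q = np.clip(np.round(w / scale), -levels - 1, levels)
    return q.astype(np.int8), scale            # 6-bit codes fit in int8

# Dequantize at compute time (e.g. in the PE datapath): w_hat = q * scale
```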