Determinantal and eigenvalue inequalities for matrices with numerical ranges in a sector
Let $A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} \in M_n$ be such that the numerical range of $A$
lies in the set $\{e^{i\varphi} z \in \mathbb{C} : |\Im z| \le (\Re z) \tan \alpha\}$ for some angles $\varphi$ and $\alpha$. We
obtain the optimal containment region for the generalized eigenvalue $\lambda$
satisfying
$$\lambda \begin{pmatrix} A_{11} & 0 \\ 0 & A_{22} \end{pmatrix} x = \begin{pmatrix} 0 & A_{12} \\ A_{21} & 0 \end{pmatrix} x \quad \text{for some nonzero } x \in \mathbb{C}^n,$$
and the optimal eigenvalue containment region of the corresponding matrix in
case $A_{11}$ and $A_{22}$ are invertible. From this result, one obtains
determinantal and eigenvalue inequalities for $A$, in particular when $A$ is an
accretive-dissipative matrix. These affirm some conjectures of Drury and Lin.
Comment: 6 pages, to appear in Journal of Mathematical Analysis and Applications
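The generalized eigenvalue problem above is easy to explore numerically. A minimal sketch (our own illustration, not from the paper) that solves $\lambda B x = C x$ for a random $4\times 4$ block matrix, where $B$ is the block-diagonal part and $C$ the off-diagonal part; the block sizes and random data are arbitrary choices:

```python
# Illustrative only: generalized eigenvalues lambda satisfying
#   lambda * diag(A11, A22) x = offdiag(A12, A21) x
# for a random 4x4 complex matrix partitioned into 2x2 blocks.
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A11, A12 = A[:2, :2], A[:2, 2:]
A21, A22 = A[2:, :2], A[2:, 2:]

Z = np.zeros((2, 2))
B = np.block([[A11, Z], [Z, A22]])  # block-diagonal part
C = np.block([[Z, A12], [A21, Z]])  # off-diagonal part

# scipy.linalg.eig(C, B) solves the pencil C x = lambda B x
lam, V = eig(C, B)
print(lam)
```

With $A_{11}$ and $A_{22}$ invertible (almost surely true for random data), this is equivalent to the ordinary eigenvalue problem for $B^{-1}C$.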
Investigating Linguistic Pattern Ordering in Hierarchical Natural Language Generation
Natural language generation (NLG) is a critical component in spoken dialogue
systems, and it can be divided into two phases: (1) sentence planning, which
decides the overall sentence structure, and (2) surface realization, which
determines specific word forms and flattens the sentence structure into a
string. With the rise
of deep learning, most modern NLG models are based on a sequence-to-sequence
(seq2seq) model, which basically contains an encoder-decoder structure; these
NLG models generate sentences from scratch by jointly optimizing sentence
planning and surface realization. However, such a simple encoder-decoder
architecture usually fails to generate complex and long sentences, because the
decoder has difficulty learning all grammar and diction knowledge well. This
paper introduces an NLG model with a hierarchical attentional decoder, where
the hierarchy focuses on leveraging linguistic knowledge in a specific order.
The experiments show that the proposed method significantly outperforms the
traditional seq2seq model with a smaller model size, and the design of the
hierarchical attentional decoder can be applied to various NLG systems.
Furthermore, different generation strategies based on linguistic patterns are
investigated and analyzed in order to guide future NLG research work.
Comment: accepted by the 7th IEEE Workshop on Spoken Language Technology (SLT
2018). arXiv admin note: text overlap with arXiv:1808.0274
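The hierarchical attentional decoder builds on standard seq2seq attention. A minimal numpy sketch of one dot-product attention step inside a decoder (our own illustration with made-up shapes and names, not the paper's model):

```python
# Illustrative only: one dot-product attention step of a seq2seq decoder.
# enc_states, dec_state and all shapes are our own assumptions.
import numpy as np

def attention(dec_state, enc_states):
    """Return context vector and attention weights for one decoder step."""
    scores = enc_states @ dec_state            # (T,) alignment scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax over encoder positions
    context = weights @ enc_states             # (d,) weighted sum of states
    return context, weights

rng = np.random.default_rng(0)
enc_states = rng.standard_normal((5, 8))       # T=5 encoder states, d=8
dec_state = rng.standard_normal(8)             # current decoder hidden state
context, weights = attention(dec_state, enc_states)
```

In a hierarchical decoder, several such decoding layers are stacked, each attending over the previous layer's outputs so that linguistic knowledge can be injected in a chosen order.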
Canonical forms, higher rank numerical range, convexity, totally isotropic subspace, matrix equations
Results on matrix canonical forms are used to give a complete description of
the higher rank numerical range of matrices arising from the study of quantum
error correction. It is shown that the set can be obtained as the intersection
of closed half planes (of complex numbers). As a result, it is always a convex
set in $\mathbb{C}$. Moreover, the higher rank numerical range of a normal
matrix is a convex polygon determined by the eigenvalues. These two
consequences confirm the conjectures of Choi et al. on the subject. In
addition, the results are used to derive a formula for the optimal upper bound
for the dimension of a totally isotropic subspace of a square matrix, and
verify the solvability of certain matrix equations.
Comment: 10 pages. To appear in Proceedings of the American Mathematical Society.
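The half-plane description yields a simple numerical membership test. A sketch (our own illustration, assuming the half planes take the standard form $\Re(e^{it}\mu) \le \lambda_k$ of the Hermitian part of $e^{it}A$, with $\lambda_k$ the $k$-th largest eigenvalue), sampling the angle on a finite grid:

```python
# Illustrative only: approximate membership test for the rank-k numerical
# range Lambda_k(A), using the intersection-of-half-planes description with
# the half-plane normals sampled on a finite angular grid.
import numpy as np

def in_rank_k_range(mu, A, k, num_angles=360):
    """Check Re(e^{it} mu) <= lambda_k(Hermitian part of e^{it} A) for all t."""
    for t in np.linspace(0.0, 2 * np.pi, num_angles, endpoint=False):
        M = np.exp(1j * t) * A
        H = (M + M.conj().T) / 2                      # Hermitian part
        evals = np.sort(np.linalg.eigvalsh(H))[::-1]  # descending eigenvalues
        if (np.exp(1j * t) * mu).real > evals[k - 1] + 1e-9:
            return False
    return True

# For a Hermitian matrix, Lambda_k(A) is the real interval
# [lambda_{n-k+1}, lambda_k] (eigenvalues in decreasing order):
# here [2, 3] for k = 2.
A = np.diag([1.0, 2.0, 3.0, 4.0])
print(in_rank_k_range(2.5, A, k=2))   # inside
print(in_rank_k_range(0.5, A, k=2))   # outside
```

For a normal matrix the same intersection collapses to the convex polygon spanned by $k$-fold eigenvalue selections, consistent with the result described above.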