
    Large Scale Constrained Linear Regression Revisited: Faster Algorithms via Preconditioning

    In this paper, we revisit the large-scale constrained linear regression problem and propose faster methods based on recent developments in sketching and optimization. Our algorithms combine (accelerated) mini-batch SGD with a new method called two-step preconditioning to achieve an approximate solution with a time complexity lower than that of the state-of-the-art techniques for the low-precision case. Our idea also extends to the high-precision case, yielding an alternative implementation of the Iterative Hessian Sketch (IHS) method with significantly improved time complexity. Experiments on benchmark and synthetic datasets suggest that our methods indeed outperform existing ones considerably in both the low- and high-precision cases. Comment: Appears in AAAI-1
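    A minimal NumPy sketch of the general recipe the abstract describes: sketch the design matrix to build a preconditioner, then run projected mini-batch SGD in the preconditioned coordinates. The Gaussian sketch, the step-size schedule, the Euclidean projection step, and the non-negativity constraint are illustrative assumptions, not the paper's exact two-step construction.

```python
# Hedged sketch: sketch-based preconditioning + projected mini-batch SGD
# for constrained least squares  min_{x in C} ||Ax - b||^2.
import numpy as np

def precondition(A, sketch_size, rng):
    """Right preconditioner R^{-1} from a QR factorization of the sketched matrix S A."""
    n, d = A.shape
    S = rng.standard_normal((sketch_size, n)) / np.sqrt(sketch_size)  # Gaussian sketch (illustrative)
    _, R = np.linalg.qr(S @ A)          # S A = Q R, so A R^{-1} is approximately orthonormal
    return np.linalg.inv(R)

def projected_sgd(A, b, R_inv, project, steps=2000, batch_size=32, lr=0.1, rng=None):
    """Mini-batch SGD on the preconditioned variable y = R x, projecting x onto the constraint set."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n, d = A.shape
    y = np.zeros(d)
    for t in range(steps):
        idx = rng.integers(0, n, size=batch_size)
        Ab = A[idx] @ R_inv                                   # preconditioned mini-batch
        grad = (n / batch_size) * Ab.T @ (Ab @ y - b[idx])    # unbiased estimate of the full gradient
        y = y - lr / np.sqrt(t + 1.0) * grad
        x = project(R_inv @ y)          # Euclidean projection in the original coordinates (a simplification)
        y = np.linalg.solve(R_inv, x)   # map the projected point back to y = R x
    return R_inv @ y

# Toy usage with a non-negativity constraint C = {x >= 0}
rng = np.random.default_rng(1)
A = rng.standard_normal((10000, 50))
x_true = np.abs(rng.standard_normal(50))
b = A @ x_true + 0.01 * rng.standard_normal(10000)
R_inv = precondition(A, sketch_size=500, rng=rng)
x_hat = projected_sgd(A, b, R_inv, project=lambda x: np.maximum(x, 0.0), rng=rng)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

    The point of the preconditioning step is that A R^{-1} is close to orthonormal, so the stochastic iterations behave as if the problem were well conditioned; the sketch size and step-size schedule above are only placeholder choices.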

    An Affect-Rich Neural Conversational Model with Biased Attention and Weighted Cross-Entropy Loss

    Affect conveys important implicit information in human communication. Having the capability to correctly express affect during human-machine conversations is one of the major milestones in artificial intelligence. In recent years, extensive research on open-domain neural conversational models has been conducted. However, embedding affect into such models is still underexplored. In this paper, we propose an end-to-end affect-rich open-domain neural conversational model that produces responses not only appropriate in syntax and semantics, but also rich in affect. Our model extends the Seq2Seq model and adopts VAD (Valence, Arousal and Dominance) affective notations to embed each word with affect. In addition, our model considers the effect of negators and intensifiers via a novel affective attention mechanism, which biases attention towards affect-rich words in input sentences. Lastly, we train our model with an affect-incorporated objective function to encourage the generation of affect-rich words in the output responses. Evaluations based on both perplexity and human judgments show that our model outperforms the state-of-the-art baseline model of comparable size in producing natural and affect-rich responses. Comment: AAAI-1
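    A minimal NumPy sketch of the idea of biasing attention towards affect-rich source words: the content score is augmented with a term proportional to each word's distance from a neutral VAD point. The neutral point, the additive form of the bias, and the weight lambda_affect are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch: attention weights biased toward affect-rich source words.
import numpy as np

def affective_attention(decoder_state, encoder_states, vad, lambda_affect=1.0,
                        vad_neutral=np.array([5.0, 5.0, 5.0])):
    """Return attention weights over source positions.

    decoder_state : (d,)   current decoder hidden state
    encoder_states: (T, d) encoder hidden states
    vad           : (T, 3) Valence/Arousal/Dominance vector per source word
    The neutral VAD point and lambda_affect are illustrative assumptions.
    """
    content_score = encoder_states @ decoder_state                 # standard dot-product score
    affect_strength = np.linalg.norm(vad - vad_neutral, axis=1)    # distance from the neutral point
    score = content_score + lambda_affect * affect_strength        # bias toward affect-rich words
    score -= score.max()                                           # numerical stability for softmax
    weights = np.exp(score)
    return weights / weights.sum()

# Toy usage: three source words, the last one affect-rich
rng = np.random.default_rng(0)
enc = rng.standard_normal((3, 4))
dec = rng.standard_normal(4)
vad = np.array([[5.0, 5.0, 5.0],    # neutral word
                [5.2, 5.1, 5.0],    # near-neutral word
                [8.5, 7.0, 6.5]])   # affect-rich word
print(affective_attention(dec, enc, vad))
```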

    Analysis of the $D\bar{D}^*K$ system with QCD sum rules

    In this article, we construct the color singlet-singlet-singlet interpolating current with $I\left(J^P\right)=\frac{3}{2}\left(1^-\right)$ to study the $D\bar{D}^*K$ system through the QCD sum rules approach. In calculations, we consider the contributions of the vacuum condensates up to dimension-16 and employ the formula $\mu=\sqrt{M_{X/Y/Z}^{2}-\left(2{\mathbb{M}}_{c}\right)^{2}}$ to choose the optimal energy scale of the QCD spectral density. The numerical result $M_Z=4.71_{-0.11}^{+0.19}\,\rm{GeV}$ indicates that there exists a resonance state $Z$ lying above the $D\bar{D}^*K$ threshold to saturate the QCD sum rules. This resonance state $Z$ may be found by focusing on the channel $J/\psi \pi K$ of the decay $B\longrightarrow J/\psi \pi \pi K$ in the future. Comment: 9 pages, 4 figures
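    For a quick numerical feel of the energy-scale formula quoted above, here is a small Python check; the effective charm-quark mass is an assumed illustrative input, not a value taken from this paper.

```python
# Hedged check of the energy-scale formula  mu = sqrt(M_Z^2 - (2 * M_c_eff)^2).
# M_Z = 4.71 GeV is the central mass quoted in the abstract; M_c_eff below is
# an assumed illustrative effective charm-quark mass, not the paper's value.
from math import sqrt

def energy_scale(M_state, M_c_eff):
    """Optimal energy scale mu of the QCD spectral density, in GeV."""
    return sqrt(M_state**2 - (2.0 * M_c_eff)**2)

M_Z = 4.71        # GeV, central value from the abstract
M_c_eff = 1.82    # GeV, assumed effective charm-quark mass
print(f"mu ~ {energy_scale(M_Z, M_c_eff):.2f} GeV")   # ~ 2.99 GeV for these inputs
```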