36,592 research outputs found

    On a Diophantine equation with five prime variables

    Let $[x]$ denote the integral part of the real number $x$, and $N$ be a sufficiently large integer. In this paper, it is proved that, for $1<c<\frac{4109054}{1999527}$, $c\not=2$, the Diophantine equation $N=[p_1^c]+[p_2^c]+[p_3^c]+[p_4^c]+[p_5^c]$ is solvable in prime variables $p_1,p_2,p_3,p_4,p_5$.
    Comment: 17 pages
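    As a purely illustrative small instance of such a representation (not taken from the paper, whose theorem concerns sufficiently large $N$), take $c=3/2$, which satisfies $1<c<\frac{4109054}{1999527}\approx2.055$ and $c\not=2$; then
    \begin{equation*}
    [2^{3/2}]+[3^{3/2}]+[5^{3/2}]+[7^{3/2}]+[11^{3/2}]=2+5+11+18+36=72,
    \end{equation*}
    so $N=72$ is represented with the prime variables $2,3,5,7,11$.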

    On the Waring-Goldbach Problem for One Square and Five Cubes

    Let $\mathcal{P}_r$ denote an almost-prime with at most $r$ prime factors, counted according to multiplicity. In this paper, it is proved that for every sufficiently large even integer $N$, the equation
    \begin{equation*}
    N=x^2+p_1^3+p_2^3+p_3^3+p_4^3+p_5^3
    \end{equation*}
    is solvable with $x$ being an almost-prime $\mathcal{P}_6$ and the other variables primes. This result constitutes an improvement upon that of Cai, who obtained the same conclusion, but with $\mathcal{P}_{36}$ in place of $\mathcal{P}_6$.
    Comment: 16 pages. arXiv admin note: substantial text overlap with arXiv:1708.0448
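    For a purely illustrative identity of the same shape (not taken from the paper, whose theorem concerns sufficiently large even $N$), note that $x=30=2\cdot3\cdot5$ has three prime factors and is therefore a $\mathcal{P}_6$, and
    \begin{equation*}
    940=30^2+2^3+2^3+2^3+2^3+2^3
    \end{equation*}
    is an even integer of the required form.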

    On Two Diophantine Inequalities Over Primes

    Let $1<c<37/18$, $c\neq2$, and $N$ be a sufficiently large real number. In this paper, we prove that, for almost all $R\in(N,2N]$, the Diophantine inequality $|p_1^c+p_2^c+p_3^c-R|<\log^{-1}N$ is solvable in primes $p_1,\,p_2,\,p_3$. Moreover, we also investigate the problem of six primes and prove that the Diophantine inequality $|p_1^c+p_2^c+p_3^c+p_4^c+p_5^c+p_6^c-N|<\log^{-1}N$ is solvable in primes $p_1,\,p_2,\,p_3,\,p_4,\,p_5,\,p_6$ for sufficiently large real number $N$.
    Comment: 21 pages
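    The three-prime inequality can be explored numerically. The sketch below is not from the paper; the choices of $c$, $N$, $R$ and the prime bound are arbitrary, and sympy is an assumed dependency. It enumerates prime triples whose $c$-th powers sum into $(N,2N]$ and checks how close a sample $R$ comes to such a sum.

    # Numerical illustration: list the values p1^c + p2^c + p3^c falling in (N, 2N];
    # any R within 1/log N of one of them satisfies |p1^c + p2^c + p3^c - R| < 1/log N.
    from itertools import combinations_with_replacement
    from math import log
    from sympy import primerange        # assumed dependency; any prime sieve works

    c, N = 1.9, 1000.0                  # illustrative choices with 1 < c < 37/18, c != 2
    R = 1500.0                          # an arbitrary sample point in (N, 2N]
    window = 1 / log(N)

    primes = list(primerange(2, int((2 * N) ** (1 / c)) + 1))
    sums = [p1**c + p2**c + p3**c
            for p1, p2, p3 in combinations_with_replacement(primes, 3)
            if N < p1**c + p2**c + p3**c <= 2 * N]
    closest = min(sums, key=lambda s: abs(s - R))
    print(f"nearest triple sum to R = {R}: {closest:.4f}; "
          f"within 1/log N = {window:.4f}: {abs(closest - R) < window}")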

    On the Fourth Power Moment of the Error Term for the Divisor Problem with Congruence Conditions

    Let $d(n;\ell_1,M_1,\ell_2,M_2)$ denote the number of factorizations $n=n_1n_2$, where each of the factors $n_i\in\mathbb{N}$ belongs to a prescribed congruence class $\ell_i\bmod M_i\ (i=1,2)$. Let $\Delta(x;\ell_1,M_1,\ell_2,M_2)$ be the error term of the asymptotic formula of $\sum_{n\leqslant x}d(n;\ell_1,M_1,\ell_2,M_2)$. In this paper, we establish an asymptotic formula of the fourth power moment of $\Delta(M_1M_2x;\ell_1,M_1,\ell_2,M_2)$ and prove that
    \begin{equation*}
    \int_1^T\Delta^4(M_1M_2x;\ell_1,M_1,\ell_2,M_2)\,\mathrm{d}x=\frac{1}{32\pi^4}C_4\Big(\frac{\ell_1}{M_1},\frac{\ell_2}{M_2}\Big)T^2+O(T^{2-\vartheta_4+\varepsilon}),
    \end{equation*}
    with $\vartheta_4=1/8$, which improves the previous value $\theta_4=3/28$ of K. Liu.
    Comment: 21 pages
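    The counting function can be made concrete by a direct enumeration of divisors. The short sketch below is purely definitional; the function name and the sample parameters are arbitrary and not taken from the paper.

    # Count factorizations n = n1 * n2 with n1 ≡ l1 (mod M1) and n2 ≡ l2 (mod M2).
    def d(n: int, l1: int, M1: int, l2: int, M2: int) -> int:
        count = 0
        for n1 in range(1, n + 1):
            if n % n1 == 0:
                n2 = n // n1
                if n1 % M1 == l1 % M1 and n2 % M2 == l2 % M2:
                    count += 1
        return count

    # Example: ordered divisor pairs of 36 with n1 ≡ 1 (mod 4) and n2 ≡ 0 (mod 3).
    print(d(36, 1, 4, 0, 3))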

    Waring-Goldbach Problem: One Square, Four Cubes and Higher Powers

    Let $\mathcal{P}_r$ denote an almost-prime with at most $r$ prime factors, counted according to multiplicity. In this paper, it is proved that, for $12\leqslant b\leqslant 35$ and for every sufficiently large odd integer $N$, the equation
    \begin{equation*}
    N=x^2+p_1^3+p_2^3+p_3^3+p_4^3+p_5^4+p_6^b
    \end{equation*}
    is solvable with $x$ being an almost-prime $\mathcal{P}_{r(b)}$ and the other variables primes, where $r(b)$ is defined in the Theorem. This result constitutes an improvement upon that of L\"u and Mu.
    Comment: 19 pages. arXiv admin note: substantial text overlap with arXiv:1707.0780
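    As a purely illustrative identity of the same shape (not taken from the paper, whose theorem concerns sufficiently large odd $N$ and the specific $r(b)$ given there), take $b=12$ and $x=3$:
    \begin{equation*}
    4153=3^2+2^3+2^3+2^3+2^3+2^4+2^{12}.
    \end{equation*}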

    A Remark on the Piatetski-Shapiro-Hua Theorem

    In this paper, we prove that for any fixed $205/243<\gamma\leqslant1$, every sufficiently large $N$ satisfying $N\equiv 5\pmod{24}$ can be represented as the sum of five squares of primes with one prime in $\mathcal{P}_\gamma$, which improves the previous result of Zhang and Zhai.
    Comment: 5 pages
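    The congruence condition is natural: every prime $p\geqslant5$ is a unit modulo $24$, and every unit modulo $24$ squares to $1$, so for five such primes
    \begin{equation*}
    p_i^2\equiv1\pmod{24}\quad(i=1,\dots,5),\qquad p_1^2+p_2^2+p_3^2+p_4^2+p_5^2\equiv5\pmod{24}.
    \end{equation*}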

    Study of the elastocaloric effect and mechanical behavior for the NiTi shape memory alloys

    The NiTi shape memory alloy exhibited excellent superelastic properties and a large elastocaloric effect. Large temperature changes of 30 K upon loading and -19 K upon unloading were obtained at room temperature, which were higher than those of other NiTi-based materials and among the highest values reported for elastocaloric materials. The asymmetry of the measured temperature changes between the loading and unloading processes was ascribed to frictional dissipation. The large temperature changes originated from the large entropy change during the stress-induced martensite transformation (MT) and the reverse MT. A large material coefficient of performance (COPmater) of 11.7 was obtained, which decreased with increasing applied strain. These results are very attractive for solid-state cooling, which has the potential to replace vapor-compression refrigeration technologies.

    Accurate front capturing asymptotic preserving scheme for nonlinear gray radiative transfer equation

    We develop an asymptotic preserving scheme for the gray radiative transfer equation. Two asymptotic regimes are considered: one is a diffusive regime described by a nonlinear diffusion equation for the material temperature; the other is a free streaming regime with zero opacity. To alleviate the restriction on the time step and capture the correct front propagation in the diffusion limit, an implicit treatment is crucial. However, this often involves a large-scale nonlinear iterative solver, as the spatial and angular dimensions are coupled. Our idea is to introduce an auxiliary variable that leads to a "redundant" system, which is then solved with a three-stage update: prediction, correction, and projection. The benefit of this approach is that the implicit system is local to each spatial element and independent of the angular variable, and thus only requires a scalar Newton solver. We also introduce a spatial discretization with a compact stencil based on even-odd decomposition. Our method preserves both the nonlinear diffusion limit with the correct front propagation speed and the free streaming limit, with a hyperbolic CFL condition.
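    The even-odd (parity) decomposition mentioned above can be illustrated in isolation. The sketch below covers only the parity split for a discrete-ordinates intensity in 1D slab geometry with a symmetric quadrature; all names and sizes are illustrative, and it is not the paper's prediction-correction-projection scheme.

    # Even-odd parity split of an angular intensity psi(x_i, mu_j).
    import numpy as np

    nx, n_mu = 64, 8
    mu, w = np.polynomial.legendre.leggauss(n_mu)   # symmetric quadrature: mu[::-1] == -mu
    psi = np.random.rand(nx, n_mu)                  # stand-in angular intensity

    psi_reflected = psi[:, ::-1]                    # psi(x, -mu), since the nodes are symmetric
    psi_even = 0.5 * (psi + psi_reflected)          # even part: unchanged under mu -> -mu
    psi_odd = 0.5 * (psi - psi_reflected)           # odd part: changes sign under mu -> -mu

    assert np.allclose(psi_even + psi_odd, psi)     # the split reconstructs psi exactly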

    Syntax-Enhanced Neural Machine Translation with Syntax-Aware Word Representations

    Syntax has been demonstrated to be highly effective in neural machine translation (NMT). Previous NMT models integrate syntax by representing 1-best tree outputs from a well-trained parsing system, e.g., the representative Tree-RNN and Tree-Linearization methods, which may suffer from error propagation. In this work, we propose a novel method to integrate source-side syntax implicitly for NMT. The basic idea is to use the intermediate hidden representations of a well-trained end-to-end dependency parser, which are referred to as syntax-aware word representations (SAWRs). Then, we simply concatenate such SAWRs with ordinary word embeddings to enhance basic NMT models. The method can be straightforwardly integrated into the widely-used sequence-to-sequence (Seq2Seq) NMT models. We start with a representative RNN-based Seq2Seq baseline system and test the effectiveness of our proposed method on two benchmark datasets, for the Chinese-English and English-Vietnamese translation tasks, respectively. Experimental results show that the proposed approach brings significant BLEU score improvements over the baseline on the two datasets: 1.74 points for Chinese-English translation and 0.80 points for English-Vietnamese translation, respectively. In addition, the approach also outperforms the explicit Tree-RNN and Tree-Linearization methods.
    Comment: NAACL 201
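    The core idea, concatenating SAWRs with ordinary word embeddings before the encoder, is easy to sketch. The snippet below uses numpy stand-ins; the dimensions and variable names are hypothetical and not taken from the authors' implementation.

    # Per-token concatenation of word embeddings with parser hidden states (SAWRs).
    import numpy as np

    seq_len, d_word, d_sawr = 10, 512, 800
    word_embeddings = np.random.rand(seq_len, d_word)   # ordinary word embeddings
    sawrs = np.random.rand(seq_len, d_sawr)             # dependency-parser hidden states

    encoder_input = np.concatenate([word_embeddings, sawrs], axis=-1)
    assert encoder_input.shape == (seq_len, d_word + d_sawr)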

    Jointly Learning Structured Analysis Discriminative Dictionary and Analysis Multiclass Classifier

    In this paper, we propose an analysis-mechanism-based structured Analysis Discriminative Dictionary Learning (ADDL) framework. ADDL seamlessly integrates analysis discriminative dictionary learning, analysis representation, and analysis classifier training into a unified model. The applied analysis mechanism ensures that the learned dictionaries, representations, and linear classifiers for different classes are as independent and discriminative as possible. The dictionary is obtained by minimizing a reconstruction error and an analytical incoherence-promoting term that encourages the sub-dictionaries associated with different classes to be independent. To obtain the representation coefficients, ADDL imposes a sparse l2,1-norm constraint on the coding coefficients instead of using the l0- or l1-norm, since the l0- or l1-norm constraint applied in most existing DL criteria makes the training phase time-consuming. The code-extraction projection, which bridges the data with the sparse codes by extracting special features from the given samples, is calculated by minimizing a sparse-code approximation term. We then compute a linear classifier based on the approximated sparse codes through an analysis mechanism, so that the classification and representation powers are considered simultaneously. Thus, the classification stage of our model is very efficient, because it avoids the extra time-consuming sparse reconstruction process with the trained dictionary for each new test sample that most existing DL algorithms require. Simulations on real image databases demonstrate that our ADDL model obtains superior performance over other state-of-the-art methods.
    Comment: Accepted by IEEE TNNL
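    The efficiency claim at test time, classification via two matrix products rather than a per-sample sparse-coding solve, can be sketched compactly. In the snippet below the shapes and names are hypothetical and not the authors' code: l21_norm is the row-wise l2,1-norm used as the sparsity penalty, P plays the role of the learned code-extraction projection, and W the analysis classifier.

    # l2,1-norm and the test-time classification step of an analysis-style model.
    import numpy as np

    def l21_norm(A: np.ndarray) -> float:
        """Sum of the l2 norms of the rows of A (the l2,1-norm)."""
        return float(np.sum(np.linalg.norm(A, axis=1)))

    d, k, n_classes = 64, 100, 10
    P = np.random.rand(k, d)            # learned code-extraction projection
    W = np.random.rand(n_classes, k)    # learned analysis classifier
    x = np.random.rand(d)               # a new test sample

    codes = P @ x                       # approximate sparse codes: no per-sample optimization
    label = int(np.argmax(W @ codes))   # predicted class index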