On a Diophantine equation with five prime variables
Let $[t]$ denote the integral part of the real number $t$, and let $N$ be a
sufficiently large integer. In this paper, it is proved that, for the exponent
$c$ in an explicit range, the Diophantine equation
$N=[p_1^c]+[p_2^c]+[p_3^c]+[p_4^c]+[p_5^c]$ is solvable in prime variables
$p_1,\dots,p_5$. Comment: 17 pages
On the Waring-Goldbach Problem for One Square and Five Cubes
Let $\mathcal{P}_r$ denote an almost-prime with at most $r$ prime factors,
counted according to multiplicity. In this paper, it is proved that for every
sufficiently large even integer $N$, the equation \begin{equation*}
N=x^2+p_1^3+p_2^3+p_3^3+p_4^3+p_5^3 \end{equation*} is solvable with $x$
being an almost-prime $\mathcal{P}_r$ and the other variables primes. This
result constitutes an improvement upon that of Cai, who obtained the same
conclusion but with a weaker almost-prime condition in place of
$\mathcal{P}_r$. Comment: 16 pages. arXiv admin note: substantial text overlap with
arXiv:1708.0448
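The almost-prime condition and the shape of the representation $N=x^2+p_1^3+\cdots+p_5^3$ can be illustrated numerically. The following is a brute-force sketch for tiny $N$ only (all function names are mine; the paper's actual argument uses the circle method and sieve theory, and the admissible value of $r$ is not reproduced here):

```python
from itertools import combinations_with_replacement

def big_omega(n):
    """Omega(n): number of prime factors of n counted with multiplicity."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return count

def is_prime(n):
    return n >= 2 and big_omega(n) == 1

def represent(N, r):
    """Brute-force search for N = x^2 + p1^3 + ... + p5^3 with the pi prime
    and x an almost-prime with at most r prime factors.  Tiny N only."""
    primes = [p for p in range(2, int(round(N ** (1 / 3))) + 2) if is_prime(p)]
    for ps in combinations_with_replacement(primes, 5):
        rest = N - sum(p ** 3 for p in ps)
        if rest < 4:          # x >= 2, so x^2 >= 4
            continue
        x = int(round(rest ** 0.5))
        if x * x == rest and big_omega(x) <= r:
            return x, ps
    return None
```

For example, `represent(44, 6)` finds $44 = 2^2 + 2^3 + 2^3 + 2^3 + 2^3 + 2^3$.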
On Two Diophantine Inequalities Over Primes
Let the exponent $c$ lie in an explicit range, and let $N$ be a sufficiently
large real number. In this paper, we prove that, for almost all admissible
$N$, the Diophantine inequality
$|p_1^c+p_2^c+p_3^c+p_4^c+p_5^c-N|<\varepsilon$ is solvable in primes
$p_1,\dots,p_5$ for a suitable small $\varepsilon>0$.
Moreover, we also investigate the analogous problem with six primes and prove
that the Diophantine inequality $|p_1^c+\cdots+p_6^c-N|<\varepsilon$ is
solvable in primes $p_1,\dots,p_6$ for every sufficiently large
real number $N$. Comment: 21 pages
Waring-Goldbach Problem: One Square, Four Cubes and Higher Powers
Let $\mathcal{P}_r$ denote an almost-prime with at most $r$ prime factors,
counted according to multiplicity. In this paper, it is proved that, for the
exponent $b$ in an explicit range and for every sufficiently large odd integer $N$,
the equation \begin{equation*}
N=x^2+p_1^3+p_2^3+p_3^3+p_4^3+p_5^4+p_6^b \end{equation*} is solvable with
$x$ being an almost-prime $\mathcal{P}_r$ and the other variables primes,
where $r$ is defined in the Theorem. This result constitutes an improvement
upon that of L\"u and Mu. Comment: 19 pages. arXiv admin note: substantial text overlap with
arXiv:1707.0780
On the Fourth Power Moment of the Error Term for the Divisor Problem with Congruence Conditions
Let $d(n;\ell_1,M_1,\ell_2,M_2)$ denote the number of factorizations
$n=n_1n_2$, where each of the factors $n_i$ belongs to a
prescribed congruence class $n_i\equiv \ell_i \pmod{M_i}$ $(i=1,2)$. Let
$\Delta(x;\ell_1,M_1,\ell_2,M_2)$ be the error term of the asymptotic formula
for the summatory function of $d(n;\ell_1,M_1,\ell_2,M_2)$. In this paper, we
establish an asymptotic formula for the fourth power moment of
$\Delta(M_1M_2x;\ell_1,M_1,\ell_2,M_2)$ and prove that \begin{equation*}
\int_1^T\Delta^4(M_1M_2x;\ell_1,M_1,\ell_2,M_2)\mathrm{d}x=\frac{1}{32\pi^4}C_4\Big(\frac{\ell_1}{M_1},\frac{\ell_2}{M_2}\Big)
T^2+O(T^{2-\vartheta_4+\varepsilon}), \end{equation*} for an explicit exponent $\vartheta_4>0$,
which improves the previous value of K. Liu. Comment: 21 pages
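For concreteness, $d(n;\ell_1,M_1,\ell_2,M_2)$ can be evaluated by direct enumeration for small $n$. A naive sketch (function name and argument order are mine, not the paper's notation):

```python
def d_cong(n, l1, M1, l2, M2):
    """Count ordered factorizations n = n1 * n2 with
    n1 congruent to l1 (mod M1) and n2 congruent to l2 (mod M2)."""
    return sum(1 for n1 in range(1, n + 1)
               if n % n1 == 0
               and n1 % M1 == l1 % M1
               and (n // n1) % M2 == l2 % M2)
```

For instance, `d_cong(12, 1, 2, 0, 2)` counts the factorizations of 12 into an odd factor times an even factor, namely $1\cdot 12$ and $3\cdot 4$.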
A Remark on the Piatetski-Shapiro-Hua Theorem
In this paper, we prove that, for any fixed exponent in an explicit range, every
sufficiently large $N$ satisfying $N\equiv 5\pmod{24}$ can be represented as a
sum of five squares of primes with one prime taken from a Piatetski-Shapiro
sequence, which improves the previous result of Zhang and Zhai. Comment: 5 pages
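The underlying representation (without the Piatetski-Shapiro constraint on one of the primes, which is omitted here) can be checked by brute force for small $N$; a minimal sketch with names of my own choosing:

```python
from itertools import combinations_with_replacement

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def five_prime_squares(N):
    """Brute-force search for N = p1^2 + ... + p5^2 with all pi prime."""
    primes = [p for p in range(2, int(N ** 0.5) + 1) if is_prime(p)]
    for ps in combinations_with_replacement(primes, 5):
        if sum(p * p for p in ps) == N:
            return ps
    return None
```

For example, $77\equiv 5\pmod{24}$ and $77=3^2+3^2+3^2+5^2+5^2$.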
Study of the elastocaloric effect and mechanical behavior for the NiTi shape memory alloys
The studied NiTi shape memory alloy exhibited excellent superelasticity and a
large elastocaloric effect. Large temperature changes of 30 K upon loading and
-19 K upon unloading were obtained at room temperature, which were higher than
those of other NiTi-based materials and among the highest values reported for
elastocaloric materials. The asymmetry of the measured temperature changes
between the loading and unloading processes was ascribed to frictional
dissipation. The large temperature changes originated from the large entropy
change during the stress-induced martensitic transformation (MT) and the
reverse MT. A large materials coefficient of performance (COPmater) of 11.7 was
obtained, which decreased with increasing applied strain. These results are
very attractive for solid-state cooling, which has the potential to replace
vapor-compression refrigeration technologies.
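As a back-of-the-envelope check, the materials coefficient of performance relates the heat absorbed per cycle to the mechanical work input. The sketch below uses assumed numbers (the specific heat and the work input are illustrative guesses chosen to be consistent with the reported COP of 11.7; they are not data from the paper):

```python
# COP_mater = Q_cooling / W_input, evaluated per unit mass of the alloy.
c_p = 500.0        # J/(kg*K): assumed specific heat of NiTi (illustrative)
dT_unload = 19.0   # K: magnitude of the temperature drop upon unloading
q_cooling = c_p * dT_unload   # heat absorbed from the heat load, J/kg
w_input = 812.0    # J/kg: assumed net mechanical work per cycle (illustrative)
cop_mater = q_cooling / w_input   # ~11.7 with these assumed numbers
```

With these assumptions the cooling capacity per cycle is 9.5 kJ/kg, an order of magnitude larger than the work spent, which is the sense in which elastocaloric COPs can rival vapor-compression cycles.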
Accurate front capturing asymptotic preserving scheme for nonlinear gray radiative transfer equation
We develop an asymptotic preserving scheme for the gray radiative transfer
equation. Two asymptotic regimes are considered: one is a diffusive regime
described by a nonlinear diffusion equation for the material temperature; the
other is a free streaming regime with zero opacity. To alleviate the
restriction on time step and capture the correct front propagation in the
diffusion limit, an implicit treatment is crucial. However, this often involves
a large-scale nonlinear iterative solver as the spatial and angular dimensions
are coupled. Our idea is to introduce an auxiliary variable that leads to a
"redundant" system, which is then solved with a three-stage update:
prediction, correction, and projection. The benefit of this approach is that
the implicit system is local to each spatial element, independent of angular
variable, and thus only requires a scalar Newton's solver. We also introduce a
spatial discretization with a compact stencil based on even-odd decomposition.
Our method preserves both the nonlinear diffusion limit, with the correct front
propagation speed, and the free streaming limit, under a hyperbolic CFL
condition.
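The key computational point is that the implicit system is local and scalar. As a hedged sketch (coefficients, scaling, and the function name are placeholders, not the paper's formulation), a backward-Euler update of the material temperature against a $T^4$ emission term reduces to a scalar Newton iteration per spatial element:

```python
def solve_material_T(T_old, E, dt, cv=1.0, c=1.0, sigma=1.0, a=1.0,
                     tol=1e-12, max_iter=50):
    """Scalar Newton solve for the local implicit update
        cv*(T - T_old)/dt = c*sigma*(E - a*T^4),
    where E is the (frozen) radiation energy density.  All coefficients
    are illustrative placeholders."""
    T = T_old
    for _ in range(max_iter):
        f = cv * (T - T_old) + dt * c * sigma * (a * T**4 - E)
        fp = cv + 4.0 * dt * c * sigma * a * T**3   # f'(T) > 0 for T > 0
        step = f / fp
        T -= step
        if abs(step) < tol:
            break
    return T
```

Because the quartic nonlinearity is monotone in $T>0$, this Newton iteration converges rapidly, which is why only a scalar solver, rather than a large coupled nonlinear system, is needed per element.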
Syntax-Enhanced Neural Machine Translation with Syntax-Aware Word Representations
Syntax has been demonstrated to be highly effective in neural machine
translation (NMT). Previous NMT models integrate syntax by encoding 1-best tree
outputs from a well-trained parsing system, e.g., the representative Tree-RNN
and Tree-Linearization methods, which may suffer from error propagation. In this
work, we propose a novel method to integrate source-side syntax implicitly for
NMT. The basic idea is to use the intermediate hidden representations of a
well-trained end-to-end dependency parser, which are referred to as
syntax-aware word representations (SAWRs). Then, we simply concatenate such
SAWRs with ordinary word embeddings to enhance basic NMT models. The method can
be straightforwardly integrated into the widely-used sequence-to-sequence
(Seq2Seq) NMT models. We start with a representative RNN-based Seq2Seq baseline
system, and test the effectiveness of our proposed method on two benchmark
datasets of the Chinese-English and English-Vietnamese translation tasks,
respectively. Experimental results show that the proposed approach is able to
bring significant BLEU score improvements on the two datasets compared with the
baseline: 1.74 points for Chinese-English translation and 0.80 points for
English-Vietnamese translation. In addition, the approach also
outperforms the explicit Tree-RNN and Tree-Linearization methods. Comment: NAACL 201
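The core operation, concatenating SAWRs with ordinary word embeddings, is a one-liner. A minimal NumPy sketch (shapes and names are mine; in the described setup the SAWRs come from a separately trained dependency parser, and the sketch says nothing about whether gradients flow back into it):

```python
import numpy as np

def sawr_enhanced_embeddings(word_emb, parser_hidden):
    """Concatenate ordinary word embeddings with syntax-aware word
    representations (hidden states of a pre-trained dependency parser)
    along the feature axis.
    word_emb: (seq_len, d_word), parser_hidden: (seq_len, d_parser)
    -> (seq_len, d_word + d_parser)."""
    assert word_emb.shape[0] == parser_hidden.shape[0]
    return np.concatenate([word_emb, parser_hidden], axis=-1)
```

The enhanced vectors then feed the encoder of any Seq2Seq model unchanged, which is why the method integrates straightforwardly into existing NMT systems.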
Jointly Learning Structured Analysis Discriminative Dictionary and Analysis Multiclass Classifier
In this paper, we propose an analysis mechanism based structured Analysis
Discriminative Dictionary Learning (ADDL) framework. ADDL seamlessly integrates
the analysis discriminative dictionary learning, analysis representation and
analysis classifier training into a unified model. The applied analysis
mechanism ensures that the learnt dictionaries, representations and
linear classifiers over different classes are as independent and discriminative
as possible. The dictionary is obtained by minimizing a reconstruction
error and an analytical incoherence promoting term that encourages the
sub-dictionaries associated with different classes to be independent. To obtain
the representation coefficients, ADDL imposes a sparse l2,1-norm constraint on
the coding coefficients instead of the l0- or l1-norm, since the l0- or l1-norm
constraints applied in most existing DL criteria make the training phase
time-consuming. The codes-extraction projection that bridges data with the sparse
codes by extracting special features from the given samples is calculated via
minimizing a sparse codes approximation term. Then we compute a linear
classifier based on the approximated sparse codes by an analysis mechanism to
simultaneously consider the classification and representation powers. Thus, the
classification stage of our model is very efficient, because it avoids the
extra time-consuming sparse reconstruction with the trained dictionary that
most existing DL algorithms require for each new test sample. Simulations on
real image databases demonstrate that our ADDL model obtains superior
performance over other state-of-the-art methods. Comment: Accepted by IEEE TNNL
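The l2,1-norm that ADDL uses in place of the l0/l1-norm is easy to state concretely. A small sketch (convention: Euclidean norm of each column, summed; some papers sum over rows instead):

```python
import numpy as np

def l21_norm(A):
    """l2,1 norm of a matrix: sum of the Euclidean norms of its columns.
    Minimizing it drives entire columns to zero, giving structured
    (group-wise) sparsity in the coding coefficients."""
    return float(np.sum(np.linalg.norm(A, axis=0)))
```

For example, the matrix with columns (3, 4) and (0, 0) has l2,1 norm 5: one active column of norm 5 and one zero column.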