Finding Support Documents with a Logistic Regression Approach
Entity retrieval finds results for a user's information need at a finer-grained unit called an "entity". To retrieve such entities, one usually first locates a small set of support documents that contain answer entities, and then detects the answer entities within this set. In the literature, support documents are viewed as relevant documents, and finding them is treated as a conventional document retrieval problem. In this paper, we argue that finding support documents and finding relevant documents, although similar-sounding, differ in important ways. We then propose a logistic regression approach to finding support documents. Our experimental results show that the logistic regression method performs significantly better than a baseline system that treats support document finding as a conventional document retrieval problem.
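The approach described can be sketched as ranking candidate documents by the predicted probability that they contain an answer entity. The sketch below is a minimal illustration under assumed features (a retrieval score and a candidate-entity count); the paper's actual feature set is not specified here:

```python
import numpy as np

def train_logreg(X, y, lr=0.5, epochs=2000):
    """Fit logistic regression by batch gradient descent.
    X: (docs, features); y: 1 if the document supports an answer entity."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted support probability
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def rank_support_documents(X, w, b):
    """Return document indices sorted by support probability, best first."""
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    return np.argsort(-p)

# Hypothetical features per document: [retrieval score, #candidate entities]
X = np.array([[0.9, 5.0], [0.8, 0.0], [0.2, 4.0], [0.1, 0.0]])
y = np.array([1.0, 0.0, 1.0, 0.0])  # training labels: supports an answer entity?
w, b = train_logreg(X, y)
order = rank_support_documents(X, w, b)
```

Note how the model can learn that entity count matters more than raw retrieval score, which is exactly the distinction between support documents and merely relevant documents.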
Counting Value Sets: Algorithm and Complexity
Let $p$ be a prime. Given a polynomial in \F_{p^m}[x] of degree $d$ over
the finite field \F_{p^m}, one can view it as a map from \F_{p^m} to
\F_{p^m}, and examine the image of this map, also known as the value set. In
this paper, we present the first non-trivial algorithm and the first complexity
result on computing the cardinality of this value set. We show an elementary
connection between this cardinality and the number of points on a family of
varieties in affine space. We then apply Lauder and Wan's $p$-adic
point-counting algorithm to count these points, resulting in a non-trivial
algorithm for calculating the cardinality of the value set. The running time of
our algorithm is $(pmd)^{O(d)}$. In particular, this is a polynomial time
algorithm for fixed $d$ if $p$ is reasonably small. We also show that the
problem is #P-hard when the polynomial is given in a sparse representation,
$p = 2$, and $m$ is allowed to vary, or when the polynomial is given as a
straight-line program, $m = 1$, and $p$ is allowed to vary. Additionally, we prove
that it is NP-hard to decide whether a polynomial represented by a
straight-line program has a root in a prime-order finite field, thus resolving
an open problem proposed by Kaltofen and Koiran in
\cite{Kaltofen03,KaltofenKo05}.
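For intuition about the quantity being computed, the $m = 1$ case can be checked directly by brute force. This baseline runs in time exponential in $\log p$, which is what a non-trivial algorithm must beat:

```python
def value_set_size(coeffs, p):
    """Cardinality of the value set of f over the prime field F_p,
    where coeffs[i] is the coefficient of x**i."""
    values = {sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
              for x in range(p)}
    return len(values)

# x**2 over F_7 hits {0, 1, 2, 4}: zero plus the quadratic residues mod 7
print(value_set_size([0, 0, 1], 7))  # → 4
```

A permutation polynomial attains the maximum possible value-set size $p$; for example, $x^3$ permutes $\F_5$ because $\gcd(3, 4) = 1$.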
Research on the competitiveness of the credit rating industry using the PCA method
Purpose: This study investigates the industry competitiveness problem, which plays an
important role in credit rating industry safety. Based on a comprehensive literature review,
we found that there is much room for improvement in competitiveness assessment in the
credit rating industry.
Design/methodology/approach: In this study, we propose the PCA (Principal Component
Analysis) method to illustrate the problems.
Findings: American and Canadian companies (such as S&P and DBRS) take the leading place in
the credit rating industry, Japan's agencies (such as JCR) have made great progress in industry
competition, while China's agencies (such as CCXI) are lagging behind.
Research limitations/implications: The analysis requires multi-year data, but the empirical
analysis is carried out on one-year data instead.
Practical implications: The research can fill gaps in credit rating industry safety research,
and the findings and feasible suggestions are provided for academics and practitioners.
Originality/value: This paper puts forward competitive indicators for the credit rating industry,
considering both cause and outcome indicators.
Peer Reviewed
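The PCA scoring step described above can be sketched as follows: standardize the competitiveness indicators, extract the leading principal components, and combine component scores weighted by explained variance. The indicator matrix below is purely illustrative, not the paper's data:

```python
import numpy as np

def pca_competitiveness_scores(X, n_components=2):
    """X: (agencies, indicators). Returns a composite score per agency
    from the top principal components, weighted by explained variance."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)       # standardize indicators
    vals, vecs = np.linalg.eigh(np.cov(Z, rowvar=False))
    order = np.argsort(vals)[::-1][:n_components]  # top components first
    comps, var = vecs[:, order], vals[order]
    scores = (Z @ comps) @ (var / var.sum())       # variance-weighted combination
    return scores, var / vals.sum()                # explained-variance ratios

# Illustrative indicators (rows: hypothetical agencies, cols: indicators)
X = np.array([[9.0, 8.5, 7.0],
              [8.5, 9.0, 8.0],
              [6.0, 5.5, 6.5],
              [4.0, 4.5, 5.0]])
scores, evr = pca_competitiveness_scores(X)
```

One caveat: eigenvector signs are arbitrary, so composite scores may come out uniformly negated; in practice the component loadings are inspected and oriented so that higher means more competitive.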
Computing zeta functions of large polynomial systems over finite fields
In this paper, we improve the algorithms of Lauder-Wan \cite{LW} and Harvey
\cite{Ha} to compute the zeta function of a system of $m$ polynomial equations
in $n$ variables over the finite field \FF_q of $q$ elements, for $m$ large.
The dependence on $m$ in the original algorithms was exponential in $m$. Our
main result is a reduction of the exponential dependence on $m$ to a polynomial
dependence on $m$. As an application, we speed up a doubly exponential time
algorithm from a software verification paper \cite{BJK} (on universal
equivalence of programs over finite fields) to singly exponential time. One key
new ingredient is an effective version of the classical Kronecker theorem which
(set-theoretically) reduces the number of defining equations for a "large"
polynomial system over \FF_q when $q$ is suitably large.
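For context, the zeta function in question is the standard generating function attached to the solution set $V$ of the system over \FF_q:

```latex
Z(V, T) = \exp\!\left( \sum_{s=1}^{\infty} \frac{\#V(\mathbb{F}_{q^s})}{s}\, T^s \right),
```

where $\#V(\mathbb{F}_{q^s})$ counts solutions over the degree-$s$ extension field. By Dwork's rationality theorem, $Z(V, T)$ is a rational function of $T$, so it is determined by finitely many point counts, which is what such algorithms compute.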
Cross-Modal Contrastive Learning for Robust Reasoning in VQA
Multi-modal reasoning in visual question answering (VQA) has witnessed rapid
progress recently. However, most reasoning models heavily rely on shortcuts
learned from training data, which prevents their usage in challenging
real-world scenarios. In this paper, we propose a simple but effective
cross-modal contrastive learning strategy to get rid of the shortcut reasoning
caused by imbalanced annotations and to improve overall performance. Different
from existing contrastive learning, which uses complex negative categories at the
coarse (Image, Question, Answer) triplet level, we leverage the correspondences
between the language and image modalities to perform finer-grained cross-modal
contrastive learning. We treat each Question-Answer (QA) pair as a whole, and
differentiate between images that conform with it and those against it. To
alleviate the issue of sampling bias, we further build connected graphs among
images. For each positive pair, we regard images from different graphs as
negative samples and derive a multi-positive version of contrastive learning.
To the best of our knowledge, this is the first paper to reveal that a general
contrastive learning strategy without delicate hand-crafted rules can contribute
to robust VQA reasoning. Experiments on several mainstream VQA datasets
demonstrate our superiority compared to the state of the art. Code is available at
\url{https://github.com/qizhust/cmcl_vqa_pl}
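The multi-positive objective described above can be sketched as an InfoNCE-style loss in which every image conforming with a QA pair contributes a positive term. The embeddings and temperature below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def multi_positive_contrastive_loss(qa_emb, img_embs, pos_mask, tau=0.07):
    """Average InfoNCE loss over all positive images for one QA anchor.
    qa_emb: (d,) QA-pair embedding; img_embs: (n, d); pos_mask: (n,) bool."""
    qa = qa_emb / np.linalg.norm(qa_emb)
    imgs = img_embs / np.linalg.norm(img_embs, axis=1, keepdims=True)
    sims = imgs @ qa / tau                     # temperature-scaled cosine similarities
    log_denom = np.log(np.exp(sims).sum())     # softmax normalizer over all images
    return float(np.mean(log_denom - sims[pos_mask]))

qa = np.array([1.0, 0.0])                      # toy QA-pair embedding
imgs = np.array([[0.9, 0.1], [0.8, -0.2], [-1.0, 0.0], [0.0, 1.0]])
aligned = np.array([True, True, False, False])  # images conforming with the QA pair
loss_good = multi_positive_contrastive_loss(qa, imgs, aligned)
loss_bad = multi_positive_contrastive_loss(qa, imgs, ~aligned)
```

The loss is low when the positive set really is the images aligned with the QA embedding, and high when the labels are flipped, which is the gradient signal that pulls conforming images together and pushes the rest apart.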
Assessment of the agriculture supply chain risks for investments of agricultural small and medium-sized enterprises (SMEs) using the decision support model
A key challenge in responding to the emerging challenges in agri-food
supply chains is encouraging continued new investment. This is related
to the recognition that agricultural production is often a lengthy process
requiring ongoing investments that may not produce expected
returns for a prolonged period, thereby being highly sensitive to market
risks. Agricultural production is generally susceptible to serious
risks such as crop diseases, weather conditions, and pest infestations.
Many practitioners in this domain, particularly small and medium-sized
enterprises (SMEs), have shifted toward digitalization to address such
problems. To help with this situation, the current paper develops an
integrated decision-making framework combining Pythagorean fuzzy
sets (PFSs), the method based on the removal effects of criteria (MEREC), the rank-sum
(RS) method, and the gained and lost dominance score (GLDS), termed the
PF-MEREC-RS-GLDS approach. In this approach, the PF-MEREC-RS
method is applied to compute the subjective and objective weights of
the main risks in the agriculture supply chain for SME investments,
and the PF-GLDS model is used to assess the preferences of
enterprises over these risks. An empirical case study is conducted to
evaluate the main risks to the agriculture supply chain for SME
investments. Comparison and sensitivity analyses are also performed to
show the superiority of the developed framework.
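Of the framework's components, the MEREC weighting step has a simple crisp (non-fuzzy) form that can be sketched as follows: compute each alternative's overall logarithmic performance, remove one criterion at a time, and weight each criterion by the total change its removal causes. The risk matrix below is illustrative, not the case-study data, and the Pythagorean fuzzy extension is omitted:

```python
import numpy as np

def merec_weights(X, benefit):
    """Crisp MEREC objective weights.
    X: (alternatives, criteria) positive scores; benefit: bool per criterion."""
    X = np.asarray(X, dtype=float)
    n_alt, n_crit = X.shape
    # MEREC normalization: smaller normalized value means better performance
    N = np.where(benefit, X.min(axis=0) / X, X / X.max(axis=0))
    L = np.abs(np.log(N))
    S = np.log1p(L.sum(axis=1) / n_crit)       # overall performance per alternative
    E = np.empty(n_crit)
    for j in range(n_crit):
        # performance recomputed with criterion j removed
        Sj = np.log1p(np.delete(L, j, axis=1).sum(axis=1) / n_crit)
        E[j] = np.abs(Sj - S).sum()            # removal effect of criterion j
    return E / E.sum()                          # normalized objective weights

# Illustrative risk matrix (rows: SMEs, cols: risk criteria, all cost-type)
X = np.array([[3.0, 7.0, 2.0], [5.0, 4.0, 6.0], [8.0, 2.0, 4.0]])
w = merec_weights(X, benefit=np.array([False, False, False]))
```

A criterion whose removal barely changes the performance scores receives a small weight, which is the intuition behind weighting by "removal effects".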
Preventive Effects of Collagen Peptide from Deer Sinew on Bone Loss in Ovariectomized Rats
Deer sinew (DS) has been used traditionally for various illnesses, and its major active constituent is collagen. In this study, we assessed the effects of collagen peptide from DS on bone loss in ovariectomized rats. Wistar female rats were randomly divided into six groups as follows: sham-operated (SHAM), ovariectomized control (OVX), OVX given 1.0 mg/kg/week nylestriol (OVX + N), OVX given 0.4 g/kg/day collagen peptide (OVX + H), OVX given 0.2 g/kg/day collagen peptide (OVX + M), and OVX given 0.1 g/kg/day collagen peptide (OVX + L). After 13 weeks of treatment, the rats were euthanized, and the effects of collagen peptide on body weight, uterine weight, bone mineral density (BMD), serum biochemical indicators, bone histomorphometry, and bone mechanics were observed. The data showed that BMD and the concentration of serum hydroxyproline were significantly increased, while the levels of serum calcium, phosphorus, and alkaline phosphatase were decreased. In addition, histomorphometric parameters and mechanical indicators were improved. However, collagen peptide of DS had no effect on estradiol level, body weight, or uterine weight. Therefore, these results suggest that collagen peptide supplementation may help prevent and treat bone loss.