End-to-End Multi-View Networks for Text Classification
We propose a multi-view network for text classification. Our method
automatically creates various views of its input text, each taking the form of
soft attention weights that distribute the classifier's focus among a set of
base features. For a bag-of-words representation, each view focuses on a
different subset of the text's words. Aggregating many such views results in a
more discriminative and robust representation. Through a novel architecture
that both stacks and concatenates views, we produce a network that emphasizes
both depth and width, allowing training to converge quickly. Using our
multi-view architecture, we establish new state-of-the-art accuracies on two
benchmark tasks.
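The abstract does not spell out the architecture, but the core idea of a "view" can be pictured in a minimal sketch: a soft attention vector over bag-of-words features, with several such views concatenated into a wider representation. Everything below (the elementwise scoring, the random view parameters, the dimensions) is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                    # numerical stability
    e = np.exp(z)
    return e / e.sum()

def attention_view(x, w):
    """One 'view': soft attention weights over the base features,
    redistributing focus across a bag-of-words vector x (shape (d,)).
    The elementwise scoring w * x is a hypothetical choice."""
    a = softmax(w * x)                 # attention over the d base features
    return a * x                       # attention-weighted features

rng = np.random.default_rng(0)
d = 6
x = rng.integers(0, 3, d).astype(float)             # toy bag-of-words counts
views = [attention_view(x, rng.standard_normal(d)) for _ in range(4)]
rep = np.concatenate(views)            # concatenating views widens the representation
print(rep.shape)                       # (24,)
```

In the paper the views are additionally stacked (depth) as well as concatenated (width); this sketch shows only the width dimension.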
Behavioural Financial Decision Making Under Uncertainty
Ever since von Neumann and Morgenstern published the axiomatisation of
Expected Utility Theory, a considerable number of observations violating
the theory have appeared in the literature.
To make decisions under uncertainty, people generally separate possible
outcomes into gains and losses. They are risk averse for gains but risk
seeking for losses with very large probabilities; risk averse for losses but
risk seeking for gains with very small probabilities. To accommodate
these characteristics, Prospect Theory and its refinement, Cumulative
Prospect Theory, were developed to formulate people's behaviour under
uncertainty in a descriptive way. In these theories, values are
assigned to gains and losses and probabilities are replaced by probability
weighting functions. The CPT models built in this project are based on
the power value function and the compound invariant form of probability
weighting function. The models are calibrated with the data from Hong
Kong Mark Six lottery market. The parameters in the models are estimated
in order to examine the models' properties and to give insight into how
well they fit real-life behaviour. In the first approach, the parameter in
the value function is fixed, but the plots of the estimated probability
weighting function do not give a sensible explanation of lottery players'
behaviour. In the second approach, the parameters in the value function
and the weighting function are both estimated from the data to give an
optimal fit of the model.
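The abstract names the two model ingredients: a power value function and the compound-invariant (Prelec) form of probability weighting function. The fitted Mark Six parameter values are not reported here, so the sketch below uses the standard Tversky-Kahneman and Prelec illustrative values, and evaluates a stylized single-gain/single-loss lottery prospect, for which CPT reduces to a two-term sum.

```python
import math

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Power value function over gains and losses.
    alpha, beta, lam are illustrative Tversky-Kahneman values,
    not the estimates calibrated in this project."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def weight(p, gamma=0.65, delta=1.0):
    """Prelec's compound-invariant weighting: w(p) = exp(-delta*(-ln p)^gamma)."""
    if p <= 0.0:
        return 0.0
    if p >= 1.0:
        return 1.0
    return math.exp(-delta * (-math.log(p)) ** gamma)

# A stylized lottery ticket: win 10_000_000 with probability 1e-7,
# otherwise lose the 10-unit stake (numbers are hypothetical).
p_win = 1e-7
cpt = weight(p_win) * value(10_000_000) + weight(1 - p_win) * value(-10)
print(cpt > 0)   # True: overweighting a tiny win probability makes the ticket attractive
```

This reproduces the behaviour the abstract describes: risk seeking for gains with very small probabilities, because w(p) >> p near zero.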
Absolute height measurement of specular surfaces with modified active fringe reflection photogrammetry
Deflectometric methods have existed for more than a decade for slope measurement of specular freeform surfaces, utilizing the deformation of a sample pattern after reflection from a test surface. Usually, these approaches require two-directional fringe patterns to be displayed on an LCD screen or ground glass and require slope integration, which adds complexity to the whole measuring process.
This paper proposes a new mathematical measurement model for measuring topography information of freeform specular surfaces, which integrates a virtual reference specular surface into the method of active fringe reflection deflectometry and presents a straightforward relation between height and phase. The method requires only one direction of horizontal or vertical sinusoidal fringe patterns to be displayed on an LCD screen, resulting in a significant reduction in capture time over established methods. Assuming the whole system has been pre-calibrated, the fringe patterns are captured separately from the virtual reference surface and the inspected freeform surface by a CCD camera during the measurement process. The reference phase can be solved from the spatial geometric relation between the LCD screen and the CCD camera. The captured phases are unwrapped with a heterodyne technique and an optimum frequency selection method. Based on the unwrapped phase and the proposed mathematical model, the absolute height of the inspected surface can be computed. Simulated and experimental results show that this methodology can conveniently calculate topography information for freeform and structured specular surfaces without integration and reconstruction processes.
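The paper's height-phase model itself is not given in the abstract, but the phase-extraction step it builds on can be sketched: standard four-step phase shifting recovers the wrapped phase from captured one-directional sinusoidal fringe images (the heterodyne unwrapping and the virtual-reference height computation are omitted). All fringe parameters below are arbitrary.

```python
import numpy as np

def wrapped_phase(i0, i1, i2, i3):
    """Four-step phase shifting for images I_k = A + B*cos(phi + k*pi/2).
    Returns the wrapped phase in (-pi, pi]."""
    return np.arctan2(i3 - i1, i0 - i2)

# One-directional sinusoidal fringes, as displayed on the LCD screen.
x = np.linspace(0.0, 4.0 * np.pi, 512)       # fringe phase along one direction
A, B = 100.0, 50.0                           # arbitrary background and modulation
imgs = [A + B * np.cos(x + k * np.pi / 2.0) for k in range(4)]
phi = wrapped_phase(*imgs)

# The recovered phase equals the true fringe phase wrapped into (-pi, pi].
print(np.allclose(phi, np.arctan2(np.sin(x), np.cos(x))))   # True
```

Since i3 - i1 = 2B sin(x) and i0 - i2 = 2B cos(x), the arctangent cancels A and B, which is why the background and modulation need not be known.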
High-dimensional genome-wide association study and misspecified mixed model analysis
We study behavior of the restricted maximum likelihood (REML) estimator under
a misspecified linear mixed model (LMM) that has received much attention in
recent genome-wide association studies. The asymptotic analysis establishes
consistency of the REML estimator of the variance of the errors in the LMM, and
convergence in probability of the REML estimator of the variance of the random
effects in the LMM to a certain limit, which is equal to the true variance of
the random effects multiplied by the limiting proportion of the nonzero random
effects present in the LMM. The asymptotic results also establish the convergence
rate (in probability) of the REML estimators as well as a result regarding
convergence of the asymptotic conditional variance of the REML estimator. The
asymptotic results are fully supported by the results of empirical studies,
which include extensive simulation studies that compare the performance of the
REML estimator (under the misspecified LMM) with other existing methods.
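The limit described above can be illustrated with a toy simulation: when only a proportion of the random effects are truly nonzero, the misspecified LMM's estimator of the random-effect variance targets the true variance times that proportion. For simplicity the sketch uses a mean-zero model (where ML and REML coincide) and a grid search; all numerical choices are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, prop = 300, 600, 0.25          # samples, random effects, proportion truly nonzero
sigma2_u, sigma2_e = 1.0, 1.0

Z = rng.standard_normal((n, m)) / np.sqrt(m)          # standardized design
u = np.zeros(m)
nz = rng.choice(m, int(prop * m), replace=False)
u[nz] = rng.normal(0.0, np.sqrt(sigma2_u), nz.size)   # only a subset is nonzero
y = Z @ u + rng.normal(0.0, np.sqrt(sigma2_e), n)

# The fitted (misspecified) model assumes ALL m effects are N(0, s2u),
# so Var(y) = s2u * Z Z' + s2e * I.  Diagonalize Z Z' once so each
# likelihood evaluation is O(n).
lam, Q = np.linalg.eigh(Z @ Z.T)
yt = Q.T @ y

def negloglik(s2u, s2e):
    d = s2u * lam + s2e
    return 0.5 * (np.sum(np.log(d)) + np.sum(yt ** 2 / d))

grid = np.linspace(0.02, 2.0, 100)
s2u_hat, s2e_hat = min(((a, b) for a in grid for b in grid),
                       key=lambda ab: negloglik(*ab))
# s2u_hat concentrates near sigma2_u * prop (= 0.25), not sigma2_u
print(round(s2u_hat, 2), round(s2e_hat, 2))
```

With the 1/sqrt(m) scaling, the variance contributed by the true effects is roughly prop * sigma2_u per observation, which is exactly what the misspecified model attributes to s2u.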
Bootstrap Confidence Intervals for Medical Costs With Censored Observations
Medical cost data with administratively censored observations often arise in cost-effectiveness studies of treatments for life-threatening diseases. The mean of the medical costs incurred from the start of a treatment until death, or until a certain time point after the implementation of the treatment, is frequently of interest. In many situations, due to the skewed nature of the cost distribution and the non-uniform rate of cost accumulation over time, the currently available normal approximation confidence interval has poor coverage accuracy. In this paper, we propose a bootstrap confidence interval for the mean of medical costs with censored observations. In simulation studies, we show that the proposed bootstrap confidence interval has much better coverage accuracy than the normal approximation one when medical costs have a skewed distribution. When there is light censoring on medical costs (less than or equal to 25%), we find that the bootstrap confidence interval based on the simple weighted estimator is preferred due to its simplicity and good coverage accuracy. For heavily censored cost data (censoring rate greater than or equal to 30%) with larger sample sizes (n greater than or equal to 200), the bootstrap confidence intervals based on the partitioned estimator have superior performance in terms of both efficiency and coverage accuracy. We also illustrate the use of our methods with a real example.
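A minimal sketch of the light-censoring recipe above: the simple weighted estimator reweights each observed (uncensored) total cost by the inverse of the estimated censoring survival at the death time, and a percentile bootstrap gives the interval. The exact estimator and resampling scheme in the paper may differ; the toy data, parameter choices, and the crude tie handling here are all assumptions.

```python
import numpy as np

def censoring_survival(time, delta):
    """Kaplan-Meier estimate K(t) = P(C > t) of the CENSORING distribution
    (delta = 1 marks an observed death, delta = 0 a censoring event).
    Ties are handled only crudely in this sketch."""
    order = np.argsort(time)
    t, d = time[order], delta[order]
    at_risk = len(t) - np.arange(len(t))
    factors = np.where(d == 0, 1.0 - 1.0 / at_risk, 1.0)
    surv = np.cumprod(factors)
    def K(s):                          # left-continuous: K just before time s
        i = np.searchsorted(t, s, side="left") - 1
        return surv[i] if i >= 0 else 1.0
    return K

def simple_weighted_mean(cost, time, delta):
    """Average of delta_i * cost_i / K(T_i-): censored subjects get weight 0,
    deaths are upweighted by the inverse probability of being uncensored."""
    K = censoring_survival(time, delta)
    return np.mean([d * c / K(s) if d else 0.0
                    for c, s, d in zip(cost, time, delta)])

def bootstrap_ci(cost, time, delta, B=500, level=0.95, seed=0):
    """Percentile bootstrap: resample subjects, recompute the estimator."""
    rng = np.random.default_rng(seed)
    n = len(cost)
    stats = [simple_weighted_mean(cost[i], time[i], delta[i])
             for i in (rng.integers(0, n, n) for _ in range(B))]
    a = (1.0 - level) / 2.0
    return tuple(np.quantile(stats, [a, 1.0 - a]))

# Toy data: exponential survival, light (~20%) censoring, cost linear in time.
rng = np.random.default_rng(2)
n = 100
T = rng.exponential(1.0, n)            # death times, E[T] = 1
C = rng.exponential(4.0, n)            # censoring times
time, delta = np.minimum(T, C), (T <= C).astype(int)
cost = 10.0 * time                     # true mean total cost = 10 * E[T] = 10
lo, hi = bootstrap_ci(cost, time, delta)
print(lo < hi)                         # True
```

The reweighting is needed because simply averaging the uncensored costs would be biased toward cheap, short-lived cases: long survivors are more likely to be censored.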
A Comparative Study on Hydrogen Fermentation, Methane Fermentation, and Hythane Fermentation of Cassava Waste
Abstract only. Tohoku University, 李玉友.