Prediction of Human Trajectory Following a Haptic Robotic Guide Using Recurrent Neural Networks
Social intelligence is an important requirement for enabling robots to
collaborate with people. In particular, human path prediction is an essential
capability for robots in that it prevents potential collision with a human and
allows the robot to safely make larger movements. In this paper, we present a
method for predicting the trajectory of a human who follows a haptic robotic
guide without using sight, which is valuable for assistive robots that aid the
visually impaired. We apply a deep learning method based on recurrent neural
networks using multimodal data: (1) human trajectory, (2) movement of the
robotic guide, (3) haptic input data measured from the physical interaction
between the human and the robot, and (4) human depth data. We collected actual
human trajectories and multimodal response data through indoor experiments. Our
model outperforms the baseline when using only the robot data together with the
observed human trajectory, and it performs even better when the additional
haptic and depth data are included. Comment: 6 pages, submitted to IEEE World Haptics Conference 201
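To make the setup concrete, here is a minimal, hypothetical sketch of an encoder-decoder recurrent model that consumes the four input modalities listed above and rolls out a future human trajectory. The layer choices, feature dimensions, and names are our own illustrative assumptions, not the paper's exact architecture.

```python
# Illustrative sketch only: an encoder-decoder LSTM over multimodal inputs
# (human trajectory, robot motion, haptic forces, depth features).
# Dimensions and names are assumptions, not the paper's exact model.
import torch
import torch.nn as nn

class MultimodalTrajectoryRNN(nn.Module):
    def __init__(self, traj_dim=2, robot_dim=3, haptic_dim=6, depth_dim=16,
                 hidden_dim=128, pred_steps=10):
        super().__init__()
        in_dim = traj_dim + robot_dim + haptic_dim + depth_dim
        self.encoder = nn.LSTM(in_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(traj_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, traj_dim)  # maps hidden state to the next (x, y) position
        self.pred_steps = pred_steps

    def forward(self, obs):                # obs: (batch, obs_steps, in_dim)
        _, state = self.encoder(obs)       # summarize the observed multimodal window
        inp = obs[:, -1:, :2]              # assumes the first two features are the human (x, y)
        preds = []
        for _ in range(self.pred_steps):   # roll the decoder forward one step at a time
            out, state = self.decoder(inp, state)
            inp = self.head(out)           # next predicted position, fed back as input
            preds.append(inp)
        return torch.cat(preds, dim=1)     # (batch, pred_steps, 2) future trajectory
```

Training such a sketch against the recorded trajectories with a mean-squared-error loss would reproduce the kind of baseline-versus-multimodal comparison the abstract describes.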
Large-N and Large-T Properties of Panel Data Estimators and the Hausman Test
This paper examines the asymptotic properties of the popular within and GLS estimators and of the Hausman test for panel data models with large numbers of both cross-section (N) and time-series (T) observations. The model we consider includes regressors with deterministic trends in mean as well as time-invariant regressors. If a time-varying regressor is correlated with the time-invariant regressors, the time series of that time-varying regressor is not ergodic. Our asymptotic results are obtained while accounting for the dependence of such non-ergodic time-varying regressors. We find that the within estimator is as efficient as the GLS estimator. Despite this asymptotic equivalence, however, the Hausman statistic, which is essentially a distance measure between the two estimators, is well defined and asymptotically \chi^2-distributed under the random effects assumption.
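For reference, the Hausman statistic alluded to above takes the standard textbook form (our notation, not necessarily the paper's): with \hat{\beta}_W and \hat{\beta}_{GLS} denoting the within and GLS estimators of the k comparable coefficients, H = (\hat{\beta}_W - \hat{\beta}_{GLS})' [\widehat{Var}(\hat{\beta}_W) - \widehat{Var}(\hat{\beta}_{GLS})]^{-1} (\hat{\beta}_W - \hat{\beta}_{GLS}), which is asymptotically \chi^2_k-distributed under the random effects assumption.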
Regularization and Kernelization of the Maximin Correlation Approach
Robust classification becomes challenging when each class consists of
multiple subclasses. Examples include multi-font optical character recognition
and automated protein function prediction. In correlation-based
nearest-neighbor classification, the maximin correlation approach (MCA)
provides the worst-case optimal solution by minimizing the maximum
misclassification risk through an iterative procedure. Despite the optimality,
the original MCA has drawbacks that have limited its wide applicability in
practice. That is, the MCA tends to be sensitive to outliers, cannot
effectively handle nonlinearities in datasets, and suffers from having high
computational complexity. To address these limitations, we propose an improved
solution, named regularized maximin correlation approach (R-MCA). We first
reformulate MCA as a quadratically constrained linear programming (QCLP)
problem, incorporate regularization by introducing slack variables in the
primal problem of the QCLP, and derive the corresponding Lagrangian dual. The
dual formulation enables us to apply the kernel trick to R-MCA so that it can
better handle nonlinearities. Our experimental results demonstrate that the
regularization and kernelization make the proposed R-MCA more robust and
accurate for various classification tasks than the original MCA. Furthermore,
when the data size or dimensionality grows, R-MCA runs substantially faster by
solving either the primal or dual (whichever has a smaller variable dimension)
of the QCLP. Comment: Submitted to IEEE Access
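As a rough sketch of the underlying optimization (our notation and normalization assumptions, not necessarily the paper's exact formulation): given unit-normalized samples x_1, ..., x_n of a class, the maximin correlation template w solves \max_{w} \min_{i} w'x_i subject to w'w \le 1, which can be written as the QCLP \max_{w, t} t subject to w'x_i \ge t for all i and w'w \le 1. The regularization described above would then correspond to relaxing the linear constraints to w'x_i \ge t - \xi_i with slack variables \xi_i \ge 0 and penalizing \sum_i \xi_i in the objective, and the kernel trick would replace the inner products in the Lagrangian dual with kernel evaluations.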
Bivariate Beta-LSTM
Long Short-Term Memory (LSTM) captures long-term dependencies through a cell
state maintained by the input and forget gate structures, which model a gate
output as a value in [0,1] through a sigmoid function. However, due to the
gradual nature of the sigmoid function, the sigmoid gate is not flexible in
representing multi-modality or skewness. In addition, previous models do not
model the correlation between the gates, which could provide a new way to
impose an inductive bias on the relationship between the previous and current
inputs. This paper proposes a new gate structure based on the bivariate Beta
distribution. The proposed gate structure enables probabilistic modeling of the
gates within the LSTM cell, so that modelers can customize the cell-state flow
with priors and distributions. Moreover, we theoretically show that the
gradient has a higher upper bound than that of the sigmoid function, and we
empirically observe that the bivariate Beta gate structure yields larger
gradient values during training. We demonstrate the effectiveness of the
bivariate Beta gate structure on sentence classification, image classification,
polyphonic music modeling, and image caption generation. Comment: AAAI 202
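As a rough illustration of the idea (not the paper's exact construction), the sketch below replaces the sigmoid input and forget gates of a standard LSTM cell with gates sampled from Beta distributions whose parameters are produced by the network. For simplicity the two gates are drawn independently here, whereas the bivariate Beta construction additionally models their correlation.

```python
# Illustrative sketch only: an LSTM-style cell whose input and forget gates
# are sampled from Beta distributions instead of being squashed by a sigmoid.
# The two gates are independent here; the paper's bivariate Beta construction
# additionally couples them.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BetaGateLSTMCell(nn.Module):
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        # Each stochastic gate needs two positive Beta parameters (alpha, beta) per unit.
        self.gate_params = nn.Linear(input_dim + hidden_dim, 4 * hidden_dim)
        self.candidate = nn.Linear(input_dim + hidden_dim, hidden_dim)
        self.output_gate = nn.Linear(input_dim + hidden_dim, hidden_dim)

    def forward(self, x, h, c):
        z = torch.cat([x, h], dim=-1)
        # softplus keeps the Beta concentration parameters strictly positive
        a_i, b_i, a_f, b_f = F.softplus(self.gate_params(z)).chunk(4, dim=-1)
        i = torch.distributions.Beta(a_i + 1e-4, b_i + 1e-4).rsample()  # input gate in (0, 1)
        f = torch.distributions.Beta(a_f + 1e-4, b_f + 1e-4).rsample()  # forget gate in (0, 1)
        g = torch.tanh(self.candidate(z))        # candidate cell update
        o = torch.sigmoid(self.output_gate(z))   # ordinary sigmoid output gate
        c_new = f * c + i * g                    # stochastic gating of the cell state
        h_new = o * torch.tanh(c_new)
        return h_new, c_new
```

Because the Beta samples are reparameterized (rsample), such a cell can be trained end-to-end with backpropagation just like a standard LSTM cell.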