Sieve Inference on Semi-nonparametric Time Series Models
The method of sieves has been widely used in estimating semiparametric and nonparametric models. In this paper, we first provide a general theory on the asymptotic normality of plug-in sieve M estimators of possibly irregular functionals of semi/nonparametric time series models. Next, we establish a surprising result that the asymptotic variances of plug-in sieve M estimators of irregular (i.e., slower than root-T estimable) functionals do not depend on temporal dependence. Nevertheless, ignoring the temporal dependence in small samples may not lead to accurate inference. We then propose an easy-to-compute and more accurate inference procedure based on a "pre-asymptotic" sieve variance estimator that captures temporal dependence. We construct a "pre-asymptotic" Wald statistic using an orthonormal series long run variance (OS-LRV) estimator. For sieve M estimators of both regular (i.e., root-T estimable) and irregular functionals, a scaled "pre-asymptotic" Wald statistic is asymptotically F distributed when the series number of terms in the OS-LRV estimator is held fixed. Simulations indicate that our scaled "pre-asymptotic" Wald test with F critical values has more accurate size in finite samples than the usual Wald test with chi-square critical values.
Keywords: Weak dependence, Sieve M estimation, Sieve Riesz representor, Irregular functional, Misspecification, Pre-asymptotic variance, Orthogonal series long run variance estimation, F distribution
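To make the inference procedure concrete, here is a minimal Python sketch of an OS-LRV estimator and the scaled Wald test with F critical values described in the abstract. It is an illustration under stated assumptions, not the paper's implementation: the cosine basis, the default number of terms K, and all function names are choices made here, and the "pre-asymptotic" sieve variance matrix is taken as given rather than derived from a sieve M estimation problem.

```python
import numpy as np
from scipy import stats


def os_lrv(u, K=12):
    """Orthonormal-series long-run variance (OS-LRV) estimator.

    u : (T, q) array of score/moment contributions (demeaned internally).
    K : fixed number of orthonormal series terms.

    Uses the cosine basis phi_j(x) = sqrt(2) cos(j*pi*x), a common choice
    in the fixed-K literature; other orthonormal bases could be used.
    """
    T, q = u.shape
    u = u - u.mean(axis=0)                            # demean the contributions
    t = (np.arange(1, T + 1) - 0.5) / T               # rescaled time points in (0, 1)
    omega = np.zeros((q, q))
    for j in range(1, K + 1):
        phi = np.sqrt(2.0) * np.cos(j * np.pi * t)    # j-th cosine basis function
        lam = phi @ u / np.sqrt(T)                    # projection of the scores
        omega += np.outer(lam, lam)
    return omega / K                                  # average of K outer products


def scaled_wald_test(diff, avar, T, p, K=12):
    """Scaled "pre-asymptotic" Wald statistic referred to an F distribution.

    diff : p-vector, estimated functional minus its hypothesized value.
    avar : (p, p) "pre-asymptotic" variance of sqrt(T)*(estimate - truth),
           built from the OS-LRV estimator of the relevant scores.
    With K held fixed, W * (K - p + 1) / (K * p) is compared to
    F(p, K - p + 1) instead of the usual chi-square(p) reference.
    """
    wald = diff @ np.linalg.solve(avar / T, diff)
    f_stat = wald * (K - p + 1) / (K * p)
    p_value = 1.0 - stats.f.cdf(f_stat, p, K - p + 1)
    return f_stat, p_value
```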
Vision-Language Instruction Tuning: A Review and Analysis
Instruction tuning is a crucial supervised training phase in Large Language
Models (LLMs), aiming to enhance the LLM's ability to generalize instruction
execution and adapt to user preferences. With the increasing integration of
multi-modal data into LLMs, there is growing interest in Vision-Language
Instruction Tuning (VLIT), which presents more complex characteristics compared
to pure text instruction tuning. In this paper, we systematically review the
latest VLIT settings and corresponding datasets in multi-modal LLMs and provide
insights into the intrinsic motivations behind their design. For the first
time, we offer a detailed multi-perspective categorization for existing VLIT
datasets and identify the characteristics that high-quality VLIT data should
possess. By incorporating these characteristics as guiding principles into the
existing VLIT data construction process, we conduct extensive experiments and
verify their positive impact on the performance of tuned multi-modal LLMs.
Furthermore, we discuss the current challenges and future research directions
of VLIT, providing insights for the continuous development of this field. The
code and dataset related to this paper have been open-sourced at
https://github.com/palchenli/VL-Instruction-Tuning.
Comment: 34 pages, 6 figures
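As a rough illustration of the kind of data the review categorizes, the sketch below shows one vision-language instruction-tuning record as an (image, instruction, response) triple with a few categorization attributes. The field names and values are hypothetical placeholders chosen here, not the format of the released dataset.

```python
# One hypothetical VLIT training record (placeholder fields and values).
vlit_example = {
    "image": "images/kitchen_001.jpg",           # visual input (path or URL)
    "instruction": "Describe what the person in the image is doing "
                   "and list the objects on the counter.",
    "response": "A person is chopping vegetables; on the counter there are "
                "a cutting board, a knife, and a bowl of tomatoes.",
    "task_type": "detailed description",         # one possible categorization axis
    "source": "human-annotated",                 # vs. model-generated
}

# During instruction tuning, the loss is typically computed only on the
# response tokens, conditioning on the image features and the instruction.
```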
Deep Reinforcement Learning from Hierarchical Weak Preference Feedback
Reward design is a fundamental, yet challenging aspect of practical
reinforcement learning (RL). For simple tasks, researchers typically handcraft
the reward function, e.g., using a linear combination of several reward
factors. However, such reward engineering is subject to approximation bias,
incurs large tuning cost, and often cannot provide the granularity required for
complex tasks. To avoid these difficulties, researchers have turned to
reinforcement learning from human feedback (RLHF), which learns a reward
function from human preferences between pairs of trajectory sequences. By
leveraging preference-based reward modeling, RLHF learns complex rewards that
are well aligned with human preferences, allowing RL to tackle increasingly
difficult problems. Unfortunately, the applicability of RLHF is limited due to
the high cost and difficulty of obtaining human preference data. In light of
this cost, we investigate learning reward functions for complex tasks with less
human effort; simply by ranking the importance of the reward factors. More
specifically, we propose a new RL framework -- HERON, which compares
trajectories using a hierarchical decision tree induced by the given ranking.
These comparisons are used to train a preference-based reward model, which is
then used for policy learning. We find that our framework can not only train
high performing agents on a variety of difficult tasks, but also provide
additional benefits such as improved sample efficiency and robustness. Our code
is available at https://github.com/abukharin3/HERON.
Comment: 28 pages, 15 figures
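Below is a minimal sketch of the kind of hierarchical comparison the abstract describes: two trajectories are compared factor by factor in the given importance order, and the resulting preference labels can train a preference-based reward model. The margin-based tie-breaking, the Bradley-Terry loss, and all names here are assumptions made for illustration, not HERON's exact implementation; see the linked repository for the authors' code.

```python
import numpy as np


def hierarchical_preference(factors_a, factors_b, ranking, margin=0.1):
    """Compare two trajectories factor by factor, in ranked importance order.

    factors_a, factors_b : dict mapping reward-factor name -> scalar score.
    ranking              : list of factor names, most to least important.
    Returns 1 if trajectory A is preferred, 0 if B is preferred, None if tied.
    """
    for name in ranking:
        gap = factors_a[name] - factors_b[name]
        if abs(gap) > margin:            # decisive at this level of the tree
            return 1 if gap > 0 else 0
        # otherwise fall through to the next-most-important factor
    return None                          # indistinguishable on all factors


def bradley_terry_loss(r_a, r_b, label):
    """Preference loss for a learned reward model; label = 1 means A preferred."""
    prob_a = 1.0 / (1.0 + np.exp(-(r_a - r_b)))
    return -(label * np.log(prob_a) + (1 - label) * np.log(1 - prob_a))


# Usage: label trajectory pairs with the tree, then fit the reward model on them.
ranking = ["task_success", "safety", "energy_cost"]         # hypothetical factors
traj_a = {"task_success": 0.9, "safety": 0.7, "energy_cost": 0.4}
traj_b = {"task_success": 0.9, "safety": 0.3, "energy_cost": 0.8}
print(hierarchical_preference(traj_a, traj_b, ranking))     # -> 1 (A is safer)
```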