A Random Attention Model
This paper illustrates how one can infer preferences from observed choices
when attention is not only limited but also random. In contrast to earlier
approaches, we introduce a Random Attention Model (RAM) in which we abstain
from imposing any particular attention formation mechanism and instead
consider a large class of
nonparametric random attention rules. Our model imposes one intuitive
condition, termed Monotonic Attention, which captures the idea that each
consideration set competes for the decision-maker's attention. We then develop
revealed preference theory within RAM and obtain precise testable implications
for observable choice probabilities. Based on these theoretical findings, we
propose econometric methods for identification, estimation, and inference of
the decision maker's preferences. To illustrate the applicability of our
results and their concrete empirical content in specific settings, we also
develop revealed preference theory and accompanying econometric methods under
additional nonparametric assumptions on the consideration set for binary choice
problems. Finally, we provide general purpose software implementation of our
estimation and inference results, and showcase their performance using
simulations.
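To make the Monotonic Attention condition concrete, the following is a
minimal Python sketch that checks it for a random attention rule stored as
nested dictionaries; the data structure and the toy uniform rule are our own
illustrative devices, not the paper's notation.

```python
# Monotonic Attention: removing an alternative that is NOT in a
# consideration set T should weakly increase the attention paid to T.
from itertools import combinations

def monotonic_attention(mu):
    """mu maps each menu (frozenset) to a dict {consideration set T: prob}."""
    for S, dist in mu.items():
        for a in S:
            smaller = S - {a}
            if smaller not in mu:
                continue  # that sub-menu was never observed
            for T, p in dist.items():
                if a in T:
                    continue  # condition only restricts sets excluding a
                if mu[smaller].get(T, 0.0) < p:
                    return False  # attention to T fell when a rival left
    return True

def uniform_rule(menu):
    """Toy rule: uniform attention over all nonempty subsets of the menu."""
    subsets = [frozenset(c) for r in range(1, len(menu) + 1)
               for c in combinations(menu, r)]
    return {T: 1.0 / len(subsets) for T in subsets}

menus = [frozenset(s) for s in ({1, 2, 3}, {1, 2}, {1, 3}, {2, 3})]
mu = {S: uniform_rule(S) for S in menus}
print(monotonic_attention(mu))  # True: uniform attention is monotonic
```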
High Quality Image Interpolation via Local Autoregressive and Nonlocal 3-D Sparse Regularization
In this paper, we propose a novel image interpolation algorithm that
combines the local autoregressive (AR) model and the nonlocal adaptive 3-D
sparse model as constraints under a regularization framework. Estimating
the high-resolution image via local AR regularization differs from
conventional AR models, which compute weighted interpolation coefficients
without considering the rough structural similarity between the
low-resolution (LR) and high-resolution (HR) images. The nonlocal adaptive
3-D sparse model is then formulated to regularize the interpolated HR
image, providing a way to correct pixels affected by the numerical
instability of the AR model. In addition, a new
Split-Bregman based iterative algorithm is developed to solve the above
optimization problem. Experimental results demonstrate that the proposed
algorithm achieves significant performance improvements over traditional
algorithms in terms of both objective quality and visual perception.
Comment: 4 pages, 5 figures, 2 tables, to be published at IEEE Visual
Communications and Image Processing (VCIP) 201
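As a schematic illustration of this kind of combined formulation (the
symbols below are ours, not the paper's: y is the observed LR image, H a
degradation/downsampling operator, and the two Phi terms the AR and
nonlocal 3-D sparse regularizers), the objective takes the form

```latex
\hat{x} = \arg\min_{x} \; \|y - Hx\|_2^2
          + \lambda_1 \,\Phi_{\mathrm{AR}}(x)
          + \lambda_2 \,\Phi_{\mathrm{3D}}(x).
```

Split-Bregman methods handle such composite objectives by introducing an
auxiliary variable for each regularizer and alternating between simple
subproblems.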
Fair Adaptive Experiments
Randomized experiments have been the gold standard for assessing the
effectiveness of a treatment or policy. The classical complete randomization
approach assigns treatments based on a prespecified probability and may lead to
inefficient use of data. Adaptive experiments improve upon complete
randomization by sequentially learning and updating treatment assignment
probabilities. However, their application can also raise fairness and equity
concerns, as assignment probabilities may vary drastically across groups of
participants. Furthermore, when the treatment is expected to be extremely
beneficial to certain groups of participants, it is more appropriate to
assign a larger share of these participants to the favorable treatment. In
response to these
challenges, we propose a fair adaptive experiment strategy that simultaneously
enhances data use efficiency, achieves an envy-free treatment assignment
guarantee, and improves the overall welfare of participants. An important
feature of our proposed strategy is that we do not impose parametric modeling
assumptions on the outcome variables, making it more versatile and applicable
to a wider array of applications. Through our theoretical investigation, we
characterize the convergence rate of the estimated treatment effects and the
associated standard deviations at the group level and further prove that our
adaptive treatment assignment algorithm, despite not having a closed-form
expression, approaches the optimal allocation rule asymptotically. Our proof
strategy takes into account the fact that the allocation decisions in our
design depend on sequentially accumulated data, which poses a significant
challenge in characterizing the properties and conducting statistical inference
of our method. We further provide simulation evidence to showcase the
performance of our fair adaptive experiment strategy.
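To fix ideas, here is a stylized Python sketch of a sequentially updated
two-arm design: the assignment probability tracks a Neyman-style target and
is clipped to a floor so no participant's odds become extreme. The target,
the floor, and the outcome model are our illustrative choices; the paper's
envy-free algorithm and welfare criterion are substantially more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

def assignment_prob(treated, control, floor=0.1):
    """Treatment probability targeting Neyman allocation s_t / (s_t + s_c)."""
    s_t = np.std(treated) if len(treated) > 1 else 1.0
    s_c = np.std(control) if len(control) > 1 else 1.0
    p = s_t / (s_t + s_c)
    return np.clip(p, floor, 1 - floor)  # floor keeps assignment odds bounded

treated, control = [], []
for _ in range(1000):                    # participants arrive sequentially
    p = assignment_prob(treated, control)
    if rng.random() < p:
        treated.append(1.0 + rng.normal(scale=2.0))  # noisier treated outcome
    else:
        control.append(rng.normal(scale=1.0))
print(f"share treated: {len(treated) / 1000:.2f}")
print(f"effect estimate: {np.mean(treated) - np.mean(control):.2f}")
```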
Two-Step Estimation and Inference with Possibly Many Included Covariates
We study the implications of including many covariates in a first-step
estimate entering a two-step estimation procedure. We find that a first order
bias emerges when the number of included covariates is "large"
relative to the square-root of sample size, rendering standard inference
procedures invalid. We show that the jackknife is able to estimate this "many
covariates" bias consistently, thereby delivering a new automatic
bias-corrected two-step point estimator. The jackknife also consistently
estimates the standard error of the original two-step point estimator. For
inference, we develop a valid post-bias-correction bootstrap approximation that
accounts for the additional variability introduced by the jackknife
bias-correction. We find that the jackknife bias-corrected point estimator and
the bootstrap post-bias-correction inference perform very well in simulations,
offering important improvements over conventional two-step point estimators and
inference procedures, which are not robust to including many covariates. We
apply our results to an array of distinct treatment effect, policy evaluation,
and other applied microeconomics settings. In particular, we discuss production
function and marginal treatment effect estimation in detail.
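As a rough illustration of the jackknife recipe described above, the sketch
below bias-corrects a toy two-step estimator by recomputing it on
leave-one-out samples. The toy estimator and data-generating process are
ours; the paper's estimators and conditions are far more general.

```python
import numpy as np

def two_step(y, d, X):
    """Toy two-step: fit d ~ X by OLS (many covariates), then regress y on
    the first-step fitted values and return the slope."""
    d_hat = X @ np.linalg.lstsq(X, d, rcond=None)[0]
    return np.cov(y, d_hat, bias=True)[0, 1] / np.var(d_hat)

def jackknife_correct(y, d, X):
    n = len(y)
    theta = two_step(y, d, X)
    loo = np.array([two_step(np.delete(y, i), np.delete(d, i),
                             np.delete(X, i, axis=0)) for i in range(n)])
    bias = (n - 1) * (loo.mean() - theta)          # jackknife bias estimate
    se = np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
    return theta - bias, se                        # corrected point estimate

rng = np.random.default_rng(1)
n, k = 200, 50                                     # k large relative to sqrt(n)
X = rng.normal(size=(n, k))
d = X @ rng.normal(size=k) / np.sqrt(k) + rng.normal(size=n)
y = 0.5 * d + rng.normal(size=n)
print(jackknife_correct(y, d, X))
```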
lpdensity: Local Polynomial Density Estimation and Inference
Density estimation and inference methods are widely used in empirical work.
When the underlying distribution has compact support, conventional kernel-based
density estimators are no longer consistent near or at the boundary because of
their well-known boundary bias. Alternative smoothing methods are available to
handle boundary points in density estimation, but they all require additional
tuning parameter choices or other typically ad hoc modifications depending on
the evaluation point and/or approach considered. This article discusses the R
and Stata package lpdensity implementing a novel local polynomial density
estimator proposed and studied in Cattaneo, Jansson, and Ma (2020, 2021), which
is boundary adaptive and involves only one tuning parameter. The methods
implemented also cover local polynomial estimation of the cumulative
distribution function and density derivatives. In addition to point estimation
and graphical procedures, the package offers consistent variance estimators,
mean squared error optimal bandwidth selection, robust bias-corrected
inference, and confidence band construction, among other features. A
comparison with other density estimation packages available in R using a Monte
Carlo experiment is provided.
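The estimator implemented in lpdensity smooths the empirical distribution
function with a kernel-weighted polynomial fit and reads the density off the
slope coefficient, which is what makes it boundary adaptive. Below is a
minimal self-contained Python sketch of that idea with a hand-picked
bandwidth; the package itself adds MSE-optimal bandwidth selection, robust
bias correction, and inference.

```python
import numpy as np

def lp_density(data, x0, h, p=2):
    """Local polynomial density estimate at x0 with bandwidth h."""
    data = np.sort(data)
    n = len(data)
    Fhat = np.arange(1, n + 1) / n               # empirical CDF at data points
    u = (data - x0) / h
    w = np.maximum(0.75 * (1.0 - u ** 2), 0.0)   # Epanechnikov weights
    keep = w > 0
    # Weighted regression of Fhat on powers (x - x0)^j, j = 0..p; the
    # first-order coefficient estimates F'(x0) = f(x0).
    R = np.vander(data[keep] - x0, p + 1, increasing=True)
    W = np.diag(w[keep])
    beta = np.linalg.solve(R.T @ W @ R, R.T @ W @ Fhat[keep])
    return beta[1]

rng = np.random.default_rng(2)
x = np.abs(rng.normal(size=2000))                # half-normal, boundary at 0
print(lp_density(x, x0=0.0, h=0.5))              # true f(0) is about 0.798
```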
New uniqueness results for boundary value problem of fractional differential equation
In this paper, uniqueness results for a boundary value problem of a
fractional differential equation are obtained. Both Banach's contraction
mapping principle and the theory of linear operators are used, and a
comparison between the obtained results is provided.
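As a generic illustration of the contraction-mapping route to such
uniqueness results (the specific equation, boundary conditions, and
constants treated in the paper may differ), consider

```latex
D^{\alpha} u(t) = f\bigl(t, u(t)\bigr), \quad t \in (0,1),
\qquad u(0) = u(1) = 0,
```

rewritten via its Green's function G as the fixed-point problem
(Tu)(t) = \int_0^1 G(t,s) f(s, u(s)) ds. If f is Lipschitz in its second
argument with constant L and

```latex
L \cdot \sup_{t \in [0,1]} \int_0^1 |G(t,s)| \, ds < 1,
```

then T is a contraction on C[0,1], and Banach's principle delivers a unique
solution.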
Attention Overload
We introduce an Attention Overload Model that captures the idea that
alternatives compete for the decision maker's attention, and hence the
attention frequency each alternative receives decreases as the choice problem
becomes larger. Using this nonparametric restriction on the random attention
formation, we show that a fruitful revealed preference theory can be developed,
and provide testable implications on the observed choice behavior that can be
used to partially identify the decision maker's preference. Furthermore, we
provide novel partial identification results on the underlying attention
frequency, thereby offering the first nonparametric identification result of (a
feature of) the random attention formation mechanism in the literature.
Building on our partial identification results, for both preferences and
attention frequency, we develop econometric methods for estimation and
inference. Importantly, our econometric procedures remain valid even in
settings with a large number of alternatives and choice problems, an important
feature of the economic environment we consider. We also provide a software
package in R implementing our empirical methods, and illustrate them in a
simulation study.
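To make the overload restriction concrete, the following Python sketch
checks that attention frequencies weakly decrease as the choice problem
expands; the storage format and the toy 2/|menu| frequencies are our own
illustrative devices.

```python
def attention_overload(phi, tol=1e-12):
    """phi maps each menu (frozenset) to {alternative: attention frequency}."""
    for S in phi:
        for S_big in phi:
            if S < S_big:                      # S is a strict subset of S_big
                for a in S:
                    if phi[S_big][a] > phi[S][a] + tol:
                        return False           # attention rose in a larger menu
    return True

# Toy frequencies: each alternative is considered w.p. min(1, 2/|menu|).
menus = [frozenset(s) for s in ({1, 2}, {1, 2, 3}, {1, 2, 3, 4})]
phi = {S: {a: min(1.0, 2.0 / len(S)) for a in S} for S in menus}
print(attention_overload(phi))  # True: frequencies fall as menus grow
```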
Adjacent Slice Feature Guided 2.5D Network for Pulmonary Nodule Segmentation
Increasing attention has been paid to the segmentation of pulmonary
nodules. Among current deep learning approaches, 3D segmentation methods
take entire 3D volumes as input, which consumes a great deal of memory and
computation. Most 2D segmentation methods, although requiring fewer
parameters and less computation, lack spatial relations between slices,
resulting in poor segmentation performance. To address these problems, we
propose an adjacent slice feature guided 2.5D network. In this paper, we
design an adjacent slice feature fusion module to introduce information
from adjacent slices. To further improve model performance, we construct a
multi-scale fusion module to capture more context information; in addition,
we design an edge-constrained loss function to optimize the segmentation
results in the edge region. Extensive experiments show that our method
outperforms existing methods on the pulmonary nodule segmentation task.
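A minimal PyTorch sketch of the 2.5D input idea: the target slice and its
two neighbors enter a 2D convolution as channels, so the network sees some
through-plane context at 2D cost. The paper's actual fusion module,
multi-scale module, and edge-constrained loss are more elaborate; the layer
choices here are ours.

```python
import torch
import torch.nn as nn

class AdjacentSliceFusion(nn.Module):
    """Fuse a slice with its two neighbors via a 2D convolution."""
    def __init__(self, out_ch=16):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(3, out_ch, kernel_size=3, padding=1),  # 3 slices in
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, volume, i):
        """volume: (D, H, W) tensor; i: index of the target slice."""
        lo, hi = max(i - 1, 0), min(i + 1, volume.shape[0] - 1)
        stack = torch.stack([volume[lo], volume[i], volume[hi]])  # (3, H, W)
        return self.fuse(stack.unsqueeze(0))                      # (1, C, H, W)

vol = torch.randn(64, 128, 128)                 # dummy CT volume
print(AdjacentSliceFusion()(vol, i=32).shape)   # torch.Size([1, 16, 128, 128])
```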
Impact of Technology Habitual Domain on Ambidextrous Innovation: Case Study of a Chinese High-Tech Enterprise
- …