Digital Color Imaging
This paper surveys current technology and research in the area of digital
color imaging. In order to establish the background and lay down terminology,
fundamental concepts of color perception and measurement are first presented
using vector-space notation and terminology. Present-day color recording and
reproduction systems are reviewed along with the common mathematical models
used for representing these devices. Algorithms for processing color images for
display and communication are surveyed, and a forecast of research trends is
attempted. An extensive bibliography is provided.
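The vector-space view of color that the survey builds on can be illustrated with a small sketch: a tristimulus value is a vector, and a device model is (to first order) a linear map. The matrix below is the standard linear-sRGB to CIE XYZ matrix for a D65 white point; it is a generic illustration, not code from the paper.

```python
import numpy as np

# Standard linear-sRGB -> CIE XYZ matrix (D65 white point).
# Illustrative of the vector-space notation, not taken from the survey.
RGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def rgb_to_xyz(rgb_linear):
    """Map a linear-RGB tristimulus vector to CIE XYZ via a 3x3 linear map."""
    return RGB_TO_XYZ @ np.asarray(rgb_linear, dtype=float)

# Reference white: linear RGB (1, 1, 1) maps to roughly (0.9505, 1.0000, 1.0890).
white = rgb_to_xyz([1.0, 1.0, 1.0])
```

Characterizing a real device then amounts to estimating such a map (plus nonlinearities) from measurements, which is where the mathematical device models the survey reviews come in.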
Common Sense and Simplicity in Empirical Industrial Organization
This paper is a revised version of a keynote address delivered at the inaugural International Industrial Organization Conference in Boston, April 2003. I argue that new econometric tools have facilitated the estimation of models with realistic theoretical underpinnings, and because of this, have made empirical I.O. much more useful. The tools solve computational problems, thereby allowing us to make the relationship between the economic model and the estimating equations transparent. This, in turn, enables us to utilize the available data more effectively. It also facilitates robustness analysis and clarifies the assumptions needed to analyze the causes of past events and/or make predictions of the likely impacts of future policy or environmental changes. The paper provides examples illustrating the value of simulation for the estimation of demand systems and of semiparametrics for the estimation of entry models.
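Simulation in demand estimation typically means integrating choice probabilities over consumer heterogeneity by Monte Carlo. The sketch below shows that idea for a random-coefficients logit; the function name, parameters, and one-characteristic setup are illustrative assumptions, not a model from the address.

```python
import numpy as np

def simulated_logit_shares(delta, x, sigma, n_draws=1000, seed=0):
    """Monte Carlo ("simulated") market shares for a random-coefficients
    logit demand system: average logit choice probabilities over consumer
    taste draws. Generic illustration of the simulation approach, not a
    replication of any model in the paper.

    delta: (J,) mean utilities; x: (J,) product characteristic;
    sigma: std dev of the random taste coefficient on x."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=n_draws)                              # taste draws
    u = delta[None, :] + sigma * v[:, None] * x[None, :]      # (draws, J)
    expu = np.exp(u)
    # Outside good has utility 0, hence the 1 in the denominator.
    probs = expu / (1.0 + expu.sum(axis=1, keepdims=True))
    return probs.mean(axis=0)                                 # average over draws
```

With `sigma = 0` the simulated shares collapse to the plain logit shares, which is a useful sanity check when debugging this kind of estimator.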
Statistical Inference for Medical Costs and Incremental Cost-effectiveness Ratios with Censored Data
Cost-effectiveness analysis is widely conducted in the economic evaluation of new treatments, due to skyrocketing health care costs and limited resources. Censored cost data pose a unique problem for cost estimation because of "induced informative censoring." Thus, many standard approaches for survival analysis are not valid for the analysis of cost data. We first derive the confidence interval for the incremental cost-effectiveness ratio for a special case, when the terminating events are different for survival time and costs. Then we study how to intuitively explain some existing estimators for costs, based on the generalized redistribute-to-the-right algorithm. Motivated by that idea, we also propose two improved survival estimators of costs, based on the generalized redistribute-to-the-right algorithm and kernel methods.
We first consider one special situation in conducting cost-effectiveness analysis, when the terminating events for survival time and costs are different. Traditional methods for statistical inference cannot deal with such data. We propose a new method for deriving the confidence interval for the incremental cost-effectiveness ratio in this situation, based on counting process theory and the general theory of missing-data processes. Simulation studies and a real data example show that our method performs very well in practical settings.
In addition, we provide an intuitive explanation of a mean cost estimator and a survival estimator for costs, based on the generalized redistribute-to-the-right algorithm. Since those estimators are derived from the inverse probability weighting principle and semiparametric efficiency theory, it is not always easy to understand how they work. Our work therefore engenders a better understanding of these theoretically derived cost estimators.
Motivated by the idea of the generalized redistribute-to-the-right algorithm, we propose an estimator for the survival function of costs. The proposed estimator is naturally monotone, more efficient than some existing survival estimators, and has quite small bias in many realistic settings. We further propose a kernel-based survival estimator for costs. The latter estimator, which is asymptotically unbiased, overcomes the deficiency of the former estimator while preserving its nice properties. Our proposed estimators outperform existing estimators under various scenarios in simulation studies and a real data example.
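The inverse-probability-weighting principle behind these cost estimators can be sketched concretely: each uncensored subject's accumulated cost is up-weighted by the inverse of the Kaplan-Meier estimate of the censoring survival function at that subject's death time (in the spirit of the Bang-Tsiatis simple weighted estimator). The function names and the simplifying no-ties treatment below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def km_censoring_survival(times, delta, t):
    """Kaplan-Meier estimate of P(C > t) for the censoring variable C.
    delta == 1 marks a death (event), so 1 - delta marks censoring.
    Simplified sketch: assumes no tied observation times."""
    order = np.argsort(times)
    times, cens = times[order], 1 - delta[order]
    n = len(times)
    surv = 1.0
    for i in range(n):
        if times[i] > t:
            break
        if cens[i] == 1:
            at_risk = n - i
            surv *= 1.0 - 1.0 / at_risk
    return surv

def ipw_mean_cost(costs, times, delta):
    """Inverse-probability-weighted mean-cost estimator: each uncensored
    subject's cost M_i is weighted by 1 / K(T_i-), the probability of
    still being uncensored just before the death time T_i."""
    n = len(costs)
    total = 0.0
    for m, t, d in zip(costs, times, delta):
        if d == 1:
            k = km_censoring_survival(times, delta, t - 1e-9)  # K(T_i-)
            total += m / k
    return total / n
```

With no censoring the weights are all 1 and the estimator reduces to the sample mean of costs, which is the sanity check used below.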
Novel Class Discovery for Long-tailed Recognition
While novel class discovery has recently made great progress, existing
methods typically focus on improving algorithms on class-balanced benchmarks.
However, in real-world recognition tasks, the class distributions of their
corresponding datasets are often imbalanced, which leads to serious performance
degeneration of those methods. In this paper, we consider a more realistic
setting for novel class discovery where the distributions of novel and known
classes are long-tailed. One main challenge of this new problem is to discover
imbalanced novel classes with the help of long-tailed known classes. To tackle
this problem, we propose an adaptive self-labeling strategy based on an
equiangular prototype representation of classes. Our method infers high-quality
pseudo-labels for the novel classes by solving a relaxed optimal transport
problem and effectively mitigates the class biases in learning the known and
novel classes. We perform extensive experiments on CIFAR100, ImageNet100,
Herbarium19 and large-scale iNaturalist18 datasets, and the results demonstrate
the superiority of our method. Our code is available at
https://github.com/kleinzcy/NCDLR.
Comment: TMLR 2023, final version.
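The relaxed optimal-transport pseudo-labeling step can be sketched with a Sinkhorn-style iteration: samples carry uniform mass, and the class marginal can be prescribed to be long-tailed. The code below is a generic illustration with assumed parameters (`eps`, `n_iter`), not the authors' released implementation.

```python
import numpy as np

def sinkhorn_pseudo_labels(logits, marginal, n_iter=50, eps=0.05):
    """Soft pseudo-labels via an entropy-regularized optimal-transport
    problem solved approximately by Sinkhorn iterations. Rows are samples
    (uniform mass); columns are classes with a prescribed, possibly
    long-tailed, marginal. Illustrative sketch, not the authors' code."""
    Q = np.exp(logits / eps)              # Gibbs kernel from the logits
    Q /= Q.sum()
    n, k = Q.shape
    r = np.ones(n) / n                    # uniform sample marginal
    c = marginal / marginal.sum()         # class marginal (can be imbalanced)
    for _ in range(n_iter):
        Q *= (r / Q.sum(axis=1))[:, None]     # scale rows toward r
        Q *= (c / Q.sum(axis=0))[None, :]     # scale columns toward c
    return Q * n                          # rows ~ soft label distributions
```

Constraining the column marginal is what lets such a scheme inject an imbalanced class prior into the pseudo-labels instead of the uniform prior used on class-balanced benchmarks.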
Approachability in unknown games: Online learning meets multi-objective optimization
In the standard setting of approachability there are two players and a target
set. The players repeatedly play a known vector-valued game in which the first
player wants the average vector-valued payoff to converge to the target
set, from which the other player tries to exclude it. We revisit this
setting in the spirit of online learning and do not assume that the first
player knows the game structure: she receives an arbitrary vector-valued
reward at every round. She wishes to approach the smallest ("best") possible
set given the observed average payoffs in hindsight. This extension of the
standard setting has implications even when the original target set is not
approachable and when it is not obvious which expansion of it should be
approached instead. We show that it is impossible, in general, to approach the
best target set in hindsight and propose achievable though ambitious
alternative goals. We further propose a concrete strategy to approach these
goals. Our method does not require projection onto a target set and amounts to
switching between scalar regret minimization algorithms that are performed in
episodes. Applications to global cost minimization and to approachability under
sample path constraints are considered.
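The scalar regret minimizers that such an episode-switching strategy would call as subroutines can be as simple as exponential weights (Hedge). The sketch below is a textbook Hedge learner under an assumed learning rate, not the paper's full approachability strategy.

```python
import numpy as np

def hedge(loss_rounds, eta=0.1):
    """Exponential-weights (Hedge) regret minimizer over K actions:
    the kind of scalar no-regret subroutine an episode-based strategy
    could switch between. loss_rounds: (T, K) array of losses in [0, 1].
    Returns (cumulative expected loss, regret vs. best fixed action)."""
    T, K = loss_rounds.shape
    w = np.ones(K)
    total_loss = 0.0
    for losses in loss_rounds:
        p = w / w.sum()                   # play the mixed action p
        total_loss += p @ losses          # incur expected loss
        w *= np.exp(-eta * losses)        # multiplicative weight update
    best_fixed = loss_rounds.sum(axis=0).min()
    regret = total_loss - best_fixed
    return total_loss, regret
```

Against a sequence where one action always has zero loss, the learner's regret stays bounded while the horizon grows, which is the no-regret property the reduction relies on.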