Productive Efficiency Differences among Chinese Cities : A Stochastic Frontier Approach
This paper addresses the empirical measurement of urban productive efficiency. First, we construct a model incorporating an urban inefficiency term and estimate a stochastic frontier production function at the city level. Using the estimated results, we then compare technical efficiency across Chinese cities over 1997-2007; the urban technical inefficiency effect is found to be significant in many cities. Finally, by specifying technical inefficiency as a function of several capital-related explanatory variables, we analyze the determinants of technical inefficiency in individual cities, examining how it is influenced by capital density, FDI, and domestic investment, and, conversely, how technical inefficiency affects these capital variables.
The empirical results show that FDI effectively reduces technical inefficiency, and suggest that capital-intensive industry will be an engine of economic growth in the future.
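The frontier-with-inefficiency setup described above can be sketched as follows (a standard Battese-and-Coelli-style formulation; the symbols and the exact list of inefficiency determinants are illustrative, not the paper's notation):

```latex
\ln y_{it} = \beta_0 + \sum_{k} \beta_k \ln x_{k,it} + v_{it} - u_{it},
\qquad v_{it} \sim N(0,\sigma_v^2), \quad u_{it} \sim N^{+}(\mu_{it},\sigma_u^2),
```
```latex
\mu_{it} = \delta_0 + \delta_1\,\mathrm{FDI}_{it} + \delta_2\,\mathrm{CapitalDensity}_{it} + \delta_3\,\mathrm{DomesticInv}_{it},
\qquad \mathrm{TE}_{it} = \exp(-u_{it}).
```

Here $v_{it}$ is symmetric noise, $u_{it}\ge 0$ is the city-level inefficiency term whose mean depends on the capital variables, and technical efficiency is recovered as $\mathrm{TE}_{it}=\exp(-u_{it})$.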
LED-Induced Fluorescence System for Tea Classification and Quality Assessment
A fluorescence system is developed by using several light emitting diodes
(LEDs) with different wavelengths as excitation light sources. The fluorescence
detection head consists of multiple LED light sources and a multimode fiber for
fluorescence collection, where the LEDs and the corresponding filters can be
easily chosen to get appropriate excitation wavelengths for different
applications. By analyzing fluorescence spectra with the principal component
analysis method, the system is utilized in the classification of four types of
green tea beverages and two types of black tea beverages. Qualities of the Xihu
Longjing tea leaves of different grades, as well as the corresponding liquid
tea samples, are studied to further assess the system's applicability to the
classification and quality evaluation of tea and other foods.
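The PCA step described above can be sketched as follows. This is a minimal illustration on synthetic spectra (the peak positions, wavelength grid, and noise level are invented for the example, not taken from the paper), showing how principal components separate two spectral classes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic fluorescence spectra: two "tea types" with different emission peaks.
wavelengths = np.linspace(400, 700, 150)

def spectra(peak_nm, n):
    base = np.exp(-((wavelengths - peak_nm) / 30.0) ** 2)
    return base + 0.02 * rng.standard_normal((n, wavelengths.size))

X = np.vstack([spectra(520, 20), spectra(600, 20)])  # 40 spectra total

# PCA via eigendecomposition of the covariance of mean-centered spectra.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (X.shape[0] - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]            # sort components by variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

scores = Xc @ eigvecs[:, :2]                 # project onto first two PCs

# The two spectral classes separate along PC1: their mean scores have
# opposite signs, so a simple threshold would already classify them.
m1, m2 = scores[:20, 0].mean(), scores[20:, 0].mean()
print(m1 * m2 < 0)
```

In practice one would feed the PC scores into a classifier or inspect the score plot, which is the role PCA plays in the tea-classification pipeline described in the abstract.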
Discovering the Unknown Knowns: Turning Implicit Knowledge in the Dataset into Explicit Training Examples for Visual Question Answering
Visual question answering (VQA) is challenging not only because the model has
to handle multi-modal information, but also because it is just so hard to
collect sufficient training examples -- there are too many questions one can
ask about an image. As a result, a VQA model trained solely on human-annotated
examples could easily over-fit specific question styles or image contents that
are being asked, leaving the model largely ignorant about the sheer diversity
of questions. Existing methods address this issue primarily by introducing an
auxiliary task such as visual grounding, cycle consistency, or debiasing. In
this paper, we take a drastically different approach. We found that many of the
"unknowns" to the learned VQA model are indeed "known" in the dataset
implicitly. For instance, questions asking about the same object in different
images are likely paraphrases; the number of detected or annotated objects in
an image already provides the answer to the "how many" question, even if the
question has not been annotated for that image. Building upon these insights,
we present a simple data augmentation pipeline SimpleAug to turn this "known"
knowledge into training examples for VQA. We show that these augmented examples
can notably improve the learned VQA models' performance, not only on the VQA-CP
dataset with language prior shifts but also on the VQA v2 dataset without such
shifts. Our method further opens up the door to leverage weakly-labeled or
unlabeled images in a principled way to enhance VQA models. Our code and data
are publicly available at https://github.com/heendung/simpleAUG
Comment: Accepted to EMNLP 202
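The counting-based augmentation idea above can be sketched as follows. The annotation format and helper name are illustrative, not SimpleAug's actual API: the point is only that object annotations already imply answers to "how many" questions, so they can be turned into extra VQA training examples:

```python
from collections import Counter

def count_questions(image_id, object_labels):
    """Generate (image_id, question, answer) triples from object labels.

    Each distinct annotated object class yields one "how many" example,
    with the answer read directly off the annotation counts.
    """
    counts = Counter(object_labels)
    return [
        (image_id, f"How many {label}s are in the image?", str(n))
        for label, n in sorted(counts.items())
    ]

# An image annotated with two dogs and one frisbee yields two new examples.
examples = count_questions("img_001", ["dog", "dog", "frisbee"])
for ex in examples:
    print(ex)
```

A paraphrase-style variant would analogously pair questions about the same annotated object across images; both kinds of augmented examples are then mixed into ordinary VQA training.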