165 research outputs found
AN EMERGING NICHE FOR FIRMS IN WESTERN REGIONS OF CHINA IN THE PERIOD OF TRANSITION ECONOMY TO DEVELOP RENEWABLE ENERGY INDUSTRY: A PERSPECTIVE FROM INSTITUTIONAL ENTREPRENEURSHIP
This paper explores the enabling role of institutional entrepreneurship in the western regions of China in the context of a transition economy. Given the growing concern for developing environmentally friendly technologies and industries, which can contribute to economically and ecologically sustainable development, this paper proposes that the western regions face a great opportunity in stimulating the emerging renewable energy industry. In this process, institutional entrepreneurs operating under massive institutional change, i.e., the Grand Western Development Program, could have a "path-defining" effect by leveraging this emerging niche and redirecting the western regions onto an innovative path instead of the traditional resource-based development path. Key words: Western Regions, China, transition economy, renewable energy industry, institutional entrepreneurship
Generalized Category Discovery with Clustering Assignment Consistency
Generalized category discovery (GCD) is a recently proposed open-world task.
Given a set of images consisting of labeled and unlabeled instances, the goal
of GCD is to automatically cluster the unlabeled samples using information
transferred from the labeled dataset. The unlabeled dataset comprises both
known and novel classes. The main challenge is that unlabeled novel class
samples and unlabeled known class samples are mixed together in the unlabeled
dataset. To address GCD without knowing the number of classes in the unlabeled
dataset, we propose a co-training-based framework that encourages clustering
consistency. Specifically, we first introduce weak and strong augmentation
transformations to generate two sufficiently different views for the same
sample. Then, based on the co-training assumption, we propose a consistency
representation learning strategy, which encourages consistency between
feature-prototype similarity and clustering assignment. Finally, we use the
discriminative embeddings learned from the semi-supervised representation
learning process to construct an original sparse network and use a community
detection method to obtain the clustering results and the number of categories
simultaneously. Extensive experiments show that our method achieves
state-of-the-art performance on three generic benchmarks and three fine-grained
visual recognition datasets. In particular, on the ImageNet-100 dataset, our
method significantly exceeds the best baseline by 15.5% and 7.0% on the
Novel and All classes, respectively. Comment: ICONIP 2023. This paper has been
nominated for the ICONIP 2023 Best Paper Award.
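As a rough illustration of the co-training idea described in this abstract (not the authors' loss; the encoder interface, the learnable prototypes, and the symmetric weighting below are assumptions), a simplified cross-view prototype-assignment consistency term could be sketched as follows:

```python
import torch
import torch.nn.functional as F

def prototype_assignments(features, prototypes, temperature=0.1):
    """Softmax over cosine similarity between features and cluster prototypes."""
    features = F.normalize(features, dim=-1)
    prototypes = F.normalize(prototypes, dim=-1)
    logits = features @ prototypes.t() / temperature
    return F.softmax(logits, dim=-1)

def co_training_consistency_loss(encoder, prototypes, x_weak, x_strong):
    """Encourage consistent assignments between two augmented views of a batch.

    encoder    -- any feature extractor returning [B, D] embeddings (assumption)
    prototypes -- learnable [K, D] prototypes; K is a working guess, since the
                  method does not assume the true number of classes is known
    x_weak / x_strong -- weakly and strongly augmented versions of the same batch
    """
    p_weak = prototype_assignments(encoder(x_weak), prototypes)
    p_strong = prototype_assignments(encoder(x_strong), prototypes)

    # Treat each view's (detached) assignment as a soft target for the other view.
    loss_ws = -(p_weak.detach() * torch.log(p_strong + 1e-8)).sum(dim=-1).mean()
    loss_sw = -(p_strong.detach() * torch.log(p_weak + 1e-8)).sum(dim=-1).mean()
    return 0.5 * (loss_ws + loss_sw)
```

In the paper this consistency feeds a semi-supervised representation learning stage, after which a sparse similarity graph and community detection recover both the clusters and the number of categories.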
A Survey on Deep Semi-supervised Learning
Deep semi-supervised learning is a fast-growing field with a range of
practical applications. This paper provides a comprehensive survey on both
fundamentals and recent advances in deep semi-supervised learning methods, from
the perspectives of model design and unsupervised loss functions. We first present a
taxonomy for deep semi-supervised learning that categorizes existing methods,
including deep generative methods, consistency regularization methods,
graph-based methods, pseudo-labeling methods, and hybrid methods. Then we offer
a detailed comparison of these methods in terms of the type of losses,
contributions, and architecture differences. In addition to the past few years'
progress, we further discuss some shortcomings of existing methods and provide
some tentative heuristic solutions to these open problems. Comment: 24 pages, 6 figures.
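To make the surveyed taxonomy concrete (a generic sketch, not code from any surveyed work; the model interface and the 0.95 confidence threshold are assumptions), a pseudo-labeling objective of the kind the survey categorizes can be written as:

```python
import torch
import torch.nn.functional as F

def pseudo_label_loss(model, unlabeled_batch, threshold=0.95):
    """Generic pseudo-labeling objective: train on confident model predictions.

    model           -- any classifier returning logits of shape [B, C] (assumption)
    unlabeled_batch -- a batch of unlabeled inputs
    threshold       -- confidence cutoff; only predictions above it contribute
    """
    with torch.no_grad():
        probs = F.softmax(model(unlabeled_batch), dim=-1)
        confidence, pseudo_targets = probs.max(dim=-1)
        mask = confidence >= threshold          # keep only confident samples

    logits = model(unlabeled_batch)             # second, trainable forward pass
    loss = F.cross_entropy(logits, pseudo_targets, reduction="none")
    return (loss * mask.float()).mean()
```

Consistency regularization methods, by contrast, replace the hard pseudo-targets with an agreement term between differently perturbed views of the same input, and hybrid methods combine the two.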
Morph-specific differences in life history traits between the winged and wingless morphs of the aphid, Sitobion avenae (Fabricius) (Hemiptera: Aphididae)
Life history traits were evaluated in the wing-polyphenic aphid Sitobion avenae (Fabricius) by rearing the winged and wingless morphs under laboratory conditions. The winged morph, with its larger thorax, exhibited a significantly greater morphological investment in the flight apparatus than the wingless morph, which has a smaller thorax. Compared with the winged morph, the wingless morph produced significantly more nymphs and exhibited significantly faster nymphal development. In addition, the age at first reproduction of the winged morph was significantly delayed, and higher mortality was recorded. The results suggest that the fitness differences associated with wings may be related to nymphal development, adult fecundity, and mortality. Based on these results, the trends in, and exceptions to, the life history traits of wing-polyphenic insects are discussed.
MatrixCity: A Large-scale City Dataset for City-scale Neural Rendering and Beyond
Neural radiance fields (NeRF) and their subsequent variants have led to
remarkable progress in neural rendering. While most recent neural rendering
works focus on objects and small-scale scenes, developing neural rendering
methods for city-scale scenes holds great potential for many real-world
applications. However, this line of research is impeded by the absence of a
comprehensive and high-quality dataset, yet collecting such a dataset over real
city-scale scenes is costly, sensitive, and technically difficult. To this end,
we build a large-scale, comprehensive, and high-quality synthetic dataset for
city-scale neural rendering research. Leveraging the Unreal Engine 5 City
Sample project, we develop a pipeline to easily collect aerial and street city
views, accompanied by ground-truth camera poses and a range of additional data
modalities. Flexible control over environmental factors such as lighting,
weather, and human and car crowds is also available in our pipeline, supporting
the needs of various tasks covering city-scale neural rendering and beyond. The resulting
pilot dataset, MatrixCity, contains 67k aerial images and 452k street images
from two city maps of total size . On top of MatrixCity, a thorough
benchmark is also conducted, which not only reveals unique challenges of the
task of city-scale neural rendering, but also highlights potential improvements
for future work. The dataset and code will be publicly available at our
project page: https://city-super.github.io/matrixcity/. Comment: Accepted to ICCV 2023.
Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering
Neural rendering methods have significantly advanced photo-realistic 3D scene
rendering in various academic and industrial applications. The recent 3D
Gaussian Splatting method has achieved state-of-the-art rendering quality
and speed by combining the benefits of both primitive-based representations and
volumetric representations. However, it often leads to heavily redundant
Gaussians that try to fit every training view, neglecting the underlying scene
geometry. Consequently, the resulting model becomes less robust to significant
view changes, texture-less areas, and lighting effects. We introduce Scaffold-GS,
which uses anchor points to distribute local 3D Gaussians, and predicts their
attributes on-the-fly based on viewing direction and distance within the view
frustum. Anchor growing and pruning strategies are developed based on the
importance of neural Gaussians to reliably improve the scene coverage. We show
that our method effectively reduces redundant Gaussians while delivering
high-quality rendering. We also demonstrate an enhanced capability to
accommodate scenes with varying levels-of-detail and view-dependent
observations, without sacrificing rendering speed. Comment: Project page: https://city-super.github.io/scaffold-gs
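A minimal sketch of the view-adaptive idea described above (the exact attribute set, tensor shapes, and MLP layout are assumptions for illustration, not the paper's implementation): each anchor carries a feature vector, and a small MLP decodes a fixed number of local Gaussians from that feature together with the viewing direction and distance.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AnchorGaussianHead(nn.Module):
    """Predict attributes of k local Gaussians from an anchor feature and the view."""

    def __init__(self, feat_dim=32, k=10, hidden=64):
        super().__init__()
        self.k = k
        # Input: anchor feature + normalized view direction (3) + distance (1).
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 4, hidden), nn.ReLU(),
            nn.Linear(hidden, k * (3 + 3 + 4 + 1 + 3)),  # per-Gaussian attributes
        )

    def forward(self, anchor_feat, anchor_xyz, cam_xyz):
        # View-dependent inputs: direction and distance from anchor to camera.
        view = cam_xyz - anchor_xyz
        dist = view.norm(dim=-1, keepdim=True)
        view_dir = view / (dist + 1e-8)
        out = self.mlp(torch.cat([anchor_feat, view_dir, dist], dim=-1))
        out = out.view(-1, self.k, 14)
        offset, scale, rot, opacity, color = out.split([3, 3, 4, 1, 3], dim=-1)
        return {
            "xyz": anchor_xyz.unsqueeze(1) + offset,   # Gaussian centers near the anchor
            "scale": F.softplus(scale),                # positive scales
            "rotation": F.normalize(rot, dim=-1),      # unit quaternions
            "opacity": torch.sigmoid(opacity),
            "color": torch.sigmoid(color),
        }
```

Because the Gaussians are decoded on the fly per view rather than stored per training view, the anchor growing and pruning described in the abstract can add or drop capacity where scene coverage requires it.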
OmniCity: Omnipotent City Understanding with Multi-level and Multi-view Images
This paper presents OmniCity, a new dataset for omnipotent city understanding
from multi-level and multi-view images. More precisely, OmniCity contains
multi-view satellite images as well as street-level panorama and mono-view
images, constituting over 100K pixel-wise annotated images that are
well-aligned and collected from 25K geo-locations in New York City. To
alleviate the substantial pixel-wise annotation efforts, we propose an
efficient street-view image annotation pipeline that leverages the existing
label maps of satellite view and the transformation relations between different
views (satellite, panorama, and mono-view). With the new OmniCity dataset, we
provide benchmarks for a variety of tasks including building footprint
extraction, height estimation, and building plane/instance/fine-grained
segmentation. Compared with the existing multi-level and multi-view benchmarks,
OmniCity contains a larger number of images with richer annotation types and
more views, provides more benchmark results of state-of-the-art models, and
introduces a novel task for fine-grained building instance segmentation on
street-level panorama images. Moreover, OmniCity provides new problem settings
for existing tasks, such as cross-view image matching, synthesis, segmentation,
detection, etc., and facilitates the development of new methods for large-scale
city understanding, reconstruction, and simulation. The OmniCity dataset as
well as the benchmarks will be available at
https://city-super.github.io/omnicity