325 research outputs found
Multi-fractal analysis of weighted networks
Fractal and self-similarity properties have been found in many real complex networks. The fractal dimension is a useful tool for describing the fractal properties of complex networks, but fractal analysis is inadequate if only a single fractal dimension is used; in such cases, multifractal analysis of complex networks is called for. However, the multifractal dimensions of weighted networks have received little attention. In this paper, a multifractal dimension for weighted networks is proposed based on the box-covering algorithm for the fractal dimension of weighted networks (BCANw). The proposed method is applied to calculate the fractal dimensions of several real networks. Our numerical results indicate that the proposed method is effective for analyzing the fractal properties of weighted networks.
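The abstract does not spell out the BCANw algorithm, but the basic idea behind box-covering estimates of a network's fractal dimension can be sketched generically: cover the network with "boxes" of nodes whose pairwise weighted shortest-path distances are below a box size $l_B$, count the boxes $N_B$, and read the dimension off the slope of $\log N_B$ versus $\log l_B$. The code below is a minimal illustrative sketch under these assumptions (greedy covering, plain Floyd-Warshall distances); all function names are hypothetical and it is not the authors' BCANw implementation.

```python
import math

def shortest_paths(n, edges):
    """All-pairs weighted shortest-path distances via Floyd-Warshall.

    edges: list of (u, v, weight) for an undirected graph on nodes 0..n-1.
    """
    INF = float("inf")
    d = [[INF] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0.0
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def greedy_box_cover(d, l_box):
    """Greedily cover all nodes with boxes of diameter < l_box.

    A node joins the first existing box in which its distance to every
    member is below l_box; otherwise it opens a new box.
    Returns the number of boxes N_B (an upper bound on the optimum).
    """
    boxes = []
    for node in range(len(d)):
        for box in boxes:
            if all(d[node][m] < l_box for m in box):
                box.append(node)
                break
        else:
            boxes.append([node])
    return len(boxes)

def box_dimension(d, sizes):
    """Estimate d_B as minus the least-squares slope of log N_B vs log l_B."""
    xs = [math.log(l) for l in sizes]
    ys = [math.log(greedy_box_cover(d, l)) for l in sizes]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope
```

Multifractal analysis generalizes this by weighting each box's contribution by a power $q$ of its node mass and extracting a whole spectrum of dimensions $D(q)$, rather than the single $d_B$ computed here.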
Output regulation of nonlinear singularly perturbed systems
In this paper, the state feedback regulator problem for nonlinear singularly perturbed systems is discussed. It is shown that, under standard assumptions, this problem is solvable if and only if a certain nonlinear partial differential equation is solvable. When this equation is solvable, a feedback law that solves the problem can easily be constructed. The developed control law is applied to a nonlinear chemical process.
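For context, in the standard formulation of nonlinear output regulation the partial differential equation in question takes the form of the regulator (Francis-Byrnes-Isidori) equations. A generic statement, with the usual symbols (which are not taken from this abstract): for a plant $\dot{x} = f(x,u,w)$ driven by an exosystem $\dot{w} = s(w)$ with tracking error $e = h(x,w)$, one seeks mappings $x = \pi(w)$, $u = c(w)$ with $\pi(0)=0$, $c(0)=0$ satisfying

```latex
\frac{\partial \pi}{\partial w}\, s(w) = f\bigl(\pi(w),\, c(w),\, w\bigr),
\qquad
0 = h\bigl(\pi(w),\, w\bigr).
```

The first equation says the graph of $\pi$ is an invariant manifold under the closed-loop dynamics; the second says the error vanishes on that manifold. Solvability of this PDE is what the if-and-only-if condition above refers to in the standard (non-singularly-perturbed) theory.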
Part-guided Relational Transformers for Fine-grained Visual Recognition
Fine-grained visual recognition aims to classify objects with visually similar appearances into subcategories, a task that has made great progress with the development of deep CNNs. However, handling subtle differences between subcategories remains a challenge. In this paper, we propose to address this issue in one unified framework from two aspects: constructing feature-level interrelationships and capturing part-level discriminative features. This framework, named PArt-guided Relational Transformers (PART), learns discriminative part features with an automatic part discovery module and explores their intrinsic correlations with a feature transformation module that adapts Transformer models from the field of natural language processing. The part discovery module efficiently discovers discriminative regions that correspond closely to the gradient descent procedure. The feature transformation module then builds correlations between the global embedding and multiple part embeddings, enhancing spatial interactions among semantic pixels. Moreover, our proposed approach does not rely on additional part branches at inference time and reaches state-of-the-art performance on three widely used fine-grained object recognition benchmarks. Experimental results and explainable visualizations demonstrate the effectiveness of our proposed approach. The code can be found at https://github.com/iCVTEAM/PART.
Comment: Published in IEEE TIP 202
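The abstract does not give the details of PART's transformer (heads, layers, positional handling), but the core mechanism it builds on, letting a global embedding and several part embeddings attend to one another, is ordinary scaled dot-product self-attention. The following is a minimal single-head sketch; the token layout (global embedding first, parts after) and the function name are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

def relational_self_attention(tokens, W_q, W_k, W_v):
    """Single-head scaled dot-product self-attention over a token set.

    tokens : (T, d) array -- row 0 is a global image embedding, rows
             1..T-1 are part embeddings (hypothetical layout).
    W_q, W_k, W_v : (d, d) projection matrices.
    Returns (T, d): each token aggregated over all tokens, so the global
    and part embeddings exchange information in a single step.
    """
    Q, K, V = tokens @ W_q, tokens @ W_k, tokens @ W_v
    scores = Q @ K.T / np.sqrt(K.shape[1])       # (T, T) pairwise affinities
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # softmax: rows sum to 1
    return attn @ V
```

Because every token attends to every other, correlations between the global embedding and the part embeddings are captured without an explicit part branch at inference, which is consistent with the efficiency claim in the abstract.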