2,617 research outputs found
Financing problems of small and micro enterprises under digital Inclusive Finance
Small and micro enterprises are the main service targets of Inclusive Finance and the main direction of its development. "Difficult
and expensive financing" has hindered the development of small and micro enterprises. Inclusive Finance is a financial service that takes
into account the common interests of banks and enterprises and can better meet the financing needs of small and micro enterprises. Taking
Inclusive Finance as the breakthrough point, this paper focuses on the challenges faced by small and micro enterprises, such as the imperfect
regulatory system, the turmoil of private finance, and the difficulties in the development of the guarantee industry. Against the background of
digital Inclusive Finance, the author puts forward solutions to the financing problems of small and micro enterprises, so as to promote the
healthy development of small and micro enterprises.
Task Decomposition and Synchronization for Semantic Biomedical Image Segmentation
Semantic segmentation is essential to biomedical image analysis.
Many recent works mainly focus on integrating the Fully Convolutional Network
(FCN) architecture with sophisticated convolution implementation and deep
supervision. In this paper, we propose to decompose the single segmentation
task into three subsequent sub-tasks, including (1) pixel-wise image
segmentation, (2) prediction of the class labels of the objects within the
image, and (3) classification of the scene the image belongs to. While these
three sub-tasks are trained to optimize their individual loss functions of
different perceptual levels, we propose to let them interact by the task-task
context ensemble. Moreover, we propose a novel sync-regularization to penalize
the deviation between the outputs of the pixel-wise segmentation and the class
prediction tasks. These effective regularizations help FCN utilize context
information comprehensively and attain accurate semantic segmentation, even
though the number of the images for training may be limited in many biomedical
applications. We have successfully applied our framework to three diverse 2D/3D
medical image datasets, including Robotic Scene Segmentation Challenge 18
(ROBOT18), Brain Tumor Segmentation Challenge 18 (BRATS18), and Retinal Fundus
Glaucoma Challenge (REFUGE18). We have achieved top-tier performance in all
three challenges.
Comment: IEEE Transactions on Medical Imaging
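The abstract does not spell out the exact form of the sync-regularization, so the following is a minimal numpy sketch of one plausible form: the image-level class presence implied by the pixel-wise segmentation head (global average pooling of its per-pixel softmax) is compared against the output of the class-prediction head, and their squared deviation is penalized. The function names and the pooling choice are illustrative assumptions, not the paper's definitive loss.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sync_regularization(seg_logits, cls_logits):
    """One plausible sync-regularization term: squared deviation between
    the class distribution implied by the segmentation head (pooled over
    all pixels) and the image-level class-prediction head.
    seg_logits: (H, W, C) per-pixel logits; cls_logits: (C,) image logits."""
    pooled = softmax(seg_logits).mean(axis=(0, 1))  # implied class presence, (C,)
    cls_probs = softmax(cls_logits)                 # predicted class presence, (C,)
    return float(((pooled - cls_probs) ** 2).sum())
```

In a full training setup this term would simply be added, with some weight, to the sum of the three per-task losses, so that the segmentation and class-prediction heads are pushed toward consistent outputs.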
Unsupervised Feature Selection with Adaptive Structure Learning
The problem of feature selection has attracted considerable interest in the
past decade. Traditional unsupervised methods select the features which can
faithfully preserve the intrinsic structures of data, where the intrinsic
structures are estimated using all the input features of data. However, the
estimated intrinsic structures are unreliable/inaccurate when the redundant and
noisy features are not removed. Therefore, we face a dilemma here: one needs the
true structures of the data to identify the informative features, and one needs the
informative features to accurately estimate the true structures of the data. To
address this, we propose a unified learning framework which performs structure
learning and feature selection simultaneously. The structures are adaptively
learned from the results of feature selection, and the informative features are
reselected to preserve the refined structures of data. By leveraging the
interactions between these two essential tasks, we are able to capture accurate
structures and select more informative features. Experimental results on many
benchmark data sets demonstrate that the proposed method outperforms many
state-of-the-art unsupervised feature selection methods.
A Pairwise Probe for Understanding BERT Fine-Tuning on Machine Reading Comprehension
Pre-trained models have brought significant improvements to many NLP tasks
and have been extensively analyzed. But little is known about the effect of
fine-tuning on specific tasks. Intuitively, people may agree that a pre-trained
model already learns semantic representations of words (e.g. synonyms are
closer to each other) and fine-tuning further improves its capabilities which
require more complicated reasoning (e.g. coreference resolution, entity
boundary detection, etc.). However, how to verify these arguments analytically
and quantitatively is a challenging task, and few works focus on this
topic. In this paper, inspired by the observation that most probing tasks
involve identifying matched pairs of phrases (e.g. coreference requires
matching an entity and a pronoun), we propose a pairwise probe to understand
BERT fine-tuning on the machine reading comprehension (MRC) task. Specifically,
we identify five phenomena in MRC. Based on the pairwise probing tasks, we
compare the performance of each layer's hidden representation of pre-trained
and fine-tuned BERT. The proposed pairwise probe alleviates the problem of
distraction from inaccurate model training and makes a robust and quantitative
comparison. Our experimental analysis leads to highly confident conclusions:
(1) Fine-tuning has little effect on the fundamental and low-level information
and general semantic tasks. (2) For specific abilities required for downstream
tasks, fine-tuned BERT is better than pre-trained BERT and such gaps are
obvious after the fifth layer.
Comment: 4 pages, 1 figure
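The abstract does not give the probe's exact design, so the following numpy sketch shows one simple reading of a pairwise probe: for one layer's hidden states, take the mean cosine similarity of the matched token pairs and subtract the mean similarity of randomly drawn pairs, so that a higher score means the layer separates genuine pairs from chance pairings. All names and the similarity/baseline choices here are illustrative assumptions.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def pairwise_probe_score(hidden_states, matched_pairs, rng, n_random=200):
    """Score one layer's hidden states (T, d): mean cosine similarity of
    matched token pairs minus that of randomly drawn token pairs."""
    matched = np.mean([cosine(hidden_states[i], hidden_states[j])
                       for i, j in matched_pairs])
    T = hidden_states.shape[0]
    random_sim = np.mean([cosine(hidden_states[rng.integers(T)],
                                 hidden_states[rng.integers(T)])
                          for _ in range(n_random)])
    return matched - random_sim
```

Computing this score per layer for both the pre-trained and the fine-tuned checkpoint would yield the kind of layer-wise comparison the abstract describes, with the random-pair baseline serving to reduce distraction from artifacts of any one model's training.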