296 research outputs found
Effect of hukou Accessibility on Migrants' Long-Term Settlement Intention in Destination
Migrants' long-term settlement intention in urban areas has been emphasized by both policy makers and researchers as a lever for promoting urbanization and coordinating regional economic development. This study advances the body of knowledge by investigating the effect of what E.S. Lee's "push-and-pull" theory terms "intervening obstacles": here, the difficulty of obtaining hukou in the migration destination, on migrants' long-term settlement intention in urban areas. Logistic regressions were applied to examine the effect of the accessibility of the urban registered residence system (the hukou system) on migrants' long-term settlement intention, as well as the determinants of the subjectively evaluated difficulty of obtaining urban hukou, based on a nation-wide large-scale survey in 46 Chinese cities. Our results suggest that difficulty in obtaining urban hukou does play an important role in shaping country-wide population movement. However, the negative impact of hukou difficulty on migrant workers' residence intention is not linear: only when the threshold for obtaining hukou is too high to achieve will migrant workers choose to return to their hometown in the long term. Moreover, the subjective evaluation of difficulty is further influenced by personal capability and living conditions in cities. This study provides pragmatic implications for administrations on either the push side or the pull side to improve inhabitant-related development strategies.
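The core estimation step, a logistic regression of settlement intention on a difficulty measure, can be sketched in miniature. The toy data, variable names, and coefficients below are hypothetical illustrations, not values from the survey:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.5, epochs=5000):
    """Fit a one-predictor logistic regression by full-batch gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(w * x + b) - y  # gradient of the log-loss
            gw += err * x
            gb += err
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

# Hypothetical toy data: x = perceived hukou difficulty (0 easy .. 1 very hard),
# y = 1 if the migrant intends to settle long term, 0 otherwise.
random.seed(0)
xs = [random.random() for _ in range(400)]
ys = [1 if random.random() < sigmoid(3.0 - 6.0 * x) else 0 for x in xs]

w, b = fit_logistic(xs, ys)
```

A negative fitted coefficient on the difficulty score would correspond to the abstract's finding that higher perceived hukou difficulty lowers the odds of intending to settle.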
Aligning Speakers: Evaluating and Visualizing Text-based Diarization Using Efficient Multiple Sequence Alignment (Extended Version)
This paper presents a novel evaluation approach to text-based speaker
diarization (SD), tackling the limitations of traditional metrics that do not
account for any contextual information in text. Two new metrics are proposed,
Text-based Diarization Error Rate and Diarization F1, which perform utterance-
and word-level evaluations by aligning tokens in reference and hypothesis
transcripts. Our metrics encompass more types of errors compared to existing
ones, allowing us to make a more comprehensive analysis in SD. To align tokens,
a multiple sequence alignment algorithm is introduced that supports multiple
sequences in the reference while handling high-dimensional alignment to the
hypothesis using dynamic programming. Our work is packaged into two tools,
align4d providing an API for our alignment algorithm and TranscribeView for
visualizing and evaluating SD errors, which can greatly aid in the creation of
high-quality data, fostering the advancement of dialogue systems.
Comment: Accepted to the 35th IEEE International Conference on Tools with
Artificial Intelligence (ICTAI) 202
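The token-alignment step on which the metrics rest can be sketched as a classical dynamic-programming global alignment. This is a simplified pairwise version; the paper's align4d algorithm additionally supports multiple reference sequences, which this sketch does not attempt:

```python
def align_tokens(ref, hyp, gap=-1, match=1, mismatch=-1):
    """Pairwise global alignment (Needleman-Wunsch) of two token lists."""
    n, m = len(ref), len(hyp)
    # DP table of best alignment scores for every prefix pair.
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if ref[i - 1] == hyp[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + sub,
                              score[i - 1][j] + gap,
                              score[i][j - 1] + gap)
    # Trace back to recover aligned token pairs (None marks a gap).
    pairs, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + (
                match if ref[i - 1] == hyp[j - 1] else mismatch):
            pairs.append((ref[i - 1], hyp[j - 1])); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i - 1][j] + gap:
            pairs.append((ref[i - 1], None)); i -= 1
        else:
            pairs.append((None, hyp[j - 1])); j -= 1
    return list(reversed(pairs))

ref = "hello how are you".split()
hyp = "hello how you doing".split()
pairs = align_tokens(ref, hyp)
```

Once reference and hypothesis tokens are paired this way, deletions, insertions, and speaker-attribution errors can be counted per token, which is the kind of word-level bookkeeping the proposed metrics perform.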
A New Method for Analyzing Integrated Stealth Ability of Penetration Aircraft
Taking into account the limitations of existing stealth performance analysis methods, an integrated stealth performance analysis method is proposed for evaluating the stealth ability of penetration aircraft. Based on various target radar cross section (RCS) scattering characteristics, this article integrates the parameters needed to build up the target circumferential RCS scattering model and proposes RCS scattering controlling parameters to govern the changing trends of the model's RCS scattering characteristics. According to the radar's dynamic detection characteristics over the whole penetration course, a dynamic stealth performance evaluation model is proposed, accompanied by a series of stealth ability estimation rules. This new analysis method can enhance the completeness and reliability of stealth analysis conclusions and summarize the relationship between target RCS scattering characteristics and their effects on stealth performance. The rules indicated by this relationship can be used as a reference for designing new types of stealth aircraft and setting up specific penetration tactics.
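The basic physical relationship underlying any such analysis is the classical radar range equation, in which the maximum detection range scales with the fourth root of the target RCS. The abstract's dynamic evaluation model is far more elaborate; the sketch below, with entirely hypothetical radar parameters, only illustrates that fourth-root scaling:

```python
import math

def detection_range(sigma, pt=1e6, gain=1e3, lam=0.1, s_min=1e-13):
    """Classical radar range equation:
    R_max = (Pt * G^2 * lambda^2 * sigma / ((4*pi)^3 * S_min))^(1/4),
    so R_max grows with the fourth root of the RCS sigma (m^2)."""
    return (pt * gain**2 * lam**2 * sigma / ((4 * math.pi)**3 * s_min)) ** 0.25

r_conventional = detection_range(sigma=10.0)  # ~10 m^2, conventional airframe
r_stealth = detection_range(sigma=0.1)        # ~0.1 m^2, reduced-RCS design
ratio = r_stealth / r_conventional            # = (0.1 / 10) ** 0.25
```

A hundred-fold RCS reduction therefore shrinks detection range only to about 32% of its original value, which is why stealth assessment must weigh the whole penetration course rather than RCS alone.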
Multi-scale Attention Flow for Probabilistic Time Series Forecasting
Probabilistic prediction of multivariate time series is a notoriously
challenging but practical task. On the one hand, the challenge is how to
effectively capture the cross-series correlations between interacting time
series to achieve accurate distribution modeling. On the other hand, we should
consider how to capture the contextual information within each time series
more accurately to model multivariate temporal dynamics. In this work, we
propose a novel non-autoregressive deep learning model, called Multi-scale
Attention Normalizing Flow (MANF), in which we integrate multi-scale attention
with relative position information and represent the multivariate data
distribution by a conditioned normalizing flow. Additionally, compared with
autoregressive modeling methods, our model avoids the influence of cumulative
error and does not increase the time complexity. Extensive experiments
demonstrate that our model achieves state-of-the-art performance on many
popular multivariate datasets.
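The normalizing-flow component relies on transforms that are exactly invertible with a tractable log-determinant. A minimal, unconditioned affine coupling step illustrates both properties; in MANF the scale/shift functions would be conditioned on the attention features, whereas here they are hypothetical closed forms:

```python
import math

def coupling_forward(x1, x2, s, t):
    """One affine coupling step: x2 is scaled/shifted conditioned on x1."""
    y1 = x1
    y2 = x2 * math.exp(s(x1)) + t(x1)
    log_det = s(x1)  # log |d y2 / d x2|, needed for the change of variables
    return y1, y2, log_det

def coupling_inverse(y1, y2, s, t):
    """Exact inverse of the coupling step."""
    x1 = y1
    x2 = (y2 - t(y1)) * math.exp(-s(y1))
    return x1, x2

# Hypothetical scale/shift functions (a real flow would use small networks).
s = lambda u: 0.5 * math.tanh(u)
t = lambda u: 0.1 * u

y1, y2, log_det = coupling_forward(0.7, -1.3, s, t)
x1, x2 = coupling_inverse(y1, y2, s, t)
```

Because the Jacobian of the coupling map is triangular, the log-determinant is just the sum of the log-scales, which keeps exact likelihood training cheap.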
Fast Bounded Online Gradient Descent Algorithms for Scalable Kernel-Based Online Learning
Kernel-based online learning has often shown state-of-the-art performance for
many online learning tasks. It, however, suffers from a major shortcoming, that
is, the unbounded number of support vectors, making it non-scalable and
unsuitable for applications with large-scale datasets. In this work, we study
the problem of bounded kernel-based online learning that aims to constrain the
number of support vectors by a predefined budget. Although several algorithms
have been proposed in the literature, they are either computationally
inefficient due to their intensive budget maintenance strategies or
ineffective due to their use of the simple Perceptron algorithm. To overcome
these limitations, we propose a
framework for bounded kernel-based online learning based on an online gradient
descent approach. We propose two efficient algorithms of bounded online
gradient descent (BOGD) for scalable kernel-based online learning: (i) BOGD by
maintaining support vectors using uniform sampling, and (ii) BOGD++ by
maintaining support vectors using non-uniform sampling. We present a
theoretical analysis of the regret bounds for both algorithms, and find
promising empirical performance in terms of both efficacy and efficiency by
comparing them to several well-known algorithms for bounded kernel-based
online learning on large-scale datasets.
Comment: ICML201
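The uniform-sampling variant can be sketched in a few lines: take a kernel gradient step on each hinge-loss violation, and when the support set exceeds the budget, discard a uniformly sampled support vector. This is a much-simplified illustration; the published BOGD algorithms also re-scale the remaining coefficients during budget maintenance, which this sketch omits:

```python
import math
import random

def rbf(x, y, gamma=1.0):
    """Gaussian RBF kernel between two equal-length tuples."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

class BudgetedKernelOGD:
    """Simplified budgeted kernel online gradient descent (uniform sampling)."""

    def __init__(self, budget=20, eta=0.2, rng=None):
        self.budget, self.eta = budget, eta
        self.sv = []  # support set: list of (example, coefficient)
        self.rng = rng or random.Random(0)

    def predict(self, x):
        return sum(a * rbf(x, v) for v, a in self.sv)

    def update(self, x, y):
        if y * self.predict(x) < 1:          # hinge-loss gradient step
            self.sv.append((x, self.eta * y))
            if len(self.sv) > self.budget:   # budget maintenance:
                self.sv.pop(self.rng.randrange(len(self.sv)))  # uniform drop

# Toy stream: label is the sign of the first coordinate.
rng = random.Random(1)
model = BudgetedKernelOGD(budget=20)
mistakes = 0
for _ in range(500):
    x = (rng.uniform(-1, 1), rng.uniform(-1, 1))
    y = 1 if x[0] > 0 else -1
    if model.predict(x) * y <= 0:
        mistakes += 1
    model.update(x, y)
```

The support set never exceeds the budget, so prediction cost stays bounded no matter how long the stream runs, which is the scalability property the paper targets.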
Geometric Prior Based Deep Human Point Cloud Geometry Compression
The emergence of digital avatars has driven an exponential increase in the
demand for human point clouds with realistic and intricate details. The
compression of such data becomes challenging with overwhelming data amounts
comprising millions of points. Herein, we leverage the human geometric prior in
geometry redundancy removal of point clouds, greatly promoting the compression
performance. More specifically, the prior provides topological constraints as
geometry initialization, allowing adaptive adjustments with a compact parameter
set that could be represented with only a few bits. Therefore, we can envisage
high-resolution human point clouds as a combination of geometric priors and
structural deviations. The priors could first be derived with an aligned point
cloud, and subsequently the difference of features is compressed into a compact
latent code. The proposed framework can operate in a plug-and-play fashion
with existing learning-based point cloud compression methods. Extensive
results show that our approach significantly improves the compression
performance without deteriorating the quality, demonstrating its promise in a
variety of applications.
- …