178 research outputs found
A study on the history of urban morphology in China based on discourse analysis
[EN] Urban morphology is a method widely used in China in the fields of urban design and urban conservation. Since its introduction to the Chinese context about 30 years ago, the key ideas and concepts of urban morphology have undergone a significant process of being 'lost in translation'. Different origins of morphological thought, different translations, and different disciplinary contexts have together produced a chaotic discourse. This paper reviews the key Chinese articles in the field of urban morphology since 1982 and draws out a group of persistent keywords, such as urban form, growth mechanism, evolution and axis, that characterize the morphological approach to urban issues, in order to trace their unusual evolutionary process. By reviewing how the definitions of these keywords have shifted, the paper aims to generate an evolutionary diagram of landmark ideas and concepts.
Zhang, L.; Lu, A. (2018). A study on the history of urban morphology in China based on discourse analysis. In: 24th ISUF International Conference. Book of Papers. Editorial Universitat Politècnica de València, pp. 1471-1480. https://doi.org/10.4995/ISUF2017.2017.5981
BL-MNE: Emerging Heterogeneous Social Network Embedding through Broad Learning with Aligned Autoencoder
Network embedding aims at projecting network data into a low-dimensional feature space, where each node is represented as a unique feature vector and the network structure can be effectively preserved. In recent years, more and more online application service sites can be represented as massive and complex networks, which are extremely challenging for traditional machine learning algorithms to handle. Effective embedding of complex network data into low-dimensional feature representations can both save data storage space and make traditional machine learning algorithms applicable to network data. Network embedding performance degrades greatly, however, if the networks have a sparse structure, like emerging networks with few connections. In this paper, we propose to learn the embedding representation of a target emerging network in a broad learning setting, where the emerging network is aligned with other external mature networks at the same time. To solve the problem, a new embedding framework, "Deep alIgned autoencoder based eMbEdding" (DIME), is introduced. DIME handles the diverse links and attributes in a unified analytic framework based on broad learning, and introduces the concept of multiple aligned attributed heterogeneous social networks to model the network structure. A set of meta paths is introduced, which define various kinds of connections among users via the heterogeneous link and attribute information. The closeness among users in the networks is defined by meta proximity scores, which are fed into DIME to learn the embedding vectors of users in the emerging network. Extensive experiments on real-world aligned social networks demonstrate the effectiveness of DIME in learning the emerging network embedding vectors.
Comment: 10 pages, 9 figures, 4 tables. Full paper accepted by ICDM 2017. In: Proceedings of the 2017 IEEE International Conference on Data Mining.
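The abstract describes DIME only at a high level. As a rough, hedged illustration of the "aligned autoencoder" idea (not the authors' implementation), the sketch below couples two plain autoencoders, one per network, and adds an alignment penalty that pulls together the latent codes of users shared by both networks. All names (Autoencoder, aligned_loss) and the toy dimensions are assumptions for illustration; the real DIME operates on multiple meta-proximity matrices built from heterogeneous links and attributes.

```python
# Minimal sketch of an aligned-autoencoder embedding (illustrative only, not DIME itself).
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, in_dim, emb_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, emb_dim))
        self.decoder = nn.Sequential(nn.Linear(emb_dim, 64), nn.ReLU(), nn.Linear(64, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def aligned_loss(x_t, x_s, anchors_t, anchors_s, ae_t, ae_s, alpha=1.0):
    """Reconstruction loss on both networks plus an alignment term on anchor users.

    x_t, x_s     -- hypothetical per-user meta-proximity matrices of the target
                    (emerging) and source (mature) networks, one row per user.
    anchors_t/_s -- index tensors of equal length selecting the corresponding
                    anchor (shared) users in each network.
    """
    z_t, rec_t = ae_t(x_t)
    z_s, rec_s = ae_s(x_s)
    mse = nn.functional.mse_loss
    loss = mse(rec_t, x_t) + mse(rec_s, x_s)
    # Pull the embeddings of the same anchor-linked user in both networks together.
    return loss + alpha * mse(z_t[anchors_t], z_s[anchors_s])

# Toy usage with random data standing in for meta-proximity scores.
x_target = torch.rand(100, 32)   # emerging (sparse) network
x_source = torch.rand(500, 32)   # mature, aligned network
anchors_t = torch.arange(50)     # first 50 users assumed shared across networks
anchors_s = torch.arange(50)
ae_t, ae_s = Autoencoder(32, 16), Autoencoder(32, 16)
opt = torch.optim.Adam(list(ae_t.parameters()) + list(ae_s.parameters()), lr=1e-3)
for _ in range(10):
    opt.zero_grad()
    loss = aligned_loss(x_target, x_source, anchors_t, anchors_s, ae_t, ae_s)
    loss.backward()
    opt.step()
```

The alignment term is what lets the sparse emerging network borrow structure from the mature one: anchor users act as the bridge through which information transfers.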
Event-guided Multi-patch Network with Self-supervision for Non-uniform Motion Deblurring
Contemporary deep-learning multi-scale deblurring models suffer from several issues: 1) they perform poorly on non-uniformly blurred images and videos; 2) simply increasing the model depth with finer-scale levels does not improve deblurring; 3) individual RGB frames contain limited motion information for deblurring; 4) previous models have limited robustness to spatial transformations and noise. Below, we extend the DMPHN model with several mechanisms to address these issues: I) we present a novel self-supervised event-guided deep hierarchical Multi-patch Network (MPN) to deal with blurry images and videos via fine-to-coarse hierarchical localized representations; II) we propose a novel stacked pipeline, StackMPN, to improve deblurring performance as network depth increases; III) we propose an event-guided architecture that exploits motion cues contained in videos to tackle complex blur in videos; IV) we propose a novel self-supervised step that exposes the model to random transformations (rotations, scale changes) and makes it robust to Gaussian noise. Our MPN achieves the state of the art on the GoPro and VideoDeblur datasets with a 40x faster runtime compared to current multi-scale methods. Taking 30 ms to process a 1280x720 image, it is the first real-time deep motion-deblurring model for 720p images at 30 fps. For StackMPN, we obtain significant improvements of over 1.2 dB on the GoPro dataset by increasing the network depth. Utilizing the event information and self-supervision further boosts results to 33.83 dB.
Comment: International Journal of Computer Vision. arXiv admin note: substantial text overlap with arXiv:1904.0346
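For readers unfamiliar with the multi-patch idea this abstract builds on, the sketch below is a rough, hedged two-level illustration (not the authors' DMPHN/MPN code): the finest level deblurs small patches, and its stitched output is fed as a residual cue when the coarser level processes the whole image. The TinyEncDec block and all sizes are assumptions for illustration; the real model uses deeper encoder-decoder levels, stacking, event inputs and the self-supervised consistency step described above.

```python
# Rough sketch of fine-to-coarse multi-patch deblurring (illustrative only).
import torch
import torch.nn as nn

class TinyEncDec(nn.Module):
    """Stand-in encoder-decoder block; the real model uses much deeper residual levels."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class TwoLevelMultiPatch(nn.Module):
    """Two-level hierarchy: level 1 sees a 2x2 grid of patches, level 2 sees the whole image."""
    def __init__(self):
        super().__init__()
        self.level1 = TinyEncDec()
        self.level2 = TinyEncDec()

    def forward(self, blurry):
        b, c, h, w = blurry.shape
        # Finest level: split the image into four patches and process each independently.
        patches = [blurry[:, :, i * h // 2:(i + 1) * h // 2, j * w // 2:(j + 1) * w // 2]
                   for i in range(2) for j in range(2)]
        outs = [self.level1(p) for p in patches]
        # Stitch the patch outputs back to full resolution.
        top = torch.cat(outs[:2], dim=3)
        bottom = torch.cat(outs[2:], dim=3)
        fine = torch.cat([top, bottom], dim=2)
        # Coarser level: whole image plus the fine-level residual cue.
        return self.level2(blurry + fine)

model = TwoLevelMultiPatch()
sharp_estimate = model(torch.rand(1, 3, 720, 1280))  # toy 720p-sized input
print(sharp_estimate.shape)  # torch.Size([1, 3, 720, 1280])
```

Because every level runs at full input resolution but the finest level only ever convolves small patches, depth can grow with localized detail rather than with downsampling, which is the property the abstract's fast 720p runtime relies on.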
- …