
    The CVD growth and the characterization of MoₓW₁₋ₓTe₂


    The Relationship between Public Service Efficiency of Government and Residential Political Trust in Hong Kong

    Hong Kong has a long history of efficient, clean, and self-disciplined government. Over the past two decades, however, new social development trends have emerged. This article examines the relationship between residents' political trust and the public service efficiency of the Hong Kong government from 1992 to 2015, and finds that public service efficiency has a significant effect on political trust in the government: the higher the efficiency of public services, the higher the political trust. After confirming this positive correlation through empirical analysis, the author explores paths by which the Hong Kong government could improve the quality and efficiency of its public services.
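
    To make the empirical design concrete, here is a minimal sketch of the kind of time-series regression the abstract describes, written with statsmodels. The yearly data are synthetic placeholders, not the paper's actual measurements, and all variable names are illustrative.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical yearly observations for 1992-2015 (illustrative only, not the
# paper's data): a public-service-efficiency index and a survey-based
# political-trust score.
years = np.arange(1992, 2016)
efficiency = np.random.default_rng(0).uniform(0.4, 0.9, size=years.size)
trust = 0.2 + 0.8 * efficiency + np.random.default_rng(1).normal(0, 0.05, size=years.size)

# OLS regression of trust on efficiency; a significant positive coefficient
# corresponds to the paper's finding that higher public-service efficiency
# goes with higher political trust.
X = sm.add_constant(efficiency)
model = sm.OLS(trust, X).fit()
print(model.summary())
```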

    FAC: 3D Representation Learning via Foreground Aware Feature Contrast

    Contrastive learning has recently demonstrated great potential for unsupervised pre-training in 3D scene understanding tasks. However, most existing work randomly selects point features as anchors while building contrast, leading to a clear bias toward background points, which often dominate 3D scenes. Object awareness and foreground-to-background discrimination are also neglected, making contrastive learning less effective. To tackle these issues, we propose a general foreground-aware feature contrast (FAC) framework that learns more effective point cloud representations in pre-training. FAC consists of two novel contrast designs that construct more effective and informative contrast pairs. The first builds positive pairs within the same foreground segment, where points tend to share the same semantics. The second prevents over-discrimination between 3D segments/objects and encourages foreground-to-background distinction at the segment level through adaptive feature learning in a Siamese correspondence network, which learns feature correlations within and across point cloud views. Visualization with point activation maps shows that our contrast pairs capture clear correspondences among foreground regions during pre-training. Quantitative experiments further show that FAC achieves superior knowledge transfer and data efficiency in various downstream 3D semantic segmentation and object detection tasks. Comment: 11 pages, IEEE/CVF Conference on Computer Vision and Pattern Recognition 2023 (CVPR 2023); the work is mainly supported by the Natural Science Foundation Project of Fujian Province (2020J01826).
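
    As a rough illustration of the first contrast design, the PyTorch sketch below builds an InfoNCE-style loss whose positive pairs are foreground points within the same segment. It is a generic reconstruction from the abstract, not the authors' released code; the Siamese correspondence network and adaptive feature learning are omitted.

```python
import torch
import torch.nn.functional as F

def segment_infonce(feats, seg_ids, fg_mask, tau=0.07):
    """Toy segment-aware InfoNCE in the spirit of FAC's first design
    (not the authors' implementation): two foreground points in the same
    segment form a positive pair; every other point acts as a negative.

    feats:   (N, D) per-point features
    seg_ids: (N,)   integer segment labels
    fg_mask: (N,)   bool, True for foreground points
    """
    feats = F.normalize(feats, dim=1)
    sim = feats @ feats.t() / tau                            # (N, N) logits
    eye = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(eye, float("-inf"))                # drop self-pairs
    pos = (seg_ids[:, None] == seg_ids[None, :]) \
          & fg_mask[:, None] & fg_mask[None, :] & ~eye       # positive-pair mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    has_pos = pos.any(dim=1)                                 # anchors with a positive
    pos_log_prob = log_prob.masked_fill(~pos, 0.0)           # zero out non-positives
    loss = -pos_log_prob[has_pos].sum(1) / pos[has_pos].sum(1)
    return loss.mean()
```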

    Identification of discrete-time output error model for industrial processes with time delay subject to load disturbance

    In this paper, a bias-eliminated output error model identification method is proposed for industrial processes with time delay subject to unknown load disturbance with deterministic dynamics. By viewing the output response arising from such load disturbance as a dynamic parameter for estimation, a recursive least-squares identification algorithm is developed in the discrete-time domain to estimate the linear model parameters together with the load disturbance response, while the integer delay parameter is derived by a one-dimensional search that minimizes the output fitting error. An auxiliary model is constructed to realize consistent estimation of the model parameters against stochastic noise. Moreover, dual adaptive forgetting factors are introduced, with tuning guidelines, to improve the convergence rates of estimating the model parameters and the load disturbance response, respectively. The convergence of model parameter estimation is analyzed with a rigorous proof. Illustrative examples of open- and closed-loop identification demonstrate the effectiveness and merit of the proposed identification method.
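
    For concreteness, the sketch below shows a single recursive least-squares update with a forgetting factor, the generic building block underlying the identification scheme. It is written from the abstract alone: the auxiliary model, the dual adaptive forgetting factors, and the one-dimensional delay search are not reproduced.

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.98):
    """One recursive least-squares update with forgetting factor lam
    (a generic building block of the method described, not the paper's
    full algorithm).

    theta: (n,) current parameter estimate
    P:     (n, n) covariance matrix
    phi:   (n,) regressor vector for this sample
    y:     measured output (scalar)
    """
    phi = phi.reshape(-1, 1)
    e = y - (phi.T @ theta.reshape(-1, 1)).item()   # prediction error
    K = P @ phi / (lam + (phi.T @ P @ phi).item())  # gain vector
    theta = theta + K.ravel() * e                   # parameter update
    P = (P - K @ phi.T @ P) / lam                   # covariance update
    return theta, P
```

    Calling rls_step once per sample, with lam slightly below 1, discounts old data geometrically, which is what lets the estimator track the slowly varying load-disturbance response.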

    AI-Generated Images as Data Source: The Dawn of Synthetic Era

    The advancement of visual intelligence is intrinsically tied to the availability of large-scale data. In parallel, generative Artificial Intelligence (AI) has unlocked the potential to create synthetic images that closely resemble real-world photographs. This prompts a compelling inquiry: how much could visual intelligence benefit from the advance of generative AI? This paper explores the concept of harnessing AI-generated images as a new data source, reshaping traditional modeling paradigms in visual intelligence. In contrast to real data, AI-generated data offer remarkable advantages, including unmatched abundance and scalability, the rapid generation of vast datasets, and the effortless simulation of edge cases. Building on the success of generative AI models, we examine the potential of their generated data in a range of applications, from training machine learning models to simulating scenarios for computational modeling, testing, and validation. We probe the technological foundations that support this use of generative AI, and discuss in depth the ethical, legal, and practical considerations that accompany this paradigm shift. Through an extensive survey of current technologies and applications, the paper presents a comprehensive view of the synthetic era in visual intelligence. A project associated with this paper can be found at https://github.com/mwxely/AIGS. Comment: 20 pages, 11 figures.
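
    A minimal example of the paradigm the survey discusses: a PyTorch dataset that treats a folder of AI-generated images as labeled training data, with labels taken from the generation prompt. The folder layout and class names below are hypothetical, not from the paper.

```python
import os
from PIL import Image
from torch.utils.data import Dataset

class SyntheticImageDataset(Dataset):
    """Illustrative sketch of using AI-generated images as training data:
    each image is assumed to live in a class-named subfolder, e.g.
    synthetic/cat/0001.png, where the class comes from the generation prompt.
    """

    def __init__(self, root, transform=None):
        self.samples = []
        self.classes = sorted(os.listdir(root))
        for label, cls in enumerate(self.classes):
            cls_dir = os.path.join(root, cls)
            for name in sorted(os.listdir(cls_dir)):
                self.samples.append((os.path.join(cls_dir, name), label))
        self.transform = transform

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        path, label = self.samples[idx]
        img = Image.open(path).convert("RGB")
        if self.transform is not None:
            img = self.transform(img)
        return img, label
```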

    DivAvatar: Diverse 3D Avatar Generation with a Single Prompt

    Text-to-Avatar generation has recently made significant strides thanks to advances in diffusion models. However, most existing work remains constrained by limited diversity, producing avatars with only subtle differences in appearance for a given text prompt. We design DivAvatar, a novel framework that generates diverse avatars, empowering 3D creators with a multitude of distinct and richly varied 3D avatars from a single text prompt. Unlike most existing work, which exploits scene-specific 3D representations such as NeRF, DivAvatar finetunes a 3D generative model (i.e., EVA3D), allowing diverse avatars to be generated simply by sampling noise at inference time. DivAvatar has two key designs that help achieve generation diversity and visual quality. The first is a noise sampling technique during the training phase that is critical for generating diverse appearances. The second is a semantic-aware zoom mechanism together with a novel depth loss: the former produces appearances of high textual fidelity by fine-tuning specific body parts separately, and the latter greatly improves geometry quality by smoothing the generated mesh in feature space. Extensive experiments show that DivAvatar is highly versatile in generating avatars of diverse appearances.
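
    The abstract does not give the exact form of the depth loss, so the following PyTorch snippet shows a generic depth-smoothness penalty as an illustrative stand-in: it penalizes large gradients in a rendered depth map so that the implied geometry stays smooth.

```python
import torch

def depth_smoothness_loss(depth):
    """Generic depth-smoothness penalty (an illustrative stand-in for the
    paper's unspecified depth loss): discourages abrupt jumps in a
    rendered depth map.

    depth: (B, 1, H, W) rendered depth maps
    """
    dx = (depth[..., :, 1:] - depth[..., :, :-1]).abs()  # horizontal gradients
    dy = (depth[..., 1:, :] - depth[..., :-1, :]).abs()  # vertical gradients
    return dx.mean() + dy.mean()
```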

    Self-Supervised 3D Action Representation Learning with Skeleton Cloud Colorization

    3D skeleton-based human action recognition has attracted increasing attention in recent years. Most existing work focuses on supervised learning, which requires large numbers of labeled action sequences that are often expensive and time-consuming to annotate. In this paper, we address self-supervised 3D action representation learning for skeleton-based action recognition. We investigate self-supervised representation learning and design a novel skeleton cloud colorization technique that is capable of learning spatial and temporal skeleton representations from unlabeled skeleton sequence data. We represent a skeleton action sequence as a 3D skeleton cloud and colorize each point in the cloud according to its temporal and spatial order in the original (unannotated) skeleton sequence. Leveraging the colorized skeleton point cloud, we design an auto-encoder framework that learns spatial-temporal features from the artificial color labels of skeleton joints effectively. Specifically, we design a two-stream pretraining network that leverages fine-grained and coarse-grained colorization to learn multi-scale spatial-temporal features. In addition, we design a Masked Skeleton Cloud Repainting task that pretrains the auto-encoder framework to learn informative representations. We evaluate our skeleton cloud colorization approach with linear classifiers trained under different configurations, including unsupervised, semi-supervised, fully-supervised, and transfer learning settings. Extensive experiments on the NTU RGB+D, NTU RGB+D 120, PKU-MMD, NW-UCLA, and UWA3D datasets show that the proposed method outperforms existing unsupervised and semi-supervised 3D action recognition methods by large margins and achieves competitive performance in supervised 3D action recognition as well. Comment: This work is an extension of our ICCV 2021 paper [arXiv:2108.01959] https://openaccess.thecvf.com/content/ICCV2021/html/Yang_Skeleton_Cloud_Colorization_for_Unsupervised_3D_Action_Representation_Learning_ICCV_2021_paper.htm
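
    The colorization idea can be sketched in a few lines of NumPy (an illustration of the described technique, not the authors' code): each point of the flattened skeleton cloud receives one color channel encoding its temporal order and another encoding its spatial (joint) order.

```python
import numpy as np

def colorize_skeleton_cloud(seq):
    """Minimal sketch of skeleton cloud colorization: flatten a skeleton
    sequence (T frames x J joints x 3 coords) into a point cloud and
    attach an RGB color encoding each point's temporal order (frame
    index, red channel) and spatial order (joint index, green channel).

    seq: (T, J, 3) array of joint coordinates
    Returns: (T*J, 6) array of [x, y, z, r, g, b]
    """
    T, J, _ = seq.shape
    t_order = np.repeat(np.arange(T), J) / max(T - 1, 1)  # red: time order
    j_order = np.tile(np.arange(J), T) / max(J - 1, 1)    # green: joint order
    colors = np.stack([t_order, j_order, np.zeros(T * J)], axis=1)
    return np.concatenate([seq.reshape(-1, 3), colors], axis=1)
```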