
    The CVD growth and the characterization of MoₓW₁₋ₓTe₂

    Get PDF

    The Relationship between Public Service Efficiency of Government and Residential Political Trust in Hong Kong

    Get PDF
    Hong Kong has long been known for its highly efficient, clean, and self-disciplined government, yet over the past two decades new social development trends have emerged. This article examines the relationship between residents' political trust and the public service efficiency of the government in Hong Kong from 1992 to 2015, and finds that public service efficiency has a significant effect on political trust in the Hong Kong government: the higher the efficiency of public services, the higher the political trust. Having confirmed this positive correlation through empirical analysis, the author explores paths by which the Hong Kong government can improve the quality and efficiency of its public services.
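
    As a rough illustration of the empirical analysis described above, the sketch below fits an ordinary least-squares regression of a political trust index on a public service efficiency index. All data and variable names are synthetic assumptions for illustration, not the paper's dataset.

```python
# Minimal OLS sketch of the trust-vs-efficiency relationship; data are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical yearly observations for 1992-2015 (24 years), purely illustrative.
years = np.arange(1992, 2016)
efficiency = rng.uniform(0.5, 1.0, size=years.size)           # public service efficiency index
trust = 0.8 * efficiency + rng.normal(0.0, 0.05, years.size)  # residents' political trust index

# OLS fit: trust = b0 + b1 * efficiency
X = np.column_stack([np.ones_like(efficiency), efficiency])
(b0, b1), *_ = np.linalg.lstsq(X, trust, rcond=None)
print(f"intercept={b0:.3f}, slope={b1:.3f}")  # positive slope: higher efficiency, higher trust
```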

    FAC: 3D Representation Learning via Foreground Aware Feature Contrast

    Full text link
    Contrastive learning has recently demonstrated great potential for unsupervised pre-training in 3D scene understanding tasks. However, most existing work randomly selects point features as anchors while building contrast, leading to a clear bias toward background points, which often dominate 3D scenes. Object awareness and foreground-to-background discrimination are also neglected, making contrastive learning less effective. To tackle these issues, we propose a general foreground-aware feature contrast (FAC) framework to learn more effective point cloud representations in pre-training. FAC consists of two novel contrast designs that construct more effective and informative contrast pairs. The first builds positive pairs within the same foreground segment, where points tend to share the same semantics. The second prevents over-discrimination between 3D segments/objects and encourages foreground-to-background distinction at the segment level through a Siamese correspondence network that adaptively learns feature correlations within and across point cloud views. Visualization with point activation maps shows that our contrast pairs capture clear correspondences among foreground regions during pre-training. Quantitative experiments also show that FAC achieves superior knowledge transfer and data efficiency in various downstream 3D semantic segmentation and object detection tasks. Comment: 11 pages, IEEE/CVF Conference on Computer Vision and Pattern Recognition 2023 (CVPR 2023); the work is mainly supported by the Natural Science Foundation Project of Fujian Province (2020J01826).
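
    As a rough sketch of the segment-level, foreground-aware contrast described above (not the authors' implementation; the pooling scheme and all names are assumptions), one can pool point features per foreground segment in two augmented views and apply an InfoNCE loss in which the same segment across views forms the positive pair:

```python
# Illustrative foreground-aware segment contrast in PyTorch.
import torch
import torch.nn.functional as F

def foreground_infonce(feat_a, feat_b, seg_ids, fg_mask, tau=0.07):
    """feat_a, feat_b: (N, D) point features from two augmented views of the same
    scene; seg_ids: (N,) integer segment labels; fg_mask: (N,) True for foreground."""
    za = F.normalize(feat_a, dim=1)
    zb = F.normalize(feat_b, dim=1)
    segs = seg_ids[fg_mask].unique()
    # Pool each foreground segment into a single prototype per view.
    pa = torch.stack([za[fg_mask & (seg_ids == s)].mean(0) for s in segs])
    pb = torch.stack([zb[fg_mask & (seg_ids == s)].mean(0) for s in segs])
    pa, pb = F.normalize(pa, dim=1), F.normalize(pb, dim=1)
    logits = pa @ pb.t() / tau                        # (S, S) cross-view similarities
    labels = torch.arange(len(segs), device=logits.device)
    # Diagonal entries (same segment, other view) are positives; the rest are negatives.
    return F.cross_entropy(logits, labels)
```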

    Identification of discrete-time output error model for industrial processes with time delay subject to load disturbance

    Get PDF
    In this paper, a bias-eliminated output error model identification method is proposed for industrial processes with time delay subject to unknown load disturbance with deterministic dynamics. By viewing the output response arising from such load disturbance as a dynamic parameter for estimation, a recursive least-squares identification algorithm is developed in the discrete-time domain to estimate the linear model parameters together with the load disturbance response, while the integer delay parameter is derived via a one-dimensional search that minimizes the output fitting error. An auxiliary model is constructed to realize consistent estimation of the model parameters against stochastic noise. Moreover, dual adaptive forgetting factors are introduced, with tuning guidelines, to improve the convergence rates of estimating the model parameters and the load disturbance response, respectively. The convergence of model parameter estimation is analyzed with a rigorous proof. Illustrative examples of open- and closed-loop identification demonstrate the effectiveness and merit of the proposed identification method.
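
    The core of such a scheme is a recursive least-squares (RLS) update with a forgetting factor. The sketch below identifies a first-order discrete-time model with integer input delay; it is a simplified illustration of the general technique, not the paper's bias-eliminated algorithm, and in the paper the delay itself would additionally be chosen by a one-dimensional search minimizing the output fitting error.

```python
# Simplified RLS with forgetting factor for y[k] = -a1*y[k-1] + b1*u[k-d].
import numpy as np

def rls_identify(u, y, d, lam=0.98):
    """Estimate theta = [a1, b1] given input u, output y, and integer delay d."""
    theta = np.zeros(2)
    P = np.eye(2) * 1e3                                  # large initial covariance
    for k in range(max(1, d), len(y)):
        phi = np.array([-y[k - 1], u[k - d]])            # regressor vector
        gain = P @ phi / (lam + phi @ P @ phi)           # RLS gain
        theta += gain * (y[k] - phi @ theta)             # prediction-error update
        P = (P - np.outer(gain, phi @ P)) / lam          # covariance update with forgetting
    return theta

# Simulate a stable plant with delay d=3 and recover its parameters.
rng = np.random.default_rng(1)
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(3, 500):
    y[k] = 0.7 * y[k - 1] + 0.5 * u[k - 3] + 0.01 * rng.standard_normal()
print(rls_identify(u, y, d=3))  # approx. [-0.7, 0.5] under this sign convention
```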

    AI-Generated Images as Data Source: The Dawn of Synthetic Era

    Full text link
    The advancement of visual intelligence is intrinsically tethered to the availability of large-scale data. In parallel, generative Artificial Intelligence (AI) has unlocked the potential to create synthetic images that closely resemble real-world photographs. This prompts a compelling inquiry: how much could visual intelligence benefit from the advance of generative AI? This paper explores the innovative concept of harnessing AI-generated images as new data sources, reshaping traditional modeling paradigms in visual intelligence. In contrast to real data, AI-generated data exhibit remarkable advantages, including unmatched abundance and scalability, the rapid generation of vast datasets, and the effortless simulation of edge cases. Building on the success of generative AI models, we examine the potential of their generated data in a range of applications, from training machine learning models to simulating scenarios for computational modeling, testing, and validation. We probe the technological foundations that support this groundbreaking use of generative AI, engaging in an in-depth discussion of the ethical, legal, and practical considerations that accompany this transformative paradigm shift. Through an exhaustive survey of current technologies and applications, this paper presents a comprehensive view of the synthetic era in visual intelligence. A project associated with this paper can be found at https://github.com/mwxely/AIGS. Comment: 20 pages, 11 figures.
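
    As a minimal sketch of the paradigm the paper surveys, pre-generated synthetic images can be treated as an ordinary training set. The directory layout and path below are assumptions for illustration; any generative model could have produced the images offline.

```python
# Train a classifier on a folder of AI-generated images (path/layout are assumed).
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
# Expected layout: synthetic_data/<class_name>/*.png, generated offline.
train_set = datasets.ImageFolder("synthetic_data", transform=tf)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(num_classes=len(train_set.classes))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:        # one epoch over the synthetic dataset
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```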

    StyleGaussian: Instant 3D Style Transfer with Gaussian Splatting

    Full text link
    We introduce StyleGaussian, a novel 3D style transfer technique that instantly transfers the style of any image to a 3D scene at 10 frames per second (fps). Leveraging 3D Gaussian Splatting (3DGS), StyleGaussian achieves style transfer without compromising real-time rendering ability or multi-view consistency. It performs instant style transfer in three steps: embedding, transfer, and decoding. First, 2D VGG scene features are embedded into reconstructed 3D Gaussians. Next, the embedded features are transformed according to a reference style image. Finally, the transformed features are decoded into stylized RGB images. StyleGaussian has two novel designs. The first is an efficient feature rendering strategy, used while embedding the VGG features, that first renders low-dimensional features and then maps them into high-dimensional ones. This cuts memory consumption significantly and enables 3DGS to render high-dimensional, memory-intensive features. The second is a K-nearest-neighbor-based 3D CNN. Serving as the decoder for the stylized features, it eliminates the 2D CNN operations that compromise strict multi-view consistency. Extensive experiments show that StyleGaussian achieves instant 3D stylization with superior stylization quality while preserving real-time rendering and strict multi-view consistency. Project page: https://kunhao-liu.github.io/StyleGaussian
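
    Two of the ideas above can be sketched as follows, reconstructed from the abstract rather than the authors' code (all dimensions and names are assumptions): a 1x1 projection that lifts cheap low-dimensional rendered features to VGG dimensionality, and AdaIN-style moment matching as a stand-in for the feature transfer step.

```python
# Illustrative feature up-projection and AdaIN-style transfer in PyTorch.
import torch
from torch import nn

class FeatureUpProject(nn.Module):
    """Lift memory-cheap rendered features (e.g., 32-d) to VGG space (e.g., 256-d)."""
    def __init__(self, low_dim=32, high_dim=256):
        super().__init__()
        self.proj = nn.Conv2d(low_dim, high_dim, kernel_size=1)

    def forward(self, feat_low):           # (B, low_dim, H, W) rendered feature map
        return self.proj(feat_low)         # (B, high_dim, H, W)

def adain(content, style, eps=1e-5):
    """Match per-channel mean/std of content features to those of style features."""
    c_mu = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mu = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True) + eps
    return (content - c_mu) / c_std * s_std + s_mu
```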

    DivAvatar: Diverse 3D Avatar Generation with a Single Prompt

    Full text link
    Text-to-Avatar generation has recently made significant strides due to advancements in diffusion models. However, most existing work remains constrained by limited diversity, producing avatars with only subtle differences in appearance for a given text prompt. We design DivAvatar, a novel framework that generates diverse avatars, empowering 3D creatives with a multitude of distinct and richly varied 3D avatars from a single text prompt. Unlike most existing work that exploits scene-specific 3D representations such as NeRF, DivAvatar fine-tunes a 3D generative model (i.e., EVA3D), allowing diverse avatar generation simply by sampling noise at inference time. DivAvatar has two key designs that help achieve generation diversity and visual quality. The first is a noise sampling technique during the training phase, which is critical for generating diverse appearances. The second comprises a semantic-aware zoom mechanism and a novel depth loss: the former produces appearances of high textual fidelity by separately fine-tuning specific body parts, and the latter greatly improves geometry quality by smoothing the generated mesh in the feature space. Extensive experiments show that DivAvatar is highly versatile in generating avatars of diverse appearances.
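
    As an illustrative stand-in for the depth loss mentioned above (the abstract does not give its exact formulation), a generic depth-smoothness penalty over a rendered depth map looks like this:

```python
# Generic depth-smoothness penalty (an assumption, not DivAvatar's exact loss).
import torch

def depth_smoothness_loss(depth):
    """depth: (B, 1, H, W) rendered depth; returns mean absolute depth gradient."""
    dx = (depth[..., :, 1:] - depth[..., :, :-1]).abs().mean()
    dy = (depth[..., 1:, :] - depth[..., :-1, :]).abs().mean()
    return dx + dy
```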