
    StyleGAN3: Generative Networks for Improving the Equivariance of Translation and Rotation

    StyleGAN uses style vectors to control facial pose and identity features, and noise inputs to control details such as hair, wrinkles, and skin color. The generated images vary slightly between different versions of StyleGAN. This study therefore focuses on comparing the performance of StyleGAN2 against the two modified versions of StyleGAN3. We trained on the FFHQ dataset and evaluated the models using FID, EQ-T, and EQ-R. We found that StyleGAN3 is the better generative network for improving equivariance. Our findings have a positive impact on the creation of animation and video.
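EQ-T, one of the metrics above, measures how closely generating from a translated input matches translating the generated output (reported as a PSNR in dB: higher means more translation-equivariant). The sketch below is only a toy illustration of that idea using a hand-rolled, circularly-shifting box filter in place of a generator; it is not the official StyleGAN3 metric, and all function names here are our own.

```python
import numpy as np

def translate(img, dx):
    """Circular horizontal shift by dx pixels."""
    return np.roll(img, dx, axis=1)

def box_blur(img):
    """1D box filter along x; translation-equivariant by construction."""
    return (np.roll(img, -1, axis=1) + img + np.roll(img, 1, axis=1)) / 3.0

def eq_t_psnr(f, img, dx=5):
    """Compare f(translate(img)) with translate(f(img)) as a PSNR in dB."""
    a = f(translate(img, dx))
    b = translate(f(img), dx)
    mse = np.mean((a - b) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(1.0 / mse)

rng = np.random.default_rng(0)
img = rng.random((32, 32))
score = eq_t_psnr(box_blur, img)  # perfectly equivariant -> infinite PSNR
```

A real generator is only approximately equivariant, so the PSNR is finite; StyleGAN3's design changes aim to push this number up relative to StyleGAN2.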

    Rate-Distortion Optimized Post-Training Quantization for Learned Image Compression

    Quantizing a floating-point neural network to a fixed-point representation is crucial for Learned Image Compression (LIC) because it ensures decoding consistency for interoperability and reduces the space-time complexity of implementations. Existing solutions often have to retrain the network for quantization, which is time-consuming and impractical. This work applies Post-Training Quantization (PTQ) directly to pretrained, off-the-shelf LIC models. We theoretically prove that minimizing the mean squared error (MSE) in PTQ is sub-optimal for the compression task, and we therefore develop a novel Rate-Distortion (R-D) Optimized PTQ (RDO-PTQ) to best retain compression performance. RDO-PTQ needs to compress only a few images (e.g., 10) to optimize the transformation of the weights, biases, and activations of the underlying LIC model from its native 32-bit floating-point (FP32) format to 8-bit fixed-point (INT8) precision for subsequent fixed-point inference. Experiments reveal the outstanding efficiency of the proposed method on different LICs, showing the closest coding performance to their floating-point counterparts. Moreover, our method is a lightweight, plug-and-play approach that requires no model retraining, which is attractive to practitioners.
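For context on the FP32-to-INT8 step the abstract describes, the sketch below shows plain min-max affine quantization of a single tensor — i.e., the MSE-style baseline the paper argues is sub-optimal, not the RDO-PTQ method itself, whose scales would instead be tuned against a rate-distortion objective on a few calibration images.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Per-tensor affine (min-max) quantization of an FP32 tensor to INT8."""
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    zero_point = np.round(-lo / scale) - 128  # maps lo -> -128
    q = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover an FP32 approximation from INT8 codes."""
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64), dtype=np.float32)  # stand-in for a weight tensor
q, s, z = quantize_int8(w)
w_hat = dequantize(q, s, z)
mse = float(np.mean((w - w_hat) ** 2))  # small, on the order of scale**2 / 12
```

RDO-PTQ's point is that minimizing this per-tensor MSE does not minimize the end-to-end rate-distortion loss of the codec, so the scales should be chosen against that loss instead.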

    The Development of NBA in China: A Glocalization Perspective

    The growing sport industry and 1.3 billion potential consumers in China have been garnering tremendous attention from more and more overseas professional sport leagues. Comparatively, the National Basketball Association (NBA) has had remarkable success in the Chinese market. From the perspective of sport competition or marketing operations, the NBA’s achievement in China provides a model for other overseas sport leagues. This case study was organized by summarizing the developmental history of the NBA in China, analyzing its current promotional practices, investigating its marketing strategies, and extrapolating practical references for other sport leagues aiming to penetrate the Chinese marketplace. From the perspective of glocalization, multinational corporations should combine both standardized and adapted elements to conceptualize globally and act locally (Tanahashi, 2008). By taking this approach, marketers can meet the needs of local consumers effectively while still maintaining some extent of global standardization (Singh, Kumar, & Baack, 2005). To obtain an in-depth understanding of the NBA's globalization and localization in China, we conducted one-on-one interviews with Chinese academic scholars in sport management and practitioners in the Chinese basketball industry and NBA China. Two focus groups with six participants each were conducted to learn how Chinese consumers perceive NBA products. The qualitative data analysis was organized around four major aspects: products, media, management, and public relations, which are highlighted in the glocalization of transnational corporations (Yang, 2003; Zhang, 2007). The current case study concluded that although the NBA has achieved huge successes in building a large fan base, increasing media exposure, and garnering net income since its entry into China, it still faces many challenges.
One viable solution for the NBA is to bring authentic American cultural commodities while adding Chinese characteristics to accommodate local fans. By combining global heroes such as Michael Jordan and Kobe Bryant with local heroes such as Yao Ming, Yi Jianlian, and Jeremy Lin, NBA games will continue to appeal to millions of Chinese fans. Meanwhile, NBA management needs to continue seeking ways to work out and through the differences in government models and cultural contexts between China and the United States. Some viable actions include the promotion of Chinese youth basketball, training services for elite basketball players, and government-level public relations. In addition, this study suggests that the research framework of glocalization remains an intriguing line of inquiry for other sport organizations or leagues seeking expansion to overseas markets.

    Dynamic evolution of ceftazidime–avibactam resistance due to interchanges between blaKPC-2 and blaKPC-145 during treatment of Klebsiella pneumoniae infection

    Background: The emergence of ceftazidime–avibactam (CZA) resistance among carbapenem-resistant Klebsiella pneumoniae (CRKP) is of major concern due to limited therapeutic options.
    Methods: In this study, 10 CRKP strains were isolated from different samples of a patient with CRKP infection receiving CZA treatment. Whole-genome sequencing (WGS) and conjugation experiments were performed to determine the transferability of the carbapenem resistance gene.
    Results: The infection began with a KPC-2-producing K. pneumoniae (CZA MIC = 2 μg/mL, imipenem MIC ≥ 16 μg/mL). After 20 days of CZA treatment, the strains acquired a T263A amino acid substitution encoded by a novel KPC variant gene, blaKPC-145, which restored carbapenem susceptibility but conferred CZA resistance (CZA MIC ≥ 256 μg/mL, imipenem MIC = 1 μg/mL). The blaKPC-145 gene was located on a 148,185-bp untransformable IncFII-type plasmid. The subsequent use of carbapenems against the KPC-145-producing K. pneumoniae infection led to a reversion to KPC-2 production (CZA MIC = 2 μg/mL, imipenem MIC ≥ 16 μg/mL). WGS analysis showed that all isolates belonged to ST11-KL47 and differed by only 14 SNPs, implying that these blaKPC-positive K. pneumoniae isolates might originate from a single clone that persisted throughout the 120-day treatment period.
    Conclusion: This is the first report of CZA resistance caused by blaKPC-145, which emerged during CZA treatment of a blaKPC-2-positive K. pneumoniae-associated infection in China. These findings indicate that routine testing of antibiotic susceptibility and carbapenemase genotype is essential during CZA treatment.

    Generative Software Engineering

    The rapid development of deep learning techniques, improved computational power, and the availability of vast training data have led to significant advancements in pre-trained models and large language models (LLMs). Pre-trained models based on architectures such as BERT and the Transformer, as well as LLMs like ChatGPT, have demonstrated remarkable language capabilities and found applications in software engineering (SE). SE tasks can be divided into many categories, among which generative tasks are of greatest interest to researchers. Pre-trained models and LLMs possess powerful language representation and contextual awareness, enabling them to leverage diverse training data and adapt to generative tasks through fine-tuning, transfer learning, and prompt engineering. These advantages make them effective tools for generative tasks, on which they have demonstrated excellent performance. In this paper, we present a comprehensive literature review of generative tasks in SE using pre-trained models and LLMs. We categorize SE generative tasks based on software engineering methodologies and summarize the advanced pre-trained models and LLMs involved, as well as the datasets and evaluation metrics used. Additionally, we identify key strengths, weaknesses, and gaps in existing approaches and propose potential research directions. This review aims to provide researchers and practitioners with an in-depth analysis of, and guidance on, the application of pre-trained models and LLMs to generative tasks within SE.