
    The core starbursts of the galaxy NGC 3628: Radio very long baseline interferometry and X-ray studies

    We present radio very long baseline interferometry (VLBI) and X-ray studies of the starburst galaxy NGC 3628. The VLBI observation at 1.5 GHz reveals seven compact (0.7-7 parsec) radio sources in the central ~250 parsec region of NGC 3628. Based on their morphology, high radio brightness temperatures (10^5-10^7 K), and steep radio spectra, none of these seven sources can be associated with active galactic nuclei (AGNs); instead, they can be identified as supernova remnants (SNRs), three of which appear consistent with partial shells. Notably, one of them (W2) is likely a nascent radio supernova, and its presence is consistent with the star formation rate of NGC 3628 under a canonical initial mass function. The VLBI observation provides the first precise measurement of the diameters of the radio sources in NGC 3628, which allows us to fit a well-constrained radio surface brightness - diameter (Σ-D) correlation that includes the detected SNRs. Furthermore, future VLBI observations can measure the expansion velocities of the detected SNRs. In addition to our radio VLBI study, we analyze Chandra and XMM-Newton spectra of NGC 3628. The spectral fitting indicates that SNR activity can well account for the observed X-ray emission. Together with the Chandra X-ray image, it further suggests that the X-ray emission is likely sustained by a galactic-scale outflow triggered by SN activity. These results provide strong evidence that SN-triggered activity plays a critical role in generating both the radio and X-ray emission in NGC 3628, and further suggest that the galaxy is at an early stage of its starburst.
    Comment: 15 pages, 4 tables and 6 figures, accepted for publication in Ap
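    To make the Σ-D fitting step concrete, here is a minimal sketch of how such a power-law correlation, Σ = A·D^(-β), is typically fit in log-log space. The diameters and surface brightnesses below are placeholders for illustration, not the paper's measurements.

```python
import numpy as np

# Hypothetical SNR measurements (placeholders, not the paper's data):
# diameters D in parsec and 1.5 GHz surface brightnesses Sigma in
# W m^-2 Hz^-1 sr^-1.
D = np.array([0.7, 1.5, 2.8, 4.1, 5.5, 6.2, 7.0])
Sigma = np.array([3e-19, 8e-20, 3e-20, 1.5e-20, 9e-21, 7e-21, 5e-21])

# The Sigma-D relation is usually modeled as a power law,
# Sigma = A * D**(-beta), which is linear in log-log space:
# log10(Sigma) = log10(A) - beta * log10(D).
slope, intercept = np.polyfit(np.log10(D), np.log10(Sigma), 1)
beta = -slope
A = 10**intercept

print(f"Sigma-D fit: Sigma = {A:.2e} * D^(-{beta:.2f})")
```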

    Research on China’s fiscal and taxation policy of new energy vehicle industry technological innovation

    Technological innovation in the new energy vehicle industry is conducive to achieving China’s major strategic goal of ‘carbon peak and carbon neutrality’. This research is an empirical study of data on 14 listed new energy vehicle companies from 2012 to 2019. It used the entropy weight method to construct a technological innovation index from four indicators: research and development (R&D) investment, fixed-asset investment, intangible assets, and patent application volume. Taking fiscal subsidies and tax burden as independent variables, a fixed effects model was used to analyze the impact of fiscal and taxation policies on technological innovation in the new energy vehicle industry. The results show that fiscal subsidies encourage new energy vehicle companies to pursue technological innovation; that the tax burden has no significant impact on their technological innovation; and that enterprise scale, enterprise age, and the proportion of R&D personnel among total employees all encourage technological innovation. Based on this, we put forward specific suggestions for further improving fiscal subsidy and tax incentive policies.
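    As an illustration of the weighting step, here is a minimal sketch of the entropy weight method applied to a firm-by-indicator matrix. The firm data and values are invented for illustration; only the method follows the abstract, and a min-max normalization step is often applied before the one shown here.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: X is an (n_firms, n_indicators) matrix of
    positive indicator values; returns one weight per indicator."""
    n, m = X.shape
    # Normalize each indicator column to proportions p_ij.
    P = X / X.sum(axis=0)
    # Shannon entropy of each indicator, scaled to [0, 1] by log(n).
    e = -(P * np.log(P)).sum(axis=0) / np.log(n)
    # Higher dispersion (lower entropy) -> larger weight.
    d = 1.0 - e
    return d / d.sum()

# Illustrative firm-level data (not the study's actual 14-firm panel):
# columns = R&D investment, fixed-asset investment, intangible assets,
# patent applications.
X = np.array([
    [120.0, 300.0, 45.0, 18],
    [ 80.0, 210.0, 30.0, 12],
    [200.0, 450.0, 60.0, 25],
    [ 60.0, 150.0, 20.0,  7],
])
w = entropy_weights(X)
innovation_index = X @ w  # weighted composite innovation index per firm
print(w, innovation_index)
```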

    Crossover effects of servant leadership and job social support on employee spouses: the mediating role of employee organization-based self-esteem

    The present study investigated the crossover effects of employee perceptions of servant leadership and job social support on the family satisfaction and quality of family life experienced by the employees’ spouses. These effects were explored through a focus on the mediating role of employee organization-based self-esteem (OBSE). Results from a three-wave field survey of 199 employee–spouse dyads in the People’s Republic of China support our hypotheses, indicating that OBSE fully mediates the positive effects of servant leadership and job social support on family satisfaction and quality of family life. These findings provide new theoretical directions for work–family research.
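    For readers unfamiliar with mediation testing, the sketch below shows a bare product-of-coefficients check of an indirect effect (servant leadership -> OBSE -> family satisfaction) on simulated data. The study's actual three-wave dyadic analysis would be more involved; this only illustrates the statistical idea, and all variable names and values here are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 199  # same size as the study's dyad sample; the data are simulated

# Simulated stand-ins for the survey measures (assumption, not real data).
servant_leadership = rng.normal(size=n)
obse = 0.5 * servant_leadership + rng.normal(size=n)           # mediator
family_satisfaction = 0.6 * obse + rng.normal(size=n)          # outcome

# Path a: predictor -> mediator.
a = sm.OLS(obse, sm.add_constant(servant_leadership)).fit().params[1]

# Path b: mediator -> outcome, controlling for the predictor.
Xb = sm.add_constant(np.column_stack([obse, servant_leadership]))
b = sm.OLS(family_satisfaction, Xb).fit().params[1]

print(f"indirect effect (a*b) = {a * b:.3f}")
```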

    Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation

    Large text-to-image diffusion models have exhibited impressive proficiency in generating high-quality images. However, when applying these models to the video domain, ensuring temporal consistency across video frames remains a formidable challenge. This paper proposes a novel zero-shot text-guided video-to-video translation framework to adapt image models to videos. The framework includes two parts: key frame translation and full video translation. The first part uses an adapted diffusion model to generate key frames, with hierarchical cross-frame constraints applied to enforce coherence in shapes, textures, and colors. The second part propagates the key frames to the remaining frames with temporal-aware patch matching and frame blending. Our framework achieves global style and local texture temporal consistency at a low cost (without re-training or optimization). The adaptation is compatible with existing image diffusion techniques, allowing our framework to take advantage of them, such as customizing a specific subject with LoRA and introducing extra spatial guidance with ControlNet. Extensive experimental results demonstrate the effectiveness of our proposed framework over existing methods in rendering high-quality and temporally coherent videos.
    Comment: Accepted to SIGGRAPH Asia 2023. Project page: https://www.mmlab-ntu.com/project/rerender
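    A structural sketch of the two-stage pipeline the abstract describes might look as follows. Here translate_keyframe and propagate are hypothetical stand-ins for the adapted diffusion model and the patch-matching/blending step, not the project's actual API.

```python
from typing import Callable, List, Sequence

def rerender_video(
    frames: Sequence,              # input video frames
    prompt: str,                   # text guidance
    translate_keyframe: Callable,  # diffusion model w/ cross-frame constraints
    propagate: Callable,           # temporal-aware patch matching + blending
    key_interval: int = 10,
) -> List:
    # Stage 1: translate anchor key frames with the adapted image
    # diffusion model; hierarchical cross-frame constraints would be
    # applied inside translate_keyframe to keep shapes/textures/colors
    # coherent across the key frames.
    key_idx = list(range(0, len(frames), key_interval))
    keys = {i: translate_keyframe(frames[i], prompt) for i in key_idx}

    # Stage 2: propagate each stylized key frame to its neighbors.
    out = []
    for i, frame in enumerate(frames):
        nearest = min(key_idx, key=lambda k: abs(k - i))
        out.append(keys[nearest] if i == nearest
                   else propagate(frame, frames[nearest], keys[nearest]))
    return out

# Toy usage with identity stubs (real models would replace these):
frames = [f"frame{i}" for i in range(25)]
styled = rerender_video(frames, "a watercolor painting",
                        translate_keyframe=lambda f, p: f + "_styled",
                        propagate=lambda f, kf, skf: f + "_propagated")
print(styled[:3])
```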

    StyleGANEX: StyleGAN-Based Manipulation Beyond Cropped Aligned Faces

    Recent advances in face manipulation using StyleGAN have produced impressive results. However, StyleGAN is inherently limited to cropped, aligned faces at the fixed image resolution it is pre-trained on. In this paper, we propose a simple and effective solution to this limitation by using dilated convolutions to rescale the receptive fields of shallow layers in StyleGAN, without altering any model parameters. This allows fixed-size small features at shallow layers to be extended into larger ones that can accommodate variable resolutions, making them more robust in characterizing unaligned faces. To enable real face inversion and manipulation, we introduce a corresponding encoder that provides the first-layer feature of the extended StyleGAN in addition to the latent style code. We validate the effectiveness of our method using unaligned face inputs of various resolutions in a diverse set of face manipulation tasks, including facial attribute editing, super-resolution, sketch/mask-to-face translation, and face toonification.
    Comment: ICCV 2023. Code: https://github.com/williamyang1991/StyleGANEX Project page: https://www.mmlab-ntu.com/project/styleganex
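    The core trick, reusing a trained convolution's weights with a larger dilation so a shallow layer covers a bigger receptive field without retraining, can be sketched in a few lines of PyTorch. This is a generic illustration of dilated-convolution rescaling, not StyleGANEX's actual code.

```python
import torch
import torch.nn as nn

# Original shallow layer: a 3x3 convolution as trained.
conv = nn.Conv2d(64, 64, kernel_size=3, padding=1)

# Rescale its receptive field by dilating the same kernel 2x.
# The weights are reused unchanged; only dilation/padding differ,
# so no retraining is needed.
dilated = nn.Conv2d(64, 64, kernel_size=3, padding=2, dilation=2)
dilated.load_state_dict(conv.state_dict())

x = torch.randn(1, 64, 32, 32)
print(conv(x).shape, dilated(x).shape)  # same spatial size, larger RF
```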

    VToonify: Controllable High-Resolution Portrait Video Style Transfer

    Generating high-quality artistic portrait videos is an important and desirable task in computer graphics and vision. Although a series of successful portrait image toonification models built upon the powerful StyleGAN have been proposed, these image-oriented methods have obvious limitations when applied to videos, such as a fixed frame size, the requirement of face alignment, missing non-facial details, and temporal inconsistency. In this work, we investigate challenging controllable high-resolution portrait video style transfer by introducing a novel VToonify framework. Specifically, VToonify leverages the mid- and high-resolution layers of StyleGAN to render high-quality artistic portraits based on the multi-scale content features extracted by an encoder, to better preserve the frame details. The resulting fully convolutional architecture accepts non-aligned faces in videos of variable size as input, contributing to complete face regions with natural motions in the output. Our framework is compatible with existing StyleGAN-based image toonification models, extending them to video toonification and inheriting their appealing features for flexible style control over color and intensity. This work presents two instantiations of VToonify built upon Toonify and DualStyleGAN for collection-based and exemplar-based portrait video style transfer, respectively. Extensive experimental results demonstrate the effectiveness of our proposed VToonify framework over existing methods in generating high-quality and temporally coherent artistic portrait videos with flexible style controls.
    Comment: ACM Transactions on Graphics (SIGGRAPH Asia 2022). Code: https://github.com/williamyang1991/VToonify Project page: https://www.mmlab-ntu.com/project/vtoonify
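    Schematically, the design the abstract describes (an encoder extracting multi-scale content features that feed the mid- and high-resolution layers of a StyleGAN-like generator, in a fully convolutional pipeline) can be illustrated as below. The modules are simplified stand-ins, not the released VToonify architecture.

```python
import torch
import torch.nn as nn

class ContentEncoder(nn.Module):
    """Simplified stand-in for the multi-scale content encoder."""
    def __init__(self):
        super().__init__()
        self.down1 = nn.Conv2d(3, 64, 3, stride=2, padding=1)
        self.down2 = nn.Conv2d(64, 128, 3, stride=2, padding=1)

    def forward(self, x):
        f1 = torch.relu(self.down1(x))   # higher-resolution feature
        f2 = torch.relu(self.down2(f1))  # lower-resolution feature
        return f1, f2

class MidHighGenerator(nn.Module):
    """Plays the role of StyleGAN's mid/high-resolution layers."""
    def __init__(self):
        super().__init__()
        self.up1 = nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1)
        self.up2 = nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1)

    def forward(self, f1, f2):
        h = torch.relu(self.up1(f2)) + f1  # skip keeps frame details
        return torch.tanh(self.up2(h))

# Fully convolutional, so any (even non-aligned) frame size works:
enc, gen = ContentEncoder(), MidHighGenerator()
frame = torch.randn(1, 3, 256, 192)  # variable-size video frame
print(gen(*enc(frame)).shape)        # -> (1, 3, 256, 192)
```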