
    FSD-C10, a Fasudil derivative, promotes neuroregeneration through indirect and direct mechanisms.

    FSD-C10, a Fasudil derivative, was shown to reduce the severity of experimental autoimmune encephalomyelitis (EAE), an animal model of multiple sclerosis (MS), through modulation of the immune response and induction of neuroprotective molecules in the central nervous system (CNS). However, whether FSD-C10 can promote neuroregeneration remained unknown. In this study, we further analyzed the effect of FSD-C10 on neuroprotection and remyelination. FSD-C10-treated mice showed longer, thicker and more intense MAP2- and synaptophysin-positive signals in the CNS, with significantly fewer CD4(+) T cells, macrophages and microglia. Importantly, the CNS of FSD-C10-treated mice showed a shift of activated macrophages/microglia from the type 1 to the type 2 status, elevated numbers of oligodendrocyte precursor cells (OPCs) and oligodendrocytes, and increased levels of the neurotrophic factors NT-3, GDNF and BDNF. FSD-C10-treated microglia significantly inhibited Th1/Th17 cell differentiation and increased the number of IL-10(+) CD4(+) T cells, and conditioned medium from FSD-C10-treated microglia promoted OPC survival and oligodendrocyte maturation. Addition of FSD-C10 directly promoted remyelination in a chemically induced demyelination model on organotypic slice culture, in a BDNF-dependent manner. Together, these findings demonstrate that FSD-C10 promotes neural repair through mechanisms that involve both immunomodulation and induction of neurotrophic factors.

    ToonTalker: Cross-Domain Face Reenactment

    We target cross-domain face reenactment in this paper, i.e., driving a cartoon image with the video of a real person and vice versa. Recently, many works have focused on one-shot talking face generation to drive a portrait with a real video, i.e., within-domain reenactment. Straightforwardly applying those methods to cross-domain animation causes inaccurate expression transfer, blur, and even apparent artifacts due to the domain shift between cartoon and real faces. Only a few works attempt to address cross-domain face reenactment. The most related work, AnimeCeleb, requires constructing a dataset of pose-vector and cartoon-image pairs by animating 3D characters, which makes it inapplicable when no paired data are available. In this paper, we propose a novel method for cross-domain reenactment without paired data. Specifically, we propose a transformer-based framework to align the motions from different domains into a common latent space, where motion transfer is conducted via latent code addition. Two domain-specific motion encoders and two learnable motion base memories are used to capture domain properties. A source query transformer and a driving one project domain-specific motion to the canonical space, and the edited motion is projected back to the source domain with a transformer, as sketched below. Moreover, since no paired data are provided, we propose a novel cross-domain training scheme using data from two domains with the designed analogy constraint. Besides, we contribute a cartoon dataset in Disney style. Extensive evaluations demonstrate the superiority of our method over competing methods.
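
    The core mechanism described here, aligning the two domains' motions in a shared space and transferring by addition, can be outlined in a minimal PyTorch sketch. All module names and dimensions below (DomainMotion, CrossDomainMotionTransfer, feature size 256, 20 motion bases) are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DomainMotion(nn.Module):
    """One domain: a motion encoder plus a learnable motion base memory."""
    def __init__(self, dim=256, n_bases=20):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.bases = nn.Parameter(torch.randn(n_bases, dim))  # motion memory

class CrossDomainMotionTransfer(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.cartoon = DomainMotion(dim)   # domain-specific encoder + bases
        self.real = DomainMotion(dim)
        layer = nn.TransformerDecoderLayer(dim, nhead=4, batch_first=True)
        self.to_canonical = nn.TransformerDecoder(layer, num_layers=2)
        self.from_canonical = nn.TransformerDecoder(layer, num_layers=2)

    def project(self, domain, feats):
        # Query transformer: the learnable bases attend to domain features,
        # yielding a motion code in the shared canonical space.
        q = domain.bases.unsqueeze(0).expand(feats.shape[0], -1, -1)
        return self.to_canonical(q, domain.encoder(feats))

    def forward(self, src_feats, drv_feats):
        m_src = self.project(self.cartoon, src_feats)  # source motion
        m_drv = self.project(self.real, drv_feats)     # driving motion
        edited = m_src + m_drv                         # latent code addition
        # Project the edited motion back to the source domain.
        return self.from_canonical(edited, self.cartoon.encoder(src_feats))

motion = CrossDomainMotionTransfer()
out = motion(torch.randn(2, 10, 256), torch.randn(2, 10, 256))
```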

    A study of strong pulses detected from PSR B0656+14 using the Urumqi 25-m radio telescope at 1540 MHz

    We report on the properties of strong pulses from PSR B0656+14, analyzed from data obtained with the Urumqi 25-m radio telescope at 1540 MHz from August 2007 to September 2010. In 44 hours of observational data, a total of 67 pulses with signal-to-noise ratios above a 5-sigma threshold were detected. The peak flux densities of these pulses are 58 to 194 times that of the average profile, and their pulse energies are 3 to 68 times that of the average pulse. These pulses cluster around phases about 5 degrees ahead of the peak of the average profile. Compared with the average profile, they are relatively narrow, with full widths at half-maximum ranging from 0.28 to 1.78 degrees. The pulse energies follow a lognormal distribution. These sporadic strong pulses detected from PSR B0656+14 differ in character both from typical giant pulses and from the pulsar's regular pulses.
    Comment: 6 pages, 3 figures, Accepted by RA
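
    The two quantitative steps reported, a 5-sigma single-pulse selection and a lognormal fit to the pulse energies, are simple to reproduce in outline. The sketch below uses synthetic stand-in numbers, not the Urumqi data; the distribution parameters are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Stand-in data: per-pulse peak S/N for a long single-pulse run, and the
# energies of the 67 detected strong pulses (both synthetic).
peak_snr = rng.normal(0.0, 1.0, size=400_000)
energies = rng.lognormal(mean=2.5, sigma=0.7, size=67)

# Detection: keep single pulses whose peak exceeds the 5-sigma threshold.
strong = peak_snr[peak_snr > 5.0]
print(f"{strong.size} candidate pulses above 5 sigma")

# Check the reported lognormal shape of the pulse-energy distribution.
shape, loc, scale = stats.lognorm.fit(energies, floc=0)
ks = stats.kstest(energies, "lognorm", args=(shape, loc, scale))
print(f"lognormal fit: sigma={shape:.2f}, median={scale:.1f}, KS p={ks.pvalue:.2f}")
```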

    SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation

    Generating talking head videos from a face image and a piece of speech audio still entails many challenges, i.e., unnatural head movement, distorted expression, and identity modification. We argue that these issues mainly arise from learning from coupled 2D motion fields. On the other hand, explicitly using 3D information suffers from problems of stiff expression and incoherent video. We present SadTalker, which generates 3D motion coefficients (head pose, expression) of the 3DMM from audio and implicitly modulates a novel 3D-aware face render for talking head generation. To learn realistic motion coefficients, we explicitly model the connections between audio and different types of motion coefficients individually. Specifically, we present ExpNet to learn accurate facial expression from audio by distilling both coefficients and 3D-rendered faces. For the head pose, we design PoseVAE, a conditional VAE, to synthesize head motion in different styles. Finally, the generated 3D motion coefficients are mapped to the unsupervised 3D keypoint space of the proposed face render to synthesize the final video. We conduct extensive experiments to show the superiority of our method in terms of motion and video quality.
    Comment: Project page: https://sadtalker.github.i
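
    A rough schematic of the described decomposition, audio mapped to ExpNet expression coefficients and PoseVAE head poses, then concatenated into 3DMM motion coefficients, is sketched below. All interfaces and sizes here (ExpNetSketch, PoseVAESketch, 64 expression and 6 pose coefficients) are assumptions, not the released code; the final keypoint mapping and rendering step is omitted.

```python
import torch
import torch.nn as nn

class ExpNetSketch(nn.Module):
    """Audio features -> per-frame 3DMM expression coefficients."""
    def __init__(self, audio_dim=80, n_exp=64):
        super().__init__()
        self.rnn = nn.GRU(audio_dim, 128, batch_first=True)
        self.head = nn.Linear(128, n_exp)
    def forward(self, audio_feats):
        h, _ = self.rnn(audio_feats)
        return self.head(h)

class PoseVAESketch(nn.Module):
    """Conditional-VAE decoder: noise + style code -> head-pose sequence."""
    def __init__(self, z_dim=6, style_dim=8, n_pose=6):
        super().__init__()
        self.dec = nn.Sequential(nn.Linear(z_dim + style_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_pose))
    def forward(self, z, style):
        return self.dec(torch.cat([z, style], dim=-1))

audio = torch.randn(1, 100, 80)              # 100 frames of audio features
exp = ExpNetSketch()(audio)                  # (1, 100, 64) expressions
pose = PoseVAESketch()(torch.randn(1, 100, 6), torch.randn(1, 100, 8))
# Concatenated 3DMM motion coefficients; the real system maps these to the
# face render's unsupervised 3D keypoints to synthesize the video.
coeffs = torch.cat([exp, pose], dim=-1)      # (1, 100, 70)
```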

    VideoReTalking: Audio-based Lip Synchronization for Talking Head Video Editing In the Wild

    We present VideoReTalking, a new system to edit the faces of a real-world talking head video according to input audio, producing a high-quality and lip-synced output video even with a different emotion. Our system disentangles this objective into three sequential tasks: (1) face video generation with a canonical expression; (2) audio-driven lip-sync; and (3) face enhancement for improving photo-realism. Given a talking-head video, we first modify the expression of each frame according to the same expression template using the expression editing network, resulting in a video with the canonical expression. This video, together with the given audio, is then fed into the lip-sync network to generate a lip-synced video. Finally, we improve the photo-realism of the synthesized faces through an identity-aware face enhancement network and post-processing. We use learning-based approaches for all three steps, and all our modules run in a sequential pipeline without any user intervention. Furthermore, our system is a generic approach that does not need to be retrained for a specific person. Evaluations on two widely used datasets and in-the-wild examples demonstrate the superiority of our framework over other state-of-the-art methods in terms of lip-sync accuracy and visual quality.
    Comment: Accepted by SIGGRAPH Asia 2022 Conference Proceedings. Project page: https://vinthony.github.io/video-retalking
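
    Because the three tasks run strictly in sequence, the control flow fits in a few lines. The functions below are stand-in stubs named only for illustration; they are not the system's trained networks.

```python
from typing import List

Frame = bytes  # placeholder type for a decoded video frame

def expression_edit(frame: Frame) -> Frame:
    return frame  # stub: would re-pose the face to the canonical template

def lip_sync(frames: List[Frame], audio: bytes) -> List[Frame]:
    return frames  # stub: would regenerate the mouth region from audio

def enhance(frame: Frame) -> Frame:
    return frame  # stub: identity-aware enhancement plus post-processing

def video_retalking(frames: List[Frame], audio: bytes) -> List[Frame]:
    # The three stages run in sequence with no user intervention.
    canonical = [expression_edit(f) for f in frames]   # (1) neutralize
    synced = lip_sync(canonical, audio)                # (2) re-sync lips
    return [enhance(f) for f in synced]                # (3) enhance
```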

    3D GAN Inversion with Facial Symmetry Prior

    Recently, a surge of high-quality 3D-aware GANs has been proposed, leveraging the generative power of neural rendering. It is natural to pair 3D GANs with GAN inversion methods that project a real image into the generator's latent space, allowing free-view consistent synthesis and editing, referred to as 3D GAN inversion. Even with the facial prior preserved in pre-trained 3D GANs, reconstructing a 3D portrait from only one monocular image is still an ill-posed problem. The straightforward application of 2D GAN inversion methods focuses on texture similarity only, while ignoring the correctness of the 3D geometry. This can cause geometry collapse, especially when reconstructing a side face under an extreme pose. Moreover, synthesized results in novel views are prone to blur. In this work, we propose a novel method to improve 3D GAN inversion by introducing a facial symmetry prior. We design a pipeline and constraints to make full use of the pseudo auxiliary view obtained via image flipping, which helps recover a robust and reasonable geometry during inversion. To enhance texture fidelity in unobserved viewpoints, pseudo labels from depth-guided 3D warping provide extra supervision. We design constraints to filter out conflict areas for optimization in asymmetric situations. Comprehensive quantitative and qualitative evaluations on image reconstruction and editing demonstrate the superiority of our method.
    Comment: Project Page is at https://feiiyin.github.io/SPI
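
    The symmetry prior amounts to an extra reconstruction term during optimization: the horizontally flipped image is treated as a pseudo second view under a mirrored camera pose, masked where symmetry fails. A hedged sketch follows, with a hypothetical generator interface G(w, pose); the depth-guided warping supervision is omitted.

```python
import torch
import torch.nn.functional as F

def symmetry_inversion_loss(G, w, img, pose, mirror_pose, conflict_mask):
    # Reconstruction from the observed camera pose.
    loss = F.l1_loss(G(w, pose), img)
    # Pseudo auxiliary view: the flipped image should match a render from
    # the mirrored pose; the mask down-weights asymmetric (conflict)
    # regions such as one-sided hair or lighting.
    flipped = torch.flip(img, dims=[-1])
    residual = (G(w, mirror_pose) - flipped).abs()
    return loss + (conflict_mask * residual).mean()

# Toy usage with a stand-in generator that ignores the pose; a real 3D GAN
# would render an image from the latent w and a camera pose.
G = lambda w, pose: w
img = torch.rand(1, 3, 64, 64)
w = img.clone().requires_grad_(True)
mask = torch.ones_like(img)
loss = symmetry_inversion_loss(G, w, img, None, None, mask)
loss.backward()
```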

    Comparison of fruit morphology and nutrition metabolism in different cultivars of kiwifruit across developmental stages

    Kiwifruit (Actinidia) is becoming increasingly popular worldwide due to its favorable flavour and high vitamin C content. However, quality parameters vary among cultivars. To determine the differences in quality and metabolic parameters of kiwifruit, we monitored the growth of ‘Kuilv’ (Actinidia arguta), ‘Hongyang’ (Actinidia chinensis) and ‘Hayward’ (Actinidia deliciosa). We found that ‘Kuilv’ required the shortest time for fruit development, while ‘Hayward’ needed the longest time to mature. The fruit of ‘Hayward’ was the largest and that of ‘Kuilv’ the smallest. Furthermore, ‘Hongyang’ showed a double-S pattern of dry matter accumulation, whereas ‘Kuilv’ and ‘Hayward’ showed a linear or single-S pattern during development. The three cultivars demonstrated the same trend of total soluble solids accumulation, which did not rise rapidly until 90–120 days after anthesis. However, the accumulation of organic acids and soluble sugars varied among the cultivars. During later fruit development, the content of glucose, fructose and quinic acid in ‘Kuilv’ fruit was far lower than that in ‘Hongyang’ and ‘Hayward’. In contrast, ‘Kuilv’ had the highest sucrose content of the three cultivars. At maturity, the antioxidative enzymatic systems differed significantly among the three cultivars: ‘Hongyang’ showed higher superoxide dismutase activity than the other cultivars, while the catalase content of ‘Hayward’ was significantly higher than that of ‘Hongyang’ and ‘Kuilv’. These results provide knowledge that can be applied to the marketing, handling and post-harvest technologies of the different kiwifruit cultivars.