
    Carbon emissions in China: How far can new efforts bend the curve?

    While China is on track to meet its global climate commitments through 2020, China’s post-2020 CO2 emissions trajectory is highly uncertain, with projections varying widely across studies. Over the past year, the Chinese government has announced new policy directives to deepen economic reform, protect the environment, and limit fossil energy use in China. To evaluate how these directives could affect energy and climate outcomes, we simulate the effects of two levels of policy effort on the evolution of China’s energy and economic system over the next several decades: a Continued Effort scenario that extends current policies beyond 2020, and an Accelerated Effort scenario that reflects the newly announced policies. Importantly, we find that both levels of policy effort would bend the CO2 emissions trajectory downward before 2050 without undermining economic development, although coal use and CO2 emissions peak about 10 years earlier in the Accelerated Effort scenario.

    Co3O4@CoS core-shell nanosheets on carbon cloth for high performance supercapacitor electrodes

    In this work, a two-step electrodeposition strategy is developed for the synthesis of core-shell Co3O4@CoS nanosheet arrays on carbon cloth (CC) for supercapacitor applications. Porous Co3O4 nanosheet arrays are first grown directly on CC by electrodeposition, followed by the coating of a thin CoS layer on the surface of the Co3O4 nanosheets via a secondary electrodeposition. The morphology of the ternary composites can be easily controlled by altering the number of cyclic voltammetry (CV) cycles used for CoS deposition. The electrochemical performance of the composite electrodes was evaluated by cyclic voltammetry, galvanostatic charge-discharge, and electrochemical impedance spectroscopy. The results demonstrate that the Co3O4@CoS/CC electrode prepared with 4 CV cycles of CoS deposition possesses the largest specific capacitance, 887.5 F·g-1 at a scan rate of 10 mV·s-1 (764.2 F·g-1 at a current density of 1.0 A·g-1), and excellent cycling stability (78.1% capacitance retention after 5000 cycles at a high current density of 5.0 A·g-1). The porous nanostructures on CC not only provide a large accessible surface area for fast ion diffusion, electron transport, and efficient utilization of the active CoS and Co3O4, but also reduce the internal resistance of the electrodes, which leads to the superior electrochemical performance of the Co3O4@CoS/CC composite at 4 cycles of CoS deposition. © 2017 by the authors. National Natural Science Foundation of China [21371057]; International Science and Technology Cooperation Program of China [2016YFE0131200, 2015DFA51220]; International Cooperation Project of Shanghai Municipal Science and Technology Committee [15520721100].
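The specific capacitance values quoted above come from galvanostatic charge-discharge (GCD) measurements, which are conventionally converted via C = I·Δt/(m·ΔV). A minimal sketch of that calculation follows; the electrode mass, voltage window, and discharge time below are hypothetical illustration values (only the 1.0 A·g-1 current density appears in the abstract):

```python
def specific_capacitance(current_a, discharge_time_s, mass_g, voltage_window_v):
    """Specific capacitance (F/g) from a GCD discharge segment: C = I*dt / (m*dV)."""
    return current_a * discharge_time_s / (mass_g * voltage_window_v)

# Hypothetical example: a 2 mg electrode discharged at 2 mA (i.e. 1.0 A/g)
# over an assumed 0.4 V window; the discharge time here is chosen for illustration.
c_sp = specific_capacitance(current_a=2e-3, discharge_time_s=305.68,
                            mass_g=2e-3, voltage_window_v=0.4)
print(f"{c_sp:.1f} F/g")
```

Cycling stability is then reported simply as the ratio of the capacitance after 5000 cycles to the initial capacitance, expressed as a percentage.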

    Breathing Life into Faces: Speech-driven 3D Facial Animation with Natural Head Pose and Detailed Shape

    The creation of lifelike speech-driven 3D facial animation requires natural and precise synchronization between audio input and facial expressions. However, existing works still fail to render shapes with flexible head poses and natural facial details (e.g., wrinkles). This limitation stems mainly from two aspects: 1) Collecting a training set with detailed 3D facial shapes is highly expensive, and this scarcity of detailed shape annotations hinders the training of models with expressive facial animation. 2) Compared with mouth movement, head pose is much less correlated with speech content, so jointly modeling mouth movement and head pose reduces the controllability of facial movement. To address these challenges, we introduce VividTalker, a new framework designed to facilitate speech-driven 3D facial animation characterized by flexible head pose and natural facial details. Specifically, we explicitly disentangle facial animation into head pose and mouth movement and encode them separately into discrete latent spaces. These attributes are then generated through an autoregressive process leveraging a window-based Transformer architecture. To augment the richness of the 3D facial animation, we construct a new 3D dataset with detailed shapes and learn to synthesize facial details in line with speech content. Extensive quantitative and qualitative experiments demonstrate that VividTalker outperforms state-of-the-art methods, producing vivid and realistic speech-driven 3D facial animation.
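The abstract's key design choice is encoding head pose and mouth movement into separate discrete latent spaces, so that each attribute can be generated or controlled independently. A minimal sketch of that idea, assuming nearest-neighbour vector quantization with one codebook per attribute (all feature dimensions, codebook sizes, and names below are hypothetical, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(features, codebook):
    """Map each frame's feature vector to the index of its nearest codebook entry."""
    # Squared distances between every frame and every code: shape (frames, codes).
    d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

# Hypothetical per-frame features: 6-D head pose, 32-D mouth parameters.
pose_feats = rng.normal(size=(100, 6))
mouth_feats = rng.normal(size=(100, 32))

# Separate codebooks keep the two attributes in independent discrete spaces,
# so a downstream autoregressive model can generate (or hold fixed) either one alone.
pose_codebook = rng.normal(size=(64, 6))
mouth_codebook = rng.normal(size=(256, 32))

pose_tokens = quantize(pose_feats, pose_codebook)      # integers in [0, 64)
mouth_tokens = quantize(mouth_feats, mouth_codebook)   # integers in [0, 256)
```

In the paper, such discrete token sequences are then modeled autoregressively by a window-based Transformer conditioned on the speech input; the quantization step above only illustrates the disentangled-encoding idea.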