
    Improvements to the Overpotential of All-Solid-State Lithium-Ion Batteries during the Past Ten Years

    Since the discovery that Li10GeP2S12 (LGPS)-type sulfide solid electrolytes can achieve high ionic conductivity at room temperature, sulfide solid electrolytes have been intensively developed with regard to both ionic conductivity and mechanical properties. As a result, an increasing volume of research aims to deploy all-solid-state lithium batteries in electric automobiles within the next five years. To achieve this goal, it is important to review the research of the past decade and to understand what further work is needed to realize practical all-solid-state lithium batteries. To date, research on all-solid-state lithium batteries has focused on achieving overpotential properties similar to those of conventional liquid-electrolyte lithium-ion batteries by increasing the ionic conductivity of the solid electrolytes. However, increases in ionic conductivity must be accompanied by improvements in the electronic conductivity within the electrode to enable practical applications. This essay provides a critical overview of recent progress and future research directions for all-solid-state lithium batteries in practical applications.

    The role of collective motion in the ultrafast charge transfer in van der Waals heterostructures.

    The success of van der Waals heterostructures made of graphene, metal dichalcogenides, and other layered materials hinges on understanding charge transfer across the interface as the foundation for new device concepts and applications. In contrast to conventional heterostructures, where strong interfacial coupling is essential to charge transfer, recent experimental findings indicate that van der Waals heterostructures can exhibit ultrafast charge transfer despite their weak interlayer binding. Here we find, using time-dependent density functional theory molecular dynamics, that the collective motion of excitons at the interface leads to plasma oscillations associated with optical excitation. By constructing a simple model of the van der Waals heterostructure, we show that there exists an unexpected criticality of the oscillations, yielding rapid charge transfer across the interface. Application to the MoS2/WS2 heterostructure yields good agreement with experiments, indicating near-complete charge transfer within a timescale of 100 fs.

    PhaseAug: A Differentiable Augmentation for Speech Synthesis to Simulate One-to-Many Mapping

    Previous generative adversarial network (GAN)-based neural vocoders are trained to reconstruct the exact ground-truth waveform from the paired mel-spectrogram and do not consider the one-to-many relationship of speech synthesis. This conventional training causes overfitting in both the discriminators and the generator, leading to periodicity artifacts in the generated audio. In this work, we present PhaseAug, the first differentiable augmentation for speech synthesis, which rotates the phase of each frequency bin to simulate one-to-many mapping. With our proposed method, we outperform baselines without any architecture modification. Code and audio samples will be available at https://github.com/mindslab-ai/phaseaug.
    Comment: Submitted to ICASSP 202
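    The core idea of the abstract above, rotating the phase of each frequency bin while leaving magnitudes untouched, can be sketched outside of any training loop. The snippet below is a minimal, non-differentiable numpy illustration, not PhaseAug's actual implementation; the function name `phase_rotate` and the per-bin uniform angles are our assumptions for exposition.

    ```python
    import numpy as np

    def phase_rotate(spec: np.ndarray, rng: np.random.Generator) -> np.ndarray:
        """Rotate the phase of each frequency bin by an independent random angle.

        `spec` is a complex STFT matrix of shape (freq_bins, frames). Only the
        phase changes; the magnitude (and hence the mel-spectrogram) is
        preserved, yielding a different plausible waveform for the same input.
        NOTE: illustrative sketch only, not the paper's implementation.
        """
        # One random angle per frequency bin, broadcast over all time frames.
        phi = rng.uniform(-np.pi, np.pi, size=(spec.shape[0], 1))
        return spec * np.exp(1j * phi)

    # Toy example: a 4-bin, 3-frame complex spectrogram.
    rng = np.random.default_rng(0)
    spec = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
    aug = phase_rotate(spec, rng)
    ```

    Because the rotation multiplies each bin by a unit-modulus factor, `np.abs(aug)` equals `np.abs(spec)` exactly, which is what lets the augmentation simulate one-to-many mapping without altering the conditioning spectrogram.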

    GraphTensor: Comprehensive GNN-Acceleration Framework for Efficient Parallel Processing of Massive Datasets

    We present GraphTensor, a comprehensive open-source framework that supports efficient parallel neural network processing on large graphs. GraphTensor offers a set of easy-to-use programming primitives that account for both graph and neural network execution behaviors from the beginning (graph sampling) to the end (dense data processing). Our framework runs diverse graph neural network (GNN) models in a destination-centric, feature-wise manner, which can significantly shorten training execution times on a GPU. In addition, GraphTensor rearranges multiple GNN kernels based on their system hyperparameters in a self-governing manner, thereby further reducing the processing dimensionality and the latencies. From the end-to-end execution viewpoint, GraphTensor significantly shortens service-level GNN latency by applying pipeline parallelism for efficient graph dataset preprocessing. Our evaluation shows that GraphTensor exhibits 1.4x better training performance than emerging GNN frameworks under the execution of large-scale, real-world graph workloads. For end-to-end services, GraphTensor reduces the training latencies of an advanced version of these GNN frameworks (optimized for multi-threaded graph sampling) by 2.4x on average.
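    "Destination-centric" GNN execution, as described in the abstract above, organizes work around each destination node gathering its incoming neighbors' features. The snippet below is a minimal numpy sketch of that access pattern using plain mean aggregation; it is not GraphTensor's API, and the function name `aggregate_mean` and the edge-list format are assumptions made here for illustration (real frameworks fuse and parallelize this loop on the GPU).

    ```python
    import numpy as np

    def aggregate_mean(features: np.ndarray, edges: np.ndarray, num_nodes: int) -> np.ndarray:
        """Destination-centric mean aggregation.

        For each directed edge (src, dst), the destination node accumulates the
        source node's feature vector, then divides by its in-degree.
        `features` has shape (num_nodes, dim); `edges` is an array of
        (src, dst) pairs. Illustrative sketch only, not GraphTensor's API.
        """
        dim = features.shape[1]
        acc = np.zeros((num_nodes, dim))
        deg = np.zeros((num_nodes, 1))
        for src, dst in edges:
            acc[dst] += features[src]   # accumulate into the destination row
            deg[dst] += 1
        # Isolated nodes (in-degree 0) keep a zero vector.
        return acc / np.maximum(deg, 1)

    # Tiny graph: edges 0->2 and 1->2, so node 2 averages nodes 0 and 1.
    feats = np.array([[1.0, 0.0], [3.0, 2.0], [0.0, 0.0]])
    edges = np.array([[0, 2], [1, 2]])
    out = aggregate_mean(feats, edges, 3)
    # out[2] is the mean of feats[0] and feats[1], i.e. [2.0, 1.0]
    ```

    Grouping work by destination like this makes each output row independent of the others, which is what allows the feature-wise, per-destination parallelism the abstract describes.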