
    Go beyond End-to-End Training: Boosting Greedy Local Learning with Context Supply

    Traditional end-to-end (E2E) training of deep networks necessitates storing intermediate activations for back-propagation, resulting in a large GPU memory footprint and restricted model parallelization. As an alternative, greedy local learning partitions the network into gradient-isolated modules and trains each module under supervision from a local auxiliary loss, enabling asynchronous and parallel training that substantially reduces memory cost. However, empirical experiments reveal that as the network is split into more gradient-isolated modules, the performance of the local learning scheme degrades substantially, severely limiting its scalability. To address this issue, we theoretically analyze greedy local learning from the standpoint of information theory and propose ContSup, a scheme that incorporates context supply between isolated modules to compensate for information loss. Experiments on benchmark datasets (i.e., CIFAR, SVHN, STL-10) achieve SOTA results and indicate that our proposed method can significantly improve the performance of greedy local learning with minimal memory and computational overhead, allowing the number of isolated modules to be increased. Our code is available at https://github.com/Tab-ct/ContSup. (9 figures, 12 tables.)
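    To make the context-supply idea concrete, here is a minimal PyTorch sketch of greedy local learning in which each gradient-isolated module trains on its own auxiliary loss and later modules receive a resized copy of the input as context. The module design, the concatenation-based context path, and all hyperparameters are illustrative assumptions, not the exact ContSup architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalModule(nn.Module):
    """One gradient-isolated block with an auxiliary classifier head."""
    def __init__(self, in_ch, out_ch, num_classes):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(out_ch, num_classes)
        )

    def forward(self, x):
        h = self.body(x)
        return h, self.head(h)

def greedy_step(modules, optimizers, x, y, criterion):
    img, feats = x, x          # keep the raw input around as context
    for i, (m, opt) in enumerate(zip(modules, optimizers)):
        if i > 0:
            # Context supply: concatenate a resized copy of the input so
            # later modules can recover information lost upstream.
            ctx = F.interpolate(img, size=feats.shape[-2:])
            feats = torch.cat([feats, ctx], dim=1)
        h, logits = m(feats)
        loss = criterion(logits, y)        # local auxiliary loss only
        opt.zero_grad()
        loss.backward()                    # gradients stay inside module i
        opt.step()
        feats = h.detach()                 # gradient isolation boundary
    return loss.item()

# Example wiring for 3-channel inputs: later modules take extra context channels.
modules = [LocalModule(3, 64, 10), LocalModule(64 + 3, 128, 10), LocalModule(128 + 3, 256, 10)]
optimizers = [torch.optim.Adam(m.parameters(), lr=1e-3) for m in modules]
```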

    SDiT: Spiking Diffusion Model with Transformer

    Spiking neural networks (SNNs) have low power consumption and bio-interpretable characteristics, and are considered to have tremendous potential for energy-efficient computing. However, the exploration of SNNs for image generation tasks remains very limited, and a unified, effective structure for SNN-based generative models has yet to be proposed. In this paper, we explore a novel diffusion model architecture within spiking neural networks, utilizing a transformer to replace the U-Net structure commonly used in mainstream diffusion models. The model generates higher-quality images at relatively lower computational cost and with shorter sampling times, and aims to provide an empirical baseline for research on SNN-based generative models. Experiments on the MNIST, Fashion-MNIST, and CIFAR-10 datasets demonstrate that our work is highly competitive with existing SNN generative models.
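    As a rough illustration of the architectural idea (a transformer denoiser with a spiking nonlinearity in place of a U-Net), the following sketch uses a toy single-step leaky-integrate-and-fire activation with a straight-through surrogate gradient. The patch embedding, dimensions, surrogate, and noise schedule are all assumptions for clarity, not the SDiT design.

```python
import torch
import torch.nn as nn

class LIF(nn.Module):
    """Toy single-step LIF neuron with a sigmoid surrogate gradient."""
    def forward(self, x):
        spike = (x > 0).float()
        s = torch.sigmoid(x)
        # Straight-through: forward pass is binary, backward uses sigmoid'.
        return spike + s - s.detach()

class SpikingTransformerDenoiser(nn.Module):
    def __init__(self, img=32, patch=4, ch=3, dim=256, depth=4, heads=4):
        super().__init__()
        n = (img // patch) ** 2
        self.patchify = nn.Conv2d(ch, dim, patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, n, dim))
        self.t_embed = nn.Sequential(nn.Linear(1, dim), nn.SiLU(), nn.Linear(dim, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, 4 * dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.spike = LIF()
        self.unpatchify = nn.ConvTranspose2d(dim, ch, patch, stride=patch)
        self.img, self.patch, self.dim = img, patch, dim

    def forward(self, x_noisy, t):
        tok = self.patchify(x_noisy).flatten(2).transpose(1, 2) + self.pos
        tok = tok + self.t_embed(t.float().view(-1, 1)).unsqueeze(1)
        tok = self.spike(self.encoder(tok))          # spiking nonlinearity
        side = self.img // self.patch
        grid = tok.transpose(1, 2).reshape(-1, self.dim, side, side)
        return self.unpatchify(grid)                 # predicted noise

# Standard DDPM-style objective: predict the noise added to x0.
model = SpikingTransformerDenoiser()
x0 = torch.randn(8, 3, 32, 32)
t = torch.randint(0, 1000, (8,))
noise = torch.randn_like(x0)
alpha_bar = torch.rand(8, 1, 1, 1)                   # placeholder schedule
x_t = alpha_bar.sqrt() * x0 + (1 - alpha_bar).sqrt() * noise
loss = nn.functional.mse_loss(model(x_t, t), noise)
loss.backward()
```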

    The Quantum Chemical Investigation on the Structure-Activity Relationship of a Schiff Base Corrosion Inhibitor

    This study investigated the relationship between molecular structure and corrosion inhibition efficiency for three corrosion inhibitors for steel in acidic media using the DFT method. First, the molecular conformations of the three compounds were optimized and the charge populations and frontier orbitals were obtained at the B3LYP/6-311G level. Quantum chemical parameters were also calculated, including the highest occupied molecular orbital energy (EHOMO), the energy gap (ELUMO − EHOMO), the total energy of the molecule, the dipole moment, and the number of electrons transferred (ΔN). Correlating the quantum chemical parameters with the inhibition efficiencies demonstrated that inhibition efficiency increased as ELUMO − EHOMO decreased and as ΔN increased. The regions around the nitrogen and oxygen atoms are the sites most likely to bond with iron atoms by donating electrons.
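    The abstract does not spell out how ΔN is obtained; the definitions conventionally used in this literature are Pearson's absolute electronegativity χ and global hardness η, often evaluated with the theoretical values χ_Fe = 7 eV and η_Fe = 0 eV for bulk iron. These standard formulas (assumed here, not quoted from the paper) are:

```latex
% Conventional quantum chemical descriptors (assumed, not quoted from the paper)
\chi = -\frac{E_{\mathrm{HOMO}} + E_{\mathrm{LUMO}}}{2}, \qquad
\eta = \frac{E_{\mathrm{LUMO}} - E_{\mathrm{HOMO}}}{2}, \qquad
\Delta N = \frac{\chi_{\mathrm{Fe}} - \chi_{\mathrm{inh}}}
                {2\left(\eta_{\mathrm{Fe}} + \eta_{\mathrm{inh}}\right)}
```

    Under these definitions, a softer molecule (smaller ELUMO − EHOMO) and a larger ΔN both indicate a stronger tendency to donate electrons to the metal surface, consistent with the correlation reported above.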

    Modulation bandwidth improvement of III-V/Si hybrid MOS optical modulator by reducing parasitic capacitance

    In this work, we numerically and experimentally examined the impact of parasitic capacitance on the modulation bandwidth of a III-V/Si hybrid metal-oxide-semiconductor (MOS) optical modulator. The numerical analysis revealed that the parasitic capacitance between the III-V membrane and the Si slab must be taken into account to achieve high-speed modulation, particularly in the case of a thick gate oxide. We also fabricated a high-speed InGaAsP/Si hybrid MOS optical modulator with low capacitance using a SiO2-embedded Si waveguide. The fabricated device exhibited a modulation efficiency of 0.245 V·cm and a 3 dB bandwidth of up to 10 GHz. Clear eye patterns were obtained for 25 Gbps non-return-to-zero (NRZ) modulation and 40 Gbps 4-level pulse amplitude modulation (PAM-4) without pre-emphasis.
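    The bandwidth penalty can be seen from the first-order RC limit, f_3dB ≈ 1/(2πRC_total), where the parasitic capacitance adds in parallel with the MOS gate capacitance. A quick back-of-envelope check; all component values below are assumptions for illustration, not the device's extracted parameters:

```python
# RC-limited bandwidth: f_3dB = 1 / (2*pi*R*C_total). A parasitic
# capacitance in parallel with the MOS capacitor lowers the cutoff.
import math

R = 50.0            # ohms, drive/source resistance (assumed)
C_mos = 0.20e-12    # farads, intrinsic MOS capacitor (assumed)
C_par = 0.12e-12    # farads, III-V membrane to Si slab parasitic (assumed)

f_intrinsic = 1 / (2 * math.pi * R * C_mos)
f_with_par = 1 / (2 * math.pi * R * (C_mos + C_par))
print(f"f_3dB without parasitic: {f_intrinsic / 1e9:.1f} GHz")  # ~15.9 GHz
print(f"f_3dB with parasitic:    {f_with_par / 1e9:.1f} GHz")   # ~10.0 GHz
```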

    A skewed loss function for correcting predictive bias in brain age prediction

    In neuroimaging, the difference between predicted brain age and chronological age, known as the brain age delta, has shown potential as a biomarker related to various pathological phenotypes. Regression models used to estimate brain age frequently exhibit a bias: they overestimate brain age for young participants and underestimate it for older participants. The brain age delta is therefore negatively correlated with chronological age, which is problematic when evaluating relationships between the brain age delta and other age-associated variables. This paper proposes a novel bias correction method for regression models that replaces the usual symmetric loss function with a skewed loss function, so that the model is penalised differently for overestimation and underestimation. Our approach works with any type of MR image and requires no specific preprocessing, as long as the image is sensitive to age-related changes. The approach has been validated with three classic deep learning models, namely ResNet, VGG, and GoogLeNet, on publicly available neuroimaging aging datasets, and shows flexibility across model architectures and choices of hyperparameters. The corrected brain age delta has no linear relationship with chronological age and achieves higher predictive accuracy than a commonly used two-stage approach.
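    The paper's exact loss is not given in the abstract; a minimal sketch of the general idea, using the quantile ("pinball") form of a skewed L1 loss in PyTorch, with a hypothetical age-dependent skew parameter tau:

```python
import torch

def skewed_l1_loss(pred_age, true_age, tau=0.5):
    """Skewed (asymmetric) L1, i.e. the quantile/pinball loss.
    tau > 0.5 penalises underestimation more; tau < 0.5 penalises
    overestimation more; tau = 0.5 recovers half the symmetric L1.
    tau may be a scalar or a per-sample tensor."""
    tau = torch.as_tensor(tau, dtype=pred_age.dtype)
    delta = true_age - pred_age            # positive => underestimation
    return torch.mean(torch.maximum(tau * delta, (tau - 1.0) * delta))

def age_dependent_tau(true_age, lo=40.0, hi=80.0):
    """Hypothetical schedule: penalise underestimation more for older
    subjects and overestimation more for younger ones."""
    return torch.clamp((true_age - lo) / (hi - lo), 0.1, 0.9)

# Usage sketch: loss = skewed_l1_loss(model(img), age, age_dependent_tau(age))
```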

    Stateless and Verifiable Execution Layer for Meta-Protocols on Bitcoin

    The Bitcoin ecosystem has continued to evolve beyond its initial promises of decentralization, transparency, and security. Notable recent advancements include Layer-2 solutions, which address scalability by offloading transactions from the main blockchain, enabling faster and more cost-effective transactions while maintaining integrity. The advent of inscriptions and ordinal protocols has further broadened the spectrum of capabilities, enabling the creation of unique, indivisible assets on the blockchain. Despite these strides, Bitcoin's Turing-incomplete script restricts complex execution directly on the blockchain, necessitating the use of Bitcoin indexers. These indexers act as off-chain execution layers, allowing Turing-complete programming languages to manage and update state transitions based on blockchain data. However, this off-chain solution introduces challenges to data integrity and availability, compounded by the decentralized nature of the blockchain, which complicates data maintenance and accuracy. To address these challenges, we propose a new modular indexer architecture that enables a fully decentralized and user-verified network, mitigating the risks faced by traditional decentralized indexer networks susceptible to Sybil attacks. Our solution, INDECURE, leverages polynomial commitments as checkpoints to streamline the verification process, significantly reducing the overhead of integrity checks on state transitions. By implementing a robust data attestation procedure, INDECURE ensures the reliability of state information against malicious alterations, facilitating trustless verification by users. Our preliminary evaluations of INDECURE across three indexer protocols (BRC20, Bitmap, and satsnames) demonstrate its superiority in reducing computation time and data block size while maintaining high integrity in state transitions. This modular approach not only enhances the security and efficiency of Bitcoin's off-chain executions but also provides a foundational layer for scalable, secure blockchain applications.
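    To illustrate why a polynomial-style checkpoint makes state verification cheap, here is a toy stand-in in Python: the indexer state is serialised into field elements and evaluated as a polynomial at a public random point, so two diverging states almost surely produce different checkpoints (Schwartz-Zippel). This sketches only the flavour of the approach; it is not binding or succinct, and a real construction (e.g. a scheme such as KZG) adds the cryptographic opening proofs this toy omits.

```python
import hashlib

P = 2**61 - 1  # a Mersenne prime field

def state_to_coeffs(state: dict) -> list[int]:
    """Serialise key/value state entries into field elements."""
    coeffs = []
    for k in sorted(state):
        h = hashlib.sha256(f"{k}={state[k]}".encode()).digest()
        coeffs.append(int.from_bytes(h[:8], "big") % P)
    return coeffs

def checkpoint(state: dict, point: int) -> int:
    """Evaluate the state polynomial at a public random point (Horner)."""
    acc = 0
    for c in reversed(state_to_coeffs(state)):
        acc = (acc * point + c) % P
    return acc

# An indexer publishes checkpoint(state, r) per block; a light verifier
# replaying the block recomputes and compares. By Schwartz-Zippel, two
# different states agree at a random r with probability <= deg/P.
r = 123456789  # in practice derived from, e.g., the block hash
state = {"brc20:ordi:alice": 10, "brc20:ordi:bob": 5}
assert checkpoint(state, r) == checkpoint(dict(state), r)
```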

    On the Effectiveness of Speech Self-supervised Learning for Music

    Self-supervised learning (SSL) has shown promising results in various speech and natural language processing applications. However, its efficacy in music information retrieval (MIR) remains largely unexplored. While previous SSL models pre-trained on music recordings have been mostly closed-source, recent speech models such as wav2vec2.0 have shown promise in music modelling. Nevertheless, research exploring the effectiveness of applying speech SSL models to music recordings has been limited. We explore the music adaptation of SSL with two distinctive speech-related models, data2vec1.0 and HuBERT, and refer to them as music2vec and musicHuBERT, respectively. We train 12 SSL models with 95M parameters under various pre-training configurations and systematically evaluate their performance on 13 different MIR tasks. Our findings suggest that training with music data can generally improve performance on MIR tasks, even when models are trained using paradigms designed for speech. However, we identify the limitations of such speech-oriented designs, especially in modelling polyphonic information. Based on the experimental results, we also give empirical suggestions for designing future musical SSL strategies and paradigms.
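    For context, SSL models are typically evaluated on MIR tasks with a probing protocol: freeze the pre-trained encoder and train only a lightweight classifier on its pooled features. A minimal sketch in PyTorch; the encoder is a placeholder for any music2vec/musicHuBERT-style model, and the pooling and head are assumptions:

```python
import torch
import torch.nn as nn

class LinearProbe(nn.Module):
    """Frozen SSL encoder + trainable linear classifier for one MIR task."""
    def __init__(self, ssl_encoder, feat_dim, num_classes):
        super().__init__()
        self.encoder = ssl_encoder.eval()
        for p in self.encoder.parameters():
            p.requires_grad = False             # the SSL model stays frozen
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, waveform):
        with torch.no_grad():
            feats = self.encoder(waveform)      # assumed (batch, frames, feat_dim)
        return self.classifier(feats.mean(dim=1))  # mean-pool over time
```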

    MERT: Acoustic Music Understanding Model with Large-Scale Self-supervised Training

    Self-supervised learning (SSL) has recently emerged as a promising paradigm for training generalisable models on large-scale data in the fields of vision, text, and speech. Although SSL has been proven effective in speech and audio, its application to music audio has yet to be thoroughly explored. This is primarily due to the distinctive challenges of modelling musical knowledge, particularly the tonal and pitched characteristics of music. To address this research gap, we propose an acoustic Music undERstanding model with large-scale self-supervised Training (MERT), which incorporates teacher models to provide pseudo labels for masked language modelling (MLM)-style acoustic pre-training. In our exploration, we identified a combination of teacher models that outperforms conventional speech and audio approaches: an acoustic teacher based on a Residual Vector Quantization Variational AutoEncoder (RVQ-VAE) and a musical teacher based on the Constant-Q Transform (CQT). These teachers effectively guide our student model, a BERT-style transformer encoder, to better model music audio. In addition, we introduce an in-batch noise mixture augmentation to enhance representation robustness, and we explore a wide range of settings to overcome the instability in acoustic language model pre-training, which allows our paradigm to scale from 95M to 330M parameters. Experimental results indicate that our model generalises well across 14 music understanding tasks and attains state-of-the-art (SOTA) overall scores. The code and models are available at https://github.com/yizhilll/MERT
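    A minimal sketch of the dual-teacher objective the abstract describes: at masked positions, the student is trained to predict the discrete codes from the RVQ-VAE acoustic teacher (cross-entropy) and to reconstruct CQT frames from the musical teacher (regression). The heads, shapes, and loss weighting below are assumptions, not MERT's exact implementation.

```python
import torch
import torch.nn as nn

class DualTeacherHeads(nn.Module):
    def __init__(self, dim=768, codebook_size=1024, cqt_bins=84):
        super().__init__()
        self.code_head = nn.Linear(dim, codebook_size)  # acoustic teacher target
        self.cqt_head = nn.Linear(dim, cqt_bins)        # musical teacher target

def mert_style_loss(student_feats, mask, rvq_codes, cqt_frames, heads, w=1.0):
    """student_feats: (B, T, dim); mask: (B, T) bool for masked frames;
    rvq_codes: (B, T) long targets; cqt_frames: (B, T, cqt_bins)."""
    h = student_feats[mask]                             # masked positions only
    ce = nn.functional.cross_entropy(heads.code_head(h), rvq_codes[mask])
    mse = nn.functional.mse_loss(heads.cqt_head(h), cqt_frames[mask])
    return ce + w * mse                                 # assumed equal weighting
```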