
    Union-net: A deep neural network model adapted to small data sets

    In real applications, generally only small data sets can be obtained. At present, most practical applications of machine learning address small data sets with classic models built for big data. However, deep neural network models have complex structures and huge numbers of parameters, and training them requires advanced equipment, all of which makes them hard to apply. This paper therefore proposes the concept of union convolution and designs Union-net, a light deep network model with a shallow network structure adapted to small data sets. The model combines convolutional units that receive the same input in different combinations to form a union module, and each union module is equivalent to one convolutional layer. Three modules connected in series constitute a "3-layer" neural network; the output of every union module is then fused and added as the input of a last convolutional layer, forming a complex network with a 4-layer structure. This addresses the loss of low-level information that occurs when a deep network is too deep and its transmission paths are too long. Because the model has few parameters and few channels, it adapts well to small data sets and avoids the overfitting that deep models suffer when trained on them. Multi-class classification experiments on the public data sets CIFAR-10 and 17flowers show that Union-net performs well on both large and small data sets, giving it high practical value in everyday application scenarios. The model code is published at https://github.com/yeaso/union-net. Comment: 13 pages, 6 figures
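    As a rough illustration of the architecture this abstract describes, here is a minimal PyTorch sketch of the union-module idea. The branch design, kernel sizes, and channel widths below are assumptions for illustration only; the authors' actual implementation is in the linked repository.

```python
# Minimal sketch of the union-module idea, assuming two parallel
# convolutional branches per module; see https://github.com/yeaso/union-net
# for the authors' real design.
import torch
import torch.nn as nn

class UnionModule(nn.Module):
    """Convolutional units with different kernels share one input;
    their outputs are united (concatenated) along the channel axis."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branch3 = nn.Sequential(
            nn.Conv2d(in_ch, out_ch // 2, kernel_size=3, padding=1), nn.ReLU())
        self.branch5 = nn.Sequential(
            nn.Conv2d(in_ch, out_ch // 2, kernel_size=5, padding=2), nn.ReLU())

    def forward(self, x):
        return torch.cat([self.branch3(x), self.branch5(x)], dim=1)

class UnionNet(nn.Module):
    """Three union modules in series; every module's output is fused by
    addition and fed to a last convolutional layer (a 4-layer structure)."""
    def __init__(self, in_ch=3, ch=64, num_classes=10):
        super().__init__()
        self.m1 = UnionModule(in_ch, ch)
        self.m2 = UnionModule(ch, ch)
        self.m3 = UnionModule(ch, ch)
        self.final = nn.Conv2d(ch, ch, kernel_size=3, padding=1)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(ch, num_classes))

    def forward(self, x):
        y1 = self.m1(x)
        y2 = self.m2(y1)
        y3 = self.m3(y2)
        fused = y1 + y2 + y3   # short paths keep low-level information alive
        return self.head(self.final(fused))

logits = UnionNet(num_classes=10)(torch.randn(2, 3, 32, 32))  # CIFAR-10-sized input
```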

    Improved micro-continuum approach for capillary-dominated multiphase flow with reduced spurious velocity

    A diverse range of multiphase flow and transport occurs in multiscale porous media. The multiphase micro-continuum Darcy-Brinkman-Stokes (DBS) model has been developed to simulate multiphase flow at both the pore and continuum scales via single-field equations. However, the unacceptable spurious velocities produced by the conventional micro-continuum DBS model challenge the modeling of capillary-dominated flow dynamics. This study improves the micro-continuum DBS model to mitigate these spurious velocities at the gas-liquid interface and in contact-line regions. A hybrid interpolation scheme is proposed that improves the computational accuracy of the interface curvature and reduces the spurious velocity around the gas-liquid interface by 1-2 orders of magnitude. At the porous boundary, the normal to the gas-liquid interface is corrected and the normal to the solid-fluid interface is smoothed to guarantee the prescribed wettability condition, decreasing the spurious velocities in the contact-line region by an order of magnitude. A series of static and dynamic benchmark cases demonstrates that the improved DBS model can simulate capillary-dominated multiphase flows with negligible spurious velocities at capillary numbers as low as 10^-4 in both simple and complex geometries. The improved DBS model can be combined with X-ray computed micro-tomography images to perform multiscale simulations of capillary-dominated multiphase flow and to understand the effect of sub-resolution porosity on fluid dynamics in naturally multiscale rocks.
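    For a concrete picture of the wettability treatment mentioned above, the following numpy sketch shows the standard geometric construction that forces the gas-liquid interface normal to meet a solid at a prescribed contact angle. It is only an illustration of the general idea; the improved DBS model's actual smoothing and hybrid interpolation stencils are not reproduced here, and the function name is hypothetical.

```python
# Illustrative sketch (not the paper's scheme): enforce a prescribed
# contact angle on the gas-liquid interface normal at a solid boundary.
import numpy as np

def corrected_interface_normal(n_fluid, n_solid, theta):
    """n_fluid: raw gas-liquid interface normal from the phase indicator;
    n_solid: (smoothed) solid-fluid unit normal, pointing into the fluid;
    theta:   prescribed equilibrium contact angle in radians."""
    n_fluid = n_fluid / np.linalg.norm(n_fluid)
    # Wall-tangential direction: component of the raw normal in the wall plane.
    t = n_fluid - np.dot(n_fluid, n_solid) * n_solid
    t_norm = np.linalg.norm(t)
    if t_norm < 1e-12:            # interface parallel to the wall: nothing to fix
        return n_fluid
    t /= t_norm
    # Rebuild the normal so it makes exactly angle theta with the wall normal.
    return np.cos(theta) * n_solid + np.sin(theta) * t

# A raw normal perpendicular to the wall, corrected to a 45-degree angle:
print(corrected_interface_normal(np.array([0.0, 1.0, 0.0]),
                                 np.array([1.0, 0.0, 0.0]),
                                 np.deg2rad(45.0)))   # ~[0.707, 0.707, 0.0]
```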

    Approximating Human-Like Few-shot Learning with GPT-based Compression

    In this work, we conceptualize the learning process as information compression. We seek to equip generative pre-trained models with human-like learning capabilities that enable data compression during inference. We present a novel approach that uses the Generative Pre-trained Transformer (GPT) to approximate Kolmogorov complexity, with the aim of estimating the optimal information distance for few-shot learning. We first propose using GPT as a prior for lossless text compression, achieving a noteworthy compression ratio: an experiment with a LLAMA2-7B backbone achieves a compression ratio of 15.5 on enwik9. We justify the pre-training objective of GPT models by demonstrating its equivalence to the compression length and, consequently, its ability to approximate the information distance between texts. Leveraging the approximated information distance, our method enables the direct application of GPT models to quantitative text-similarity measurement. Experimental results show that our method achieves overall superior performance compared with embedding and prompt baselines on challenging NLP tasks, including semantic similarity, zero- and one-shot text classification, and zero-shot text ranking.
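    The information-distance idea here can be made concrete with the standard Normalized Compression Distance, using a language model's code length -log2 p(x) as the compressor. The sketch below assumes a user-supplied `lm_loglikelihood` callable (returning the total token log-likelihood in nats); the concatenation scheme and the 1-nearest-neighbour rule are illustrative, not necessarily the paper's exact procedure.

```python
# Hedged sketch: a GPT-style model's code length stands in for Kolmogorov
# complexity K(x), and the Normalized Compression Distance approximates
# the information distance between two texts.
import math

def code_length_bits(text, lm_loglikelihood):
    """Approximate compressed length of `text` in bits: -log2 p(text)."""
    return -lm_loglikelihood(text) / math.log(2.0)   # nats -> bits

def ncd(x, y, lm_loglikelihood):
    """NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx = code_length_bits(x, lm_loglikelihood)
    cy = code_length_bits(y, lm_loglikelihood)
    cxy = code_length_bits(x + "\n" + y, lm_loglikelihood)
    return (cxy - min(cx, cy)) / max(cx, cy)

def classify_one_shot(query, labelled_examples, lm_loglikelihood):
    """Assign the label of the example closest to `query` under NCD.
    labelled_examples: iterable of (text, label) pairs."""
    best_text, best_label = min(
        labelled_examples,
        key=lambda ex: ncd(query, ex[0], lm_loglikelihood))
    return best_label
```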

    Few-Shot Non-Parametric Learning with Deep Latent Variable Model

    Most real-world problems that machine learning algorithms are expected to solve face 1) an unknown data distribution, 2) little domain-specific knowledge, and 3) datasets with limited annotation. We propose Non-Parametric learning by Compression with Latent Variables (NPC-LV), a learning framework for any dataset with abundant unlabeled data but very few labeled examples. By training only a generative model in an unsupervised way, the framework uses the data distribution to build a compressor. With a compressor-based distance metric derived from Kolmogorov complexity, together with the few labeled data, NPC-LV classifies without any further training. We show that NPC-LV outperforms supervised methods on image classification on all three datasets in the low-data regime and even outperforms semi-supervised learning methods on CIFAR-10. We demonstrate how and when the negative evidence lower bound (nELBO) can be used as an approximate compressed length for classification. By revealing the correlation between compression rate and classification accuracy, we illustrate that under NPC-LV, improving the generative model can enhance downstream classification accuracy. Comment: Accepted to NeurIPS 2022
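    A minimal sketch of how such a compressor-based classifier can operate, assuming a trained latent-variable model whose negative ELBO is exposed as `nelbo(x)` for single inputs and `nelbo_pair(x, y)` for a jointly coded pair; both names, the joint-coding scheme, and the k-nearest-neighbour vote are illustrative stand-ins rather than the paper's exact procedure.

```python
# Hedged sketch: nELBO as an approximate compressed length inside a
# normalized compression distance, followed by a k-NN vote over the
# few labelled examples. No additional training happens at this stage.
from collections import Counter

def ncd_nelbo(x, y, nelbo, nelbo_pair):
    cx, cy = nelbo(x), nelbo(y)
    cxy = nelbo_pair(x, y)                     # joint code length of the pair
    return (cxy - min(cx, cy)) / max(cx, cy)

def knn_classify(query, labelled, nelbo, nelbo_pair, k=3):
    """labelled: list of (example, label) pairs with very few entries."""
    nearest = sorted(labelled,
                     key=lambda ex: ncd_nelbo(query, ex[0], nelbo, nelbo_pair))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```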

    Bacterial Diversity in Soybean Rhizosphere Soil at Seedling and Mature Stages
