
    Union-net: A deep neural network model adapted to small data sets

    In real applications, typically only small data sets can be obtained. At present, most practical applications of machine learning rely on classic models designed for big data to handle small data sets, but deep neural network models have complex structures and huge numbers of parameters, and their training requires advanced equipment, which makes them difficult to apply. This paper therefore proposes the concept of union convolution and designs Union-net, a light deep network model with a shallow structure adapted to small data sets. The model combines convolutional units applied in different configurations to the same input to form a union module; each union module is equivalent to one convolutional layer. Three modules connected serially, input to output, constitute a "3-layer" network, and the outputs of all three modules are fused by addition and fed into a final convolutional layer, yielding a composite network with a 4-layer structure. This avoids the loss of low-level information that occurs when a deep network's transmission path is too long. Because the model has few parameters and few channels, it adapts well to small data sets and mitigates the overfitting that deep models are prone to when trained on them. Multi-class classification experiments on the public data sets CIFAR-10 and 17flowers show that Union-net performs well on both large and small data sets and has high practical value in everyday application scenarios. The model code is published at https://github.com/yeaso/union-net. Comment: 13 pages, 6 figures
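    The union-module structure described above can be illustrated with a minimal NumPy sketch. Everything here is an assumption for illustration: the "conv units" are stand-in linear maps with ReLU, and the channel sizes are toy values, not the paper's actual architecture.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def conv_unit(x, w):
        # Stand-in for a small convolutional unit: linear map + ReLU.
        return np.maximum(w @ x, 0.0)

    def union_module(x, weights):
        # A union module applies several units to the SAME input and
        # concatenates their outputs along the channel axis.
        return np.concatenate([conv_unit(x, w) for w in weights])

    d = 8                               # toy channel count
    x = rng.standard_normal(d)

    # Three serial union modules, each with two parallel units.
    mods = [[rng.standard_normal((d // 2, d)) for _ in range(2)]
            for _ in range(3)]

    outs = []
    h = x
    for weights in mods:
        h = union_module(h, weights)    # serial input/output between modules
        outs.append(h)

    # Fuse: add every module's output and feed the sum to a final layer.
    fused = sum(outs)                   # each output has shape (d,)
    w_last = rng.standard_normal((d, d))
    y = conv_unit(fused, w_last)        # the "4th" layer; y has shape (8,)
    ```

    The additive fusion is what gives every module a short path to the final layer, which is the mechanism the abstract credits for preserving low-level information.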

    Commodifying Non-English Foreign Language via Chinese University Websites

    This paper examines the commodification of non-English foreign languages through the official websites of 42 of China's double first-class universities. Informed by the concept of language as commodity (Heller 2010), the study examines how non-English foreign languages are ideologically constructed as valued resources exchangeable for decent jobs, advanced education, and China's regional integration. However, the study also finds that even though these websites strive to portray non-English foreign languages as valuable commodities, English still permeates the entire promotional process as the default language. There thus remains a tension between the ideal promotional vision and actual practice. This study sheds light on the valorization of multilingual education in China and the promotion of non-English foreign languages to the world

    Editorial: Cell therapy, liver diseases, and regeneration


    Text Classification Using Novel Term Weighting Scheme-Based Improved TF-IDF for Internet Media Reports

    With the rapid development of internet technology, large amounts of internet text data have become available. Text classification (TC) plays a very important role in processing massive text data, but classification accuracy is directly affected by the performance of term weighting in TC. Because it was originally designed for information retrieval (IR), term frequency-inverse document frequency (TF-IDF) is not effective enough for TC, especially for text data with unbalanced distributions, as in internet media reports. Therefore, the variance between the DF value of a particular term and the average of all DFs, namely the document frequency variance (ADF), is proposed to improve the handling of text data with unbalanced distributions. The standard TF-IDF is then modified with the proposed ADF in four different ways, namely TF-IADF, TF-IADF+, TF-IADFnorm, and TF-IADF+norm, yielding an effective model for the TC task on internet media reports. A series of simulations was carried out to evaluate the performance of the proposed methods. Compared with TF-IDF on state-of-the-art classification algorithms, the simulation results confirm the effectiveness and feasibility of the proposed methods
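    The core idea, replacing the IDF factor with one based on how far a term's document frequency deviates from the average DF, can be sketched as follows. The exact ADF formula is not given in this abstract, so the deviation factor below is an assumption for illustration only; the paper's four variants (TF-IADF, TF-IADF+, and their normalized forms) will differ in detail.

    ```python
    import math
    from collections import Counter

    docs = [
        "stocks rise on tech earnings",
        "tech stocks fall after earnings miss",
        "local festival draws record crowds",
    ]
    tokenized = [d.split() for d in docs]

    # Document frequency of each term, and the average over all terms.
    df = Counter(t for doc in tokenized for t in set(doc))
    avg_df = sum(df.values()) / len(df)

    def tf_iadf(term, doc):
        # Hypothetical reading of the paper's idea: scale term frequency
        # by a factor built from the deviation of the term's DF from the
        # average DF, instead of the usual inverse-DF factor.
        tf = doc.count(term) / len(doc)
        iadf = math.log(1 + abs(df[term] - avg_df))  # assumed deviation factor
        return tf * iadf
    ```

    Intuitively, on an unbalanced collection the plain IDF factor under- or over-weights terms concentrated in the dominant class; a deviation-from-average factor reacts to how atypical a term's spread is rather than only to its rarity.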

    RACE: An Efficient Redundancy-aware Accelerator for Dynamic Graph Neural Network

    Dynamic Graph Neural Networks (DGNNs) have recently attracted significant research attention from various domains, because most real-world graphs are inherently dynamic. Despite many research efforts, existing hardware/software solutions for DGNNs still suffer significantly from redundant computation and memory-access overhead, because they need to irregularly access and recompute all graph data of each graph snapshot. To address these issues, we propose an efficient redundancy-aware accelerator, RACE, which enables energy-efficient execution of DGNN models. Specifically, we build a redundancy-aware incremental execution approach into the accelerator design so that the output features of the latest graph snapshot are obtained by correctly and incrementally refining the output features of the previous snapshot, while also enabling regular accesses to vertices' input features. By traversing the graph on the fly, RACE identifies the vertices that are not affected by graph updates between successive snapshots and reuses those vertices' states (i.e., their output features) from the previous snapshot when processing the latest one. The vertices affected by graph updates are tracked and their new states are incrementally recomputed, for correctness, using their neighbors' input features from the latest snapshot. In this way, the processing and accessing of the many graph data unaffected by updates can be correctly eliminated, reducing redundant computation and memory-access overhead. In addition, the most frequently accessed input features are dynamically identified from the graph topology and preferentially kept resident in on-chip memory to reduce off-chip communication.
Experimental results show that RACE achieves on average 1139× and 84.7× speedups for DGNN inference, with average energy savings of 2242× and 234.2×, compared with the state-of-the-art software DGNN running on an Intel Xeon CPU and an NVIDIA A100 GPU, respectively. Moreover, for DGNN inference, RACE obtains on average 13.1×, 11.7×, 10.4×, and 7.9× speedups and 14.8×, 12.9×, 11.5×, and 8.9× energy savings over the state-of-the-art Graph Neural Network accelerators AWB-GCN, GCNAX, ReGNN, and I-GCN, respectively
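    The reuse-versus-recompute idea can be shown with a toy software sketch. Everything here is illustrative: the one-hop mean aggregation stands in for a real DGNN layer, and the function names are invented; RACE itself is a hardware mechanism and far more involved.

    ```python
    # Hedged sketch of redundancy-aware incremental execution between
    # two graph snapshots (illustrative only, not the RACE design).

    def aggregate(v, adj, feats):
        # Mean over the vertex and its neighbors, standing in for a GNN layer.
        vals = [feats[v]] + [feats[u] for u in adj[v]]
        return sum(vals) / len(vals)

    def incremental_update(adj, feats, cached_out, changed_edges):
        # With one-hop aggregation, only the endpoints of changed edges
        # need recomputation; every other vertex reuses its cached state.
        affected = {v for edge in changed_edges for v in edge}
        out = dict(cached_out)                 # reuse unaffected states
        for v in affected:
            out[v] = aggregate(v, adj, feats)  # recompute only these
        return out, affected

    # Snapshot 1: path 0-1-2 plus an isolated vertex 3, scalar features.
    adj = {0: [1], 1: [0, 2], 2: [1], 3: []}
    feats = {v: float(v) for v in adj}
    cached = {v: aggregate(v, adj, feats) for v in adj}

    # Snapshot 2: one new edge (2, 3); only vertices 2 and 3 are recomputed,
    # vertices 0 and 1 keep their snapshot-1 output features.
    adj[2].append(3)
    adj[3].append(2)
    out, affected = incremental_update(adj, feats, cached, [(2, 3)])
    ```

    With deeper (multi-hop) aggregation the affected set must be expanded hop by hop, which is why tracking update propagation through the topology matters for correctness.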

    Clean process to utilize the potassium-containing phosphorous rock with simultaneous HCl and KCl production via the steam-mediated reactions

    In this paper, a clean process based on steam-mediated reactions for simultaneous HCl and KCl production, using potassium (K)-containing phosphorous rock as the precursor, is proposed. Through hydrochloric acid (HCl) leaching, not only were H3PO4 and CaCl2 (via further precipitation) generated, but the acid-insoluble residue [phosphorous-rock slag (PS)], rich in elements such as K, Al, and Si in the form of microcline (KAlSi3O8) and quartz (SiO2), was also obtained and became readily available for further HCl and KCl generation. Over 95% of the K, Al, and Si ends up in the final products, and the overall acid consumption (based on HCl) is reduced significantly (by 90%) owing to acid recovery. The impacts of key operational parameters such as temperature, duration, and reagent impregnation ratio were rigorously analyzed via a supervised machine-learning approach, and the optimal conditions were determined [reaction temperature X1, 850 °C; reaction duration X2, 40 min; impregnation ratio (PS over CaCl2) X3, 2.5] with approximately ±10% uncertainty. Thermodynamic analysis indicates that introducing steam to PS + CaCl2 not only enhances the chemical potential for the formation of HCl and KCl but also provides a transport advantage by continuously removing the generated products (HCl and KCl) from the system. Molecular simulation indicates that the presence of both steam and SiO2 in the PS matrix plays a critical role in decomposing PS + CaCl2 at high temperature. The shrinking core model shows that both the intrinsic kinetics and transport are influential, with an activation energy of around 14.63 kJ/mol. A potential reaction pathway is postulated
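    The reported activation energy can be put in perspective with a quick Arrhenius calculation. This is a generic sketch using only the Ea value from the abstract; the temperatures compared are illustrative, not conditions from the paper.

    ```python
    import math

    R = 8.314          # gas constant, J/(mol*K)
    Ea = 14.63e3       # J/mol, activation energy reported in the abstract

    def arrhenius_ratio(T1, T2):
        # Rate-constant ratio k(T2)/k(T1) from the Arrhenius law,
        # k = A * exp(-Ea / (R*T)); the pre-exponential factor cancels.
        return math.exp(-Ea / R * (1.0 / T2 - 1.0 / T1))

    # With such a low Ea, raising the temperature from 750 degC to 850 degC
    # speeds the intrinsic step only modestly (a factor of roughly 1.17),
    # consistent with transport also limiting the overall rate, as the
    # shrinking core analysis in the abstract concludes.
    ratio = arrhenius_ratio(750 + 273.15, 850 + 273.15)
    ```

    An activation energy of ~15 kJ/mol is low for a chemical step, which is the usual signature of a process at least partly controlled by diffusion or mass transport rather than pure surface reaction.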