289 research outputs found

    The investigation of the potential of municipal solid waste incineration (MSWI) fly ash as a mineral resource


    Alkali activation of MSWI bottom ash: Effects of the SiO2/Na2O ratio

    Due to its high mineral content, the valorization of bottom ash from municipal solid waste incineration (MSWI) as a potential precursor for alkali-activated materials is attracting attention. The literature shows large variation in the activator solutions used to activate MSWI bottom ash. Most studies consider the bulk composition rather than the reactive fraction of MSWI bottom ash in the alkali activation design, even though a large part of the Si present in MSWI bottom ash is in the form of non-reactive quartz. In this study, mainly the slag fraction was considered: the glass, ceramic, and natural stony materials were removed before the MSWI bottom ash was used as a precursor. An efficient activator solution was developed by accounting for the reactive silica content of the MSWI bottom ash, determined by a dissolution test. The alkali activator was made of NaOH solution with concentrations varying from 4 M to 8 M and Na2SiO3 solution with moduli of 0.75 to 1.5. The effect of the SiO2/Na2O ratio, where the SiO2 comprises the reactive Si contributed by the MSWI bottom ash slag and by the Na2SiO3 in the activator solution, on the compressive strength of alkali-activated MSWI bottom ash was studied. XRD was used to determine the reaction products, SEM to observe the morphology of the synthesized binder phase, and EDX to determine the binder chemistry.
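    As a rough illustration of the mix-design idea in this abstract, the combined SiO2/Na2O modulus can be computed from molar amounts, counting both the SiO2 in the sodium silicate and the reactive SiO2 from the bottom ash slag, with NaOH contributing Na2O at half its molar amount (2 NaOH → Na2O + H2O). This is a minimal sketch with hypothetical quantities, not the study's actual formulation.

```python
def combined_modulus(n_sio2_silicate, n_na2o_silicate, n_naoh, n_sio2_reactive_ash):
    """Combined SiO2/Na2O molar ratio of an activator plus reactive ash silica.

    All arguments are molar amounts (mol); values used here are illustrative.
    """
    # 2 NaOH -> Na2O + H2O, so NaOH contributes half its moles as Na2O.
    n_na2o = n_na2o_silicate + n_naoh / 2.0
    # Reactive Si from the slag fraction counts toward the available SiO2.
    n_sio2 = n_sio2_silicate + n_sio2_reactive_ash
    return n_sio2 / n_na2o

# Hypothetical example: 1 mol SiO2 and 1 mol Na2O from sodium silicate,
# 2 mol NaOH, 0.5 mol reactive SiO2 dissolved from the slag.
ratio = combined_modulus(1.0, 1.0, 2.0, 0.5)  # (1 + 0.5) / (1 + 1) = 0.75
```

    Adding reactive ash silica to the numerator is what distinguishes this design from one based only on the commercial activator's own modulus.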

    Neural Architecture Search for Convolutional and Transformer Deep Neural Networks

    Deep Neural Networks (DNN) have dominated computer vision tasks during the last decade. However, in recent years, the demand for highly customized deep neural networks has increased significantly, making manual design of architectures a tedious and time-consuming process. Neural architecture search (NAS), which aims to find the optimal network architecture automatically, has significantly improved network performance in many computer vision tasks. Convolutional Neural Network (CNN)-based architectures have been the mainstream in computer vision. Although many NAS methods have improved the performance of CNNs, their computation cost, on the order of thousands of GPU hours, is not affordable for most researchers. To this end, in Chapter 2 we propose a new indicator for the CNN neural architecture search process that significantly reduces the time required for model training and evaluation during NAS. Recently, transformers without CNN-based backbones have been found to achieve impressive performance on image recognition. In Chapter 3 we propose the first neural architecture search method for Vision Transformer (ViT) models, which improves their performance; we introduce convolutional layers and propose a new hierarchical NAS method to tackle the huge search space. Finally, in Chapter 4, we introduce a new search space for ViT models and use NAS to find the optimal architecture. We identify redundancy among transformer blocks and propose a new attention-sharing technique to improve network capability, again using NAS to decide whether each block should share attention. In summary, we propose architecture search methods for both CNN and ViT models, aiming to enhance the effectiveness of NAS and the performance of existing models. We hope this work will contribute to the field and make NAS methods affordable to all researchers.
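    The general pattern behind indicator-based NAS described above can be sketched as follows: candidates in a search space are ranked by a cheap proxy score instead of being fully trained. The search space (depth × width) and the proxy function below are hypothetical placeholders, not the indicator proposed in the thesis.

```python
import itertools

def proxy_score(depth, width):
    """Hypothetical zero-cost indicator: rewards capacity, penalizes size.

    A real indicator would inspect the untrained network (e.g. gradients or
    activations); this stand-in only uses a crude parameter-count estimate.
    """
    params = depth * width * width  # rough parameter count of a conv stack
    return params ** 0.5 / (1 + 0.001 * params)

def search(depths, widths):
    """Exhaustively score every (depth, width) candidate; return the best."""
    candidates = list(itertools.product(depths, widths))
    return max(candidates, key=lambda c: proxy_score(*c))

best = search(depths=[2, 4], widths=[16, 32])
```

    Because no candidate is trained, the loop's cost is per-architecture evaluation of the indicator, which is what makes such methods orders of magnitude cheaper than training-based NAS.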

    The Impact of Human Capital on Enterprise Management

    China is the country with the largest population in the world. It has abundant human resources but is very scarce in human capital, which is lower than that of the most developed countries. This scarcity of human capital directly affects enterprise growth, as human capital plays a key role in the operation and development of enterprises. Giving a firm's internal employees full credit for the value of their human capital can act as a powerful engine for its growth. Human capital indicates the market's demand for labor and the value that labor can provide to the economy, whereas human resources reflect the number and caliber of laborers in a nation; the distinction concerns how supply and demand in the market are changing. Investment in human resources produces human capital. This paper discusses an all-encompassing notion of human capital, as well as the conditions required for the transformation of human resources into human capital. It also emphasizes the integration of human capital within a dynamic multi-loop nexus of social capital, learning, and knowledge management. In addition, the characteristics of human capital management in modern enterprises and the feasibility of transforming human resources into human capital are discussed. The transformation of human resources into human capital shows that labor has played its role in the production system. Employees are not only an important condition for enterprise growth; they can also promote the progress of science and technology and increase productivity.

    Modify Training Directions in Function Space to Reduce Generalization Error

    We propose theoretical analyses of a modified natural gradient descent method in the neural network function space, based on the eigendecompositions of the neural tangent kernel and the Fisher information matrix. We first present an analytical expression for the function learned by this modified natural gradient under the assumptions of a Gaussian distribution and the infinite-width limit. We then explicitly derive the generalization error of the learned neural network function using tools from eigendecomposition and statistics. By decomposing the total generalization error into contributions from different eigenspaces of the kernel in function space, we propose a criterion for balancing the errors stemming from the training set and from the distribution discrepancy between the training set and the true data. Through this approach, we establish that modifying the training direction of the neural network in function space reduces the total generalization error. Furthermore, we demonstrate that this theoretical framework can explain many existing results on generalization-enhancing methods. These theoretical results are also illustrated by numerical examples on synthetic data.
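    The eigenspace decomposition of error that this abstract relies on can be sketched with the standard (unmodified) gradient-flow dynamics in the kernel eigenbasis, a common simplification in NTK analyses: the residual component along the i-th eigenfunction decays at a rate set by the i-th eigenvalue, so slow modes dominate the remaining error. The eigenvalues and target coefficients below are illustrative, not taken from the paper.

```python
import math

def per_mode_error(eigvals, coeffs, t):
    """Squared residual per kernel eigenmode after gradient flow for time t.

    Under linearized (NTK) training dynamics, the residual along eigenmode i
    decays as exp(-lambda_i * t); its squared contribution to the error is
    coeffs[i]^2 * exp(-2 * lambda_i * t).
    """
    return [(c * math.exp(-lam * t)) ** 2 for lam, c in zip(eigvals, coeffs)]

# Illustrative spectrum: one fast, one medium, one slow mode, equal targets.
errors = per_mode_error([1.0, 0.1, 0.01], [1.0, 1.0, 1.0], t=10.0)
```

    Summing the per-mode terms gives the training-set part of the error; a modified training direction, in this picture, amounts to reweighting how fast each eigenmode is learned.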