
    Solubility trapping as a potential secondary mechanism for CO2 sequestration during enhanced gas recovery by CO2 injection in conventional natural gas reservoirs : an experimental approach

    This study experimentally investigates the potential of the solubility trapping mechanism to increase CO2 storage during EGR by CO2 injection and sequestration in conventional natural gas reservoirs. A laboratory core flooding process was carried out to simulate EGR on a sandstone core at 0, 5, and 10 wt% NaCl formation water salinity, at 1300 psig, 50 °C, and a 0.3 ml/min injection rate. The results show that CO2 storage capacity was improved significantly when solubility trapping was considered. The lower connate water salinities (0 and 5 wt%) showed higher CO2 solubility based on IFT measurements. At 10 wt% connate water salinity, the highest accumulation of CO2 in the reservoir was realised, with about 63% of the total injected CO2 stored, indicating improved storage capacity. Therefore, solubility trapping can potentially increase the CO2 storage capacity of a gas reservoir, serving as a secondary trapping mechanism in addition to the primary structural and stratigraphic trapping, while also improving CH4 recovery.

    EffConv: Efficient Learning of Kernel Sizes for Convolution Layers of CNNs

    Determining the kernel sizes of a CNN model is a crucial and non-trivial design choice that significantly impacts its performance. The majority of kernel size design methods rely on complex heuristic tricks or leverage neural architecture search, which requires extreme computational resources. Thus, learning kernel sizes jointly with the model weights, using methods such as modeling kernels as a combination of basis functions, has been proposed as a workaround. However, previous methods cannot achieve satisfactory results or are inefficient for large-scale datasets. To fill this gap, we design a novel, efficient kernel size learning method in which a size predictor model learns to predict optimal kernel sizes for a classifier given a desired number of parameters. It does so in collaboration with a kernel predictor model that predicts the weights of the kernels (given the kernel sizes predicted by the size predictor) to minimize the training objective, and both models are trained end-to-end. Our method needs only a small fraction of the training epochs of the original CNN to train these two models and find proper kernel sizes for it. Thus, it offers an efficient and effective solution to the kernel size learning problem. Our extensive experiments on MNIST, CIFAR-10, STL-10, and ImageNet-32 demonstrate that our method achieves the best training time vs. accuracy trade-off compared to previous kernel size learning methods and significantly outperforms them on challenging datasets such as STL-10 and ImageNet-32. Our implementations are available at https://github.com/Alii-Ganjj/EffConv
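    The interaction between the two predictors described above can be sketched as follows. This is a minimal illustrative toy, not the paper's actual architecture: the class names (SizePredictor, KernelPredictor), the greedy budget-fitting rule, and the random weight initialisation are all assumptions made for illustration; in the real method both models are neural networks trained end-to-end against the classifier's training objective.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    class SizePredictor:
        """Toy stand-in: maps a desired parameter budget to per-layer
        kernel sizes by greedily picking the largest size that fits.
        (The paper's size predictor is a learned model, not a heuristic.)"""
        def __init__(self, n_layers, candidates=(1, 3, 5, 7)):
            self.n_layers = n_layers
            self.candidates = candidates

        def predict(self, param_budget):
            sizes, remaining = [], param_budget
            for _ in range(self.n_layers):
                for k in sorted(self.candidates, reverse=True):
                    if k * k <= remaining:  # a k x k kernel costs k*k params
                        sizes.append(k)
                        remaining -= k * k
                        break
                else:
                    sizes.append(min(self.candidates))
            return sizes

    class KernelPredictor:
        """Toy stand-in: emits weight tensors matching the predicted
        sizes (random here; trained jointly in the actual method)."""
        def predict(self, sizes):
            return [rng.standard_normal((k, k)) for k in sizes]

    sp = SizePredictor(n_layers=3)
    sizes = sp.predict(param_budget=60)          # largest kernels that fit 60 params
    kernels = KernelPredictor().predict(sizes)   # weights shaped to those sizes
    total_params = sum(w.size for w in kernels)
    print(sizes, total_params)                   # [7, 3, 1] 59
    ```

    The point of the sketch is the division of labour: one component chooses sizes under a parameter constraint, and the other supplies weights of exactly those shapes, so the pair can be optimised together against a single objective.
    
    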