
    Microwave Power Measurements: Standards and Transfer Techniques

    In this chapter, precision power measurement, which is probably the most important area in RF and microwave metrology, is discussed. First, the background of RF and microwave power measurements and standards is introduced. Second, the working principle of the primary power standard (i.e., the microcalorimeter) is described, followed by a discussion of the direct comparison transfer technique. Finally, performance evaluation and uncertainty estimation for microwave power measurements are discussed.
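    As context for the transfer discussion, below is a minimal sketch of the well-known relation between a power sensor's effective efficiency and its calibration factor, K = η_e(1 − |Γ|²). The function name and numbers are illustrative, not from the chapter:

```python
def calibration_factor(effective_efficiency: float, gamma_mag: float) -> float:
    """Calibration factor K = eta_e * (1 - |Gamma|^2), where eta_e is the
    sensor's effective efficiency and Gamma is its input reflection
    coefficient (magnitude only needed here)."""
    return effective_efficiency * (1.0 - gamma_mag ** 2)

# Illustrative values: eta_e = 0.95, |Gamma| = 0.05  ->  K ~= 0.9476
print(calibration_factor(0.95, 0.05))
```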

    CT Experiments and Image Processing for the Water-Oil Displacement at Pore Scale

    We established a CT experimental method for studying water-oil displacement at the pore scale. The microscopic core model, made of reservoir coring materials, faithfully reflects the surface properties and pore structure of reservoir rocks. We scanned the core model at different water-flooding stages using a SkyScan1174v2 CT scanner and obtained high-resolution images. This paper adopts a new image segmentation method based on discriminant analysis constrained by the measured porosity and oil saturation, which improves segmentation accuracy. We used the new algorithm to segment pores and residual oil in the scanned images; the segmentation results agreed with those measured in the core experiments.
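    One simple way to realize a porosity-constrained segmentation is to choose the grayscale threshold whose pore fraction matches the independently measured porosity. The NumPy sketch below (the function name and synthetic volume are ours; the paper's discriminant-analysis method is more elaborate) illustrates only that constraint:

```python
import numpy as np

def porosity_constrained_threshold(image: np.ndarray, porosity: float) -> float:
    """Pick the grayscale threshold such that the darkest `porosity`
    fraction of voxels is labeled as pore space."""
    return float(np.quantile(image, porosity))

# Illustrative use on a synthetic 3-D volume
rng = np.random.default_rng(0)
volume = rng.normal(size=(64, 64, 64))
t = porosity_constrained_threshold(volume, porosity=0.25)
pores = volume <= t          # boolean pore mask
print(pores.mean())          # ~0.25 by construction
```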

    A Mutual Learning Framework for Pruned and Quantized Networks

    Model compression is an important topic in deep learning research. It can be divided into two main directions: model pruning and model quantization. However, both methods affect the original accuracy of the model to some degree. In this paper, we propose a mutual learning framework for pruned and quantized networks. We regard the pruned network and the quantized network as two non-parallel sets of features. The purpose of our mutual learning framework is to better integrate the two sets of features and achieve complementary advantages, which we call feature augmentation. To verify the effectiveness of our framework, we select pairwise combinations of 3 state-of-the-art pruning algorithms and 3 state-of-the-art quantization algorithms. Extensive experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet show the benefits of our framework: through the mutual learning of the two networks, we obtain a pruned network and a quantized network with higher accuracy than traditional approaches.
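    A minimal PyTorch sketch of a generic mutual-learning objective, in which each network is trained with cross-entropy on the labels plus a KL term pulling it toward the other network's softened predictions. The function name, temperature handling, and detach choice are assumptions, not the paper's exact feature-augmentation loss:

```python
import torch.nn.functional as F

def mutual_learning_losses(logits_pruned, logits_quant, targets, T=1.0):
    """Return one loss per network: cross-entropy on the labels plus a KL
    term toward the peer network's softened output (peer is detached so
    each loss only updates its own network)."""
    ce_p = F.cross_entropy(logits_pruned, targets)
    ce_q = F.cross_entropy(logits_quant, targets)
    kl_p = F.kl_div(F.log_softmax(logits_pruned / T, dim=1),
                    F.softmax(logits_quant / T, dim=1).detach(),
                    reduction="batchmean") * T * T
    kl_q = F.kl_div(F.log_softmax(logits_quant / T, dim=1),
                    F.softmax(logits_pruned / T, dim=1).detach(),
                    reduction="batchmean") * T * T
    return ce_p + kl_p, ce_q + kl_q
```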

    Activating More Information in Arbitrary-Scale Image Super-Resolution

    Single-image super-resolution (SISR) has experienced vigorous growth with the rapid development of deep learning. However, handling arbitrary scales (e.g., integer, non-integer, or asymmetric) with a single model remains a challenging task. Existing super-resolution (SR) networks commonly employ static convolutions during feature extraction, which cannot effectively perceive changes in scale. Moreover, existing continuous-scale upsampling modules use only the scale factors, without considering the diversity of local features. To activate more information for better reconstruction, we design two plug-in modules, compatible with fixed-scale networks, for arbitrary-scale SR tasks. First, we design a Scale-aware Local Feature Adaptation Module (SLFAM), which adaptively adjusts the attention weights of dynamic filters based on local features and scales, giving the network stronger representation capabilities. Then we propose a Local Feature Adaptation Upsampling Module (LFAUM), which combines scales and local features to perform arbitrary-scale reconstruction, allowing the upsampling to adapt to local structures. In addition, deformable convolution is used to activate more information during reconstruction, enabling the network to better adapt to texture features. Extensive experiments on various benchmark datasets demonstrate that integrating the proposed modules into a fixed-scale SR network enables it to achieve satisfactory results at non-integer or asymmetric scales while maintaining advanced performance at integer scales.
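    A rough PyTorch sketch of the general idea behind scale-aware dynamic filtering: attention weights computed from the (possibly asymmetric) scale factor select among candidate filters. The class name, layer sizes, and omission of the local-feature branch are all our assumptions; the paper's SLFAM also conditions the attention on local features:

```python
import torch
import torch.nn as nn

class ScaleAwareFilterAttention(nn.Module):
    """Combines K candidate 3x3 conv filters using attention weights
    predicted from the upsampling scale (h, w)."""
    def __init__(self, channels: int, num_filters: int = 4):
        super().__init__()
        self.filters = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=1)
            for _ in range(num_filters))
        self.attn = nn.Sequential(
            nn.Linear(2, 32), nn.ReLU(),
            nn.Linear(32, num_filters), nn.Softmax(dim=-1))

    def forward(self, x: torch.Tensor, scale: tuple) -> torch.Tensor:
        # Attention over candidate filters, conditioned on the scale factor
        w = self.attn(torch.tensor(scale, dtype=x.dtype, device=x.device))
        return sum(w[i] * f(x) for i, f in enumerate(self.filters))

# Illustrative use: asymmetric scale (2.0, 1.5) on a feature map
m = ScaleAwareFilterAttention(channels=16)
y = m(torch.randn(1, 16, 32, 32), scale=(2.0, 1.5))
```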