
    BLADE: Filter Learning for General Purpose Computational Photography

    The Rapid and Accurate Image Super Resolution (RAISR) method of Romano, Isidoro, and Milanfar is a computationally efficient image upscaling method using a trained set of filters. We describe a generalization of RAISR, which we name Best Linear Adaptive Enhancement (BLADE). This approach is a trainable edge-adaptive filtering framework that is general, simple, computationally efficient, and useful for a wide range of problems in computational photography. We show applications to operations that may appear in a camera pipeline, including denoising, demosaicing, and stylization.
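
    At inference time, a RAISR/BLADE-style filter bank works by classifying each pixel's local gradient structure into a bucket and applying the filter learned for that bucket. The sketch below shows only the simplest version of that idea, bucketing on gradient orientation alone (the actual method also buckets on gradient strength and coherence, and learns the filters from data); the function and variable names are illustrative, not the paper's API.

        import numpy as np

        def blade_apply(image, filter_bank, n_orient=8, patch=5):
            """Toy edge-adaptive filtering: pick a learned filter per pixel from the
            local gradient orientation, then apply it. filter_bank has shape
            (n_orient, patch, patch) and is assumed to have been trained offline."""
            pad = patch // 2
            padded = np.pad(image, pad, mode="edge")
            gy, gx = np.gradient(image)
            # Quantize local gradient orientation into one of n_orient buckets.
            orient = np.arctan2(gy, gx) % np.pi
            bucket = np.minimum((orient / np.pi * n_orient).astype(int), n_orient - 1)
            out = np.empty_like(image)
            h, w = image.shape
            for i in range(h):
                for j in range(w):
                    window = padded[i:i + patch, j:j + patch]
                    out[i, j] = np.sum(window * filter_bank[bucket[i, j]])
            return out

        # Usage with random (untrained) filters, just to show the data flow.
        rng = np.random.default_rng(0)
        img = rng.random((32, 32))
        bank = rng.random((8, 5, 5))
        bank /= bank.sum(axis=(1, 2), keepdims=True)  # normalize each filter
        print(blade_apply(img, bank).shape)  # (32, 32)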

    Tomographic Study of Internal Erosion of Particle Flows in Porous Media

    In particle-laden flows through porous media, porosity and permeability are significantly affected by the deposition and erosion of particles. Experiments show that the permeability evolution of a porous medium subjected to a particle suspension is not smooth, but rather exhibits significant jumps followed by longer periods of continuous permeability decrease. Their origin appears to be related to internal flow-path reorganization by avalanches of deposited material due to erosion inside the porous medium. We apply neutron tomography to resolve the spatio-temporal evolution of the pore space during clogging and unclogging and to confirm the hypothesis that flow-path reorganization lies behind the permeability jumps. This mechanistic understanding of clogging phenomena is relevant for a number of applications, from oil production to filtration and suffosion, the mechanism behind sinkhole formation.
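
    The permeability values tracked in such experiments are typically obtained from the measured flow rate and pressure drop via Darcy's law. A minimal sketch of that calculation, with purely illustrative numbers (not values from this study):

        def darcy_permeability(q, mu, length, area, dp):
            """Apparent permeability k [m^2] from Darcy's law, q = k * A * dp / (mu * L).
            q: volumetric flow rate [m^3/s], mu: dynamic viscosity [Pa s],
            length: sample length [m], area: cross-section [m^2], dp: pressure drop [Pa]."""
            return q * mu * length / (area * dp)

        # Example: water (mu ~ 1e-3 Pa s) through a 0.1 m long, 1 cm^2 core at
        # 1e-8 m^3/s under a 1e4 Pa pressure drop gives k ~ 1e-12 m^2 (about 1 darcy).
        print(darcy_permeability(q=1e-8, mu=1e-3, length=0.1, area=1e-4, dp=1e4))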

    Multimodal X-ray imaging of nanocontainer-treated macrophages and calcium distribution in the perilacunar bone matrix

    Studies of biological systems typically require the application of several complementary methods able to yield statistically relevant results at a unique level of sensitivity. Combined X-ray fluorescence and ptychography offer excellent elemental and structural imaging contrasts at the nanoscale, enabling a robust correlation of elemental distributions with the cellular morphology. Here we extend the applicability of the two modalities to higher X-ray excitation energies, permitting iron mapping. Using a long-range scanning setup, we applied the method to two relevant biomedical cases. We quantified the iron distributions in a population of macrophages treated with Mycobacterium-tuberculosis-targeting iron-oxide nanocontainers, which allowed us to visualize the internalization of the nanocontainer agglomerates in the cytosol. From the iron areal mass maps, we obtained a distribution of antibiotic load per agglomerate and an average areal concentration of nanocontainers in the agglomerates. In the second application we mapped the calcium content in a human bone matrix in close proximity to osteocyte lacunae (the perilacunar matrix). A concurrently acquired ptychographic image was used to remove the mass-thickness effect from the raw calcium map. The resulting ptychography-enhanced calcium distribution then made it possible to observe a locally lower degree of mineralization of the perilacunar matrix.
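
    The ptychography-enhanced calcium map described above amounts to normalizing the fluorescence-derived calcium areal mass by a mass-thickness estimate obtained from the ptychographic image; a minimal sketch under that assumption (array names, units, and the toy data are illustrative):

        import numpy as np

        def normalize_calcium(ca_areal_mass, mass_thickness, eps=1e-12):
            """Divide the raw XRF calcium areal-mass map [g/cm^2] by the co-registered
            mass-thickness map from ptychography [g/cm^2] to obtain a calcium mass
            fraction that no longer depends on local sample thickness."""
            return ca_areal_mass / np.maximum(mass_thickness, eps)

        rng = np.random.default_rng(1)
        ca_map = rng.random((64, 64)) * 1e-5                  # toy XRF calcium map
        thickness_map = 1e-4 + rng.random((64, 64)) * 1e-4    # toy mass-thickness map
        ca_fraction = normalize_calcium(ca_map, thickness_map)
        print(ca_fraction.mean())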

    Reservoir Characterisation: Multi-Scale Permeability Data Integration: Lake Albert Basin, Uganda


    Inversion by Direct Iteration: An Alternative to Denoising Diffusion for Image Restoration

    Inversion by Direct Iteration (InDI) is a new formulation for supervised image restoration that avoids the so-called "regression to the mean" effect and produces more realistic and detailed images than existing regression-based methods. It does this by gradually improving image quality in small steps, similar to generative denoising diffusion models. Image restoration is an ill-posed problem where multiple high-quality images are plausible reconstructions of a given low-quality input. The outcome of a single-step regression model is therefore typically an aggregate of all possible explanations, lacking detail and realism. The main advantage of InDI is that it does not try to predict the clean target image in a single step but instead gradually improves the image in small steps, resulting in better perceptual quality. While generative denoising diffusion models also work in small steps, our formulation is distinct in that it does not require knowledge of any analytic form of the degradation process. Instead, we directly learn an iterative restoration process from paired low-quality and high-quality examples. InDI can be applied to virtually any image degradation, given paired training data. In conditional denoising diffusion image restoration, the denoising network generates the restored image by repeatedly denoising an initial image of pure noise, conditioned on the degraded input. Contrary to such conditional denoising formulations, InDI proceeds by iteratively restoring the input low-quality image directly, producing high-quality results on a variety of image restoration tasks, including motion and out-of-focus deblurring, super-resolution, compression artifact removal, and denoising.
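
    A minimal sketch of the kind of iterative refinement loop described above, assuming the linear-interpolation view x_t = (1 - t) * x_clean + t * x_degraded and a network restorer(x, t) trained to predict the clean image; the function names, dummy restorer, and step count are illustrative, not the paper's code.

        import numpy as np

        def indi_restore(restorer, y_degraded, n_steps=20):
            """Iterative restoration: at each step the network predicts a clean
            estimate and the iterate moves a small fraction of the way toward it,
            shrinking t from 1 (fully degraded) to 0 (restored)."""
            x = np.array(y_degraded, dtype=float)   # start from the low-quality input (t = 1)
            ts = np.linspace(1.0, 0.0, n_steps + 1)
            for t_cur, t_next in zip(ts[:-1], ts[1:]):
                x_clean_hat = restorer(x, t_cur)    # network's current clean estimate
                step = (t_cur - t_next) / t_cur     # fraction of remaining degradation removed
                x = step * x_clean_hat + (1.0 - step) * x
            return x

        # Dummy stand-in for a trained network (a 3-tap moving average), just so
        # the loop runs end to end on a toy 1-D noisy signal.
        def dummy_restorer(x, t):
            return np.convolve(x, np.ones(3) / 3, mode="same")

        noisy = np.sin(np.linspace(0, 6, 200)) + 0.3 * np.random.default_rng(0).normal(size=200)
        print(indi_restore(dummy_restorer, noisy, n_steps=10).shape)  # (200,)

    With n_steps = 1 the loop collapses to a single regression prediction, which is exactly the regression-to-the-mean behaviour the abstract contrasts against.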

    Upscaling and Inverse Modeling of Groundwater Flow and Mass Transport in Heterogeneous Aquifers

    We divide the work into three blocks. In the first block, upscaling techniques based on simple averaging, the simple Laplacian method, the Laplacian-with-skin method, and upscaling with non-uniform gridding are reviewed and evaluated in a three-dimensional hydraulic conductivity upscaling exercise. The reference field is a fine-scale conditional realization of hydraulic conductivity from the macrodispersion experiment (MADE) carried out at the Columbus Air Force Base in Mississippi. The objective of this section is twofold: first, to compare the effectiveness of different upscaling techniques in producing models capable of reproducing the observed movement of the tritium plume; and second, to demonstrate and analyse the conditions under which upscaling can provide a coarse-scale model in which flow and transport can be predicted with the advection-dispersion equation under apparently non-Fickian conditions. In other cases, the discrepancy in transport predictions between the two scales persists, and the advection-dispersion equation is not sufficient to explain transport at the coarse scale. For this reason, a methodology has been developed for upscaling transport in highly heterogeneous three-dimensional formations. The proposed method is based on upscaling the hydraulic conductivity with the interblock-centred Laplacian-with-skin method, followed by an upscaling of the transport parameters that requires including a multirate mass-transfer process to compensate for the loss of heterogeneity inherent to the change of scale. The proposed method not only reproduces flow and transport at the coarse scale, but also reproduces the uncertainty associated with the predictions, as can be seen by analysing the variability of the ensemble of breakthrough curves. Li, L. (2011). Upscaling and Inverse Modeling of Groundwater Flow and Mass Transport in Heterogeneous Aquifers [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/12268
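
    As an illustration of the simplest family of techniques mentioned above (simple averaging), the sketch below coarsens a fine-grid conductivity field by taking the geometric mean over each coarse block; the Laplacian and Laplacian-with-skin methods evaluated in the thesis instead solve local flow problems. The function name, block size, and toy field are illustrative.

        import numpy as np

        def upscale_geometric_mean(k_fine, factor):
            """Coarsen a 3D hydraulic-conductivity field by taking the geometric mean
            over non-overlapping blocks of size factor^3 (fine dimensions must be
            divisible by factor)."""
            nz, ny, nx = k_fine.shape
            blocks = k_fine.reshape(nz // factor, factor,
                                    ny // factor, factor,
                                    nx // factor, factor)
            return np.exp(np.log(blocks).mean(axis=(1, 3, 5)))

        # Toy lognormal conductivity field, upscaled 4x in each direction.
        rng = np.random.default_rng(2)
        k = np.exp(rng.normal(size=(32, 32, 32)))
        print(upscale_geometric_mean(k, 4).shape)  # (8, 8, 8)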

    Efficient Support for Application-Specific Video Adaptation

    As video applications become more diverse, video must be adapted in different ways to meet the requirements of different applications when resources are insufficient. In this dissertation, we address two sorts of requirements that cannot be met by existing video adaptation technologies: (i) accommodating large variations in resolution and (ii) collecting video effectively in a multi-hop sensor network. In addition, we address the system requirements for implementing video adaptation in a sensor network. Accommodating large variation in resolution is required by the existence of display devices with widely disparate screen sizes. Existing resolution adaptation technologies usually aim at adapting video between two resolutions. We examine the limitations that prevent these technologies from supporting a large number of resolutions efficiently. We propose several hybrid schemes and study their performance; among them, Bonneville, a framework that combines multiple encodings with limited scalability, can make good trade-offs when organizing compressed video to support a wide range of resolutions. Video collection in a sensor network requires adapting video in a multi-hop store-and-forward network with multiple video sources. This task cannot be supported effectively by existing adaptation technologies, which are designed for real-time streaming from a single source over IP-style end-to-end connections. We propose to adapt video in the network instead of at the network edge, and we propose a framework, Steens, to compose adaptation mechanisms on multiple nodes. We design two signaling protocols in Steens to coordinate multiple nodes. Our simulations show that in-network adaptation can use buffer space on intermediate nodes for adaptation and achieve better video quality than conventional network-edge adaptation. They also show that explicit collaboration among multiple nodes through signaling can improve video quality, waste less bandwidth, and maintain bandwidth-sharing fairness. Implementing video adaptation in a sensor network requires system support for programmability, retaskability, and high performance. We propose Cascades, a component-based framework, to provide this support. A prototype implementation of Steens in this framework shows a performance overhead of less than 5% compared to a hard-coded C implementation.
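
    A minimal sketch of the trade-off a Bonneville-style organization implies: store a few base encodings at different resolutions, each decodable over a limited range, and serve a request from the cheapest encoding whose range covers the target resolution. The data layout and selection rule below are illustrative guesses, not the framework's actual design.

        from dataclasses import dataclass

        @dataclass
        class Encoding:
            base_height: int    # native coded resolution (pixels)
            min_height: int     # smallest resolution this encoding can be decoded at
            bitrate_kbps: int   # cost of serving this encoding

        def pick_encoding(encodings, target_height):
            """Choose the cheapest stored encoding whose limited scalability range
            covers the requested resolution; otherwise fall back to the smallest
            encoding at least as large as the target and let the client downscale."""
            covering = [e for e in encodings if e.min_height <= target_height <= e.base_height]
            if covering:
                return min(covering, key=lambda e: e.bitrate_kbps)
            larger = [e for e in encodings if e.base_height >= target_height]
            return min(larger, key=lambda e: e.base_height) if larger else None

        catalog = [Encoding(240, 120, 300), Encoding(720, 360, 1500), Encoding(2160, 1080, 8000)]
        print(pick_encoding(catalog, 480))  # -> the 720p encoding (covers 360-720)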

    Incorporation of fault rock properties into production simulation models

    This thesis has two aims: first, to investigate the importance of incorporating the multiphase flow properties of faults into production simulation models; second, to investigate methodologies for incorporating these properties into such models. Tests using simple simulation models suggest that in some situations it is not particularly important to take into account the multiphase flow properties of faults, whereas in other situations these properties prove very important. The difference depends on the drive mechanism, well position, and the capillary pressure distribution along the fault, as well as on the parameters that need to be modelled (e.g., bottom-hole pressures, hydrocarbon production rates, water cuts). The results show that it is possible for hydrocarbons to flow across a sealing fault (i.e., one at 100% water saturation) as a result of its threshold pressure being overcome. The relative permeability of fault rocks may be one of the largest unknowns in simulating fluid flow in structurally complex petroleum reservoirs. Microstructural and petrophysical measurements were conducted on faults from core within the Pierce Field, North Sea. The results are used to calculate the transmissibility multipliers (TMs) required to take into account the effect of faults on fluid flow within the Pierce production simulation model. The fault multiphase flow behaviour is approximated by varying the TMs as a function of height above the free water level. This methodology results in an improved history match of production data, and the improved model is then used to plan the optimal time to conduct a follow-up 3D seismic survey to identify unswept compartments. An alternative model is also proposed to overcome some of the limitations that the previous TM treatments may have at certain stages of a reservoir's life. The similar behaviour of the different fault models proposed for the Pierce Field indicates that the current faulting system in this model is not largely responsible for the history mismatch in water production. Multiphase flow properties of faults can also be incorporated into production simulation models using dynamic pseudofunctions. In this thesis, different dynamic pseudofunctions are generated by running high-resolution fluid flow models at the scale of the reservoir simulation grid block, using flow rates similar to those likely to be encountered within petroleum reservoirs. In these high-resolution models, both the fault and the reservoir rock are given their own capillary pressure and relative permeability curves. The results of the simulations are used to create pseudocurves that are then incorporated into the upscaled production simulation model to account for the presence of both the fault and the undeformed reservoir. Different flow regimes are used to compare the performance of each pseudoisation method with the conventional, single-phase TM fault representations. The results presented in this thesis show that it is more important to incorporate fault multiphase properties in capillary-dominated flow regimes than in those that are viscosity dominated. It should, however, be emphasised that the Brooks-Corey relations used to estimate the relative permeability and capillary pressure curves of the fault rock in this study have a significant influence on some of these conclusions.
In other words, these conclusions may not hold if the relative permeability curves of fault rocks differ substantially from those calculated using the aforementioned relationships. Finally, an integrated workflow is outlined showing how dynamic pseudofunctions can be generated in fault juxtaposition models by taking advantage of the dynamic flux preservation feature in the Eclipse 100™ simulator.
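
    Single-phase fault transmissibility multipliers of the kind discussed above are commonly computed by harmonically averaging fault-rock and host-rock permeability across the grid-block spacing. A minimal sketch of that standard form, with purely illustrative values (not measurements from the Pierce Field):

        def transmissibility_multiplier(k_matrix, k_fault, fault_thickness, block_length):
            """Single-phase fault transmissibility multiplier from a harmonic average
            of fault-rock and host-rock permeability over the grid-block length:
            TM = 1 / (1 + (t_f / L) * (k_m / k_f - 1)). Permeabilities in the same
            units; thickness and block length in the same units."""
            tf_over_l = fault_thickness / block_length
            return 1.0 / (1.0 + tf_over_l * (k_matrix / k_fault - 1.0))

        # Illustrative numbers: 100 mD host rock, 0.01 mD fault rock,
        # 0.1 m thick fault in a 100 m grid block.
        print(transmissibility_multiplier(100.0, 0.01, 0.1, 100.0))  # ~0.091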

    Quantifying U-Net Uncertainty in Multi-Parametric MRI-based Glioma Segmentation by Spherical Image Projection

    The projection of planar MRI data onto a spherical surface is equivalent to a nonlinear image transformation that retains global anatomical information. By incorporating this transformation into our proposed spherical projection-based U-Net (SPU-Net) segmentation model, multiple independent segmentation predictions can be obtained from a single MRI. The final segmentation is the average of all available results, and the variation can be visualized as a pixel-wise uncertainty map. An uncertainty score was introduced to evaluate and compare the performance of uncertainty measurements. The proposed SPU-Net model was developed and evaluated on 369 glioma patients with MP-MRI scans (T1, T1-Ce, T2, and FLAIR). Three SPU-Net models were trained to segment the enhancing tumor (ET), tumor core (TC), and whole tumor (WT), respectively. The SPU-Net model was compared with (1) the classic U-Net model with test-time augmentation (TTA) and (2) linear scaling-based U-Net (LSU-Net) segmentation models in terms of both segmentation accuracy (Dice coefficient, sensitivity, specificity, and accuracy) and segmentation uncertainty (uncertainty map and uncertainty score). The SPU-Net model achieved low uncertainty for correct segmentation predictions (e.g., tumor interior or healthy tissue interior) and high uncertainty for incorrect results (e.g., tumor boundaries), which could allow the identification of missed tumor targets or segmentation errors in U-Net. Quantitatively, the SPU-Net model achieved the highest uncertainty scores for the three segmentation targets (ET/TC/WT): 0.826/0.848/0.936, compared to 0.784/0.643/0.872 for the U-Net with TTA and 0.743/0.702/0.876 for the LSU-Net (scaling factor = 2). The SPU-Net also achieved statistically significantly higher Dice coefficients, underscoring the improved segmentation accuracy.
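
    A minimal sketch of the aggregation step described above: several independent segmentation probability maps (stand-ins here for predictions obtained from differently parameterized spherical projections) are averaged into the final segmentation, and their per-pixel spread is kept as the uncertainty map. The array names and the use of standard deviation as the spread measure are illustrative choices, not the paper's exact definition.

        import numpy as np

        def aggregate_predictions(prob_maps):
            """prob_maps: array of shape (n_predictions, H, W) with per-pixel tumor
            probabilities. Returns (binary segmentation, per-pixel uncertainty map)."""
            prob_maps = np.asarray(prob_maps)
            mean_prob = prob_maps.mean(axis=0)    # average of all predictions
            segmentation = mean_prob > 0.5        # final consensus mask
            uncertainty = prob_maps.std(axis=0)   # high where predictions disagree
            return segmentation, uncertainty

        rng = np.random.default_rng(3)
        preds = rng.random((10, 128, 128))        # toy stand-in predictions
        seg, unc = aggregate_predictions(preds)
        print(seg.shape, unc.max())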