
    A Low-cost Depth Imaging Mobile Platform for Canola Phenotyping

    To meet the high demand for supporting and accelerating progress in the breeding of novel traits, plant scientists and breeders have to measure a large number of plants and their characteristics accurately. A variety of imaging methodologies are being deployed to acquire data for quantitative studies of complex traits. When applied to a large number of plants such as canola, however, building a complete three-dimensional (3D) model is time-consuming and expensive for high-throughput phenotyping and produces an enormous amount of data; in some contexts, a full reconstruction of entire plants may not be necessary. In recent years, many 3D plant phenotyping techniques requiring high cost and large-scale facilities have been introduced to extract plant phenotypic traits, but such systems may be out of reach for limited research budgets and difficult to deploy across environments. This thesis proposes a low-cost, depth-based, high-throughput phenotyping mobile platform to measure canola plant traits across environments. Methods include detecting and counting canola branches and seedpods, monitoring canola growth stages, and fusing color images to improve image resolution and achieve higher accuracy. Canola plant traits were examined in both controlled-environment and field scenarios, and the methodologies were supported by different imaging techniques. Results show that the phenotyping mobile platform can be used to investigate canola plant traits across environments with high accuracy, and that the branch- and seedpod-counting algorithms enable crop researchers to analyze the relationship between canola genotypes and phenotypes and to estimate crop yields. In addition to the counting algorithms, the fusion techniques give plant breeders easier access to plant characteristics by improving the definition and resolution of color images. These findings add value to automated, low-cost, depth-based, high-throughput phenotyping of canola plants. They also contribute a novel multi-focus image fusion method, based on visual saliency maps and a gradient domain fast guided filter, that performs competitively with and in some cases outperforms state-of-the-art methods. The proposed platform and counting algorithms can be applied not only to canola but also to other closely related species, and the proposed fusion technique can be extended to other fields such as remote sensing and medical image fusion.
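As a rough illustration of the multi-focus fusion idea mentioned above, the sketch below fuses two registered photographs of the same plant by picking, per pixel, the source that is in better focus. It uses Laplacian energy as a stand-in for the thesis's visual saliency measure and Gaussian smoothing as a stand-in for the gradient domain fast guided filter; the function and parameter names are illustrative assumptions, not the thesis implementation.

import cv2
import numpy as np

def fuse_multifocus(img_a, img_b, ksize=7, sigma=5.0):
    """Toy multi-focus fusion: per pixel, favour the sharper of two sources."""
    def focus_measure(img):
        # Absolute Laplacian response of the grayscale image, lightly smoothed;
        # a simple proxy for the saliency maps used in the thesis.
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)
        lap = cv2.Laplacian(gray, cv2.CV_32F)
        return cv2.GaussianBlur(np.abs(lap), (ksize, ksize), 0)

    fa, fb = focus_measure(img_a), focus_measure(img_b)

    # Initial binary decision map: 1 where img_a is in better focus.
    decision = (fa >= fb).astype(np.float32)

    # The thesis refines this map with a gradient domain fast guided filter;
    # plain Gaussian smoothing keeps the sketch dependency-free.
    weight = cv2.GaussianBlur(decision, (0, 0), sigma)[..., None]

    fused = weight * img_a.astype(np.float32) + (1.0 - weight) * img_b.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)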

    Detail Enhanced Multi-Exposure Image Fusion Based On Edge Preserving Filters

    Recent computational photography techniques play a significant role in overcoming the limitations of standard digital cameras when handling the wide dynamic range of real-world scenes that contain both brightly and poorly illuminated areas. In many such techniques [1,2,3], it is often desirable to fuse details from images captured at different exposure settings while avoiding visual artifacts. One such technique is High Dynamic Range (HDR) imaging, which recovers radiance maps from photographs taken with conventional imaging equipment. Composing an HDR image requires knowledge of the exposure times and the Camera Response Function (CRF), which is needed to linearize the image data before combining Low Dynamic Range (LDR) exposures into an HDR image. A long-standing challenge in HDR imaging is the limited Dynamic Range (DR) of conventional display devices and printing technology, which are unable to reproduce the full DR. Although the DR can be reduced by tone-mapping, this comes at an unavoidable trade-off of increased computational cost. It is therefore desirable to maximize the information content of the synthesized scene from a set of multi-exposure images without computing an HDR radiance map or tone-mapping.

    This research develops a novel detail-enhanced multi-exposure image fusion approach based on texture features, which exploits the edge-preserving and intra-region smoothing properties of nonlinear diffusion filters based on Partial Differential Equations (PDEs). Given a captured multi-exposure image series, we first decompose the images into Base Layers (BLs) and Detail Layers (DLs), separating large-scale structure from fine details. The magnitude of the image intensity gradient is used to encourage smoothness in homogeneous regions in preference to inhomogeneous regions. Texture features of the BL are then used to generate a decision mask (i.e., local range) that guides the fusion of the BLs in a multi-resolution fashion. Finally, a well-exposed fused image is obtained by combining the fused BL and the DLs at each scale across all the input exposures. The combination of edge-preserving filters with a Laplacian pyramid is shown to enhance texture detail in the fused image.

    Furthermore, a non-linear adaptive filter, which has a better response near strong edges, is employed for the BL and DL decomposition. The texture details are then added to the fused BL to reconstruct a detail-enhanced LDR version of the image. This increases the robustness of the texture details while avoiding the gradient-reversal artifacts near strong edges that may appear in the fused image after DL enhancement.

    Finally, we propose a novel exposure fusion technique in which a Weighted Least Squares (WLS) optimization framework refines the weight maps of the BLs and DLs, leading to a simple weighted-average fusion framework. Computationally simple texture features (i.e., the DL) and a color saturation measure are used to quickly generate the weight maps that control the contribution of each image in the input multi-exposure set. Instead of intermediate HDR reconstruction and tone-mapping steps, a well-exposed fused image is generated directly for display on conventional devices. Simulation results are compared with a number of existing single-resolution and multi-resolution techniques to show the benefits of the proposed scheme for a variety of cases. Moreover, the approaches proposed in this thesis are effective for blending flash/no-flash image pairs and multi-focus images, that is, input images photographed with and without flash and images focused on different targets, respectively. A further advantage of the present technique is that it is well suited for detail enhancement in the fused image.
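A minimal sketch of the weighted-average fusion framework described above is given below. It performs the base/detail split with a bilateral filter rather than a PDE-based diffusion filter, builds weight maps from local contrast and colour saturation, and omits the WLS refinement and pyramid blending of the thesis; all names and parameter values are illustrative assumptions.

import cv2
import numpy as np

def fuse_exposures(images):
    """Toy multi-exposure fusion via base/detail layers and per-pixel weights.

    images: list of registered uint8 BGR exposures of the same scene.
    """
    imgs = [im.astype(np.float32) / 255.0 for im in images]
    bases, details, weights = [], [], []

    for im in imgs:
        # Edge-preserving base layer; the detail layer is the residual.
        base = cv2.bilateralFilter(im, d=9, sigmaColor=0.1, sigmaSpace=5)
        detail = im - base
        bases.append(base)
        details.append(detail)

        # Weight map: local contrast of the base layer times colour saturation.
        gray = cv2.cvtColor(base, cv2.COLOR_BGR2GRAY)
        contrast = np.abs(cv2.Laplacian(gray, cv2.CV_32F))
        saturation = base.std(axis=2)
        weights.append(contrast * saturation + 1e-6)

    w = np.stack(weights)                      # (N, H, W)
    w /= w.sum(axis=0, keepdims=True)          # normalise across exposures
    w = w[..., None]                           # broadcast over colour channels

    fused_base = sum(wi * bi for wi, bi in zip(w, bases))
    fused_detail = sum(wi * di for wi, di in zip(w, details))
    fused = np.clip(fused_base + fused_detail, 0.0, 1.0)
    return (fused * 255).astype(np.uint8)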

    Multi-Modal Perception for Selective Rendering

    A major challenge in generating high-fidelity virtual environments (VEs) is providing realism at interactive rates. High-fidelity simulation of light and sound is still unachievable in real time, as such physical accuracy is very computationally demanding. Only recently has visual perception been used in high-fidelity rendering to improve performance through a series of novel exploitations, such as rendering parts of the scene that the viewer is not currently attending to at a much lower quality, without the difference being perceived. This paper investigates the effect that spatialised directional sound has on a user's visual attention to rendered images. These perceptual effects are utilised in selective rendering pipelines via multi-modal maps. The multi-modal maps are tested through psychophysical experiments to examine their applicability to selective rendering algorithms with a series of fixed-cost rendering functions, and are found to perform significantly better than image saliency maps naively applied to multi-modal virtual environments.
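A minimal sketch of how a multi-modal map could be assembled is shown below: a visual saliency map is modulated by a Gaussian "acoustic saliency" term centred on the on-screen direction of the spatialised sound. The combination rule, the Gaussian model and the parameters are assumptions for illustration, not the experimental setup of the paper.

import numpy as np

def multimodal_map(visual_saliency, sound_xy, sigma=80.0, audio_gain=1.0):
    """Combine a visual saliency map with a directional-sound prior.

    visual_saliency: 2-D float array in [0, 1].
    sound_xy: (x, y) pixel position the spatialised sound appears to come from.
    """
    h, w = visual_saliency.shape
    ys, xs = np.mgrid[0:h, 0:w]

    # Gaussian "acoustic saliency" centred on the apparent sound direction.
    audio = np.exp(-((xs - sound_xy[0]) ** 2 + (ys - sound_xy[1]) ** 2) / (2 * sigma ** 2))

    # Simple combination; a selective renderer would then assign higher
    # rendering quality where this map is large.
    combined = visual_saliency * (1.0 + audio_gain * audio)
    return combined / combined.max()

# Example: bucket the combined map into per-pixel render quality levels.
# quality = np.digitize(multimodal_map(sal, (320, 240)), bins=[0.25, 0.5, 0.75])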

    Visual saliency guided high dynamic range image compression

    Recent years have seen the emergence of visual saliency-based image and video compression for low dynamic range (LDR) visual content. High dynamic range (HDR) imaging has yet to follow such an approach, as state-of-the-art visual saliency detection models are mainly concerned with LDR content. Although a few HDR saliency detection models have been proposed in recent years, they lack comprehensive validation. Current HDR image compression schemes do not differentiate between salient and non-salient regions, even though encoding non-salient regions at full fidelity has been shown to be redundant with respect to the Human Visual System. In this paper, we propose a novel visual saliency guided layered compression scheme for HDR images. The proposed saliency detection model is robust and correlates highly with ground-truth saliency maps obtained from an eye tracker. The results show bit-rate reductions of up to 50% while retaining the same high visual quality in the salient regions in terms of the HDR Visual Difference Predictor (HDR-VDP) and the visual saliency-induced index for perceptual image quality assessment (VSI).
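As a rough analogy to the layered, saliency-guided idea, the sketch below encodes a tone-mapped frame at two JPEG quality levels and keeps the high-quality pixels only inside the salient region. The actual scheme operates on HDR layers and realises its bit-rate savings inside the codec itself, so this is only an illustration; all names and thresholds are assumptions.

import cv2
import numpy as np

def saliency_guided_encode(image, saliency, q_salient=90, q_background=30, thresh=0.5):
    """Toy saliency-guided compression: spend more bits on salient regions.

    image:    uint8 BGR image (e.g., a tone-mapped HDR frame).
    saliency: float map in [0, 1] with the same height/width as image.
    """
    def jpeg_roundtrip(img, quality):
        # Encode and immediately decode to see the quality/size trade-off.
        ok, buf = cv2.imencode(".jpg", img, [cv2.IMWRITE_JPEG_QUALITY, quality])
        return cv2.imdecode(buf, cv2.IMREAD_COLOR), len(buf)

    hi, hi_bytes = jpeg_roundtrip(image, q_salient)
    lo, lo_bytes = jpeg_roundtrip(image, q_background)

    # High-quality pixels inside the salient mask, low-quality elsewhere.
    mask = (saliency >= thresh)[..., None]
    recon = np.where(mask, hi, lo)
    return recon, hi_bytes, lo_bytes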