25 research outputs found

    Specularity Removal from Imaging Spectroscopy Data via Entropy Minimisation

    In this paper, we present a method to remove specularities from imaging spectroscopy data. We make use of the dichromatic model to cast the problem in a linear regression setting, employing the average radiance of each pixel to map the spectra onto a two-dimensional space. This permits an entropy-minimisation approach to recover the slope of the line described by the linear regressor. We show how this slope can be used to recover the specular coefficient in the dichromatic model and provide experiments on real-world imaging spectroscopy data. We also compare against an alternative method and present a quantitative analysis showing that our method is robust to changes in the degree of specularity of the image and in the location of the light source in the scene.
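
    As a rough illustration of the entropy-minimisation step described above, the sketch below sweeps candidate line slopes over a 2D point cloud and keeps the slope whose 1D projection has minimum Shannon entropy. The particular 2D mapping (per-pixel average radiance paired with a single band's radiance), the angle grid and the histogram binning are assumptions made for illustration, not the authors' exact formulation.

```python
# Illustrative sketch only: entropy-minimisation search for a line slope in a
# 2D mapping of imaging spectroscopy data. Mapping and binning are assumptions.
import numpy as np

def min_entropy_slope(points, angles=np.linspace(0.01, np.pi - 0.01, 180), bins=64):
    """Return the slope angle whose 1D projection of `points` has minimum entropy.

    points : (N, 2) array, e.g. [per-pixel average radiance, band radiance].
    """
    best_angle, best_entropy = None, np.inf
    for theta in angles:
        # Project onto the direction orthogonal to a line with angle `theta`.
        normal = np.array([-np.sin(theta), np.cos(theta)])
        proj = points @ normal
        hist, _ = np.histogram(proj, bins=bins, density=True)
        p = hist[hist > 0]
        p = p / p.sum()
        entropy = -np.sum(p * np.log2(p))  # Shannon entropy of the projection
        if entropy < best_entropy:
            best_angle, best_entropy = theta, entropy
    return best_angle

# Hypothetical usage, assuming `cube` is an (H, W, B) radiance cube and b0 a band index:
# avg = cube.mean(axis=2)
# pts = np.stack([avg.ravel(), cube[..., b0].ravel()], axis=1)
# slope = np.tan(min_entropy_slope(pts))
```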

    Preliminary estimation of fat depth in the lamb short loin using a hyperspectral camera

    © 2018 CSIRO. The objectives of the present study were to describe and validate an approach for classifying surface tissue and estimating fat depth in lamb short loins. Pixels were classified as fat or non-fat and then used to estimate the fat depth at each pixel of the hyperspectral image. Estimated reflectance, rather than image intensity or radiance, was used as the input feature for classification. The relationship between reflectance and the fat/non-fat classification label was learnt using support vector machines, and Gaussian processes were used to learn a regression of fat depth as a function of reflectance. Data to train and test the machine learning algorithms were collected by scanning 16 short loins. The near-infrared hyperspectral camera captured lines of data from the side of the short loin (i.e. with the subcutaneous fat facing the camera). An advanced single-lens reflex camera photographed the same cuts from above, so that a ground truth of fat depth could be semi-automatically extracted and associated with the hyperspectral data. Subsets of the data were used to train and test the machine learning models. Classification of pixels as either fat or non-fat achieved 96% accuracy. Fat depths of up to 12 mm were estimated, with an R² of 0.59, a mean absolute bias of 1.72 mm and a root mean square error of 2.34 mm. The techniques developed and validated in the present study will be used to estimate fat coverage to predict total fat and, subsequently, lean meat yield in the carcass.
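
    A minimal sketch of a two-stage pipeline of the kind described above is given below: a support vector machine separates fat from non-fat pixels on estimated reflectance, and a Gaussian process regresses fat depth on the fat-labelled pixels. The kernel choices, the train/test split and all variable names are illustrative assumptions rather than the study's actual configuration.

```python
# Illustrative sketch only: SVM fat/non-fat classification followed by Gaussian
# process regression of fat depth, both on estimated reflectance spectra.
import numpy as np
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import train_test_split

def fit_pipeline(reflectance, is_fat, depth_mm):
    """reflectance: (N, B) spectra; is_fat: (N,) 0/1 labels;
    depth_mm: (N,) fat depth, NaN where no ground truth is available."""
    X_tr, X_te, y_tr, y_te, d_tr, d_te = train_test_split(
        reflectance, is_fat, depth_mm, test_size=0.3, random_state=0)

    # Stage 1: classify each pixel as fat or non-fat.
    clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
    print("fat/non-fat accuracy:", clf.score(X_te, y_te))

    # Stage 2: regress depth only on fat pixels with a valid depth measurement.
    mask = (y_tr == 1) & ~np.isnan(d_tr)
    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gpr.fit(X_tr[mask], d_tr[mask])
    return clf, gpr
```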

    Visible hyperspectral imaging for predicting intra-muscular fat content from sheep carcasses

    Intramuscular fat (IMF) content plays a key role in the quality attributes of meat, such as sensory properties and health considerations. The tenderness, flavour and juiciness of meat are examples of sensory attributes influenced by IMF content. Traditionally, IMF content in meat has been determined using destructive, time-consuming methods that are often unsuitable for industry applications. However, with recent advances in technology, there has been interest in exploring ways to ascertain meat quality without damaging the product. Hyperspectral imaging is an emerging technology that combines spectroscopy with computer imaging analysis to obtain both spectral and spatial information about objects of interest. It was initially developed for remote sensing, but has recently emerged as a powerful tool for non-destructive quality analysis in the food industry and has yielded highly accurate predictions of meat quality attributes such as IMF content. In this thesis, we use a data set of 101 hyperspectral images of sheep carcasses to investigate the ability of multivariate statistical methods to accurately predict IMF content.
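
    The abstract does not name the multivariate statistical methods used, so the sketch below stands in with partial least squares (PLS) regression, a common choice for relating spectra to a chemical trait such as IMF. The per-image mean-spectrum features and the cross-validation setup are assumptions for illustration only.

```python
# Illustrative sketch only: PLS regression of IMF content on mean carcass spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

def predict_imf(cubes, imf_percent, n_components=10):
    """cubes: list of (H, W, B) hyperspectral images; imf_percent: (N,) IMF labels."""
    # One mean spectrum per carcass image as a simple, assumed feature representation.
    X = np.stack([cube.reshape(-1, cube.shape[-1]).mean(axis=0) for cube in cubes])
    pls = PLSRegression(n_components=n_components)
    # Cross-validated R^2 as a quick indicator of predictive ability.
    scores = cross_val_score(pls, X, imf_percent, cv=5, scoring="r2")
    pls.fit(X, imf_percent)
    return pls, scores.mean()
```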

    Fusion of hyperspectral, multispectral, color and 3D point cloud information for the semantic interpretation of urban environments

    In this paper, we address the semantic interpretation of urban environments on the basis of multi-modal data in the form of RGB color imagery, hyperspectral data and LiDAR data acquired from aerial sensor platforms. We extract radiometric features based on the given RGB color imagery and hyperspectral data, and we also consider transformations to potentially better-suited data representations. For the RGB color imagery, these are achieved via color invariants, normalization procedures or specific assumptions about the scene. For the hyperspectral data, we involve techniques for dimensionality reduction and feature selection, as well as a transformation to multispectral Sentinel-2-like data of the same spatial resolution. Furthermore, we extract geometric features describing the local 3D structure from the given LiDAR data. The defined feature sets are provided separately and in different combinations as input to a Random Forest classifier. To assess the potential of the different feature sets and their combinations, we present results achieved on the MUUFL Gulfport Hyperspectral and LiDAR Airborne Data Set.
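
    A minimal sketch of this kind of feature-level fusion and classification follows: pre-computed radiometric and geometric feature blocks are concatenated and passed to a Random Forest. The feature arrays and classifier settings are placeholders for the paper's actual descriptors (color invariants, reduced hyperspectral bands, Sentinel-2-like channels, local 3D shape features).

```python
# Illustrative sketch only: concatenate per-sample feature blocks from several
# modalities and train a Random Forest on the fused representation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def classify_fused(rgb_feats, hsi_feats, lidar_feats, labels):
    """Each *_feats array has shape (N, d_i); labels has shape (N,)."""
    X = np.hstack([rgb_feats, hsi_feats, lidar_feats])  # feature-level fusion
    clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
    clf.fit(X, labels)
    return clf

# Different modality combinations can be assessed by changing which blocks are
# passed to np.hstack, mirroring the paper's comparison of separate and
# combined feature sets.
```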

    Beyond eleven color names for image understanding

    Other grants: CERCA Programme / Generalitat de Catalunya. Color description is one of the fundamental problems of image understanding. One of the most popular ways to represent colors is by means of color names. Most existing work on color names focuses on only the eleven basic color terms of the English language. This may limit the discriminative power of these representations, and representations based on more color names are expected to perform better. However, there exists no clear strategy for choosing additional color names. We collect a dataset of 28 additional color names. To ensure that the resulting color representation has high discriminative power, we propose a method to order the additional color names according to their complementarity with the basic color names. This allows us to compute color name representations of arbitrary length with high discriminative power. In our experiments we show that these new color name descriptors outperform the existing color name descriptor on the tasks of visual tracking, person re-identification and image classification.
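
    To make the notion of a color name representation concrete, the sketch below builds a simple color-name histogram descriptor by hard-assigning each pixel to its nearest color-name prototype. Both the hard assignment and the prototype colors are simplifying assumptions; published color-name models, including the one in this paper, typically learn soft per-name probabilities from data.

```python
# Illustrative sketch only: a color-name histogram descriptor with hard
# nearest-prototype assignment; prototypes stand in for learned color names.
import numpy as np

def color_name_descriptor(pixels_rgb, prototypes_rgb):
    """pixels_rgb: (N, 3) floats in [0, 1]; prototypes_rgb: (K, 3), one row per color name."""
    # Distance of every pixel to every prototype, then hard assignment.
    d = np.linalg.norm(pixels_rgb[:, None, :] - prototypes_rgb[None, :, :], axis=2)
    assign = d.argmin(axis=1)
    hist = np.bincount(assign, minlength=len(prototypes_rgb)).astype(float)
    return hist / hist.sum()

# Extending the descriptor from the 11 basic terms to 11 + k names amounts to
# appending k more prototype rows; the paper's contribution is choosing which
# additional names are most complementary to the basic ones.
```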

    Non-parametric Methods for Automatic Exposure Control, Radiometric Calibration and Dynamic Range Compression

    Imaging systems are essential to a wide range of modern applications. With the continuous advancement of imaging systems, there is an ongoing need to adapt and improve the imaging pipeline running inside them. In this thesis, methods are presented to improve three important phases of the imaging pipeline of digital cameras: (i) automatic exposure adjustment, (ii) radiometric calibration and (iii) high dynamic range compression. These contributions touch the initial, intermediate and final stages of the imaging pipeline.

    For exposure control, we propose two methods. The first makes use of CCD-based equations to formulate the exposure control problem. To estimate the exposure time, an initial image is acquired for each wavelength channel and contrast adjustment techniques are applied to it, which allows a reference cumulative distribution function of image brightness to be recovered for each channel. The second method for automatic exposure control is an iterative method applicable to a broad range of imaging systems. It uses spectral sensitivity functions, such as the photopic response function, to generate a spectral power image of the captured scene. A target image is then generated from the spectral power image by applying histogram equalization, and the exposure time is calculated iteratively by minimizing the squared difference between the target and the current spectral power image. We further analyze the method by performing a stability and controllability analysis using a state-space representation from control theory. The applicability of the proposed method for exposure time calculation is demonstrated on real-world scenes using cameras with varying architectures.

    Radiometric calibration is the estimation of the non-linear mapping from the input radiance map to the output brightness values. This mapping is represented by the camera response function, with which the radiance map of the scene is estimated. Our radiometric calibration method employs an L1 cost function and takes advantage of the Weiszfeld optimization scheme. The proposed calibration works with multiple input images of the scene at varying exposures, and can also perform calibration from a single input image under a few constraints. The proposed method outperforms, quantitatively and qualitatively, various alternative methods found in the radiometric calibration literature.

    Finally, to realistically represent the estimated radiance maps on low dynamic range (LDR) display devices, we propose a method for dynamic range compression. Radiance maps generally have a higher dynamic range (HDR) than widely used display devices, so dynamic range compression is required before HDR images can be displayed. Our proposed method generates a few LDR images from the HDR radiance map by clipping its values at different exposures. Using the contrast information of each generated LDR image, the method uses an energy minimization approach to estimate a probability map for each LDR image. These probability maps are then used as a label set to form the final compressed dynamic range image for the display device. The results of our method are compared qualitatively and quantitatively with those produced by widely cited and professionally used methods.
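
    As a rough illustration of the second, iterative exposure-control method described above, the sketch below repeatedly captures a spectral power image, builds a histogram-equalised target from it and rescales the exposure time until the two agree. The multiplicative update rule, the hypothetical capture callback and the stopping criterion are assumptions made for illustration; the thesis analyses the actual controller in a state-space framework.

```python
# Illustrative sketch only: iterative exposure-time adjustment toward a
# histogram-equalised target of the spectral power image.
import numpy as np

def equalise(img):
    """Histogram-equalised copy of a single-channel image with values in [0, 1]."""
    hist, bin_edges = np.histogram(img, bins=256, range=(0.0, 1.0))
    cdf = hist.cumsum() / hist.sum()
    return np.interp(img.ravel(), bin_edges[:-1], cdf).reshape(img.shape)

def adjust_exposure(capture, t0=0.01, iters=20, tol=1e-4):
    """capture(t) returns a spectral power image for exposure time t (hypothetical callback)."""
    t = t0
    for _ in range(iters):
        current = capture(t)
        target = equalise(current)
        # Scale exposure by the ratio of target to current mean power; a simple
        # proxy for one descent step on the squared-difference objective.
        ratio = target.mean() / max(current.mean(), 1e-8)
        if abs(ratio - 1.0) < tol:
            break
        t *= ratio
    return t
```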