
    The reproduction angular error for evaluating the performance of illuminant estimation algorithms

    The angle between the RGBs of the measured and estimated illuminant colors - the recovery angular error - has been used to evaluate the performance of illuminant estimation algorithms. However, we noticed that this metric is not in line with how the illuminant estimates are used. Normally, the illuminant estimates are ‘divided out’ from the image to, hopefully, provide image colors that are not confounded by the color of the light. However, even when two estimates lead to the same reproduction, the recovery errors for the same scene can span a large range. In this work, the scale of the problem with the recovery error is quantified. Next, we propose a new metric for evaluating illuminant estimation algorithms, called the reproduction angular error, which is defined as the angle between the RGB of a white surface when the actual and estimated illuminations are ‘divided out’. Our new metric ties algorithm performance to how the illuminant estimates are used. For a given algorithm, adopting the new reproduction angular error leads to different optimal parameters. Further, the ranked list of best to worst algorithms changes when the reproduction angular error is used. The importance of using an appropriate performance metric is established.
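    Both error measures are easy to compute from the illuminant RGBs. Below is a minimal sketch, not the authors' code, assuming illuminants are given as RGB vectors and that the reproduction error is taken against the achromatic vector (1, 1, 1):

        import numpy as np

        def recovery_angular_error(e_true, e_est):
            """Angle (degrees) between the measured and estimated illuminant RGBs."""
            e_true, e_est = np.asarray(e_true, float), np.asarray(e_est, float)
            cos = e_true @ e_est / (np.linalg.norm(e_true) * np.linalg.norm(e_est))
            return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

        def reproduction_angular_error(e_true, e_est):
            """Angle (degrees) between white 'divided out' with the true vs. estimated
            illuminant, i.e. between e_true / e_est and the achromatic vector (1, 1, 1)."""
            r = np.asarray(e_true, float) / np.asarray(e_est, float)
            u = np.ones_like(r)
            cos = r @ u / (np.linalg.norm(r) * np.linalg.norm(u))
            return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

        # Hypothetical example: the two metrics score this estimate differently.
        e_true = np.array([0.9, 1.0, 0.6])   # assumed scene illuminant
        e_est = np.array([0.8, 1.0, 0.7])    # assumed algorithm output
        print(recovery_angular_error(e_true, e_est),
              reproduction_angular_error(e_true, e_est))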

    Effectiveness of specularity removal from hyperspectral images on the quality of spectral signatures of food products

    The specularity, or highlight, problem is widespread in hyperspectral images; it causes the measured reflectance to deviate from its true value and can hide major defects in food objects or introduce spurious false defects, causing inspection and detection processes to fail. In this study, a new non-iterative method based on the dichromatic reflection model and principal component analysis (PCA) was proposed to detect and remove specular highlight components from hyperspectral images acquired by various imaging modes and under different configurations for numerous agro-food products. To demonstrate the effectiveness of this approach, the details of the proposed method were described and the experimental results on various spectral images were presented. The results revealed that the method worked well on all hyperspectral and multispectral images examined in this study, effectively reduced the specularity, and significantly improved the quality of the extracted spectral data. Besides the spectral images from available databases, the robustness of this approach was further validated with real captured hyperspectral images of different food materials. Using qualitative and quantitative evaluation based on running time and peak signal-to-noise ratio (PSNR), the experimental results showed that the proposed method outperforms other specularity removal methods over the datasets of hyperspectral and multispectral images.
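    The abstract does not spell out the exact non-iterative procedure, so the sketch below only illustrates the underlying dichromatic-model idea: highlight pixels add a shared, illuminant-like spectral component that a PCA of those pixels can approximate and suppress. The function name, threshold, and processing choices are assumptions, not the paper's implementation.

        import numpy as np

        def suppress_specular(cube, highlight_quantile=0.99):
            """cube: (H, W, B) reflectance hypercube. Returns a corrected copy."""
            H, W, B = cube.shape
            X = cube.reshape(-1, B).astype(float)

            # Likely specular pixels: unusually high overall intensity.
            intensity = X.sum(axis=1)
            mask = intensity > np.quantile(intensity, highlight_quantile)
            if not mask.any():
                return cube.astype(float)

            # First principal component of the highlight pixels approximates the
            # shared specular (illuminant-like) spectral direction.
            Xh = X[mask] - X[mask].mean(axis=0)
            _, _, Vt = np.linalg.svd(Xh, full_matrices=False)
            spec_dir = Vt[0]

            # Subtract each highlight pixel's projection onto that direction.
            proj = (X[mask] @ spec_dir)[:, None] * spec_dir[None, :]
            X[mask] = np.clip(X[mask] - proj, 0.0, None)
            return X.reshape(H, W, B)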

    Multispectral photography for earth resources

    A guide for producing accurate multispectral results for earth resource applications is presented along with theoretical and analytical concepts of color and multispectral photography. Topics discussed include: capabilities and limitations of color and color infrared films; image color measurements; methods of relating ground phenomena to film density and color measurement; sensitometry; considerations in the selection of multispectral cameras and components; and mission planning.

    Expanding Dimensionality in Cinema Color: Impacting Observer Metamerism through Multiprimary Display

    Television and cinema displays are both trending toward greater range and saturation of reproduced colors, made possible by near-monochromatic RGB illumination technologies. Through current broadcast and digital cinema standards work, system designs employing laser light sources, narrow-band LEDs, quantum dots and others are being actively endorsed in promotion of Wide Color Gamut (WCG). Despite the artistic benefits brought to creative content producers, spectrally selective excitation of naturally different human color response functions exacerbates variability of observer experience. Such exaggerated variation in color sensing is explicitly counter to the exhaustive controls and calibrations employed in modern motion picture pipelines. Further, singular standard-observer summaries of human color vision, such as the CIE’s 1931 and 1964 color matching functions used extensively in motion picture color management, are deficient in recognizing expected human vision variability. Many researchers have confirmed the magnitude of observer metamerism in color matching for both uniform colors and imagery, but few have demonstrated explicit color management aimed at minimizing variability in observer perception. This research shows not only that observer metamerism influences can be quantitatively predicted and confirmed psychophysically, but that intentionally engineered multiprimary displays employing more than three primaries can offer increased color gamut with drastically improved consistency of experience. To this end, a seven-channel prototype display has been constructed based on observer metamerism models and color difference indices derived from the latest color vision demographic research. This display has further been proven, in forced-choice paired comparison tests, to deliver superior color matching to reference stimuli versus both contemporary standard RGB cinema projection and recently ratified standard laser projection, across a large population of color-normal observers.

    Computer vision system in real-time for color determination on flat surface food

    Artificial vision systems, also known as computer vision systems, are potent quality inspection tools that can be applied in pattern recognition for fruit and vegetable analysis. The aim of this research was to design, implement and calibrate a new computer vision system (CVS) for real-time color measurement on flat-surface food. For this purpose, a device (software and hardware) capable of performing this task was designed and implemented; it operates in two phases: a) image acquisition and b) image processing and analysis. Both the algorithm and the graphical user interface (GUI) were developed in Matlab. The CVS was calibrated against a conventional colorimeter (CIE L*a*b* model), and the errors of the color parameters were estimated as eL* = 5.001%, ea* = 2.287%, and eb* = 4.314%, which ensures adequate and efficient application to automated quality-control processes in the food industry sector.
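    As a rough illustration of the calibration check described above (not the authors' code), the sketch below converts an RGB patch captured by a vision system to CIE L*a*b* and reports per-channel percentage errors against colorimeter readings; the sample values and names are made up.

        import numpy as np
        from skimage import color

        def cvs_lab_from_rgb(rgb_patch):
            """Mean L*a*b* of an sRGB patch (values in [0, 1]) captured by the vision system."""
            return color.rgb2lab(rgb_patch).reshape(-1, 3).mean(axis=0)

        def percent_errors(lab_cvs, lab_colorimeter):
            """Percentage error per channel (eL*, ea*, eb*) relative to the colorimeter."""
            lab_cvs = np.asarray(lab_cvs, float)
            ref = np.asarray(lab_colorimeter, float)
            return 100.0 * np.abs(lab_cvs - ref) / np.abs(ref)

        patch = np.full((20, 20, 3), [0.62, 0.35, 0.21])   # hypothetical flat-surface sample
        reference = [45.2, 23.8, 30.1]                     # hypothetical colorimeter reading
        print(percent_errors(cvs_lab_from_rgb(patch), reference))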

    Multiplexed Illumination for Classifying Visually Similar Objects

    Distinguishing visually similar objects like forged/authentic bills and healthy/unhealthy plants is beyond the capabilities of even the most sophisticated classifiers. We propose the use of multiplexed illumination to extend the range of objects that can be successfully classified. We construct a compact RGB-IR light stage that images samples under different combinations of illuminant position and colour. We then develop a methodology for selecting illumination patterns and training a classifier using the resulting imagery. We use the light stage to model and synthetically relight training samples, and propose a greedy pattern selection scheme that exploits this ability to train in simulation. We then apply the trained patterns to carry out fast classification of new objects. We demonstrate the approach on visually similar artificial and real fruit samples, showing a marked improvement compared with fixed-illuminant approaches as well as a more conventional code selection scheme. This work allows fast classification of previously indistinguishable objects, with potential applications in forgery detection, quality control in agriculture and manufacturing, and skin lesion classification. Comment: Submitted to Computer Vision and Image Understanding (CVIU).
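    The greedy selection idea can be sketched as follows; this is only an illustrative outline under the assumption that each sample is captured once per basis light, so an arbitrary pattern can be simulated as a weighted sum of those basis images. The classifier, scoring, and names here are assumptions, not the authors' choices.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        def relight(basis_images, pattern):
            """basis_images: (n_samples, n_lights, n_features); pattern: (n_lights,) weights."""
            return np.tensordot(basis_images, pattern, axes=([1], [0]))

        def greedy_select(basis_images, labels, candidate_patterns, n_select=3):
            """Greedily add the pattern whose simulated images most improve CV accuracy."""
            chosen, remaining = [], list(range(len(candidate_patterns)))
            while len(chosen) < n_select and remaining:
                def score(idx):
                    feats = np.hstack([relight(basis_images, candidate_patterns[i])
                                       for i in chosen + [idx]])
                    clf = LogisticRegression(max_iter=1000)
                    return cross_val_score(clf, feats, labels, cv=3).mean()
                best = max(remaining, key=score)
                chosen.append(best)
                remaining.remove(best)
            return [candidate_patterns[i] for i in chosen]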

    Translational Functional Imaging in Surgery Enabled by Deep Learning

    Many clinical applications currently rely on imaging modalities such as Positron Emission Tomography (PET), Magnetic Resonance Imaging (MRI), and Computed Tomography (CT). All of these modalities provide valuable patient data that aids clinical decision-making and patient care. Despite their undeniable success, most are limited to preoperative scans and focus on morphology analysis, e.g. tumor segmentation, radiation treatment planning, and anomaly detection. Even though the assessment of functional properties such as perfusion is crucial in many surgical procedures, it remains highly challenging via simple visual inspection. Functional imaging techniques such as Spectral Imaging (SI) link the unique optical properties of different tissue types with metabolism changes, blood flow, chemical composition, etc. As such, SI can provide much richer information that can improve patient treatment and care. In particular, perfusion assessment with functional imaging has become more relevant due to its involvement in the treatment and development of several diseases, such as cardiovascular disease. Current clinical practice relies on Indocyanine Green (ICG) injection to assess perfusion. Unfortunately, this method can only be used once per surgery and has been shown to trigger deadly complications in some patients (e.g. anaphylactic shock).

    This thesis addressed common roadblocks on the path to translating optical functional imaging modalities to clinical practice. The main challenges tackled relate to a) the slow recording and processing speed of SI devices, b) the errors introduced in functional parameter estimation under changing illumination conditions, c) the lack of medical data, and d) the high inter-patient tissue heterogeneity that is commonly overlooked. The framework follows a natural path to translation that starts with hardware optimization. To overcome the limitations imposed by the lack of labeled clinical data and by current slow SI devices, a domain- and task-specific band selection component was introduced. This component reduced the amount of data needed to monitor perfusion. Moreover, the method leverages large amounts of synthetic data which, paired with unlabeled in vivo data, can generate highly accurate simulations of a wide range of domains. The approach was validated in vivo in a head and neck rat model and showed higher oxygenation contrast between normal and cancerous tissue in comparison to a baseline using all available bands.

    The need for translation to open surgical procedures was met by an automatic light source estimation component. This method extracts specular reflections from low-exposure spectral images and processes them to obtain an estimate of the light source spectrum that generated those reflections. The benefits of light source estimation were demonstrated in silico, in ex vivo pig liver, and in vivo in human lips, where the oxygenation estimation error was reduced when the light source estimated with this method was used. These experiments also showed that the approach proposed in this thesis outperforms other baseline approaches.

    Video-rate functional property estimation was achieved by two main components: a regression component and an Out-of-Distribution (OoD) component. At the core of both is a compact SI camera paired with state-of-the-art deep learning models to achieve real-time functional estimation. The first component features a deep learning model based on a Convolutional Neural Network (CNN) architecture trained on highly accurate physics-based simulations of light-tissue interactions, thereby overcoming the lack of labeled in vivo data. This approach was validated for perfusion monitoring in pig brain and in a clinical study involving human skin. It was shown to be capable of monitoring subtle perfusion changes in human skin in an arm-clamping experiment, and of monitoring Spreading Depolarizations (SDs) (deoxygenation waves) on the surface of a pig brain. Although this method is well suited to perfusion monitoring in domains that are well represented by the physics-based simulations on which it was trained, its performance cannot be guaranteed for outlier domains. To handle such domains, the task of ischemia monitoring was rephrased as an OoD detection task. This functional estimation component comprises an ensemble of Invertible Neural Networks (INNs) that only requires perfused-tissue data from individual patients to detect ischemic tissue as outliers. The first-ever clinical study involving a video-rate-capable SI camera in laparoscopic partial nephrectomy was designed to validate this approach. The study revealed particularly high inter-patient tissue heterogeneity in the presence of pathologies (cancer), and it demonstrated that this personalized approach can now monitor ischemia at video rate with SI during laparoscopic surgery.

    In conclusion, this thesis addressed challenges related to slow image recording and processing during surgery, and proposed a light source estimation method to facilitate translation to open surgical procedures. The proposed methodology was validated in a wide range of domains: in silico, rat head and neck, pig liver and brain, and human skin and kidney. In particular, the first clinical trial with spectral imaging in minimally invasive surgery demonstrated that video-rate ischemia monitoring is now possible with deep learning.
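    As a rough sketch of the light-source-estimation idea described above (not the thesis implementation; the threshold and names are assumptions), the brightest pixels of a low-exposure spectral image can be treated as specular reflections and their mean spectrum used as the illuminant estimate:

        import numpy as np

        def estimate_light_source(spectral_image, specular_quantile=0.999):
            """spectral_image: (H, W, B) low-exposure spectral image.
            Returns a unit-norm estimate of the light source spectrum, shape (B,)."""
            X = spectral_image.reshape(-1, spectral_image.shape[-1]).astype(float)
            intensity = X.sum(axis=1)
            # Keep only the brightest pixels, which are likely specular reflections.
            specular = X[intensity >= np.quantile(intensity, specular_quantile)]
            spectrum = specular.mean(axis=0)
            return spectrum / np.linalg.norm(spectrum)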

    High Dynamic Range Spectral Imaging Pipeline For Multispectral Filter Array Cameras

    Spectral filter array imaging exhibits a strong similarity with color filter array imaging. This permits us to embed the technology in practical vision systems with little adaptation of existing solutions. In this communication, we define an imaging pipeline, extended from color filter arrays, that permits high dynamic range (HDR) spectral imaging. We propose an implementation of this pipeline on a prototype sensor and evaluate the quality of our implementation on real data with objective metrics and visual examples. We demonstrate that we reduce noise and, in particular, solve the problem of noise generated by the lack of…
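    A minimal sketch of the kind of per-pixel exposure fusion such a pipeline builds on (not the paper's actual pipeline; the weighting scheme and names are assumptions): several raw multispectral-filter-array (MSFA) mosaics taken at different exposure times are merged into one HDR mosaic before demosaicking.

        import numpy as np

        def merge_exposures(raws, exposure_times, saturation=0.95):
            """raws: list of (H, W) raw MSFA mosaics scaled to [0, 1];
            exposure_times: matching exposure times in seconds.
            Returns an HDR radiance-like mosaic (per-channel demosaicking follows)."""
            num = np.zeros_like(raws[0], dtype=float)
            den = np.zeros_like(raws[0], dtype=float)
            for raw, t in zip(raws, exposure_times):
                raw = raw.astype(float)
                # Hat weighting: trust mid-range pixels, ignore nearly saturated ones.
                w = np.where(raw < saturation, 1.0 - np.abs(2.0 * raw - 1.0), 0.0)
                num += w * raw / t
                den += w
            return num / np.maximum(den, 1e-8)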

    A Novel Framework for Interactive Visualization and Analysis of Hyperspectral Image Data


    Semantic Color Constancy

    Color constancy aims to perceive the actual color of an object, disregarding the effect of the light source. Recent works showed that utilizing the semantic information in an image enhances the performance of computational color constancy methods. Considering the recent success of segmentation methods and the increased number of labeled images, we propose a color constancy method that combines individual illuminant estimates of detected objects, computed using the classes of the objects and their associated colors. We then introduce a weighting system that values the applicability of each object class to the color constancy problem. Lastly, we introduce another metric expressing how well a detected object fits the learned model of its class. Finally, we evaluate the proposed method on a popular color constancy dataset, confirming that each added weight enhances the performance of the global illuminant estimation. Experimental results are promising, outperforming conventional methods while competing with state-of-the-art methods. -- M.S. - Master of Science
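    The combination step described above can be sketched roughly as follows (an illustrative outline only, not the thesis implementation; all names and values are assumptions): each detected object contributes an illuminant estimate that is weighted by its class reliability and by how well it fits its class model.

        import numpy as np

        def combine_illuminants(object_estimates, class_weights, fit_scores):
            """object_estimates: (N, 3) per-object illuminant RGBs;
            class_weights: (N,) usefulness of each object's class for color constancy;
            fit_scores: (N,) how well each detection matches its class color model.
            Returns a unit-norm global illuminant estimate."""
            est = np.asarray(object_estimates, float)
            w = np.asarray(class_weights, float) * np.asarray(fit_scores, float)
            global_est = (w[:, None] * est).sum(axis=0) / max(w.sum(), 1e-8)
            return global_est / np.linalg.norm(global_est)

        # Hypothetical example: three detections with different reliabilities.
        print(combine_illuminants([[0.8, 1.0, 0.7], [0.9, 1.0, 0.6], [0.5, 1.0, 1.0]],
                                  [0.9, 0.7, 0.2], [0.8, 0.9, 0.4]))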