4 research outputs found

    Evaluation and optimal design of spectral sensitivities for digital color imaging

    The quality of an image captured by a color imaging system depends primarily on three factors: sensor spectral sensitivity, illumination, and scene. While knowledge of the illumination is important, the sensitivity characteristics are critical to the success of imaging applications and need to be optimally designed under practical constraints. The ultimate image quality is judged subjectively by the human visual system. This dissertation addresses the evaluation and optimal design of spectral sensitivity functions for digital color imaging devices. Color imaging fundamentals and device characterization are discussed first. For the evaluation of spectral sensitivity functions, this dissertation concentrates on the characteristics of imaging noise. Signal-independent and signal-dependent noise together form an imaging noise model, and the noise is propagated as the signal is processed. A new colorimetric quality metric, the unified measure of goodness (UMG), which addresses color accuracy and noise performance simultaneously, is introduced and compared with other available quality metrics; through this comparison, UMG is designated as the primary evaluation metric. On the optimal design of spectral sensitivity functions, three generic approaches, optimization through enumerative evaluation, optimization of parameterized functions, and optimization of an additional channel, are analyzed for the case in which the filter fabrication process is unknown. Otherwise, a hierarchical design approach is introduced, which emphasizes the use of the primary metric while refining the initial optimization results through the application of multiple secondary metrics. Finally, the validity of UMG as a primary metric and of the hierarchical approach is experimentally tested and verified.
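
    As an illustrative aside (not the dissertation's own UMG code), the sketch below shows how a combined signal-independent plus signal-dependent noise model propagates through a single linear colour-correction step; the correction matrix, noise parameters, and camera responses are hypothetical placeholders.

    import numpy as np

    def propagated_variance(signal, M, read_var=1e-4, shot_gain=5e-3):
        # Per-channel input variance: a signal-independent (read/dark) term
        # plus a signal-dependent (shot-noise-like) term.
        var_in = read_var + shot_gain * np.asarray(signal, dtype=float)
        # For a linear transform y = M @ x with independent channels,
        # Var(y_i) = sum_j M_ij^2 * Var(x_j).
        return (np.asarray(M, dtype=float) ** 2) @ var_in

    # Hypothetical 3x3 colour-correction matrix and mid-grey camera responses.
    M = np.array([[ 1.8, -0.6, -0.2],
                  [-0.3,  1.5, -0.2],
                  [ 0.0, -0.5,  1.5]])
    x = np.array([0.40, 0.50, 0.45])
    print(propagated_variance(x, M))  # per-channel noise variance after colour correction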

    Defining Acceptable Colour Tolerances for Identity Branding in Natural Viewing Conditions

    Graphic arts provide the channel for the reproduction of most brand communications. The reproduction tolerances in the graphic arts industry are based on standards that aim to produce visually acceptable outcomes. To communicate with their target audience, brands use a set of visual cues, singly or in combination, to represent themselves. The outcomes are often defined entirely by their colour specification, without associating it with target parameters or suitable colour thresholds. This paper investigates the feasibility of defining colour tolerances for brand graphical representations. The National Health Service branding was used as a test case, borne out of a need to resolve differences between contracted suppliers of brand graphics. Psychophysical evaluation of the colour-coded navigation used to facilitate wayfinding in hospitals, under the varying illuminances found across the estate, gave a maximum acceptable colour difference threshold of 5 ΔE00. Simulation of defined hospital illumination levels, between 25 and 3000 lux, resulted in an acceptable colour tolerance estimate for colour-coded navigation of 3.6 ΔE00. Using ICC media-relative correction, an experiment was designed to test the extent to which substrate white points could be corrected for colour differences between brand proofs and reproductions. For branded stationery and publications, substrate corrections to achieve visual matches had acceptable colour difference thresholds of 9.5 ΔE*ab for solid colours but only 2.5 ΔE*ab for tints. Substrate white point corrections on displays were found to be approximately 12 ΔE*ab for solids and 5 ΔE*ab for tints. Where display media were concerned, the use of non-medical-grade displays to view medical images and branded content was determined to be inefficient unless suitable greyscale functions were employed. A STRESS test was carried out, for TC 1-93 Greyscale Calculation for Self-Luminous Devices, to compare the DICOM GSDF with Whittle's log brightness function; Whittle's function was found to outperform the DICOM GSDF. The colour difference formulas used in this research were tested using near-neutral samples judged by observers through estimated magnitude differences. The CIEDE2000 formula was found to outperform CIELAB, despite unexpected outcomes when tested using displays; CIELAB was outperformed in ΔL* by CIEDE2000 for displays. Overall, it was found that identity branding colour reproduction was mostly suited to graphic arts tolerances; however, to address specific communications, approved tolerances reflecting viewing environments would be the most efficient approach. The findings of this research highlight the need for brand visualisation to consider the adoption of a strategy that includes graphic arts approaches. This is the first time that the subject of defining how brands achieve tolerances for their targeted visual communications has been researched.
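
    As a minimal illustration of the kind of tolerance check described above (not the paper's own procedure), the sketch below computes the simple CIE76 ΔE*ab distance between a brand reference and a reproduction in CIELAB and compares it against the 9.5 ΔE*ab solid-colour acceptability threshold reported in the abstract; the Lab values are hypothetical, and the stricter CIEDE2000 (ΔE00) formula would normally be taken from a colour-science library rather than re-implemented.

    import numpy as np

    def delta_E_ab(lab_ref, lab_sample):
        # CIE76 colour difference: Euclidean distance between two CIELAB triplets.
        return float(np.linalg.norm(np.asarray(lab_ref, float) - np.asarray(lab_sample, float)))

    brand_ref = [44.0, -32.0, -20.0]     # hypothetical brand colour (L*, a*, b*)
    reproduction = [46.5, -30.0, -18.5]  # hypothetical printed reproduction
    tolerance = 9.5                      # solid-colour acceptability threshold from the study

    dE = delta_E_ab(brand_ref, reproduction)
    print(f"dE*ab = {dE:.2f} -> {'acceptable' if dE <= tolerance else 'out of tolerance'}")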

    Classification of skin tumours through the analysis of unconstrained images

    Skin cancer is the most frequent malignant neoplasm in Caucasian individuals. According to the Skin Cancer Foundation, the incidence of melanoma, the most malignant of skin tumours, and the resultant mortality have increased exponentially during the past 30 years and continue to grow [1]. Although often intractable in advanced stages, skin cancer in general, and melanoma in particular, if detected at an early stage, can achieve cure ratios of over 95% [1,55]. Early screening of lesions is therefore crucial if a cure is to be achieved. Most skin lesion classification systems rely on a human expert supported by dermatoscopy, an enhanced and zoomed photograph of the lesion zone. Nevertheless, and although contrary claims exist, as far as is known to the author, classification results are currently rather inaccurate and need to be verified through laboratory analysis of a piece of the lesion's tissue. The aim of this research was to design and implement a system able to automatically classify skin spots as inoffensive or dangerous with a small margin of error; if possible, with higher accuracy than the results normally achieved by a human expert, and certainly better than any existing automatic system. The system described in this thesis meets these criteria. It is able to capture an unconstrained image of the affected skin area and extract a set of relevant features that may lead to, and be representative of, the four main classification characteristics of skin lesions: Asymmetry, Border, Colour, and Diameter. These features are then evaluated through a Bayesian statistical process, simple and Fuzzy k-Nearest Neighbour classifiers, a Support Vector Machine, and an Artificial Neural Network, in order to classify the skin spot as either a melanoma or not. The characteristics selected and used throughout this work are, to the author's knowledge, combined in an innovative manner. Rather than simply selecting absolute values from the image characteristics, those numbers were combined into ratios, providing much greater independence from environmental conditions during image capture. During this work, image gathering became one of the most challenging activities. In fact, several of the potential sources initially identified failed, so the author had to use all the pictures he could find, namely on the Internet. This limited the test set to only 136 images. Nevertheless, the results were excellent. The algorithms developed were implemented into a fully working system which was extensively tested. It gives a correct classification of between 76% and 92%, depending on the percentage of pictures used to train the system. In particular, the system gave no false negatives. This is crucial, since a system which gave false negatives might deter a patient from seeking further treatment, with a disastrous outcome. These results are achieved by detecting precise edges for every lesion image, extracting the features considered relevant, giving different weights to the various extracted features, and submitting these values to six classification algorithms (k-Nearest Neighbour, Fuzzy k-Nearest Neighbour, Naïve Bayes, Tree Augmented Naïve Bayes, Support Vector Machine, and Multilayer Perceptron) in order to determine the most reliable combined process.
    Training was carried out in a supervised way: all the lesions were previously classified by an expert in the field before being subjected to the scrutiny of the system. The author is convinced that the work presented in this PhD thesis is a valid contribution to the field of skin cancer diagnostics. Although its scope is limited (one lesion per image), the results achieved by this arrangement of segmentation, feature extraction, and classification algorithms show that this is the right path towards a reliable early screening system. If and when values for age, gender, and lesion evolution can be added to these data as classification features, the results will no doubt become even more accurate, allowing for an improvement in the survival rates of skin cancer patients.
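
    As a hedged sketch of how such a classifier comparison might look (this is not the thesis's implementation), the snippet below cross-validates a few of the named classifiers on a placeholder table of 136 lesions with four ABCD-style ratio features; the data and labels are random stand-ins, and Fuzzy k-Nearest Neighbour and Tree Augmented Naïve Bayes are not available in scikit-learn and would need separate implementations.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.random((136, 4))          # 136 lesions x 4 ABCD-style ratio features (placeholders)
    y = rng.integers(0, 2, size=136)  # 0 = benign, 1 = melanoma (placeholder labels)

    classifiers = {
        "k-NN": KNeighborsClassifier(n_neighbors=5),
        "Naive Bayes": GaussianNB(),
        "SVM": SVC(kernel="rbf"),
        "MLP": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000),
    }
    for name, clf in classifiers.items():
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"{name}: mean cross-validated accuracy {scores.mean():.2f}")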

    Rethinking auto-colourisation of natural images in the context of deep learning

    Auto-colourisation is the ill-posed problem of creating a plausible full-colour image from a grey-scale prior. The current state of the art utilises image-to-image Generative Adversarial Networks (GANs). The standard method for training colourisation is to reformulate RGB images into a luminance prior and a two-channel chrominance supervisory signal. However, progress in auto-colourisation is inherently limited by multiple prerequisite dilemmas, in which unsolved problems are mutual prerequisites. This thesis advances the field of colourisation on three fronts: architecture, measures, and data. Changes are recommended to common GAN colourisation architectures: firstly, removing batch normalisation from the discriminator so that the discriminator can learn the primary statistics of plausible colour images; secondly, eliminating the direct L1 loss on the generator, as L1 limits the discovery of the plausible colour manifold. The lack of an objective measure of plausible colourisation necessitates resource-intensive human evaluation and the repurposing of objective measures from other fields. There is no consensus on the best objective measure, owing to a knowledge gap regarding how well objective measures model the mean human opinion of plausible colourisation. An extensible data set of human-evaluated colourisations, the Human Evaluated Colourisation Dataset (HECD), is presented. The results from this dataset are compared with the commonly used objective measures and uncover a poor correlation between the objective measures and mean human opinion; the HECD can be used to assess the appropriateness of objective measures proposed in the future. An interactive tool supplied with the HECD allows a first exploration of the space of plausible colourisation. Finally, it is shown that the luminance channel is not representative of the legacy black-and-white images that will be presented to models when deployed; this leads to out-of-distribution errors in all three channels of the final colour image. A novel technique is proposed to simulate priors that match any black-and-white media for which the spectral response is known.
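
    As a minimal sketch of the standard training reformulation mentioned above (assuming a CIELAB-style split, not necessarily the thesis's exact pipeline), the snippet below separates an RGB image into a luminance prior and a two-channel chrominance supervisory signal; the input file name is a placeholder.

    from skimage import io, color

    rgb = io.imread("example.png")[..., :3] / 255.0  # hypothetical input image, scaled to [0, 1]
    lab = color.rgb2lab(rgb)                         # CIELAB: L in [0, 100], a/b roughly in [-128, 127]

    luminance_prior = lab[..., :1] / 100.0           # single-channel model input, scaled to [0, 1]
    chrominance_target = lab[..., 1:] / 128.0        # two-channel supervisory signal, roughly [-1, 1]

    print(luminance_prior.shape, chrominance_target.shape)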