20 research outputs found

    Segmentation of images by color features: a survey

    Image segmentation is an important stage for object recognition. Many methods have been proposed in the last few years for grayscale and color images. In this paper, we present a deep review of the state of the art in color image segmentation methods; we explain techniques based on edge detection, thresholding, histogram thresholding, region-based methods, feature clustering and neural networks. Because color spaces play a key role in the methods reviewed, we also explain in detail the color spaces most commonly used to represent and process colors. In addition, we present some important applications that use the image segmentation methods reviewed. Finally, we show a set of metrics frequently used to evaluate segmented images quantitatively.
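
    As an illustration of the feature-clustering family surveyed above, the following minimal sketch segments an image by k-means clustering of its pixel colors; the file name, the cluster count and the use of scikit-learn are illustrative assumptions, not a specific method from the survey.

```python
# Minimal sketch of a feature-clustering segmentation: group pixels by
# color with k-means and label each pixel with its cluster index.
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

img = np.asarray(Image.open("scene.png").convert("RGB"), dtype=np.float64)
h, w, _ = img.shape

# Each pixel becomes a 3-D color feature vector (raw RGB here; the survey
# notes that the choice of color space matters for such methods).
features = img.reshape(-1, 3)

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)
labels = kmeans.labels_.reshape(h, w)  # one region label per pixel
```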

    Colour for the Advancement of Deep Learning in Computer Vision

    This thesis explores several research areas in Deep Learning for computer vision that concern colour. First, it considers one of the longest-standing challenges in Deep Learning: how can Deep Learning algorithms learn successfully without human-annotated data? To that end, the thesis examines using the colours in images to learn meaningful visual representations as a substitute for learning from hand-annotated data. Second, and closely related, is the application of Deep Learning to automate the complex graphics task of image colourisation, the process of adding colours to black-and-white images. Third, the thesis explores colour spaces and how the representation of colours in images affects the performance of Deep Learning models.
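
    The colourisation pretext task described above can be made concrete with a small data-preparation sketch: the image is converted to CIELAB, the lightness channel is the input and the chroma channels are the free training target. The file name and the use of scikit-image are assumptions; the thesis's actual models are not reproduced here.

```python
# Data preparation for colourisation as a self-supervised pretext task:
# convert to CIELAB, use lightness L as input and chroma a/b as target.
import numpy as np
from skimage import io
from skimage.color import rgb2lab

rgb = io.imread("photo.png")[..., :3] / 255.0  # float RGB in [0, 1]
lab = rgb2lab(rgb)

L = lab[..., 0:1]   # lightness: the "black and white" input the model sees
ab = lab[..., 1:3]  # chroma: the target the model must predict

# A colourisation network f would then be trained to minimise ||f(L) - ab||,
# with no human annotation required.
```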

    Digital Image Analysis of Vitiligo for Monitoring of Vitiligo Treatment

    Vitiligo is an acquired pigmentary skin disorder characterized by depigmented macules that result from damage to and destruction of epidermal melanocytes. Visually, the vitiligous areas are paler than normal skin or completely white due to the lack of the pigment melanin. The course of vitiligo is unpredictable: vitiligous skin lesions may remain stable for years before worsening. Vitiligo treatments have two objectives: to arrest disease progression and to re-pigment the vitiligous skin lesions. To monitor the efficacy of treatment, dermatologists observe the disease directly, or indirectly using digital photos. Currently there is no objective method to determine the efficacy of vitiligo treatment. The Physician's Global Assessment (PGA) scale is the current scoring system used by dermatologists to evaluate treatment; it is based on the degree of repigmentation within lesions over time. This quantitative tool, however, may not help to detect slight changes due to treatment, as it still depends largely on the human eye and judgment to produce the scores. In addition, the PGA score is subjective, as it varies between dermatologists. The progression of vitiligo treatment can be very slow and can take more than 6 months. Dermatologists find it visually hard to determine areas of skin repigmentation because of this slow progress, and as a result observations are made over a longer time frame. The objective of this research is to develop a tool that enables dermatologists to determine and quantify areas of repigmentation objectively over a shorter time frame during treatment. The approach is based on digital image processing techniques. Skin color is due to the combination of skin histological parameters, namely the pigments melanin and haemoglobin. In digital imaging, however, color is produced by combining three different spectral bands, namely red, green, and blue (RGB). It is believed that the spatial distributions of melanin and haemoglobin in a skin image can be separated, and skin color distribution is found to lie on a two-dimensional melanin-haemoglobin color subspace. In order to determine repigmentation (due to the pigment melanin), it is necessary to convert the RGB skin image to this two-dimensional color subspace. Using principal component analysis (PCA) as a dimension reduction tool, the two-dimensional subspace can be represented by its first and second principal components. Independent component analysis is then employed to convert the two-dimensional subspace into skin images that represent skin areas due to melanin and haemoglobin only. In the melanin image, vitiligous skin lesions are identified as skin areas that lack melanin. Segmentation is performed to separate the healthy skin from the vitiligous lesions. The difference in vitiligous surface area between skin images before and after treatment is expressed as a percentage of repigmentation for each vitiligo lesion; this percentage represents the repigmentation progression of a particular body region. Results of preliminary and pre-clinical trial studies show that our vitiligo monitoring system is able to determine repigmentation progression, and thus treatment efficacy, objectively over a shorter time cycle. An intensive clinical trial using the developed system is currently underway at Hospital Kuala Lumpur.
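
    A minimal sketch of the pipeline described above, assuming scikit-learn; the file name, the lesion threshold and the choice of which independent component corresponds to melanin are illustrative assumptions rather than the thesis's exact procedure.

```python
# Sketch: PCA reduces RGB skin pixels to a 2-D melanin-haemoglobin
# subspace, then ICA unmixes the two axes into independent sources.
import numpy as np
from PIL import Image
from sklearn.decomposition import PCA, FastICA

img = np.asarray(Image.open("skin.png").convert("RGB"), dtype=np.float64)
h, w, _ = img.shape
pixels = img.reshape(-1, 3)

# Step 1: skin colors lie close to a 2-D plane of the RGB space.
subspace = PCA(n_components=2).fit_transform(pixels)

# Step 2: unmix the plane into two independent components; which one
# corresponds to melanin must be identified per image (assumed index 0).
sources = FastICA(n_components=2, random_state=0).fit_transform(subspace)
melanin = sources[:, 0].reshape(h, w)

# Step 3: lesions appear as areas lacking melanin; a simple threshold
# (an assumption here) separates them from healthy skin so that lesion
# areas before and after treatment can be compared.
lesion_mask = melanin < np.percentile(melanin, 20)
lesion_area = int(lesion_mask.sum())
```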

    Dimension reduction of image and audio space

    The reduction of data necessary for storage or transmission is a desirable goal in the digital video and audio domain. Compression schemes strive to reduce the amount of storage space or bandwidth needed to keep or move the data. Data reduction can be accomplished by removing or recoding visually or audibly unnecessary data, thus aiding the compression phase of data processing. The purpose of this work is the characterization and identification of data that can be successfully removed or reduced. New philosophy, theory and methods for data processing are presented towards the goal of data reduction. The philosophy and theory developed in this work establish a foundation for high-speed data reduction suitable for multimedia applications. The developed methods encompass motion detection and edge detection as features of the systems. The philosophy of energy-flow analysis in video processing enables the consideration of noise in digital video data, and research into noise versus motion leads to an efficient and successful method of identifying motion in a sequence. Research into the underlying statistical properties of vector quantization provides insight into its performance characteristics and leads to successful improvements in application. The underlying statistical properties of the vector quantization process are analyzed, and three theorems are developed and proved that establish the statistical distributions and probability densities of various metrics of the vector quantization process. From these properties, an intelligent and efficient algorithm design is developed and tested; performance improvements in both time and quality are established through algorithm analysis and empirical testing, and the empirical results are presented.
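
    To make the vector quantization setting concrete, the sketch below trains a small codebook with k-means and encodes vectors as nearest-codeword indices; the codebook size, block shape and use of scikit-learn are illustrative assumptions, not the improved algorithm developed in the thesis.

```python
# Sketch of vector quantization: train a codebook with k-means, then
# encode each vector as the index of its nearest codeword.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
blocks = rng.random((10_000, 16))  # e.g. 4x4 image blocks as 16-D vectors

codebook = KMeans(n_clusters=256, n_init=4, random_state=0).fit(blocks)

# Encoding replaces each 16-D block by a single byte-sized index, which
# is where the data reduction comes from; decoding looks the index up.
indices = codebook.predict(blocks)
reconstructed = codebook.cluster_centers_[indices]
distortion = float(np.mean((blocks - reconstructed) ** 2))
```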

    Calm Displays and Their Applications : Making Emissive Displays Mimic Reflective Surfaces Using Visual Psychophysics, Light Sensing and Colour Science

    Our environment is increasingly full of obtrusive display panels, which become illuminating surfaces when on and void black rectangles when off. Some researchers argue that emissive displays are incompatible with Weiser and Seely Brown's vision of "calm technology", due to their inability to blend seamlessly into the background. Indeed, Mankoff has shown that for any ambient technology, the ability to move into the periphery is the most relevant factor in its usability. In this thesis, a background mode for displays is proposed, based on the idea that a display can look like an ordinary piece of reflective paper showing the same content. The thesis consists of three main parts. In the first part (Chapter 4), human colour-matching performance between an emissive display and reflective paper under chromatic lighting conditions is measured in a psychophysical experiment. We find that threshold discrimination ellipses vary with condition (16.0×6.0 ΔEab on average), with lower sensitivity to chroma than to hue changes; match distributions are bimodal for some conditions. In the second part (Chapter 5), an algorithm enabling emissive displays to look like reflective paper is described and evaluated, giving an average error of ΔEab = 10.2 between display and paper. A field study showed that paper-like displays are more acceptable in bedrooms and that people are more likely to keep them always on than normal displays. Finally, the third part (Chapter 6) concerns the development and four-week trial of a paper-like display application. Using the autobiographical design method, a system for sharing bedtime with a remote partner was developed. We see that once unobtrusive, display systems are desired for use even in spaces like bedrooms. Paper-like displays enable both emerging and existing devices to move into the periphery and become "invisible", and therefore provide a new building block of calm technology that is not achievable using simple emissive displays.
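
    The error figures quoted above are CIELAB colour differences; for reference, the sketch below computes the CIE 1976 ΔEab between a display patch and a paper patch, with sample coordinates that are illustrative assumptions.

```python
# CIE 1976 colour difference: Euclidean distance in CIELAB space.
import numpy as np

def delta_e_ab(lab1, lab2):
    """Delta E*ab between two (L*, a*, b*) triples."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

display_patch = (62.0, 18.5, -7.2)  # (L*, a*, b*) measured on the display
paper_patch = (60.5, 14.0, -3.0)    # (L*, a*, b*) measured on the paper

print(delta_e_ab(display_patch, paper_patch))  # about 6.3 for these values
```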

    Colour and Colorimetry Multidisciplinary Contributions Vol. XIb

    It is well known that the subject of colour has an impact on a range of disciplines. Colour has been studied in depth for many centuries, and as well as contributing to theoretical and scientific knowledge, there have been significant developments in applied colour research, with many implications for the wider socio-economic community. The "Gruppo del Colore" was established at the 7th Convention of Colorimetry in Parma on 1 October 2004, as an evolution of the earlier SIOF Group of Colorimetry and Reflectoscopy founded in 1995. Its objective was to encourage multi- and interdisciplinary collaboration and networking between people in Italy who address problems and issues of colour and illumination from a professional, cultural and scientific point of view. On 16 September 2011 in Rome, on the occasion of the VII Color Conference, the members' assembly voted for the autonomy of the group, and the autonomy of the Association was achieved in early 2012. These are the proceedings of the English sessions of the XI Conferenza del Colore.

    Stereoscopic matching of color images in the presence of occlusions

    This work deals with stereo vision and, more precisely, the matching of pixels using correlation measures. Matching is an important task in computer vision, as the accuracy of the three-dimensional reconstruction depends on the accuracy of the matching. The main difficulties in matching are intensity distortions, noise, untextured areas, foreshortening and occlusions. Our research concerns the matching of color images and takes into account the problem of occlusions. First, we distinguish the different elements that can compose a matching algorithm. This description allows us to introduce a classification of matching methods into four families: local methods, global methods, mixed methods and multi-pass methods. Second, we set up an evaluation and comparison protocol based on fourteen image pairs, five evaluation areas and ten criteria. This protocol also provides disparity, ambiguity, inaccuracy and correct-disparity maps, and enables us to study the behavior of the methods we propose. Third, forty correlation measures are classified into five families: cross-correlation-based measures, classical statistics-based measures, derivative-based measures, non-parametric measures and robust measures. We also propose six new measures based on robust statistics. The results show that the robust measures, including the six new ones, are the most robust near occlusions. Fourth, we generalize dense correlation-based matching to color by choosing a color system and generalizing the correlation measures to color. Ten color systems have been evaluated and three different methods compared: computing the correlation on each color component and then merging the results; performing a principal component analysis and computing the correlation on the first principal component; and computing the correlation directly on colors. We conclude that the fusion method is the best. Finally, to take into account the problem of occlusions, we present new algorithms that use two correlation measures: a classic measure in non-occluded areas and a robust measure in occluded areas. We introduce four different methods: edge-detection methods, weighted-correlation methods, post-detection methods and a fusion method. The latter is the most efficient.
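
    As a rough illustration of the color generalization the abstract reports as best, the sketch below fuses per-channel correlation scores by averaging; ZNCC stands in for a classic correlation measure, the robust measures are not reproduced, and the window data is random rather than from the thesis's test images.

```python
# Sketch of the fusion approach: a correlation score per color channel,
# then the three scores merged by averaging. ZNCC stands in for a
# classic correlation measure.
import numpy as np

def zncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def fused_color_score(win_left, win_right):
    """Average the per-channel ZNCC scores (the fusion method)."""
    return float(np.mean([zncc(win_left[..., c], win_right[..., c])
                          for c in range(3)]))

# Usage with two hypothetical 9x9 color windows around candidate pixels:
rng = np.random.default_rng(0)
left, right = rng.random((9, 9, 3)), rng.random((9, 9, 3))
print(fused_color_score(left, right))
```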

    Methods for automatic weed detection for the process control of herbicide applications

    No abstract available.

    International Colloquium on Applications of Computer Science and Mathematics in Architecture and Civil Engineering: 4 to 6 July 2012, Bauhaus-Universität Weimar

    The 19th International Conference on the Applications of Computer Science and Mathematics in Architecture and Civil Engineering will be held at the Bauhaus-Universität Weimar from 4 to 6 July 2012. Architects, computer scientists, mathematicians, and engineers from all over the world will meet in Weimar for an interdisciplinary exchange of experiences, to report on their results in research, development and practice, and to discuss them. The conference covers a broad range of research areas: numerical analysis, function-theoretic methods, partial differential equations, continuum mechanics, engineering applications, coupled problems, computer science, and related topics. Several plenary lectures in the aforementioned areas will take place during the conference. We invite architects, engineers, designers, computer scientists, mathematicians, planners, project managers, and software developers from business, science and research to participate in the conference.