42 research outputs found

    Color description of low resolution images using fast bitwise quantization and border-interior classification

    Image classification often requires preprocessing and feature extraction steps that directly affect the accuracy and speed of the whole task. In this paper we investigate color features extracted from low resolution images, assessing the influence of the resolution settings on the final classification accuracy. We propose a border-interior classification extractor with a logarithmic distance function in order to maintain discrimination capability across different resolutions. Our study shows that the overall computational effort can be reduced by 98%. In addition, a fast bitwise quantization is used for its efficiency in converting RGB images to one-channel images. The contributions can benefit many applications that deal with large numbers of images or operate in scenarios with limited network bandwidth and power consumption concerns. FAPESP (grants #10/19159-1 and 11/22749-8); CNPq (grant #482760/2012-5)
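    As a rough illustration of the fast bitwise quantization idea, the sketch below keeps only the most significant bits of each RGB channel and packs them into a single byte per pixel. The 3-3-2 bit allocation and the function name are assumptions made for the example, not the paper's exact scheme.

```python
import numpy as np

def bitwise_quantize(rgb, bits=(3, 3, 2)):
    """Pack an HxWx3 uint8 RGB image into a one-channel image using only
    bit shifts. The per-channel bit allocation is an illustrative assumption."""
    r_bits, g_bits, b_bits = bits
    r = rgb[..., 0] >> (8 - r_bits)   # keep the most significant bits
    g = rgb[..., 1] >> (8 - g_bits)
    b = rgb[..., 2] >> (8 - b_bits)
    return (r << (g_bits + b_bits)) | (g << b_bits) | b

# Example: quantize a random low-resolution image to at most 256 color codes
img = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
quantized = bitwise_quantize(img)
print(quantized.shape, int(quantized.max()))  # (32, 32) and a value <= 255
```

    Because only shifts and bitwise ORs are involved, the conversion cost stays low even when a large number of images must be processed.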

    Image coding using wavelets, interval wavelets and multi-layered wedgelets

    Ph.D. (Doctor of Philosophy)

    Entropy in Image Analysis II

    Image analysis is a fundamental task for any application where extracting information from images is required. The analysis requires highly sophisticated numerical and analytical methods, particularly for those applications in medicine, security, and other fields where the results of the processing consist of data of vital importance. This fact is evident from all the articles composing the Special Issue "Entropy in Image Analysis II", in which the authors used widely tested methods to verify their results. In the process of reading the present volume, the reader will appreciate the richness of their methods and applications, in particular for medical imaging and image security, and a remarkable cross-fertilization among the proposed research areas
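    As a concrete example of the kind of entropy measure used throughout image analysis, the sketch below computes the Shannon entropy of an 8-bit grayscale histogram; it is a generic illustration, not a method taken from any particular article in the volume.

```python
import numpy as np

def shannon_entropy(gray):
    """Shannon entropy (bits per pixel) of an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins to avoid log(0)
    return -np.sum(p * np.log2(p))

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
print(f"entropy: {shannon_entropy(img):.2f} bits/pixel")  # near 8 for uniform noise
```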

    Proof-of-Concept

    Biometry is an area in great expansion and is considered a possible solution in cases where strong authentication is required. Although this area is quite advanced in theoretical terms, using it in practice still carries some problems. The systems available still depend on a high level of cooperation to achieve acceptable performance, which was the backdrop to the development of the following project. By studying the state of the art, we propose the creation of a new and less cooperative biometric system that reaches acceptable performance levels.

    The constant need for higher security standards, particularly for authentication, motivates the study of biometrics as a possible solution. Current mechanisms in this area are based on something one knows (a password) or something one possesses (a PIN code). However, this type of information is easily compromised or circumvented. Biometrics is therefore seen as a more robust solution, since it guarantees that authentication is based on physical or behavioral measures that define something the person is or does ("who you are" or "what you do"). Since biometrics is a very promising solution for authenticating individuals, new biometric systems appear ever more frequently. These systems rely on physical or behavioral measures to enable authentication (recognition) with a considerable degree of certainty. Recognition based on human body movement (gait), facial features or the structural patterns of the iris are some examples of the information sources on which current systems can rely. However, although such systems perform well as autonomous recognition agents, they still depend heavily on the cooperation they require. With this in mind, the field is taking steps toward making its methods as uncooperative as possible, thereby extending its goals beyond mere authentication in controlled environments to surveillance and control in non-cooperative settings (e.g. riots, robberies, airports). It is in this context that the present project arises. Through a study of the state of the art, it aims to prove that it is possible to build a system capable of operating in less cooperative environments, detecting and recognizing any person who comes within its range.

    The proposed system, PAIRS (Periocular and Iris Recognition System), performs recognition, as its name indicates, using information extracted from the iris and the periocular region (the region surrounding the eyes). The system is built around four stages: data capture, preprocessing, feature extraction and recognition. For data capture, a high-resolution image acquisition device capable of capturing in the NIR (near-infrared) spectrum was assembled; capturing in this spectrum mainly favors iris recognition, since capture in the visible spectrum would be more sensitive to variations in ambient light. The preprocessing stage incorporates all the system modules responsible for user detection, image quality assessment and iris segmentation. The detection module triggers the whole process, since it verifies whether a person is present in the scene. Once a person is detected, the regions of interest corresponding to the iris and the periocular region are located and the quality of their acquisition is checked. After these steps, the iris of the left eye is segmented and normalized. Then, using several descriptors, the biometric information of the detected regions of interest is extracted and a biometric feature vector is created. Finally, the collected biometric data are compared with those already stored in the database, producing a ranked list of biometric similarity scores and thus the system's final response.

    Once the implementation was complete, a set of images was acquired with the system with the participation of a group of volunteers. This image set made it possible to run performance tests, verify and tune parameters, and optimize the feature extraction and recognition components of the system. Analysis of the results showed that the proposed system is able to perform its functions under less cooperative conditions.
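    To make the final matching stage concrete, here is a minimal sketch of ranking enrolled templates by Hamming distance to a probe iris code. The code length, the noise level and the function names are illustrative assumptions, not the actual PAIRS matcher.

```python
import numpy as np

def rank_gallery(probe, gallery):
    """Rank enrolled binary templates by Hamming distance to a probe code.

    probe   : 1-D binary array (the query iris code)
    gallery : dict mapping subject id -> enrolled binary iris code
    Returns (subject_id, distance) pairs sorted best-first.
    """
    scores = {sid: float(np.mean(probe != code)) for sid, code in gallery.items()}
    return sorted(scores.items(), key=lambda kv: kv[1])

# Toy example with random 2048-bit iris codes and a 5% noisy probe
rng = np.random.default_rng(0)
gallery = {f"subject_{i}": rng.integers(0, 2, 2048) for i in range(5)}
probe = gallery["subject_3"] ^ (rng.random(2048) < 0.05)
print(rank_gallery(probe, gallery)[0])  # best match should be subject_3
```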

    Artificial intelligence and image processing applications for high-throughput phenotyping

    Doctor of Philosophy, Department of Computer Science, Mitchell L. Neilsen. The areas of Computer Vision and Scientific Computing have witnessed rapid growth in the last decade, with industrial robotics, automotive and healthcare acting as the primary vehicles for research and advancement. However, related research in other fields, such as agriculture, remains understudied. This dissertation explores the application of Computer Vision and Scientific Computing in an agricultural domain known as High-throughput Phenotyping (HTP). HTP is the assessment of complex seed traits such as growth, development, tolerance, resistance, ecology, yield, and the measurement of parameters that form more complex traits. The dissertation makes the following contributions. The first contribution is the development of algorithms to estimate morphometric traits such as length, width, area, and seed kernel count using 3-D graphics and static image processing, and the extension of existing algorithms for the same. The second contribution is the development of lightweight frameworks to aid in synthetic image dataset creation and image cropping for deep neural networks in HTP. Deep neural networks require a plethora of training data to yield results of the highest quality, but no such training datasets are readily available for HTP research, especially on seed kernels. The proposed synthetic image generation framework helps generate a profusion of training data at will to train neural networks from a meager sample of seed kernels. Besides requiring large quantities of data, deep neural networks require inputs of a certain size, and not all available data match the size required by the networks. The proposed image cropper helps to resize images without introducing any distortion, thereby making image data fit for consumption. The third contribution is the design and analysis of supervised and self-supervised neural network architectures trained on synthetic images to perform the tasks of seed kernel classification, counting and morphometry. In the area of supervised image classification, the state-of-the-art neural network models VGG-16, VGG-19 and ResNet-101 are investigated. A Simple framework for Contrastive Learning of visual Representations (SimCLR) [137], Momentum Contrast (MoCo) [55] and Bootstrap Your Own Latent (BYOL) [123] are leveraged for self-supervised image classification. The instance segmentation deep neural network models Mask R-CNN and YOLO are utilized to perform the tasks of seed kernel classification, segmentation and counting. The results demonstrate the feasibility of deep neural networks for their respective tasks of classification and instance segmentation. In addition to estimating seed kernel count from static images, algorithms that aid in seed kernel counting from videos are proposed and analyzed. An algorithm is proposed that creates a slit image which can be analyzed to estimate seed count; once the slit image is created, the video is no longer required, thereby significantly lowering the computational resources required for the estimation. The fourth contribution is the development of an end-to-end, automated image capture system for single seed kernel analysis. In addition to estimating length and width from 2-D images, the proposed system estimates the volume of a seed kernel from 2-D images using the technique of volume sculpting.
The relative standard deviation of the results produced by the proposed technique is lower (better) than that of volumetric estimation using the ellipsoid slicing technique. The fifth contribution is the development of image processing algorithms that provide feature enhancements to mobile applications to improve on-site phenotyping capabilities. Algorithms for two high-value features, namely leaf angle estimation and fractional plant cover estimation, are developed. The leaf angle estimation feature estimates the angle between stem and leaf in images captured with mobile phone cameras, whereas the fractional plant cover feature determines companion plants, i.e., plants that are able to co-exist and mutually benefit. The proposed techniques, frameworks and findings lay a solid foundation for future Computer Vision and Scientific Computing research in the domain of agriculture. The contributions are significant since the dissertation not only proposes techniques, but also develops low-cost end-to-end frameworks to leverage the proposed techniques in a scalable fashion
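    The slit-image idea for counting seed kernels from video can be sketched as follows: one pixel column is taken from each frame at a fixed position and the columns are stacked over time, so every kernel crossing the slit leaves a blob that can be counted without revisiting the video. The column index, the threshold and the use of scipy for connected components are assumptions for the example, not the dissertation's exact algorithm.

```python
import numpy as np
from scipy import ndimage

def build_slit_image(frames, col):
    """Stack one pixel column per grayscale video frame into a 2-D slit image."""
    return np.stack([frame[:, col] for frame in frames], axis=1)

def count_seeds(slit_image, threshold=128):
    """Count dark blobs (kernels crossing the slit) via connected components."""
    mask = slit_image < threshold      # kernels assumed darker than the background
    _, n_blobs = ndimage.label(mask)
    return n_blobs

# Toy example: 100 bright synthetic frames with three dark "kernels" passing by
frames = [np.full((64, 64), 255, dtype=np.uint8) for _ in range(100)]
for t in (10, 40, 70):
    for dt in range(5):
        frames[t + dt][30:36, 32] = 0
print(count_seeds(build_slit_image(frames, col=32)))  # prints 3
```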

    Recent Advances in Embedded Computing, Intelligence and Applications

    The latest proliferation of Internet of Things deployments and edge computing combined with artificial intelligence has led to new exciting application scenarios, where embedded digital devices are essential enablers. Moreover, new powerful and efficient devices are appearing to cope with workloads formerly reserved for the cloud, such as deep learning. These devices allow processing close to where data are generated, avoiding bottlenecks due to communication limitations. The efficient integration of hardware, software and artificial intelligence capabilities deployed in real sensing contexts empowers the edge intelligence paradigm, which will ultimately contribute to fostering the offloading of processing functionalities to the edge. In this Special Issue, researchers have contributed nine peer-reviewed papers covering a wide range of topics in the area of edge intelligence. Among them are hardware-accelerated implementations of deep neural networks, IoT platforms for extreme edge computing, neuro-evolvable and neuromorphic machine learning, and embedded recommender systems

    Technology 2003: The Fourth National Technology Transfer Conference and Exposition, volume 2

    Proceedings from symposia of the Technology 2003 Conference and Exposition, Dec. 7-9, 1993, Anaheim, CA, are presented. Volume 2 features papers on artificial intelligence, CAD&E, computer hardware, computer software, information management, photonics, robotics, test and measurement, video and imaging, and virtual reality/simulation

    Biometric Systems

    Biometric authentication has been widely used for access control and security systems over the past few years. The purpose of this book is to provide readers with the life cycle of different biometric authentication systems, from their design and development to qualification and final application. The major systems discussed in this book include fingerprint identification, face recognition, iris segmentation and classification, signature verification, and other miscellaneous systems covering management policies of biometrics, reliability measures, pressure-based typing and signature verification, biochemical systems, and behavioral characteristics. In summary, this book provides students and researchers with different approaches to developing biometric authentication systems and at the same time includes state-of-the-art approaches to their design and development. The approaches have been thoroughly tested on standard databases and in real world applications

    Segmentation of color and multispectral skin images (Segmentation d'images couleurs et multispectrales de la peau)

    Accurate border delineation of pigmented skin lesion (PSL) images is a vital first step in computer-aided diagnosis (CAD) of melanoma. This thesis presents a novel approach to automatic PSL border detection on color and multispectral skin images. We first introduce the concept of energy minimization by graph cuts in terms of maximum a posteriori estimation of a Markov random field (MAP-MRF framework). After a brief state of the art in interactive graph-cut based segmentation methods, we study the influence of the parameters of the segmentation algorithm on color images. Within this framework, we propose an energy function based on efficient classifiers (support vector machines and random forests) and a feature vector computed on a local neighborhood. For the segmentation of melanoma, we estimate concentration maps of skin chromophores, which are discriminating indices of melanomas, from color and multispectral images, and integrate these features into the vector. Finally, we detail the global framework for automatic segmentation of melanoma, which comprises two main stages: automatic selection of the "seeds" used by the graph cut and selection of the discriminating features. This tool compares favorably to classic graph-cut-based segmentation methods in terms of accuracy and robustness.
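    To make the MAP-MRF formulation more tangible, the sketch below computes the kind of energy that graph cuts minimize for a binary lesion/skin labeling: a data term derived from per-pixel classifier probabilities (for instance from an SVM or a random forest) plus a Potts smoothness term. The specific terms, the 4-connected neighborhood and the weight lam are illustrative assumptions, not the thesis's exact energy.

```python
import numpy as np

def mrf_energy(labels, prob_lesion, lam=1.0):
    """Energy of a binary labeling under a simple MAP-MRF model.

    labels      : HxW array in {0, 1} (0 = skin, 1 = lesion)
    prob_lesion : HxW array of per-pixel P(lesion | local features)
    lam         : weight of the Potts smoothness term
    """
    eps = 1e-9
    # Data term: negative log-likelihood of the chosen label at each pixel
    p = np.where(labels == 1, prob_lesion, 1.0 - prob_lesion)
    data = -np.log(p + eps).sum()
    # Smoothness term: Potts penalty on 4-connected label disagreements
    smooth = (labels[:, 1:] != labels[:, :-1]).sum() + (labels[1:, :] != labels[:-1, :]).sum()
    return data + lam * smooth

# Toy usage: the classifier is confident the left column is lesion
probs = np.array([[0.9, 0.1], [0.8, 0.2]])
labels = (probs > 0.5).astype(int)
print(mrf_energy(labels, probs, lam=0.5))
```

    For a binary labeling with a submodular pairwise term such as this Potts penalty, a graph cut finds the global minimum of the energy exactly, which is what makes the formulation attractive for lesion segmentation.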