
    A Study on Inspection of Defective Tablet Blister Using Image Segmentation Techniques

    Humans are affected by many kinds of diseases, and proper medication is the only way to overcome them, so medicines have become a vital part of human life. Medicines are manufactured on a very large scale, and during manufacturing many kinds of defects, such as breakage and cracks, can appear in the tablets or capsules of a blister pack. Defective tablets or capsules may cause side effects when consumed, due to variation in dosage, so manufactured tablets should be properly inspected before reaching the public. Manual inspection of such defects in tablet blisters is a very challenging task, and image segmentation is an important technique for automating visual inspection. Hence, it is important to propose approaches for detecting these defects in tablet blisters. In the literature, many researchers have proposed procedures for identifying such defects. In this research work, we review the methods used to identify defects in tablet blisters.
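
    The surveyed approaches are segmentation pipelines of the kind sketched below: segment tablet regions from the blister background and flag regions whose geometry deviates from expectations. This is a minimal illustration, assuming a top-down grayscale image; the area thresholds and kernel sizes are made-up values, not taken from any reviewed method.

    import cv2
    import numpy as np

    def find_defective_tablets(path, expected_area=(2000, 4000)):
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        blur = cv2.GaussianBlur(img, (5, 5), 0)
        # Otsu thresholding separates tablets from the blister background.
        _, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        # Morphological opening removes small specks before contour analysis.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        defects = []
        for c in contours:
            # Broken or chipped tablets fall outside the expected area range.
            if not expected_area[0] <= cv2.contourArea(c) <= expected_area[1]:
                defects.append(cv2.boundingRect(c))
        return defects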

    Steganography and Steganalysis in Digital Multimedia: Hype or Hallelujah?

    In this tutorial, we introduce the basic theory behind Steganography and Steganalysis, and present some recent algorithms and developments of these fields. We show how the existing techniques used nowadays are related to Image Processing and Computer Vision, point out several trendy applications of Steganography and Steganalysis, and list a few great research opportunities just waiting to be addressed.
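
    The canonical example behind much of this theory is least-significant-bit (LSB) embedding. The sketch below shows it for a grayscale image array; it is illustrative only, since modern embedding schemes are designed to resist exactly the statistical steganalysis that plain LSB falls to.

    import numpy as np

    def embed(cover: np.ndarray, message: bytes) -> np.ndarray:
        """Hide `message` in the lowest bit of each pixel of `cover` (uint8)."""
        bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
        flat = cover.flatten()  # flatten() returns a copy, so `cover` is untouched
        assert bits.size <= flat.size, "message too large for cover image"
        # Clear each pixel's lowest bit, then write one message bit into it.
        flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
        return flat.reshape(cover.shape)

    def extract(stego: np.ndarray, n_bytes: int) -> bytes:
        bits = stego.flatten()[:n_bytes * 8] & 1
        return np.packbits(bits).tobytes()

    stego = embed(np.zeros((64, 64), dtype=np.uint8), b"hi")
    assert extract(stego, 2) == b"hi"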

    Automatic signature verification system

    Philosophiae Doctor - PhD. In this thesis, we explore dynamic signature verification systems. Unlike other signature models, we use genuine signatures in this project, as they are more appropriate for real-world applications. Signature verification systems are typical examples of biometric devices that use physical and behavioral characteristics to verify that a person really is who he or she claims to be. Other popular biometric examples include fingerprint scanners and hand-geometry devices. Handwritten signatures have long been used to endorse financial transactions and legal contracts, although little or no verification of signatures is done. This sets the signature apart from other biometrics, as it is a well-accepted method of authentication. Until recently, only hidden Markov models were used for model construction. Ongoing research on signature verification has revealed that more accurate results can be achieved by combining the results of multiple models. We therefore propose to use combinations of multiple single-variate models instead of the single multivariate models currently adopted by many systems. Beyond this, the proposed system is an attractive way to make financial transactions more secure and to authenticate electronic documents, as it can be easily integrated into existing transaction procedures and electronic communication.
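
    The multi-model idea can be made concrete with a small sketch: score a test signature with several single-variate models, one per captured channel (e.g., x, y, pressure), and fuse the per-channel scores. The duck-typed `score` method and the mean-fusion rule are illustrative assumptions, not the thesis' actual models.

    import numpy as np

    def verify(test_channels, models, threshold=0.0):
        """test_channels: list of 1-D arrays; models: one scorer per channel."""
        # Each single-variate model scores only its own channel, e.g. an HMM
        # log-likelihood; a multivariate model would score all channels jointly.
        scores = [model.score(channel)
                  for channel, model in zip(test_channels, models)]
        fused = np.mean(scores)  # simple mean fusion of per-channel scores
        return fused >= threshold  # accept as genuine if the fused score clears it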

    Classifiers and machine learning techniques for image processing and computer vision

    Advisor: Siome Klein Goldenstein. PhD thesis, Universidade Estadual de Campinas, Instituto de Computação. In this work, we propose the use of classifiers and machine learning techniques to extract useful information from data sets (e.g., images) to solve important problems in Image Processing and Computer Vision. We are particularly interested in: two- and multi-class image categorization, hidden message detection, discrimination between natural and forged images, authentication, and multiclassification. To start with, we present a comparative survey of the state of the art in digital image forensics as well as hidden message detection. Our objective is to show the importance of the existing solutions and discuss their limitations. In this study, we show that most of these techniques strive to solve two common problems in Machine Learning: the feature selection and the classification techniques to be used. Furthermore, we discuss the legal and ethical aspects of image forensics analysis, such as the use of digital images by criminals.
    We introduce a technique for image forensics analysis in the context of hidden message detection and image classification into categories such as indoors, outdoors, computer-generated, and artwork. From this multi-class classification, some important questions arise: how to solve a multi-class problem so as to combine, for instance, several different features such as color, texture, shape, and silhouette, without worrying about the pre-processing and normalization of the combined feature vector? How to take advantage of different classifiers, each one custom-tailored to a specific set of classes in confusion? To cope with most of these problems, we present a feature and classifier fusion technique based on combinations of binary classifiers. We validate our solution with a real application for automatic produce classification. Finally, we address another interesting problem: how to combine powerful binary classifiers in the multi-class scenario more effectively, and how to boost their efficiency? In this context, we present a solution that boosts the efficiency and effectiveness of multi-class classification from binary techniques. Doctorate in Computer Engineering; Doctor in Computer Science.
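
    A minimal sketch of the binary-to-multi-class fusion idea follows: one binary classifier per pair of classes, each of which could be custom-tailored with its own features and base learner, combined by majority voting over the pairwise decisions. The uniform SVC base learner here is an illustrative stand-in, not the fusion scheme actually proposed in the thesis.

    from itertools import combinations
    import numpy as np
    from sklearn.svm import SVC

    class PairwiseFusion:
        """One-vs-one fusion of binary classifiers (assumes integer labels >= 0)."""

        def fit(self, X, y):
            self.clfs_ = []
            for a, b in combinations(np.unique(y), 2):
                idx = (y == a) | (y == b)
                clf = SVC()  # each pair could get its own features/classifier
                clf.fit(X[idx], y[idx])
                self.clfs_.append(clf)
            return self

        def predict(self, X):
            # votes: (n_samples, n_pairs) matrix of pairwise decisions.
            votes = np.stack([clf.predict(X) for clf in self.clfs_],
                             axis=1).astype(int)
            # The predicted class is the one winning the most pairwise duels.
            return np.array([np.bincount(row).argmax() for row in votes])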

    Biometric identity verification using on-line & off-line signature verification

    Biometrics is the utilization of biological characteristics (face, iris, fingerprint) or behavioral traits (signature, voice) for the identity verification of an individual. Biometric authentication is gaining popularity as a more trustworthy alternative to password-based security systems, as it is relatively hard to forget, steal, or guess. The signature is a behavioral biometric: it is based not on physical properties of the individual, such as fingerprint or face, but on behavioral ones. As such, one's signature may change over time, and it is not nearly as unique or difficult to forge as iris patterns or fingerprints; however, the signature's widespread acceptance by the public makes it more suitable for certain lower-security authentication needs. Signature verification is split into two categories according to the available input data. Off-line signature verification takes as input the image of a signature and is useful for the automatic verification of signatures found on bank checks and documents. On-line signature verification uses signatures captured by pressure-sensitive tablets and could be used in real-time applications like credit card transactions or resource access. In this work we present two complete systems for on-line and off-line signature verification. During registration to either of the systems, the user has to submit a number of reference signatures, which are cross-aligned to extract statistics describing the variation in the user's signatures. Both systems have a similar verification methodology and differ only in their data acquisition and feature extraction modules. A test signature's authenticity is established by first aligning it with each reference signature of the claimed user, resulting in a number of dissimilarity scores: the distances to the nearest, farthest, and template reference signatures. In previous systems, only one of these distances, typically the distance to the nearest reference signature or the distance to a template signature, was chosen, in an ad hoc manner, to classify the signature as genuine or forgery. Here we propose a method to utilize all of these distances, treating them as features in a two-class classification problem, using standard pattern classification techniques. The distances are first normalized, resulting in a three-dimensional space where genuine and forgery signature distributions are well separated. We experimented with the Bayes classifier, Support Vector Machines, and a linear classifier used in conjunction with Principal Component Analysis to classify a given signature into one of the two classes (forgery or genuine). Test data sets of 620 on-line and 100 off-line signatures were constructed to evaluate the performance of the two systems. Since it is very difficult to obtain real forgeries, we obtained skilled forgeries, supplied by forgers who had access to signature data to practice with before forging. The on-line system has a 1.4% error in rejecting forgeries while rejecting only 1.3% of genuine signatures. As an off-line signature is easier to forge, the off-line system's performance is lower: a 25% error in rejecting forgery signatures and a 20% error in rejecting genuine signatures. The results for the on-line system show a significant improvement over state-of-the-art results, and the results for the off-line system are comparable with the performance of experienced human examiners.
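
    The three-distance classification step described above can be sketched briefly. The alignment function (`dtw_distance` here) and the normalisation statistics are placeholders for the systems' actual alignment and preprocessing modules; only the overall shape of the pipeline follows the abstract.

    import numpy as np
    from sklearn.svm import SVC

    def distance_features(test_sig, references, template, dtw_distance):
        """Align the test signature to each reference; keep three distances."""
        d = np.array([dtw_distance(test_sig, r) for r in references])
        return np.array([d.min(), d.max(), dtw_distance(test_sig, template)])

    # Training (illustrative): build 3-D feature vectors for known genuine and
    # forgery signatures, normalise, then fit a standard two-class classifier.
    # X = np.stack([distance_features(s, refs, template, dtw) for s in train_sigs])
    # X = (X - X.mean(axis=0)) / X.std(axis=0)   # normalised 3-D distance space
    # clf = SVC().fit(X, labels)                 # labels: 1 = genuine, 0 = forgery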

    Analysis of intrinsic and extrinsic properties of biometric samples for presentation attack detection

    Advisors: Anderson de Rezende Rocha, Hélio Pedrini. PhD thesis, Universidade Estadual de Campinas, Instituto de Computação. Recent advances in biometrics, information forensics, and security have improved the recognition effectiveness of biometric systems. However, an ever-growing challenge is the vulnerability of such systems to presentation attacks, in which impostor users create synthetic samples from the original biometric information of a legitimate user and show them to the acquisition sensor, seeking to authenticate themselves as legitimate users. Depending on the trait used by the biometric authentication, the attack types vary with the type of material used to build the synthetic samples. For instance, in facial biometric systems, an attempted attack is characterized by the type of material the impostor uses, such as a photograph, a digital video, or a 3D mask with the facial information of a target user. In iris-based biometrics, presentation attacks can be accomplished with printout photographs or with contact lenses containing the iris patterns of a target user, or even synthetic texture patterns. In fingerprint biometric systems, impostor users can deceive the authentication process using replicas of the fingerprint patterns built with synthetic materials such as latex, play-doh, and silicone, among others. This research aimed at developing presentation attack detection (PAD) solutions whose objective is to detect attempted attacks considering different attack types in each modality. The lines of investigation presented in this thesis aimed at devising and developing representations based on spatial, temporal, and spectral information from the noise signature, on intrinsic properties of the biometric data (e.g., albedo, reflectance, and depth maps), and on supervised feature learning techniques, taking into account different testing scenarios including cross-sensor, intra-, and inter-dataset scenarios. The main findings and contributions presented in this thesis include: the creation of a large and publicly available benchmark containing approximately 17K videos of simulated presentation attacks and bona fide presentations in a facial biometric system, whose collection was formally authorized by the Research Ethics Committee at Unicamp; the development of novel approaches to the modeling and analysis of extrinsic properties of biometric samples related to artifacts added during the manufacturing of the synthetic samples and their capture by the acquisition sensor, whose results were superior to several approaches published in the literature that use traditional methods for image analysis (e.g., texture-based analysis); the investigation of an approach based on the analysis of intrinsic properties of faces, estimated from the information of shadows present on their surface; and the investigation of different approaches to automatically learning representations related to our problem, whose results were superior or competitive to state-of-the-art methods for the biometric modalities considered in this thesis.
    We also considered in this research the design of efficient neural networks with shallow architectures capable of learning characteristics related to our problem from the small data sets available for developing and evaluating PAD solutions. Doctorate in Computer Science; Doctor in Computer Science. Funding: 140069/2016-0 (CNPq), 142110/2017-5 (CAPES).
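
    A shallow PAD network of the kind mentioned above might look like the sketch below: a couple of convolutional blocks feeding a binary bona fide/attack head. The layer sizes and input resolution are illustrative choices, not the architectures evaluated in the thesis.

    import torch
    import torch.nn as nn

    class ShallowPADNet(nn.Module):
        """Tiny two-block CNN for binary presentation attack detection."""

        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, 2),  # two classes: bona fide vs. attack
            )

        def forward(self, x):
            return self.head(self.features(x))

    logits = ShallowPADNet()(torch.randn(4, 3, 96, 96))  # a batch of face crops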

    Measurement, optimisation and control of particle properties in pharmaceutical manufacturing processes

    Previously held under moratorium from 2 June 2020 until 6 June 2022. The understanding and optimisation of particle properties connected to their structure and morphology is a common objective for particle engineering applications, either to improve material handling in the manufacturing process or to influence Critical Quality Attributes (CQAs) linked to product performance. This work aims to demonstrate experimental means to support a rational development approach for pharmaceutical particulate systems, with a specific focus on droplet drying platforms such as spray drying. Micro-X-ray tomography (micro-XRT) is widely applied in areas such as the geo- and biomedical sciences to enable a three-dimensional investigation of specimens. Chapter 4 elaborates on practical aspects of micro-XRT for the quantitative analysis of pharmaceutical solid products, with an emphasis on the implemented image processing and analysis methodologies. Potential applications of micro-XRT in the pharmaceutical manufacturing process range from the characterisation of single crystals to fully formulated oral dosage forms. The extracted quantitative information can be utilised to directly inform product design and production for process development or optimisation. The non-destructive nature of micro-XRT analysis can further be employed to investigate structure-performance relationships, which might provide valuable insights for modelling approaches. Chapter 5 further demonstrates the applicability of micro-XRT for the analysis of ibuprofen capsules as a multi-particulate system, each capsule containing a population of approximately 300 pellets. The in-depth analysis of the collected micro-XRT image data allowed the extraction of more than 200 features quantifying aspects of the pellets' size, shape, porosity, surface and orientation. The employed feature selection and machine learning methods enabled the detection of broken pellets within a classification model. The classification model has an accuracy of more than 99.55% and a minimum precision of 86.20%, validated with a test dataset of 886 pellets from three capsules. The combination of single droplet drying (SDD) experiments with a subsequent micro-XRT analysis was used for a quantitative investigation of the particle design space and is described in Chapter 6. The implemented platform was applied to investigate the solidification of formulated metformin hydrochloride particles using D-mannitol and hydroxypropyl methylcellulose within a selected, pragmatic particle design space. The results indicate a significant impact of hydroxypropyl methylcellulose in reducing liquid evaporation rates and particle drying kinetics. The morphology and internal structure of the formulated particles after drying are dominated by a crystalline core of D-mannitol, partially suppressed with increasing hydroxypropyl methylcellulose additions. The characterisation of formulated metformin hydrochloride particles with increasing polymer content demonstrated the importance of an early-stage quantitative assessment of formulation-related particle properties. A reliable and rational spray drying development approach needs to assess parameters of the compound system as well as of the process itself in order to define a well-controlled and robust operational design space. Chapter 7 presents strategies for process implementation to produce peptide-based formulations via spray drying, demonstrated using s-glucagon as a model peptide.
    The process implementation was supported by an initial characterisation of the lab-scale spray dryer, assessing a range of relevant independent process variables including drying temperature and feed rate. The platform response was captured with available and in-house developed Process Analytical Technology. A B-290 Mini-Spray Dryer was used to verify the development approach and to implement the pre-designed spray drying process. Information on the particle formation mechanism observed in SDD experiments was utilised to interpret the characteristics of the spray-dried material.
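
    The Chapter 5 broken-pellet detector can be sketched as a conventional feature-selection-plus-classifier pipeline over the per-pellet micro-XRT features. The selector, the base learner, and the choice of 20 features are illustrative assumptions; the thesis reports its >99.55% accuracy with its own pipeline.

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.pipeline import make_pipeline

    # X: (n_pellets, n_features) matrix of size/shape/porosity/surface/orientation
    # features extracted from the micro-XRT images; y: 1 = broken, 0 = intact.
    model = make_pipeline(
        SelectKBest(f_classif, k=20),  # keep the most discriminative features
        RandomForestClassifier(n_estimators=200, class_weight="balanced"),
    )
    # model.fit(X_train, y_train); model.predict(X_test)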

    Design and Development of Robotic Part Assembly System under Vision Guidance

    Robots are widely used for part assembly across manufacturing industries to attain high productivity through automation. The automated mechanical part assembly system contributes a major share of the production process, and an appropriate vision-guided robotic assembly system further minimizes lead time and improves end-product quality through suitable object detection methods and robot control strategies. This work develops a robotic part assembly system with the aid of an industrial vision system, in three phases. The first phase focuses on feature extraction and object detection techniques. A hybrid edge detection method is developed by combining fuzzy inference rules and the wavelet transform. The performance of this edge detector is quantitatively analysed and compared with widely used edge detectors such as Canny, Sobel, Prewitt, Roberts, Laplacian of Gaussian, and mathematical-morphology- and wavelet-transform-based detectors. A comparative study is performed to choose a suitable corner detection method; the corner detection techniques considered are curvature scale space, Wang-Brady, and the Harris method. The successful implementation of a vision-guided robotic system depends on the system configuration, such as eye-in-hand or eye-to-hand. In these configurations, the captured images of the parts may be corrupted by geometric transformations such as scaling, rotation, translation, and blurring due to camera or robot motion. To address this issue, an image reconstruction method is proposed using orthogonal Zernike moment invariants; the method selects the moment order used to reconstruct the affected image, which makes object detection more efficient. In the second phase, the proposed system is developed by integrating the vision system and the robot system, and the proposed feature extraction and object detection methods are tested and found effective for the purpose. In the third phase, robot navigation based on visual feedback is proposed. The control scheme uses general moment invariants, Legendre moments, and Zernike moment invariants. The best combination of visual features is selected by measuring the Hamming distance between all possible combinations of visual features, which makes the image-based visual servoing control efficient. An indirect method is employed to determine the Legendre and Zernike moment invariants, as these moments are robust to noise. The control laws based on these three global image features perform efficiently in navigating the robot in the desired environment.
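
    The quantitative edge-detector comparison in the first phase can be illustrated with detectors available in OpenCV; the hybrid fuzzy/wavelet detector itself is not reproduced here, and scoring against a ground-truth edge map with a pixelwise F1 score is an illustrative choice rather than the metric used in the thesis.

    import cv2
    import numpy as np

    def f1(pred, truth):
        tp = np.logical_and(pred, truth).sum()
        precision = tp / max(pred.sum(), 1)
        recall = tp / max(truth.sum(), 1)
        return 2 * precision * recall / max(precision + recall, 1e-9)

    def compare_edge_detectors(gray, truth):
        """gray: uint8 image; truth: boolean ground-truth edge map."""
        sobel = (np.abs(cv2.Sobel(gray, cv2.CV_64F, 1, 0)) +
                 np.abs(cv2.Sobel(gray, cv2.CV_64F, 0, 1)))
        laplacian = np.abs(cv2.Laplacian(gray, cv2.CV_64F))
        edges = {
            "canny": cv2.Canny(gray, 100, 200) > 0,
            "sobel": sobel > sobel.mean() + 2 * sobel.std(),
            "laplacian": laplacian > laplacian.mean() + 2 * laplacian.std(),
        }
        return {name: f1(e, truth) for name, e in edges.items()}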

    Feature extraction using MPEG-CDVS and Deep Learning with application to robotic navigation and image classification

    The main contributions of this thesis are the evaluation of the MPEG Compact Descriptor for Visual Search in the context of indoor robotic navigation and the introduction of a new method for training Convolutional Neural Networks, with applications to object classification. The choice of image descriptor in a visual navigation system is not straightforward. Visual descriptors must be distinctive enough to allow for correct localisation while still offering low matching complexity and short descriptor size for real-time applications. The MPEG Compact Descriptor for Visual Search is a low-complexity image descriptor that offers several levels of compromise between descriptor distinctiveness and size. In this work, we describe how these trade-offs can be used for efficient loop detection in a typical indoor environment. We first describe a probabilistic approach to loop detection based on the standard's suggested similarity metric. We then evaluate the performance of the CDVS compression modes in terms of matching speed, feature extraction, and storage requirements, and compare them with the state-of-the-art SIFT descriptor for five different types of indoor floors. In the second part of this thesis we focus on the new paradigm in machine learning and computer vision called Deep Learning. Under this paradigm, visual features are no longer extracted using fine-grained, highly engineered feature extractors, but rather using a Convolutional Neural Network (CNN) that extracts hierarchical features learned directly from data, at the cost of long training periods. In this context, we propose a method for speeding up the training of Convolutional Neural Networks by exploiting the spatial scaling property of convolutions. This is done by first training a CNN with kernels of smaller resolution for a few epochs, followed by properly rescaling its kernels to the target's original dimensions and continuing training at full resolution. We show that the overall training time of a target CNN architecture can be reduced by exploiting the spatial scaling property of convolutions during the early stages of learning. Moreover, by rescaling the kernels at different epochs, we identify a trade-off between total training time and maximum obtainable accuracy. Finally, we propose a method for choosing when to rescale kernels and evaluate our approach on recent architectures, showing savings in training time of nearly 20% while test set accuracy is preserved.
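
    The kernel-rescaling step can be sketched as follows: take a convolution layer trained at a smaller kernel resolution, upsample its weights to the target kernel size, and resume training at full resolution. Bilinear interpolation of the weight tensor is an illustrative choice here; the thesis' actual rescaling scheme may differ.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def rescale_conv(small: nn.Conv2d, target_ksize: int) -> nn.Conv2d:
        """Return a Conv2d whose kernels are `small`'s upsampled to target_ksize."""
        big = nn.Conv2d(small.in_channels, small.out_channels,
                        kernel_size=target_ksize, padding=target_ksize // 2)
        with torch.no_grad():
            # Weights have shape (out_ch, in_ch, kH, kW); interpolate kernel dims.
            big.weight.copy_(F.interpolate(small.weight,
                                           size=(target_ksize, target_ksize),
                                           mode="bilinear", align_corners=False))
            if small.bias is not None:
                big.bias.copy_(small.bias)
        return big

    # e.g. train with 3x3 kernels for a few epochs, then switch to 7x7 and resume:
    conv7 = rescale_conv(nn.Conv2d(3, 16, kernel_size=3, padding=1), 7)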