67 research outputs found

    DBC based Face Recognition using DWT

    Applications using face biometrics have proved their reliability over the last decade. In this paper, we propose a DBC based Face Recognition using DWT (DBC-FR) model. The PolyU Near Infra-Red (NIR) database images are scanned and cropped to retain only the face part in pre-processing. The face part is resized to 100*100 and DWT is applied to derive the LL, LH, HL and HH subbands. The LL subband of size 50*50 is converted into 100 cells, each of dimension 5*5. The Directional Binary Code (DBC) is applied to each 5*5 cell to derive 100 features. The Euclidean distance measure is used to compare the features of the test image and the database images. The proposed algorithm renders a better percentage recognition rate than the existing algorithm. Comment: 15 pages, 9 figures, 4 tables
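    The pipeline described above (Haar DWT → LL subband → 5*5 cells → DBC → Euclidean matching) can be sketched as follows. This is a simplified illustration, not the paper's exact implementation: the DBC here uses a single diagonal direction and summarises each cell by the fraction of positive bits, and the 100*100 resize is assumed to have been done already.

```python
import numpy as np

def haar_dwt_ll(img):
    # One-level 2-D Haar DWT, keeping only the LL (approximation) subband:
    # each 2x2 block is reduced to (a + b + c + d) / 2.
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return (a + b + c + d) / 2.0

def dbc_feature(cell, dx=1, dy=1):
    # Simplified Directional Binary Code along one diagonal direction:
    # compare each pixel with its (dy, dx) neighbour, threshold to a bit,
    # and summarise the cell by the fraction of positive bits.
    ref = cell[:-dy, :-dx]
    nbr = cell[dy:, dx:]
    bits = (ref - nbr >= 0).astype(float)
    return bits.mean()

def dbc_fr_features(face_100x100):
    ll = haar_dwt_ll(face_100x100)            # 50x50 LL subband
    feats = []
    for i in range(0, 50, 5):                  # 10x10 grid of 5x5 cells
        for j in range(0, 50, 5):
            feats.append(dbc_feature(ll[i:i+5, j:j+5]))
    return np.array(feats)                     # 100 features per face

def match(test_feats, db_feats):
    # Euclidean distance between feature vectors; smaller = better match.
    return np.linalg.norm(test_feats - db_feats)
```

    A test face is then assigned to the database identity whose feature vector minimises `match`.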

    Image source camera attribution

    Advisor: Anderson de Rezende Rocha. Master's dissertation (mestrado), Universidade Estadual de Campinas, Instituto de Computação.
    Abstract: Verifying the integrity and authenticity of digital images is paramount when they may be presented as evidence in a court of law. Just as ballistics tests match a gun to its bullets, we can identify the digital camera that acquired an image under investigation. In this work, we discuss approaches for identifying whether or not a given image under investigation was captured by a specific digital camera. We carried out the research from two vantage points: (1) verification, in which we are interested in verifying whether or not a given camera captured an image under investigation; and (2) recognition, in which we want to verify whether an image was captured by some camera (if any) from a pool of devices and, if so, to point out the specific device that performed the capture. We performed this investigation considering an open-set scenario, under which we cannot rely on the assumption of full access to all of the investigated devices. We also address the device linking problem, in which we aim to verify whether an image pair was generated by the same camera, without any information about the source of the images; this can be useful for grouping sets of images by their source when nothing is known about the possible devices of origin. Our approaches reported good results, proving capable of identifying the specific device that captured a given image, including its model, brand, and even serial number. Degree: Mestre em Ciência da Computação.

    Wireless device identification from a phase noise perspective

    As wireless devices become increasingly pervasive and essential, they are becoming both a target for attacks and the very weapon with which such attacks can be carried out. Wireless networks face new kinds of intrusion that had not been considered previously, because these are linked to the open nature of wireless networks. In particular, device identity management and intrusion detection are two of the most significant challenges in any network security solution, and they are paramount for wireless local area networks (WLANs) because of the inherent non-exclusivity of the transmission medium. The physical layer of 802.11-based wireless communication offers no security guarantee, because any transmitted electromagnetic signal can be monitored, captured, and analyzed by any sufficiently motivated and equipped adversary within the 802.11 device's transmission range. What is required is a form of identification that is non-malleable (cannot be spoofed easily). For this reason we focus on physical characteristics of the network interface card (NIC) to distinguish between different wireless users, because they can provide an additional layer of security. The unique properties of the wireless medium are extremely useful for obtaining an additional set of information that can be used to extend and enhance traditional security mechanisms. This approach is commonly referred to as radio frequency fingerprinting (RFF), i.e., determining specific characteristics (a fingerprint) of a network device component. More precisely, our main goal is to prove the feasibility of exploiting phase noise in oscillators for fingerprint design and to overcome existing limitations of conventional approaches. The intuition behind our design is that the autonomous nature of oscillators among noisy physical systems makes them unique in their response to perturbations, and none of the previous work has tried to take advantage of this.
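    The feasibility argument can be illustrated with a toy simulation: two oscillators that differ only in the strength of their random-walk phase noise yield measurably different jitter statistics, which is exactly the device-specific trait a phase-noise fingerprint would exploit. All parameters (sampling rate, carrier frequency, noise model) are illustrative assumptions, not the thesis's measurement setup.

```python
import numpy as np

FS, F0, N = 1000.0, 100.0, 4096   # sample rate, carrier, record length

def oscillator(q, seed):
    # Ideal carrier plus a device-specific random-walk phase noise of
    # strength q (radians per sample) -- the trait we try to fingerprint.
    rng = np.random.default_rng(seed)
    t = np.arange(N) / FS
    phase_noise = np.cumsum(rng.normal(0.0, q, N))
    return np.cos(2 * np.pi * F0 * t + phase_noise)

def analytic(x):
    # FFT-based Hilbert transform (assumes even-length input).
    X = np.fft.fft(x)
    h = np.zeros(len(x))
    h[0] = h[len(x) // 2] = 1.0
    h[1:len(x) // 2] = 2.0
    return np.fft.ifft(X * h)

def fingerprint(x):
    # Instantaneous phase -> remove the nominal carrier -> summarise the
    # residual jitter by the spread of its per-sample increments.
    phase = np.unwrap(np.angle(analytic(x)))
    residual = phase - 2 * np.pi * F0 * np.arange(N) / FS
    core = np.diff(residual)[N // 10 : -N // 10]   # trim edge effects
    return core.std()
```

    A "quiet" device (small q) and a "noisy" one (large q) produce clearly separated fingerprint values, so a simple threshold or nearest-neighbour rule can tell them apart.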

    Automatic texture classification in manufactured paper


    Multi evidence fusion scheme for content-based image retrieval by clustering localised colour and texture features

    Content-Based Image Retrieval (CBIR) is an automatic process of retrieving images according to their visual content. Research in this field mainly follows two directions. The first is concerned with effectiveness in describing the visual content of images (i.e. features) by a technique that allows similar and dissimilar images to be discerned, and ultimately the images most relevant to the query image to be retrieved. The second direction focuses on retrieval efficiency by deploying efficient structures to organise images by their features in the database, narrowing down the search space. The emphasis of this research is mainly on effectiveness rather than efficiency. There are two types of visual content features. The global feature represents the entire image by a single vector; retrieval using the global feature is therefore more efficient but often less accurate. On the other hand, the local feature represents the image by a set of vectors capturing localised visual variations in different parts of an image, promising better results, particularly for images with complicated scenes. The first main purpose of this thesis is to study different types of local features. We explore a range of local features from both the frequency and spatial domains. Because of the large number of local features generated from an image, clustering methods are used to quantise and summarise the feature vectors into segments, from which a representation of the visual content of the entire image is derived. Since each clustering method works differently and requires different input parameters (e.g. the number of clusters), preparation of input data (i.e. normalised or not) and choice of similarity measures, varied performance across clustering methods in segmenting the local features is anticipated.
    We therefore also study and analyse one commonly used clustering algorithm from each of the four main categories of clustering methods, i.e. K-means (partition-based), EM/GMM (model-based), Normalised Laplacian Spectral (graph-based), and Mean Shift (density-based). These algorithms were investigated in two scenarios, where the number of clusters is either fixed or adaptively determined. Performance of the clustering algorithms in terms of image classification and retrieval is evaluated using three publicly available image databases. The evaluations revealed that a local DCT colour-texture feature was overall the best, owing to its robust integration of colour and texture information. In addition, our investigation into the behaviour of different clustering algorithms showed that each algorithm had its own strengths and limitations in segmenting local features, which affect the performance of image retrieval due to variations in visual colour and texture of the images. No algorithm outperforms the others using either an adaptively determined or a large fixed number of clusters. The second focus of this research is to investigate how to combine the positive effects of various local features obtained from different clustering algorithms in a fusion scheme, aiming to bring about improved retrieval results over those obtained with a single clustering algorithm. The proposed fusion scheme effectively integrates the information from different sources, increasing the overall accuracy of retrieval.
    The proposed multi-evidence fusion scheme treats as evidence the image-retrieval scores obtained by normalising the distances produced by applying different clustering algorithms to different types of local features, and was presented in three forms: 1) evidence fusion using fixed weights (MEFS), where the weights were determined empirically and fixed a priori; 2) evidence fusion based on adaptive weights (AMEFS), where the fusion weights were adaptively determined using linear regression; 3) evidence fusion using a linear combination (CombSUM) without weighting the evidence. Overall, all three versions of the multi-evidence fusion scheme proved able to enhance the accuracy of image retrieval by increasing the number of relevant images in the ranked list. However, the improvement varied across different feature-clustering combinations (i.e. image representations) and the image databases used for the evaluation. This thesis presents an automatic method of image retrieval that can deal with natural-world scenes by applying different clustering algorithms to different local features. The method achieves good accuracies of 85% at Top 5 and 80% at Top 10 over the WANG database, which compare favourably with a number of other well-known solutions in the literature. At the same time, the knowledge gained from this research, such as the effects of different types of local features and clustering methods on the retrieval results, enriches the understanding of the field and can be beneficial to the CBIR community.
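    The fusion variants share one skeleton: normalise each evidence source's distances into similarity scores, then combine the scores per image. A minimal sketch with hypothetical inputs; unit weights reproduce the unweighted CombSUM variant and fixed non-uniform weights the MEFS variant (the adaptive AMEFS weights, fitted by linear regression, are omitted here).

```python
import numpy as np

def distances_to_scores(d):
    # Min-max normalise distances into similarity scores in [0, 1]
    # (smallest distance -> score 1, largest -> score 0).
    d = np.asarray(d, dtype=float)
    span = d.max() - d.min()
    return 1.0 - (d - d.min()) / span if span > 0 else np.ones_like(d)

def fuse(evidence, weights=None):
    # evidence: one distance array per feature/clustering combination,
    # each giving the query-to-database distances for all images.
    scores = np.array([distances_to_scores(d) for d in evidence])
    if weights is None:
        weights = np.ones(len(evidence))   # unweighted CombSUM
    fused = np.average(scores, axis=0, weights=weights)
    return np.argsort(-fused)              # ranked list, best match first
```

    For example, with two evidence sources that both rank image 2 closest, `fuse` returns image 2 at the top of the ranked list regardless of the weighting.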

    Research on digital image watermark encryption based on hyperchaos

    The digital watermarking technique embeds meaningful information, as one or more hidden watermark images, into a host image known as the secret carrier. It is difficult for a hacker to extract or remove a hidden watermark from an image, and especially to crack the so-called digital watermark. Combining the digital watermarking technique with traditional image encryption techniques can greatly improve anti-hacking capability, which makes it a good method for preserving the integrity of the original image. The research work contained in this thesis includes: (1) a literature review, which found the hyperchaotic watermarking technique relatively more advantageous, so that it became the main subject of this programme; (2) the theoretical foundations of watermarking technologies, including the human visual system (HVS), colour space transforms, the discrete wavelet transform (DWT), the main watermark embedding algorithms, and the mainstream methods for improving watermark robustness and for evaluating watermark embedding performance; (3) a devised hyperchaotic scrambling technique, applied to colour image watermarks, which helps to improve image encryption and anti-cracking capabilities. The experiments in this research demonstrate the robustness and other advantages of the proposed technique. This thesis focuses on combining chaotic scrambling and wavelet watermark embedding to achieve a hyperchaotic digital watermark for encrypting digital products, with the human visual system (HVS) and other factors taken into account. This research is of significant importance and has industrial application value.
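    A chaos-based scrambling step of the kind combined here with DWT embedding can be sketched with a one-dimensional logistic map standing in for the hyperchaotic system (the actual work uses a higher-dimensional hyperchaotic map); the initial value `x0` acts as the secret key, and the scrambled watermark would then be embedded into the carrier's wavelet subbands.

```python
import numpy as np

def chaotic_permutation(n, x0=0.3141, mu=3.99):
    # Logistic map x_{k+1} = mu * x_k * (1 - x_k) in its chaotic regime;
    # sorting the trajectory yields a key-dependent permutation of 0..n-1.
    x = np.empty(n)
    for k in range(n):
        x0 = mu * x0 * (1.0 - x0)
        x[k] = x0
    return np.argsort(x)

def scramble(watermark, key=0.3141):
    # Permute the watermark pixels with the key-derived permutation.
    flat = watermark.ravel()
    p = chaotic_permutation(flat.size, x0=key)
    return flat[p].reshape(watermark.shape), p

def unscramble(scrambled, p):
    # Invert the permutation to recover the original watermark.
    flat = np.empty_like(scrambled.ravel())
    flat[p] = scrambled.ravel()
    return flat.reshape(scrambled.shape)
```

    Without the key (and hence the permutation), an extracted watermark remains visually meaningless, which is what gives the scheme its anti-cracking property.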

    Structured Dictionary Learning and its applications in Neural Recording

    Widely utilized in the field of neuroscience, implantable neural recording devices can capture neuron activities at an acquisition rate on the order of megabytes per second. In order to transmit neural signals efficiently through wireless channels, these devices require compression methods that reduce power consumption. Although recent Compressed Sensing (CS) approaches have successfully demonstrated their power, their full potential is yet to be explored, particularly towards a more efficient representation of the neural signals. As a promising solution, sparse representation not only provides better signal compression for bandwidth/storage efficiency, but also leads to faster processing algorithms as well as more effective signal separation for classification purposes. However, current sparsity-based approaches for neural recording are limited by several critical drawbacks: (i) the lack of an efficient data-driven representation to fully capture the characteristics of specific neural signals; (ii) most existing methods do not fully exploit the prior knowledge of neural signals (e.g., labels), even though such information is often known; and (iii) the limited capability to encode discriminative information into the representation to promote classification. Using neural recording as a case study, this dissertation presents new theoretical ideas and mathematical frameworks on structured dictionary learning, with applications in compression and classification. Starting with a single-task setup, we provide theoretical proofs of the benefits of using structured sparsity in dictionary learning. We then provide several novel models for the representation of a single measurement, as well as multiple measurements where signals exhibit both within-class similarity and within-class difference.
    Under the assumption that the label information of the neural signal is known, the proposed models minimize the data fidelity term together with structured sparsity terms to drive a more discriminative representation. We demonstrate that this is particularly essential in neural recording, since it can further improve the compression ratio and classification accuracy, and helps deal with non-ideal scenarios such as co-occurrences of neuron firings. Fast and efficient algorithms based on Bayesian inference and the alternating direction method are proposed. Extensive experiments are conducted on neural recording applications as well as other classification tasks, such as image classification.
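    The sparse-coding step underlying such pipelines can be sketched with plain ISTA for an l1-penalised least-squares problem, assuming the dictionary D is already learned; the structured-sparsity and label-driven terms of the dissertation's actual models are omitted in this sketch.

```python
import numpy as np

def ista_sparse_code(D, y, lam=0.1, n_iter=200):
    # ISTA for min_x 0.5*||y - D x||^2 + lam*||x||_1: the sparse code
    # one would transmit when compressing a signal segment y against a
    # learned dictionary D.
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = x - (D.T @ (D @ x - y)) / L    # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x
```

    Because the recovered code is sparse, only its few non-zero coefficients (and their indices) need to be stored or transmitted, which is the source of the compression gain.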