
    Digital forensic techniques for the reverse engineering of image acquisition chains

    In recent years a number of new methods have been developed to detect image forgery. Most forensic techniques use footprints left on images to infer their history. Images, however, may go through a series of processing and modification steps during their lifetime, and tampering is therefore difficult to detect because the footprints can be distorted or removed over a complex chain of operations. In this research we propose digital forensic techniques that allow us to reverse engineer and determine the history of images that have gone through chains of acquisition and reproduction. This thesis presents two approaches to the problem. In the first part we propose a novel theoretical framework for the reverse engineering of signal acquisition chains. Based on a simplified chain model, we describe how signals evolve at different stages of the chain using the theory of sampling signals with finite rate of innovation. Under particular conditions, our technique allows us to detect whether a given signal has been reacquired through the chain. It also makes it possible to estimate important parameters of the chain from the acquisition-reconstruction artefacts left on the signal. The second part of the thesis presents our new algorithm for image recapture detection based on edge blurriness. Two overcomplete dictionaries are trained using the K-SVD approach to learn distinctive blurring patterns from sets of single-captured and recaptured images. An SVM classifier is then built using the dictionary approximation errors and the mean edge spread width from the training images. The algorithm, which requires no user intervention, was tested on a database that included more than 2,500 high-quality recaptured images. Our results show that the method achieves a performance rate exceeding 99% for recaptured images and 94% for single-captured images.
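As a rough illustration of the recapture detector described above (a hypothetical sketch, not the thesis implementation): it substitutes scikit-learn's MiniBatchDictionaryLearning for K-SVD, uses synthetic 1-D edge profiles in place of real image patches, omits the edge-spread-width feature, and all names and parameters are invented for the example.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def make_patches(n, sharpness):
    # Synthetic 1-D "edge profile" patches: a step edge blurred by a
    # class-dependent amount stands in for real edge blurriness.
    x = np.linspace(-1, 1, 16)
    edges = 1 / (1 + np.exp(-sharpness * (x + rng.normal(0, 0.1, (n, 1)))))
    return edges + rng.normal(0, 0.02, (n, 16))

single = make_patches(200, sharpness=20.0)   # sharp edges: single capture
recap = make_patches(200, sharpness=4.0)     # smeared edges: recaptured

# One overcomplete dictionary per class (MiniBatchDictionaryLearning
# standing in for K-SVD), with sparse coding by orthogonal matching pursuit.
dict_single = MiniBatchDictionaryLearning(
    n_components=8, transform_algorithm="omp",
    transform_n_nonzero_coefs=3, random_state=0).fit(single)
dict_recap = MiniBatchDictionaryLearning(
    n_components=8, transform_algorithm="omp",
    transform_n_nonzero_coefs=3, random_state=0).fit(recap)

def features(X):
    # Feature vector = approximation error of X under each dictionary.
    e1 = np.linalg.norm(X - dict_single.transform(X) @ dict_single.components_, axis=1)
    e2 = np.linalg.norm(X - dict_recap.transform(X) @ dict_recap.components_, axis=1)
    return np.column_stack([e1, e2])

X = np.vstack([features(single), features(recap)])
y = np.array([0] * 200 + [1] * 200)          # 0 = single capture, 1 = recaptured
clf = SVC(kernel="rbf").fit(X, y)
acc = clf.score(X, y)
```

The class whose dictionary reconstructs a patch with the smaller error tends to be the class the patch came from, which is why the pair of approximation errors is already a strongly discriminative feature.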

    Analysis of intrinsic and extrinsic properties of biometric samples for presentation attack detection

    Advisors: Anderson de Rezende Rocha, Hélio Pedrini. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação.
Recent advances in biometrics, information forensics, and security have improved the recognition effectiveness of biometric systems. However, an ever-growing challenge is the vulnerability of such systems to presentation attacks, in which impostor users create synthetic samples from the original biometric information of a legitimate user and present them to the acquisition sensor, seeking to authenticate themselves as legitimate users. Depending on the trait used by the biometric authentication, the attack types vary with the type of material used to build the synthetic samples. For instance, in facial biometric systems, an attempted attack is characterized by the material the impostor presents to the acquisition sensor, such as a photograph, a digital video, or a 3D mask carrying the facial information of a target user. In iris-based biometrics, presentation attacks can be accomplished with printed photographs or with contact lenses containing the iris patterns of a target user, or even synthetic texture patterns. In fingerprint biometric systems, impostor users can deceive the authentication process using replicas of the fingerprint patterns built with synthetic materials such as latex, play-doh, or silicone. This research aimed at developing presentation attack detection (PAD) solutions for facial, iris, and fingerprint biometric systems, considering the different attack types in each modality.
The lines of investigation presented in this thesis include representations based on the spatial, temporal, and spectral information of the noise signature; intrinsic properties of the biometric samples (e.g., albedo, reflectance, and depth maps); and supervised feature-learning techniques, evaluated under different testing scenarios including cross-sensor, intra-dataset, and inter-dataset settings. The main findings and contributions include: the creation of a large, publicly available benchmark containing approximately 17K videos of presentation attacks and bona fide presentations to a facial biometric system, whose collection was formally authorized by the Research Ethics Committee at Unicamp; the development of novel approaches for modeling and analyzing extrinsic properties of biometric samples, related to the artefacts added during the manufacturing of synthetic samples and their capture by the acquisition sensor, whose results were superior to several approaches published in the literature that rely on traditional image-analysis methods (e.g., texture analysis); the investigation of an approach based on the analysis of intrinsic properties of faces, estimated from the shading information present on their surface; and the investigation of different approaches based on convolutional neural networks for automatically learning representations related to our problem, whose results were superior or competitive to state-of-the-art methods for the biometric modalities considered in this thesis. The research also considered the design of efficient neural networks with shallow architectures, capable of learning characteristics related to our problem from the small datasets available for developing and evaluating PAD solutions. (Funding: CNPq 140069/2016-0; CAPES 142110/2017-5.)
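The noise-signature idea above can be sketched loosely as follows. This is an illustrative assumption, not the thesis's method: the residual is obtained with a simple median filter, the "spectral" representation is a radially band-averaged magnitude spectrum, and the frames and artefact are synthetic.

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(1)

def noise_signature(frame, size=3):
    # Residual left after removing scene content with a median filter;
    # attack artefacts (display banding, print patterns) tend to live here.
    return frame - median_filter(frame, size=size)

def spectral_feature(residual, n_bands=4):
    # Radially band-averaged log-magnitude spectrum of the residual:
    # a compact "spectral" view of the noise signature.
    spec = np.abs(np.fft.fftshift(np.fft.fft2(residual)))
    h, w = spec.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    edges = np.linspace(0.0, r.max() + 1e-9, n_bands + 1)
    return np.array([np.log1p(spec[(r >= lo) & (r < hi)].mean())
                     for lo, hi in zip(edges[:-1], edges[1:])])

# A bona fide frame vs. the same frame with a periodic, display-like artefact.
frame = rng.normal(0.5, 0.05, (64, 64))
attack = frame + 0.1 * np.sin(np.arange(64) * np.pi / 2)   # 4-pixel banding

f_real = spectral_feature(noise_signature(frame))
f_attack = spectral_feature(noise_signature(attack))
```

The periodic banding survives in the residual and concentrates energy in one frequency band, so the two feature vectors separate even though the underlying scene is identical.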

    Deliverable 1.1 review document on the management of marine areas with particular regard to concepts, objectives, frameworks and tools to implement, monitor, and evaluate spatially managed areas

    The main objectives of this document were to review the existing information on spatial management of marine areas, identifying the relevant policy objectives; to identify parameters linked to the success or failure of the various Spatially Managed marine Areas (SMAs) regimes; to report on methods and tools used in monitoring and evaluating the state of SMAs; and to identify gaps and weaknesses in the existing frameworks in relation to the implementation, monitoring, evaluation and management of SMAs. The document is naturally divided into two sections: Section 1 reviews the concepts, objectives, drivers, policy and management frameworks, and extraneous factors related to the design, implementation and evaluation of SMAs; Section 2 reviews the tools and methods used to monitor and evaluate seabed habitats and marine populations.

    Toward Cold Atom Guidance in a Hollow-Core Photonic Crystal Fibre Using a Blue Detuned Hollow Laser Beam

    This thesis describes advances and techniques toward the efficient coupling of cold 85Rb atoms into a low-loss hollow-core photonic crystal fibre using a blue-detuned, first-order hollow laser beam. In the proposed system, the low diffraction of the blue-detuned first-order hollow beam allows it to act as a repulsive-potential optical funnel that guides cold atoms, with the help of gravity, into the fibre's hollow core. Using a low-loss fibre rather than a capillary opens the possibility of guiding atoms along an arbitrary path and over laboratory-scale distances, which would enable several new applications in nanofabrication and optical metrology. To realize this objective, a Magneto-Optical Trap of 85Rb was built from scratch and, using advanced polarization-gradient laser-cooling techniques, regularly produced an optical molasses of 10^7 atoms at temperatures of 9 µK.
These cold atoms were guided over 23 cm in a collimated blue-detuned hollow-beam tunnel and through a focused hollow beam mimicking as closely as possible the coupling conditions of a hollow-core optical fibre, while allowing precise observation of the coupling dynamics. Three classes of atoms were observed: lost, trapped and guided. The dynamics of the system, as well as the optimal coupling conditions, were identified with a numerical physical model developed for this purpose. A novel approach to modelling cold-atom dynamics in an optical funnel was developed during the course of this thesis. The new model not only reproduced the dynamics of the atoms observed in the experiment but was also applied to simulating cold atoms in the Magneto-Optical Trap and to predicting the final temperatures attained under various experimental conditions. This was achieved through 3D modelling of the conservative and non-conservative components of the optical forces acting on the atoms, together with the implementation of the known heating mechanisms: light scattering and momentum diffusion. The model helped identify the best coupling conditions of this system, corroborated by experiment, and showed that there exists an optimal light potential, for a given coupling distance, that must not be exceeded. A single-mode, high-purity LG01 beam was generated with over 50% conversion efficiency from a Gaussian mode using a complex-valued computer-generated hologram (CGH) rendered on a phase-only liquid-crystal spatial light modulator (SLM). A system-wide 35% conversion efficiency was achieved from the laser output to the vacuum-chamber input. Several micro-structured polymer optical fibres and silica hollow-core band-gap photonic crystal fibres with Kagome claddings were evaluated. A single-defect, large-hollow-core (50 µm diameter) Kagome-cladding fibre was identified as a suitable solution for guiding cold 85Rb atoms.
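A toy caricature of the optical-funnel simulation can convey the idea. Everything here is invented for illustration (dimensionless units, a 2-D world, a spring-wall stand-in for the repulsive beam potential); it is not the thesis's 3D model with scattering and momentum diffusion.

```python
import numpy as np

rng = np.random.default_rng(2)

def funnel_radius(y, r_core=0.05, slope=0.45):
    # Repulsive-beam wall radius: r_core at the fibre entrance (y = 0),
    # widening linearly above it; constant inside the "fibre" (y < 0).
    return r_core + slope * np.clip(y, 0.0, None)

def simulate(n=500, k=200.0, g=1.0, dt=1e-3, steps=4000):
    # Explicit-Euler integration of atoms falling under gravity; the hollow
    # beam is modelled as a stiff spring wall outside the funnel radius.
    x = rng.uniform(-0.4, 0.4, n)       # transverse positions
    y = np.full(n, 1.0)                 # heights above the fibre entrance
    vx = rng.normal(0, 0.05, n)
    vy = np.zeros(n)
    for _ in range(steps):
        r = funnel_radius(y)
        outside = np.abs(x) > r
        fx = np.where(outside, -k * (np.abs(x) - r) * np.sign(x), 0.0)
        vx += fx * dt
        vy -= g * dt
        x += vx * dt
        y += vy * dt
    # "Guided" atoms end up inside the core radius after passing y = 0.
    return float(np.mean(np.abs(x[y <= 0.0]) < funnel_radius(0.0)))

guided_fraction = simulate()            # with the funnel beam on
```

Comparing `simulate()` against `simulate(k=0.0)` (beam off) shows the funnel concentrating atoms into the core, the qualitative effect the full model quantifies.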

    Ground plane rectification from crowd motion

    This work focuses on the estimation of the ground-plane parameters needed to rectify and reconstruct crowded pedestrian scenes projected into 2D by an uncalibrated, monocular camera. Distortions introduced during the imaging process affect metrics such as size, velocity and distance, which are often useful when examining the behaviour of agents within the scene. A framework is presented to reverse perspective distortion by calculating the ground plane upon which motion within the scene occurs. Existing methods use geometric features, such as parallel lines, or objects of known size, such as the height of individuals in the scene; however, these features are often unavailable in densely crowded scenes due to occlusion. By measuring only the imaged velocity of tracked features, assumed to be constant in the world, the issue of occlusion can be largely overcome. A novel framework is presented for the estimation of the ground plane and camera focal length in scenes modelled with a single plane. The above assumption is validated against simulations, and the framework outperforms an existing technique [12] on real-world benchmark data. The framework is then extended to a two-plane world, introducing the additional challenge of determining the respective topology of the planes. Several methods for locating the intersection line between the two planes are evaluated on simulations, investigating the effect of variation in velocity and in the height of tracked features on reconstruction accuracy; the results indicate that the technique is suitable for real-world conditions. Finally, the framework is generalised, removing the need for prior knowledge of the number of planes: the problem is reformulated as a linear series of planes, each connected by a single hinge, allowing a single rotation to be calculated for each new plane. Again, results are shown against simulations on scenes of varying complexity, as well as real-world datasets, validating the success of the method given realistic variations in velocity.
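The constant-velocity idea can be sketched in one dimension. This is a hypothetical toy, not the thesis's framework: a known focal length, a 1-D ground line instead of a plane, and a brute-force grid search over the tilt that makes the back-projected speeds most uniform.

```python
import numpy as np

f = 500.0        # focal length in pixels (assumed known in this sketch)
h = 2.0          # camera height above the plane (arbitrary units)
z0 = 2.0         # depth of the plane's near edge (arbitrary units)

def project(s, theta):
    # 1-D pinhole projection of a point at arc-length s along a ground line
    # tilted by theta, viewed by a camera at height h looking along +z.
    y = -h + s * np.sin(theta)
    z = z0 + s * np.cos(theta)
    return f * y / z

def world_speeds(v0, v1, dt, theta):
    # Back-project two image observations onto a candidate plane of tilt
    # theta and return the implied world speed of each tracked feature.
    def unproject(v):
        # Invert v = f*(-h + s*sin(theta)) / (z0 + s*cos(theta)) for s.
        return (z0 * v + f * h) / (f * np.sin(theta) - v * np.cos(theta))
    return (unproject(v1) - unproject(v0)) / dt

rng = np.random.default_rng(3)
theta_true, dt = 0.15, 0.1
s0 = rng.uniform(1.0, 8.0, 40)          # tracked features along the plane
v0 = project(s0, theta_true)            # every feature moves at unit speed:
v1 = project(s0 + dt, theta_true)       # the constant-velocity assumption

# Grid search: the correct tilt makes the recovered speeds most uniform.
thetas = np.linspace(0.01, 0.5, 200)
spread = [np.std(world_speeds(v0, v1, dt, t)) for t in thetas]
theta_est = float(thetas[int(np.argmin(spread))])
```

With the wrong tilt, features at different depths back-project to different speeds; only near the true tilt does the speed spread collapse, which is the cue the velocity-based framework exploits.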

    Exploiting Spatio-Temporal Coherence for Video Object Detection in Robotics

    This paper proposes a method to enhance video object detection in indoor environments for robotics. Concretely, it exploits knowledge about the camera motion between frames to propagate previously detected objects to successive frames. The proposal is rooted in the concepts of planar homography, used to propose regions of interest in which to find objects, and recursive Bayesian filtering, used to integrate observations over time. The proposal is evaluated on six virtual indoor environments, covering the detection of nine object classes over a total of ∼7k frames. Results show that our proposal improves the recall and the F1-score by factors of 1.41 and 1.27, respectively, and achieves a significant reduction of the object-categorization entropy (58.8%) when compared to a two-stage video object detection method used as a baseline, at the cost of small time overheads (120 ms) and a small precision loss (0.92).
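The two ingredients named in the abstract can be sketched minimally. The homography, bounding box, and detector likelihoods below are invented numbers for illustration; this is not the paper's pipeline, only the geometry and the Bayes update it rests on.

```python
import numpy as np

def warp_points(H, pts):
    # Apply a 3x3 planar homography to Nx2 points (e.g. box corners).
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]

def bayes_update(prior, likelihood):
    # Recursive Bayesian fusion of class probabilities for one object:
    # posterior ∝ prior * current frame's detector likelihood.
    post = prior * likelihood
    return post / post.sum()

# Known camera motion between frames, expressed as a homography of the
# support plane (here a pure image-space translation, for simplicity).
H = np.array([[1.0, 0.0, 12.0],
              [0.0, 1.0, -5.0],
              [0.0, 0.0, 1.0]])

box = np.array([[100.0, 80.0], [160.0, 80.0],
                [160.0, 140.0], [100.0, 140.0]])
roi = warp_points(H, box)      # proposed region of interest in the next frame

# Noisy per-frame detector outputs over 3 classes, integrated over time.
p = np.full(3, 1 / 3)                        # uniform prior
for like in ([0.5, 0.3, 0.2], [0.6, 0.25, 0.15]):
    p = bayes_update(p, np.array(like))
```

After two weakly confident observations the posterior concentrates on the consistent class, which is the mechanism behind the reported drop in categorization entropy.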

    Automatic signature verification system

    Philosophiae Doctor - PhD. In this thesis, we explore dynamic signature verification systems. Unlike other signature models, we use genuine signatures in this project, as they are more appropriate for real-world applications. Signature verification systems are typical examples of biometric devices that use physical and behavioural characteristics to verify that a person really is who he or she claims to be. Other popular biometric examples include fingerprint scanners and hand-geometry devices. Handwritten signatures have long been used to endorse financial transactions and legal contracts, although little or no verification of signatures is usually done. This sets them apart from other biometrics, as they are a well-accepted method of authentication. Until recently, only hidden Markov models were used for model construction. Ongoing research on signature verification has revealed that more accurate results can be achieved by combining the results of multiple models. We therefore propose to use combinations of multiple univariate models instead of the single multivariate models currently adopted by many systems. Beyond this, the proposed system offers an attractive way of making financial transactions more secure and of authenticating electronic documents, as it can be easily integrated into existing transaction procedures and electronic communications.
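The combination of univariate models can be sketched as score-level fusion. The features (pen speed, pen pressure), their distributions, and the Gaussian models below are hypothetical stand-ins; the thesis's per-feature models (e.g. HMM-based) would slot into the same structure.

```python
import numpy as np

rng = np.random.default_rng(5)

class UnivariateGaussian:
    # A univariate model of one dynamic feature (e.g. mean pen speed);
    # a stand-in for the per-feature models whose scores are combined.
    def fit(self, x):
        self.mu, self.sigma = x.mean(), x.std() + 1e-9
        return self

    def score(self, x):
        # Average Gaussian log-likelihood of samples under this model.
        z = (x - self.mu) / self.sigma
        return float(np.mean(-0.5 * z**2 - np.log(self.sigma)
                             - 0.5 * np.log(2 * np.pi)))

def combined_score(models, features):
    # Score fusion: the mean of the univariate model scores replaces a
    # single joint multivariate model.
    return float(np.mean([m.score(f) for m, f in zip(models, features)]))

# Enrolment: genuine signature dynamics for two features (made-up units).
speed = rng.normal(2.0, 0.3, 50)       # pen speed
pressure = rng.normal(0.7, 0.1, 50)    # pen pressure
models = [UnivariateGaussian().fit(speed), UnivariateGaussian().fit(pressure)]

genuine_try = [rng.normal(2.0, 0.3, 20), rng.normal(0.7, 0.1, 20)]
forged_try = [rng.normal(3.0, 0.5, 20), rng.normal(0.4, 0.2, 20)]

s_gen = combined_score(models, genuine_try)
s_forg = combined_score(models, forged_try)
```

A verification decision then thresholds the fused score; each feature contributes independently, so a forgery that matches one dynamic but not another is still penalized.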