

    Old document image analysis: a texture approach

    In this article, we propose a method for characterizing images of old documents based on a texture approach. The characterization is carried out with the help of a multi-resolution study of the textures contained in the document images. By extracting five features linked to the frequencies and to the orientations in the different areas of a page, it is possible to extract and compare elements of high semantic level without making any hypothesis about the physical or logical structure of the analysed documents. Experiments demonstrate the feasibility of building navigation and indexing assistance tools; through these experiments we highlight the relevance of these features and the advances they represent in terms of characterizing the content of a highly heterogeneous corpus.
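
    The abstract does not spell out the five frequency and orientation features, so the sketch below only illustrates the general idea in Python with a multi-resolution Gabor filter bank; the scales, orientations and mean-energy statistic are illustrative assumptions, not the authors' actual features.

        # Minimal sketch: multi-resolution frequency/orientation texture features
        # for a grayscale page image. Filter-bank parameters are assumptions.
        import numpy as np
        from skimage.filters import gabor
        from skimage.transform import pyramid_gaussian

        def texture_features(page, frequencies=(0.1, 0.3),
                             thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
            """Mean Gabor-response energy per (resolution level, frequency, orientation)."""
            features = []
            # Multi-resolution: analyse the page at successively coarser scales.
            for level in pyramid_gaussian(page, max_layer=2):
                for freq in frequencies:
                    for theta in thetas:
                        real, imag = gabor(level, frequency=freq, theta=theta)
                        features.append(np.sqrt(real ** 2 + imag ** 2).mean())
            return np.asarray(features)

    In practice such statistics would be computed per page region rather than over the whole page, so that regions with similar descriptors can be compared across a heterogeneous corpus.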

    Correspondence of three-dimensional objects

    First, many thanks go to Prof. Hans du Buf, for his supervision based on his experience, for providing a stimulating and cheerful research environment in his laboratory, and for letting me participate in the projects that produced results for papers, which made me more aware of the state of the art in Computer Vision, especially in the area of 3D recognition. Also for his encouraging support and his way of always finding time for discussions, and, last but not least, for the cooking recipes... Many thanks go also to my laboratory fellows: to João Rodrigues, who invited me to participate in FCT and QREN projects, and to Jaime Carvalho Martins and Miguel Farrajota, for discussing scientific and technical problems, but also almost all the problems in the world. To all the people who worked in, or visited, the Vision Laboratory, especially those with whom I worked on an almost daily basis. A special thanks to the Instituto Superior de Engenharia at UAlg and my colleagues at the Department of Electrical Engineering for allowing me to suspend lectures in order to be present at conferences. To my family, my wife and my kids.

    Design and implementation of web-based keystroke analytics for user verification

    Keystroke analytics is the study of the way in which a user types rather than simply what they are typing. Through the application of statistical or machine learning methods, the gathered biometric data may be used to verify the identity of a user based on their typing style. This project explores the field of keystroke analytics to gain an understanding of the methods involved and, in doing so, details the design and implementation of such a system in a web-based context. The technical design and implementation are specifically highlighted, as the current literature often describes the theory and methods behind such systems rather than how they were developed. The use of JavaScript to gather typing-characteristic data is explored and the process of extracting useful features illustrated; PHP and MySQL are used to create the backbone infrastructure that processes and stores the typing data. A phased development approach was employed, with the overall system separated into a collection of subsystems that are designed, implemented and tested before being combined to form the complete system. The supplementary software requirements are presented, including the process of setting up a system that can be used for research on a local machine as well as extended to online users for data collection. Methods of testing the performance of a keystroke analytics system are discussed, and potential changes to improve performance and to minimise the problems encountered are outlined. The project was successful in that a working proof-of-concept web-based keystroke verification system was designed and implemented, yielding promising results for the data tested (FAR: 0%, FRR: 3.33%), although further testing with a larger sample of participants is needed to fully evaluate the system's performance. The results show that a keystroke analytics system can be implemented in a web-based environment with relatively simple statistical methods and provide reasonable performance with only minor additional interaction required from the end user. This demonstrates that keystroke analytics is a valid and well-performing way of adding non-intrusive multifactor authentication to traditional login systems.
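
    The abstract does not disclose the exact statistical method, so the following is a minimal sketch of a simple profile-and-threshold verifier of the kind it alludes to; the per-key dwell-time features, the z-score distance and the threshold value are assumptions for illustration, and the actual project gathered its timings in JavaScript and stored them via PHP and MySQL rather than in Python.

        # Minimal sketch: enrol a typing profile, then verify a new attempt
        # against it with a mean absolute z-score. All parameters are assumed.
        import numpy as np

        def enroll(samples):
            """samples: list of feature vectors, e.g. dwell times per key in ms."""
            data = np.asarray(samples, dtype=float)
            return data.mean(axis=0), data.std(axis=0) + 1e-6  # profile: mean, std

        def verify(profile, attempt, threshold=1.5):
            """Accept the attempt if its mean absolute z-score is below the threshold."""
            mean, std = profile
            z = np.abs((np.asarray(attempt, dtype=float) - mean) / std)
            return bool(z.mean() < threshold)

        # Usage: enrol on a few samples, then test a new one from the same typist.
        profile = enroll([[105, 98, 120], [110, 102, 118], [99, 95, 125]])
        print(verify(profile, [107, 100, 121]))  # expected: True

    Tuning the threshold trades the false acceptance rate against the false rejection rate reported above.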

    Drawing, Handwriting Processing Analysis: New Advances and Challenges

    Drawing and handwriting are communication skills that have been fundamental to geopolitical, ideological and technological developments throughout history. Drawing and handwriting are still useful in defining innovative applications in numerous fields. In this regard, researchers have to solve new problems, such as those related to the way in which drawing and handwriting can become an efficient means of commanding various connected objects, or to the validation of graphomotor skills as evident and objective sources of data useful in the study of human beings, their capabilities and their limits from birth to decline.

    Cloud Detection And Trace Gas Retrieval From The Next Generation Satellite Remote Sensing Instruments

    Thesis (Ph.D.), University of Alaska Fairbanks, 2005. The objective of this thesis is to develop a cloud detection algorithm suitable for the National Polar-orbiting Operational Environmental Satellite System (NPOESS) Visible Infrared Imaging Radiometer Suite (VIIRS), together with methods for atmospheric trace gas retrieval for future satellite remote sensing instruments. The development of this VIIRS cloud mask required a flow-down process across different sensor models, in which a variety of sensor effects were simulated and evaluated. This included cloud simulations and cloud-test development to investigate possible sensor effects, and a comprehensive flow-down analysis of the algorithm was conducted. In addition, a technique for total column water vapor retrieval using shadows was developed, with the goal of enhancing water vapor retrievals under hazy atmospheric conditions. This new technique relies on radiance differences between clear and shadowed surfaces, combined with ratios between water-vapor-absorbing and window regions. A novel method for retrieving methane amounts over water bodies, including lakes, rivers and oceans, under conditions of sun glint has also been developed. The theoretical basis for both the water vapor and methane retrieval techniques is derived and simulated using a radiative transfer model.
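
    As a rough illustration of the shadow-based ratio idea (not the retrieval derived in the thesis), the sketch below forms the clear-minus-shadow radiance difference in a water-vapor-absorbing channel and in a nearby window channel, and maps their ratio to column water vapor with a log-based inversion; the band labels, the inversion form and its coefficients are assumptions for illustration only.

        # Minimal sketch of a two-channel, shadow-differenced water vapor estimate.
        # Radiances, band labels, and coefficients a, b are assumed values.
        import numpy as np

        def shadow_ratio(clear, shadow, absorbing="0.94um", window="0.86um"):
            """Clear-minus-shadow radiance difference, absorbing band over window band."""
            d_abs = clear[absorbing] - shadow[absorbing]
            d_win = clear[window] - shadow[window]
            return d_abs / d_win  # roughly the water-vapor transmittance along the sun path

        def column_water_vapor(ratio, a=0.02, b=0.65):
            """Invert transmittance to precipitable water (cm) with assumed a, b."""
            return ((a - np.log(ratio)) / b) ** 2

        # Usage with made-up radiances for a clear and an adjacent shadowed pixel.
        clear = {"0.94um": 42.0, "0.86um": 61.0}
        shadow = {"0.94um": 9.0, "0.86um": 12.0}
        print(column_water_vapor(shadow_ratio(clear, shadow)))  # ~0.4 cm

    Differencing clear and shadowed pixels is what is meant to suppress the haze and surface terms that make conventional two-channel retrievals unreliable under hazy conditions.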

    Fifth Conference on Artificial Intelligence for Space Applications

    The Fifth Conference on Artificial Intelligence for Space Applications brings together diverse technical and scientific work in order to help those who employ AI methods in space applications to identify common goals and to address issues of general interest in the AI community. Topics include the following: automation for Space Station; intelligent control, testing, and fault diagnosis; robotics and vision; planning and scheduling; simulation, modeling, and tutoring; development tools and automatic programming; knowledge representation and acquisition; and knowledge base/data base integration

    Technology 2002: The Third National Technology Transfer Conference and Exposition, volume 2

    Proceedings from symposia of the Technology 2002 Conference and Exposition, December 1-3, 1992, Baltimore, MD. Volume 2 features 60 papers presented during 30 concurrent sessions

    Collection and management of satellite data for hydrological models

    This thesis reports on the development of a system for the acquisition of AVHRR data, the processing of these data into hydrological parameters, and the organisation and management of the results. The derivation of hydrological parameters from remote sensing data has been well reported in the literature, but the integration of the different derived estimates into a uniform and consistent data set for use in hydrological models has been lacking. The aim of this project is to present a system that addresses the problems faced in developing such an integrated system. This thesis is therefore concerned with combining a set of methods, each estimating one hydrological parameter, into a compatible system for the remote sensing estimation of hydrological parameters. The information produced by remote sensing methods is spatially abundant, so a system is needed to manage the significant body of generated data; a database was selected and used for this task. The proposed system is a prototype concerned primarily with investigating the different processes involved in integrating the different methods into a compatible package. With the introduction of the database, the system evolved into an embryonic Hydrology and Remote Sensing Information System, abbreviated HyRSIS. Programs used in this project are of two kinds: those written in-house and those acquired from different researchers. Compatibility of programs and data files was resolved, and these components were then used as the building blocks of HyRSIS, with a main program acting as the driver for interaction with the different programs. Design criteria were established for future development of such a system. The system provided solutions to two problems: the large volume of the data and the unsuitability of raw remote sensing data for hydrological modelling. The database provided both the storage and the management tool for the bulk of the data, and a protocol for retrieving data from the database was established. For the first time, the hydrological model used in this project (and probably any hydrological model) was run using several parameters derived from remote sensing sources and supplemented by conventional data. The same model was also run using conventional data as the primary source, supplemented with remote sensing data.
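
    The abstract names neither the database nor the retrieval protocol, so the following is only a minimal sketch of the idea of storing derived parameters and pulling them out for a model run; the table layout, field names and the use of sqlite3 are assumptions standing in for whatever HyRSIS actually used.

        # Minimal sketch: store remotely sensed hydrological parameters in a
        # relational table and retrieve them per catchment and date range.
        import sqlite3

        conn = sqlite3.connect("hyrsis_sketch.db")
        conn.execute("""CREATE TABLE IF NOT EXISTS parameters (
            catchment TEXT, obs_date TEXT, name TEXT, value REAL, source TEXT)""")
        conn.commit()

        def retrieve(catchment, name, start, end):
            """Fetch one parameter's time series over a date range, oldest first."""
            cur = conn.execute(
                "SELECT obs_date, value FROM parameters "
                "WHERE catchment = ? AND name = ? AND obs_date BETWEEN ? AND ? "
                "ORDER BY obs_date",
                (catchment, name, start, end))
            return cur.fetchall()

    A retrieval step of this kind would sit between the database and the hydrological model, supplying each model parameter either from the remote sensing estimates or from conventional data when no estimate is available.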