
    Automated Data Digitization System for Vehicle Registration Certificates Using Google Cloud Vision API

    This study aims to develop an automated data digitization system for the Thai vehicle registration certificate. It is the first such system offered as a web-service Application Programming Interface (API), which is essential for any enterprise seeking to increase its business value; the system is currently available at “www.carjaidee.com”. The system involves four steps: 1) in the image acquisition step, an embedded frame aligns the document so it can be recognised correctly; 2) in the pre-processing step, sharpening and brightness filters are applied to enhance image quality; 3) in the recognition step, the enhanced image is submitted to the Google Cloud Vision API; 4) in the post-processing step, a domain-specific dictionary is applied to improve the accuracy rate. The experiment uses 92 images, and accuracy is measured by counting the correctly recognised words and terms in the output. The findings suggest that the proposed method, with an average accuracy of 93.28%, was significantly more accurate than the original method using only the Google Cloud Vision API. However, the system is limited in that the dictionary cannot automatically recognise new words; future work will explore solutions to this problem using natural language processing techniques. DOI: 10.28991/CEJ-2022-08-07-09
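
    A minimal sketch of the four-step pipeline described above, assuming the opencv-python and google-cloud-vision packages; the helper names, filter parameters, and dictionary entries are illustrative placeholders, not the authors' implementation.

        import difflib
        import cv2
        import numpy as np
        from google.cloud import vision

        def preprocess(path):
            # Step 2: sharpening and brightness filtering to enhance image quality.
            img = cv2.imread(path)
            sharpen = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]])
            img = cv2.filter2D(img, -1, sharpen)
            img = cv2.convertScaleAbs(img, alpha=1.1, beta=25)  # brightness/contrast lift
            ok, buf = cv2.imencode(".jpg", img)
            return buf.tobytes()

        def recognize(content):
            # Step 3: submit the enhanced image to the Google Cloud Vision API.
            client = vision.ImageAnnotatorClient()
            response = client.text_detection(image=vision.Image(content=content))
            return response.full_text_annotation.text

        # Step 4: snap noisy tokens to a domain-specific dictionary (entries are placeholders).
        DOMAIN_TERMS = ["registration-no", "vehicle-make", "engine-no"]

        def correct(text):
            tokens = []
            for token in text.split():
                match = difflib.get_close_matches(token, DOMAIN_TERMS, n=1, cutoff=0.8)
                tokens.append(match[0] if match else token)
            return " ".join(tokens)

        digitized = correct(recognize(preprocess("certificate.jpg")))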

    A Tale of Two Transcriptions: Machine-Assisted Transcription of Historical Sources

    This article is part of the "Norwegian Historical Population Register" project financed by the Norwegian Research Council (grant #225950) and the Advanced Grant project "Five Centuries of Marriages" (2011-2016) funded by the European Research Council (#ERC 2010-AdG_20100407). The article explains how two projects implement semi-automated transcription routines: for census sheets in Norway and marriage protocols from Barcelona. The Spanish system was created to transcribe the marriage license books from 1451 to 1905 for the Barcelona area, one of the world's longest series of preserved vital records. Within the project "Five Centuries of Marriages" (5CofM) at the Autonomous University of Barcelona's Center for Demographic Studies, the Barcelona Historical Marriage Database has been built: more than 600,000 records were transcribed by 150 transcribers working online. The Norwegian material is cross-sectional, as it is the 1891 census, recorded on one sheet per person. This format, and the underlining of keywords for several variables, made it more feasible to semi-automate data entry than when many persons are listed on the same page. While Optical Character Recognition (OCR) for printed text is scientifically mature, computer vision research is now focused on more difficult problems such as handwriting recognition. In the marriage project, document analysis methods have been proposed to automatically recognize the marriage licenses; fully automatic recognition is still a challenge, but some promising results have been obtained. In Spain, Norway and elsewhere the source material is available as scanned pictures on the Internet, opening up the possibility of further international cooperation on automating the transcription of historical source materials. As in projects that digitize printed materials, the optimal solution is likely to be a combination of manual transcription and machine-assisted recognition for hand-written sources as well.

    Privacy, Visibility, Transparency, and Exposure

    This essay considers the relationship between privacy and visibility in the networked information age. Visibility is an important determinant of harm to privacy, but a persistent tendency to conceptualize privacy harms and expectations in terms of visibility has created two problems. First, focusing on visibility diminishes the salience and obscures the operation of nonvisual mechanisms designed to render individual identity, behavior, and preferences transparent to third parties. The metaphoric mapping to visibility suggests that surveillance is simply passive observation, rather than the active production of categories, narratives, and norms. Second, even a broader conception of privacy harms as a function of informational transparency is incomplete. Privacy has a spatial dimension as well as an informational dimension. The spatial dimension of the privacy interest, which the author characterizes as an interest in avoiding or selectively limiting exposure, concerns the structure of experienced space. It is not negated by the fact that people in public spaces expect to be visible to others present in those spaces, and it encompasses both the arrangement of physical spaces and the design of networked communications technologies. U.S. privacy law and theory currently do not recognize this interest at all. This essay argues that they should.

    Beyond Document Page Classification: Design, Datasets, and Challenges

    This paper highlights the need to bring document classification benchmarking closer to real-world applications, both in the nature of data tested (X: multi-channel, multi-paged, multi-industry; Y: class distributions and label set variety) and in classification tasks considered (f: multi-page document, page stream, and document bundle classification, ...). We identify the lack of public multi-page document classification datasets, formalize different classification tasks arising in application scenarios, and motivate the value of targeting efficient multi-page document representations. An experimental study on proposed multi-page document classification datasets demonstrates that current benchmarks have become irrelevant and need to be updated to evaluate complete documents, as they naturally occur in practice. This reality check also calls for more mature evaluation methodologies, covering calibration evaluation, inference complexity (time-memory), and a range of realistic distribution shifts (e.g., born-digital vs. scanning noise, shifting page order). Our study ends on a hopeful note by recommending concrete avenues for future improvements.
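
    The task settings listed above can be contrasted with the classic single-page benchmark; the sketch below is an illustrative formalization of those settings, with names and types that are ours rather than the paper's.

        from dataclasses import dataclass
        from typing import List

        @dataclass
        class Page:
            image: bytes   # scanned or born-digital page rendering
            text: str      # OCR output or embedded text layer

        # Classic benchmark setting: one page in, one label out.
        def classify_page(page: Page) -> str: ...

        # Multi-page document classification: all pages of one document share one label.
        def classify_document(pages: List[Page]) -> str: ...

        # Page stream classification: consecutive pages from a scanner feed receive
        # per-page labels, from which document boundaries can be derived.
        def classify_page_stream(stream: List[Page]) -> List[str]: ...

        # Document bundle classification: a bundle of documents submitted together,
        # labelled per constituent document.
        def classify_bundle(bundle: List[List[Page]]) -> List[str]: ...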

    Security and Privacy for the Internet of Things: An Overview of the Project

    As the adoption of digital technologies expands, it becomes vital to build trust and confidence in the integrity of such technology. The SPIRIT project investigates a proof of concept that employs novel secure and privacy-ensuring techniques in services set up in the Internet of Things (IoT) environment, aiming to increase users' trust in IoT-based systems. The proposed system integrates three highly novel technology concepts developed by the consortium partners: first, a technology termed ICMetrics for deriving encryption keys directly from the operating characteristics of digital devices; second, a content-based signature of user data that ensures the integrity of sent data upon arrival; and third, a semantic firewall that allows or denies the transmission of data from an IoT device according to the information contained within the data and the information gathered about the requester.
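
    A minimal sketch of the second concept, the content-based integrity signature, using an HMAC construction as a stand-in; the key-derivation step is only a placeholder for an ICMetrics-style key and does not reflect the project's actual scheme.

        import hashlib
        import hmac
        import json

        def derive_device_key(operating_features: dict) -> bytes:
            # Placeholder for an ICMetrics-style key derived from how the device
            # behaves, rather than from stored key material.
            canonical = json.dumps(operating_features, sort_keys=True).encode()
            return hashlib.sha256(canonical).digest()

        def sign_payload(payload: bytes, key: bytes) -> bytes:
            # Content-based signature: the tag depends on the payload itself, so any
            # modification in transit is detected on arrival.
            return hmac.new(key, payload, hashlib.sha256).digest()

        def verify_payload(payload: bytes, tag: bytes, key: bytes) -> bool:
            return hmac.compare_digest(sign_payload(payload, key), tag)

        key = derive_device_key({"clock_skew_ppm": 3.2, "mem_timing_ns": 41})
        reading = b'{"sensor": "temperature", "value": 21.5}'
        tag = sign_payload(reading, key)
        assert verify_payload(reading, tag, key)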

    Online advertising: analysis of privacy threats and protection approaches

    Online advertising, the pillar of the “free” content on the Web, has revolutionized the marketing business in recent years by creating a myriad of new opportunities for advertisers to reach potential customers. The current advertising model builds upon an intricate infrastructure composed of a variety of intermediary entities and technologies whose main aim is to deliver personalized ads. For this purpose, a wealth of user data is collected, aggregated, processed and traded behind the scenes at an unprecedented rate. Despite the enormous value of online advertising, however, the intrusiveness and ubiquity of these practices prompt serious privacy concerns. This article surveys the online advertising infrastructure and its supporting technologies, and presents a thorough overview of the underlying privacy risks and the solutions that may mitigate them. We first analyze the threats and potential privacy attackers in this scenario of online advertising. In particular, we examine the main components of the advertising infrastructure in terms of tracking capabilities, data collection, aggregation level and privacy risk, and overview the tracking and data-sharing technologies employed by these components. Then, we conduct a comprehensive survey of the most relevant privacy mechanisms, and classify and compare them on the basis of their privacy guarantees and impact on the Web.

    Automated anonymization of legal contracts in Portuguese

    With the introduction of the General Data Protection Regulation, many organizations were left with a large amount of documents containing public information that should have been private. Given the very large quantities of documents involved, it would be a waste of resources to edit them manually. The objective of this dissertation is the development of an autonomous system for the anonymization of sensitive information in contracts written in Portuguese. The system uses Google Cloud Vision, an OCR API, to extract any text present in a document. Since these documents may be poorly readable, image pre-processing is performed with the OpenCV library to increase the readability of the text in the images; among others, binarization, skew correction and noise removal algorithms were explored. Once the text has been extracted, it is interpreted by an NLP library: this project uses spaCy, which provides a Portuguese pipeline trained on the WikiNer and UD Portuguese Bosque datasets. The library not only allows a fairly complete identification of parts of speech but also includes four different categories of named entity recognition in its model. In addition to the processing carried out with the spaCy library, and since the Portuguese language does not have extensive support, some rule-based algorithms were implemented to identify other, more specific types of information such as identification numbers and postal codes. In the end, the information considered confidential is covered with black rectangles drawn by OpenCV at the coordinates returned by Google Cloud Vision OCR, and a new PDF is generated.
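
    A condensed sketch of the redaction flow described above, assuming the google-cloud-vision, opencv-python, and spaCy packages with the pt_core_news_sm model; pre-processing and the final PDF re-assembly are omitted, and the token-matching logic is simplified for illustration.

        import re
        import cv2
        import spacy
        from google.cloud import vision

        nlp = spacy.load("pt_core_news_sm")        # Portuguese pipeline with 4 NER categories
        POSTAL_CODE = re.compile(r"\d{4}-\d{3}")   # rule-based pattern for Portuguese postal codes

        def redact(image_path, out_path):
            img = cv2.imread(image_path)
            with open(image_path, "rb") as f:
                content = f.read()

            # OCR: word-level text and bounding polygons from Google Cloud Vision
            # (the first annotation is the full text, the rest are individual words).
            client = vision.ImageAnnotatorClient()
            words = client.text_detection(image=vision.Image(content=content)).text_annotations[1:]

            # NER over the recovered text collects sensitive tokens (persons, places, orgs, misc).
            full_text = " ".join(w.description for w in words)
            sensitive = {tok for ent in nlp(full_text).ents for tok in ent.text.split()}

            # Cover each sensitive word with a filled black rectangle at its OCR coordinates.
            for w in words:
                if w.description in sensitive or POSTAL_CODE.fullmatch(w.description):
                    xs = [v.x for v in w.bounding_poly.vertices]
                    ys = [v.y for v in w.bounding_poly.vertices]
                    cv2.rectangle(img, (min(xs), min(ys)), (max(xs), max(ys)), (0, 0, 0), -1)

            cv2.imwrite(out_path, img)  # the dissertation re-assembles the redacted pages into a PDF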