13,999 research outputs found

    A new seal verification for Chinese color seal

    Automatic seal imprint identification systems are in high demand in East Asian countries. Although several seal identification techniques have been proposed, papers on recovering seal imprint strokes lost through superimposition are rarely found. In this paper, a new seal verification method for Chinese color seals is proposed. The approach segments the seal imprint from the input image using adaptive thresholds. Lost seal imprint strokes are then recovered based on the text stroke width, which is detected automatically. Finally, a moment-based seal verification compares the reference seal imprint with the recovered one. Experimental results show that the proposed method correctly and efficiently distinguishes genuine from forged seal imprints.
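    The abstract outlines three steps: adaptive-threshold segmentation of the colour seal imprint, recovery of strokes hidden by superimposed text, and a moment-based comparison with a reference imprint. Below is a minimal sketch of the segmentation and comparison steps only, assuming OpenCV; the file names are placeholders, Otsu's automatically chosen threshold stands in for the paper's adaptive thresholds, Hu moments stand in for whatever moment features the paper actually uses, and the stroke-recovery step is omitted.

    ```python
    import cv2
    import numpy as np

    def segment_seal(path):
        """Isolate a red seal imprint from a scanned document image (sketch)."""
        img = cv2.imread(path)                       # BGR image
        b, g, r = cv2.split(img)
        # Red ink has strong R and weak G/B; white paper is high in all channels
        # and black text is low in all channels, so a "redness" map isolates the seal.
        redness = cv2.subtract(r, cv2.max(g, b))
        # Stand-in for the paper's adaptive thresholds: Otsu picks the cut automatically.
        _, mask = cv2.threshold(redness, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return mask

    def moment_distance(mask_a, mask_b):
        """Compare two imprints via Hu moments (one possible moment feature)."""
        hu_a = cv2.HuMoments(cv2.moments(mask_a)).flatten()
        hu_b = cv2.HuMoments(cv2.moments(mask_b)).flatten()
        # Log-scale so the seven moments have comparable magnitudes.
        log = lambda h: -np.sign(h) * np.log10(np.abs(h) + 1e-12)
        return float(np.linalg.norm(log(hu_a) - log(hu_b)))

    if __name__ == "__main__":
        ref = segment_seal("reference_seal.png")     # placeholder file names
        cand = segment_seal("candidate_seal.png")
        print("moment distance:", moment_distance(ref, cand))  # small = likely genuine
    ```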

    Research on Event Extraction Model Based on Semantic Features of Chinese Words

    Event Extraction (EE) is an important task in Natural Language Understanding (NLU). Owing to the structural complexity of Chinese, Chinese EE is more difficult than English EE. Taking the characteristics of Chinese into account, this paper designs a Semantic-GRU (Sem-GRU) model that integrates Chinese word context semantics, glyph semantics, and structure semantics, and applies it to the Chinese Event Trigger Extraction (ETE) task. Experiments are conducted on two tasks: ETE and Named Entity Recognition (NER). On ETE, the ACE 2005 Chinese event dataset is used to compare against existing research, and the model reaches 75.8%. On NER, the MSRA dataset is used, where the model reaches 90.3%, outperforming the other models compared.
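    The abstract's core idea is a GRU tagger over three fused character-level representations (context, glyph, and structure semantics). The PyTorch sketch below shows one way such a multi-channel fusion could be wired up; the embedding tables, dimensions, concatenation-based fusion, and BIO-style tag set are illustrative assumptions, not the paper's actual Sem-GRU design.

    ```python
    import torch
    import torch.nn as nn

    class SemGRUSketch(nn.Module):
        """Toy trigger tagger: concatenate three per-character embedding channels
        (context, glyph, structure) and feed them to a bidirectional GRU."""

        def __init__(self, vocab, glyph_vocab, struct_vocab,
                     dim=64, hidden=128, num_tags=9):
            super().__init__()
            self.ctx = nn.Embedding(vocab, dim)            # contextual / lexical channel
            self.glyph = nn.Embedding(glyph_vocab, dim)    # glyph (character-form) channel
            self.struct = nn.Embedding(struct_vocab, dim)  # structural (radical) channel
            self.gru = nn.GRU(3 * dim, hidden, batch_first=True, bidirectional=True)
            self.out = nn.Linear(2 * hidden, num_tags)     # BIO-style trigger tags

        def forward(self, ctx_ids, glyph_ids, struct_ids):
            x = torch.cat([self.ctx(ctx_ids),
                           self.glyph(glyph_ids),
                           self.struct(struct_ids)], dim=-1)
            h, _ = self.gru(x)
            return self.out(h)                             # (batch, seq_len, num_tags)

    # Shape check with dummy IDs: a batch of 2 sentences, 10 characters each.
    model = SemGRUSketch(vocab=5000, glyph_vocab=800, struct_vocab=300)
    ctx_ids = torch.randint(0, 5000, (2, 10))
    glyph_ids = torch.randint(0, 800, (2, 10))
    struct_ids = torch.randint(0, 300, (2, 10))
    print(model(ctx_ids, glyph_ids, struct_ids).shape)     # torch.Size([2, 10, 9])
    ```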

    Design of the management system of port in China based on the internet of things technology


    Apple’s Unkept Promises: Investigation of Three Pegatron Group Factories Supplying to Apple

    This document is part of a digital collection provided by the Martin P. Catherwood Library, ILR School, Cornell University, pertaining to the effects of globalization on the workplace worldwide. Special emphasis is placed on labor rights, working conditions, labor market changes, and union organizing.

    Non-English and non-Latin signature verification systems: A survey

    Signatures remain an important biometric because they are still widely used as a means of personal verification, so automatic verification systems are needed. Manual signature-based authentication of a large number of documents is a difficult and time-consuming task. Consequently, in the fields of protected communication and financial applications we have for many years observed explosive growth in biometric personal authentication systems based on measurable, unique physical characteristics (e.g. hand geometry, iris scans, fingerprints or DNA) or behavioural features. Substantial research has been undertaken on signature verification for English signatures, but to the best of our knowledge very few works have considered non-English signatures such as Chinese, Japanese or Arabic. In order to convey the state of the art in the field to researchers, this paper presents a survey of non-English and non-Latin signature verification systems.

    MCCFNet: multi-channel color fusion network for cognitive classification of traditional Chinese paintings.

    The computational modeling and analysis of traditional Chinese painting rely heavily on cognitive classification based on visual perception. This approach is crucial for understanding and identifying artworks created by different artists. However, the effective integration of visual perception into artificial intelligence (AI) models remains largely unexplored. In addition, classification research on Chinese painting faces certain challenges, such as insufficient investigation of the specific characteristics of painting images for author classification and recognition. To address these issues, we propose a novel framework called the multi-channel color fusion network (MCCFNet), which extracts visual features from diverse color perspectives. By considering multiple color channels, MCCFNet enhances the ability of AI models to capture the intricate details and nuances present in Chinese painting. To improve the performance of the DenseNet model, we introduce a regional weighted pooling (RWP) strategy designed specifically for the DenseNet169 architecture, which enhances the extraction of highly discriminative features. In our experimental evaluation, we compared the proposed MCCFNet model against six state-of-the-art models on a dataset of 2,436 traditional Chinese painting (TCP) samples drawn from the works of 10 renowned Chinese artists, using Top-1 accuracy and the area under the curve (AUC) as evaluation metrics. The results show that MCCFNet significantly outperforms all the benchmark methods, with the highest classification accuracy of 98.68%. Moreover, the classification accuracy of other deep learning models on TCP can be substantially improved by adopting the proposed framework.
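    The abstract names two ingredients: fusing features extracted from multiple colour representations of a painting, and a regional weighted pooling (RWP) layer on top of a DenseNet-169 backbone. The sketch below shows one plausible reading of that design in PyTorch/torchvision; the choice of RGB and HSV as the two colour channels, the 1x1-convolution weight map used for RWP, and concatenation as the fusion step are assumptions rather than the paper's exact formulation.

    ```python
    import torch
    import torch.nn as nn
    from torchvision.models import densenet169

    class RegionalWeightedPool(nn.Module):
        """Stand-in for RWP: a 1x1 conv predicts a spatial weight map, and features
        are averaged under that map instead of uniform global average pooling."""
        def __init__(self, channels):
            super().__init__()
            self.score = nn.Conv2d(channels, 1, kernel_size=1)

        def forward(self, feat):                                     # (B, C, H, W)
            w = torch.softmax(self.score(feat).flatten(2), dim=-1)   # (B, 1, H*W)
            f = feat.flatten(2)                                      # (B, C, H*W)
            return (f * w).sum(dim=-1)                               # (B, C)

    class MCCFSketch(nn.Module):
        """Two colour 'channels' (here RGB and an HSV rendering of the same painting)
        go through separate DenseNet-169 feature extractors; pooled features are
        concatenated and classified into artists."""
        def __init__(self, num_artists=10):
            super().__init__()
            # weights=None: untrained backbones (torchvision >= 0.13 API).
            self.branch_rgb = densenet169(weights=None).features     # 1664 channels out
            self.branch_hsv = densenet169(weights=None).features
            self.pool = RegionalWeightedPool(1664)
            self.head = nn.Linear(2 * 1664, num_artists)

        def forward(self, x_rgb, x_hsv):
            f1 = self.pool(self.branch_rgb(x_rgb))
            f2 = self.pool(self.branch_hsv(x_hsv))
            return self.head(torch.cat([f1, f2], dim=-1))

    # Shape check with dummy inputs standing in for the two colour renderings.
    x_rgb = torch.randn(1, 3, 224, 224)
    x_hsv = torch.randn(1, 3, 224, 224)
    print(MCCFSketch()(x_rgb, x_hsv).shape)                          # torch.Size([1, 10])
    ```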

    COMPARATIVE STUDY OF FONT RECOGNITION USING CONVOLUTIONAL NEURAL NETWORKS AND TWO FEATURE EXTRACTION METHODS WITH SUPPORT VECTOR MACHINE

    Font recognition is one of the essential issues in document recognition and analysis, and is frequently a complex and time-consuming process. Many optical character recognition (OCR) techniques have been suggested and some have been commercialized; however, few of them consider font recognition. The issue with OCR is that it saves copies of documents to make them searchable, but those copies lose the documents' original appearance. To address this problem, this paper presents a system for recognizing three and six English fonts from character images using a Convolutional Neural Network (CNN), and compares the results of the proposed system with two earlier studies: the first used NCM features with an SVM classifier, and the second used DP features with an SVM classifier. The data were taken from the Al-Khaffaf dataset [21]. Two datasets were used: the first contains about 27,620 samples for three-font classification and the second about 72,983 samples for six-font classification; both consist of 8-bit grayscale English character images. The results show that the CNN achieved the highest recognition rates compared with the two studies, reaching 99.75% and 98.329% for three- and six-font recognition, respectively. In addition, the CNN required the least model-building time: about 6 minutes and 23-24 minutes for three- and six-font recognition, respectively. Based on these results, we conclude that the CNN is the most accurate model for recognizing fonts.
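    The comparison hinges on a plain CNN classifier trained on grayscale character images. A minimal PyTorch sketch of such a classifier is shown below; the layer sizes, the 32x32 input resolution, and the two-block layout are illustrative guesses, since the abstract does not specify the paper's architecture.

    ```python
    import torch
    import torch.nn as nn

    class FontCNN(nn.Module):
        """Small CNN for font classification from grayscale character images."""
        def __init__(self, num_fonts=3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32x32 -> 16x16
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16x16 -> 8x8
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
                nn.Linear(128, num_fonts),                                    # font logits
            )

        def forward(self, x):            # x: (B, 1, 32, 32) grayscale characters
            return self.classifier(self.features(x))

    # Shape check with a dummy batch of 4 character images for six-font recognition.
    print(FontCNN(num_fonts=6)(torch.randn(4, 1, 32, 32)).shape)  # torch.Size([4, 6])
    ```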

    Privacy self-regulation and the changing role of the state: from public law to social and technical mechanisms of governance

    This paper provides a structured overview of different self-governance mechanisms for privacy and data protection in the corporate world, with a special focus on Internet privacy. It also looks at the role of the state and how it has related to privacy self-governance over time. While early data protection started out as law-based regulation by nation-states, transnational self-governance mechanisms have become more important with the rise of global telecommunications and the Internet. The reach, scope, precision and enforcement of these industry codes of conduct vary considerably. The more binding they are, the more limited their reach, though they - like state-based instruments for privacy protection - are becoming more harmonised and global in reach. These social codes of conduct are developed by the private sector with limited participation from official data protection commissioners, public interest groups, or international organisations. Software tools - technical codes - for online privacy protection can give individual users and customers back some control over their data, but they have only limited reach and applications. The privacy-enhancing design of network infrastructures and database architectures is still developed largely autonomously by the computer and software industry. Here, a stronger but new role of the state has recently emerged. Instead of regulating data processors directly, governments and oversight agencies now focus more on intermediaries - standards developers, large software companies, or industry associations. And instead of prescribing and penalising, they rely more on incentive structures such as certifications or public funding for social and technical self-governance instruments of privacy protection. The use of technology as an instrument and object of regulation is thereby becoming more popular, but the success of this approach still depends on the social codes and underlying norms which the technology is supposed to embed.