
    Tamper detection of qur'anic text watermarking scheme based on vowel letters with Kashida using exclusive-or and queueing technique

    The most sensitive Arabic text available online is the digital Holy Qur’an. This sacred Islamic religious book is recited by Muslims worldwide, including non-Arabs, as part of their worship. It must be protected from any kind of tampering to keep its invaluable meaning intact. Distinctive characteristics of the Arabic letters in the Holy Qur’an, such as the vowels ( ŰŁ . و . ي ), Kashida (extended letters), and other symbols, must be secured against alteration. Existing schemes yield low values of the Peak Signal to Noise Ratio (PSNR), Embedding Ratio (ER), and Normalized Cross-Correlation (NCC) between the cover text of the Qur’an and its watermarked text, so the location of tampering is detected with low accuracy. A watermarking technique with enhanced attributes must therefore be designed for the Qur’an text using Arabic vowel letters with Kashida. Most existing detection methods that attempt to localize tampering in the Qur’an text accurately show limitations in handling diacritics, alif mad surah, double spaces, separate shapes of Arabic letters, and Kashida. The gap addressed by this research is improving the security of the Arabic text of the Holy Qur’an by using vowel letters with Kashida. The purpose of this research is to enhance a Qur’an text watermarking scheme based on exclusive-or and reversing with queueing techniques. The methodology consists of four phases. The first phase is pre-processing, followed by the embedding phase, which hides the data after the vowel letters: if the secret bit is ‘1’, a Kashida is inserted; if the bit is ‘0’, it is not. The third phase is the extraction process, and the final phase evaluates the performance of the proposed scheme using PSNR (for imperceptibility), ER (for capacity), and NCC (for the security of the watermarking). The experimental results revealed improvements in NCC by 1.77 %, PSNR by 9.6 %, and ER by 8.6 % compared with current schemes. Hence, it can be concluded that the proposed scheme can accurately detect the location of tampering under insertion, deletion, and reordering attacks
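    The embedding rule described above (insert a Kashida after a vowel letter to encode a secret bit ‘1’, omit it for ‘0’) can be sketched as follows. This is an illustrative simplification, not the authors’ implementation: the vowel set, function names, and the omission of the exclusive-or and queueing steps are assumptions for the sake of a minimal example.

    ```python
    # Illustrative sketch of Kashida-based embedding: after each vowel letter,
    # a Kashida (U+0640, ARABIC TATWEEL) encodes bit '1'; its absence encodes '0'.
    # Pre-processing, XOR, and queueing steps from the paper are omitted here.

    VOWELS = {"\u0627", "\u0648", "\u064A"}  # alif, waw, ya (assumed vowel set)
    KASHIDA = "\u0640"

    def embed(cover: str, bits: str) -> str:
        """Hide `bits` in `cover` by inserting a Kashida after vowel letters."""
        out, i = [], 0
        for ch in cover:
            out.append(ch)
            if ch in VOWELS and i < len(bits):
                if bits[i] == "1":
                    out.append(KASHIDA)
                i += 1
        return "".join(out)

    def extract(stego: str) -> str:
        """Recover the bits: a Kashida after a vowel means '1', otherwise '0'."""
        bits, chars = [], list(stego)
        for j, ch in enumerate(chars):
            if ch in VOWELS:
                nxt = chars[j + 1] if j + 1 < len(chars) else ""
                bits.append("1" if nxt == KASHIDA else "0")
        return "".join(bits)
    ```

    Because the Kashida only elongates the joining stroke of a word, the watermarked text remains readable, which is why the scheme targets imperceptibility (PSNR) alongside capacity (ER) and security (NCC).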

    Aeronautical engineering: A continuing bibliography with indexes (supplement 271)

    This bibliography lists 666 reports, articles, and other documents introduced into the NASA scientific and technical information system in October 1991. Subject coverage includes design, construction, and testing of aircraft and aircraft engines; aircraft components, equipment, and systems; ground support systems; and theoretical and applied aspects of aerodynamics and general fluid dynamics

    Exploiting Spatio-Temporal Coherence for Video Object Detection in Robotics

    This paper proposes a method to enhance video object detection for indoor environments in robotics. Concretely, it exploits knowledge about the camera motion between frames to propagate previously detected objects to successive frames. The proposal is rooted in the concepts of planar homography, used to propose regions of interest where objects are likely to be found, and recursive Bayesian filtering, used to integrate observations over time. The proposal is evaluated in six virtual indoor environments, covering the detection of nine object classes over a total of ~7k frames. Results show that our proposal improves the recall and the F1-score by factors of 1.41 and 1.27, respectively, and achieves a significant reduction of the object categorization entropy (58.8%) when compared to a two-stage video object detection method used as a baseline, at the cost of small time overheads (120 ms) and a precision loss (0.92)
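    The recursive Bayesian filtering idea mentioned above, integrating per-frame detector observations into a per-object class belief, can be sketched as follows. The function names and toy scores are illustrative assumptions, not the paper’s code; the point is that repeated consistent observations sharpen the class distribution, which is what lowers the categorization entropy.

    ```python
    import math

    # Minimal sketch of recursive Bayesian filtering over object classes:
    # each frame's detector scores act as a likelihood that multiplies the
    # prior belief, and the posterior is renormalized.

    def bayes_update(belief, likelihood):
        """Combine the prior class belief with one frame's per-class scores."""
        post = [b * l for b, l in zip(belief, likelihood)]
        z = sum(post)
        return [p / z for p in post]

    def entropy(p):
        """Shannon entropy (bits) of a discrete distribution."""
        return -sum(x * math.log2(x) for x in p if x > 0)

    belief = [1 / 3, 1 / 3, 1 / 3]  # uniform prior over three classes
    # Two noisy per-frame detections, both favoring class 0 (toy values):
    for obs in ([0.6, 0.3, 0.1], [0.7, 0.2, 0.1]):
        belief = bayes_update(belief, obs)
    ```

    After the two updates the belief concentrates on class 0 and its entropy drops well below the uniform prior’s log2(3) ≈ 1.585 bits, mirroring the entropy reduction the paper reports.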

    Object Duplicate Detection

    With the technological evolution of digital acquisition and storage technologies, millions of images and video sequences are captured every day and shared in online services. One way of exploring this huge volume of images and videos is to search for a particular object depicted in them by making use of object duplicate detection. The need for research on object duplicate detection is therefore validated by several image and video retrieval applications, such as tag propagation, augmented reality, surveillance, mobile visual search, and television statistics measurement. Object duplicate detection is the detection of objects that are visually identical or very similar to a query object. The input is not restricted to a single image; it can be several images of an object or even a video. This dissertation describes the author's contribution to solving problems in object duplicate detection in computer vision. A novel graph-based approach is introduced for 2D and 3D object duplicate detection in still images. A graph model is used to represent the 3D spatial information of the object based on the local features extracted from training images, so that explicit and complex 3D object modeling is avoided. Therefore, improved performance can be achieved in comparison to existing methods in terms of both robustness and computational complexity. Our method is shown to be robust in detecting the same objects even when the images containing them are taken from very different viewpoints or distances. Furthermore, we apply our object duplicate detection method to video, where training images are added iteratively to the video sequence in order to compensate for 3D view variations, illumination changes, and partial occlusions. Finally, we show several mobile applications of object duplicate detection, such as an object-recognition-based museum guide, money recognition, and flower recognition. 
    General object duplicate detection may fail to detect chess pieces; however, considering context, such as the chess board position and the height of the piece, detection can be made more accurate. We show that user interaction further improves image retrieval compared to pure content-based methods through a game called Epitome
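    The graph idea described above, modeling an object as local features (nodes) connected by spatial relations (edges), can be sketched as follows. This is a hypothetical simplification, not the dissertation’s method: the descriptor format, the naive nearest-descriptor matching, and the `tol` tolerance are all assumptions made for a self-contained example.

    ```python
    import math
    from itertools import combinations

    # Hypothetical sketch: an object is a graph whose nodes are local features
    # (descriptor + 2D position) and whose edges store pairwise distances.
    # Two graphs match well when corresponding descriptors agree AND their
    # spatial relations are preserved.

    def build_graph(features):
        """features: list of (descriptor_tuple, (x, y)) pairs."""
        nodes = list(features)
        edges = {(i, j): math.dist(nodes[i][1], nodes[j][1])
                 for i, j in combinations(range(len(nodes)), 2)}
        return nodes, edges

    def match_score(g1, g2, tol=0.2):
        """Fraction of edges in g1 whose length is preserved (within tol) in g2."""
        n1, e1 = g1
        n2, e2 = g2
        # Naive matching: each node in g1 maps to its nearest descriptor in g2.
        match = {i: min(range(len(n2)),
                        key=lambda j: math.dist(n1[i][0], n2[j][0]))
                 for i in range(len(n1))}
        ok = sum(1 for (i, j), d in e1.items()
                 if abs(e2.get(tuple(sorted((match[i], match[j]))), -1.0) - d)
                 <= tol * d)
        return ok / max(len(e1), 1)
    ```

    Checking edge lengths, rather than absolute positions, is what makes this kind of representation tolerant to translation and moderate viewpoint change without building an explicit 3D model.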

    Collected papers, Vol. V

