21 research outputs found

    The Ninth Visual Object Tracking VOT2021 Challenge Results


    Robust watermarking of . . .

    This paper introduces two spatial methods in order to embed watermark data into fingerprint images, without corrupting their features. The first method inserts watermark data after feature extraction, thus preventing watermarking of regions used for fingerprint classification. The method utilizes an image adaptive strength adjustment technique which results in watermarks with low visibility. The second method introduces a feature adaptive watermarking technique for fingerprints, thus applicable before feature extraction. For both of the methods, decoding does not require the original fingerprint image. Unlike most of the published spatial watermarking methods, the proposed methods provide high decoding accuracy for fingerprint images. High data hiding and decoding performance for color images is also observed.
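    The abstract gives no implementation detail, so the following is only a minimal sketch of the image-adaptive strength idea, not the paper's actual scheme; the function name, the key-driven pixel selection, and the 3x3 variance mask are all assumptions. Each watermark bit additively perturbs one pseudo-random pixel, and the perturbation is scaled down in smooth regions to keep visibility low.

    import numpy as np

    def embed_spatial_watermark(image, bits, key=42, base_strength=4.0):
        # Additive spatial-domain embedding with image-adaptive strength:
        # each bit perturbs one key-selected pixel, and the perturbation
        # is scaled by the local 3x3 standard deviation, so flat regions
        # receive a weaker, less visible mark.
        rng = np.random.default_rng(key)
        h, w = image.shape
        out = image.astype(np.float64)  # float copy of the grayscale image
        locations = rng.choice(h * w, size=len(bits), replace=False)
        for bit, flat in zip(bits, locations):
            r, c = divmod(int(flat), w)
            patch = image[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            strength = base_strength * (0.5 + patch.std() / 64.0)
            out[r, c] += strength if bit else -strength
        return np.clip(out, 0, 255).astype(np.uint8)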

    A Spatial Method for Watermarking of Fingerprint Images

    This paper extends the watermarking method introduced in [1] in order to embed watermark data into fingerprint images, without corrupting their features. Two methods are proposed. The first method inserts watermark data after feature extraction, thus preventing watermarking of regions used for fingerprint classification. The method utilizes an image adaptive strength adjustment technique which results in watermarks with low visibility. The second method introduces a feature adaptive watermarking technique for fingerprints, thus applicable before feature extraction. For both of the methods, decoding does not require the original fingerprint image. Unlike most of the published spatial watermarking methods, the proposed methods provide high decoding accuracy for fingerprint images.
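    Both abstracts stress that decoding does not need the original image. A blind decoder can be sketched under the same assumptions as the hypothetical embedding above (key-driven pixel selection, 3x3 neighbourhoods); this is an illustration of the blind-decoding idea, not the paper's actual decoder. Each bit is recovered from the marked image alone by comparing the pixel with the mean of its neighbours.

    import numpy as np

    def decode_spatial_watermark(marked, n_bits, key=42):
        # Blind decoding: regenerate the same key-driven pixel locations
        # and compare each marked pixel with the mean of its neighbours;
        # the original image is never consulted.
        rng = np.random.default_rng(key)
        h, w = marked.shape
        m = marked.astype(np.float64)
        locations = rng.choice(h * w, size=n_bits, replace=False)
        bits = []
        for flat in locations:
            r, c = divmod(int(flat), w)
            patch = m[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            # The neighbourhood mean (centre excluded) approximates the
            # unmarked pixel value; the sign of the residual is the bit.
            neighbour_mean = (patch.sum() - m[r, c]) / (patch.size - 1)
            bits.append(1 if m[r, c] > neighbour_mean else 0)
        return bits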

    Perceptual audio features for emotion detection

    In this article, we propose a new set of acoustic features for automatic emotion recognition from audio. The features are based on the perceptual quality metrics given in the perceptual evaluation of audio quality known as the ITU BS.1387 recommendation. Starting from the outer and middle ear models of the auditory system, we base our features on the masked perceptual loudness, which defines relatively objective criteria for emotion detection. The features, computed in critical bands based on the reference concept, include the partial loudness of the emotional difference, the emotional difference-to-perceptual mask ratio, measures of alterations of temporal envelopes, measures of harmonics of the emotional difference, the occurrence probability of emotional blocks, and the perceptual bandwidth. A soft-majority voting decision rule that strengthens conventional majority voting is proposed to assess the classifier outputs. Compared to state-of-the-art systems including the Munich Open-Source Emotion and Affect Recognition Toolkit, the Hidden Markov Toolkit, and Generalized Discriminant Analysis, the emotion recognition rates are shown to improve by 7-16% on EMO-DB and 7-11% on VAM for the "all" and "valence" tasks.
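    The abstract does not spell out the soft-majority rule itself. A common way to "soften" majority voting, shown in the hypothetical sketch below (names and array shapes are assumptions), is to sum class posteriors across blocks instead of counting hard per-block decisions, so confident blocks carry more weight.

    import numpy as np

    def soft_majority_vote(block_posteriors):
        # block_posteriors: (n_blocks, n_classes) array holding one
        # probability distribution over emotion classes per audio block.
        # Hard majority voting would count per-block winners:
        #   np.bincount(p.argmax(axis=1)).argmax()
        # Soft voting sums the posteriors instead, so a few confident
        # blocks can outweigh many barely-decided ones.
        p = np.asarray(block_posteriors, dtype=np.float64)
        return int(p.sum(axis=0).argmax())

    # Two blocks weakly favour class 0, one strongly favours class 1:
    # hard voting picks 0, soft voting picks 1.
    print(soft_majority_vote([[0.51, 0.49], [0.52, 0.48], [0.05, 0.95]]))  # -> 1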