11 research outputs found

    Implementation of Adaptive Unsharp Masking as a pre-filtering method for watermark detection and extraction

    Digital watermarking has been a focal point of research into multimedia security over the last decade. Watermark data belonging to the user are embedded in an original work such as text, audio, image, or video, so that product ownership can be proved. Various robust watermarking algorithms have been developed to extract or detect the watermark under attacks. Although watermarking algorithms in the transform domain differ from one another in their combinations of transform techniques, it is difficult to decide on an algorithm for a specific application. Therefore, instead of developing yet another watermarking algorithm with a different combination of transform techniques, we propose a novel and effective watermark extraction and detection method based on pre-filtering, namely Adaptive Unsharp Masking (AUM). Although Unsharp Masking (UM)-based pre-filtering is used for watermark extraction/detection in the literature, because it makes the details of the watermarked image more manifest, its effectiveness may decrease under some attacks. In this study, AUM is proposed as a pre-filter that remedies the disadvantages of UM. Experimental results show that AUM performs up to 11% better in objective quality metrics than the results obtained without pre-filtering. Moreover, AUM is as effective for pre-filtering in transform-domain image watermarking as it is in image enhancement, and it can be applied in an algorithm-independent way.
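    The idea behind UM and its adaptive variant can be sketched on a 1-D signal: add a gain-weighted high-frequency residual back to the signal, with the adaptive version varying the gain with local activity. The 3-tap blur, gain values, and threshold below are illustrative assumptions, not the paper's exact formulation.

```python
def unsharp_mask(signal, lam=1.0):
    # Classic UM: boost the high-frequency residual (signal - blurred)
    # with a fixed gain lam before watermark extraction/detection.
    out = []
    for i in range(len(signal)):
        lo, hi = max(i - 1, 0), min(i + 1, len(signal) - 1)
        blurred = (signal[lo] + signal[i] + signal[hi]) / 3.0  # 3-tap blur
        out.append(signal[i] + lam * (signal[i] - blurred))
    return out

def adaptive_unsharp_mask(signal, lam_low=0.5, lam_high=2.0, thresh=1.0):
    # AUM-style variant: weaker gain in smooth regions, stronger gain where
    # local detail is large (a plain heuristic, not the paper's exact rule).
    out = []
    for i in range(len(signal)):
        lo, hi = max(i - 1, 0), min(i + 1, len(signal) - 1)
        blurred = (signal[lo] + signal[i] + signal[hi]) / 3.0
        detail = signal[i] - blurred
        lam = lam_high if abs(detail) > thresh else lam_low
        out.append(signal[i] + lam * detail)
    return out
```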

    Human abnormal behavior impact on speaker verification systems

    Human behavior plays a major role in improving human-machine communication. Performance is inevitably affected by abnormal behavior, since systems are trained on normal utterances. Abnormal behavior is often associated with a change in the human emotional state. Different emotional states cause physiological changes in the human body that affect the vocal tract. We perceive fear, anger, or even happiness as deviations from normal behavior. The whole spectrum of human-machine applications is susceptible to behavioral changes. Abnormal behavior is a major factor, especially for security applications such as verification systems. Face, fingerprint, iris, and speaker verification are among the most common approaches to biometric authentication today. This paper discusses normal and abnormal human behavior and its impact on the accuracy and effectiveness of automatic speaker verification (ASV). The inputs to the support vector machine classifier are Mel-frequency cepstral coefficients and their dynamic changes. For this purpose, the Berlin Database of Emotional Speech was used. The research has shown that abnormal behavior has a major impact on verification accuracy, with the equal error rate increasing to 37%. This paper also describes a new design and application of an ASV system that is far less prone to rejecting a target user who exhibits abnormal behavior.
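    The "dynamic changes" of the cepstral coefficients are typically computed with the standard delta-regression formula; a minimal sketch, assuming per-frame MFCC vectors are already available:

```python
def delta(frames, width=2):
    # Standard regression formula for dynamic (delta) features:
    # d[t] = sum_{n=1..N} n * (c[t+n] - c[t-n]) / (2 * sum_{n=1..N} n^2),
    # with frame indices clamped at the edges of the utterance.
    denom = 2 * sum(n * n for n in range(1, width + 1))
    T = len(frames)
    out = []
    for t in range(T):
        acc = [0.0] * len(frames[0])
        for n in range(1, width + 1):
            prev = frames[max(t - n, 0)]
            nxt = frames[min(t + n, T - 1)]
            for k in range(len(acc)):
                acc[k] += n * (nxt[k] - prev[k])
        out.append([a / denom for a in acc])
    return out
```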

    DeepVoCoder: A CNN model for compression and coding of narrow band speech

    This paper proposes a convolutional neural network (CNN)-based encoder model to compress and code the speech signal directly from raw input speech. Although the model can synthesize wideband speech by implicit bandwidth extension, narrowband is preferred for IP telephony and telecommunications purposes. The model takes time-domain speech samples as inputs and encodes them using a cascade of convolutional filters in multiple layers, where pooling is applied after some layers to downsample the encoded speech by half. The final bottleneck layer of the CNN encoder provides an abstract and compact representation of the speech signal. In this paper, it is demonstrated that this compact representation is sufficient for the CNN decoder to reconstruct the original speech signal in high quality. This paper also discusses the theoretical background of why and how CNNs may be used for end-to-end speech compression and coding. Complexity, delay, memory requirements, and bit rate versus quality are discussed in the experimental results.
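    The halving-by-pooling step and the resulting bottleneck size can be sketched as follows; the pooling width and layer count are illustrative, not the paper's exact architecture:

```python
def max_pool_halve(x):
    # Non-overlapping max pooling with window 2 and stride 2:
    # each pooling layer halves the temporal resolution of the encoding.
    return [max(x[i], x[i + 1]) for i in range(0, len(x) - 1, 2)]

def bottleneck_length(n_samples, n_pool_layers):
    # Length of the encoder's compact representation after repeated halving.
    return n_samples // (2 ** n_pool_layers)
```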

    A rule based prosody model for Turkish text-to-speech synthesis

    This paper presents our novel prosody model in a Turkish text-to-speech (TTS) synthesis system. After developing a TTS system driven by parametric features consisting of duration, pitch, and energy modifications, we set out to formulate prosody rules that increase the naturalness of our synthesizer. Since inflected verbs in Turkish can be stand-alone sentences with the suffixes they take, we build a perceptual prosody model by defining rules on the stress patterns of verb inflections. Affirmative, negative, and interrogative (both positive and negative) forms of many verbs were examined in a systematic way. Not only verbs but also, in the same way, some phrases were examined to obtain proper prosody. According to the results of listening tests, the defined rules, based on duration, pitch, and energy modification weights, result in perceptually better speech synthesis, namely an improvement of about 1.78/5.0 on average in the CMOS (Comparative Mean Opinion Score) test. This improvement demonstrates the success of our novel prosody model.
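    Structurally, a rule-based prosody model maps each syllable's stress label to duration, pitch, and energy multipliers that drive the synthesizer. The rules and weight values below are hypothetical placeholders for illustration only, not the paper's actual Turkish stress rules:

```python
# Hypothetical rule table: stress label -> (duration, pitch, energy) weights.
RULES = {"stressed": (1.2, 1.15, 1.1), "unstressed": (0.95, 1.0, 0.9)}

def apply_prosody(syllables):
    # Each input item is (syllable_text, stress_label); the output carries
    # the modification weights a parametric synthesizer would apply.
    out = []
    for text, stress in syllables:
        d, p, e = RULES[stress]
        out.append({"syllable": text, "dur_w": d, "pitch_w": p, "energy_w": e})
    return out
```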


    A novel derivative-based classification method for hyperspectral data processing

    In hyperspectral classification, the derivative of reflectance spectra is used either directly or fused with the reflectance spectra, and classification performance is thereby improved. However, for land cover, and especially for plant species, the reflectance spectra may differ depending on plant age and maturity level. This situation makes traditional classification methods, which are based on spectral similarity, time-dependent. In addition, the problem of classifying species that have similar spectral properties remains open. As a solution to the time-dependency and spectral-similarity problems, this study proposes a new and more generic method based on the spectral derivative. The method is tested on hyperspectral images captured at different times of the year and in different places, across the life cycle of the species. Test results show that the proposed method successfully classifies land cover in a time-independent manner and is superior to classical classification methods.
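    A first-order spectral derivative can be sketched as below. Note that it cancels any additive offset in reflectance (e.g. a uniform brightness shift between acquisitions), which is one reason derivative features are less time-dependent than raw spectra; the finite-difference form is a generic sketch, not the paper's exact operator.

```python
def spectral_derivative(spectrum, step=1.0):
    # First-order finite difference over the wavelength axis; emphasizes
    # band shape over absolute reflectance level, so a constant offset
    # added to the whole spectrum leaves the derivative unchanged.
    return [(spectrum[i + 1] - spectrum[i]) / step
            for i in range(len(spectrum) - 1)]
```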

    A flexible bit rate switching method for low bit rate vocoders

    Robust low-bit-rate speech coders are essential in commercial and military communication systems. They operate at fixed bit rates, and these rates cannot be altered without major modifications to the vocoder design. In this paper we introduce a novel approach to vocoders: coding the time-scale-modified input speech signal. The proposed method offers any bit rate from 2400 down to 720 bits/s without modifying the vocoder structure. Simulation results, which mainly concentrate on intelligibility, talker recognisability, and voice quality versus codec complexity and delay, are also presented. The proposed scaled speech coder delivers communication-quality speech at half the bit rate of the new US Federal Standard, the mixed-excitation linear prediction (MELP) vocoder.
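    The bit-rate arithmetic is straightforward: if the input is time-scale compressed by a factor s before being fed to a fixed-rate 2400 bits/s coder, the effective rate per second of original speech is 2400·s (s = 0.3 gives 720 bits/s). The index-mapping resampler below is only a structural sketch; a real vocoder would use a pitch-synchronous time-scale modification algorithm.

```python
def time_scale(samples, scale):
    # Naive resampling by index mapping (scale < 1 compresses in time).
    # Illustrative only: quality TSM would preserve pitch explicitly.
    n_out = int(len(samples) * scale)
    return [samples[int(i / scale)] for i in range(n_out)]

def effective_bit_rate(base_rate, scale):
    # Bits spent per second of *original* speech when the time-scaled
    # signal is coded at the vocoder's fixed base rate.
    return base_rate * scale
```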

    Deep learning serves voice cloning: How vulnerable are automatic speaker verification systems to spoofing trials?

    This article examines the reliability of automatic speaker verification (ASV) systems against new synthesis methods based on deep neural networks. ASV systems are widely deployed for secure and effective biometric authentication. On the other hand, the rapid deployment of ASV systems attracts increasing attention from attackers armed with newer and more sophisticated spoofing methods. Until recently, speech synthesis of a reference speaker did not seriously compromise the latest ASV systems. This situation is changing with the introduction of deep neural networks into the synthesis process. Projects such as WaveNet, Deep Voice, Voice Loop, and many others generate very natural, high-quality speech that can clone a voice identity. We are slowly approaching an era in which we will not be able to distinguish a genuine voice from a synthesized one. It is therefore necessary to determine the robustness of current ASV systems to new methods of voice cloning. In this article, the well-known SVM and GMM approaches as well as new CNN-based ASV systems are applied and subjected to speech synthesized by the Tacotron 2 with WaveNet TTS system. The results of this work confirm our concerns about the reliability of ASV systems against synthesized speech.
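    The equal error rate commonly used to compare ASV systems (including in the study above) can be estimated from genuine and impostor score sets by sweeping a decision threshold; a minimal sketch:

```python
def equal_error_rate(genuine_scores, impostor_scores):
    # Sweep candidate thresholds and return the operating point where the
    # false rejection rate (FRR) and false acceptance rate (FAR) are
    # (approximately) equal; reported as their average at the closest point.
    best_gap, eer = 1.0, None
    for thr in sorted(genuine_scores + impostor_scores):
        frr = sum(s < thr for s in genuine_scores) / len(genuine_scores)
        far = sum(s >= thr for s in impostor_scores) / len(impostor_scores)
        if abs(frr - far) < best_gap:
            best_gap, eer = abs(frr - far), (frr + far) / 2
    return eer
```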

    Genome-Wide Transcriptional Reorganization Associated with Senescence-to-Immortality Switch during Human Hepatocellular Carcinogenesis

    Senescence is a permanent proliferation arrest in response to cell stress such as DNA damage. It contributes strongly to tissue aging and serves as a major barrier against tumor development. Most tumor cells are believed to bypass the senescence barrier (become "immortal") by inactivating growth-control genes such as TP53 and CDKN2A; they also reactivate telomerase reverse transcriptase. The senescence-to-immortality transition is accompanied by major phenotypic and biochemical changes mediated by genome-wide transcriptional modifications. This appears to happen during hepatocellular carcinoma (HCC) development in patients with liver cirrhosis; however, the accompanying transcriptional changes are virtually unknown. We investigated genome-wide transcriptional changes related to the senescence-to-immortality switch during hepatocellular carcinogenesis. Initially, we performed transcriptome analysis of senescent and immortal clones of the Huh7 HCC cell line and identified genes with significant differential expression to establish a senescence-related gene list. Through the analysis of senescence-related gene expression in different liver tissues, we showed that cirrhosis and HCC display expression patterns compatible with the senescent and immortal phenotypes, respectively, dysplasia being a transitional state. Gene set enrichment analysis revealed that cirrhosis/senescence-associated genes were preferentially expressed in non-tumor tissues, less malignant tumors, and differentiated or senescent cells. In contrast, HCC/immortality genes were up-regulated in tumor tissues, more malignant tumors, and progenitor cells. In HCC tumors and immortal cells, genes involved in DNA repair, the cell cycle, telomere extension, and branched-chain amino acid metabolism were up-regulated, whereas genes involved in cell signaling, as well as in drug, lipid, retinoid, and glycolytic metabolism, were down-regulated. Based on these distinctive gene expression features, we developed a 15-gene hepatocellular immortality signature test that discriminated HCC from cirrhosis with high accuracy. Our findings demonstrate that senescence bypass plays a central role in hepatocellular carcinogenesis, engendering systematic changes in the transcription of genes regulating DNA repair, proliferation, differentiation, and metabolism.
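    A signature test of this kind can be reduced to a simple score: mean expression of the up-regulated (immortality) genes minus mean expression of the down-regulated (senescence) genes, thresholded. The gene names, weights, and threshold below are placeholders, not the paper's actual 15-gene panel:

```python
def signature_score(expr, up_genes, down_genes):
    # Mean expression of the up-regulated set minus mean expression of the
    # down-regulated set; expr maps gene name -> (normalized) expression.
    up = sum(expr[g] for g in up_genes) / len(up_genes)
    down = sum(expr[g] for g in down_genes) / len(down_genes)
    return up - down

def classify(expr, up_genes, down_genes, threshold=0.0):
    # Samples scoring above the threshold look "immortal" (HCC-like);
    # the rest look "senescent" (cirrhosis-like).
    score = signature_score(expr, up_genes, down_genes)
    return "HCC-like" if score > threshold else "cirrhosis-like"
```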

    A novel approach for small sample size family-based association studies: sequential tests

    In this paper, we propose a sequential probability ratio test (SPRT) to overcome the problem of limited samples in studies of complex genetic diseases. The results of this novel approach are compared with those obtained from the traditional transmission disequilibrium test (TDT) on simulated data. Whereas TDT classifies single-nucleotide polymorphisms (SNPs) into only two groups (SNPs associated with the disease and the others), SPRT has the flexibility of assigning SNPs to a third group, that is, those for which we do not yet have enough evidence and should keep sampling. It is shown that SPRT yields lower false-positive and false-negative rates, as well as better accuracy and sensitivity, when classifying SNPs compared with TDT. By using SPRT, data with small sample sizes become usable for accurate association analysis.
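    Wald's SPRT, which underlies the proposed test, accumulates the log likelihood ratio over successive observations and stops only when it crosses a boundary derived from the target error rates; anything in between stays in the "keep sampling" group. A minimal sketch (the per-observation increments would come from the genetic model, which is not reproduced here):

```python
import math

def sprt(log_lr_increments, alpha=0.05, beta=0.05):
    # Wald boundaries: accept H1 when the cumulative log likelihood ratio
    # exceeds log((1-beta)/alpha), accept H0 when it falls below
    # log(beta/(1-alpha)); otherwise keep sampling (the "third group").
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    llr = 0.0
    for x in log_lr_increments:
        llr += x
        if llr >= upper:
            return "accept H1"      # e.g. SNP associated with the disease
        if llr <= lower:
            return "accept H0"      # e.g. SNP not associated
    return "continue sampling"      # not enough evidence yet
```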