41 research outputs found

    Synthesis and characterization of some heterocyclic compounds, including oxazoles, thiazoles, pyridazines, phthalazines and pyrazoles, with evaluation of their biological activity

    Get PDF
    A series of new compounds has been synthesized, including methyl p-bromophenoacetate [2], N-(aminocarbonyl)-p-bromophenoacetamide [3], N-(aminothioyl)-p-bromophenoacetamide [4], N-[4-(p-diphenyl)-1,3-oxazol-2-yl]-p-bromophenoacetamide [5], N-[4-(p-diphenyl)-1,3-thiazol-2-yl]-p-bromophenoacetamide [6], p-bromophenoacetic acid hydrazide [7], 1-N-(p-bromophenoacetyl)-1,2-dihydropyridazine-3,6-dione [8], 1-N-(p-bromophenoacetyl)-1,2-dihydrophthalazine-3,8-dione [9], 1-(p-bromophenoacetyl)-3-methylpyrazol-5-one [10] and 1-(p-bromophenoacetyl)-3,5-dimethylpyrazole [11]. The prepared compounds were characterized by melting point, FT-IR and ¹H-NMR spectroscopy, and their biological activity was evaluated.

    Speech enhancement algorithm based on super-Gaussian modeling and orthogonal polynomials

    Get PDF
    © 2020 Lippincott Williams and Wilkins. All rights reserved. Different types of noise from the surrounding environment always interfere with speech and produce annoying signals for the human auditory system. To exchange speech information in a noisy environment, speech quality and intelligibility must be maintained, which is a challenging task. In most speech enhancement algorithms, the speech signal is characterized by Gaussian or super-Gaussian models, and noise is characterized by a Gaussian prior. However, these assumptions do not always hold in real-life situations, thereby negatively affecting the estimation, and eventually, the performance of the enhancement algorithm. Accordingly, this paper focuses on deriving an optimum low-distortion estimator with models that fit well with speech and noise data signals. This estimator provides minimum levels of speech distortion and residual noise with additional improvements in speech perceptual aspects via four key steps. First, a recent transform based on an orthogonal polynomial is used to map the observation signal into a transform domain. Second, noise classification based on feature extraction is adopted to find accurate and adaptable models for noise signals. Third, two stages of nonlinear and linear estimators based on the minimum mean square error (MMSE) and new models for speech and noise are derived to estimate a clean speech signal. Finally, the estimated speech signal in the time domain is determined by taking the inverse of the orthogonal transform. The results show that the average classification accuracy of the proposed approach is 99.43%. In addition, the proposed algorithm significantly outperforms existing speech estimators in terms of quality and intelligibility measures.
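    The transform → per-coefficient gain → inverse-transform flow described above can be sketched in a few lines. The paper derives nonlinear and linear MMSE stages under super-Gaussian priors in a recent orthogonal-polynomial domain; in this sketch an orthonormal DCT-II basis and the Gaussian-case Wiener gain G = ξ/(1+ξ) with oracle variances stand in, purely to illustrate the pipeline, not the authors' estimator:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N = 256
    # Orthonormal DCT-II matrix (stand-in for the paper's orthogonal-polynomial transform)
    k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0] /= np.sqrt(2.0)

    t = np.arange(N)
    clean = np.sin(2 * np.pi * 8 * t / N)        # stand-in "speech" frame
    noisy = clean + rng.normal(0.0, 0.3, N)      # additive Gaussian noise

    Y = C @ noisy                                # step 1: forward transform
    xi = (C @ clean) ** 2 / 0.3 ** 2             # a-priori SNR (oracle, for the sketch)
    G = xi / (1.0 + xi)                          # Wiener (linear MMSE) gain
    enhanced = C.T @ (G * Y)                     # step 4: inverse transform
    ```

    Because the gain shrinks coefficients with low a-priori SNR toward zero, the residual error of `enhanced` is concentrated in the few coefficients that carry the signal, which is why the estimated frame is much closer to the clean one than the noisy observation.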

    Signal compression and enhancement using a new orthogonal-polynomial-based discrete transform

    Get PDF
    Discrete orthogonal functions are important tools in digital signal processing. These functions have received considerable attention in the last few decades. This study proposes a new set of orthogonal functions called the discrete Krawtchouk-Tchebichef transform (DKTT). Two traditional orthogonal polynomials, namely Krawtchouk and Tchebichef, are combined to form the DKTT. The theoretical and mathematical frameworks of the proposed transform are provided. DKTT was tested using speech and image signals from a well-known database under clean and noisy environments. DKTT was applied in a speech enhancement algorithm to evaluate the efficient removal of noise from a speech signal. The performance of DKTT was compared with that of standard transforms. Different types of distance (similarity index) and objective measures in terms of image quality, speech quality, and speech intelligibility assessments were used for comparison. Experimental tests show that DKTT exhibited remarkable achievements and excellent results in signal compression and speech enhancement. Therefore, DKTT can be considered a new set of orthogonal functions for future applications of signal processing.
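    A small sketch of the idea of combining two discrete orthogonal bases. Here both bases are built by weighted QR on the monomials (a uniform weight gives the discrete Tchebichef/Gram polynomials and a binomial weight gives the weighted Krawtchouk polynomials, both up to sign); the product form of the hybrid transform is an assumption for illustration, not the paper's exact DKTT definition:

    ```python
    import numpy as np
    from math import comb

    def discrete_orthonormal_basis(N, weight):
        # Weighted Gram-Schmidt (via QR) on the monomials 1, x, x^2, ...
        # sampled on x = 0..N-1 under the given positive weight.
        x = np.arange(N, dtype=float)
        V = np.vander(x, N, increasing=True)
        Q, _ = np.linalg.qr(np.sqrt(weight)[:, None] * V)
        return Q.T                      # rows: orthonormal basis functions

    N = 8
    uniform = np.full(N, 1.0 / N)
    binomial = np.array([comb(N - 1, j) for j in range(N)]) / 2.0 ** (N - 1)
    T = discrete_orthonormal_basis(N, uniform)    # Tchebichef-like basis
    K = discrete_orthonormal_basis(N, binomial)   # Krawtchouk-like basis (p = 0.5)
    R = K @ T.T    # combined transform (assumed form of the hybrid)
    ```

    Since `K` and `T` are both orthogonal matrices, their product `R` is again orthogonal, so the combined transform keeps perfect reconstruction (`R.T @ (R @ s) == s`), which is the property that makes such hybrids usable for compression.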

    A fast feature extraction algorithm for image and video processing

    Get PDF
    Medical images and videos are utilized to discover, diagnose and treat diseases. Managing, storing, and retrieving images effectively are therefore important topics. The rapid growth of multimedia data, including medical images and videos, has caused a swift rise in data transmission volume and repository size. Multimedia data contains useful information; however, it consumes an enormous amount of storage space, and processing that sheer volume of data requires considerable time. Image and video applications demand a reduction in the computational cost (processing time) of feature extraction. This paper introduces a novel method to compute transform coefficients (features) from images or video frames. These features are used to represent the local visual content of images and video frames. We compared the proposed method with the traditional approach of feature extraction using a standard image technique. Furthermore, the proposed method is employed in shot boundary detection (SBD) applications to detect transitions in video frames. The standard TRECVID 2005, 2006, and 2007 video datasets are used to evaluate the performance of the SBD applications. The achieved results show that the proposed algorithm significantly reduces the computational cost in comparison to the traditional method.

    Image edge detection operators based on orthogonal polynomials

    Get PDF
    Orthogonal polynomials (OPs) are beneficial for image processing. OPs are used to map an image or a scene into a moment domain, and the moments are subsequently used to extract object contours utilised in various applications. In this study, OP-based edge detection operators are introduced to replace traditional convolution-based and block processing methods with direct matrix multiplication. A mathematical model with empirical study results is established to investigate the performance of the proposed detectors compared with that of traditional algorithms, such as the Sobel and Canny operators. The proposed operators are then evaluated using entire images from a well-known data set. Experimental results reveal that the proposed operator achieves a more favourable interpretation, especially for images distorted by motion effects, than traditional methods do.
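    The key structural idea, replacing sliding-window convolution with direct matrix products, can be illustrated with a simple difference operator. The matrix `D` below is an illustrative stand-in for the paper's OP-based operators, not the proposed kernel:

    ```python
    import numpy as np

    def difference_matrix(n):
        # Central-difference operator as an n x n matrix (one-sided at the
        # borders), so image gradients become plain matrix products.
        D = np.zeros((n, n))
        for i in range(n):
            D[i, min(i + 1, n - 1)] += 1.0
            D[i, max(i - 1, 0)] -= 1.0
        return D

    n = 8
    img = np.zeros((n, n))
    img[:, n // 2:] = 1.0           # vertical step edge between columns 3 and 4

    D = difference_matrix(n)
    gy = D @ img                     # row-direction gradient: one matrix product
    gx = img @ D.T                   # column-direction gradient: one matrix product
    mag = np.hypot(gx, gy)           # edge-magnitude map
    ```

    On this synthetic step image the magnitude map is nonzero only at the two columns straddling the edge, and no per-pixel window loop is needed; the whole gradient is two matrix multiplications, which is the cost structure the OP-based operators exploit.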

    Fast recursive computation of Krawtchouk polynomials

    Get PDF
    Krawtchouk polynomials (KPs) and their moments are used widely in the field of signal processing for their superior discriminatory properties. This study proposes a new fast recursive algorithm to compute Krawtchouk polynomial coefficients (KPCs). This algorithm is based on the symmetry property of KPCs along the primary and secondary diagonals of the polynomial array. The n-x plane of the KP array is partitioned into four triangles, which are symmetrical across the primary and secondary diagonals. The proposed algorithm computes the KPCs for only one triangle (partition), while the coefficients of the other three triangles (partitions) can be computed using the derived symmetry properties of the KP. Therefore, only N / 4 recursion times are required. The proposed algorithm can also be used to compute polynomial coefficients for different values of the parameter p in the interval (0, 1). The performance of the proposed algorithm is compared with that reported in previous literature in terms of image reconstruction error, polynomial size, and computation cost. Moreover, the proposed algorithm is applied in a face recognition system to determine the impact of the parameter p on feature extraction ability. Simulation results show that the proposed algorithm has a remarkable advantage over other existing algorithms for a wide range of parameters p and polynomial sizes N, especially in reducing the computation time and the number of operations utilized.
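    The symmetries the algorithm exploits can be checked numerically. The sketch below builds the weighted (orthonormal) Krawtchouk matrix with the classical three-term recurrence in n, not the paper's quadrant-based fast algorithm, and then verifies the primary-diagonal symmetry (self-duality, any p) and the secondary-diagonal symmetry that holds at p = 1/2:

    ```python
    import numpy as np
    from math import comb

    def krawtchouk_matrix(N, p=0.5):
        # Weighted (orthonormal) Krawtchouk matrix A[n, x], n, x = 0..N,
        # via the classical three-term recurrence in n.
        x = np.arange(N + 1, dtype=float)
        K = np.zeros((N + 1, N + 1))
        K[0] = 1.0
        K[1] = 1.0 - x / (p * N)
        for n in range(1, N):
            a = p * (N - n)
            K[n + 1] = ((a + n * (1 - p) - x) * K[n] - n * (1 - p) * K[n - 1]) / a
        # binomial weight and squared norms for orthonormalisation
        w = np.array([comb(N, j) for j in range(N + 1)]) * p ** x * (1 - p) ** (N - x)
        rho = np.array([((1 - p) / p) ** n / comb(N, n) for n in range(N + 1)])
        return K * np.sqrt(w[None, :] / rho[:, None])

    N = 8
    A = krawtchouk_matrix(N, p=0.5)
    ```

    Because `A[n, x] == A[x, n]` and, at p = 1/2, `A[n, N - x] == (-1)**n * A[n, x]`, computing one triangle of the array determines the other three, which is exactly the redundancy the fast algorithm removes.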

    Shot boundary detection based on orthogonal polynomial

    Get PDF
    Shot boundary detection (SBD) is a substantial step in video content analysis, indexing, retrieval, and summarization. SBD is the process of automatically partitioning a video into its basic units, known as shots, by detecting transitions between shots. The design of SBD algorithms has developed from simple feature comparison to rigorous probabilistic methods and the use of complex models. Nevertheless, accelerating the detection of transitions while maintaining high accuracy still needs improvement. Extensive research has employed orthogonal polynomials (OPs) and their moments in computer vision and signal processing owing to their powerful performance in analyzing signals. A new SBD algorithm based on OPs is proposed in this paper. Features are derived from the orthogonal transform domain (moments) to detect the hard transitions in video sequences. Moments are used because of their ability to represent a signal (video frame) without information redundancy. These features are the moments of smoothed and gradient versions of the video frames, computed using a developed OP, the squared Krawtchouk-Tchebichef polynomial. The moments (smoothed and gradient) are fused to form a feature vector. Finally, a support vector machine is utilized to detect hard transitions. In addition, a comparison between the proposed algorithm and other state-of-the-art algorithms is performed to reinforce the capability of the proposed work. The proposed algorithm is examined using three well-known datasets: TRECVID 2005, TRECVID 2006, and TRECVID 2007. The outcomes of the comparative analysis show the superior performance of the proposed algorithm against other existing algorithms.
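    A minimal sketch of moment-based hard-cut detection: each frame is projected onto an orthonormal polynomial basis, and a large jump between the moment signatures of consecutive frames marks a transition. The basis here is a Tchebichef-like stand-in obtained by QR on monomials, and a simple distance peak replaces the paper's squared Krawtchouk-Tchebichef polynomial and SVM classifier:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 16
    # Orthonormal polynomial-like basis: QR on the monomials x^0..x^7
    V = np.vander(np.arange(n, dtype=float), 8, increasing=True)
    Q, _ = np.linalg.qr(V)                       # n x 8, orthonormal columns

    def moments(frame):
        # 8 x 8 moment signature of one frame, flattened to a feature vector
        return (Q.T @ frame @ Q).ravel()

    # Synthetic video: 5 frames from a dark shot, then 5 from a bright shot
    shot_a = rng.normal(0.2, 0.05, (n, n))
    shot_b = rng.normal(0.8, 0.05, (n, n))
    frames = [shot_a + rng.normal(0, 0.01, (n, n)) for _ in range(5)] + \
             [shot_b + rng.normal(0, 0.01, (n, n)) for _ in range(5)]

    # Distance between moment signatures of consecutive frames
    d = [np.linalg.norm(moments(frames[i + 1]) - moments(frames[i]))
         for i in range(len(frames) - 1)]
    cut = int(np.argmax(d))        # hard cut flagged between frames cut and cut+1
    ```

    The within-shot distances stay near the noise floor while the cut produces a distance dominated by the change in content, which is why a classifier (or even a threshold) separates the two cases cleanly.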

    NSQM: A non-intrusive assessment of speech quality using normalized energies of the neurogram

    No full text
    This study proposes a new non-intrusive measure of speech quality, the neurogram speech quality measure (NSQM), based on the responses of a biologically-inspired computational model of the auditory system for listeners with normal hearing. The model simulates the responses of an auditory-nerve fiber with a characteristic frequency to a speech signal, and the population response of the model is represented by a neurogram (a 2D time-frequency representation). The responses at each characteristic frequency in the neurogram were decomposed into sub-bands using a 1D discrete wavelet transform. The normalized energy corresponding to each sub-band was used as an input to a support vector regression model to predict the quality score of the processed speech. The performance of the proposed non-intrusive measure was compared to the results from a range of intrusive and non-intrusive measures using three standard databases: EXP1 and EXP3 of supplement 23 to the P series (P.Supp23) of the ITU-T Recommendations and the NOIZEUS database. The proposed NSQM achieved overall better results than most of the existing metrics for the effects of compression codecs and additive and channel noise.
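    The per-band feature described above can be sketched with an explicit Haar wavelet decomposition. This is a stand-in for the paper's 1D DWT of one neurogram row (the auditory-model front end and the SVR quality predictor are omitted), assuming a signal length divisible by 2^levels:

    ```python
    import numpy as np

    def haar_band_energies(signal, levels=3):
        # Multi-level Haar DWT of one neurogram row; returns the normalised
        # energy of each detail band plus the final approximation band.
        s = np.asarray(signal, dtype=float)
        energies = []
        for _ in range(levels):
            a = (s[0::2] + s[1::2]) / np.sqrt(2)   # approximation coefficients
            d = (s[0::2] - s[1::2]) / np.sqrt(2)   # detail coefficients
            energies.append(np.sum(d ** 2))
            s = a
        energies.append(np.sum(s ** 2))            # deepest approximation band
        e = np.array(energies)
        return e / e.sum()                          # normalised energies

    row = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 64, endpoint=False))
    feat = haar_band_energies(row)                  # 4-element feature vector
    ```

    Because the Haar transform is orthonormal, the band energies sum to the total signal energy, so the normalised vector is a valid energy distribution over sub-bands; stacking such vectors across characteristic frequencies gives a fixed-size feature set for a regressor.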

    Speech Quality Factors for Traditional and Neural-Based Low Bit Rate Vocoders

    Get PDF
    International Conference on Quality of Multimedia Experience (QoMEX), Dublin, Ireland, 26-28 May 2020. This study compares the performance of different algorithms for coding speech at low bit rates. In addition to widely deployed traditional vocoders, a selection of recently developed generative-model-based coders at different bit rates are contrasted. Performance of the coded speech is evaluated for different quality aspects: accuracy of pitch period estimation, word error rates for automatic speech recognition, and the influence of speaker gender and coding delays. A number of performance metrics of speech samples taken from a publicly available database were compared with subjective scores. Results from subjective quality assessment do not correlate well with existing full-reference speech quality metrics. The results provide valuable insights into aspects of the speech signal that will be used to develop a novel metric to accurately predict speech quality from generative-model-based coders. Funding: Science Foundation Ireland; Insight Research Centre.