Statistically optimum pre- and postfiltering in quantization
We consider the optimization of pre- and postfilters surrounding a quantization system. The goal is to optimize the filters so that the mean square error is minimized under the key constraint that the quantization noise variance is directly proportional to the variance of the quantization system input. Unlike some previous work, the postfilter is not restricted to be the inverse of the prefilter. With no order constraint on the filters, we present closed-form solutions for the optimum pre- and postfilters when the quantization system is a uniform quantizer. Using these optimum solutions, we obtain a coding gain expression for the system under study. The coding gain expression clearly indicates that, at high bit rates, there is no loss of generality in restricting the postfilter to be the inverse of the prefilter. We then repeat the same analysis with first-order pre- and postfilters of the form 1+αz^-1 and 1/(1+γz^-1). Specifically, we study two cases: 1) FIR prefilter, IIR postfilter and 2) IIR prefilter, FIR postfilter. For each case, we obtain a mean square error expression, optimize the coefficients α and γ, and provide examples comparing the coding gain performance with the case α=γ. In the last section, we assume that the quantization system is an orthonormal perfect reconstruction filter bank. To apply the optimum pre- and postfilters derived earlier, the output of the filter bank must be wide-sense stationary (WSS), which, in general, is not the case. We provide two theorems, each under a different set of assumptions, that guarantee the wide-sense stationarity of the filter bank output. We then propose a suboptimum procedure to increase the coding gain of the orthonormal filter bank.
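The pre/postfiltered quantization setup can be made concrete with a minimal numerical sketch. This is not the paper's derivation: the uniform quantizer operates over the signal's observed range, the input is an AR(1) process with correlation 0.9, and the matched first-order coefficients α=γ=-0.5 are illustrative choices, not the paper's optimum values.

```python
import numpy as np

def quantize_uniform(x, n_bits=8):
    """Uniform quantizer over the signal's observed range."""
    lo, hi = x.min(), x.max()
    step = (hi - lo) / 2 ** n_bits
    return lo + (np.floor((x - lo) / step) + 0.5) * step

def pre_post_mse(x, alpha, gamma, n_bits=8):
    """MSE of: FIR prefilter (1 + alpha z^-1) -> uniform quantizer
    -> IIR postfilter 1/(1 + gamma z^-1)."""
    # FIR prefilter: y[n] = x[n] + alpha * x[n-1]
    y = x + alpha * np.concatenate(([0.0], x[:-1]))
    q = quantize_uniform(y, n_bits)
    # IIR postfilter: r[n] = q[n] - gamma * r[n-1]
    r = np.empty_like(q)
    prev = 0.0
    for n in range(len(q)):
        prev = q[n] - gamma * prev
        r[n] = prev
    return np.mean((x - r) ** 2)

# AR(1) test input with correlation 0.9 (illustrative)
rng = np.random.default_rng(0)
e = rng.standard_normal(10_000)
x = np.empty_like(e)
x[0] = e[0]
for n in range(1, len(e)):
    x[n] = 0.9 * x[n - 1] + e[n]

# matched filters (gamma = alpha, postfilter = inverse prefilter) vs. no filtering
mse_matched = pre_post_mse(x, alpha=-0.5, gamma=-0.5)
mse_plain = pre_post_mse(x, alpha=0.0, gamma=0.0)
print(mse_matched, mse_plain)
```

For a highly correlated input, the prefilter shrinks the quantizer input variance (and hence the noise variance) by more than the inverse postfilter re-amplifies it, so the matched pair yields a lower MSE than direct quantization, i.e. a coding gain above one.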
Analytic Conditions for Energy Neutrality in Uniformly-Formed Wireless Sensor Networks
Future deployments of wireless sensor network (WSN) infrastructures for environmental or event monitoring are expected to be equipped with energy harvesters (e.g., piezoelectric, thermal, photovoltaic) in order to substantially increase their autonomy. In this paper we derive conditions for energy neutrality, i.e. perpetual energy autonomy per sensor node, by balancing the node's expected energy consumption with its expected energy harvesting capability. Our analysis assumes a uniformly-formed WSN, i.e. a network comprising identical transmitter sensor nodes and identical receiver/relay sensor nodes with a balanced cluster-tree topology. The proposed framework is parametric to: (i) the duty cycle for the network activation; (ii) the number of nodes in the same tier of the cluster-tree topology; (iii) the consumption rate of the receiver node(s) that collect (and possibly relay) data along with their own; (iv) the marginal probability density function (PDF) characterizing the data transmission rate per node; (v) the expected amount of energy harvested by each node. Based on our analysis, we obtain the number of nodes leading to the minimum energy harvesting requirement for each tier of the WSN cluster-tree topology. We also derive closed-form expressions for the difference in the minimum energy harvesting requirements between four transmission rate PDFs as a function of the WSN parameters. Our analytic results are validated via experiments using TelosB sensor nodes and an energy measurement testbed. Our framework is useful for feasibility studies on energy harvesting technologies in WSNs and for optimizing the operational settings of hierarchical WSN-based monitoring infrastructures prior to time-consuming testing and deployment within the application environment.
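The idea of a node count that minimizes the per-node harvesting requirement can be illustrated with a toy model. This is not the paper's analytic framework: the energy values, the quadratic relay-cost term (a stand-in for receiver load growing with tier size), and the equal-sharing assumption are all illustrative choices made here.

```python
def per_node_requirement(k, e_tx=2.0, e_rx_fixed=40.0, e_rx_quad=0.05):
    """Per-node harvesting requirement (arbitrary energy units per active
    period) for one tier with k transmitters and one receiver/relay node.
    All parameter values and the quadratic relay-cost term are illustrative."""
    tier_total = k * e_tx + e_rx_fixed + e_rx_quad * k ** 2
    # energy-neutral if every node in the tier harvests at least this much
    return tier_total / (k + 1)

# number of nodes minimizing the per-node harvesting requirement
best_k = min(range(1, 101), key=per_node_requirement)
print(best_k, round(per_node_requirement(best_k), 3))
```

The trade-off is the interesting part: with few nodes, the receiver's fixed listening cost is shared among few harvesters; with many nodes, the relay load dominates. The minimum sits at an interior node count, mirroring the paper's result that each tier has a consumption-optimal size.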
Research on intelligent fault diagnosis of mechanical equipment based on sparse deep neural networks
Against a big-data background, the accuracy of fault diagnosis and recognition has been difficult to improve. A deep neural network was used to diagnose a bearing under four kinds of conditions, and its diagnosis rate was compared with those of a traditional BP neural network, a genetic neural network, and a particle swarm neural network. Results showed that the diagnosis accuracy and convergence rate of the deep neural network were markedly higher than those of the other models. Fault diagnosis rates with different sample sizes and training sample proportions were then studied and compared with the latest reported methods. Results showed that fault diagnosis using deep neural networks was highly stable. Vibration accelerations of the bearing with different fault diameters and excitation loads were extracted, and the deep neural network was used to recognize these faults with very high diagnosis accuracy. In particular, the fault diagnosis rate reached 98 % when the signal features of the vibration accelerations were very obvious, which indicates that the deep neural network is effective in diagnosing and recognizing different types of faults. Finally, the deep neural network was applied to fault diagnosis for the gearbox of wind turbines and compared with the other models, showing that it would work well in an industrial environment.
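The classification task can be sketched in miniature: a small feed-forward network trained on synthetic feature clusters standing in for the four bearing conditions. This is only an illustration of the technique, not the paper's architecture or data; the feature dimensions, class centers, and training settings are invented here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for vibration features of 4 bearing conditions
# (e.g. normal, inner-race, outer-race, ball fault) -- illustrative data only
n_per_class, n_feat, n_class = 200, 8, 4
centers = rng.normal(0.0, 2.0, size=(n_class, n_feat))
X = np.vstack([c + rng.normal(0.0, 0.7, size=(n_per_class, n_feat))
               for c in centers])
y = np.repeat(np.arange(n_class), n_per_class)
onehot = np.eye(n_class)[y]

# One hidden ReLU layer + softmax output, full-batch gradient descent
W1 = rng.normal(0.0, 0.1, (n_feat, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.1, (32, n_class)); b2 = np.zeros(n_class)
lr = 0.1
for _ in range(300):
    h = np.maximum(X @ W1 + b1, 0.0)              # hidden activations
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)             # softmax probabilities
    g = (p - onehot) / len(X)                     # cross-entropy gradient
    gW2 = h.T @ g; gb2 = g.sum(0)
    gh = g @ W2.T * (h > 0)                       # backprop through ReLU
    gW1 = X.T @ gh; gb1 = gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

pred = (np.maximum(X @ W1 + b1, 0.0) @ W2 + b2).argmax(axis=1)
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.3f}")
```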
Image compression with anisotropic diffusion
Compression is an important field of digital image processing where well-engineered methods with high performance exist. Partial differential equations (PDEs), however, have not been explored much in this context so far. In our paper we introduce a novel framework for image compression that makes use of the interpolation qualities of edge-enhancing diffusion. Although this anisotropic diffusion equation with a diffusion tensor was originally proposed for image denoising, we show that it outperforms many other PDEs when sparse scattered data must be interpolated. To exploit this property for image compression, we consider an adaptive triangulation method for removing less significant pixels from the image. The remaining points serve as scattered interpolation data for the diffusion process. They can be coded in a compact way that reflects the B-tree structure of the triangulation. We supplement the coding step with a number of amendments such as error threshold adaptation, diffusion-based point selection, and specific quantisation strategies. Our experiments illustrate the usefulness of each of these modifications. They demonstrate that for high compression rates, our PDE-based approach not only gives far better results than the widely used JPEG standard, but can even come close to the quality of the highly optimised JPEG2000 codec.
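The decoding side of such a codec is diffusion-based inpainting: reconstruct the full image from the sparse stored pixels. The following sketch uses plain homogeneous diffusion as a simplified, isotropic stand-in for the paper's edge-enhancing anisotropic diffusion; the test image, mask density, and iteration count are illustrative.

```python
import numpy as np

def diffusion_inpaint(known, mask, n_iter=2000):
    """Fill in an image from sparsely kept pixels by homogeneous diffusion
    (a simplified, isotropic stand-in for edge-enhancing diffusion).
    `mask` is True where the pixel value in `known` stays fixed."""
    u = np.where(mask, known, known[mask].mean())
    for _ in range(n_iter):
        p = np.pad(u, 1, mode="edge")             # reflecting boundary
        nb = (p[:-2, 1:-1] + p[2:, 1:-1]
              + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0  # 4-neighbour average
        u = np.where(mask, known, nb)             # Jacobi step, anchors fixed
    return u

rng = np.random.default_rng(1)
img = np.linspace(0.0, 1.0, 32 * 32).reshape(32, 32)  # smooth test image
mask = rng.random(img.shape) < 0.10                   # keep ~10% of pixels
recon = diffusion_inpaint(img, mask)
err = np.abs(recon - img).mean()
print(f"mean reconstruction error: {err:.4f}")
```

On smooth regions the steady state of the diffusion is a harmonic interpolant of the kept pixels, which is why so few points suffice; edge-enhancing diffusion improves on this near discontinuities by steering the smoothing along edges rather than across them.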
The relationship between the number of extrema of a compound sinusoidal signal and its high-frequency component
As the main finding of our research work, we present a novel theorem on the relationship between the number of extrema of a compound sinusoidal signal and its high-frequency component. For signals consisting of the sum of two sine signals, if the high-frequency component has the higher product of frequency and amplitude, then we prove that the frequency of the high-frequency component is proportional to the number of extrema in a time interval. This theorem justifies some of the experimental results of other researchers on the relevance of extrema to frequency and amplitude. To confirm the theorem, extrema counting was performed on speech signals and compared with the Fourier transform. The experimental results show that the average number of extrema of a compound sinusoidal signal, or of its derivatives, over a time interval can be used to estimate the frequency of its highest frequency band. An important application of this research work is the fast calculation of the high frequencies of a signal. The theorem also shows that the number of extrema can be used as a new, effective feature for signal processing, especially for speech signals.
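The claimed relationship is easy to check numerically for a two-component signal. In the sketch below the frequencies, amplitudes, and sampling rate are illustrative choices; they satisfy the theorem's dominance condition, since the high-frequency amplitude-frequency product (0.5 x 400 = 200) exceeds the low-frequency one (1.0 x 50 = 50).

```python
import numpy as np

# Sum of two sines, sampled for one second
fs = 8000.0
t = np.arange(0, 1.0, 1.0 / fs)
f_low, f_high = 50.0, 400.0
x = 1.0 * np.sin(2 * np.pi * f_low * t) + 0.5 * np.sin(2 * np.pi * f_high * t)

# Count local extrema as sign changes of the first difference
d = np.diff(x)
extrema = int(np.sum(np.sign(d[1:]) != np.sign(d[:-1])))

# A pure sinusoid at frequency f has about 2*f extrema per second,
# so the extrema count yields a frequency estimate
f_est = extrema / 2.0
print(extrema, f_est)
```

Because the high-frequency term dominates the derivative, the signal has about 2 x 400 = 800 extrema per second, so the count recovers the high frequency directly, without a Fourier transform.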