
    AM/FM signal estimation with micro-segmentation and polynomial fit

    Amplitude and phase estimation of AM/FM signals with a parametric polynomial representation requires the polynomial orders for the phase and amplitude to be known; in practice they are unknown and have to be estimated. A well-known estimation method is the higher-order ambiguity function (HAF) and its variants, but the HAF approach has several reported drawbacks, such as error propagation and the assumption of a slowly varying or even constant amplitude. Especially for long-duration time-varying signals such as AM/FM signals, which require high polynomial orders for the phase and amplitude, the computational load is very heavy due to nonlinear optimization over many variables. This paper uses a micro-segmentation approach in which the segment length is selected so that the amplitude and instantaneous frequency (IF) are approximately constant over each segment. With this selection, the amplitude and phase estimates for each micro-segment are first obtained optimally in the least-squares (LS) sense, and these estimates are then concatenated to obtain the overall amplitude and phase estimates. The initial estimates are not optimal but are sufficiently close to the optimal solution for subsequent processing. Therefore, the overall polynomial orders for the amplitude and phase are estimated from the initial estimates. Using the estimated orders, the initial amplitude and phase functions are fitted to polynomials to obtain the final signal estimate. The method does not use any multivariable nonlinear optimization and is efficient in the sense that its MSE performance is close to the Cramér–Rao bound. Simulation examples are presented.
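
    The following is a minimal, illustrative sketch of the micro-segmentation idea described above: the signal is cut into short segments over which the amplitude and instantaneous frequency are treated as constant, each segment is fitted by linear least squares over a candidate frequency grid, the per-segment estimates are concatenated, and the resulting amplitude and phase tracks are fitted to polynomials. The segment length, frequency grid, polynomial orders, and the way the phase track is anchored are assumptions made for this sketch, not the paper's exact procedure (which also estimates the orders).

```python
import numpy as np

def estimate_segment(seg, fs, freqs):
    """LS fit of a single tone A*cos(2*pi*f*t + phi) over a candidate frequency
    grid; returns the segment's amplitude, phase at t = 0, and best frequency."""
    t = np.arange(len(seg)) / fs
    best = None
    for f in freqs:
        H = np.column_stack([np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)])
        coef = np.linalg.lstsq(H, seg, rcond=None)[0]
        err = np.sum((seg - H @ coef) ** 2)
        if best is None or err < best[0]:
            best = (err, f, coef)
    _, f, (c1, c2) = best
    return np.hypot(c1, c2), np.arctan2(-c2, c1), f

def micro_segment_estimate(x, fs, seg_len=64, amp_order=4, phase_order=6):
    """Per-segment LS estimates, concatenation, and global polynomial fits.
    The polynomial orders are assumed known here; the paper estimates them
    from the initial per-segment estimates."""
    freqs = np.linspace(0.0, fs / 2, 256)
    centers, amps, f_hat = [], [], []
    phi0 = None
    for start in range(0, len(x) - seg_len + 1, seg_len):
        a, ph, f = estimate_segment(x[start:start + seg_len], fs, freqs)
        if phi0 is None:
            # phase of the first segment's model at its centre, used as an anchor
            phi0 = ph + 2 * np.pi * f * (seg_len / 2) / fs
        centers.append((start + seg_len / 2) / fs)
        amps.append(a)
        f_hat.append(f)
    centers, amps, f_hat = map(np.asarray, (centers, amps, f_hat))
    # overall phase track: integrate the piecewise-constant IF estimates between
    # segment centres (a simplification of the concatenation step)
    dt = seg_len / fs
    phase = phi0 + np.concatenate(([0.0], np.cumsum(2 * np.pi * f_hat[:-1] * dt)))
    amp_poly = np.polyfit(centers, amps, amp_order)
    phase_poly = np.polyfit(centers, phase, phase_order)
    t = np.arange(len(x)) / fs
    return np.polyval(amp_poly, t) * np.cos(np.polyval(phase_poly, t))
```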

    Rate-distortion optimization for stereoscopic video streaming with unequal error protection

    We consider an error-resilient stereoscopic streaming system that uses an H.264-based multiview video codec and a rateless Raptor code for recovery from packet losses. One aim of the present work is to suggest a heuristic methodology for modeling the end-to-end rate-distortion (RD) characteristic of such a system. Another aim is to show how to use such a model to optimally select the parameters of the video codec and the Raptor code so as to minimize the overall distortion. Specifically, the proposed system models the RD curve of the video encoder and the performance of the channel codec to jointly derive the optimal encoder bit rates and unequal error protection (UEP) rates specific to layered stereoscopic video streaming. We define an analytical RD curve model for each layer that includes the interdependency of the layers. A heuristic analytical model of the performance of Raptor codes is also defined. Furthermore, the distortion in stereoscopic video quality caused by packet losses is estimated. Finally, the analytical models and the estimated single-packet-loss distortions are used to minimize the end-to-end distortion and to obtain the optimal encoder bit rates and UEP rates. The simulation results clearly demonstrate a significant quality gain over the nonoptimized schemes.
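
    As a hedged illustration of the modeling ingredients mentioned above, the sketch below fits a simple parametric RD curve of the form D(R) = D0 + θ/(R − R0) to empirical rate-distortion samples of each layer and then chooses encoder bit rates that minimize the summed distortion under a total-rate budget. The functional form, the treatment of the layers as independent, and the omission of the Raptor protection rates are simplifying assumptions for this sketch, not the paper's full model.

```python
import numpy as np
from scipy.optimize import curve_fit, minimize

def rd_model(r, d0, theta, r0):
    """Parametric rate-distortion curve D(R) = d0 + theta / (R - r0)."""
    return d0 + theta / (r - r0)

def fit_layer(rates, distortions):
    """Fit the parametric RD curve to measured (rate, distortion) samples of one layer."""
    p0 = (min(distortions), 1.0, 0.5 * min(rates))
    params, _ = curve_fit(rd_model, rates, distortions, p0=p0, maxfev=10000)
    return params

def allocate_rates(layer_params, total_rate):
    """Choose per-layer encoder rates minimizing total distortion under a rate budget."""
    def objective(r):
        return sum(rd_model(ri, *p) for ri, p in zip(r, layer_params))
    bounds = [(max(p[2], 0.0) + 1e-3, total_rate) for p in layer_params]  # keep R > R0
    cons = ({"type": "ineq", "fun": lambda r: total_rate - np.sum(r)},)
    x0 = np.array([max(total_rate / len(layer_params), b[0] + 1e-3) for b in bounds])
    return minimize(objective, x0, bounds=bounds, constraints=cons).x

# example usage (with hypothetical sample points), for a 2000 kbit/s budget:
# params = [fit_layer(r_samples[i], d_samples[i]) for i in range(3)]
# rates = allocate_rates(params, total_rate=2000.0)
```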

    A two phase successive cancellation decoder architecture for polar codes

    We propose a two-phase successive cancellation (TPSC) decoder architecture for polar codes that exploits the array-code property of polar codes by breaking the decoding of a length-N polar code into a series of length-√N decoding cycles. Each decoding cycle consists of two phases: a first phase for decoding along the columns and a second phase for decoding along the rows of the code array. The reduced decoder size makes it more affordable to implement the core decoder logic using distributed memory elements consisting of flip-flops (FFs), as opposed to slower random access memory (RAM), leading to a speed-up in clock frequency. To minimize circuit complexity, a single decoder unit is used in both phases with minor modifications. The reuse of the same decoder module makes it necessary to recall certain internal decoder state variables between decoding cycles. Instead of storing the decoder state variables in RAM, the decoder discards them and recalculates them when needed. Overall, the decoder has O(√N) circuit complexity excluding RAM and a latency of approximately 2.5N. A RAM of size O(N) is needed for storing the channel log-likelihood variables and the decoder decision variables. As an example of the proposed method, a length N = 2^14 polar code is implemented in an FPGA and the synthesis results are compared with a previously reported FPGA implementation. The results show that the proposed architecture has lower complexity, lower memory utilization, higher throughput, and a clock frequency that is less sensitive to code length. © 2013 IEEE
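
    The two-phase, array-based organization is specific to the hardware architecture proposed above; as a point of reference only, the plain successive cancellation recursion that such a decoder implements can be sketched in software as follows. In this sketch the partial sums needed by the g-update are recomputed by re-encoding the already decoded bits instead of being stored, loosely mirroring the recompute-rather-than-store idea mentioned in the abstract. This is a reference sketch, not the proposed decoder.

```python
import numpy as np

def polar_encode(u):
    """Natural-order polar transform: recursively x = [enc(u1) xor enc(u2), enc(u2)]."""
    u = np.asarray(u, dtype=int)
    if len(u) == 1:
        return u
    half = len(u) // 2
    x1 = polar_encode(u[:half])
    x2 = polar_encode(u[half:])
    return np.concatenate([(x1 + x2) % 2, x2])

def sc_decode(llr, frozen):
    """Plain recursive successive cancellation decoding.
    llr    : channel LLRs, log P(x=0)/P(x=1), length a power of two
    frozen : boolean mask, True where u_i is a frozen (zero) bit
    Returns the estimated input vector u_hat."""
    llr = np.asarray(llr, dtype=float)
    if len(llr) == 1:
        bit = 0 if (frozen[0] or llr[0] >= 0) else 1
        return np.array([bit])
    half = len(llr) // 2
    a, b = llr[:half], llr[half:]
    # f-update (check node, min-sum approximation) for the upper sub-code
    u1 = sc_decode(np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b)),
                   frozen[:half])
    # partial sums are recomputed by re-encoding instead of being stored
    s = polar_encode(u1)
    # g-update (variable node) for the lower sub-code
    u2 = sc_decode(b + (1 - 2 * s) * a, frozen[half:])
    return np.concatenate([u1, u2])

# toy usage: N = 8, 4 information bits, noiseless all-zero transmission
if __name__ == "__main__":
    frozen = np.array([True, True, True, False, True, False, False, False])
    x = polar_encode(np.zeros(8, dtype=int))
    llr = 10.0 * (1 - 2 * x)        # strong LLRs favouring the transmitted bits
    print(sc_decode(llr, frozen))   # expected: all zeros
```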

    Adaptive Filtering for Non-Gaussian Stable Processes

    A large class of physical phenomena observed in practice exhibits non-Gaussian behavior. In this letter, α-stable distributions, which have heavier tails than the Gaussian distribution, are considered to model non-Gaussian signals. Adaptive signal processing in the presence of such noise is required in many practical problems. Since the direct application of commonly used adaptation techniques fails in these applications, new algorithms for adaptive filtering for α-stable random processes are introduced. © 1994 IEEE
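
    As a hedged sketch of the general idea (replacing the second-order, LMS-style criterion with a lower-order moment that remains finite for heavy-tailed noise), the following least mean p-norm (LMP) style update adapts an FIR filter by a stochastic gradient of E|e|^p, with p chosen below the assumed characteristic exponent α. It illustrates the approach rather than reproducing the letter's exact algorithms.

```python
import numpy as np

def lmp_filter(x, d, num_taps=8, mu=0.01, p=1.2):
    """Least mean p-norm (LMP) style adaptive FIR filter.
    Minimizes E|e|^p instead of E|e|^2, which keeps the update bounded under
    impulsive, heavy-tailed noise; p should satisfy 1 <= p < alpha < 2.
    x: input signal, d: desired signal.  Returns (final weights, error signal)."""
    w = np.zeros(num_taps)
    e = np.zeros(len(x))
    for n in range(num_taps, len(x)):
        u = x[n - num_taps:n][::-1]          # regressor, most recent sample first
        e[n] = d[n] - w @ u
        # stochastic gradient of |e|^p is p*|e|^(p-1)*sign(e); the constant p is
        # absorbed into the step size mu
        w += mu * np.abs(e[n]) ** (p - 1) * np.sign(e[n]) * u
    return w, e
```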

    Optimal measurement under cost constraints for estimation of propagating wave fields

    We give a precise mathematical formulation of some measurement problems arising in optics, which is also applicable in a wide variety of other contexts. In essence, the measurement problem is an estimation problem in which data collected by a number of noisy measurement probes are combined to reconstruct an unknown realization of a random process f(x) indexed by a spatial variable x ∈ ℝ^k for some k ≥ 1. We wish to optimally choose and position the probes given the statistical characterization of the process f(x) and of the measurement noise processes. We use a model in which we define a cost function for measurement probes depending on their resolving power. The estimation problem is then set up as an optimization problem in which we wish to minimize the mean-square estimation error, summed over the entire domain of f, subject to a total cost constraint for the probes. The decision variables are the number of probes, their positions, and their qualities. We are unable to offer a solution to this problem in such generality; however, for the problem in which the number and locations of the probes are fixed, we give complete solutions for some special cases and an efficient numerical algorithm for computing the best trade-off between measurement cost and mean-square estimation error. A novel aspect of our formulation is its close connection with information theory; as we argue in the paper, the mutual information function is the natural cost function for a measurement device. The use of information as a cost measure for noisy measurements opens up several direct analogies between the measurement problem and classical problems of information theory, which are pointed out in the paper. ©2007 IEEE
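
    As a toy illustration of the cost/error trade-off under an information-theoretic cost measure, consider the much simpler special case of independent Gaussian components, each observed by its own probe: assigning a probe a cost of c bits of mutual information allows its component to be estimated with MSE σ²·2^(−2c), and minimizing the total MSE under a total cost budget yields a water-filling style allocation. The sketch below computes that allocation; it conveys only the flavour of the trade-off and is not the paper's formulation for propagating wave fields.

```python
import numpy as np

def allocate_cost(variances, total_bits):
    """Water-filling style allocation: c_i = max(0, 0.5*log2(var_i / theta)),
    with the level theta chosen by bisection so the costs sum to the budget."""
    v = np.asarray(variances, dtype=float)
    lo, hi = 1e-12, v.max()
    for _ in range(200):
        theta = 0.5 * (lo + hi)
        c = np.maximum(0.0, 0.5 * np.log2(v / theta))
        if c.sum() > total_bits:
            lo = theta          # costs too large -> raise the level
        else:
            hi = theta
    return c

# illustrative component variances and cost budgets (not from the paper)
variances = np.array([4.0, 2.0, 1.0, 0.25])
for budget in (1.0, 2.0, 4.0):
    c = allocate_cost(variances, budget)
    mse = np.sum(variances * 2.0 ** (-2 * c))
    print(f"budget {budget:>3} bits -> costs {np.round(c, 2)}, total MSE {mse:.3f}")
```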

    Early time-locked gamma response and gender specificity

    The aim was to investigate whether gender is a causative factor in gamma status, according to which some individuals respond with a time-locked early gamma response (G+) while others do not show this response (G-). The sample consisted of 42 volunteer participants (between 19 and 37 years of age, with at least 9 years of education); there were 22 females and 20 males. Data were collected under the oddball paradigm. Auditory stimulation (10 ms rise/fall time, 50 ms duration, 65 dB SPL) consisted of target stimuli (2000 Hz; p = .20) that occurred randomly within a series of standard stimuli (1000 Hz; p = .80). Gamma responses were studied in the amplitude-frequency characteristics, in the digitally filtered event-related potentials (f-ERPs), and in the distributions obtained using the recently developed time-frequency component analysis (TFCA) technique. Participants were classified into G+ and G- groups with a criterion of full agreement between the results of an automated gamma detection technique and expert opinion. The 2 × 2 × 2 ANOVA on f-ERPs and the 2 × 2 × 2 multivariate ANOVA on TFCA distributions showed the main effects of gamma status and gender to be significant, and the interaction between gamma status and gender to be nonsignificant. Accordingly, the individual difference in gamma status is a reliable phenomenon, but it does not depend on gender. There are conflicting findings in the literature concerning the effect of gender on ERP components (N100, P300). The present study showed that if gamma status is not included in research designs, it may have a confounding effect on ERP parameters. © 2005 Elsevier B.V. All rights reserved.
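
    A minimal sketch of the filtering step used to obtain the gamma-band filtered ERPs (f-ERPs) is given below. The 28-48 Hz band, the zero-phase Butterworth filter, its order, and the sampling rate are illustrative assumptions, not the study's settings, and the TFCA step is not shown.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def gamma_ferp(erp, fs, band=(28.0, 48.0), order=4):
    """Zero-phase band-pass filtering of an averaged ERP to obtain a gamma-band
    filtered ERP (f-ERP). Band edges and filter order are illustrative choices."""
    b, a = butter(order, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return filtfilt(b, a, erp)

# toy usage: a synthetic epoch with a 40 Hz burst, sampled at an assumed 1 kHz
fs = 1000.0
t = np.arange(-0.1, 0.5, 1 / fs)
erp = np.exp(-((t - 0.05) / 0.03) ** 2) * np.sin(2 * np.pi * 40 * t)
f_erp = gamma_ferp(erp, fs)
```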

    Rate-distortion optimized layered stereoscopic video streaming with raptor codes

    A near-optimal streaming system for stereoscopic video is proposed. Initially, the stereoscopic video is separated into three layers, and an approximate analytical model of the rate-distortion (RD) curve of each layer is calculated from a sufficient number of rate and distortion samples. The analytical modeling includes the interdependency of the defined layers. The analytical models are then used to derive the optimal source encoding rates for a given channel bandwidth. The distortion in stereoscopic video quality caused by losing a NAL unit from each of the defined layers is estimated, and the average distortion due to a single NAL unit loss is minimized. The minimization is performed over the protection rates allocated to each layer. Raptor codes are utilized as the error protection scheme because of their novelty and suitability for video transmission. The layers are protected unequally with Raptor codes according to the parity ratios allocated to them. A comparison of the proposed scheme with two other protection allocation schemes is provided via simulations to assess the resulting stereoscopic video quality. ©2007 IEEE
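
    The sketch below illustrates only the final step described above: splitting a fixed parity budget across the three layers so that the expected distortion due to unrecoverable losses is minimized. The Raptor decoding-failure model used here (certain failure below k received symbols, then a 0.85 × 0.567^(m−k) decay) is a commonly quoted heuristic assumed for the sketch, and the packet counts, loss rate, and distortion costs are illustrative numbers, not the paper's.

```python
from itertools import product
from math import comb

def raptor_fail_prob(received, k):
    """Heuristic Raptor decoding-failure model (an assumption for this sketch):
    failure is certain with fewer than k symbols, then decays geometrically."""
    return 1.0 if received < k else min(1.0, 0.85 * 0.567 ** (received - k))

def layer_loss_prob(k, parity, p_loss):
    """Probability that a layer's source block cannot be recovered when each of
    its n = k + parity packets is lost independently with probability p_loss."""
    n = k + parity
    return sum(comb(n, r) * (1 - p_loss) ** r * p_loss ** (n - r) * raptor_fail_prob(r, k)
               for r in range(n + 1))

def best_uep(k_layers, d_layers, parity_budget, p_loss):
    """Brute-force search over parity splits minimizing the expected distortion."""
    best_val, best_split = float("inf"), None
    for split in product(range(parity_budget + 1), repeat=len(k_layers)):
        if sum(split) != parity_budget:
            continue
        expected = sum(d * layer_loss_prob(k, m, p_loss)
                       for k, m, d in zip(k_layers, split, d_layers))
        if expected < best_val:
            best_val, best_split = expected, split
    return best_split, best_val

# toy numbers: three layers with different sizes and loss-distortion costs
print(best_uep(k_layers=[40, 30, 30], d_layers=[10.0, 4.0, 2.0],
               parity_budget=20, p_loss=0.05))
```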

    Error resilient layered stereoscopic video streaming

    In this paper, the error-resilient stereoscopic video streaming problem is addressed. Two different forward error correction (FEC) codes, namely systematic LT codes and RS codes, are used to protect the stereoscopic video data against transmission errors. Initially, the stereoscopic video is divided into three layers with different priorities. A packetization scheme is then used to increase the efficiency of the error protection. A comparative analysis of RS and LT codes is provided via simulations to determine the optimal packetization and UEP strategies.
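
    For the RS side of such a comparison, the residual block-loss probability can be computed analytically: an (n, k) Reed-Solomon code over an erasure channel recovers a block whenever at most n − k of its packets are lost, so under independent packet losses the failure probability is a binomial tail, as sketched below. LT-code performance is typically obtained by simulation or a heuristic failure model instead. The code rates and loss rate below are illustrative, not the paper's.

```python
from math import comb

def rs_block_failure(n, k, p_loss):
    """Probability that an (n, k) RS-protected block is unrecoverable, i.e. that
    more than n - k of its n packets are lost (independent losses)."""
    return sum(comb(n, m) * p_loss ** m * (1 - p_loss) ** (n - m)
               for m in range(n - k + 1, n + 1))

# illustrative comparison of protection overheads at a 5% packet loss rate
for k, n in [(40, 44), (40, 48), (40, 52)]:
    print(f"RS({n},{k}): overhead {(n - k) / k:.0%}, "
          f"block failure {rs_block_failure(n, k, 0.05):.2e}")
```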

    An efficient parallelization technique for high throughput FFT-ASIPs

    The fast Fourier transform (FFT) and its inverse (IFFT) are used in Orthogonal Frequency Division Multiplexing (OFDM) systems for data (de)modulation. These transforms are the kernel tasks in an OFDM implementation and among the most processing-intensive ones. Recent trends in the consumer electronics market require OFDM implementations to be flexible, making a trade-off between area, energy efficiency, flexibility, and timing a necessity. This has spurred the development of Application-Specific Instruction-Set Processors (ASIPs) for FFT processing. Parallelization is an architectural parameter that significantly influences these design goals. This paper presents an analysis of the efficiency of parallelization techniques for an FFT-ASIP. It is shown that existing techniques are inefficient for high-throughput applications such as Ultra-Wideband (UWB) because of memory bottlenecks. Therefore, an interleaved execution technique that exploits temporal parallelism is proposed. With this technique, it is possible to meet the throughput requirement of UWB (409.6 Msamples/s) with only 4 non-trivial butterfly units for an ASIP running at 400 MHz. © 2006 IEEE
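
    The throughput figure quoted above can be checked with a back-of-the-envelope calculation, assuming the UWB mode uses a 128-point radix-2 FFT per 312.5 ns OFDM symbol (which is what yields 409.6 Msamples/s) and that each butterfly unit completes one butterfly per 400 MHz clock cycle; these assumptions are made for the sketch and are not details taken from the paper.

```python
import math

# assumed UWB (MB-OFDM) parameters: 128-point FFT per 312.5 ns symbol
N = 128
symbol_time = 312.5e-9            # seconds per OFDM symbol
clock = 400e6                     # ASIP clock rate quoted in the abstract

throughput = N / symbol_time                    # samples per second
butterflies = (N // 2) * int(math.log2(N))      # radix-2 butterflies per FFT
cycles_per_symbol = symbol_time * clock
units_needed = math.ceil(butterflies / cycles_per_symbol)

print(f"throughput      : {throughput / 1e6:.1f} Msamples/s")  # 409.6
print(f"butterflies/FFT : {butterflies}")                       # 448
print(f"cycles/symbol   : {cycles_per_symbol:.0f}")             # 125
print(f"butterfly units : {units_needed}")                      # 4, at one op per unit per cycle
```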

    Vitamin D3/vitamin K2/magnesium-loaded polylactic acid/tricalcium phosphate/polycaprolactone composite nanofibers demonstrated osteoinductive effect by increasing Runx2 via Wnt/β-catenin pathway

    Vitamin D3, vitamin K2, and Mg (10%, 1.25%, and 5%, w/w, respectively)-loaded PLA (12%, w/v) (TCP (5%, w/v))/PCL (12%, w/v) 1:1 (v/v) composite nanofibers (DKMF) were produced by the electrospinning method (ES), and their osteoinductive effects were investigated in cell culture tests. Neither the pure nanofibers nor DKMF caused a significant cytotoxic effect in fibroblasts. Induction of stem cell differentiation into osteogenic cells was observed in cell cultures with DKMF and with pure nanofibers separately. Vitamin D3, vitamin K2, and magnesium were shown to support the osteogenic differentiation of mesenchymal stem cells by promoting the expression of Runx2, BMP2, and osteopontin and suppressing PPAR-γ and Sox9. Accordingly, the Wnt/β-catenin signaling pathway was activated by DKMF. Confocal laser scanning microscopy after seven days of incubation showed that DKMF promoted large axonal sprouting and needle-like elongation of osteoblast cells and enhanced cellular functions such as migration, infiltration, proliferation, and differentiation. The results showed that DKMF exhibited sustained drug release for 144 h, a tougher and stronger structure, higher tensile strength, increased water-uptake capacity, a decreased degradation ratio, and slightly lower Tm and Tg values compared to the pure nanofibers. Consequently, DKMF is a promising approach for bone tissue engineering due to its osteoinductive effects.