89 research outputs found

    An Investigation of Orthogonal Wavelet Division Multiplexing Techniques as an Alternative to Orthogonal Frequency Division Multiplex Transmissions and Comparison of Wavelet Families and Their Children

    Recently, issues surrounding wireless communications have risen to prominence because of the increasing popularity of wireless applications. Bandwidth constraints, and the difficulty of modulating signals across carriers, represent significant challenges. Every modulation scheme used to date has had limitations, and the use of the Discrete Fourier Transform in OFDM (Orthogonal Frequency Division Multiplex) is no exception. The restriction on further development of OFDM lies primarily in the transform at the heart of its system, the Fourier transform. OFDM suffers from sensitivity to Peak-to-Average Power Ratio and carrier frequency offset, and wastes some bandwidth on guard intervals between successive OFDM symbols. The discovery of the wavelet transform has opened up a number of potential applications, from image compression to watermarking and encryption. Very recently, work has been done to investigate the potential of using wavelet transforms within the communication space. This research further investigates a recently proposed, innovative modulation technique, Orthogonal Wavelet Division Multiplex (OWDM), which utilises the wavelet transform, opening a new avenue for an alternative modulation scheme with some interesting potential characteristics. The wavelet transform has many families, and each family has children which differ in filter length. This research comprehensively investigates the new modulation scheme and proposes multi-level dynamic sub-banding as a tool to adapt to variable signal bandwidths. Furthermore, all compactly supported wavelet families and their associated children are investigated, evaluated against each other, and compared with OFDM. The linear computational complexity of the wavelet transform is lower than the N log N complexity of the Fourier transform in OFDM. More important still is the operational complexity, which determines cost effectiveness: the time response of the system, the memory consumption, and the number of iterative operations required for data processing. These complexities are investigated for all available compactly supported wavelet families and their children and compared with OFDM. The evaluation reveals which wavelet families perform more effectively than OFDM and, for each wavelet family, which of its children perform best. Based on these results, it is concluded that the wavelet modulation scheme has some interesting advantages over OFDM, such as lower complexity and bandwidth conservation of up to 25%, due to the elimination of guard intervals and dynamic bandwidth allocation, which results in better cost effectiveness.
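
    To make the guard-interval argument concrete, here is a minimal, illustrative Python sketch (assuming numpy and the PyWavelets package pywt; the frame size, wavelet child, and sub-band partition are assumptions for demonstration, not the thesis's exact scheme). The inverse DWT plays the role the IFFT plays in OFDM, and because the wavelet sub-bands are orthogonal, the symbols are recovered exactly with no cyclic-prefix guard appended:

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)

# 64 BPSK symbols to multiplex (illustrative frame size).
n_sub = 64
symbols = 2.0 * rng.integers(0, 2, n_sub) - 1.0

# --- OFDM: the IFFT maps symbols onto orthogonal sub-carriers and a
# cyclic-prefix guard interval (25% overhead here) is appended.
ofdm_time = np.fft.ifft(symbols)
ofdm_frame = np.concatenate([ofdm_time[-n_sub // 4:], ofdm_time])

# --- OWDM: the inverse DWT is the multiplexing transform; symbols are
# mapped onto the multi-level wavelet sub-bands and no guard is appended.
wavelet, levels = "db4", 3          # db4: one child of the Daubechies family
coeffs = [symbols[:8], symbols[8:16], symbols[16:32], symbols[32:]]  # cA3, cD3, cD2, cD1
owdm_frame = pywt.waverec(coeffs, wavelet, mode="periodization")

# Demultiplexing: the forward DWT recovers the symbols exactly, which
# confirms the orthogonality of the wavelet sub-bands.
rx = pywt.wavedec(owdm_frame, wavelet, level=levels, mode="periodization")
assert np.allclose(np.concatenate(rx), symbols)

print("OFDM frame:", len(ofdm_frame), "samples (64 payload + 16 guard)")
print("OWDM frame:", len(owdm_frame), "samples (no guard interval)")
```

    The 16-sample cyclic prefix on a 64-sample OFDM frame is exactly the kind of up-to-25% guard overhead the abstract refers to.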

    Information flow between volatilities across time scales

    Conventional time series analysis, focusing exclusively on a time series at a given scale, lacks the ability to explain the nature of the data generating process. A process equation that successfully explains daily price changes, for example, is unable to characterize the nature of hourly price changes. On the other hand, statistical properties of monthly price changes are often not fully covered by a model based on daily price changes. In this paper, we simultaneously model regimes of volatilities at multiple time scales through wavelet-domain hidden Markov models. We establish an important stylized property of volatility across different time scales. We call this property asymmetric vertical dependence. It is asymmetric in the sense that a low volatility state (regime) at a long time horizon is most likely followed by low volatility states at shorter time horizons. On the other hand, a high volatility state at long time horizons does not necessarily imply a high volatility state at shorter time horizons. Our analysis provides evidence that volatility is a mixture of high and low volatility regimes, resulting in a distribution that is non-Gaussian. This result has important implications regarding the scaling behavior of volatility, and consequently, the calculation of risk at different time scales.

    Keywords: discrete wavelet transform, wavelet-domain hidden Markov trees, foreign exchange markets, stock markets, multiresolution analysis, scaling

    Asymmetry of Information Flow Between Volatilities Across Time Scales

    Conventional time series analysis, focusing exclusively on a time series at a given scale, lacks the ability to explain the nature of the data generating process. A process equation that successfully explains daily price changes, for example, is unable to characterize the nature of hourly price changes. On the other hand, statistical properties of monthly price changes are often not fully covered by a model based on daily price changes. In this paper, we simultaneously model regimes of volatilities at multiple time scales through wavelet-domain hidden Markov models. We establish an important stylized property of volatility across different time scales. We call this property asymmetric vertical dependence. It is asymmetric in the sense that a low volatility state (regime) at a long time horizon is most likely followed by low volatility states at shorter time horizons. On the other hand, a high volatility state at long time horizons does not necessarily imply a high volatility state at shorter time horizons. Our analysis provides evidence that volatility is a mixture of high and low volatility regimes, resulting in a distribution that is non-Gaussian. This result has important implications regarding the scaling behavior of volatility, and consequently, the calculation of risk at different time scales.

    Keywords: discrete wavelet transform, wavelet-domain hidden Markov trees, foreign exchange markets, stock markets, multiresolution analysis, scaling
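
    Both entries above describe the same wavelet-domain hidden Markov approach. To make the "vertical dependence" idea concrete, here is a deliberately simplified Python sketch (assuming numpy and PyWavelets): it labels each wavelet coefficient as high or low volatility by a median threshold, a crude stand-in for the papers' EM-fitted hidden Markov tree states, and then tabulates parent-child state co-occurrence across scales:

```python
import numpy as np
import pywt

rng = np.random.default_rng(1)

# Synthetic returns with a volatility burst in the middle (illustrative data).
n = 1024
t = np.arange(n)
sigma = np.where((t > 400) & (t < 600), 3.0, 1.0)
returns = rng.normal(0.0, sigma)

# Detail coefficients at level j reflect volatility over horizons ~ 2**j points.
coeffs = pywt.wavedec(returns, "haar", level=4, mode="periodization")
details = coeffs[1:][::-1]                      # finest (d1) ... coarsest (d4)

# Crude two-state labelling per scale: "high" if the squared coefficient
# exceeds that scale's median (a stand-in for EM-fitted hidden Markov states).
states = [(d**2 > np.median(d**2)).astype(int) for d in details]

# Vertical dependence: each coarse-scale coefficient spans two finer-scale
# coefficients (the parent-child links of the wavelet tree).
for j in range(len(states) - 1, 0, -1):
    parent, children = states[j], states[j - 1].reshape(-1, 2)
    p_lo = children[parent == 0].mean()         # P(child high | parent low)
    p_hi = children[parent == 1].mean()         # P(child high | parent high)
    print(f"d{j + 1} -> d{j}: P(high child | low parent) = {p_lo:.2f}, "
          f"P(high child | high parent) = {p_hi:.2f}")
```

    The asymmetry the papers report would appear here as P(high child | low parent) staying low across scales while P(high child | high parent) is markedly less predictable.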

    A state space approach to the design of globally optimal FIR energy compaction filters

    We introduce a new approach for the least-squares optimization of a weighted FIR filter of arbitrary order N under the constraint that its magnitude-squared response be Nyquist(M). Although the new formulation is general enough to cover a wide variety of applications, the focus of the paper is on optimal energy compaction filters. The optimization of such filters has received considerable attention in the past due to the fact that they are the main building blocks in the design of principal component filter banks (PCFBs). The newly proposed method finds the optimum product filter F_opt(z) = H_opt(z)H_opt(z^-1) corresponding to the compaction filter H_opt(z). By expressing F(z) in the form D(z) + D(z^-1), we show that the compaction problem can be completely parameterized in terms of the state-space realization of the causal function D(z). For a given input power spectrum, the resulting filter F_opt(z) is guaranteed to be a global optimum solution due to the convexity of the new formulation. The new algorithm is universal in the sense that it works for any M, arbitrary filter length N, and any given input power spectrum. Furthermore, additional linear constraints such as wavelet regularity constraints can be incorporated into the design problem. Finally, obtaining H_opt(z) from F_opt(z) does not require an additional spectral factorization step. The minimum-phase spectral factor H_min(z) can be obtained automatically by relating the state-space realization of D_opt(z) to that of H_opt(z).
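
    The paper's own algorithm is the state-space construction described above; as a point of comparison, the convexity it exploits can be seen in a generic grid-discretised formulation of the same product-filter problem (a sketch assuming numpy and cvxpy; the order, decimation factor, and AR(1) input spectrum are illustrative). Unlike the paper's method, this relaxation only enforces non-negativity on a frequency grid and would still need a separate spectral factorization to recover H_opt(z):

```python
import numpy as np
import cvxpy as cp

# Assumed problem data: product-filter order N, decimation factor M, and an
# AR(1) input power spectrum S(w) evaluated on a dense frequency grid.
N, M = 16, 2
w = np.linspace(0.0, np.pi, 512)
S = 1.0 / (1.81 - 1.8 * np.cos(w))           # AR(1) spectrum, pole at 0.9

# Parameterize F(e^jw) = f0 + 2*sum_n f[n] cos(n w) via d = [f0, 2f1, ..., 2fN],
# so F on the grid is the linear map C @ d and the problem becomes an LP.
C = np.cos(np.outer(w, np.arange(N + 1)))    # C[k, n] = cos(n * w_k)
d = cp.Variable(N + 1)
Fw = C @ d

constraints = [
    Fw >= 0,       # non-negativity, relaxed to the grid (the exact version is an LMI)
    d[0] == 1,     # Nyquist(M): f[0] = 1 ...
    d[M::M] == 0,  # ... and f[Mn] = 0 for n >= 1
]
gain = Fw @ S / len(w)                       # energy captured from the input spectrum
prob = cp.Problem(cp.Maximize(gain), constraints)
prob.solve()
print("approximate optimal compaction objective:", prob.value)
```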

    Bayesian Variational Regularisation for Dark Matter Reconstruction with Uncertainty Quantification

    Despite the great wealth of cosmological knowledge accumulated since the early 20th century, the nature of dark matter, which accounts for ~85% of the matter content of the universe, remains elusive. Unfortunately, though dark matter is scientifically interesting, with implications for our fundamental understanding of the Universe, it cannot be directly observed. Instead, dark matter may be inferred from e.g. the optical distortion (lensing) of distant galaxies which, at linear order, manifests as a perturbation to the apparent magnitude (convergence) and ellipticity (shearing). Ensemble observations of the shear are collected and leveraged to construct estimates of the convergence, which can be directly related to the universal dark matter distribution. Imminent stage IV surveys are forecast to accrue an unprecedented quantity of cosmological information, a discriminative partition of which is accessible through the convergence and is disproportionately concentrated at high angular resolutions, where the echoes of cosmological evolution under gravity are most apparent. Capitalising on advances in probability concentration theory, this thesis merges the paradigms of Bayesian inference and optimisation to develop hybrid convergence inference techniques which are scalable, statistically principled, and operate over the Euclidean plane, celestial sphere, and 3-dimensional ball. Such techniques can quantify the plausibility of inferences at one-millionth the computational overhead of competing sampling methods. These Bayesian techniques are applied to the hotly debated Abell 520 merging cluster, concluding that observational catalogues contain insufficient information to determine the existence of dark matter self-interactions. Further, these techniques were applied to all public lensing catalogues, recovering the then-largest global dark matter mass-map. The primary methodological contributions of this thesis depend only on posterior log-concavity, paving the way towards a potentially revolutionary complete hybridisation with artificial intelligence techniques. These next-generation techniques are the first to operate over the full 3-dimensional ball, laying the foundations for statistically principled universal dark matter cartography, and the cosmological insights such advances may provide.
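
    The thesis's forward models and algorithms are far richer, but the core computational pattern, MAP estimation under a log-concave posterior, can be sketched on a toy linear inverse problem (a Python sketch assuming numpy and PyWavelets; the random forward operator, wavelet prior, and parameter values are illustrative stand-ins, not the lensing model):

```python
import numpy as np
import pywt

rng = np.random.default_rng(2)

# Toy linear inverse problem y = Phi x + n (a stand-in for the lensing
# forward model that relates the convergence field to shear observations).
n = 256
x_true = np.zeros(n)
x_true[60:70], x_true[150] = 1.0, 2.0           # a compact "halo" and a point mass
Phi = rng.normal(0.0, 1.0 / np.sqrt(n), (n, n))
sigma = 0.05
y = Phi @ x_true + sigma * rng.normal(size=n)

# Log-concave posterior: -log p(x|y) = ||y - Phi x||^2 / (2 sigma^2)
#                                      + mu * ||W x||_1 + const,
# with W an orthogonal wavelet transform. MAP by proximal gradient (ISTA).
mu = 50.0                                       # regularisation strength (illustrative)
step = sigma**2 / np.linalg.norm(Phi, 2) ** 2   # 1 / Lipschitz constant of the data term
x = np.zeros(n)
for _ in range(300):
    grad = Phi.T @ (Phi @ x - y) / sigma**2     # gradient of the data fidelity term
    z = x - step * grad
    # prox of mu*||W.||_1: soft-threshold in the wavelet domain
    c = pywt.wavedec(z, "db2", mode="periodization")
    c = [pywt.threshold(ci, mu * step, mode="soft") for ci in c]
    x = pywt.waverec(c, "db2", mode="periodization")

print("relative reconstruction error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

    Because the negative log-posterior is convex, this iteration converges to the global MAP estimate, and log-concavity then allows approximate credible regions to be read off from the objective value without sampling, which is the source of the computational savings the abstract describes.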

    Design and Implementation of Complexity Reduced Digital Signal Processors for Low Power Biomedical Applications

    Wearable health monitoring systems can provide remote care with supervised, independent living, and are capable of signal sensing, acquisition, local processing and transmission. A generic biopotential signal (such as Electrocardiogram (ECG) and Electroencephalogram (EEG)) processing platform consists of four main functional components. The signals acquired by the electrodes are amplified and preconditioned by the (1) Analog Front-End (AFE), and are then digitized via the (2) Analog-to-Digital Converter (ADC) for further processing. The local digital signal processing is usually handled by a custom-designed (3) Digital Signal Processor (DSP), which is responsible for any one or a combination of signal processing algorithms such as noise detection, noise/artefact removal, feature extraction, classification and compression. The digitally processed data is then transmitted via the (4) transmitter, renowned as the most power-hungry block in the complete platform. All the aforementioned components of wearable systems are required to be designed and fitted into an integrated system where the area and power requirements are stringent. Therefore, the hardware complexity and power dissipation of each functional component are crucial aspects when designing and implementing a wearable monitoring platform. The work undertaken focuses on reducing the hardware complexity of a biosignal DSP and presents low-hardware-complexity solutions that can be employed in the aforementioned wearable platforms. A typical state-of-the-art system utilizes Sigma-Delta (Σ∆) ADCs incorporating a Σ∆ modulator and a decimation filter, whereas state-of-the-art decimation filters employ linear-phase Finite Impulse Response (FIR) filters with high orders that increase the hardware complexity [1–5]. In this thesis, the novel use of minimum-phase Infinite Impulse Response (IIR) decimators is proposed, where the hardware complexity is massively reduced compared to the conventional FIR decimators. In addition, the non-linear phase effects of these filters are investigated, since phase non-linearity may distort the time-domain representation of the signal being filtered, an undesirable effect for biopotential signals, especially when the fiducial characteristics carry diagnostic importance. In the case of ECG monitoring systems, the effect of the IIR filter phase non-linearity is minimal and does not affect the diagnostic accuracy of the signals. The work undertaken also proposes two methods for reducing the hardware complexity of the popular biosignal processing tool, the Discrete Wavelet Transform (DWT). General-purpose multipliers are known to be hardware- and power-hungry in terms of the number of addition operations, or of their underlying building blocks such as full adders or half adders. A higher number of adders leads to an increase in power consumption, which is directly proportional to the clock frequency, supply voltage, switching activity and the resources utilized. A typical Field-Programmable Gate Array's (FPGA) resources are Look-Up Tables (LUTs), whereas a custom Digital Signal Processor's (DSP) are the gate-level cells of standard cell libraries used to build adders [6]. The first proposed method is the replacement of the hardware- and power-hungry general-purpose multipliers and the coefficient memories with reconfigurable multiplier blocks composed of simple shift-add networks and multiplexers.
This method substantially reduces the resource utilization as well as the power consumption of the system. The second proposed method is the design and implementation of the DWT filter banks using IIR filters, which employ fewer arithmetic operations compared to the state-of-the-art FIR wavelets. This reduces the hardware complexity of the analysis filter bank of the DWT and can be employed in applications where reconstruction is not required. However, the synthesis filter bank for the IIR wavelet transform has a higher computational complexity compared to the conventional FIR wavelet synthesis filter banks, since re-indexing of the filtered data sequence is required, which can only be achieved via the use of extra registers. This led to the proposal of a novel design which replaces the complex IIR-based synthesis filter banks with FIR filters that are approximations of the associated IIR filters. Finally, a comparative study is presented where the hybrid IIR/FIR and FIR/FIR wavelet filter banks are deployed in a typical noise reduction scenario using wavelet thresholding techniques. It is concluded that the proposed hybrid IIR/FIR wavelet filter banks provide better denoising performance, reduced computational complexity and power consumption in comparison to their IIR/IIR and FIR/FIR counterparts.
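
    To make the shift-add idea concrete, here is a small Python sketch of canonical-signed-digit (CSD) decomposition, the standard way a fixed coefficient multiply is reduced to a handful of shifts and additions (illustrative of, not copied from, the thesis's reconfigurable multiplier blocks):

```python
def csd(value: int) -> list[tuple[int, int]]:
    """Canonical signed-digit form: write an integer coefficient as a sparse
    sum of signed powers of two, value = sum(s * 2**p) with s in {+1, -1}."""
    digits, p = [], 0
    while value != 0:
        if value & 1:
            s = 2 - (value & 3)      # +1 if the low bits are ...01, -1 if ...11
            digits.append((s, p))
            value -= s
        value >>= 1
        p += 1
    return digits

def shift_add_multiply(x: int, coeff: int) -> int:
    """Constant multiply using only shifts and additions/subtractions,
    mirroring the shift-add networks that replace general multipliers."""
    return sum(s * (x << p) for s, p in csd(coeff))

coeff = 115                          # 115 = 128 - 16 + 4 - 1
print(csd(coeff))                    # [(-1, 0), (1, 2), (-1, 4), (1, 7)]
print(shift_add_multiply(7, coeff), "==", 7 * coeff)
```

    Here the coefficient 115 costs three adders/subtractors instead of a general-purpose multiplier; placing a multiplexer in front of several such networks is what makes the block reconfigurable.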
