
    Channel modeling and resource allocation in OFDM systems

    The increasing demand for high data rates in wireless communication gives rise to broadband communication systems. The radio channel is plagued by multipath propagation, which causes frequency-selective fading in broadband signals. Orthogonal Frequency-Division Multiplexing (OFDM) is a modulation scheme specifically designed to facilitate high-speed data transmission over frequency-selective fading channels. The problem of channel modeling in the frequency domain is first investigated for wideband and ultra wideband wireless channels. The channel is converted into an equivalent discrete channel by uniformly sampling the continuous channel frequency response (CFR), which results in a discrete CFR. A necessary and sufficient condition is established for the existence of parametric models for the discrete CFR. Based on this condition, we provide a justification for the effectiveness of previously reported autoregressive (AR) models in the frequency domain of wideband and ultra wideband channels. Resource allocation based on channel state information (CSI) is known to be a very powerful method for improving the spectral efficiency of OFDM systems. Bit and power allocation algorithms are discussed for both static channels, where perfect knowledge of the CSI is assumed, and time-varying channels, where the knowledge of the CSI is imperfect. In the case of static channels, optimal resource allocation for multiuser OFDM systems is investigated. Novel algorithms are proposed for subcarrier allocation and bit-power allocation with considerably lower complexity than other schemes in the literature. For time-varying channels, the error in the CSI due to channel variation is recognized as the main obstacle to achieving the full potential of resource allocation. Channel prediction is proposed to suppress errors in the CSI, and new bit and power allocation schemes incorporating imperfect CSI are presented; their performance is evaluated through simulations.
Finally, a maximum likelihood (ML) receiver for Multiband Keying (MBK) signals is discussed, where MBK is a modulation scheme proposed for ultra wideband (UWB) systems. The receiver structure and the associated ML decision rule are derived through analysis. A suboptimal algorithm based on a depth-first tree search is introduced to significantly reduce the computational complexity of the receiver.
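The bit-power allocation idea above can be sketched with a minimal greedy bit-loading loop (a Hughes-Hartogs-style scheme, not necessarily the thesis's algorithm); the channel gains, noise level, and SNR gap `gamma` below are illustrative assumptions:

```python
import numpy as np

def greedy_bit_loading(gains, total_bits, noise=1.0, gamma=1.0):
    """Greedy (Hughes-Hartogs-style) bit loading: assign total_bits one
    bit at a time, always to the subcarrier whose next bit costs the
    least extra power. Carrying b bits on a subcarrier with gain g is
    modeled as requiring (2**b - 1) * gamma * noise / g**2 of power,
    so the increment for one more bit is 2**b * gamma * noise / g**2."""
    bits = np.zeros(len(gains), dtype=int)
    power = np.zeros(len(gains))
    for _ in range(total_bits):
        delta = (2.0 ** bits) * gamma * noise / gains ** 2
        k = int(np.argmin(delta))   # cheapest next bit
        bits[k] += 1
        power[k] += delta[k]
    return bits, power

# Illustrative channel gains for 4 subcarriers (not from the thesis).
gains = np.array([1.0, 0.5, 2.0, 0.25])
bits, power = greedy_bit_loading(gains, total_bits=8)
```

Strong subcarriers end up carrying most of the bits, while deeply faded ones may carry none, which is the qualitative behavior CSI-based allocation exploits.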

    Filter Scheduling Function Model In Internet Server: Resource Configuration, Performance Evaluation And Optimal Scheduling

    by Minghua Xu, August 2010. Advisor: Dr. Cheng-Zhong Xu. Major: Computer Engineering. Degree: Doctor of Philosophy. Internet traffic often exhibits a structure with rich high-order statistical properties such as self-similarity and long-range dependency (LRD). This greatly complicates the problem of server performance modeling and optimization. On the other hand, the popularity of the Internet has created numerous client-server and peer-to-peer applications, most of which, such as online payment, purchasing, trading, searching, publishing, and media streaming, are timing-sensitive and/or financially critical. The scheduling policy in Internet servers plays a central role in satisfying service level agreements (SLAs) and achieving savings and efficiency in operations. The increasing popularity of high-volume, performance-critical Internet applications makes it challenging for servers to provide individual response-time guarantees. Existing tools such as queueing models in most cases hold only for mean-value analysis under the assumption of simplified traffic structures. Considering the fact that most Internet applications can tolerate a small percentage of deadline misses, we define a decay function model that characterizes the relationship between the request delay constraint, deadline misses, and server capacity in a transfer-function-based filter system. The model is general for any time-series-based or measurement-based process. Within the model framework, a relationship between server capacity, scheduling policy, and service deadline is established formally. Time-invariant (non-adaptive) resource allocation policies are designed and analyzed in the time domain. For an important class of fixed-time allocation policies, optimality conditions with respect to the correlation of the input traffic are established.
Upper bounds for server capacity and service level are derived with the general Chebyshev inequality, and tightened for unimodal distributions by using the Vysochanskij-Petunin inequality. For traffic with strong LRD, the decay function model is designed and analyzed in the frequency domain. Most Internet traffic has a monotonically decreasing strength-of-variation function over frequency. For this type of input traffic, it is proved that optimal schedulers must have a convex structure. Uniform resource allocation is an extreme case of this convexity and is proved to be optimal for Poisson traffic. With an integration of the convex-structure principle, an enhanced GPS policy improves the service quality significantly. Furthermore, it is shown that the presence of LRD in the input traffic shifts variation strength from high-frequency to lower-frequency bands, leading to a degradation of the service quality. The model is also extended to support servers with different deadlines, and to derive an optimal time-variant (adaptive) resource allocation policy that minimizes server load variance and server resource demands. Simulation results show that the time-variant scheduling algorithm indeed outperforms the time-invariant optimal decay function scheduler. Internet traffic has two major dynamic factors: the distribution of request sizes and the correlation of the request arrival process. When the decay function model is applied as a scheduler to a random point process, two corresponding influences on the server workload process are revealed: first, a sizing factor, the interaction between the request size distribution and the scheduling functions; second, a correlation factor, the interaction between the power spectrum of the arrival process and the scheduling function. For the correlation factor, this thesis shows that a convex scheduling function minimizes its impact on the server workload.
Under the assumption of a homogeneous scheduling function for all requests, uniform scheduling is shown to be optimal for the sizing factor. Furthermore, analysis of the impact of queueing delay on the scheduling function shows that queueing larger tasks rather than smaller ones reduces the sizing factor less, but decreases the correlation factor in the server workload process more. This reveals the origin of the optimality of the shortest remaining processing time (SRPT) scheduler.
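The Chebyshev-type capacity bound mentioned above can be sketched as follows; the one-sided (Cantelli) form of the inequality is used, and the workload mean and standard deviation are hypothetical values, not measurements from the thesis:

```python
import math

def chebyshev_capacity(mean, std, miss_prob):
    """Smallest capacity C guaranteeing P(workload > C) <= miss_prob,
    via the one-sided Chebyshev (Cantelli) bound:
        P(W - mean >= t) <= std**2 / (std**2 + t**2).
    Setting the right side to miss_prob and solving for t gives
        t = std * sqrt((1 - miss_prob) / miss_prob).
    """
    t = std * math.sqrt((1.0 - miss_prob) / miss_prob)
    return mean + t

# Hypothetical workload statistics: mean 100 req/s, std 10 req/s,
# tolerating at most 1% deadline misses.
cap = chebyshev_capacity(mean=100.0, std=10.0, miss_prob=0.01)
```

Because the bound is distribution-free, the resulting capacity is conservative; this is exactly why the tighter unimodal (Vysochanskij-Petunin) version is worth deriving.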

    Asymptotic Task-Based Quantization with Application to Massive MIMO

    Quantizers take part in nearly every digital signal processing system that operates on physical signals. They are commonly designed to accurately represent the underlying signal, regardless of the specific task to be performed on the quantized data. In systems working with high-dimensional signals, such as massive multiple-input multiple-output (MIMO) systems, it is beneficial to utilize low-resolution quantizers due to cost, power, and memory constraints. In this work we study quantization of high-dimensional inputs, aiming to improve performance under resolution constraints by accounting for the system task in the quantizer design. We focus on the task of recovering a desired signal statistically related to the high-dimensional input, and analyze two quantization approaches. We first consider vector quantization, which is typically computationally infeasible, and characterize the optimal performance achievable with this approach. Next, we focus on practical systems which utilize hardware-limited scalar uniform analog-to-digital converters (ADCs), and design a task-based quantizer under this model. The resulting system accounts for the task by linearly combining the observed signal into a lower dimension prior to quantization. We then apply our proposed technique to channel estimation in massive MIMO networks. Our results demonstrate that a system utilizing low-resolution scalar ADCs can approach the optimal channel estimation performance by properly accounting for the task in the system design.
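The combine-quantize-recover chain described above can be sketched minimally; the combining matrix A, digital stage B, and quantizer step are generic placeholders here (the paper derives their optimal design), and the mid-rise quantizer is one common choice of scalar uniform ADC model:

```python
import numpy as np

def uniform_quantize(x, step):
    """Mid-rise uniform scalar quantizer (unbounded, for illustration)."""
    return step * (np.floor(x / step) + 0.5)

def task_based_quantize(y, A, B, step):
    """Sketch of the task-based chain: analog linear combining (A) into a
    lower dimension, one scalar uniform ADC per component, then digital
    linear recovery (B) of the task vector. A, B, and step are
    hypothetical placeholders, not the paper's optimized design."""
    z = A @ y                      # analog combining: dimension reduction
    q = uniform_quantize(z, step)  # scalar uniform ADCs
    return B @ q                   # digital estimate of the task

rng = np.random.default_rng(0)
y = rng.standard_normal(8)                    # high-dimensional observation
A = rng.standard_normal((2, 8)) / np.sqrt(8)  # combine 8 inputs into 2 ADCs
B = np.eye(2)                                 # trivial digital stage
est = task_based_quantize(y, A, B, step=0.5)
```

The point of the architecture is that only 2 ADCs (rather than 8) are needed when the task, not the full signal, is what must be recovered.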

    Multistage mean-variance portfolio selection in cointegrated vector autoregressive systems

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 187-190). The problem of portfolio choice is an example of sequential decision making under uncertainty. Investors must consider their attitudes towards risk and reward in the face of an unknown future in order to make complex financial choices. Often, mathematical models of investor preferences and asset return dynamics aid in this process, resulting in a wide range of portfolio choice paradigms, one of which is considered in this thesis. Specifically, it is assumed that the investor operates so as to maximize his expected terminal wealth, subject to a risk (variance) constraint, in what is known as mean-variance optimal (MVO) portfolio selection, and that the log-prices of the assets evolve according to a simple linear system known as a cointegrated vector autoregressive (VAR) process. While MVO portfolio choice remains the most popular formulation for single-stage asset allocation problems in both academia and industry, computational difficulties traditionally limit its use in a dynamic, multistage setting. Cointegration models are popular among industry practitioners as they encode the belief that the log-prices of many groups of assets are not wide-sense stationary (WSS), yet move together in a coordinated fashion. Such systems exhibit temporary states of disequilibrium, or relative asset mis-pricings, that can be exploited for profit. Here, a set of multiperiod trading strategies is developed and studied. Both static and dynamic frameworks are considered, in which rebalancing is prohibited or allowed, respectively. Throughout this work, the relationship between the resulting portfolio weight vectors and the geometry of a cointegrated VAR process is demonstrated.
In the static case, the performance of the MVO solution is analyzed in terms of the use of leverage, the correlation structure of the inter-stage portfolio returns, and the investment time horizon. In the dynamic setting, the use of inter-temporal hedging enables the investor to further exploit the negative correlation among the inter-stage returns. However, the stochastic parameters of the per-stage asset return distributions prohibit the development of a closed-form solution to the dynamic MVO problem, necessitating the use of Monte Carlo methods. To address the computational limitations of this numerical approximation, a set of four approximate dynamic schemes is considered. Each relaxation is suboptimal, yet admits a tractable solution. The relative performance of these strategies, demonstrated through simulations involving synthetic and real data, depends again on the investment time horizon, the use of leverage, and the statistical properties of the inter-stage portfolio returns. By Melanie Beth Rudoy, Ph.D.
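The single-stage MVO building block underlying these strategies admits a simple closed form when no budget or leverage limits are imposed (the thesis treats the much richer multistage problem); the return moments below are hypothetical, for illustration only:

```python
import numpy as np

def mvo_weights(mu, Sigma, risk_budget):
    """Single-stage mean-variance optimal weights (illustrative sketch).

    Maximizes expected return w @ mu subject to the variance constraint
    w @ Sigma @ w <= risk_budget**2, with no budget or leverage limits.
    Closed form: w = risk_budget * Sigma^{-1} mu / sqrt(mu' Sigma^{-1} mu),
    which meets the risk constraint with equality."""
    s = np.linalg.solve(Sigma, mu)          # Sigma^{-1} mu
    return (risk_budget / np.sqrt(mu @ s)) * s

# Hypothetical per-stage return moments for two assets.
mu = np.array([0.05, 0.02])
Sigma = np.array([[0.04, 0.0],
                  [0.0, 0.01]])
w = mvo_weights(mu, Sigma, risk_budget=0.1)
```

Scaling the direction Sigma^{-1} mu to exhaust the variance budget is what makes the realized portfolio variance exactly risk_budget**2.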

    Systematic hybrid analog/digital signal coding

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000. Includes bibliographical references (p. 201-206). This thesis develops low-latency, low-complexity signal processing solutions for systematic source coding, or source coding with side information at the decoder. We consider an analog source signal transmitted through a hybrid channel that is the composition of two channels: a noisy analog channel, through which the source is sent unprocessed, and a secondary rate-constrained digital channel, through which the source is processed prior to transmission. The challenge is to design a digital encoder and decoder that provide a minimum-distortion reconstruction of the source at the decoder, which has observations of both the analog and digital channel outputs. The methods described in this thesis have importance to a wide array of applications. For example, in the case of in-band on-channel (IBOC) digital audio broadcast (DAB), an existing noisy analog communications infrastructure may be augmented by a low-bandwidth digital side channel for improved fidelity, while compatibility with existing analog receivers is preserved. Another application is a source coding scheme which devotes a fraction of the available bandwidth to the analog source and the rest of the bandwidth to a digital representation. This scheme is applicable in a wireless communications environment (or any environment with unknown SNR), where analog transmission has the advantage of a gentle roll-off of fidelity with SNR. A very general paradigm for low-latency, low-complexity source coding is composed of three basic cascaded elements: 1) a space rotation, or transformation, 2) quantization, and 3) lossless bitstream coding. The paradigm has been applied with great success to conventional source coding, and it applies equally well to systematic source coding.
Focusing on the case of a Gaussian source, a Gaussian channel, and mean-squared distortion, we determine optimal or near-optimal components for each of the three elements, each of which has analogous components in conventional source coding. The space rotation can take many forms, such as linear block transforms, lapped transforms, or subband decomposition, for all of which we derive conditions of optimality. For a very general case we develop algorithms for the design of locally optimal quantizers. For the Gaussian case, we describe a low-complexity scalar quantizer, the nested lattice scalar quantizer, whose performance is very near that of the optimal systematic scalar quantizer. Analogous to entropy coding in conventional source coding, Slepian-Wolf coding is shown to be an effective lossless bitstream coding stage for systematic source coding. By Richard J. Barron, Ph.D.
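The nested quantization idea can be sketched in a one-dimensional toy form: the encoder sends only the fine quantization index modulo a coarse lattice, and the decoder resolves the ambiguity using the analog side information. All parameter values below are illustrative, and this sketch is a simplification of the thesis's nested lattice scalar quantizer:

```python
import numpy as np

def nested_sq_encode(x, fine_step, num_levels):
    """Encoder of a toy nested scalar quantizer: quantize x on a fine
    uniform grid, but transmit only the fine index modulo num_levels,
    i.e. the coset index within a coarse cell of width
    fine_step * num_levels."""
    return int(np.round(x / fine_step)) % num_levels

def nested_sq_decode(coset, y, fine_step, num_levels):
    """Decoder: among all fine grid points carrying this coset index,
    pick the one closest to the noisy analog observation y."""
    coarse = fine_step * num_levels
    base = coset * fine_step
    k = np.round((y - base) / coarse)
    return base + k * coarse

# Source value 3.7; the analog channel delivers the noisy observation 3.6.
coset = nested_sq_encode(3.7, fine_step=0.25, num_levels=8)
xhat = nested_sq_decode(coset, 3.6, fine_step=0.25, num_levels=8)
```

Decoding succeeds whenever the analog noise is small relative to the coarse cell, which is what lets the digital channel spend bits only on the fine detail the analog observation cannot resolve.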

    The theory of linear prediction

    Linear prediction theory has had a profound impact on the field of digital signal processing. Although the theory dates back to the early 1940s, its influence can still be seen in applications today. The theory is based on very elegant mathematics and leads to many beautiful insights into statistical signal processing. Although prediction is only a part of the more general topics of linear estimation, filtering, and smoothing, this book focuses on linear prediction. This has enabled detailed discussion of a number of issues that are normally not found in texts. For example, the theory of vector linear prediction is explained in considerable detail, and so is the theory of line spectral processes. This focus and its small size make the book different from many excellent texts which cover the topic, including a few that are actually dedicated to linear prediction. There are several examples and computer-based demonstrations of the theory. Applications are mentioned wherever appropriate, but the focus is not on the detailed development of these applications. The writing style is meant to be suitable for self-study as well as for classroom use at the senior and first-year graduate levels. The text is self-contained for readers with introductory exposure to signal processing, random processes, and the theory of matrices, and a historical perspective and a detailed outline are given in the first chapter.
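The computational core of linear prediction is solving the Yule-Walker normal equations, classically done with the Levinson-Durbin recursion; the autocorrelation values below are an illustrative AR(1) example, not taken from the book:

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Yule-Walker equations for an AR(order) forward predictor
    via the Levinson-Durbin recursion. r[0..order] are autocorrelation
    lags; returns predictor coefficients a (so that
    x[n] ~ sum_k a[k] * x[n-1-k]) and the final prediction-error power."""
    a = np.zeros(order)
    err = r[0]
    for m in range(order):
        # reflection (PARCOR) coefficient for stage m+1
        k = (r[m + 1] - np.dot(a[:m], r[m:0:-1])) / err
        a_prev = a[:m].copy()
        a[m] = k
        a[:m] = a_prev - k * a_prev[::-1]
        err *= 1.0 - k * k
    return a, err

# Illustrative AR(1) autocorrelation r[k] = 0.9**k (unit-variance scale);
# the recursion should recover the predictor a = [0.9, 0].
r = np.array([1.0, 0.9, 0.81])
a, err = levinson_durbin(r, order=2)
```

The recursion costs O(order**2) instead of the O(order**3) of a general linear solve, exploiting the Toeplitz structure of the autocorrelation matrix.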

    Physical Layer Aware Optical Networks

    This thesis describes novel contributions in the field of physical-layer-aware optical networks. Increasing IP traffic and revenue compression in the telecom industry are putting a lot of pressure on the optical community to develop novel solutions that increase total capacity while remaining cost-effective. This requirement is pushing operators towards network disaggregation, where the optical network infrastructure is built by mixing and matching physical-layer technologies from different vendors. In such a context, every piece of equipment and every transmission technique at the physical layer impacts the overall network behavior. Hence, methods giving quantitative evaluations of the network-level merit of individual physical-layer equipment are firmly required during the network design phase as well as during the network lifetime. Physical-layer awareness in network design and operation is therefore fundamental to fairly assess the potential, and exploit the capabilities, of different technologies. From this perspective, modeling of propagation impairments is essential. In this work, propagation impairments in transparent optical networks are summarized, with a special focus on nonlinear effects. The Gaussian Noise (GN) model is reviewed, then extended to wideband scenarios. To do so, the impact of polarization mode dispersion on nonlinear interference (NLI) generation is assessed for the first time through simulation and shown to be negligible. Thanks to this result, the GN model is generalized to assess the impact on NLI generation of space and frequency amplitude variations along the fiber, mainly due to stimulated Raman scattering. The proposed Generalized GN (GGN) model is experimentally validated on a setup with commercial linecards, compared with other modeling options, and an example of application is shown.
Then, network-level power optimization strategies are discussed, and the Local Optimization Global Optimization (LOGO) approach is reviewed. After that, a novel framework of analysis for optical networks that leverages detailed propagation impairment modeling, called the Statistical Network Assessment Process (SNAP), is presented. SNAP is motivated by the need for a general framework to assess the impact of different physical-layer technologies on network performance without relying on rigid optimization approaches, which are not well suited for technology comparison. Several examples of applications of SNAP are given, including comparisons of transceiver, amplifier, and node technologies. SNAP is also used to highlight topological bottlenecks in progressively loaded network scenarios and to derive possible solutions for them. The final work presented in this thesis is related to the implementation of a vendor-agnostic quality-of-transmission estimator for multi-vendor optical networks, developed in the context of the Physical Simulation Environment group of the Telecom Infra Project. The implementation of a module based on the GN model is briefly described; then results of a multi-vendor experimental validation performed in collaboration with Microsoft are shown.
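Under the incoherent GN model, the per-channel launch-power optimization behind LOGO-style strategies reduces to a closed form; `p_ase` and `eta` below are hypothetical placeholders for the ASE noise power and the NLI coefficient, not measured values:

```python
def gsnr(p, p_ase, eta):
    """Generalized SNR under the incoherent GN model: the NLI power is
    modeled as eta * p**3, added to the ASE noise power p_ase."""
    return p / (p_ase + eta * p ** 3)

def optimal_launch_power(p_ase, eta):
    """Launch power maximizing the GSNR above. Setting the derivative
    of p / (p_ase + eta * p**3) to zero gives
        p_opt = (p_ase / (2 * eta)) ** (1/3),
    i.e. at the optimum the NLI power equals half the ASE power
    (the rule exploited by LOGO-style local optimization)."""
    return (p_ase / (2.0 * eta)) ** (1.0 / 3.0)

# Hypothetical per-span values (arbitrary but consistent units).
p_ase, eta = 1e-6, 1e3
p_opt = optimal_launch_power(p_ase, eta)
```

Because this optimum depends only on local span quantities, each span can be set independently, which is what makes the local optimization also globally optimal in this model.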