
    Approximations for Performance Analysis in Wireless Communications and Applications to Reconfigurable Intelligent Surfaces

    In the last few decades, the field of wireless communications has witnessed significant technological advancements to meet the needs of today’s modern world. The rapidly emerging technologies, however, are becoming increasingly sophisticated, and the process of investigating their performance and assessing their applicability in the real world is becoming more challenging. This has given rise to a wide range of solutions in the literature for studying the performance of different communication systems, or even for deriving new results that were previously difficult to obtain. These solutions include field measurements, computer simulations, and theoretical tools such as alternative representations, approximations, or bounds of classic functions that commonly appear in performance analyses. Field measurements and computer simulations have significantly improved performance evaluation in communication theory. However, more advanced theoretical solutions can be developed to avoid expensive and time-consuming wireless measurement campaigns, to replace numerical simulations, which can be unreliable and suffer from failures in numerical evaluation, and to achieve analytically simpler results with much higher accuracy than existing theoretical ones. To this end, this thesis first focuses on developing new approximations and bounds using unified approaches and algorithms that can efficiently and accurately guide researchers through the design of their wireless systems and facilitate performance analyses across various communication systems. Two performance measures are of primary interest in this study, namely the average error probability and the ergodic capacity, owing to their valuable role in providing a better understanding of a system’s behavior and thus enabling systems engineers to quickly detect and resolve design issues.
In particular, several parametric expressions of different analytical forms are developed to approximate or bound the Gaussian Q-function, which occurs in error probability analysis. Additionally, any generic function of the Q-function is approximated or bounded using a tractable exponential expression. Moreover, a unified logarithmic expression is proposed to approximate or bound the capacity integrals that occur in capacity analysis. A novel systematic methodology and a modified version of the classical Remez algorithm are developed to acquire optimal coefficients for the accompanying parametric approximation or bound in the minimax sense. Furthermore, the quasi-Newton algorithm is implemented to acquire coefficients that are optimal in terms of the total error. The average symbol error probability and ergodic capacity are evaluated for various applications using the developed tools. Secondly, this thesis analyzes two communication systems assisted by reconfigurable intelligent surfaces (RISs). The RIS has been gaining significant attention lately due to its ability to control propagation environments. In particular, two communication systems are considered: one with a single RIS and correlated Rayleigh fading channels, and the other with multiple RISs and non-identical generic fading channels. Both systems are analyzed in terms of outage probability, average symbol error probability, and ergodic capacity, all derived using the proposed tools. These performance measures reveal that better performance is achieved when assisting the communication system with RISs, increasing the number of reflecting elements on the RISs, or locating the RISs nearer to either communication node.
In conclusion, the developed approximations and bounds, together with the optimized coefficients, provide more efficient tools than those available in the literature, with richer capabilities reflected in more robust closed-form performance analysis, significantly increased accuracy, and considerably reduced analytical complexity, which in turn offers more insight into systems’ behavior and the effect of different parameters on their performance. Therefore, they are expected to lay the groundwork for the investigation of the latest communication technologies, such as RIS technology, whose performance has been studied for several system models in this thesis using the developed tools.
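The thesis's own optimized coefficients are not reproduced in this abstract. As a minimal illustration of the exponential-sum form such approximations take, the sketch below compares the exact Gaussian Q-function with the well-known two-term exponential upper bound of Chiani et al. (2003); it is a stand-in for the expressions described above, not the thesis's result.

```python
import math

def q_exact(x):
    # Gaussian Q-function via the complementary error function:
    # Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def q_exp_approx(x):
    # Two-term exponential upper bound of Chiani et al. (2003), x > 0:
    # Q(x) <= (1/12) e^{-x^2/2} + (1/4) e^{-2x^2/3}
    return math.exp(-x * x / 2.0) / 12.0 + math.exp(-2.0 * x * x / 3.0) / 4.0

for x in (1.0, 2.0, 3.0):
    exact, approx = q_exact(x), q_exp_approx(x)
    print(f"x={x}: Q={exact:.6e}, bound={approx:.6e}, rel.err={(approx - exact) / exact:.2%}")
```

Exponential forms like this are attractive in error probability analysis because averaging them over common fading distributions yields closed-form expressions.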

    Bayesian, and Non-Bayesian, Cause-Specific Competing-Risk Analysis for Parametric and Nonparametric Survival Functions: The R Package CFC

    The R package CFC performs cause-specific, competing-risk survival analysis by computing cumulative incidence functions from unadjusted, cause-specific survival functions. A high-level API in CFC enables end-to-end survival and competing-risk analysis with a single-line function call, based on the parametric survival regression models in the survival package. A low-level API allows users to achieve more flexibility by supplying their own custom survival functions, perhaps in a Bayesian setting. Utility methods for summarizing and plotting the output allow population-average cumulative incidence functions to be calculated, visualized, and compared with unadjusted survival curves. Numerical and computational optimization strategies are employed for efficient and reliable computation of the coupled integrals involved. To address potential integrable singularities caused by infinite cause-specific hazards, particularly near a time-from-index of zero, the integrals are transformed to remove their dependency on hazard functions, making them solely functions of cause-specific, unadjusted survival functions. This implicit variable transformation also makes CFC easier to extend to custom survival models, since it requires users to implement at most one function per cause. The transformed integrals are numerically calculated using a generalization of Simpson's rule that handles the implicit change of variable from time to survival, while a generalized trapezoidal rule is used as a reference for error calculation. An OpenMP-parallelized, efficient C++ implementation, using the packages Rcpp and RcppArmadillo, makes the application of CFC practical in Bayesian settings, where a potentially large number of samples represents the posterior distribution of cause-specific survival functions.
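The pairing of Simpson's rule with a trapezoidal reference for error estimation can be sketched generically. The code below is a hypothetical illustration, not CFC's actual change-of-variable implementation: it integrates a toy integrand expressed as a function of survival on (0, 1] and uses the disagreement between the two rules as a crude error estimate.

```python
def simpson(f, a, b, n=100):
    # Composite Simpson's rule on [a, b] with n (even) subintervals.
    if n % 2:
        n += 1
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

def trapezoid(f, a, b, n=100):
    # Composite trapezoidal rule, used here as a reference for error estimation.
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

# Toy integrand in the survival variable s (hypothetical example): after the
# change of variable from time to survival, the integrand depends only on s.
f = lambda s: s ** 0.5  # exact integral over [0, 1] is 2/3
I_simpson = simpson(f, 0.0, 1.0)
I_trap = trapezoid(f, 0.0, 1.0)
print(I_simpson, abs(I_simpson - I_trap))  # value and rule-disagreement error estimate
```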

    Revisited Optimal Error Bounds for Interpolatory Integration Rules


    Coupled structural/thermal/electromagnetic analysis/tailoring of graded composite structures

    Accomplishments are described for the first-year effort of a five-year program to develop a methodology for coupled structural/thermal/electromagnetic analysis/tailoring of graded composite structures. These accomplishments include: (1) the results of a selective literature survey; (2) 8-, 16-, and 20-noded isoparametric plate and shell elements; (3) large-deformation structural analysis; (4) eigenanalysis; (5) anisotropic heat transfer analysis; and (6) anisotropic electromagnetic analysis.

    Numerical investigation of fermion mass generation in QED

    We investigate the dynamical generation of fermion mass in quantum electrodynamics (QED). This non-perturbative study is performed using a truncated set of Schwinger-Dyson equations for the fermion and the photon propagator. First, we study dynamical fermion mass generation in quenched QED with the Curtis-Pennington vertex, which satisfies the Ward-Takahashi identity and moreover ensures the multiplicative renormalizability of the fermion propagator. We apply bifurcation analysis to determine the critical point for a general covariant gauge. In the second part of this work we investigate the dynamical generation of fermion mass in full, unquenched QED. We develop a numerical method to solve the system of three coupled non-linear equations for the dynamical fermion mass, the fermion wavefunction renormalization, and the photon renormalization function. Much care is taken to ensure the high accuracy of the solutions. Moreover, we discuss in detail the proper numerical cancellation of the quadratic divergence in the vacuum polarization integral and the requirement of using smooth approximations to the solutions. To achieve this, we improve the numerical method by introducing the Chebyshev expansion method. We apply this method to the bare vertex approximation to unquenched QED to determine the critical coupling for a variety of approximations. This culminates in a detailed, highly accurate solution of the Schwinger-Dyson equations for dynamical fermion mass generation in QED, including both the photon renormalization function and the fermion wavefunction renormalization in a consistent way, in the bare vertex approximation and, for the first time, using improved vertices. We introduce new improvements to the numerical method to achieve the accuracy necessary to avoid unphysical quadratic divergences in the vacuum polarization with the Ball-Chiu vertex.
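The Chebyshev expansion method mentioned above can be sketched in its textbook form (a generic construction, not the thesis's actual propagator solver): expand a smooth function in Chebyshev polynomials with coefficients sampled at Chebyshev-Gauss nodes, then evaluate the truncated series with the Clenshaw recurrence. Representing unknowns this way yields smooth, globally defined approximants, which is exactly the property the abstract requires of its solutions.

```python
import math

def cheb_coeffs(f, n):
    # Chebyshev coefficients of f on [-1, 1] via Chebyshev-Gauss nodes:
    # c_k = (2/n) * sum_j f(cos(theta_j)) * cos(k*theta_j), theta_j = pi*(j+1/2)/n
    nodes = [math.cos(math.pi * (j + 0.5) / n) for j in range(n)]
    fx = [f(x) for x in nodes]
    return [(2.0 / n) * sum(fx[j] * math.cos(math.pi * k * (j + 0.5) / n)
                            for j in range(n)) for k in range(n)]

def cheb_eval(c, x):
    # Clenshaw recurrence; the k = 0 coefficient enters with weight 1/2.
    b1 = b2 = 0.0
    for ck in reversed(c[1:]):
        b1, b2 = 2.0 * x * b1 - b2 + ck, b1
    return x * b1 - b2 + 0.5 * c[0]

c = cheb_coeffs(math.exp, 12)            # expand exp(x) on [-1, 1]
print(cheb_eval(c, 0.3), math.exp(0.3))  # the two agree to near machine precision
```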

    Latent Variable Generalized Linear Models

    Generalized Linear Models (GLMs) (McCullagh and Nelder, 1989) provide a unified framework for fixed effect models where response data arise from exponential family distributions. Much recent research has attempted to extend the framework to include random effects in the linear predictors. Different methodologies have been employed to solve different motivating problems, for example Generalized Linear Mixed Models (Clayton, 1994) and Multilevel Models (Goldstein, 1995). A thorough review and classification of this and related material is presented. In Item Response Theory (IRT) subjects are tested using banks of pre-calibrated test items. A useful model is based on the logistic function with a binary response dependent on the unknown ability of the subject. Item parameters contribute to the probability of a correct response. Within the framework of the GLM, a latent variable, the unknown ability, is introduced as a new component of the linear predictor. This approach affords the opportunity to structure intercept and slope parameters so that item characteristics are represented. A methodology for fitting such GLMs with latent variables, based on the EM algorithm (Dempster, Laird and Rubin, 1977) and using standard Generalized Linear Model fitting software GLIM (Payne, 1987) to perform the expectation step, is developed and applied to a model for binary response data. Accurate numerical integration to evaluate the likelihood functions is a vital part of the computational process. A study of the comparative benefits of two different integration strategies is undertaken and leads to the adoption, unusually, of Gauss-Legendre rules. It is shown how the fitting algorithms are implemented with GLIM programs which incorporate FORTRAN subroutines. Examples from IRT are given. A simulation study is undertaken to investigate the sampling distributions of the estimators and the effect of certain numerical attributes of the computational process. 
Finally, a generalized latent variable model is developed for responses from any exponential-family distribution.
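The Gauss-Legendre strategy adopted above can be illustrated with a small sketch. Everything below is a hypothetical example, not the thesis's GLIM/FORTRAN implementation: a two-parameter logistic IRT item whose marginal response probability is the logistic curve integrated against a standard-normal ability density, computed with a composite 5-point Gauss-Legendre rule.

```python
import math

# 5-point Gauss-Legendre nodes and weights on [-1, 1] (standard tabulated values).
GL_NODES = [-0.9061798459386640, -0.5384693101056831, 0.0,
            0.5384693101056831, 0.9061798459386640]
GL_WEIGHTS = [0.2369268850561891, 0.4786286704993665, 0.5688888888888889,
              0.4786286704993665, 0.2369268850561891]

def gauss_legendre(f, a, b):
    # Map the reference nodes from [-1, 1] to [a, b] and form the weighted sum.
    mid, half = 0.5 * (a + b), 0.5 * (b - a)
    return half * sum(w * f(mid + half * x) for x, w in zip(GL_NODES, GL_WEIGHTS))

def marginal_prob(intercept, slope, a=-6.0, b=6.0, panels=12):
    # Hypothetical 2-parameter logistic IRT item: integrate the response
    # probability against a standard-normal ability density over [a, b],
    # splitting the range into panels so the 5-point rule stays accurate.
    phi = lambda t: math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)
    p = lambda t: 1.0 / (1.0 + math.exp(-(intercept + slope * t)))
    edges = [a + (b - a) * i / panels for i in range(panels + 1)]
    return sum(gauss_legendre(lambda t: p(t) * phi(t), lo, hi)
               for lo, hi in zip(edges, edges[1:]))

print(marginal_prob(0.0, 1.0))  # symmetric item: marginal probability is ~0.5
```

By symmetry, an item with zero intercept has marginal probability 0.5, which gives a quick sanity check on the quadrature.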

    Numerical method for hypersingular integrals of highly oscillatory functions on the positive semiaxis

    This paper deals with a quadrature rule for the numerical evaluation of hypersingular integrals of highly oscillatory functions on the positive semiaxis. The rule is of product type and consists of approximating the density function f by a truncated interpolation process based on the zeros of generalized Laguerre polynomials and an additional point. We prove the stability and the convergence of the rule, giving error estimates for functions belonging to weighted Sobolev spaces equipped with the uniform norm. We also show how the proposed rule can be used for the numerical solution of hypersingular integral equations. Numerical tests, which confirm the theoretical estimates, and comparisons with other existing quadrature rules are presented.
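The paper's rule interpolates at generalized Laguerre zeros plus an additional point and handles the hypersingular kernel; as a far simpler point of comparison (not the proposed rule), the sketch below applies plain 5-point Gauss-Laguerre quadrature, whose nodes are also Laguerre zeros, to a mildly oscillatory integrand on the semiaxis.

```python
import math

# 5-point Gauss-Laguerre nodes and weights for the weight e^{-x} on [0, inf)
# (standard tabulated values, e.g. Abramowitz & Stegun, Table 25.9).
GLAG_NODES = [0.263560319718, 1.413403059107, 3.596425771041,
              7.085810005859, 12.640800844276]
GLAG_WEIGHTS = [0.521755610583, 0.398666811083, 0.075942449682,
                0.003611758680, 0.000023369972]

def gauss_laguerre(f):
    # Approximate the integral of e^{-x} f(x) over [0, inf)
    # by the weighted sum of f at the Laguerre-polynomial zeros.
    return sum(w * f(x) for x, w in zip(GLAG_NODES, GLAG_WEIGHTS))

# Mildly oscillatory test integral on the positive semiaxis:
# the integral of e^{-x} sin(x) over [0, inf) equals 1/2.
approx = gauss_laguerre(math.sin)
print(approx, abs(approx - 0.5))
```

For strongly oscillatory or hypersingular integrands, a plain rule like this degrades quickly, which motivates product-type rules such as the one the paper proposes.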