
    Joint Source and Relay Precoding Designs for MIMO Two-Way Relaying Based on MSE Criterion

    Properly designed precoders can significantly improve the spectral efficiency of multiple-input multiple-output (MIMO) relay systems. In this paper, we investigate joint source and relay precoding design based on the mean-square-error (MSE) criterion in MIMO two-way relay systems, where two multi-antenna source nodes exchange information via a multi-antenna amplify-and-forward relay node. The problem is non-convex, and its globally optimal solution remains open. To solve it efficiently, we first decouple the primal problem into three tractable sub-problems and propose an iterative precoding design algorithm based on alternating optimization. The solution to each sub-problem is optimal and unique, so the convergence of the iterative algorithm is guaranteed. Second, we propose a structured precoding design with lower computational complexity. The proposed precoding structure parallelizes the channels in the multiple-access (MAC) phase and the broadcast (BC) phase, reducing the precoding design to a simple power-allocation problem. Finally, for the special case where each source node transmits only a single data stream, we present a source-antenna-selection (SAS) based precoding design algorithm, which selects a single transmit antenna at each source and therefore requires lower signalling overhead. Comprehensive simulations are conducted to evaluate the effectiveness of all the proposed precoding designs.
    Comment: 32 pages, 10 figures
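    The alternating-optimization pattern described in the abstract can be sketched numerically. The toy below is a simplified, real-valued one-way relay chain (not the paper's two-way model): it cycles through a closed-form MMSE receiver update and unconstrained closed-form updates for the two precoders, with a crude scale-to-budget power projection standing in for the paper's exact constrained sub-problem solutions (so the monotone-convergence guarantee is forfeited). All dimensions, channels, and iteration counts are hypothetical.

```python
import numpy as np

# Toy one-way relay chain: y = H2 Fr (H1 Fs s + n1) + n2, estimate s_hat = W y.
rng = np.random.default_rng(0)
N = 4
H1 = rng.standard_normal((N, N))        # source -> relay channel (toy)
H2 = rng.standard_normal((N, N))        # relay -> destination channel (toy)
s2 = 0.1                                # noise variance at relay and destination
Ps = Pr = 1.0                           # transmit power budgets

def proj(F, P):                         # scale F onto the budget tr(F F^T) <= P
    p = np.trace(F @ F.T)
    return F if p <= P else F * np.sqrt(P / p)

def mse(Fs, Fr, W):                     # E||W y - s||^2 with unit signal covariance
    A = H2 @ Fr @ H1 @ Fs
    Rn = s2 * (H2 @ Fr) @ (H2 @ Fr).T + s2 * np.eye(N)
    E = W @ A - np.eye(N)
    return np.trace(E @ E.T + W @ Rn @ W.T)

Fs, Fr, W = proj(np.eye(N), Ps), proj(np.eye(N), Pr), np.eye(N)
for it in range(50):
    # sub-problem 1: MMSE (Wiener) receiver, closed form and unique
    A = H2 @ Fr @ H1 @ Fs
    Rn = s2 * (H2 @ Fr) @ (H2 @ Fr).T + s2 * np.eye(N)
    W = A.T @ np.linalg.inv(A @ A.T + Rn)
    # sub-problem 2: unconstrained least-squares Fs, then power projection
    B = W @ H2 @ Fr @ H1
    Fs = proj(np.linalg.pinv(B), Ps)
    # sub-problem 3: unconstrained quadratic solution for Fr, then projection
    C, D = H1 @ Fs, W @ H2
    Fr = proj(np.linalg.solve(D.T @ D, D.T @ C.T)
              @ np.linalg.inv(C @ C.T + s2 * np.eye(N)), Pr)
    if it % 10 == 0:
        print(f"iter {it:2d}  MSE = {mse(Fs, Fr, W):.4f}")
```

    On typical random channels the printed MSE decreases and settles within a few tens of iterations, illustrating the block-coordinate structure that the paper exploits with its exact sub-problem solutions.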

    Topology optimization of freeform large-area metasurfaces

    We demonstrate optimization of optical metasurfaces over 10^5–10^6 degrees of freedom in two and three dimensions, 100–1000+ wavelengths (λ) in diameter, with 100+ parameters per λ². In particular, we show how topology optimization, with one degree of freedom per high-resolution "pixel," can be extended to large areas with the help of a locally periodic approximation that was previously only used for a few parameters per λ². In this way, we can computationally discover completely unexpected metasurface designs for challenging multi-frequency, multi-angle problems, including designs for fully coupled multi-layer structures with arbitrary per-layer patterns. Unlike typical metasurface designs based on subwavelength unit cells, our approach can discover both sub- and supra-wavelength patterns and can obtain both the near and far fields.
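    To give a flavour of what "one degree of freedom per pixel" means, here is a minimal scalar sketch, not the paper's full-Maxwell adjoint framework: each pixel of a ~100λ-wide aperture radiates a wavelet toward a focal point, and projected gradient ascent on the focal intensity drives the per-pixel densities toward a binarized, zone-plate-like pattern. Aperture size, focal length, and step size are arbitrary illustrative choices.

```python
import numpy as np

lam = 1.0                                # wavelength (arbitrary units)
k = 2 * np.pi / lam
focal = 50 * lam                         # focal distance (hypothetical)
x = np.linspace(-50, 50, 2001) * lam     # pixel positions, ~100-wavelength aperture
r = np.sqrt(x**2 + focal**2)             # pixel-to-focus distances
e = np.exp(1j * k * r) / np.sqrt(r)      # per-pixel wavelet reaching the focus

rho = np.full(x.size, 0.5)               # one density DOF per pixel, rho in [0, 1]
f0 = abs(rho @ e) ** 2                   # initial focal intensity
for _ in range(200):
    f = rho @ e                          # focal field is linear in the densities
    grad = 2 * np.real(np.conj(f) * e)   # d|f|^2 / d rho_j (adjoint is trivial here)
    rho = np.clip(rho + 0.02 * grad, 0.0, 1.0)   # projected gradient ascent

print(f"focal intensity gain: {abs(rho @ e)**2 / f0:.1f}x")
print(f"fraction of binarized pixels: {np.mean((rho < 1e-3) | (rho > 1 - 1e-3)):.2f}")
```

    The densities converge toward 0/1 in alternating zones, i.e. the optimizer rediscovers a Fresnel-zone-plate-like structure; in the paper the same per-pixel parameterization is driven by full electromagnetic solves under the locally periodic approximation.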

    How many participants do we have to include in properly powered experiments? A tutorial of power analysis with reference tables

    Given that an effect size of d = .4 is a good first estimate of the smallest effect size of interest in psychological research, we already need over 50 participants for a simple comparison of two within-participants conditions if we want to run a study with 80% power. This is more than current practice. In addition, as soon as a between-groups variable or an interaction is involved, numbers of 100, 200, and even more participants are needed. As long as we do not accept these facts, we will keep on running underpowered studies with unclear results. Addressing the issue requires a change in the way research is evaluated by supervisors, examiners, reviewers, and editors. The present paper describes reference numbers needed for the designs most often used by psychologists, including single-variable between-groups and repeated-measures designs with two and three levels, and two-factor designs involving two repeated-measures variables or one between-groups variable and one repeated-measures variable (split-plot design). The numbers are given for the traditional frequentist analysis with p < .05 and for Bayesian analysis with Bayes factors (BF) > 10. These numbers provide researchers with a standard to determine (and justify) the sample size of an upcoming study. The article also describes how researchers can improve the power of their study by including multiple observations per condition per participant.
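    The two headline numbers in the abstract can be reproduced with standard power routines; a sketch using statsmodels, assuming the two-tailed, α = .05 frequentist case:

```python
from math import ceil
from statsmodels.stats.power import TTestPower, TTestIndPower

d, alpha, power = 0.4, 0.05, 0.80   # smallest effect size of interest, two-tailed

# Within-participants comparison: paired t-test on the difference scores
n_paired = TTestPower().solve_power(effect_size=d, alpha=alpha, power=power)

# Between-groups comparison: independent-samples t-test, equal group sizes
n_group = TTestIndPower().solve_power(effect_size=d, alpha=alpha, power=power)

print(f"within-participants: {ceil(n_paired)} participants")
print(f"between-groups:      {ceil(n_group)} per group, {2 * ceil(n_group)} total")
```

    Rounding up gives 52 participants for the paired design and about 100 per group (200 in total) for the two-group design, matching the "over 50" and "100, 200" figures above.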

    A "poor man's" approach to topology optimization of natural convection problems

    Topology optimization of natural convection problems is computationally expensive, due to the large number of degrees of freedom (DOFs) in the model and its two-way coupled nature. Herein, a method is presented to reduce the computational effort by using a reduced-order model governed by simplified physics. The proposed method models the fluid flow with a potential-flow model, which introduces an additional fluid property. This material property currently requires tuning of the model against numerical Navier-Stokes-based solutions. Topology optimization based on the reduced-order model is shown to provide qualitatively similar designs to those obtained with a full Navier-Stokes-based model. The number of DOFs is reduced by 50% in two dimensions, and the computational complexity is evaluated to be approximately 12.5% of that of the full model. We further compare against optimized designs obtained using Newton's law of cooling.
    Comment: Preprint version. Please refer to the final version in Structural and Multidisciplinary Optimization, https://doi.org/10.1007/s00158-019-02215-
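    The core of the reduced-order idea is that an incompressible, irrotational flow is fully described by one scalar potential satisfying a Laplace equation instead of by the coupled velocity components, which is where the DOF savings in 2D come from. Below is a minimal sketch of such a potential-flow kernel on a hypothetical grid with toy boundary conditions; the paper's tunable fluid property and the coupling to the thermal problem are omitted.

```python
import numpy as np

# Potential flow: u = -grad(phi) with laplace(phi) = 0, so one scalar field
# replaces the (u, v) velocity pair on the grid.
nx, ny = 80, 40
phi = np.zeros((ny, nx))
phi[:, 0], phi[:, -1] = 1.0, 0.0          # driven left-to-right flow (toy Dirichlet BCs)

for _ in range(5000):                     # Jacobi iterations for laplace(phi) = 0
    phi[1:-1, 1:-1] = 0.25 * (phi[1:-1, :-2] + phi[1:-1, 2:]
                              + phi[:-2, 1:-1] + phi[2:, 1:-1])
    phi[0, 1:-1] = phi[1, 1:-1]           # impermeable walls: d(phi)/dn = 0
    phi[-1, 1:-1] = phi[-2, 1:-1]

v, u = np.gradient(-phi)                  # flow runs from high to low potential
print("max speed:", np.hypot(u, v).max())
```

    In a full Navier-Stokes model the same 2D grid would carry both velocity components in addition to the scalar unknowns, consistent with the roughly 50% DOF reduction reported in the abstract.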

    Two-Method Planned Missing Designs for Longitudinal Research

    We examine longitudinal extensions of the two-method measurement design, which uses planned missingness to optimize the cost-efficiency and validity of hard-to-measure constructs. These designs use a combination of two measures: a “gold standard” that is highly valid but expensive to administer, and an inexpensive (e.g., survey-based) measure that contains systematic measurement bias (e.g., response bias). Using simulated data on four measurement occasions, we compared the cost-efficiency and validity of longitudinal designs in which the gold standard is measured at one or more occasions. We manipulated the nature of the response bias over time (constant, increasing, fluctuating), the factorial structure of the response bias over time, and the constraints placed on the latent variable model. Our results showed that parameter bias is lowest when the gold standard is measured on at least two occasions. When a multifactorial structure was used to model response bias over time, the “gold standard” measure had to be included at every time point, in which case most of the parameters showed low bias. Almost all parameters in all conditions displayed high relative efficiency, suggesting that the two-method design is an effective way to reduce costs and improve both power and accuracy in longitudinal research.
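    The design is easy to prototype in simulation. The sketch below generates a valid "gold standard" and a cheaper measure carrying a constant response bias, administers the gold standard on two of four occasions (the planned missingness), and calibrates the cheap measure where both are observed. The paper fits full latent variable models rather than this simple mean calibration, and all the numbers here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 500, 4                          # participants, measurement occasions
gold_occasions = [0, 3]                # gold standard administered at t = 0 and t = 3

latent = rng.normal(0, 1, (n, T)) + np.linspace(0, 1, T)   # true construct, drifting
gold = latent + rng.normal(0, 0.3, (n, T))                  # valid but expensive
bias = 0.5                                                  # constant response bias
cheap = latent + bias + rng.normal(0, 0.5, (n, T))          # biased survey measure

gold_obs = np.full((n, T), np.nan)
gold_obs[:, gold_occasions] = gold[:, gold_occasions]       # planned missingness

# Calibrate the cheap measure against the gold standard where both were observed,
# then apply the correction at the gold-standard-free occasions.
est_bias = np.mean(cheap[:, gold_occasions] - gold_obs[:, gold_occasions])
corrected = cheap - est_bias

print("true occasion means:      ", latent.mean(axis=0).round(2))
print("naive cheap-measure means:", cheap.mean(axis=0).round(2))
print("bias-corrected means:     ", corrected.mean(axis=0).round(2))
```

    With two gold-standard occasions the corrected means track the true trajectory, in line with the abstract's finding that parameter bias is lowest when the gold standard is measured at least twice; a time-varying bias would require the richer models the paper evaluates.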