9 research outputs found

    Understanding the assumptions underlying Mendelian randomization

    With the rapidly increasing availability of large genetic data sets in recent years, Mendelian randomization (MR) has quickly gained popularity as a novel secondary analysis method. Leveraging genetic variants as instrumental variables, MR can be used to estimate the causal effect of one phenotype on another even when experimental research is not feasible, and therefore has the potential to be highly informative. However, it depends on strong assumptions and often produces biased results when these are not met. It is therefore imperative that researchers aiming to use MR understand these assumptions well, so that they can evaluate their validity in the context of their own analyses and data. The aim of this perspective is to elucidate these assumptions and the role they play in MR, as well as how different kinds of data can be used to further support them.
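    As a rough illustration of the instrumental-variable logic behind MR (not taken from the article), the Python sketch below simulates a confounded exposure-outcome pair and recovers the causal effect with a Wald ratio estimate; the variable names, effect sizes, and sample size are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Hypothetical simulation: a genetic variant G influences exposure X,
# an unobserved confounder U affects both X and outcome Y, and X has a
# true causal effect of 0.5 on Y.  None of these values come from the paper.
G = rng.binomial(2, 0.3, n)           # variant coded 0/1/2
U = rng.normal(size=n)                # unobserved confounder
X = 0.4 * G + 0.8 * U + rng.normal(size=n)
Y = 0.5 * X + 0.8 * U + rng.normal(size=n)

# Naive regression of Y on X is biased by the confounder U.
naive = np.cov(X, Y)[0, 1] / np.var(X, ddof=1)

# Wald ratio: effect of G on Y divided by effect of G on X.
beta_GX = np.cov(G, X)[0, 1] / np.var(G, ddof=1)
beta_GY = np.cov(G, Y)[0, 1] / np.var(G, ddof=1)
wald_ratio = beta_GY / beta_GX

print(f"naive estimate: {naive:.3f}, MR (Wald ratio): {wald_ratio:.3f}")
```

    The Wald ratio is only unbiased under the core MR assumptions (relevance, independence from confounders, and exclusion restriction) that the perspective discusses; when they fail, the estimate can be as misleading as the naive regression.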

    A Tale of Snakes and Horses: Amplifying Correlation Power Analysis on Quadratic Maps

    We study the success probabilities of two variants of Correlation Power Analysis (CPA) to retrieve multiple secret bits. The target is a permutation-based symmetric cryptographic construction with a quadratic map as an S-box. More precisely, we focus on the nonlinear mapping χ used in the Xoodoo and Keccak-p permutations, which is affine equivalent to the nonlinear mapping of Ascon. We thus consider three-bit and five-bit S-boxes. Our leakage model is the difference in power consumption of register cells before and after one round. It reflects that, in hardware, the aforesaid cryptographic algorithms are usually implemented using a round-based architecture. The power consumption difference depends on whether the targeted bits in the register flip. In particular, we describe two attacks based on the CPA methodology. First, we start with a standard CPA approach, for which, to the best of our knowledge, we are the first to point out the differences between attacking a three-bit and a five-bit S-box. For CPA, the hypothesis with the highest correlation coefficient is taken as the most likely secret. We improve on this with our novel combined Correlation Power Analysis (combined CPA), or Snake attack, which uses cryptanalysis of the quadratic map (e.g., of the function χ) to achieve a better attack in terms of the number of traces required and the computational complexity. For the Snake attack, sums of absolute or squared correlation coefficients are used to determine the most likely guess. As a result, we effectively show that our proposed Snake attack can recover the secret using n·2^2 (or n·2^2 + n) intermediate results, compared to 2^(2n) for the standard CPA, where n is the length of the targeted S-box. We collected power measurements from a hardware setup to demonstrate practical attack success probabilities according to the rank of the correct secret hypothesis for both Xoodoo and Keccak-p. In addition, we explain our success probabilities using the Henery model developed for horse races. In short, after performing 16,896 attacks, the Snake attack (combined CPA) on Xoodoo consistently recovers the correct secret bits, with each attack using 43,860 traces on average and only 12 correlations, compared to 61,380 traces for the standard CPA attack with 64 correlations. The Snake attack thus requires about one-fifth as many correlation values as the standard CPA. For Keccak-p, the difference is more drastic: the Snake attack recovers the secret bits invariably after 771,600 traces with just 20 correlations, whereas the standard CPA attack operates on 1,024 correlations (about fifty times more than the Snake attack) and requires 1,223,400 traces.
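    The Python sketch below illustrates only the core CPA ranking step described above: secret hypotheses are ranked by the absolute Pearson correlation between a hypothetical register-flip (Hamming-distance) leakage model and noisy traces, here for a 3-bit χ S-box. It is a simplified, assumed setup, not the paper's attack (which enumerates 2^(2n) hypotheses against a round-based hardware implementation); the trace count and noise level are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

N_BITS = 3            # 3-bit chi, as in Xoodoo columns
N_TRACES = 2000
NOISE = 2.0

def chi(x, n=N_BITS):
    """chi nonlinear map: y_i = x_i XOR (NOT x_{i+1} AND x_{i+2})."""
    bits = [(x >> i) & 1 for i in range(n)]
    out = 0
    for i in range(n):
        out |= (bits[i] ^ ((bits[(i + 1) % n] ^ 1) & bits[(i + 2) % n])) << i
    return out

def hamming_weight(v):
    return bin(int(v)).count("1")

# Assumed leakage model for this sketch: power is proportional to the
# Hamming distance between the register content before (secret XOR known
# input) and after applying chi, plus Gaussian noise.
secret = 0b101
inputs = rng.integers(0, 2 ** N_BITS, N_TRACES)
state = inputs ^ secret
traces = np.array([hamming_weight(s ^ chi(s)) for s in state], dtype=float)
traces += rng.normal(0.0, NOISE, N_TRACES)

# Standard CPA step: one correlation coefficient per secret hypothesis;
# the hypothesis with the highest |correlation| is the most likely secret.
scores = []
for guess in range(2 ** N_BITS):
    s = inputs ^ guess
    model = np.array([hamming_weight(x ^ chi(x)) for x in s], dtype=float)
    scores.append(abs(np.corrcoef(model, traces)[0, 1]))

ranking = np.argsort(scores)[::-1]
print("best guess:", format(int(ranking[0]), "03b"),
      "true secret:", format(secret, "03b"))
```

    The Snake attack additionally combines several such correlation coefficients (sums of their absolute or squared values) per targeted bit, which is what reduces the number of required correlations from 2^(2n) to roughly n·2^2.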

    Semi-Automated Rasch Analysis with Differential Item Functioning

    Rasch analysis is a procedure to develop and validate instruments that aim to measure persons' traits. However, manual Rasch analysis is a complex and time-consuming task, even more so when the possibility of differential item functioning (DIF) is taken into consideration. Furthermore, manual Rasch analysis by construction relies on a modeller's subjective choices. As an alternative approach, we introduce a semi-automated procedure based on the optimization of a new criterion, called the in-plus-out-of-questionnaire log-likelihood with differential item functioning (IPOQLL-DIF), which extends our previous criterion. We illustrate our procedure on artificially generated data as well as on several real-world datasets containing potential DIF items. On these real-world datasets, our procedure found instruments with clinimetric properties similar to those suggested by experts through manual analyses.
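    The abstract does not spell out the IPOQLL-DIF criterion itself, so the sketch below only shows the dichotomous Rasch model log-likelihood that such likelihood-based criteria build on, evaluated on simulated persons and items; all names and parameter values are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

def rasch_prob(theta, beta):
    """Rasch model: P(person answers item correctly) = logistic(theta - beta)."""
    return 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))

def rasch_loglik(responses, theta, beta):
    """Log-likelihood of a 0/1 response matrix (persons x items)."""
    p = rasch_prob(theta, beta)
    return np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

# Toy data: 200 simulated persons, 10 items (purely illustrative).
rng = np.random.default_rng(0)
theta_true = rng.normal(0, 1, 200)               # person abilities
beta_true = np.linspace(-1.5, 1.5, 10)           # item difficulties
responses = (rng.random((200, 10)) < rasch_prob(theta_true, beta_true)).astype(float)

print("log-likelihood at true parameters:",
      round(float(rasch_loglik(responses, theta_true, beta_true)), 1))
```

    A semi-automated procedure of the kind described would repeatedly evaluate such a likelihood-based criterion while adding, removing, or splitting items (for DIF) and keep the item set that optimizes it, rather than relying on a modeller's manual judgement at each step.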

    Personalized monitoring of ambulatory function with a smartphone 2-minute walk test in multiple sclerosis

    Background: Remote smartphone-based 2-minute walking tests (s2MWTs) allow frequent and potentially sensitive measurements of ambulatory function. Objective: To investigate the s2MWT for the assessment of, and its responsiveness to change in, ambulatory function in MS. Methods: One hundred two multiple sclerosis (MS) patients and 24 healthy controls (HCs) performed weekly s2MWTs on their own smartphones for 12 and 3 months, respectively. The timed 25-foot walk test (T25FW) and Expanded Disability Status Scale (EDSS) were assessed at 3-month intervals. Anchor-based (using T25FW and EDSS) and distribution-based (curve-fitting) methods were used to assess the responsiveness of the s2MWT. A local linear trend model was used to fit the weekly s2MWT scores of individual patients. Results: A total of 4811 and 355 s2MWT scores were obtained in patients (n = 94) and HCs (n = 22), respectively. The s2MWT demonstrated large variability (65.6 m) relative to the average score (129.5 m) and was inadequately responsive to anchor-based change in clinical outcomes. Curve fitting separated the trend from noise in high-temporal-resolution individual-level data, and statistically reliable changes were detected in 45% of patients. Conclusions: In group-level analyses, clinically relevant change was insufficiently detected due to the large variability of sporadic measurements. Individual-level curve fitting reduced the variability of the s2MWT, enabling the detection of statistically reliable changes in ambulatory function.
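    As a hedged sketch of the individual-level curve fitting described above, the Python snippet below fits a local linear trend (state-space) model to simulated weekly walking distances using statsmodels; the data, noise level, and use of UnobservedComponents are assumptions for illustration, not the study's actual model specification or reliable-change criterion.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical weekly walking distances (metres) for one person: a slow
# decline buried in week-to-week noise.  Values are simulated, not study data.
rng = np.random.default_rng(0)
weeks = 52
true_level = 130 - 0.4 * np.arange(weeks)          # gradual decline
y = true_level + rng.normal(0, 15, weeks)          # noisy weekly s2MWT scores

# Local linear trend model, as named in the abstract; statsmodels'
# UnobservedComponents is one common way to fit it.
model = sm.tsa.UnobservedComponents(y, level="local linear trend")
res = model.fit(disp=False)

smoothed_level = res.smoothed_state[0]             # de-noised trend estimate
change = smoothed_level[-1] - smoothed_level[0]
print(f"estimated change over follow-up: {change:.1f} m")
```

    The smoothed level separates a gradual trend from measurement noise, which is the basis for flagging a statistically reliable change in an individual patient rather than comparing two noisy single measurements.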