Fine mapping of causal HLA variants using penalised regression
The identification of risk loci in the Human Leukocyte Antigen (HLA) region using single-SNP association tests has been hampered by the extent of linkage disequilibrium (LD). Penalised regression via the Least Absolute Shrinkage and Selection Operator (LASSO) offers a method for variable selection in multi-SNP analyses and handles the problem of multicollinearity among predictors. The method applies a penalty that shrinks the estimates of the regression coefficients towards zero. In a Bayesian framework, this is equivalent to placing a double exponential (DE) prior distribution on the coefficients with its mode at zero, corresponding to the prior belief that most effects are negligible. Parameter inference is based on the posterior mode, with non-zero values indicating marker-disease associations.
Single-SNP, stepwise regression and the LASSO approach were applied to case-control studies of rheumatoid arthritis, a disease which has been associated with markers from the HLA region. A generalisation of the LASSO called the HyperLasso (HLASSO), which uses the normal-exponential-gamma prior in place of the DE, was also investigated. These approaches were applied to data from the Genetics of Rheumatoid Arthritis (GoRA) study. Genotype imputation was used as a means to jointly analyse the GoRA and the Wellcome Trust Case Control Consortium (WTCCC) HLA SNPs. The North American Rheumatoid Arthritis Consortium (NARAC) study was used to validate the findings.
After controlling for type-I error, the penalised approaches greatly reduced the number of positive signals compared to single-SNP analysis, suggesting that correlation among SNP loci was better handled. The HLASSO results were sparser than, but similar to, the LASSO results. One SNP in HLA-DPB1 was replicated in the NARAC study. In both models, the robustness of the retained variables was verified by bootstrapping. The results suggest that SNP selection using LASSO or HLASSO offers a substantial benefit in identifying risk loci in regions of high LD.
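As a minimal illustration of the selection idea described above, the sketch below fits an L1-penalised (LASSO-type) logistic regression to a simulated case-control genotype matrix; the data, effect sizes, and penalty strength are all hypothetical choices for the example, not values from the study:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated genotype matrix: 500 individuals x 200 SNPs, with crude
# linkage disequilibrium created by mixing neighbouring columns.
n, p = 500, 200
base = rng.binomial(2, 0.3, size=(n, p)).astype(float)
X = 0.7 * base + 0.3 * np.roll(base, 1, axis=1)

# Two assumed causal SNPs; disease status drawn from a logistic model.
beta = np.zeros(p)
beta[[20, 120]] = 1.5
logits = X @ beta - 2.0
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

# L1-penalised logistic regression: the LASSO penalty shrinks most
# coefficients exactly to zero, so the non-zero coefficients flag
# candidate marker-disease associations despite the correlated SNPs.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
lasso.fit(X, y)

selected = np.flatnonzero(lasso.coef_[0])
print("SNPs retained:", selected)
```

A single-SNP analysis would instead test each column separately and tend to flag whole LD blocks; the joint penalised fit keeps only a sparse subset.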
Forecasting electricity spot market prices with a k-factor GIGARCH process
In this article, we investigate conditional mean and variance forecasts using a dynamic model following a k-factor GIGARCH process. We are particularly interested in calculating the conditional variance of the prediction error. We apply this method to electricity prices and evaluate spot-price forecasts up to one month ahead. We conclude that the k-factor GIGARCH process is a suitable tool to forecast spot prices, using the classical RMSE criterion.
Keywords: conditional mean; conditional variance; forecast; electricity prices; GIGARCH process
A k-factor GIGARCH process: estimation and application to electricity market spot prices
Some crucial time series of market data, such as electricity spot prices, exhibit long memory, in the sense of slowly decaying correlations combined with heteroscedasticity. To be able to model such behaviour, we consider the k-factor GIGARCH process and propose two methods to address the related parameter estimation problem. For each method, we develop the asymptotic theory for this estimation.
Keywords: GIGARCH process; estimation theory; electricity spot prices
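The k-factor GIGARCH process has no standard off-the-shelf implementation, so the sketch below uses a plain GARCH(1,1) recursion as a simplified stand-in (with assumed parameters omega, alpha, beta) to show the two ingredients the abstracts above rely on: one-step-ahead conditional variance forecasts and the RMSE criterion used to score them:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a GARCH(1,1) series as a simplified stand-in for the
# k-factor GIGARCH process; parameters are assumed, not estimated.
omega, alpha, beta = 0.1, 0.1, 0.85
T = 1000
eps = np.zeros(T)
sigma2 = np.zeros(T)
sigma2[0] = omega / (1 - alpha - beta)   # unconditional variance
for t in range(1, T):
    sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    eps[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

# One-step-ahead conditional variance forecast from the same recursion,
# scored with the RMSE criterion against the realised squared shocks.
forecast = omega + alpha * eps[:-1] ** 2 + beta * sigma2[:-1]
rmse = np.sqrt(np.mean((eps[1:] ** 2 - forecast) ** 2))
print(f"RMSE of variance forecasts: {rmse:.3f}")
```

The long-memory (Gegenbauer) filtering that distinguishes the GIGARCH model would replace the simple one-lag recursion here; the forecasting and RMSE machinery is unchanged.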
Comparison of local electrochemical impedance measurements derived from bi-electrode and microcapillary techniques
In the present paper, local electrochemical impedance spectra were obtained on a 316L stainless steel with two configurations: a dual microelectrode (bi-electrode) and microcapillaries. With the bi-electrode, the local impedance measurements were made from the ratio of the applied voltage to the local current density calculated by applying Ohm's law. With the microelectrochemical cells, the specimen surface area in contact with the electrolyte is limited by glass microcapillaries, and the local impedance was defined from the ratio of the local potential to the local current restricted to the analysed surface area. Differences and similarities observed in the local impedance spectra obtained with the two configurations were described.
On Stochastic Error and Computational Efficiency of the Markov Chain Monte Carlo Method
In Markov Chain Monte Carlo (MCMC) simulations, thermal equilibrium quantities are estimated by ensemble averages over a sample set containing a large number of correlated samples. These samples are selected in accordance with the probability distribution function, known from the partition function of the equilibrium state. Since the stochastic error of the simulation results is significant, it is desirable to understand the variance of the ensemble-average estimate, which depends on the sample size (i.e., the total number of samples in the set) and the sampling interval (i.e., the number of cycles between two consecutive samples). Although large sample sizes reduce the variance, they increase the computational cost of the simulation. For a given CPU time, the sample size can be reduced greatly by increasing the sampling interval, while the corresponding increase in variance remains negligible if the original sampling interval is very small. In this work, we report a few general rules that relate the variance to the sample size and the sampling interval. These results are observed and confirmed numerically. The variance rules are derived for the MCMC method but are also valid for correlated samples obtained using other Monte Carlo methods. The main contribution of this work is the theoretical proof of these numerical observations and the set of assumptions that lead to them.
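The trade-off between sample size and sampling interval described above can be demonstrated with a toy Metropolis chain; the target distribution, step size, and block-averaging estimator below are illustrative choices, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(2)

def metropolis(n_steps, step=0.5):
    """Metropolis chain targeting a standard normal; returns every state."""
    x = 0.0
    out = np.empty(n_steps)
    for i in range(n_steps):
        prop = x + step * rng.uniform(-1, 1)
        # Accept with probability min(1, pi(prop)/pi(x)) for pi ~ exp(-x^2/2).
        if rng.random() < np.exp(0.5 * (x * x - prop * prop)):
            x = prop
        out[i] = x
    return out

# Fixed CPU budget = fixed total chain length; compare estimator
# variances for different sampling intervals (thinning factors).
chain = metropolis(200_000)
for interval in (1, 10, 100):
    samples = chain[::interval]
    # Block averaging: the variance of 100 block means, divided by 100,
    # approximates the variance of the overall ensemble average.
    blocks = samples[: len(samples) // 100 * 100].reshape(100, -1).mean(axis=1)
    print(interval, len(samples), blocks.var(ddof=1) / 100)
```

While the thinned sets are far smaller, their estimator variance stays close to that of the full chain as long as the interval does not greatly exceed the chain's correlation time, which is the behaviour the paper's variance rules formalise.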
Influence of cutting process mechanics on surface integrity and electrochemical behavior of OFHC copper
The authors gratefully acknowledge the support received from IC ARTS and CEA Valduc. Superfinishing machining has a particular impact on cutting mechanics, surface integrity and local electrochemical behavior. Material removal during this process induces geometrical, mechanical and micro-structural modifications in the machined surface and sub-surface. However, a conventional 3D cutting process remains complex to study in terms of analytical/numerical modeling and experimental process monitoring. Researchers therefore ask whether a less intricate configuration, such as orthogonal cutting, can provide information about surface integrity as close as possible to that generated by a 3D cutting process. For that reason, in the present paper, two different machining configurations were compared: face turning and orthogonal cutting. The work material is oxygen-free high-conductivity (OFHC) copper and the cutting tools are uncoated cemented carbide. The research work was performed in three steps. In the first step, the process mechanics of superfinishing machining of OFHC copper was analysed. In the second step, the surface integrity and the electrochemical behavior of the machined samples were analyzed. Finally, in the third step, correlations between input parameters and output measures were established using statistical techniques. Results show that at low ratios between the uncut chip thickness and the cutting edge radius, the surface integrity and cutting energy are highly affected by the ploughing phenomenon. Otherwise, the most relevant cutting parameter is the feed. In order to compare face turning with orthogonal cutting, a new geometrical parameter was introduced, which has a strong effect on the electrochemical behavior of the machined surface.
Lagrangian acceleration statistics in a turbulent channel flow
Lagrangian acceleration statistics in a fully developed turbulent channel flow are investigated, based on tracer particle tracking in experiments and direct numerical simulations. The evolution with wall distance of the Lagrangian velocity and acceleration time scales is analyzed. Dependence between acceleration components in the near-wall region is described using cross-correlations and joint probability density functions. The strong streamwise coherent vortices typical of wall-bounded turbulent flows are shown to have a significant impact on the dynamics. This results in a strong anisotropy at small scales in the near-wall region that remains present in most of the channel. Such statistical properties may be used as constraints in building advanced Lagrangian stochastic models to predict the dispersion and mixing of chemical components for combustion or environmental studies.
Comment: accepted for publication in Physical Review Fluids
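The Lagrangian time scales mentioned in the abstract are integral time scales of velocity and acceleration autocorrelations along particle tracks. The sketch below computes them for a synthetic track, using an Ornstein-Uhlenbeck velocity signal (with an assumed time scale tau_v) as a crude stand-in for real tracer data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic single-particle track: an Ornstein-Uhlenbeck velocity
# signal as a stand-in for a Lagrangian velocity sample along a track.
dt, n = 1e-3, 50_000
tau_v = 0.05                           # assumed velocity time scale
v = np.zeros(n)
for i in range(1, n):
    v[i] = v[i - 1] * (1 - dt / tau_v) \
        + np.sqrt(2 * dt / tau_v) * rng.standard_normal()

a = np.gradient(v, dt)                 # acceleration by finite differences

def integral_time(x, dt):
    """Integral time scale: normalised autocorrelation summed to first zero."""
    x = x - x.mean()
    f = np.fft.rfft(x, 2 * len(x))     # zero-padded FFT-based autocorrelation
    c = np.fft.irfft(f * np.conj(f))[: len(x)]
    c /= c[0]
    zero = np.argmax(c <= 0) or len(c) # index of first zero crossing
    return c[:zero].sum() * dt

t_v = integral_time(v, dt)
t_a = integral_time(a, dt)
print(f"velocity time scale ~ {t_v:.4f}, acceleration time scale ~ {t_a:.5f}")
```

As in real turbulence, the acceleration decorrelates much faster than the velocity; applied to tracks binned by wall distance, the same estimator would give the wall-distance profiles the paper analyses.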