Multivariate Analysis Applications in X-ray Diffraction
Multivariate analysis (MA) is becoming a fundamental tool for efficiently processing the large amounts of data collected in X-ray diffraction experiments. Multi-wedge data collections can increase data quality in the case of tiny protein crystals; in situ or operando setups allow changes occurring in powder samples during repeated fast measurements to be investigated; pump-and-probe experiments at X-ray free-electron laser (XFEL) sources supply structural characterization of fast photo-excitation processes. In all these cases, MA can facilitate the extraction of relevant information hidden in the data, opening the possibility of automatic data processing even in the absence of a priori structural knowledge. MA methods recently used in the field of X-ray diffraction are reviewed and described here, with an outline of their theoretical background and possible applications. The use of MA in the framework of the modulated enhanced diffraction technique is described in detail.
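As a rough illustration of the kind of MA workflow such a review covers (not the specific pipeline of any cited experiment), a principal component decomposition of a series of powder patterns could be sketched as follows; the data array, the number of scans and the number of retained components are all hypothetical placeholders.

```python
import numpy as np

# Hypothetical stack of powder diffraction patterns collected in situ:
# rows = successive measurements, columns = intensity at each 2-theta point.
rng = np.random.default_rng(0)
patterns = rng.random((50, 2048))       # placeholder for real data

# Center the data and extract principal components via SVD,
# a common first step in multivariate analysis of a diffraction series.
mean_pattern = patterns.mean(axis=0)
centered = patterns - mean_pattern
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

explained = s**2 / np.sum(s**2)         # fraction of variance per component
scores = U[:, :3] * s[:3]               # how strongly each scan expresses each component
components = Vt[:3]                     # "loadings" in 2-theta space

print("variance explained by first 3 components:", explained[:3])
```

The loadings live in 2-theta space and the scores track how each successive measurement expresses them, which is what makes such decompositions useful for following changes during in situ or operando experiments.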
Dimension-reduction and discrimination of neuronal multi-channel signals
Covariate-assisted ranking and screening for large-scale two-sample inference
Two-sample multiple testing has a wide range of applications. The conventional practice first reduces the original observations to a vector of p-values and then chooses a cutoff to adjust for multiplicity. However, this data reduction step could cause significant loss of information and thus lead to suboptimal testing procedures. We introduce a new framework for two-sample multiple testing by incorporating a carefully constructed auxiliary variable in inference to improve the power. A data-driven multiple-testing procedure is developed by employing a covariate-assisted ranking and screening (CARS) approach that optimally combines the information from both the primary and the auxiliary variables. The proposed CARS procedure is shown to be asymptotically valid and optimal for false discovery rate control. The procedure is implemented in the R package CARS. Numerical results confirm the effectiveness of CARS in false discovery rate control and show that it achieves substantial power gain over existing methods. CARS is also illustrated through an application to the analysis of a satellite imaging data set for supernova detection.
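For context, the conventional pipeline criticized in the abstract (reduce each feature to a two-sample p-value, then apply a multiplicity cutoff) can be sketched in a few lines. This is only the baseline that CARS improves on, with entirely synthetic data; the actual procedure is the one implemented in the R package CARS.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
m = 5000                                  # number of simultaneous tests (hypothetical)
x = rng.normal(size=(m, 20))              # sample 1: 20 observations per feature
y = rng.normal(size=(m, 20))              # sample 2
y[:200] += 0.8                            # a few features with a true difference

# Step 1: reduce each feature to a two-sample p-value (the information-losing step).
_, pvals = stats.ttest_ind(x, y, axis=1)

# Step 2: Benjamini-Hochberg cutoff at FDR level alpha.
alpha = 0.1
order = np.argsort(pvals)
ranked = pvals[order]
passed = ranked <= alpha * np.arange(1, m + 1) / m
k = passed.nonzero()[0].max() + 1 if passed.any() else 0
rejected = order[:k]                      # indices of the declared discoveries
print("discoveries:", len(rejected))
```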
Sparse Linear Discriminant Analysis with more Variables than Observations
It is known that classical linear discriminant analysis (LDA) performs classification well when the number of observations is much larger than the number of variables. However, when the number of variables is larger than the number of observations, classical LDA cannot be performed because the within-group covariance matrix is singular. Recently proposed LDA methods that can handle a singular within-group covariance matrix are reviewed. Most of these methods focus on regularizing the within-class covariance matrix, but they give less attention to sparsity (variable selection), interpretation and computational cost, which are important in high-dimensional problems. The fact that most of the original variables may be irrelevant or redundant suggests looking for sparse solutions that involve only a small portion of the variables. In the present work, new sparse LDA methods are proposed that are suited to high-dimensional data. The first two methods assume that groups share a common within-group covariance matrix and approximate this matrix by a diagonal matrix; one is a variant of the other that sacrifices some accuracy for greater computational speed. Both methods obtain sparsity by minimizing an l1 norm and maximizing discrimination power under a common loss function with a tuning parameter. The third method assumes that groups share common eigenvectors in the eigenvector-eigenvalue decompositions of their within-group covariance matrices, while their eigenvalues may differ. The fourth method assumes the within-group covariance matrices are proportional to each other. The fifth method is derived from the Dantzig selector and uses optimal scoring to construct the discriminant function. The third and fourth methods achieve sparsity by imposing a cardinality constraint, with the cardinality level determined by cross-validation. All the new methods reduce their computation time by sequentially determining individual discriminant functions. The methods are applied to six real data sets and perform well when compared with two existing methods.
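A minimal sketch in the spirit of the first two methods (a diagonal within-group covariance combined with a sparsity-inducing threshold on the standardized mean differences) is given below. It is not the exact estimator proposed in the work; the helper name, data and threshold are placeholders.

```python
import numpy as np

def sparse_diagonal_lda(X, y, threshold=0.5):
    """Two-class discriminant weights using a diagonal within-group covariance
    and soft-thresholded standardized mean differences (sparse weights).
    A sketch of the general idea, not a specific published estimator."""
    X0, X1 = X[y == 0], X[y == 1]
    # Pooled per-variable within-group variance (diagonal covariance approximation).
    var = ((X0.var(axis=0, ddof=1) * (len(X0) - 1) +
            X1.var(axis=0, ddof=1) * (len(X1) - 1)) / (len(X) - 2))
    d = (X1.mean(axis=0) - X0.mean(axis=0)) / np.sqrt(var + 1e-12)
    # Soft-thresholding induces sparsity: small standardized differences are zeroed.
    w = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)
    return w  # nonzero entries correspond to the selected variables

# Tiny synthetic example: 40 observations, 200 variables, 5 of them informative.
rng = np.random.default_rng(2)
X = rng.normal(size=(40, 200))
y = np.repeat([0, 1], 20)
X[y == 1, :5] += 1.5
w = sparse_diagonal_lda(X, y, threshold=1.0)
print("selected variables:", np.flatnonzero(w))
```

In practice the threshold plays the role of the tuning parameter mentioned in the abstract and would be chosen by cross-validation rather than fixed by hand.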
Reference priors for high energy physics
Bayesian inferences in high energy physics often use uniform prior distributions for parameters about which little or no information is available before data are collected. The resulting posterior distributions are therefore sensitive to the choice of parametrization for the problem and may even be improper if this choice is not carefully considered. Here we describe an extensively tested methodology, known as reference analysis, which allows one to construct parametrization-invariant priors that embody the notion of minimal informativeness in a mathematically well-defined sense. We apply this methodology to general cross section measurements and show that it yields sensible results. A recent measurement of the single top quark cross section illustrates the relevant techniques in a realistic situation.
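As a toy version of the cross-section setting, consider a background-free counting experiment with Poisson mean mu = k*sigma for perfectly known k. In this one-parameter case the reference prior coincides with the Jeffreys prior pi(sigma) proportional to sigma^(-1/2), so the posterior is Gamma(n + 1/2, rate k), versus Gamma(n + 1, rate k) under a flat prior. The sketch below compares the two for made-up numbers; the measurements discussed in the paper involve backgrounds and nuisance parameters and are considerably more involved.

```python
import numpy as np
from scipy import stats

# Hypothetical counting experiment: n events observed, expected mean mu = k * sigma,
# where k = efficiency * luminosity is assumed perfectly known and there is no background.
n = 7          # observed events (made-up)
k = 2.0        # events per unit cross section (made-up)

# Flat prior on sigma       -> posterior Gamma(shape n + 1,   rate k)
# Jeffreys/reference prior  -> posterior Gamma(shape n + 1/2, rate k)
flat_post = stats.gamma(a=n + 1.0, scale=1.0 / k)
ref_post = stats.gamma(a=n + 0.5, scale=1.0 / k)

for name, post in [("flat prior", flat_post), ("reference prior", ref_post)]:
    lo, hi = post.ppf([0.16, 0.84])     # central 68% credible interval
    print(f"{name}: mean = {post.mean():.2f}, 68% interval = [{lo:.2f}, {hi:.2f}]")
```

The point of the comparison is the parametrization sensitivity noted above: a prior flat in sigma is not flat in, say, 1/sigma, whereas the reference prior construction is invariant under such reparametrizations.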