Classification of Hungarian medieval silver coins using x-ray fluorescent spectroscopy and multivariate data analysis
A set of silver coins from the collection of the Déri Museum, Debrecen (Hungary) was examined by X-ray
fluorescent elemental analysis with the aim of assigning the coins to different groups with the best possible
precision based on the acquired chemical information, and of building models that arrange the coins according
to their historical periods.
Results: Principal component analysis, linear discriminant analysis, partial least squares discriminant analysis,
classification and regression trees and multivariate curve resolution with alternating least squares were applied to
reveal the dominant patterns in the data and to classify the coins into several groups. We also identified those
chemical components that are present in small percentages but are useful for classifying the coins. With the
coins divided into two groups according to the relevant historical periods, we obtained a correct classification
rate of 76-78% based on the chemical compositions.
Conclusions: X-ray fluorescent elemental analysis together with multivariate data analysis methods is suitable to
group medieval coins according to historical periods.
Keywords: X-ray fluorescence spectroscopy, Multivariate techniques, Coin, Silver, Middle Ages
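The abstract's workflow of projecting chemical compositions with PCA and then classifying coins into period groups can be sketched as below. This is a minimal, hedged illustration on synthetic data, not the authors' actual pipeline: the element columns, concentrations, and the nearest-centroid rule (a simple stand-in for LDA) are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic XRF compositions (wt%) for two hypothetical historical periods.
# Columns are illustrative only, e.g. Ag, Cu, Pb, Au trace levels.
period_a = rng.normal([92.0, 6.0, 1.2, 0.8], 0.5, size=(40, 4))
period_b = rng.normal([85.0, 12.0, 2.0, 1.0], 0.5, size=(40, 4))
X = np.vstack([period_a, period_b])
y = np.array([0] * 40 + [1] * 40)

# PCA via SVD on the mean-centred composition matrix.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T  # project onto the first two principal components

# Nearest-centroid classification in PC space (a simple stand-in for LDA).
centroids = np.array([scores[y == k].mean(axis=0) for k in (0, 1)])
dists = np.linalg.norm(scores[:, None, :] - centroids[None, :, :], axis=2)
pred = np.argmin(dists, axis=1)
accuracy = (pred == y).mean()
print(f"classification accuracy: {accuracy:.2f}")
```

On well-separated synthetic groups the projection cleanly recovers the two clusters; the 76-78% rate reported in the abstract reflects the much harder real data.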
Binary superlattice design by controlling DNA-mediated interactions
Most binary superlattices created using DNA functionalization or other
approaches rely on particle size differences to achieve compositional order and
structural diversity. Here we study two-dimensional (2D) assembly of
DNA-functionalized micron-sized particles (DFPs), and employ a strategy that
leverages the tunable disparity in interparticle interactions, and thus
enthalpic driving forces, to open new avenues for design of binary
superlattices that do not rely on the ability to tune particle size (i.e.,
entropic driving forces). Our strategy employs tailored blends of complementary
strands of ssDNA to control interparticle interactions between micron-sized
silica particles in a binary mixture to create compositionally diverse 2D
lattices. We show that the particle arrangement can be further controlled by
changing the stoichiometry of the binary mixture in certain cases. With this
approach, we demonstrate the ability to program the particle assembly into
square, pentagonal, and hexagonal lattices. In addition, different particle
types can be compositionally ordered in square checkerboard and hexagonal
alternating-string, honeycomb, and Kagome arrangements.
Comment: 4 figures in the main text. 5 figures in the supplementary information.
Dictionary-based Tensor Canonical Polyadic Decomposition
To ensure interpretability of extracted sources in tensor decomposition, we
introduce in this paper a dictionary-based tensor canonical polyadic
decomposition which enforces one factor to belong exactly to a known
dictionary. A new formulation of sparse coding is proposed which enables high
dimensional tensors dictionary-based canonical polyadic decomposition. The
benefits of using a dictionary in tensor decomposition models are explored both
in terms of parameter identifiability and estimation accuracy. Performances of
the proposed algorithms are evaluated on the decomposition of simulated data
and the unmixing of hyperspectral images.
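The key constraint described above, forcing one CP factor to belong exactly to a known dictionary, can be illustrated in isolation. The sketch below shows only the dictionary-projection step (snapping each estimated factor column to its best-matching atom by absolute correlation, as in matching pursuit); the surrounding ALS loop is omitted, and the dictionary, sizes, and noise level are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

# A known dictionary of candidate signatures (unit-norm columns).
D = rng.normal(size=(50, 20))
D /= np.linalg.norm(D, axis=0)

# Ground truth: the constrained CP factor uses atoms 3, 7, and 12.
true_atoms = [3, 7, 12]
A_true = D[:, true_atoms]

# Pretend an unconstrained ALS update produced a noisy factor estimate.
A_est = A_true + 0.05 * rng.normal(size=A_true.shape)

# Dictionary projection: replace each estimated column with the atom of
# highest absolute correlation (the "exactly in the dictionary" constraint).
corr = np.abs(D.T @ A_est)      # (n_atoms, n_components)
matched = corr.argmax(axis=0)
A_proj = D[:, matched]

print("matched atom indices:", matched.tolist())
```

Because the projected factor is an exact selection of dictionary atoms, the extracted sources remain directly interpretable, which is the motivation stated in the abstract.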
Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches
Imaging spectrometers measure electromagnetic energy scattered in their
instantaneous field view in hundreds or thousands of spectral channels with
higher spectral resolution than multispectral cameras. Imaging spectrometers
are therefore often referred to as hyperspectral cameras (HSCs). Higher
spectral resolution enables material identification via spectroscopic analysis,
which facilitates countless applications that require identifying materials in
scenarios unsuitable for classical spectroscopic analysis. Due to low spatial
resolution of HSCs, microscopic material mixing, and multiple scattering,
spectra measured by HSCs are mixtures of spectra of materials in a scene. Thus,
accurate estimation requires unmixing. Pixels are assumed to be mixtures of a
few materials, called endmembers. Unmixing involves estimating all or some of:
the number of endmembers, their spectral signatures, and their abundances at
each pixel. Unmixing is a challenging, ill-posed inverse problem because of
model inaccuracies, observation noise, environmental conditions, endmember
variability, and data set size. Researchers have devised and investigated many
models searching for robust, stable, tractable, and accurate unmixing
algorithms. This paper presents an overview of unmixing methods from the time
of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models
are first discussed. Signal-subspace, geometrical, statistical, sparsity-based,
and spatial-contextual unmixing algorithms are described. Mathematical problems
and potential solutions are described. Algorithm characteristics are
illustrated experimentally.
Comment: This work has been accepted for publication in the IEEE Journal of
Selected Topics in Applied Earth Observations and Remote Sensing.
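The linear mixing model underlying the abstract, each pixel spectrum as a combination of a few endmember signatures weighted by abundances, can be sketched as below. This is a toy illustration with synthetic endmembers and noiseless data; plain least squares stands in for the constrained (nonnegative, sum-to-one) solvers the survey actually covers.

```python
import numpy as np

rng = np.random.default_rng(2)

bands, n_endmembers, n_pixels = 100, 3, 500

# Synthetic nonnegative endmember signatures E (bands x endmembers).
E = np.abs(rng.normal(size=(bands, n_endmembers))) + 0.1

# Abundances: nonnegative and summing to one at each pixel.
A = rng.dirichlet(np.ones(n_endmembers), size=n_pixels).T  # (p, pixels)

# Linear mixing model: each pixel spectrum is a convex combination of
# the endmember spectra.
X = E @ A

# Abundance estimation; unconstrained least squares recovers A exactly
# here because the data are noiseless and E has full column rank.
A_hat = np.linalg.lstsq(E, X, rcond=None)[0]
err = np.abs(A_hat - A).max()
print(f"max abundance error: {err:.2e}")
```

With observation noise, endmember variability, or unknown endmembers, the problem becomes the ill-posed inverse problem the survey describes, and the geometrical, statistical, and sparse-regression methods it reviews come into play.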
Symmetric Tensor Decomposition by an Iterative Eigendecomposition Algorithm
We present an iterative algorithm, called the symmetric tensor eigen-rank-one
iterative decomposition (STEROID), for decomposing a symmetric tensor into a
real linear combination of symmetric rank-1 unit-norm outer factors using only
eigendecompositions and least-squares fitting. Originally designed for a
symmetric tensor with an order being a power of two, STEROID is shown to be
applicable to any order through an innovative tensor embedding technique.
Numerical examples demonstrate the high efficiency and accuracy of the proposed
scheme even for large scale problems. Furthermore, we show how STEROID readily
solves a problem in nonlinear block-structured system identification and
nonlinear state-space identification.
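The two ingredients named in the abstract, eigendecompositions plus a least-squares fit, can be sketched for an order-4 symmetric tensor (an order that is a power of two, the case STEROID was originally designed for). This is a hedged toy sketch on a tensor built from orthogonal rank-1 terms, not the authors' reference implementation.

```python
import numpy as np

n = 3
a = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
b = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)

def sym_rank1(v):
    # Symmetric rank-1 order-4 tensor v (outer) v (outer) v (outer) v.
    return np.einsum("i,j,k,l->ijkl", v, v, v, v)

T = 2.0 * sym_rank1(a) + 1.0 * sym_rank1(b)

# Level 1: reshape to an n^2 x n^2 symmetric matrix and eigendecompose.
M = T.reshape(n * n, n * n)
vals, vecs = np.linalg.eigh(M)

# Level 2: each significant eigenvector reshapes to a symmetric n x n
# matrix whose eigenvectors are candidate rank-1 directions.
candidates = []
for lam, v in zip(vals, vecs.T):
    if abs(lam) < 1e-10:
        continue
    V = v.reshape(n, n)
    w_vals, w_vecs = np.linalg.eigh((V + V.T) / 2)
    for mu, w in zip(w_vals, w_vecs.T):
        if abs(mu) > 1e-10:
            candidates.append(w)

# Least-squares fit of T in the span of the candidate rank-1 terms.
B = np.stack([sym_rank1(w).ravel() for w in candidates], axis=1)
coef, *_ = np.linalg.lstsq(B, T.ravel(), rcond=None)
residual = np.linalg.norm(T.ravel() - B @ coef)
print(f"reconstruction residual: {residual:.2e}")
```

For this orthogonal toy case a single pass is exact; in general the method iterates on the residual, and the embedding technique mentioned in the abstract extends it beyond power-of-two orders.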
Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and
their operations presented in Part 1. It focuses on tensor network models for
super-compressed higher-order representation of data/parameters and related
cost functions, while providing an outline of their applications in machine
learning and data analytics. A particular emphasis is on the tensor train (TT)
and Hierarchical Tucker (HT) decompositions, and their physically meaningful
interpretations which reflect the scalability of the tensor network approach.
Through a graphical approach, we also elucidate how, by virtue of the
underlying low-rank tensor approximations and sophisticated contractions of
core tensors, tensor networks have the ability to perform distributed
computations on otherwise prohibitively large volumes of data/parameters,
thereby alleviating or even eliminating the curse of dimensionality. The
usefulness of this concept is illustrated over a number of applied areas,
including generalized regression and classification (support tensor machines,
canonical correlation analysis, higher order partial least squares),
generalized eigenvalue decomposition, Riemannian optimization, and in the
optimization of deep neural networks. Part 1 and Part 2 of this work can be
used either as stand-alone separate texts, or indeed as a conjoint
comprehensive review of the exciting field of low-rank tensor networks and
tensor decompositions.
Comment: 232 pages.
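The tensor train (TT) format emphasized in the abstract can be sketched with the standard TT-SVD construction: peel off one mode at a time with an SVD of the current unfolding. The sketch below keeps full ranks so the reconstruction is exact; it is a minimal illustration, not the monograph's tooling, and truncating the singular values at the marked line is where the "super-compression" would happen.

```python
import numpy as np

rng = np.random.default_rng(3)

# A random 4th-order tensor to decompose (exact TT here, no truncation).
shape = (4, 5, 6, 7)
T = rng.normal(size=shape)

# TT-SVD: sequentially split off each mode via an SVD of the unfolding.
cores, r_prev, C = [], 1, T
for n_k in shape[:-1]:
    C = C.reshape(r_prev * n_k, -1)
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    r_next = len(s)  # truncate here (keep fewer singular values) to compress
    cores.append(U.reshape(r_prev, n_k, r_next))
    C = s[:, None] * Vt
    r_prev = r_next
cores.append(C.reshape(r_prev, shape[-1], 1))

# Contract the TT cores back into a full tensor to check the factorization.
full = cores[0]
for G in cores[1:]:
    full = np.tensordot(full, G, axes=([full.ndim - 1], [0]))
full = full.reshape(shape)  # drop the boundary ranks of size 1
err = np.linalg.norm(full - T) / np.linalg.norm(T)
print(f"relative reconstruction error: {err:.2e}")
```

Storage drops from the product of all mode sizes to a sum of small core sizes, which is the mechanism by which TT networks alleviate the curse of dimensionality discussed in the abstract.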
Application of Terahertz Technology in Biomolecular Analysis and Medical Diagnosis
Terahertz technology is a nondestructive technique that has progressed significantly in scientific research and has gained considerable attention in the analysis of biological molecules, cells, tissues, and organs. In the past decade, several studies have reported applications of terahertz technology in medical testing and diagnosis. Here, we summarize the characteristics of terahertz radiation, terahertz spectroscopy, and terahertz imaging technology combined with chemometrics. This chapter focuses on the research progress in analyzing cancerous tissues using terahertz spectroscopy and terahertz imaging technology. Furthermore, the problems that remain to be solved and the development directions of terahertz spectroscopy and terahertz imaging technology are discussed.