Some inequalities contrasting principal component and factor analyses solutions
Principal component analysis (PCA) and factor analysis (FA) are two time-honored dimension reduction methods. In this paper, some inequalities are presented to contrast PCA and FA solutions for the same data set. To this end, we take advantage of the recently established matrix decomposition (MD) formulation of FA. In summary, the resulting inequalities show that (1) FA gives a better fit to the data than PCA, (2) PCA extracts a larger amount of common "information" than FA, and (3) for each variable, its unique variance in FA is larger than its residual variance in PCA minus the one in FA. The resulting inequalities can suggest whether PCA or FA should be used for a particular data set. The conclusions also remain valid for the classic FA formulation not relying on the MD-FA definition, as both "types" of FA provide almost equal solutions. Additionally, the inequalities give a theoretical explanation of some empirically observed tendencies in PCA and FA solutions, e.g., that the absolute values of PCA loadings tend to be larger than those of FA loadings, and that the unique variances in FA tend to be larger than the residual variances of PCA.
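As a numerical illustration of inequality (1) — FA fitting the same correlation matrix at least as well as PCA — the following sketch fits both models by least squares. The data set, factor number, and the simple alternating updates are assumptions for illustration, not the paper's MD algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data set (an assumption, not from the paper): 200 observations of
# 6 variables driven by 2 common factors plus noise.
X = rng.standard_normal((200, 2)) @ rng.standard_normal((2, 6))
X += 0.5 * rng.standard_normal((200, 6))
S = np.corrcoef(X, rowvar=False)   # sample correlation matrix
k = 2                              # number of components / factors

def rank_k(M, k):
    """Best rank-k positive semi-definite approximation (truncated eigendecomposition)."""
    w, V = np.linalg.eigh(M)
    top = np.argsort(w)[::-1][:k]
    return (V[:, top] * np.clip(w[top], 0, None)) @ V[:, top].T

# PCA-style fit of S: a rank-k common part only.
pca_err = np.linalg.norm(S - rank_k(S, k))

# Least-squares FA fit of S: rank-k common part plus diagonal unique variances,
# estimated by alternating updates; each step cannot increase the residual,
# and the iteration starts from the PCA fit (Psi = 0).
Psi = np.zeros(6)
for _ in range(100):
    C = rank_k(S - np.diag(Psi), k)            # update the common part
    Psi = np.clip(np.diag(S - C), 0, None)     # update the unique variances
fa_err = np.linalg.norm(S - C - np.diag(Psi))

print(f"PCA residual: {pca_err:.4f}  FA residual: {fa_err:.4f}")
```

Because the alternating updates start from the PCA solution and are monotone, the FA residual can never exceed the PCA residual, mirroring inequality (1).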
Sparse exploratory factor analysis
Sparse principal component analysis has been a very active research area over the last decade. At the same time, there are very few works on sparse factor analysis. We propose a new contribution to the area by exploring a procedure for sparse factor analysis in which the unknown parameters are found simultaneously.
Computational Identification of Confirmatory Factor Analysis Model with Simplimax Procedures
Confirmatory factor analysis (CFA) refers to the FA procedure with some loadings constrained to be zero. A difficulty in CFA is that the constraint must be specified by users in a subjective manner. To deal with this difficulty, we propose a computational method in which the best CFA solution is obtained optimally without relying on users' judgements. The method consists of procedures at a lower (L) and a higher (H) level: at the L level, for a fixed number of zero loadings, it is determined both which loadings are to be zero and what values are to be given to the remaining nonzero parameters; at the H level, the L-level procedure is performed over different numbers of zero loadings to provide the best solution. In the L-level procedure, Kiers' (1994) simplimax rotation fulfills a key role: the CFA solution under the constraint computationally specified by that rotation is used for initializing the parameters of a new FA procedure called simplimax FA. The task at the H level can easily be performed using information criteria. The usefulness of the proposed method is demonstrated numerically.
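The basic L-level task — fitting FA with a prescribed set of loadings constrained to zero — can be sketched as projected gradient descent on a least-squares discrepancy. This is a simplification; the paper's simplimax-based procedure is more elaborate, and the toy matrix, zero pattern, and step size below are assumptions for illustration:

```python
import numpy as np

# Toy correlation matrix with an exact two-block structure (illustrative only).
L_true = np.zeros((6, 2))
L_true[:3, 0] = 0.9
L_true[3:, 1] = 0.9
S = L_true @ L_true.T + np.diag(1.0 - np.sum(L_true**2, axis=1))

# The zero-loading constraint of CFA: a 0/1 pattern fixing which loadings
# may be nonzero (here assumed known; the paper finds it computationally).
mask = (L_true != 0).astype(float)

rng = np.random.default_rng(1)
L = 0.1 * rng.standard_normal((6, 2)) * mask   # start inside the constraint set
Psi = np.full(6, 0.5)
step = 0.01
for _ in range(2000):
    R = S - L @ L.T - np.diag(Psi)                  # current residual
    L = (L + 4.0 * step * (R @ L)) * mask           # gradient step, re-impose zeros
    Psi = np.clip(np.diag(S - L @ L.T), 0.0, None)  # exact update of unique variances

cfa_err = np.linalg.norm(S - L @ L.T - np.diag(Psi))
print(np.round(L, 2), f"residual: {cfa_err:.4f}")
```

With the correct pattern the residual approaches zero; the H-level idea is then to compare such fits across different numbers of zero loadings with an information criterion.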
Sparsest factor analysis for clustering variables: a matrix decomposition approach
We propose a new procedure for sparse factor analysis (FA) such that each variable loads on only one common factor. Thus, the loading matrix has a single nonzero element in each row and zeros elsewhere. Such a loading matrix is the sparsest possible for a given number of variables and common factors. For this reason, the proposed method is named sparsest FA (SSFA). It may also be called FA-based variable clustering, since the variables loading the same common factor can be classified into a cluster. In SSFA, all model parts of FA (common factors, their correlations, loadings, unique factors, and unique variances) are treated as fixed unknown parameter matrices, and their least squares function is minimized through a specific data matrix decomposition. A useful feature of the algorithm is that the matrix of common factor scores is re-parameterized using QR decomposition in order to estimate factor correlations efficiently. A simulation study shows that the proposed procedure can exactly identify the true sparsest models. Real data examples demonstrate the usefulness of the variable clustering performed by SSFA.
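The clustering interpretation — each variable loading exactly one common factor — can be illustrated with a toy alternating scheme: given a cluster assignment, form one factor score per cluster, then reassign each variable to the factor it correlates with most strongly. This is a simplification for illustration, not the paper's matrix-decomposition algorithm; the data and update rules are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two blocks of three variables, each block driven by its own factor.
f = rng.standard_normal((300, 2))
X = np.hstack([f[:, [0]] @ np.ones((1, 3)), f[:, [1]] @ np.ones((1, 3))])
X += 0.3 * rng.standard_normal(X.shape)
X = (X - X.mean(0)) / X.std(0)

n_factors, p = 2, X.shape[1]
labels = np.arange(p) % n_factors            # deterministic initial clustering

for _ in range(20):
    # Factor scores: mean of the variables currently in each cluster.
    F = np.column_stack([X[:, labels == j].mean(axis=1) for j in range(n_factors)])
    # Reassign each variable to its best factor; the implied loading matrix
    # then has a single nonzero element per row (the sparsest pattern).
    R = np.abs(np.corrcoef(X, F, rowvar=False)[:p, p:])
    labels = R.argmax(axis=1)

print(labels)
```

On this well-separated toy example the scheme recovers the two variable blocks as two clusters.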
Destruction of mesoscopic chemically modulated domains in single phase high entropy alloy via plastic deformation
Chemically modulated mesoscopic domains in an fcc single-phase CrMnFeCoNi equi-atomic high entropy alloy (HEA) are detected by small angle diffraction performed at a synchrotron radiation facility, whereas the mesoscopic domains cannot be detected by conventional X-ray diffraction or by 2D mappings of energy dispersive X-ray spectroscopy in scanning electron microscopy and scanning transmission electron microscopy. The mesoscopic domains are deformed and shrunk, and finally destroyed, by plastic deformation, which is supported by comprehensive observations and measurements, including electrical resistivity, Vickers hardness, electron backscattering diffraction, and hard X-ray photoemission spectroscopy. The destruction of the mesoscopic domains causes a decrease in electrical resistivity with plastic deformation, the so-called K-effect, which is completely opposite to the normal trend in metals. We confirmed the presence and size of local chemical ordering or short-range-order domains in the single-phase HEA, and furthermore that Cr and Mn are involved in forming the domains.
Mammary Paget's Disease with Intraductal Spread: A Patient Report
A 49-year-old woman was diagnosed with mammary Paget's disease and underwent a modified mastectomy. Paget's cells were observed not only in the nipple epidermis and adjacent lactiferous ducts, but also in several branches of the lactiferous ducts in the deeper breast. In treating mammary Paget's disease, the possibility of intraductal spread should be kept in mind.
Capturing molecular structural dynamics by 100 ps time-resolved X-ray absorption spectroscopy
An experimental set-up for time-resolved X-ray absorption spectroscopy with 100 ps time resolution at beamline NW14A at the Photon Factory Advanced Ring is presented.
Sparse Exploratory Factor Analysis
Sparse principal component analysis has been a very active research area over the last decade. It produces component loadings with many zero entries, which facilitates their interpretation and helps avoid redundant variables. Classic factor analysis is another popular dimension reduction technique which shares similar interpretation problems and could greatly benefit from sparse solutions. Unfortunately, there are very few works considering sparse versions of classic factor analysis. Our goal is to contribute further in this direction. We revisit the most popular procedures for exploratory factor analysis: maximum likelihood and least squares. Sparse factor loadings are obtained for them by, first, adopting a special reparameterization and, second, introducing additional [Formula: see text]-norm penalties into the standard factor analysis problems. As a result, we propose sparse versions of the major factor analysis procedures. We illustrate the developed algorithms on well-known psychometric problems. Our sparse solutions are critically compared with those obtained by other existing methods.
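The penalty idea can be sketched with a proximal-gradient loop that soft-thresholds the loadings at each step. Assumptions for illustration: an ℓ1-type penalty, a least-squares discrepancy (the paper also treats the maximum likelihood criterion), and a toy correlation matrix, step size, and penalty weight chosen here, not taken from the paper:

```python
import numpy as np

def soft(A, t):
    """Elementwise soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

# Toy correlation matrix whose true loadings are sparse (two blocks).
L_true = np.zeros((6, 2))
L_true[:3, 0] = 0.9
L_true[3:, 1] = 0.9
S = L_true @ L_true.T + np.diag(1.0 - np.sum(L_true**2, axis=1))

rng = np.random.default_rng(2)
L = 0.1 * rng.standard_normal((6, 2))   # dense random start
Psi = np.full(6, 0.5)
lam, step = 0.05, 0.01
for _ in range(2000):
    R = S - L @ L.T - np.diag(Psi)                  # current residual
    L = soft(L + 4.0 * step * (R @ L), step * lam)  # gradient step + l1 prox
    Psi = np.clip(np.diag(S - L @ L.T), 0.0, None)  # exact update of uniquenesses

sfa_err = np.linalg.norm(S - L @ L.T - np.diag(Psi))
print(np.round(L, 2), f"residual: {sfa_err:.4f}")
```

The soft-thresholding step drives small cross-loadings toward exact zeros while the remaining loadings fit the common structure, which is the practical appeal of penalized sparse FA.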