
    Weighted Majorization Algorithms for Weighted Least Squares Decomposition Models

    For many least-squares decomposition models, efficient algorithms are well known. A more difficult problem arises in decomposition models where each residual is weighted by a nonnegative value. A special case is principal components analysis with missing data. Kiers (1997) discusses an algorithm for minimizing weighted decomposition models by iterative majorization. In this paper, we propose a weighted majorization algorithm for computing a solution. We show that the algorithm by Kiers is a special case of our algorithm. We apply weighted majorization to weighted principal components analysis, robust Procrustes analysis, and logistic bi-additive models, of which the two-parameter logistic model in item response theory is a special case. Simulation studies show that weighted majorization is generally faster than the method by Kiers by a factor of one to four and obtains solutions of the same or better quality. For logistic bi-additive models, we propose a new iterative majorization algorithm called logistic majorization.
    Keywords: iterative majorization; IRT; logistic bi-additive model; robust Procrustes analysis; weighted principal component analysis; two-parameter logistic model
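
    The missing-data special case gives a feel for how such a majorization step works. The sketch below is only an illustration of weighted PCA with 0/1 weights (observed/missing), not the paper's general weighted-majorization algorithm, and the function name is invented: each iteration fills the missing cells with the current low-rank reconstruction and then takes an ordinary truncated SVD, which majorizes the weighted least-squares loss.

```python
import numpy as np

def weighted_pca_majorization(X, W, rank=2, n_iter=200, tol=1e-8):
    """Minimize sum_ij W_ij * (X_ij - Xhat_ij)^2 over rank-r matrices Xhat,
    for 0/1 weights W (1 = observed, 0 = missing), by iterative majorization."""
    X = np.nan_to_num(np.asarray(X, dtype=float))    # placeholders in missing cells
    Xhat = W * X                                     # start with missing cells at 0
    prev_loss = np.inf
    for _ in range(n_iter):
        Z = W * X + (1.0 - W) * Xhat                 # complete the data with the current fit
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        Xhat = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # best rank-r approximation of Z
        loss = np.sum(W * (X - Xhat) ** 2)           # weighted loss decreases monotonically
        if prev_loss - loss < tol:
            break
        prev_loss = loss
    return Xhat

# toy usage: low-rank data with roughly 30% of the entries treated as missing
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 8))
W = (rng.random(X.shape) > 0.3).astype(float)
Xhat = weighted_pca_majorization(X, W, rank=2)
```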

    Sparsest factor analysis for clustering variables: a matrix decomposition approach

    We propose a new procedure for sparse factor analysis (FA) in which each variable loads on only one common factor. The loading matrix therefore has a single nonzero element in each row and zeros elsewhere; such a loading matrix is the sparsest possible for a given number of variables and common factors. For this reason, the proposed method is named sparsest FA (SSFA). It may also be called FA-based variable clustering, since the variables loading the same common factor can be classified into a cluster. In SSFA, all parts of the FA model (common factors, their correlations, loadings, unique factors, and unique variances) are treated as fixed unknown parameter matrices, and the least squares function of these matrices is minimized through a specific data matrix decomposition. A useful feature of the algorithm is that the matrix of common factor scores is re-parameterized using a QR decomposition in order to estimate factor correlations efficiently. A simulation study shows that the proposed procedure can exactly identify the true sparsest models. Real data examples demonstrate the usefulness of the variable clustering performed by SSFA.
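
    For intuition, the single-nonzero-per-row constraint can be approached with a simple alternating least squares scheme: fix an assignment of variables to factors, fit loadings and factor scores, then reassign each variable to the factor that fits it best. The sketch below is only that simplification (no unique factors, no factor correlations, no QR re-parameterization), not the SSFA algorithm itself, and the function name is invented.

```python
import numpy as np

def sparsest_fa_sketch(X, n_factors=3, n_iter=50, seed=0):
    """Fit X ~ F @ L.T where each row of L has exactly one nonzero entry,
    so every variable loads a single factor (i.e. belongs to one cluster)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    assign = rng.integers(0, n_factors, size=p)          # factor index per variable
    F = rng.standard_normal((n, n_factors))              # factor scores
    for _ in range(n_iter):
        # loadings: one regression coefficient per variable, on its assigned factor
        L = np.zeros((p, n_factors))
        for j in range(p):
            f = F[:, assign[j]]
            L[j, assign[j]] = (f @ X[:, j]) / (f @ f + 1e-12)
        # factor scores: ordinary least squares given the sparse loadings
        F = X @ L @ np.linalg.pinv(L.T @ L)
        # reassignment: each variable picks the factor giving the smallest residual
        b = (F.T @ X) / (np.sum(F * F, axis=0)[:, None] + 1e-12)   # (k, p) coefficients
        resid = ((X[:, None, :] - F[:, :, None] * b[None, :, :]) ** 2).sum(axis=0)
        assign = resid.argmin(axis=0)
    return assign, L, F

# toy usage: six variables generated from two factors should form two clusters
rng = np.random.default_rng(1)
scores = rng.standard_normal((100, 2))
X = scores[:, [0, 0, 0, 1, 1, 1]] + 0.1 * rng.standard_normal((100, 6))
assign, L, F = sparsest_fa_sketch(X, n_factors=2)
```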

    Voigt-Profile Analysis of the Lyman-alpha Forest in a Cold Dark Matter Universe

    We use an automated Voigt-profile fitting procedure to extract statistical properties of the Lyα forest in a numerical simulation of an Ω=1, cold dark matter (CDM) universe. Our analysis method is similar to that used in most observational studies of the forest, and we compare the simulations to recently published results derived from Keck HIRES spectra. With the Voigt-profile decomposition analysis, the simulation reproduces the large number of weak lines (N_HI ≲ 10^13 cm^-2) found in the HIRES spectra. The column density distribution evolves significantly between z=3 and z=2, with the number of lines at fixed column density dropping by a factor of ~1.6 in the range where line blending is not severe. At z=3, the b-parameter distribution has a median of 35 km/s and a dispersion of 20 km/s, in reasonable agreement with the observed values. The comparison between our new analysis and recent data strengthens earlier claims that the Lyα forest arises naturally in hierarchical structure formation as photoionized gas falls into dark matter potential wells. However, there are two statistically significant discrepancies between the simulated forest and the HIRES results: the model produces too many lines at z=3 by a factor of ~1.5-2, and it produces more narrow lines (b < 20 km/s) than are seen in the data. The first result is sensitive to our adopted normalization of the mean Lyα optical depth, and the second is sensitive to our assumption that helium reionization has not significantly raised gas temperatures at z=3. It is therefore too early to say whether these discrepancies indicate a fundamental problem with the high-redshift structure of the Ω=1 CDM model or reflect errors of detail in our modeling of the gas distribution or the observational procedure.
    Comment: 13 pages, 3 figures, AAS LaTeX, accepted to Ap
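
    For reference, the Voigt profile used in this kind of line fitting is a Gaussian convolved with a Lorentzian and is conveniently evaluated through the Faddeeva function. The snippet below fits a single absorption component to a continuum-normalized spectrum; it is a generic illustration with arbitrary parameter values, not the automated multi-component fitter used in the paper.

```python
import numpy as np
from scipy.special import wofz
from scipy.optimize import curve_fit

def voigt(x, amp, center, sigma, gamma):
    """Voigt profile: Gaussian (std sigma) convolved with Lorentzian (HWHM gamma),
    evaluated via the real part of the Faddeeva function."""
    z = ((x - center) + 1j * gamma) / (sigma * np.sqrt(2.0))
    return amp * np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

def absorption_line(x, amp, center, sigma, gamma):
    """Continuum-normalized flux with a single Voigt absorption component."""
    return np.exp(-voigt(x, amp, center, sigma, gamma))

# toy usage: generate a noisy absorption line and recover its parameters
rng = np.random.default_rng(2)
v = np.linspace(-150.0, 150.0, 400)                 # velocity axis, km/s
flux = absorption_line(v, 40.0, 5.0, 15.0, 4.0) + 0.02 * rng.standard_normal(v.size)
popt, _ = curve_fit(absorption_line, v, flux, p0=[10.0, 0.0, 10.0, 1.0])
```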

    CDOs and systematic risk : why bond ratings are inadequate

    This paper analyzes the risk properties of typical asset-backed securities (ABS), such as CDOs or MBS, relying on a model with both macroeconomic and idiosyncratic components. The examined properties include expected loss, loss given default, and macro factor dependencies. Using a two-dimensional loss decomposition as a new metric, the risk properties of individual ABS tranches can be compared directly with those of corporate bonds, within and across rating classes. Applying Monte Carlo simulation, we find that the risk properties of ABS differ significantly and systematically from those of straight bonds with the same rating. In particular, loss given default, the sensitivities to macroeconomic risk, and model risk differ greatly between the two kinds of instrument. Our findings have implications for understanding the credit crisis and for policy making. On an economic level, our analysis suggests a new explanation for the observed rating inflation in structured finance markets during the pre-crisis period 2004-2007. On a policy level, our findings call for an end to the 'one-size-fits-all' approach to rating fixed income instruments, and for a dedicated rating methodology for structured finance instruments.
    JEL Classification: G21, G2
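
    The flavour of such a simulation can be conveyed with a textbook one-factor Gaussian model; this is only an illustration of the approach, not the authors' specification, and all parameter values below are invented. A single macro factor drives correlated defaults, pool losses are passed through the tranche's attachment and detachment points, and quantities such as expected loss, loss given a tranche hit, and the dependence on the macro factor can then be read off the simulated losses.

```python
import numpy as np
from scipy.stats import norm

def tranche_losses(pd=0.02, rho=0.3, lgd=0.6, n_names=100,
                   attach=0.03, detach=0.07, n_sims=50_000, seed=3):
    """Monte Carlo losses of one CDO tranche under a one-factor Gaussian model."""
    rng = np.random.default_rng(seed)
    barrier = norm.ppf(pd)                                  # default threshold
    M = rng.standard_normal((n_sims, 1))                    # macro factor
    eps = rng.standard_normal((n_sims, n_names))            # idiosyncratic shocks
    assets = np.sqrt(rho) * M + np.sqrt(1.0 - rho) * eps
    pool_loss = lgd * (assets < barrier).mean(axis=1)       # loss as fraction of pool notional
    tranche = np.clip(pool_loss - attach, 0.0, detach - attach) / (detach - attach)
    return tranche, M.ravel()

tranche, M = tranche_losses()
print("expected tranche loss:   ", tranche.mean())
print("loss given tranche hit:  ", tranche[tranche > 0].mean())
print("corr(loss, macro factor):", np.corrcoef(tranche, -M)[0, 1])
```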

    Assessing input variables in FDS to model numerically solid-phase pyrolysis of cardboard

    Understanding a material's fire behaviour requires knowledge of its thermal decomposition processes. Thermal analysis techniques are widely employed to study these processes, especially to estimate kinetic and thermal properties. Cardboard boxes are widely used as rack-storage commodities in industrial buildings; the characterization of cardboard is therefore considered a key factor for fire safety engineering (FSE), because it enables the determination of its thermal behaviour at high temperatures. Mathematical and computational models of thermal decomposition processes are commonly used in FSE. The Fire Dynamics Simulator (FDS) is one of the computational fluid dynamics codes most commonly used in FSE for this purpose. To set up FDS properly and obtain accurate results, the numerical values of the thermal and kinetic properties are needed as input data. Owing to the large number of variables to be determined, a preliminary study that assesses the influence of each variable on the pyrolysis model, discarding or constraining the less influential ones, is helpful. This study, based on the Monte Carlo method, presents a sensitivity analysis for the variables used as input data by the FDS software. The results show that the conversion factor α, i.e. the mass involved in each reaction, and the kinetic triplet have a major impact on the reproduction of the thermal decomposition process in fire computer modelling.
    The authors would like to thank the Consejo de Seguridad Nuclear for its cooperation and co-financing of the project "Simulation of fires in nuclear power plants", and the CAFESTO project funded by FEDER / Ministerio de Ciencia, Innovación y Universidades – Agencia Estatal de Investigación / Proyecto RTC2017-6066-8.
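
    As a sketch of this kind of Monte Carlo screening (not the authors' FDS setup; the single-step Arrhenius model and all parameter ranges below are assumptions), one can sample the kinetic triplet, integrate a simple mass-loss curve under a constant heating rate, and correlate each sampled input with an output of interest such as the temperature of the peak mass-loss rate.

```python
import numpy as np

def mass_loss_curve(logA, E, n, beta=10.0 / 60.0, T0=300.0, T1=900.0, steps=3000):
    """One-step Arrhenius conversion, d(alpha)/dT = (A/beta) exp(-E/RT) (1-alpha)^n,
    integrated with explicit Euler over a constant heating rate beta (K/s)."""
    R = 8.314
    T = np.linspace(T0, T1, steps)
    dT = T[1] - T[0]
    alpha = np.zeros(steps)
    for i in range(steps - 1):
        rate = (10.0 ** logA / beta) * np.exp(-E / (R * T[i])) * (1.0 - alpha[i]) ** n
        alpha[i + 1] = min(alpha[i] + rate * dT, 1.0)
    return T, alpha

# Monte Carlo screening: sample the kinetic triplet (illustrative ranges) and
# rank inputs by correlation with the temperature of the peak mass-loss rate.
rng = np.random.default_rng(4)
samples = {"logA": rng.uniform(6.0, 10.0, 200),       # log10 pre-exponential factor, 1/s
           "E":    rng.uniform(1.0e5, 2.0e5, 200),    # activation energy, J/mol
           "n":    rng.uniform(0.5, 2.0, 200)}        # reaction order
T_peak = []
for la, e, nn in zip(samples["logA"], samples["E"], samples["n"]):
    T, alpha = mass_loss_curve(la, e, nn)
    T_peak.append(T[np.argmax(np.diff(alpha))])
for name, vals in samples.items():
    print(name, "corr with peak-rate temperature:",
          round(float(np.corrcoef(vals, T_peak)[0, 1]), 2))
```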

    Investigating modularity in the analysis of process algebra models of biochemical systems

    Compositionality is a key feature of process algebras and is often cited as one of their advantages as a modelling technique. It is certainly true that in biochemical systems, as in many other systems, model construction is made easier in a formalism that allows the problem to be tackled compositionally. In this paper we consider the extent to which the compositional structure inherent in process algebra models of biochemical systems can be exploited during model solution. In essence this means using the compositional structure to guide decomposed solution and analysis. Unfortunately, the dynamic behaviour of biochemical systems exhibits strong interdependencies between the components of the model, making decomposed solution a difficult task. Nevertheless, we believe that if such decomposition based on process algebras could be established, it would offer substantial benefits for systems biology modelling. In this paper we present our preliminary investigations based on a case study of the pheromone pathway in yeast, modelled in the stochastic process algebra Bio-PEPA.
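
    To make the coupling problem concrete, the toy stochastic simulation below uses a plain Gillespie algorithm rather than Bio-PEPA, with invented species and rate constants: two "modules" interact only through a shared species S, yet neither module's marginal behaviour can be solved in isolation because its reaction propensities depend on the other module's state.

```python
import numpy as np

def gillespie(x0, stoich, rate_fn, t_max, seed=5):
    """Generic Gillespie SSA: x0 initial counts, stoich[r] the state change of
    reaction r, rate_fn(x) the propensities; simulate until time t_max."""
    rng = np.random.default_rng(seed)
    x, t = np.array(x0, dtype=float), 0.0
    times, states = [0.0], [x.copy()]
    while t < t_max:
        a = rate_fn(x)
        a_total = a.sum()
        if a_total <= 0.0:
            break
        t += rng.exponential(1.0 / a_total)
        r = rng.choice(len(a), p=a / a_total)
        x = x + stoich[r]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# species: [P, S, R, Rstar]; module A produces S from P, module B uses S to activate R
stoich = np.array([[0, +1, 0, 0],     # module A:  P -> P + S
                   [0, -1, -1, +1],   # module B:  S + R -> Rstar
                   [0, 0, +1, -1]])   # module B:  Rstar -> R
def rates(x):
    return np.array([0.5 * x[0], 0.002 * x[1] * x[2], 0.1 * x[3]])

times, states = gillespie([1, 0, 100, 0], stoich, rates, t_max=200.0)
```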