
    A novel methodology to create generative statistical models of interconnects

    This paper addresses the problem of constructing a generative statistical model for an interconnect starting from a limited set of S-parameter samples, obtained by simulating or measuring the interconnect for a few random realizations of its stochastic physical properties. These original samples are first converted into a pole-residue representation with common poles. The corresponding residues are modeled as a correlated stochastic process by means of principal component analysis and kernel density estimation. The resulting model allows new samples to be generated with statistics similar to those of the original data. A passivity check is performed over the generated samples so that only passive data are retained. The proposed approach is applied to a representative coupled microstrip line example.
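    As a rough illustration of the residue-modelling step described above, the following Python sketch decorrelates a set of residue samples with PCA, fits a kernel density estimate to the principal-component scores, and resamples it to generate new realizations. The array shapes and component counts are illustrative assumptions, not values from the paper.

        import numpy as np
        from sklearn.decomposition import PCA
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(0)
        residues = rng.normal(size=(50, 8))   # 50 realizations of 8 common-pole residues (toy data)

        pca = PCA(n_components=4)             # keep the dominant components
        scores = pca.fit_transform(residues)

        kde = gaussian_kde(scores.T)          # joint density of the PC scores
        new_scores = kde.resample(200).T      # draw 200 new score vectors
        new_residues = pca.inverse_transform(new_scores)   # candidate residues; passivity still to be checked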

    Positive externalities of congestion, human capital, and socio-economic factors: A case study of chronic illness in Japan.

    This paper explores, using Japanese panel data for the years 1988-2002, how externalities from congestion and human capital influence deaths caused by chronic illnesses. Major findings from fixed effects 2SLS estimation were as follows: (1) the number of deaths was smaller in more densely populated areas, and this tendency was more distinct for males; (2) higher human capital correlated with a decreased number of deaths, with the effect being greater in females than in males. These findings suggest that human capital and positive externalities stemming from congestion contribute to improving lifestyle, which is affected differently by socio-economic circumstances in males and females.
    Keywords: population density, education, chronic illness
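    For readers unfamiliar with the estimation strategy, the sketch below shows one common way to set up a fixed effects 2SLS regression in Python: within-transforming the panel to absorb entity fixed effects, then instrumenting the endogenous regressor. The data, variable names, and instrument are entirely hypothetical; the paper's actual specification is not reproduced here.

        import numpy as np
        import pandas as pd
        from linearmodels.iv import IV2SLS

        rng = np.random.default_rng(0)
        n_pref, n_years = 47, 15
        df = pd.DataFrame({
            "prefecture": np.repeat(np.arange(n_pref), n_years),
            "instr": rng.normal(size=n_pref * n_years),   # hypothetical instrument
            "educ": rng.normal(size=n_pref * n_years),    # human capital proxy
        })
        df["density"] = 0.8 * df["instr"] + rng.normal(size=len(df))   # endogenous regressor
        df["deaths"] = -0.5 * df["density"] - 0.3 * df["educ"] + rng.normal(size=len(df))

        # Within-transform (subtract prefecture means) to absorb fixed effects,
        # then run 2SLS with density instrumented by instr.
        cols = ["deaths", "density", "educ", "instr"]
        dm = df.groupby("prefecture")[cols].transform(lambda s: s - s.mean())
        res = IV2SLS(dm["deaths"], dm[["educ"]], dm[["density"]], dm[["instr"]]).fit()
        print(res.params)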

    Linear model for fast background subtraction in oligonucleotide microarrays

    One important preprocessing step in the analysis of microarray data is background subtraction. In high-density oligonucleotide arrays this is recognized as a crucial step for the global performance of the data analysis from raw intensities to expression values. We propose here an algorithm for background estimation based on a model in which the cost function is quadratic in a set of fitting parameters, so that minimization can be performed through linear algebra. The model incorporates two effects: (1) correlated intensities between neighboring features on the chip and (2) sequence-dependent affinities for non-specific hybridization, fitted by an extended nearest-neighbor model. The algorithm has been tested on 360 GeneChips from publicly available data of recent expression experiments. The algorithm is fast and accurate. Strong correlations between the fitted values for different experiments, as well as between the free-energy parameters and their counterparts in aqueous solution, indicate that the model captures a significant part of the underlying physical chemistry.
    Comment: 21 pages, 5 figures
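    The computational point here is that a cost function quadratic in its parameters reduces to a single linear least-squares solve. The toy Python sketch below illustrates this, with a random design matrix standing in for the paper's neighbor-intensity and nearest-neighbor sequence features; it is not the authors' actual model.

        import numpy as np

        rng = np.random.default_rng(1)
        n_probes, n_params = 1000, 20
        X = rng.normal(size=(n_probes, n_params))          # stand-in feature matrix
        true_w = rng.normal(size=n_params)
        y = X @ true_w + 0.1 * rng.normal(size=n_probes)   # observed raw intensities (toy)

        # Minimizing the quadratic cost ||X w - y||^2 is a direct linear-algebra solve:
        w, *_ = np.linalg.lstsq(X, y, rcond=None)
        background = X @ w                                 # fitted background per probe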

    Comparing canopy density measurement from UAV and hemispherical photography: an evaluation for medium resolution of remote sensing-based mapping

    UAV and hemispherical photography are common methods for canopy density measurement. The two methods have opposite viewing angles: hemispherical photography measures canopy density upward, while a UAV captures images downward. This study analyzes and compares both methods as sources of input data for canopy density estimation when linked with lower spatial resolution remote sensing data, i.e. a Landsat image. We correlated the field data of canopy density with vegetation indices (NDVI, MSAVI, and AFRI) from Landsat-8. The canopy density values measured from UAV and hemispherical photography showed a strong relationship, with a correlation coefficient of 0.706. Further results showed that both measurements can be used in canopy density estimation from satellite imagery, based on their high correlations with Landsat-based vegetation indices. The highest correlations for the downward and upward measurements appeared when linked with NDVI, at 0.962 and 0.652, respectively. Downward measurement using UAV exhibited a stronger relationship than hemispherical photography. The strong correlation between UAV data and Landsat data arises because both are captured from the vertical direction, and a 30 m Landsat pixel is effectively a coarser, downscaled version of the aerial photograph. Moreover, field data collection can easily be conducted by deploying a drone to cover inaccessible sample plots.
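    As a minimal sketch of the correlation analysis described above, the Python snippet below computes NDVI from red and near-infrared reflectances (Landsat-8 bands 4 and 5) and correlates it with field-measured canopy density. The plot values are synthetic placeholders, not the study's data.

        import numpy as np

        rng = np.random.default_rng(2)
        canopy_density = rng.uniform(0.2, 0.9, 40)   # field plots, fraction of cover (toy)
        red = 0.25 - 0.15 * canopy_density + 0.01 * rng.normal(size=40)
        nir = 0.20 + 0.35 * canopy_density + 0.01 * rng.normal(size=40)

        ndvi = (nir - red) / (nir + red)             # Landsat-8: (B5 - B4) / (B5 + B4)
        r = np.corrcoef(canopy_density, ndvi)[0, 1]  # Pearson correlation coefficient
        print(f"r = {r:.3f}")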

    Estimation of high-resolution dust column density maps. Comparison of modified black-body fits and radiative transfer modelling

    Sub-millimetre dust emission is often used to derive the column density N of dense interstellar clouds. The observations consist of data at several wavelengths but of variable resolution. We examine two procedures that have been proposed for the estimation of high-resolution N maps. Method A uses a low-resolution temperature map combined with higher-resolution intensity data, while Method B combines N estimates from different wavelength ranges. Our aim is to determine the accuracy of the methods relative to the true column densities and to the estimates obtainable with radiative transfer modelling. We use magnetohydrodynamical (MHD) simulations and radiative transfer calculations to simulate sub-millimetre observations at the wavelengths of the Herschel Space Observatory. The observations are analysed with the two methods and the results compared to the true values and to the results from radiative transfer modelling of the observations. Both Methods A and B give relatively reliable column density estimates at the resolution of the 250 um data while also making use of the longer wavelengths. For high signal-to-noise data, the results of Method B are better correlated with the true column density, while Method A is less sensitive to noise. When the cloud has internal heating, the results of Method B are consistent with those that would be obtained with high-resolution data. Because of line-of-sight temperature variations, these underestimate the true column density and, because of a favourable cancellation of errors, Method A can sometimes give more correct values. Radiative transfer modelling, even with very simple 3D cloud models, can provide better results. However, the complexity of the models required for improvement increases rapidly with the complexity and opacity of the clouds.
    Comment: 14 pages, accepted to A&A
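    Method A, as summarized above, amounts to inverting a modified black-body relation, N ∝ I_ν / (B_ν(T) κ_ν), using a low-resolution temperature map together with higher-resolution intensities. The Python sketch below shows that inversion for a single 250 um band; the opacity law, map values, and normalization are illustrative assumptions only.

        import numpy as np

        h, c, k_B = 6.626e-34, 2.998e8, 1.381e-23    # SI constants

        def planck(nu, T):
            """Planck function B_nu(T) in SI units."""
            return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))

        nu = c / 250e-6                    # frequency of the 250 um band
        kappa = 0.1 * (nu / 1e12) ** 2.0   # assumed dust opacity law (beta = 2)

        I_250 = np.full((64, 64), 1e-17)   # high-resolution intensity map (toy values, SI)
        T_low = np.full((64, 64), 15.0)    # low-resolution T map regridded to the 250 um grid

        N = I_250 / (planck(nu, T_low) * kappa)   # column density map, up to a mass normalization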

    Nonlinear versus linearised model on stand density model fitting and stand density index calculation: analysis of coefficients estimation via simulation

    The stand density index, one of the most important metrics for managing site occupancy, is generally calculated from empirical data by means of a coefficient derived from the "self-thinning rule" or stand density model. I undertook an exploratory analysis of model fitting based on simulated data. I discuss the use of the logarithmic transformation (i.e., linearisation) of the relationship between the total number of trees per hectare (N) and the quadratic mean diameter of the stand (QMD). I compare the classic method used by Reineke (J Agric Res 46:627-638, 1933), i.e., linear OLS model fitting after logarithmic transformation of the data, with the "pure" power-law model, which represents the native mathematical structure of this relationship. I evaluated the results according to the correlation between N and QMD. Linear OLS and nonlinear fitting agreed in the estimation of coefficients only for highly correlated (between −1 and −0.85) or poorly correlated (> −0.39) datasets. At average correlation values (i.e., between −0.75 and −0.4), the differences were likely to be relevant for practical applications, especially concerning the key coefficient for Reineke's stand density index calculation; this introduced a non-negligible bias into the SDI calculation. The linearised log-log model always estimated a lower slope term than the exponent of the nonlinear function, except at the extremes of the correlation range. While the logarithmic transformation is mathematically correct and always equivalent to a nonlinear model for prediction of the dependent variable, the difference detected in my study between the two methods (i.e., in coefficient estimation) was directly related to the correlation between N and QMD in each simulated/disturbed dataset. In general, given the power law as the "natural" structure of the N versus QMD relationship, the nonlinear model is preferred, with a logarithmic transformation used only in the case of violation of parametric assumptions (e.g., non-normally distributed data).
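    The contrast between the two fitting strategies can be reproduced in a few lines. The sketch below fits synthetic self-thinning data both ways: log-log OLS (Reineke's approach) and a direct nonlinear fit of the power law N = k * QMD^b. The data-generating values, including the classic −1.605 exponent, are used here only for illustration.

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(3)
        qmd = rng.uniform(10, 40, 100)                                  # quadratic mean diameter (cm)
        n_ha = 3e5 * qmd**-1.605 * np.exp(0.2 * rng.normal(size=100))   # trees/ha with lognormal noise

        # (1) Linearised fit: log N = log k + b log QMD
        b_lin, logk = np.polyfit(np.log(qmd), np.log(n_ha), 1)

        # (2) Nonlinear fit of the native power law N = k * QMD^b
        def power(d, k, b):
            return k * d**b

        (k_nl, b_nl), _ = curve_fit(power, qmd, n_ha, p0=(3e5, -1.6))
        print(f"slope (log-log OLS): {b_lin:.3f}   exponent (nonlinear): {b_nl:.3f}")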

    Modelling International Bond Markets with Affine Term Structure Models

    This paper investigates the performance of international affine term structure models (ATSMs) that are driven by a mutual set of global state variables. We discuss which mixture of Gaussian and square-root processes is best suited for modelling international bond markets. We derive necessary conditions for the correlation and volatility structure of mixture models to accommodate various empirical stylized facts, such as the forward premium puzzle and differently shaped yield curves. Using UK-US data we estimate international ATSMs, taking into account the joint transition density of yields and exchange rates without assuming normality. We find strong empirical evidence for negatively correlated global factors in international bond markets. Further, the empirical results do not support the existence of local factors in the UK-US setting, suggesting that diversification benefits from holding currency-hedged bond portfolios in these markets are likely to be small. Altogether, we find that mixture models greatly enhance the performance of ATSMs.
    Keywords: International affine term structure models, Estimation, Exchange rate, Model Selection
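    To make the "mixture of Gaussian and square-root processes" concrete, the toy simulation below drives a short rate with one Gaussian (Vasicek-type) factor and one square-root (CIR-type) factor under a simple Euler scheme. All parameter values are illustrative, not estimates from the paper.

        import numpy as np

        rng = np.random.default_rng(4)
        dt, n = 1 / 252, 2520                 # daily steps over ten years

        x = np.zeros(n)                       # Gaussian (Vasicek-type) factor
        v = np.full(n, 0.02)                  # square-root (CIR-type) factor
        for t in range(1, n):
            x[t] = x[t-1] + 0.5 * (0.0 - x[t-1]) * dt + 0.01 * np.sqrt(dt) * rng.normal()
            v[t] = (v[t-1] + 0.3 * (0.02 - v[t-1]) * dt
                    + 0.05 * np.sqrt(max(v[t-1], 0.0) * dt) * rng.normal())

        r = 0.01 + x + v                      # affine short rate: r_t = delta_0 + x_t + v_t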