
    Growth Accounting for Some Selected Developing, Newly Industrialized and Developed Nations from 1966-2000: A Data Envelopment Analysis

    We compute technical efficiency levels for 29 countries, comprising selected South Asian, East Asian and EU countries, using data envelopment analysis. Luxembourg has an efficiency score of one (most efficient) in all years. The Netherlands also has a score of one in 1966, 1971, 1976 and 1981, and Japan, the UK, Belgium, Ireland, Indonesia, Spain and Germany each have a score of one in at least one of the years from 1966 to 2000. In the year 2000, mean efficiency levels (without including life expectancy as an input) of the South Asian countries are higher than those of the European Union and East Asian countries. Japan has the highest average efficiency in the East Asian region over 1966-2000, followed by Hong Kong. We also decompose labor productivity growth into components attributable to technological change (shifts in the overall production frontier), technological catch-up or efficiency change (movement towards or away from the frontier), capital accumulation (movement along the frontier) and human capital accumulation (proxied by life expectancy). The overall production frontier is constructed using deterministic methods that require no specification of a functional form for the technology and no assumptions about market structure or the absence of market imperfections. The growth accounting results suggest that for the East Asian and South Asian countries efficiency change (technological catch-up) has contributed the most, while for the European countries technical change has contributed more to labor productivity change between 1966 and 2000. We also analyze the evolution of the cross-country distribution for the 29 countries in our sample using kernel densities. Factors beyond those included in the growth accounting exercise, such as trade openness, quality of government, population growth rate, savings rate, corruption perception indices, rule-of-law indices, social capital and trust variables, and the formal and informal rules governing a society, may be responsible for productivity on a point-to-point basis. For all seven periods (on a point-to-point basis) we see a major role played by technological change and efficiency change together in accounting for the current-period counterfactual distributions and for the bimodal distribution in the year 2000; for the period 1966-2000 as a whole (not point-to-point, an exercise similar to Kumar and Russell (2002)) we find that technical change and its combination with the other tripartite and quadripartite components jointly account for the bimodal distribution in 2000. However, from this growth accounting exercise we do find statistical convergence of efficiency change and human capital accumulation across the EU, South Asian and East Asian countries.

    Keywords: data envelopment analysis, growth accounting, technical efficiency, efficiency change, technological change, capital accumulation, human capital accumulation, kernel smoothing, cross-country labor productivity distribution, counterfactual distributions
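
    A minimal sketch of the efficiency computation underlying such an exercise: an output-oriented, constant-returns-to-scale DEA score obtained by linear programming. The data, the choice of inputs (capital, labor) and output (GDP), and the CRS assumption are illustrative stand-ins, not the study's actual specification.

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, y, o):
    """Output-oriented CRS DEA score for unit o.

    X : (n, m) array of inputs (e.g. capital, labor) for n countries.
    y : (n,) array of the single output (e.g. GDP).
    Returns a score in (0, 1]; 1 means the unit lies on the frontier.
    """
    n, m = X.shape
    # Decision variables: z = [theta, lambda_1, ..., lambda_n].
    # Maximize theta  ->  minimize -theta.
    c = np.zeros(n + 1)
    c[0] = -1.0
    # Output constraint: theta * y_o - sum_j lambda_j y_j <= 0.
    A_out = np.concatenate(([y[o]], -y))
    # Input constraints: sum_j lambda_j x_ji <= x_oi for each input i.
    A_in = np.hstack([np.zeros((m, 1)), X.T])
    A_ub = np.vstack([A_out, A_in])
    b_ub = np.concatenate(([0.0], X[o]))
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return 1.0 / res.x[0]  # Farrell output efficiency

# Hypothetical data: 4 countries, inputs = (capital, labor), output = GDP.
X = np.array([[100., 50.], [120., 60.], [80., 70.], [90., 40.]])
y = np.array([200., 210., 150., 190.])
print(np.round([dea_efficiency(X, y, o) for o in range(len(y))], 3))
```

    A frontier unit gets a score of one, exactly as Luxembourg does in every year of the study; decomposing productivity growth then amounts to re-solving such programs against frontiers from different years.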

    Contingent Kernel Density Estimation

    Kernel density estimation is a widely used method for estimating a distribution based on a sample of points drawn from that distribution. In practice, some form of error generally contaminates the sample of observed points; such error can result from imprecise measurements or observation bias. Often this error is negligible and may be disregarded in analysis. In cases where the error is non-negligible, estimation methods should be adjusted to reduce the resulting bias. Several modifications of kernel density estimation have been developed to address specific forms of error. One form of error that has not yet been addressed is the case where observations are nominally placed at the centers of areas from which the points are assumed to have been drawn, where these areas are of varying sizes. In this scenario, bias arises because the size of the error can vary among points: some subset of points may be known to have smaller error than another, or the form of the error may change among points. This paper proposes a “contingent kernel density estimation” technique to address this form of error. The new technique adjusts the standard kernel on a point-by-point basis in adaptive response to the changing structure and magnitude of the error. We derive equations for the contingent kernel technique, validate the technique using numerical simulations, and work through an example using the geographic locations of social networking users to demonstrate the utility of the method.
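
    The paper's derived estimator equations are not reproduced in the abstract, so the following is only a plausible sketch of the general idea: a Gaussian KDE whose bandwidth is inflated point by point according to each observation's known error, here assumed to scale with the radius of the area the point represents. The inflation rule is a hypothetical stand-in for the paper's contingent kernels.

```python
import numpy as np

def contingent_kde(x_grid, centers, radii, base_bw=0.1):
    """Gaussian KDE with a per-point bandwidth.

    Each observation sits at the center of an area of known radius;
    larger areas mean larger positional error, so the kernel placed
    on that point is widened accordingly.
    """
    centers = np.asarray(centers, float)
    # Per-point bandwidth: base smoothing inflated by the point's error.
    # (This quadrature rule is an illustrative assumption.)
    bw = np.sqrt(base_bw**2 + np.asarray(radii, float)**2)
    # Sum one Gaussian kernel per observation, each with its own width.
    diffs = (x_grid[:, None] - centers[None, :]) / bw[None, :]
    kernels = np.exp(-0.5 * diffs**2) / (bw[None, :] * np.sqrt(2 * np.pi))
    return kernels.mean(axis=1)

# Hypothetical 1-D example: three points with different area radii.
grid = np.linspace(-2, 4, 400)
density = contingent_kde(grid, centers=[0.0, 1.0, 2.5],
                         radii=[0.05, 0.3, 0.8])
print(float(np.sum(density) * (grid[1] - grid[0])))  # ~1.0
```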

    Wage Dispersion in a Partially Unionized Labor Force

    Taking as our point of departure a model proposed by David Card (2001), we suggest new methods for analyzing wage dispersion in a partially unionized labor market. Card's method disaggregates the labor population into skill categories, a procedure that entails some loss of information. Accordingly, we develop a model in which each worker is individually assigned a union-membership probability and predicted union and nonunion wages. The model yields a natural three-way decomposition of variance. The decomposition permits counterfactual analysis, using concepts and techniques from the theory of factorial experimental design. We examine causes of the increase in U.K. wage dispersion between 1983 and 1995. Of the factors initially considered, the most influential was a change in the structure of remuneration inside both the union and nonunion sectors. Next in importance was the decrease in union membership. Finally, exogenous changes in labor force characteristics had, for most groups considered, only a small negative effect. We supplement this preliminary three-factorial analysis with a five-factorial analysis that allows us to examine effects of the wage-equation parameters in greater detail.

    Keywords: wage dispersion, three-way variance decomposition, bivariate kernel density smoothing, union membership, deunionization, factorial experimental design
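
    As a rough illustration of the model's structure (not the authors' estimator), suppose each worker i carries a union-membership probability p_i, predicted wages wu_i and wn_i, and an idiosyncratic residual variance. The law of total variance then yields a three-way split into dispersion of expected wages, a union-status mixing term, and residual noise:

```python
import numpy as np

def three_way_decomposition(p, w_union, w_nonunion, resid_var):
    """Decompose wage variance in a partially unionized workforce.

    Worker i earns w_union[i] with probability p[i] and w_nonunion[i]
    otherwise, plus an independent residual with variance resid_var[i].
    (A simplified reading of the model, not the paper's exact estimator.)
    """
    p, wu, wn = map(np.asarray, (p, w_union, w_nonunion))
    m = p * wu + (1 - p) * wn                        # expected wage per worker
    between = np.var(m)                              # dispersion of expected wages
    mixing = np.mean(p * (1 - p) * (wu - wn) ** 2)   # union-status lottery
    residual = np.mean(resid_var)                    # within-worker noise
    return between, mixing, residual

# Hypothetical data for five workers.
p = np.array([0.9, 0.7, 0.5, 0.2, 0.1])
wu = np.array([12.0, 11.5, 11.0, 10.5, 10.0])   # predicted union wage
wn = np.array([10.0, 10.2, 10.5, 10.8, 11.0])   # predicted nonunion wage
rv = np.full(5, 0.4)
b, m, r = three_way_decomposition(p, wu, wn, rv)
print(f"between={b:.3f}  mixing={m:.3f}  residual={r:.3f}  total={b+m+r:.3f}")
```

    Counterfactuals of the kind the paper studies amount to swapping one ingredient at a time (say, the 1983 membership probabilities with the 1995 wage structure) and recomputing the split.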

    Detection and localization of multiple rate changes in Poisson spike trains: poster presentation from the Twentieth Annual Computational Neuroscience Meeting, CNS*2011, Stockholm, Sweden, 23-28 July 2011

    Poster presentation from the Twentieth Annual Computational Neuroscience Meeting: CNS*2011, Stockholm, Sweden, 23-28 July 2011. In statistical spike train analysis, stochastic point process models usually assume stationarity, in particular that the underlying spike train shows a constant firing rate (e.g. [1]). However, such models can lead to misinterpretation of the associated tests if the assumption of rate stationarity is not met (e.g. [2]). Therefore, the analysis of nonstationary data requires that rate changes can be located as precisely as possible. Present statistical methods, however, focus on rejecting the null hypothesis of stationarity without explicitly locating the change point(s) (e.g. [3]). We propose a test for stationarity of a given spike train that can also be used to estimate the change points in the firing rate. Assuming a Poisson process with piecewise constant firing rate, we propose a Step-Filter-Test (SFT) which can work simultaneously on different time scales, accounting for the high variety of firing patterns in experimental spike trains. Formally, we compare the numbers N1 = N1(t,h) and N2 = N2(t,h) of spikes in the time intervals (t-h, t] and (t, t+h]. By varying t within a fine time lattice and simultaneously varying the interval length h, we obtain a multivariate statistic D(h,t) := (N1 - N2)/√(N1 + N2), for which we prove asymptotic multivariate normality under homogeneity. From this, a practical graphical device to spot changes in the firing rate is constructed. Our graphical representation of D(h,t) (Figure 1A) visualizes the changes in the firing rate. For the statistical test, a threshold K is chosen such that under homogeneity |D(h,t)| < K holds for all investigated h and t with probability 0.95. This threshold can indicate potential change points in order to estimate the inhomogeneous rate profile (Figure 1B). The SFT is applied to a sample data set of spontaneous single unit activity recorded from the substantia nigra of anesthetized mice. In this data set, multiple rate changes are identified which agree closely with visual inspection. In contrast to approaches that choose one fixed kernel width [4], our method has the advantage of a flexible choice of h.
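
    The scan statistic itself is easy to sketch. Below, D(h, t) is computed over a time lattice for several window widths h on a simulated spike train with one rate change; the simulation parameters are illustrative, and the calibration of the 95% threshold K is omitted.

```python
import numpy as np

def step_filter_statistic(spikes, t_grid, h):
    """D(h, t) = (N1 - N2) / sqrt(N1 + N2) on a lattice of times t.

    N1 counts spikes in (t-h, t], N2 counts spikes in (t, t+h].
    A large |D| suggests a rate change near t at scale h.
    """
    spikes = np.sort(np.asarray(spikes, float))
    n1 = np.array([np.sum((spikes > t - h) & (spikes <= t)) for t in t_grid])
    n2 = np.array([np.sum((spikes > t) & (spikes <= t + h)) for t in t_grid])
    total = n1 + n2
    return np.where(total > 0, (n1 - n2) / np.sqrt(np.maximum(total, 1)), 0.0)

# Hypothetical spike train: 5 Hz for 10 s, then 20 Hz for 10 s.
rng = np.random.default_rng(0)
seg1 = np.cumsum(rng.exponential(1 / 5, 100));   seg1 = seg1[seg1 < 10]
seg2 = 10 + np.cumsum(rng.exponential(1 / 20, 400)); seg2 = seg2[seg2 < 20]
spikes = np.concatenate([seg1, seg2])
t_grid = np.linspace(1, 19, 181)
for h in (0.5, 1.0, 2.0):          # several time scales, as in the SFT
    d = step_filter_statistic(spikes, t_grid, h)
    print(f"h={h}: max |D| = {np.abs(d).max():.2f} "
          f"at t = {t_grid[np.abs(d).argmax()]:.1f} s")
```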

    Consistent change-point detection with kernels

    In this paper we study the kernel change-point algorithm (KCP) proposed by Arlot, Celisse and Harchaoui (2012), which aims at locating an unknown number of change-points in the distribution of a sequence of independent data taking values in an arbitrary set. The change-points are selected by model selection with a penalized kernel empirical criterion. We provide a non-asymptotic result showing that, with high probability, the KCP procedure retrieves the correct number of change-points, provided that the constant in the penalty is well chosen; in addition, KCP estimates the change-point locations at the optimal rate. As a consequence, when using a characteristic kernel, KCP detects all kinds of change in the distribution (not only changes in the mean or the variance), and it is able to do so for complex structured data (not necessarily in $\mathbb{R}^d$). Most of the analysis is conducted assuming that the kernel is bounded; part of the results can be extended when we only assume a finite second-order moment.
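
    The penalty shape and its constant are the subject of the paper; the backbone of such a procedure, dynamic programming over within-segment kernel costs, can be sketched for a known number of change-points. The RBF kernel, its bandwidth and the toy data are illustrative choices, not the paper's prescriptions.

```python
import numpy as np

def rbf_gram(x, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * (x_i - x_j)^2) for 1-D data."""
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-gamma * d2)

def kernel_changepoints(x, n_cp, gamma=1.0):
    """Segment x into n_cp + 1 pieces by minimizing within-segment
    dispersion in the kernel feature space (kernel least squares)."""
    x = np.asarray(x, float)
    n = len(x)
    K = rbf_gram(x, gamma)
    # 2-D prefix sums of the Gram matrix give O(1) segment costs.
    S = np.zeros((n + 1, n + 1))
    S[1:, 1:] = K.cumsum(axis=0).cumsum(axis=1)
    diag = np.concatenate(([0.0], np.cumsum(np.diag(K))))

    def cost(a, b):  # cost of segment x[a:b]
        block = S[b, b] - S[a, b] - S[b, a] + S[a, a]
        return (diag[b] - diag[a]) - block / (b - a)

    # L[m, b]: best cost of splitting x[:b] into m segments.
    L = np.full((n_cp + 2, n + 1), np.inf)
    L[0, 0] = 0.0
    back = np.zeros((n_cp + 2, n + 1), dtype=int)
    for m in range(1, n_cp + 2):
        for b in range(m, n + 1):
            vals = [L[m - 1, a] + cost(a, b) for a in range(m - 1, b)]
            idx = int(np.argmin(vals))
            L[m, b] = vals[idx]
            back[m, b] = idx + (m - 1)
    # Backtrack to recover the change-point indices.
    cps, b = [], n
    for m in range(n_cp + 1, 1, -1):
        b = back[m, b]
        cps.append(b)
    return sorted(cps)

# Toy sequence: a mean shift at index 50 and a variance shift at 100.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 50), rng.normal(3, 1, 50),
                    rng.normal(3, 4, 50)])
print(kernel_changepoints(x, n_cp=2, gamma=0.5))
```

    Because the cost lives in the kernel feature space, a characteristic kernel lets the same recursion pick up the variance shift as well as the mean shift; the paper's contribution is choosing n_cp automatically via the penalized criterion rather than fixing it as here.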

    A Kernel-Based Change Detection Method to Map Shifts in Phytoplankton Communities Measured by Flow Cytometry

    1. Automated, ship-board flow cytometers provide high-resolution maps of phytoplankton composition over large swaths of the world's oceans. They therefore pave the way for understanding how environmental conditions shape community structure. Identification of community changes along a cruise transect commonly segments the data into distinct regions. However, existing segmentation methods are generally not applicable to flow cytometry data, as these data are recorded as ‘point cloud’ data, with hundreds or thousands of particles measured during each time interval. Moreover, nonparametric segmentation methods that do not rely on prior knowledge of the number of species are desirable for mapping community shifts.
    2. We present CytoSegmenter, a kernel-based change-point estimation method for segmenting point cloud data. Our method represents and summarizes a point cloud of data points by a single element in a Hilbert space. The change-point locations can then be found with a fast dynamic programming algorithm.
    3. Through an analysis of 12 cruises, we demonstrate that CytoSegmenter locates abrupt changes in phytoplankton community structure. We show that the changes in community structure generally coincide with changes in the temperature and salinity of the ocean. We also illustrate how the main parameter of CytoSegmenter can be easily calibrated using limited auxiliary annotated data.
    4. CytoSegmenter is generally applicable to segmenting series of point cloud data from any domain, and it readily scales to thousands of point clouds, each containing thousands of points. In the context of flow cytometry data collected during research cruises, it does not require prior clustering of particles to define taxon labels, eliminating a potential source of error. This represents an important advance in automating the analysis of the large datasets now emerging in biological oceanography and other fields, and it allows the approach to be applied during research cruises.
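
    The Hilbert-space summary at the heart of this approach can be approximated cheaply with random Fourier features (Rahimi and Recht, 2007): each point cloud becomes one fixed-length mean-embedding vector, and change detection then operates on the sequence of vectors. The feature dimension, bandwidth and simulated clouds below are illustrative assumptions, not CytoSegmenter's actual pipeline.

```python
import numpy as np

def rff_mean_embedding(cloud, W, b):
    """Approximate RBF kernel mean embedding of one point cloud.

    cloud : (n_particles, n_features) array of cytometry measurements.
    z(x) = sqrt(2/D) * cos(W @ x + b) approximates the RBF feature map;
    averaging z over particles summarizes the whole cloud as a single
    vector in R^D.
    """
    proj = cloud @ W.T + b                     # (n_particles, D)
    return np.sqrt(2.0 / W.shape[0]) * np.cos(proj).mean(axis=0)

rng = np.random.default_rng(2)
D, n_feat, sigma = 200, 3, 1.0                 # illustrative choices
W = rng.normal(0, 1 / sigma, (D, n_feat))      # RBF spectral samples
b = rng.uniform(0, 2 * np.pi, D)

# Hypothetical transect: 30 clouds, community shift after cloud 15.
clouds = ([rng.normal(0.0, 1.0, (500, n_feat)) for _ in range(15)]
          + [rng.normal(1.5, 1.0, (500, n_feat)) for _ in range(15)])
E = np.array([rff_mean_embedding(c, W, b) for c in clouds])

# Distance between consecutive embeddings spikes at the shift.
gaps = np.linalg.norm(np.diff(E, axis=0), axis=1)
print("largest jump between clouds", int(np.argmax(gaps)), "and",
      int(np.argmax(gaps)) + 1)
```

    On the embedding sequence E, a dynamic program like the one sketched for KCP above (or any standard change-point routine) recovers the segment boundaries without ever clustering individual particles.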
