
    Economics of Disability Research Report #4: Estimates of the Prevalence of Disability, Employment Rates, and Median Household Size-Adjusted Income for People with Disabilities Aged 18 through 64 in the United States by State, 1980 through 2000

    This report replicates Economics of Disability Reports 1, 2, and 3, with some minor changes. Those reports presented the prevalence of disability, employment rates, and median household size-adjusted income across states over the 1980s and 1990s. In response to requests from state officials for statistics that reflect the population they serve, this report covers people aged 18 through 64 rather than people aged 25 through 61. The broader age group captures those who enter the labor force after high school, during college, and post-college, as well as those who have decided not to take early retirement. In addition, at the request of state officials, the statistics in this report are not separated by gender, because most government agencies do not make a strong distinction between men and women, even though men and women face different labor market conditions. This report uses data from the March Current Population Survey to estimate the prevalence of disability, the employment rate, and median household size-adjusted income among the non-institutionalized working-age (aged 18 through 64) civilian population in the United States as a whole, and in each state and the District of Columbia, for the survey years 1981 through 2000 and income/employment years 1980 through 1999. Two definitions of disability that are commonly used in the literature (work limitation and work disability) are utilized. The prevalence of a work limitation and of a work disability varies greatly across states and over time. The employment rate of persons with work limitations relative to that of persons without a disability also varies greatly across states, and over the last 20 years this relative employment rate declined dramatically overall and in most states. Consequently, the decline in the relative employment rate of persons with work limitations dampened the growth in their median household size-adjusted income.
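
    The size adjustment mentioned above divides household income by an equivalence scale before taking the median. Below is a minimal sketch in Python, assuming the common square-root equivalence scale (the report may use a different scale) and purely illustrative figures:

        # Median household size-adjusted income: a minimal sketch.
        # Assumes the common square-root equivalence scale
        # (adjusted income = household income / sqrt(household size));
        # the report itself may use a different scale.
        import math
        import statistics

        households = [
            # (household income in dollars, household size) -- illustrative values
            (30000, 1),
            (45000, 2),
            (60000, 4),
            (52000, 3),
        ]

        adjusted = [income / math.sqrt(size) for income, size in households]
        print(f"median size-adjusted income: {statistics.median(adjusted):,.0f}")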

    Rain estimation from satellites: An examination of the Griffith-Woodley technique

    The Griffith-Woodley Technique (GWT) is an approach to estimating precipitation using infrared observations of clouds from geosynchronous satellites. It is examined in three ways: an analysis of the terms in the GWT equations; a case study of infrared imagery portraying convective development over Florida; and a comparison of a simplified equation set, and the resulting rain map, with results using the GWT. The objective is to determine the dominant factors in the calculation of GWT rain estimates. Analysis of a single day's convection over Florida produced a number of significant insights into the terms in the GWT rainfall equations. Because clouds are defined by a threshold isotherm, the majority of clouds on this day did not go through an idealized life cycle before losing their identity through merger, splitting, etc. As a result, 85% of the clouds had a defined lifetime of 0.5 or 1 h. For these clouds the terms in the GWT that depend on cloud life history become essentially constant. The empirically derived ratio of radar echo area to cloud area takes a single value (0.02) for 43% of the sample, while the rain-rate term is 20.7 mm h^-1 for 61% of the sample. For 55% of the sampled clouds the temperature weighting term is identically 1.0. Cloud area itself is highly correlated (r = 0.88) with GWT-computed rain volume. An important discriminating parameter in the GWT is the temperature defining the coldest 10% of cloud area. The analysis further shows that the two dominant parameters in rainfall estimation are the existence of cold cloud and the duration of cloud over a point.
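
    As a rough illustration of how a threshold-isotherm scheme of this kind operates (a deliberately simplified sketch, not the full GWT with its life-history and area-ratio terms), the following Python fragment thresholds an infrared brightness-temperature field and applies a fixed rain rate to the cold pixels; the threshold, rain rate, pixel size, and image interval are all assumptions of this sketch, the rate merely echoing the near-constant term quoted above:

        import numpy as np

        # Simplified threshold scheme, loosely in the spirit of the GWT:
        # cold cloud (below a threshold isotherm) receives a fixed rain rate.
        # All constants here are illustrative assumptions.
        THRESHOLD_K = 253.0          # assumed threshold isotherm
        RAIN_RATE_MM_PER_H = 20.7    # echoes the rain-rate term in the abstract
        PIXEL_AREA_KM2 = 16.0        # assumed pixel footprint
        INTERVAL_H = 0.5             # assumed image interval

        rng = np.random.default_rng(0)
        tb = rng.uniform(210.0, 300.0, size=(100, 100))  # fake IR brightness temps (K)

        cold = tb < THRESHOLD_K
        # Rain volume per image: rained depth (mm -> km) times cold-cloud area (km^2).
        depth_km = RAIN_RATE_MM_PER_H * INTERVAL_H * 1e-6
        rain_volume_km3 = depth_km * cold.sum() * PIXEL_AREA_KM2
        print(f"cold-cloud fraction: {cold.mean():.2%}, rain volume: {rain_volume_km3:.4f} km^3")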

    High transverse momentum suppression and surface effects in Cu+Cu and Au+Au collisions within the PQM model

    We study parton suppression effects in heavy-ion collisions within the Parton Quenching Model (PQM). After a brief summary of the main features of the model, we present comparisons of calculations for the nuclear modification factor and the away-side suppression factor with data from Au+Au and Cu+Cu collisions at 200 GeV. We discuss properties of light hadron probes and their sensitivity to the medium density within the PQM Monte Carlo framework. Comment: 6 pages, 8 figures. To appear in the proceedings of Hot Quarks 2006: Workshop for Young Scientists on the Physics of Ultrarelativistic Nucleus-Nucleus Collisions, Villasimius, Italy, 15-20 May 2006.
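
    For orientation, the nuclear modification factor is conventionally defined as the per-event AA yield divided by the binary-collision-scaled per-event pp yield. A minimal sketch of the bookkeeping follows, with placeholder spectra and an assumed mean number of binary collisions (these numbers are illustrative, not PQM output):

        import numpy as np

        # Nuclear modification factor R_AA(pT) from binned yields:
        # R_AA = (per-event AA yield) / (<N_coll> * per-event pp yield).
        # All numbers below are placeholders for illustration.
        pt_bins = np.array([2.0, 4.0, 6.0, 8.0, 10.0])          # GeV/c bin centers
        yield_pp = np.array([1e-2, 1e-3, 2e-4, 6e-5, 2e-5])     # per-event pp yield
        yield_aa = np.array([2.4, 0.20, 0.034, 0.010, 0.0034])  # per-event AA yield
        n_coll = 960.0  # assumed <N_coll> for a central Au+Au-like sample

        r_aa = yield_aa / (n_coll * yield_pp)
        for pt, r in zip(pt_bins, r_aa):
            print(f"pT = {pt:4.1f} GeV/c  R_AA = {r:.2f}")

    Values of R_AA well below one, as in this toy output, are the signature of the high transverse momentum suppression the title refers to.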

    Evaluation of the Axial Vector Commutator Sum Rule for Pion-Pion Scattering

    We consider the sum rule proposed by one of us (SLA), obtained by taking the expectation value of an axial-vector commutator in a state with one pion. The sum rule relates the pion decay constant to integrals of pion-pion cross sections, with one pion off the mass shell. We remark that recent data on pion-pion scattering allow a precise evaluation of the sum rule. We also discuss the related Adler-Weisberger sum rule (obtained by taking the expectation value of the same commutator in a state with one nucleon), especially in connection with the problem of extrapolating the pion momentum off its mass shell. We find, with current data, that both the pion-pion and pion-nucleon sum rules are satisfied to better than six percent, and we give detailed estimates of the experimental and extrapolation errors in the closure discrepancies. Comment: Plain TeX file; minor changes; version to be published in Phys. Rev. D; corrected refs. 12,1

    A Shape Theorem for Riemannian First-Passage Percolation

    Riemannian first-passage percolation (FPP) is a continuum model, with a distance function arising from a random Riemannian metric in $\mathbb{R}^d$. Our main result is a shape theorem for this model, which says that large balls under this metric converge to a deterministic shape under rescaling. As a consequence, we show that smooth random Riemannian metrics are geodesically complete with probability one.
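
    Schematically, a shape theorem of this kind asserts (our paraphrase of the standard formulation, not the paper's exact statement) that, writing $B(0,t)$ for the ball of radius $t$ about the origin in the random metric, there is a deterministic compact convex set $B_0 \subset \mathbb{R}^d$ such that, almost surely, for every $\varepsilon > 0$,

        (1-\varepsilon)\, B_0 \;\subseteq\; \tfrac{1}{t}\, B(0,t) \;\subseteq\; (1+\varepsilon)\, B_0 \qquad \text{for all sufficiently large } t.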

    Breaking quantum linearity: constraints from human perception and cosmological implications

    Resolving the tension between quantum superpositions and the uniqueness of the classical world is a major open problem. One possibility, which is extensively explored both theoretically and experimentally, is that quantum linearity breaks above a given scale. Theoretically, this possibility is predicted by collapse models, which provide quantitative information on where violations of the superposition principle become manifest. Here we show that the lower bound on the collapse parameter lambda, coming from the analysis of the human visual process, is ~ 7 +/- 2 orders of magnitude stronger than the original bound, in agreement with more recent analyses. This implies that the collapse becomes effective for systems containing ~ 10^4 - 10^5 nucleons, and thus falls within the range of testability with present-day technology. We also compare the spectrum of the collapsing field with those of known cosmological fields, showing that a typical cosmological random field can yield an efficient wave function collapse. Comment: 13 pages, LaTeX, 3 figures.
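
    To see how such a bound translates into a nucleon-number scale, here is a back-of-the-envelope sketch. It assumes quadratic amplification of the collapse rate (rate ~ lambda N^2 for N nucleons within the noise correlation length) and a perception timescale of 100 ms; both are assumptions of this sketch, not values taken from the paper:

        import math

        # Back-of-the-envelope: with quadratic amplification the collapse rate
        # for N nucleons is ~ lambda * N**2 (an assumption of this sketch),
        # so collapse within a time tau needs N >= sqrt(1 / (lambda * tau)).
        lam_grw = 1e-16          # GRW reference value, s^-1
        lam = lam_grw * 10**7    # a bound ~7 orders of magnitude stronger (see abstract)
        tau = 0.1                # assumed perception timescale, s

        n_min = math.sqrt(1.0 / (lam * tau))
        print(f"N_min ~ 10^{math.log10(n_min):.1f} nucleons")

    Under these assumptions the estimate lands at ~ 10^5 nucleons, consistent with the range quoted in the abstract.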

    Multidimensional Inverse Scattering of Integrable Lattice Equations

    We present a discrete inverse scattering transform for all ABS equations excluding Q4. The nonlinear partial difference equations in the ABS hierarchy form a comprehensive class of scalar affine-linear lattice equations possessing the multidimensional consistency property. Due to this property it is natural to consider these equations as living on an N-dimensional lattice, where the solutions depend on N distinct independent variables and associated parameters. The direct scattering procedure, which is one-dimensional, is carried out along a staircase within this multidimensional lattice. The solutions obtained depend on all N lattice variables and parameters. We further show that the soliton solutions derived from the Cauchy matrix approach are exactly the solutions obtained from reflectionless potentials, and we give a short discussion of inverse scattering solutions of some previously known lattice equations, such as the lattice KdV equation. Comment: 18 pages.
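
    As a concrete member of the class discussed, the lattice potential KdV equation (equation H1 of the ABS list) reads, in the standard shift notation $\tilde{u} = u(n+1,m)$, $\hat{u} = u(n,m+1)$:

        (u - \hat{\tilde{u}})\,(\tilde{u} - \hat{u}) = p^2 - q^2,

    where p and q are the lattice parameters attached to the two directions. Multidimensional consistency means that copies of this equation can be imposed on every coordinate plane of an N-dimensional lattice, each direction carrying its own parameter, without contradiction.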

    The rationale and suggested approaches for research geosynchronous satellite measurements for severe storm and mesoscale investigations

    The measurements from current and planned geosynchronous satellites provide quantitative estimates of temperature and moisture profiles, surface temperature, wind, cloud properties, and precipitation. A number of significant observational needs nevertheless remain unmet; they include: (1) temperature and moisture profiles in cloudy areas; (2) high vertical profile resolution; (3) definitive precipitation area mapping and precipitation rate estimates on the convective cloud scale; (4) winds from low-level cloud motions at night; (5) the determination of convective cloud structure; and (6) high-resolution surface temperature determination. Four major new observing capabilities are proposed to overcome these deficiencies: a microwave sounder/imager, a high-resolution visible and infrared imager, a high spectral resolution infrared sounder, and a total ozone mapper. It is suggested that the four sensors be flown together and used to support major mesoscale and short-range forecasting field experiments.

    Collapse models with non-white noises II: particle-density coupled noises

    We continue the analysis of models of spontaneous wave function collapse with stochastic dynamics driven by non-white Gaussian noise. We specialize to a model in which a classical "noise" field, with specified autocorrelator, is coupled to a local nonrelativistic particle density. We derive general results in this model for the rates of density matrix diagonalization and of state vector reduction, and show that (in the absence of decoherence) both processes are governed by essentially the same rate parameters. As an alternative route to our reduction results, we also derive the Fokker-Planck equations that correspond to the initial stochastic Schr\"odinger equation. For specific models of the noise autocorrelator, including ones motivated by the structure of thermal Green's functions, we discuss the qualitative and quantitative dependence on model parameters, with particular emphasis on possible cosmological sources of the noise field. Comment: LaTeX, 43 pages; versions 2 and 3 have minor editorial revisions.
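
    For orientation, in the white-noise limit a collapse dynamics of this type reduces to the standard It\^o stochastic Schr\"odinger equation, written here for a single self-adjoint collapse operator A (this is the textbook limiting case, not the paper's non-white equation):

        d\psi_t = \Big[ -\tfrac{i}{\hbar} H\, dt
            + \sqrt{\lambda}\, \big(A - \langle A \rangle_t\big)\, dW_t
            - \tfrac{\lambda}{2}\, \big(A - \langle A \rangle_t\big)^2\, dt \Big]\, \psi_t,
        \qquad \langle A \rangle_t = \langle \psi_t | A | \psi_t \rangle.

    The non-white case replaces the Wiener increment dW_t by a Gaussian noise with the specified autocorrelator, which is what makes the Fokker-Planck route mentioned above useful.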