
    Comparing Tycho-2 Astrometry with UCAC1

    The Tycho-2 Catalogue, released in February 2000, is based on the ESA Hipparcos space mission data and various ground-based catalogs for proper motions. An external comparison of the Tycho-2 astrometry is presented here using the first U.S. Naval Observatory CCD Astrograph Catalog (UCAC1). The UCAC1 data were obtained from observations performed at CTIO between February 1998 and November 1999, using the 206 mm aperture, 5-element lens astrograph and a 4k x 4k CCD. Only small systematic differences in position between Tycho-2 and UCAC1, up to 15 milliarcseconds (mas), are found, mainly as a function of magnitude. The standard deviations of the distributions of the position differences are in the 35 to 140 mas range, depending on magnitude. The observed scatter in the position differences is about 30% larger than expected from the combined formal, internal errors, also depending on magnitude. The Tycho-2 Catalogue has the more precise positions for bright stars (V <= 10 mag), while the UCAC1 positions are significantly better at the faint end (11 mag <= V <= 12.5 mag) of the magnitude range in common. UCAC1 goes much fainter (to R = 16) than Tycho-2; however, complete sky coverage is not expected before mid-2003. Comment: LaTeX, 8 pages, 3 PS figures, accepted by AJ (Aug 2000); see also http://ad.usno.navy.mil/ad/ucac/; requests for the UCAC1 CD-ROM: e-mail [email protected]; requests for the Tycho-2 CD-ROM: e-mail [email protected] or [email protected]
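
    The catalog comparison above amounts to computing cross-matched position differences and their scatter as a function of magnitude. A minimal sketch of that bookkeeping (the array names, units, and magnitude binning are illustrative assumptions, not the actual Tycho-2/UCAC1 reduction pipeline):

    ```python
    import numpy as np

    def position_differences(ra_tycho, dec_tycho, ra_ucac, dec_ucac, vmag, bins):
        """Per-magnitude-bin systematic offset and scatter, in milliarcseconds.

        Inputs are arrays for stars already cross-matched between the two
        catalogs: RA/Dec in degrees and a magnitude per star (illustrative).
        """
        mas = 3.6e6  # degrees -> milliarcseconds
        # Scale RA differences by cos(Dec) so they are true angular offsets.
        d_ra = (ra_tycho - ra_ucac) * np.cos(np.radians(dec_tycho)) * mas
        d_dec = (dec_tycho - dec_ucac) * mas
        results = []
        for lo, hi in zip(bins[:-1], bins[1:]):
            sel = (vmag >= lo) & (vmag < hi)
            if sel.sum() < 2:
                continue
            results.append({
                "mag_bin": (lo, hi),
                # mean difference = systematic offset; std = observed scatter
                "mean_offset_mas": (d_ra[sel].mean(), d_dec[sel].mean()),
                "sigma_mas": (d_ra[sel].std(ddof=1), d_dec[sel].std(ddof=1)),
            })
        return results
    ```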

    Thermodynamic curvature measures interactions

    Thermodynamic fluctuation theory originated with Einstein, who inverted the relation $S = k_B \ln\Omega$ to express the number of states in terms of entropy: $\Omega = \exp(S/k_B)$. The theory's Gaussian approximation is discussed in most statistical mechanics texts. I review work showing how to go beyond the Gaussian approximation by adding covariance, conservation, and consistency. This generalization leads to a fundamentally new object: the thermodynamic Riemannian curvature scalar $R$, a thermodynamic invariant. I argue that $|R|$ is related to the correlation length and suggest that the sign of $R$ corresponds to whether the interparticle interactions are effectively attractive or repulsive. Comment: 29 pages, 7 figures (added reference 27)
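
    For orientation, the standard textbook form of the Gaussian approximation the abstract refers to can be written as follows (a schematic statement of thermodynamic fluctuation theory, not the paper's specific notation):

    ```latex
    % Einstein's inversion gives the probability of a fluctuation
    % x = (x^1, ..., x^n) of the thermodynamic variables about equilibrium:
    \[
      \Omega = e^{S/k_B}
      \quad\Longrightarrow\quad
      P(x)\,dx \;\propto\; e^{\Delta S_{\mathrm{total}}/k_B}\,dx
      \;\approx\; \exp\!\Big(-\tfrac{1}{2}\, g_{\mu\nu}\,\Delta x^{\mu}\Delta x^{\nu}\Big)\,dx ,
    \]
    % where g_{\mu\nu} = -(1/k_B)\,\partial^2 S / \partial x^\mu \partial x^\nu is the
    % positive-definite fluctuation metric whose Riemannian curvature scalar R
    % is the thermodynamic invariant discussed in the abstract.
    ```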

    Active Mass Under Pressure

    After a historical introduction to Poisson's equation for Newtonian gravity, its analog for static gravitational fields in Einstein's theory is reviewed. It appears that the pressure contribution to the active mass density in Einstein's theory might also be noticeable at the Newtonian level. A form of its surprising appearance, first noticed by Richard Chase Tolman, was discussed half a century ago in the Hamburg Relativity Seminar and is resolved here. Comment: 28 pages, 4 figures
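
    A hedged sketch of the pair of equations at issue, in the static weak-field case with isotropic pressure (conventions differ between treatments):

    ```latex
    % Newtonian Poisson equation and its static analog in Einstein's theory:
    \[
      \nabla^{2}\phi = 4\pi G\,\rho
      \qquad\longrightarrow\qquad
      \nabla^{2}\phi = 4\pi G\left(\rho + \frac{3p}{c^{2}}\right),
    \]
    % i.e. the source of the static potential is the "active" mass density
    % \rho + 3p/c^2 for an isotropic pressure p, which is how pressure can
    % contribute even at the Newtonian level.
    ```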

    The Beta Generalized Exponential Distribution

    We introduce the beta generalized exponential distribution that includes the beta exponential and generalized exponential distributions as special cases. We provide a comprehensive mathematical treatment of this distribution. We derive the moment generating function and the $r$th moment, thus generalizing some results in the literature. Expressions for the density, moment generating function and $r$th moment of the order statistics are also obtained. We discuss estimation of the parameters by maximum likelihood and provide the information matrix. We observe in one application to a real data set that this model is quite flexible and can be used quite effectively in analyzing positive data in place of the beta exponential and generalized exponential distributions.
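
    A minimal numerical sketch of the density and distribution function implied by the beta-G construction, assuming the generalized exponential baseline $G(x) = (1 - e^{-\lambda x})^{\alpha}$ for $x > 0$; the parameter names and function signatures are illustrative, not the paper's code:

    ```python
    import numpy as np
    from scipy.special import beta as beta_fn
    from scipy.stats import beta as beta_dist

    def bge_pdf(x, a, b, alpha, lam):
        """Beta generalized exponential density, f = g/B(a,b) * G^(a-1) * (1-G)^(b-1)."""
        x = np.asarray(x, dtype=float)
        u = 1.0 - np.exp(-lam * x)                              # exponential CDF
        G = u**alpha                                            # generalized exponential CDF
        g = alpha * lam * np.exp(-lam * x) * u**(alpha - 1.0)   # its density
        return g / beta_fn(a, b) * G**(a - 1.0) * (1.0 - G)**(b - 1.0)

    def bge_cdf(x, a, b, alpha, lam):
        """F(x) = I_{G(x)}(a, b), the regularized incomplete beta function at G(x)."""
        G = (1.0 - np.exp(-lam * np.asarray(x, dtype=float)))**alpha
        return beta_dist.cdf(G, a, b)

    # Special cases: a = b = 1 recovers the generalized exponential distribution,
    # and alpha = 1 recovers the beta exponential distribution.
    ```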

    Random perfect lattices and the sphere packing problem

    Motivated by the search for the best lattice sphere packings in Euclidean spaces of large dimensions, we study randomly generated perfect lattices in moderately large dimensions (up to d = 19 included). Perfect lattices are relevant to the lattice sphere packing problem because the best lattice packing is a perfect lattice and because they can be generated easily by an algorithm. Their number, however, grows super-exponentially with the dimension, so to get an idea of their properties we propose to study a randomized version of the algorithm and to define a random ensemble with an effective temperature, in a way reminiscent of a Monte Carlo simulation. We study the distribution of packing fractions and kissing numbers of these ensembles and show how, as the temperature is decreased, the best known packers are easily recovered. We find that, even at infinite temperature, the typical perfect lattices are considerably denser than known families (like $A_d$ and $D_d$), and we propose two hypotheses between which we cannot distinguish in this paper: one in which they improve Minkowski's bound $\phi \sim 2^{-(0.84 \pm 0.06)d}$, and a competitor, in which their packing fraction decreases super-exponentially, namely $\phi \sim d^{-ad}$, but with a very small coefficient $a = 0.06 \pm 0.04$. We also find properties of the random walk which are suggestive of a glassy system already for moderately small dimensions. We also analyze the local structure of the network of perfect lattices, conjecturing that it is a scale-free network in all dimensions with a constant scaling exponent $2.6 \pm 0.1$. Comment: 19 pages, 22 figures
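
    The temperature-controlled random walk described above is reminiscent of a Metropolis rule biased toward dense lattices. A schematic sketch of such an acceptance step, with the perfect-lattice neighbor construction deliberately left abstract (the actual Voronoi-graph moves are far more involved and are not implemented here):

    ```python
    import math
    import random

    def random_walk(start, neighbors, packing_fraction, T, n_steps, seed=0):
        """Schematic Metropolis walk over candidate lattices at effective temperature T.

        `neighbors(lattice)` and `packing_fraction(lattice)` are placeholders for
        the actual perfect-lattice construction; only the acceptance rule is shown.
        """
        rng = random.Random(seed)
        current = start
        phi = packing_fraction(current)
        for _ in range(n_steps):
            candidate = rng.choice(neighbors(current))
            phi_new = packing_fraction(candidate)
            # Denser candidates are always accepted; sparser ones with probability
            # exp(-(phi - phi_new)/T), so T -> 0 greedily favors the densest
            # packers while T -> infinity accepts every proposed move.
            if phi_new >= phi or rng.random() < math.exp(-(phi - phi_new) / T):
                current, phi = candidate, phi_new
        return current, phi
    ```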

    WFPC2 Observations of the Hubble Deep Field-South

    The Hubble Deep Field-South observations targeted a high-galactic-latitude field near QSO J2233-606. We present WFPC2 observations of the field in four wide bandpasses centered at roughly 300, 450, 606, and 814 nm. Observations, data reduction procedures, and noise properties of the final images are discussed in detail. A catalog of sources is presented, and the number counts and color distributions of the galaxies are compared to a new catalog of the HDF-N that has been constructed in an identical manner. The two fields are qualitatively similar, with the galaxy number counts for the two fields agreeing to within 20%. The HDF-S has more candidate Lyman-break galaxies at z > 2 than the HDF-N. The star-formation rate per unit volume computed from the HDF-S, based on the UV luminosity of high-redshift candidates, is a factor of 1.9 higher than from the HDF-N at z ~ 2.7, and a factor of 1.3 higher at z ~ 4.Comment: 93 pages, 25 figures; contains very long table

    Gauss Linking Number and Electro-magnetic Uncertainty Principle

    It is shown that there is a precise sense in which the Heisenberg uncertainty between fluxes of electric and magnetic fields through finite surfaces is given by (one-half $\hbar$ times) the Gauss linking number of the loops that bound these surfaces. To regularize the relevant operators, one is naturally led to assign a framing to each loop. The uncertainty between the fluxes of electric and magnetic fields through a single surface is then given by the self-linking number of the framed loop which bounds the surface. Comment: 13 pages, RevTeX file, 3 EPS figures
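
    For reference, the classical Gauss linking number of two loops $\gamma_1$, $\gamma_2$ and a schematic form of the flux uncertainty relation stated above (written loosely, with unit conventions suppressed):

    ```latex
    % Gauss linking integral (standard formula) and the flux uncertainty relation
    % for surfaces bounded by the loops gamma_1 and gamma_2:
    \[
      L(\gamma_1,\gamma_2)
      = \frac{1}{4\pi}\oint_{\gamma_1}\oint_{\gamma_2}
        \frac{(\mathbf{r}_1-\mathbf{r}_2)\cdot(d\mathbf{r}_1\times d\mathbf{r}_2)}
             {|\mathbf{r}_1-\mathbf{r}_2|^{3}},
      \qquad
      \Delta\Phi_E\,\Delta\Phi_B \;\gtrsim\; \frac{\hbar}{2}\,\bigl|L(\gamma_1,\gamma_2)\bigr| .
    \]
    ```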

    Multi-model simulations of the impact of international shipping on Atmospheric Chemistry and Climate in 2000 and 2030

    The global impact of shipping on atmospheric chemistry and radiative forcing, as well as the associated uncertainties, has been quantified using an ensemble of ten state-of-the-art atmospheric chemistry models and a predefined set of emission data. The analysis is performed for present-day conditions (year 2000) and for two future ship emission scenarios. In one scenario ship emissions stabilize at 2000 levels; in the other, ship emissions increase with a constant annual growth rate of 2.2% up to 2030 (termed the "Constant Growth Scenario", CGS). Most other anthropogenic emissions follow the IPCC (Intergovernmental Panel on Climate Change) SRES (Special Report on Emission Scenarios) A2 scenario, while biomass burning and natural emissions remain at year 2000 levels. An intercomparison of the model results with observations over Northern Hemisphere (25°-60° N) oceanic regions in the lower troposphere showed that the models are capable of reproducing ozone (O3) and nitrogen oxides (NOx = NO + NO2) reasonably well, whereas sulphur dioxide (SO2) in the marine boundary layer is significantly underestimated. The most pronounced changes in annual mean tropospheric NO2 and sulphate columns are simulated over the Baltic and North Seas. Other significant changes occur over the North Atlantic, the Gulf of Mexico and along the main shipping lane from Europe to Asia, across the Red and Arabian Seas. Maximum contributions from shipping to annual mean near-surface O3 are found over the North Atlantic (5-6 ppbv in 2000; up to 8 ppbv in 2030). Ship contributions to tropospheric O3 columns over the North Atlantic and Indian Oceans reach 1 DU in 2000 and up to 1.8 DU in 2030. Tropospheric O3 forcings due to shipping are 9.8 ± 2.0 mW/m^2 in 2000 and 13.6 ± 2.3 mW/m^2 in 2030. Whilst increasing O3, ship NOx simultaneously enhances hydroxyl radicals over the remote ocean, reducing the global methane lifetime by 0.13 yr in 2000 and by up to 0.17 yr in 2030, introducing a negative radiative forcing. The models show future increases in NOx and O3 burden which scale almost linearly with increases in NOx emission totals. Increasing emissions from shipping would significantly counteract the benefits derived from reducing SO2 emissions from all other anthropogenic sources under the A2 scenario over the continents, for example in Europe. Globally, shipping contributes 3% to increases in O3 burden between 2000 and 2030, and 4.5% to increases in sulphate under A2/CGS. However, if future ground-based emissions follow a more stringent scenario, the relative importance of ship emissions will increase. Inter-model differences in the simulated O3 contributions from ships are significantly smaller than the estimated uncertainties stemming from the ship emission inventory, mainly the ship emission totals, the distribution of the emissions over the globe, and the neglect of ship plume dispersion.
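
    For scale, the 2.2% annual growth assumed in the CGS compounds over 2000-2030 as follows (plain arithmetic on the stated growth rate, not a model result):

    ```python
    # Cumulative growth of ship emissions under the Constant Growth Scenario:
    # 2.2% per year compounded from 2000 to 2030 (30 years).
    growth_rate = 0.022
    years = 2030 - 2000
    factor = (1.0 + growth_rate) ** years
    print(f"Ship emissions in 2030 are about {factor:.2f}x their 2000 level")
    # -> roughly 1.92x, i.e. ship emissions nearly double under CGS
    ```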

    Basic Understanding of Condensed Phases of Matter via Packing Models

    Packing problems have been a source of fascination for millennia and their study has produced a rich literature that spans numerous disciplines. Investigations of hard-particle packing models have provided basic insights into the structure and bulk properties of condensed phases of matter, including low-temperature states (e.g., molecular and colloidal liquids, crystals and glasses), multiphase heterogeneous media, granular media, and biological systems. The densest packings are of great interest in pure mathematics, including discrete geometry and number theory. This perspective reviews pertinent theoretical and computational literature concerning equilibrium, metastable, and nonequilibrium hard-particle packings in various Euclidean space dimensions. In the case of jammed packings, emphasis will be placed on the "geometric-structure" approach, which provides a powerful and unified means to quantitatively characterize individual packings via jamming categories and "order" maps. It incorporates extremal jammed states, including the densest packings, maximally random jammed states, and lowest-density jammed structures. Packings of identical spheres, spheres with a size distribution, and nonspherical particles are also surveyed. We close this review by identifying challenges and open questions for future research. Comment: 33 pages, 20 figures, Invited "Perspective" submitted to the Journal of Chemical Physics. arXiv admin note: text overlap with arXiv:1008.298

    Least-squares inversion for density-matrix reconstruction

    We propose a method for reconstruction of the density matrix from measurable time-dependent (probability) distributions of physical quantities. The applicability of the method, which is based on least-squares inversion, is very universal compared with other methods. It can be used to reconstruct quantum states of various systems, such as harmonic and anharmonic oscillators, including molecular vibrations in vibronic transitions and damped motion. It also enables one to take into account various specific features of experiments, such as limited sets of data and data smearing owing to limited resolution. To illustrate the method, we consider a Morse oscillator and give a comparison with other state-reconstruction methods suggested recently. Comment: 16 pages, REVTeX, 6 PS figures included
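
    The core linear-inversion step can be sketched generically: express each measured probability as $\mathrm{Tr}(\rho\,\Pi_k)$ for a known measurement operator $\Pi_k$ and solve the resulting overdetermined linear system by least squares. The following is a generic sketch under those standard assumptions, not the paper's Morse-oscillator implementation:

    ```python
    import numpy as np

    def reconstruct_rho(ops, probs):
        """Least-squares state reconstruction from probabilities p_k ~ Tr(rho Pi_k).

        `ops` is a list of d x d Hermitian measurement operators (placeholders),
        `probs` the corresponding measured probabilities.
        """
        d = ops[0].shape[0]
        # Each row of A is a flattened operator transpose, so that
        # A @ vec(rho) = [Tr(rho Pi_k)]_k, with vec() in row-major order.
        A = np.array([op.T.reshape(-1) for op in ops])
        x, *_ = np.linalg.lstsq(A, np.asarray(probs, dtype=complex), rcond=None)
        rho = x.reshape(d, d)
        rho = 0.5 * (rho + rho.conj().T)      # enforce Hermiticity
        return rho / np.trace(rho).real       # enforce unit trace

    # Note: with noisy data the least-squares estimate need not be positive
    # semidefinite; a final projection onto physical states is often added.
    ```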