
    Flexible format, computer accessed telemetry system

    With this system, it is possible to sample and generate two or more simultaneous formats; one can be transmitted to the ground station in real time, and the other is stored for later transmission. Sensor output comparison data, plus information to control the format, the compression algorithm, and the allowable degree of sensor activity, are stored in memory.
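    As a rough, hypothetical illustration of the comparison-driven compression such a system implies (not the actual onboard algorithm), the Python sketch below transmits a channel only when its new sample departs from the stored comparison value by more than an allowed activity threshold:

        # Hypothetical sketch of comparison-driven telemetry compression; names
        # and thresholds are illustrative, not taken from the original system.
        def compress_frame(samples, reference, thresholds):
            """Return (channel, value) pairs for channels whose sample differs from
            the stored comparison value by more than the allowed threshold, and
            update the stored values in place."""
            transmitted = []
            for channel, value in enumerate(samples):
                if abs(value - reference[channel]) > thresholds[channel]:
                    transmitted.append((channel, value))
                    reference[channel] = value  # store new comparison value in memory
            return transmitted

        # Example: three sensor channels; only channel 1 changes enough to be sent.
        reference = [0.0, 5.0, 12.0]
        thresholds = [0.5, 0.5, 1.0]
        print(compress_frame([0.2, 6.1, 12.4], reference, thresholds))  # [(1, 6.1)]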

    Data multiplexer using a tree switch

    A self-decoding FET-hybrid or integrated-circuit tree configuration uses a minimum number of components and can be sequenced by a clock or a computer. Redundancy features can readily be incorporated into the tree configuration; as the tree grows in size and more sensors are included, the percentage of parts that affect a given percentage of sensors steadily decreases.
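    As a conceptual sketch of how such a tree shares switching parts among many sensors (a software model, not the FET-hybrid circuit itself), the following Python fragment steers one of N sensor outputs to the root through log2(N) levels, one address bit per level:

        # Conceptual model of a binary tree multiplexer: each address bit controls
        # one level of the tree, so N sensors share only log2(N) levels of switches.
        def tree_select(sensor_values, address_bits):
            """Halve the candidate set at each tree level according to the
            corresponding address bit and return the selected sensor output."""
            selected = sensor_values
            for bit in address_bits:                     # one bit per tree level
                half = len(selected) // 2
                selected = selected[half:] if bit else selected[:half]
            return selected[0]

        # Example: 8 sensors; address bits [1, 0, 1] select the sensor at index 5.
        sensors = [10, 11, 12, 13, 14, 15, 16, 17]
        print(tree_select(sensors, [1, 0, 1]))  # 15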

    Energy deposition in microscopic volumes by high-energy protons

    Microscopic energy deposition from passing protons in tissue spheres.

    Bayesian weak lensing tomography: Reconstructing the 3D large-scale distribution of matter with a lognormal prior

    We present a Bayesian reconstruction algorithm that infers the three-dimensional large-scale matter distribution from the weak gravitational lensing effects measured in the image shapes of galaxies. The algorithm is designed to also work with non-Gaussian posterior distributions which arise, for example, from a non-Gaussian prior distribution. In this work, we use a lognormal prior and compare the reconstruction results to a Gaussian prior in a suite of increasingly realistic tests on mock data. We find that in cases of high noise levels (i.e. for low source galaxy densities and/or high shape measurement uncertainties), both normal and lognormal priors lead to reconstructions of comparable quality, but with the lognormal reconstruction being prone to mass-sheet degeneracy. In the low-noise regime and on small scales, the lognormal model produces better reconstructions than the normal model: The lognormal model 1) enforces non-negative densities, while negative densities are present when a normal prior is employed, 2) better traces the extremal values and the skewness of the true underlying distribution, and 3) yields a higher pixel-wise correlation between the reconstruction and the true density. Comment: 23 pages, 12 figures; updated to match version accepted for publication in PR
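    As a pointer to how such a prior enforces non-negative densities, a common lognormal parametrisation (the paper's exact convention may differ) writes the density contrast as the exponential of an auxiliary Gaussian field,

        \delta(\vec{\theta}) = \exp\!\big[ s(\vec{\theta}) - \tfrac{1}{2}\sigma_s^2 \big] - 1, \qquad s \sim \mathcal{N}(0, S),

    so that 1 + \delta > 0 holds by construction, whereas a Gaussian prior on \delta itself permits unphysical negative densities.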

    Changes in the frequency distribution of energy deposited in short pathlengths as a function of energy degradation of the primary beam

    Frequency distributions of event size in deposition of energy over small pathlengths, measured after penetration of 44.3 MeV protons through thicknesses of tissue-like material.

    Contextual approach to quantum mechanics and the theory of the fundamental prespace

    We constructed a Hilbert space representation of a contextual Kolmogorov model. This representation is based on two fundamental observables; in the standard quantum model these are the position and momentum observables. The representation has all the distinguishing features of the quantum model. Thus, in spite of all "No-Go" theorems (e.g., von Neumann, Kochen and Specker, ..., Bell), we found a realist basis for quantum mechanics. Our representation is not a standard hidden-variable model. In particular, it is not a reduction of the quantum model to a classical one. Moreover, we see that such a reduction is in principle impossible. This impossibility is not a consequence of a mathematical theorem but follows from the physical structure of the model. In our model, quantum states are very rough images of domains in the space of fundamental parameters - PRESPACE. Those domains represent complexes of physical conditions. In our model, both classical and quantum physics describe REDUCTION of PRESPACE-INFORMATION. Quantum mechanics is not complete. In particular, there are prespace contexts which can be represented only by a so-called hyperbolic quantum model. We predict violations of Heisenberg's uncertainty principle and the existence of dispersion-free states. Comment: Plenary talk at the conference "Quantum Theory: Reconsideration of Foundations-2", Vaxjo, 1-6 June, 200

    Cosmology with the lights off: Standard sirens in the Einstein Telescope era

    We explore the prospects for constraining cosmology using gravitational-wave (GW) observations of neutron-star binaries by the proposed Einstein Telescope (ET), exploiting the narrowness of the neutron-star mass function. Double neutron-star (DNS) binaries are expected to be one of the first sources detected after "first light" of Advanced LIGO and are expected to be detected at a rate of a few tens per year in the advanced era. However, the proposed ET could catalog tens of thousands per year. Combining the measured source redshift distributions with GW-network distance determinations will permit not only the precision measurement of background cosmological parameters, but will also provide insight into the astrophysical properties of these DNS systems. Of particular interest will be probing the distribution of delay times between DNS-binary creation and subsequent merger, as well as the evolution of the star-formation rate density within ET's detection horizon. Keeping H_0, \Omega_{m,0} and \Omega_{\Lambda,0} fixed and investigating the precision with which the dark-energy equation-of-state parameters could be recovered, we found that with 10^5 detected DNS binaries we could constrain these parameters to an accuracy similar to forecasted constraints from future CMB+BAO+SNIa measurements. Furthermore, modeling the merger delay-time distribution as a power law, and the star-formation rate (SFR) density as a parametrized version of the Porciani and Madau SF2 model, we find that the associated astrophysical parameters are constrained to within ~ 10%. All parameter precisions scaled as 1/sqrt(N), where N is the number of cataloged detections. We also investigated how precisions varied with the intrinsic underlying properties of the Universe and with the distance reach of the network (which may be affected by the low-frequency cutoff of the detector). Comment: 24 pages, 11 figures, 6 tables. Minor changes to reflect published version. References updated and corrected
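    As a minimal numerical illustration of the quoted 1/sqrt(N) scaling, assuming N independent events with identical single-event scatter (the fiducial H_0 and the 10% per-event uncertainty below are illustrative assumptions, not values from the paper), a short Python sketch:

        # Sketch of the 1/sqrt(N) scaling of parameter precision with catalogue size.
        import numpy as np

        rng = np.random.default_rng(0)
        true_H0 = 70.0                     # km/s/Mpc, fiducial value (assumed)
        sigma_1 = 0.10 * true_H0           # assumed single-event uncertainty

        for N in (10, 100, 10_000):
            # Repeat the mock catalogue many times to measure the scatter of the combined estimate.
            combined = [rng.normal(true_H0, sigma_1, N).mean() for _ in range(1_000)]
            print(N, round(float(np.std(combined)), 3), round(sigma_1 / np.sqrt(N), 3))

    The empirical scatter of the combined estimate tracks sigma_1/sqrt(N), which is the scaling the abstract reports for the full parameter set.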

    A comparison of the excess mass around CFHTLenS galaxy-pairs to predictions from a semi-analytic model using galaxy-galaxy-galaxy lensing

    The matter environment of galaxies is connected to the physics of galaxy formation and evolution. Utilising galaxy-galaxy-galaxy lensing as a direct probe, we map out the distribution of correlated surface mass-density around galaxy pairs for different lens separations in the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS). We compare, for the first time, these so-called excess mass maps to predictions provided by a recent semi-analytic model, which is implanted within the dark-matter Millennium Simulation. We analyse galaxies with stellar masses between $10^9$ and $10^{11}\,{\rm M}_\odot$ in two photometric redshift bins, for lens redshifts $z\lesssim0.6$, focusing on pairs inside groups and clusters. To allow a better interpretation of the maps, we discuss the impact of chance pairs, i.e., galaxy pairs that appear close to each other in projection only. Our tests with synthetic data demonstrate that the patterns observed in the maps are essentially produced by correlated pairs that are close in redshift ($\Delta z\lesssim5\times10^{-3}$). We also verify the excellent accuracy of the map estimators. In an application to the galaxy samples in the CFHTLenS, we obtain a $3\sigma$-$6\sigma$ significant detection of the excess mass and an overall good agreement with the galaxy model predictions. There are, however, a few localised spots in the maps where the observational data disagree with the model predictions at a $\approx3.5\sigma$ confidence level. Although we have no strong indications for systematic errors in the maps, this disagreement may be related to the residual B-mode pattern observed in the average of all maps. Alternatively, misaligned galaxy pairs inside dark matter halos or lensing by a misaligned distribution of the intra-cluster gas might also cause the unanticipated bulge in the distribution of the excess mass between lens pairs. Comment: 21 pages, 12 figures; abridged abstract; revised version for A&A after addressing all comments by the referee

    Discriminants, symmetrized graph monomials, and sums of squares

    Motivated by the needs of the invariant theory of binary forms, J. J. Sylvester constructed in 1878, for each graph with possible multiple edges but without loops, its symmetrized graph monomial, which is a polynomial in the vertex labels of the original graph. In the 20th century this construction was studied by several authors. We pose the question of which graphs have a symmetrized graph monomial that is non-negative, respectively a sum of squares. This problem is motivated by a recent conjecture of F. Sottile and E. Mukhin on the discriminant of the derivative of a univariate polynomial, and by an interesting example of P. and A. Lax of a graph with 4 edges whose symmetrized graph monomial is non-negative but not a sum of squares. We present detailed information about symmetrized graph monomials for graphs with four and six edges, obtained by computer calculations.
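    For readers unfamiliar with the construction, the symmetrized graph monomial is usually defined (up to a conventional normalisation by the number of permutations) as the sum, over all relabelings of the vertices, of the product of (x_i - x_j) over the edges. The short Python sketch below computes it for a small illustrative multigraph, not one of the graphs studied in the paper:

        # Sketch of Sylvester's symmetrized graph monomial: sum over all vertex
        # relabelings of the product of (x_i - x_j) over the edges of the graph.
        from itertools import permutations
        import sympy as sp

        def symmetrized_graph_monomial(n_vertices, edges):
            x = sp.symbols(f'x0:{n_vertices}')
            total = sp.Integer(0)
            for sigma in permutations(range(n_vertices)):
                term = sp.Integer(1)
                for i, j in edges:                 # multiple edges are allowed
                    term *= x[sigma[i]] - x[sigma[j]]
                total += term
            return sp.expand(total)

        # Doubled edge on two vertices: the expanded result equals 2*(x0 - x1)**2,
        # which is visibly a sum of squares.
        print(symmetrized_graph_monomial(2, [(0, 1), (0, 1)]))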