
    Discovery of distant high luminosity infrared galaxies

    We have developed a method for selecting the most luminous galaxies detected by IRAS, based on their extreme values of R, the ratio of 60-micron to B-band luminosity. These objects have optical counterparts that are close to or below the limits of Schmidt surveys. We have tested our method on a 1079 deg^2 region of sky, where we selected a sample of IRAS sources with 60-micron flux densities greater than 0.2 Jy, corresponding to a redshift limit z ~ 1 for objects with far-IR luminosities of 10^{13} L_sun. Optical identifications for these were obtained from the UK Schmidt Telescope plates, using the likelihood ratio method. Optical spectroscopy has been carried out to reliably identify and measure the redshifts of six objects with very faint optical counterparts, which are the only objects with R > 100 in the sample. One object is a hyperluminous infrared galaxy (HyLIG) at z = 0.834. The remaining five, fainter objects are ultraluminous infrared galaxies (ULIGs) with a mean redshift of 0.45, higher than the highest known redshift of any non-hyperluminous ULIG prior to this study. High-excitation lines reveal the presence of an active nucleus in the HyLIG, just as in the other known infrared-selected HyLIGs. In contrast, no high-excitation lines are found in the non-hyperluminous ULIGs. We discuss the implications of our results for the number density of HyLIGs at z < 1 and for the evolution of the infrared galaxy population out to this redshift, and show that substantial evolution is indicated. Our selection method is robust against the presence of gravitational lensing if the optical and infrared magnification factors are similar, and we suggest a way of using it to select candidate gravitationally lensed infrared galaxies. Comment: 6 pages, accepted for publication in A&A
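The selection statistic R can be illustrated with a short numerical sketch. This is not the paper's exact definition: the B-band zero point and the nu*F_nu luminosity convention below are assumptions chosen for illustration only.

```python
# Hedged sketch: ratio R of 60-micron to B-band brightness, computed
# from an IRAS 60-micron flux density (Jy) and a B magnitude.
# The B=0 zero point (~4260 Jy) and the nu*F_nu convention are
# illustrative assumptions, not the survey's exact definition.

NU_60UM = 3e8 / 60e-6      # Hz, frequency at 60 microns
NU_B = 3e8 / 440e-9        # Hz, frequency at B band (~440 nm)
B_ZEROPOINT_JY = 4260.0    # approximate flux density of a B = 0 star

def flux_ratio_R(s60_jy: float, b_mag: float) -> float:
    """Return R = (nu F_nu at 60 um) / (nu F_nu in B band)."""
    f_b_jy = B_ZEROPOINT_JY * 10 ** (-0.4 * b_mag)
    return (NU_60UM * s60_jy) / (NU_B * f_b_jy)

# A source at the 0.2 Jy flux limit: the fainter its optical
# counterpart, the larger R becomes.
r_faint = flux_ratio_R(0.2, 21.0)
r_bright = flux_ratio_R(0.2, 18.0)
```

Under these assumptions a 0.2 Jy source must be optically very faint before R approaches the R > 100 regime, which matches the abstract's point that the selected objects sit near the Schmidt survey limits.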

    Gauge fields, ripples and wrinkles in graphene layers

    We analyze elastic deformations of graphene sheets which lead to effective gauge fields acting on the charge carriers. Corrugations in the substrate induce stresses, which, in turn, can give rise to mechanical instabilities and the formation of wrinkles. Similar effects may take place in suspended graphene samples under tension. Comment: contribution to the special issue of Solid State Communications on graphene

    Volunteer studies replacing animal experiments in brain research - Report and recommendations of a Volunteers in Research and Testing workshop


    Testing equality of variances in the analysis of repeated measurements

    The problem of comparing the precisions of two instruments using repeated measurements can be cast as an extension of the Pitman-Morgan problem of testing equality of variances of a bivariate normal distribution. Hawkins (1981) decomposes the hypothesis of equal variances in this model into two subhypotheses for which simple tests exist. For the overall hypothesis he proposes to combine the tests of the subhypotheses using Fisher's method, and empirically compares the component tests and their combination with the likelihood ratio test. In this paper an attempt is made to resolve some discrepancies and puzzling conclusions in Hawkins's study and to propose simple modifications. The new tests are compared to the tests discussed by Hawkins and to each other, both in terms of finite-sample power (estimated by Monte Carlo simulation) and theoretically in terms of asymptotic relative efficiencies.
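The classical Pitman-Morgan idea that Hawkins's decomposition builds on can be sketched in a few lines. This is a minimal illustration of the base test, not the paper's modified tests:

```python
import numpy as np

def pitman_morgan_t(x, y):
    """Pitman-Morgan t statistic for H0: Var(X) = Var(Y) on paired data.

    For bivariate normal (X, Y), Cov(X+Y, X-Y) = Var(X) - Var(Y), so
    equal variances correspond to zero correlation between the sum and
    the difference, which is tested with an ordinary t statistic on
    n - 2 degrees of freedom.
    """
    s, d = x + y, x - y
    r = np.corrcoef(s, d)[0, 1]
    n = len(x)
    return r * np.sqrt((n - 2) / (1 - r**2))

# Two paired "instruments": same precision vs. one much noisier.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
noise = rng.normal(size=500)
y_equal = 0.5 * x + np.sqrt(0.75) * noise  # Var(Y) = 1 = Var(X)
y_wider = 0.5 * x + 2.0 * noise            # Var(Y) = 4.25

t_equal = pitman_morgan_t(x, y_equal)  # small |t|: no evidence against H0
t_wider = pitman_morgan_t(x, y_wider)  # large |t|: variances clearly differ
```

The reduction to a correlation test is what makes the problem tractable; Hawkins's extension handles the additional structure that repeated measurements introduce.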

    The Double Quasar Q2138-431: Lensing by a Dark Galaxy?

    We report the discovery of a new gravitational lens candidate, Q2138-431AB, comprising two quasar images at a redshift of 1.641 separated by 4.5 arcsec. The spectra of the two images are very similar, and the redshifts agree to better than 115 km/s. The two images have magnitudes B_J = 19.8 and B_J = 21.0, and in spite of a deep search and image-subtraction procedure, no lensing galaxy has been found with R < 23.8. Modelling of the system configuration implies that the mass-to-light ratio of any lensing galaxy is likely to be around 1000 M_sun/L_sun, with an absolute lower limit of 200 M_sun/L_sun for an Einstein-de Sitter universe. We conclude that the most likely explanation of the observations is gravitational lensing by a dark galaxy, although it is possible we are seeing a binary quasar. Comment: 17 pages (Latex), 8 postscript figures included, accepted by MNRAS

    The CARMA correlator

    The Combined Array for Research in Millimeter-wave Astronomy (CARMA) requires a flexible correlator to process the data from up to 23 telescopes and up to 8 GHz of receiver bandwidth. The Caltech Owens Valley Broadband Reconfigurable Array (COBRA) correlator, developed for use at the Owens Valley millimeter-wave array and being used by the Sunyaev-Zeldovich Array (SZA), will be adapted for use by CARMA. The COBRA correlator system, a hybrid analog-digital design consisting of downconverters, digitizers, and correlators, is presented in this paper. The downconverters receive an input IF of 1-9 GHz and produce a selectable output bandwidth of 62.5 MHz, 125 MHz, 250 MHz, or 500 MHz. The downconverter output is digitized at 1 Gsample/s with 2 bits per sample. The digitized data are optionally digitally filtered to produce bands narrower than 62.5 MHz (down to 2 MHz). The digital correlator system is a lag- or XF-based design implemented using Field-Programmable Gate Arrays (FPGAs). The digital system implements delay lines and calculates the autocorrelations for each antenna and the cross-correlations for each baseline. The number of lags, and hence spectral channels, produced by the system is a function of the input bandwidth, with the 500 MHz band having the coarsest resolution and the narrowest bandwidths having the finest resolution.
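The lag (XF) architecture described above, in which cross-products are accumulated at a range of relative delays and the lag spectrum is then Fourier transformed into frequency channels, can be sketched as follows. This is a floating-point illustration of the principle, not the FPGA implementation; a crude sign() quantizer stands in for the real 2-bit samplers.

```python
import numpy as np

def xf_correlate(a, b, n_lags):
    """'X' step: accumulate cross-products of two antenna streams at
    2*n_lags relative delays. 'F' step: FFT the lag spectrum, giving
    one spectral channel per lag."""
    lags = np.arange(-n_lags, n_lags)
    lag_spectrum = np.array([np.mean(a * np.roll(b, k)) for k in lags])
    channels = np.fft.fftshift(np.fft.fft(lag_spectrum))
    return lags, lag_spectrum, channels

# Simulated streams: a common tone (period 10 samples) seen by two
# antennas with a 3-sample geometric delay, plus independent noise,
# crudely quantized with sign() in place of the real 2-bit samplers.
rng = np.random.default_rng(1)
n = 4096
tone = np.sin(2 * np.pi * 0.1 * np.arange(n))
a = np.sign(tone + 0.5 * rng.normal(size=n))
b = np.sign(np.roll(tone, 3) + 0.5 * rng.normal(size=n))

lags, lag_spec, chans = xf_correlate(a, b, 32)
# The lag spectrum peaks at the -3 sample delay (and, because the tone
# is periodic, at delays offset by whole tone periods).
```

The sketch also shows why the channel count tracks the number of lags: widening the band at fixed sample rate leaves fewer lags per unit delay, hence the coarser resolution of the 500 MHz mode.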

    On the relationship between sigma models and spin chains

    We consider the two-dimensional O(3) non-linear sigma model with topological term, using a lattice regularization introduced by Shankar and Read [Nucl. Phys. B336 (1990), 457] that is suitable for studying the strong-coupling regime. When this lattice model is quantized, the coefficient θ of the topological term is quantized as θ = 2πs, with s integer or half-integer. We study in detail the relationship between the low-energy behaviour of this theory and the one-dimensional spin-s Heisenberg model. We generalize the analysis to sigma models with other symmetries. Comment: To appear in Int. J. Mod. Phys.

    Public geographies II: being organic

    This second report on ‘public geographies’ considers the diverse, emergent and shifting spaces of engaging with and in public/s. Taking as its focus the more ‘organic’ rather than ‘traditional’ approach to doing public geography, as discussed in the first report, it explores the multiple and unorthodox ways in which engagements across academic-public spheres play out, and what such engagements may mean for geography/ers. The report first explores the role of the internet in ‘enabling conversations’, generating a range of opportunities for public geography through websites, wikis, blogs, file-sharing sites, discussion forums and more, thinking critically about how technologies may enable/disable certain kinds of publicly engaged activities. It then considers issues of process and praxis: how collaborations with groups/communities/organizations beyond academia are often unplanned, serendipitous encounters that evolve organically into research/learning/teaching endeavours, but also how personal politics/positionality bring an agency to bear upon whether we, as academics, follow the leads we may stumble upon. The report concludes with a provocative question: given that many non-academics appear to be doing amazing and inspiring projects and activities (thoughtful, critical and arguably examples of organic public geographies), what then is academia’s role?

    An examination of the genotyping error detection function of SIMWALK2

    This investigation was undertaken to assess the sensitivity and specificity of the genotyping error detection function of the computer program SIMWALK2. We chose to examine chromosome 22, which had 7 microsatellite markers, from a single simulated replicate (330 pedigrees with a pattern of missing genotype data similar to the Framingham families). We created genotype errors at five overall frequencies (0.0, 0.025, 0.050, 0.075, and 0.100) and applied SIMWALK2 to each of these five data sets, in turn assuming that the total error rate specified in the program was at each of these same five levels. In this data set, even with the assumed error rate set as high as 10%, only 50% of the Mendelian-consistent mistypings were found at any level of true errors. Since as many as 70% of the errors detected were false positives, blanking suspect genotypes (at any error probability) will reduce statistical power owing to the concomitant blanking of correctly typed alleles. This work supports the conclusion that allowing for genotyping errors within likelihood calculations during statistical analysis may be preferable to choosing an arbitrary cut-off.
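The headline numbers (roughly 50% sensitivity, with about 70% of flags landing on correctly typed genotypes) reduce to simple set arithmetic. A minimal sketch with made-up counts, not data from the study:

```python
def detection_summary(true_errors, flagged):
    """Sensitivity and false-positive fraction of an error-flagging
    step, given the set of truly mistyped genotypes and the set that
    the detector flagged as suspect."""
    true_errors, flagged = set(true_errors), set(flagged)
    tp = len(true_errors & flagged)             # correctly flagged errors
    sensitivity = tp / len(true_errors)         # share of true errors found
    false_positive_frac = 1 - tp / len(flagged)  # share of flags that are wrong
    return sensitivity, false_positive_frac

# Illustrative counts echoing the abstract: half the true errors found,
# and ~70% of all flags raised on correctly typed genotypes.
truth = set(range(100))                          # 100 true mistypings
flags = set(range(50)) | set(range(1000, 1117))  # 50 hits + 117 false alarms
sens, fpf = detection_summary(truth, flags)
```

The second figure is why blanking every flagged genotype costs power: most of the blanked alleles were in fact typed correctly.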