
    Observational Prospects for Afterglows of Short Duration Gamma-ray Bursts

    If the efficiency for producing γ-rays is the same in short duration (≲ 2 s) Gamma-Ray Bursts (GRBs) as in long duration GRBs, then the average kinetic energy of short GRBs must be ∼20 times less than that of long GRBs. Assuming further that the relativistic shocks in short and long duration GRBs have similar parameters, we show that the afterglows of short GRBs will be on average 10--40 times dimmer than those of long GRBs. We find that the afterglow of a typical short GRB will be below the detection limit (≲ 10 μJy) of searches at radio frequencies. The afterglow would also be difficult to observe in the optical, where we predict R ≳ 23 a few hours after the burst. The radio and optical afterglow would be even fainter if short GRBs occur in a low-density medium, as expected in NS-NS and NS-BH merger models. The best prospects for detecting short-GRB afterglows are with early (≲ 1 day) observations in X-rays. Comment: 5 pages, 2 figures, submitted to ApJ Letters
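
    The quoted 10--40 dimming factor follows from standard synchrotron afterglow scalings. As a hedged sketch (the conventional slow-cooling blast-wave relations, not equations quoted from the paper itself), the flux at fixed observer time scales with the kinetic energy E as

        F_\nu \propto
        \begin{cases}
          E^{(p+3)/4}\, n^{1/2} & \nu_m < \nu < \nu_c \\
          E^{(p+2)/4}           & \nu > \nu_c
        \end{cases}

    so reducing E by a factor of ∼20 at fixed microphysics (p ≈ 2) dims the afterglow by roughly 20^{1} ≈ 20 above the cooling frequency and 20^{5/4} ≈ 40 below it, consistent with the quoted 10--40 range.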

    The Rapidly Fading Optical Afterglow of GRB 980519

    GRB 980519 had the most rapidly fading of the well-documented GRB afterglows, its flux consistent with t^{-2.05 +/- 0.04} in BVRI as well as in X-rays during the two days over which observations were made. We report VRI observations from the MDM 1.3m and WIYN 3.5m telescopes, and we synthesize an optical spectrum from all of the available photometry. The optical spectrum alone is well fitted by a power law of the form nu^{-1.20 +/- 0.25}, with some of the uncertainty due to the significant Galactic reddening in this direction. The optical and X-ray spectra together are adequately fitted by a single power law nu^{-1.05 +/- 0.10}. This combination of steep temporal decay and flat broad-band spectrum places a severe strain on the simplest afterglow models involving spherical blast waves in a homogeneous medium. Instead, the rapid observed temporal decay is more consistent with models of expansion into a medium of density n(r) proportional to r^{-2}, or with predictions for the evolution of a jet after it slows down and spreads laterally. The jet model would relax the energy requirements on some of the more extreme GRBs, of which GRB 980519 is likely an example because of its large gamma-ray fluence and faint host galaxy. Comment: 13 pages, submitted to ApJ Letters
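
    The "severe strain" can be made explicit with the standard closure relations between the temporal index α and spectral index β, writing F_ν ∝ t^{-α} ν^{-β}. A hedged sketch using the conventional relations (not equations quoted from the paper), evaluated at the joint spectral slope β ≈ 1.05:

        \alpha = 3\beta/2        \approx 1.6   % spherical blast wave, homogeneous medium, \nu_m < \nu < \nu_c
        \alpha = (3\beta + 1)/2  \approx 2.1   % wind-like medium, n(r) \propto r^{-2}, \nu_m < \nu < \nu_c
        \alpha = 2\beta          \approx 2.1   % laterally spreading jet after the break, \nu > \nu_c

    The homogeneous-medium prediction falls well short of the observed α ≈ 2.05, while the wind and post-break jet cases match it; this is the comparison the abstract summarizes.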

    Registration of retinal images from Public Health by minimising an error between vessels using an affine model with radial distortions

    In order to estimate a registration model of eye fundus images composed of an affinity and two radial distortions, we introduce an estimation criterion based on an error between the vessels. In [1], we estimated this model by minimising the error between characteristic points. In this paper, the detected vessels are selected using the circle and ellipse equations of the overlap-area boundaries deduced from our model. Our method successfully registers 96% of the 271 pairs in a Public Health dataset acquired mostly with different cameras. This is better than our previous method [1] and better than three other state-of-the-art methods. On a publicly available dataset, our method also registers the images better than the reference method.
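
    As an illustration of the model being fitted, the sketch below composes one radial distortion per image with an affinity and minimises a squared vessel error by least squares. It is a minimal reconstruction under stated assumptions (matched vessel points given, one-parameter distortion per camera); the function and parameter names are illustrative, not the authors' code.

    import numpy as np
    from scipy.optimize import least_squares

    def radial(pts, k, centre):
        """One-parameter radial distortion about an image centre: r -> r(1 + k r^2)."""
        d = pts - centre
        r2 = np.sum(d**2, axis=1, keepdims=True)
        return centre + d * (1.0 + k * r2)

    def transform(params, pts, c1, c2):
        """Undistort in image 1 (approximately), apply the affinity, redistort in image 2."""
        a11, a12, a21, a22, tx, ty, k1, k2 = params
        p = radial(pts, -k1, c1)                  # approximate inverse distortion
        A = np.array([[a11, a12], [a21, a22]])
        p = p @ A.T + np.array([tx, ty])          # affine part
        return radial(p, k2, c2)

    def residuals(params, src, dst, c1, c2):
        # Vessel error: distances between mapped vessel points of image 1
        # and their counterparts in image 2 (correspondences assumed given here).
        return (transform(params, src, c1, c2) - dst).ravel()

    def register(src, dst, c1, c2):
        """src, dst: (N, 2) matched vessel points; c1, c2: image centres."""
        x0 = np.array([1, 0, 0, 1, 0, 0, 0, 0], dtype=float)  # identity start
        return least_squares(residuals, x0, args=(src, dst, c1, c2)).x

    In the paper the vessel points are themselves selected using the overlap-area boundaries deduced from the model; here the correspondences are simply assumed.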

    A microgravity isolation mount

    The design and preliminary testing of a system for isolating microgravity-sensitive payloads from spacecraft vibrational and impulsive disturbances are discussed. The Microgravity Isolation Mount (MGIM) concept consists of a platform which floats almost freely within a limited volume inside the spacecraft, but which is constrained to follow the spacecraft in the long term by means of very weak springs. The springs are realized magnetically and form part of a six-degree-of-freedom active magnetic suspension system, which operates without any physical contact between the spacecraft and the platform itself. Power and data transfer is also performed by contactless means. Specifications are given for the expected level of input disturbances and the tolerable level of platform acceleration. The structural configuration of the mount is discussed, and the designs of the principal elements, i.e., actuators, sensors, control loops, and power/data transfer devices, are described. Finally, the construction of a hardware model that is being used to verify the predicted performance of the MGIM is described.
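
    A single-axis sketch helps quantify why a very weak spring isolates the platform: vibration above the suspension's deliberately low natural frequency is attenuated roughly as (f_n/f)^2. The numbers below are illustrative placeholders, not the specification values given in the text.

    import numpy as np

    f_n = 0.01   # natural frequency of the "very weak spring" [Hz] (illustrative)
    zeta = 0.1   # damping ratio provided by the active control loops (illustrative)
    w_n = 2 * np.pi * f_n

    def transmissibility(f):
        """|platform acceleration / spacecraft acceleration| for base excitation."""
        r = 2 * np.pi * f / w_n
        return np.sqrt((1 + (2 * zeta * r) ** 2) /
                       ((1 - r ** 2) ** 2 + (2 * zeta * r) ** 2))

    for f in (0.001, 0.01, 0.1, 1.0, 10.0):
        print(f"{f:7.3f} Hz : T = {transmissibility(f):.2e}")

    With these placeholder values, disturbances at 1 Hz reach the platform attenuated by roughly three orders of magnitude, which is the qualitative behaviour the mount is designed to exploit.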

    Cross Pixel Optical Flow Similarity for Self-Supervised Learning

    We propose a novel method for learning convolutional neural image representations without manual supervision. We use motion cues in the form of optical flow to supervise representations of static images. The obvious approach of training a network to predict flow from a single image can be needlessly difficult due to intrinsic ambiguities in this prediction task. We instead propose a much simpler learning goal: embed pixels such that the similarity between their embeddings matches that between their optical flow vectors. At test time, the learned deep network can be used without access to video or flow information and transferred to tasks such as image classification, detection, and segmentation. Our method, which significantly simplifies previous attempts at using motion for self-supervision, achieves state-of-the-art results in self-supervision using motion cues, competitive results for self-supervision in general, and is overall state of the art in self-supervised pretraining for semantic image segmentation, as demonstrated on standard benchmarks.
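
    The learning goal lends itself to a compact loss. The sketch below matches softmax-normalised pairwise similarities of pixel embeddings to those of the corresponding flow vectors; it is one plausible instantiation of the stated objective under assumed conventions (similarity kernels, temperature), not necessarily the authors' exact formulation.

    import torch
    import torch.nn.functional as F

    def cross_pixel_flow_loss(emb, flow, temperature=0.1):
        """emb:  (N, D) embeddings of N sampled pixels from one image
           flow: (N, 2) optical flow vectors at the same pixels"""
        s_emb = emb @ emb.t() / temperature              # embedding similarities
        s_flow = -torch.cdist(flow, flow) / temperature  # flow similarities (neg. distance)
        p_target = F.softmax(s_flow, dim=1)              # target distribution from flow
        log_p_pred = F.log_softmax(s_emb, dim=1)         # prediction from embeddings
        return -(p_target * log_p_pred).sum(dim=1).mean()  # per-pixel cross-entropy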

    Objects that Sound

    In this paper our objectives are, first, networks that can embed audio and visual inputs into a common space that is suitable for cross-modal retrieval; and second, a network that can localize the object that sounds in an image, given the audio signal. We achieve both these objectives by training from unlabelled video using only audio-visual correspondence (AVC) as the objective function. This is a form of cross-modal self-supervision from video. To this end, we design new network architectures that can be trained for cross-modal retrieval and for localizing the sound source in an image, using the AVC task. We make the following contributions: (i) show that audio and visual embeddings can be learnt that enable both within-mode (e.g. audio-to-audio) and between-mode retrieval; (ii) explore various architectures for the AVC task, including those for the visual stream that ingest a single image, or multiple images, or a single image and multi-frame optical flow; (iii) show that the semantic object that sounds within an image can be localized (using only the sound, no motion or flow information); and (iv) give a cautionary tale on how to avoid undesirable shortcuts in the data preparation. Comment: Appears in: European Conference on Computer Vision (ECCV) 2018
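
    As a sketch of the AVC objective: a vision encoder and an audio encoder map a frame and a short audio clip into a shared space, and a binary classifier on the embedding distance predicts whether the pair comes from the same video. The encoders below are trivial placeholders, not the architectures designed in the paper.

    import torch
    import torch.nn as nn

    class AVCNet(nn.Module):
        def __init__(self, dim=128):
            super().__init__()
            self.vision = nn.Sequential(nn.Flatten(), nn.LazyLinear(dim))  # placeholder encoder
            self.audio = nn.Sequential(nn.Flatten(), nn.LazyLinear(dim))   # placeholder encoder
            self.classify = nn.Linear(1, 2)  # correspond / not, from the distance alone

        def forward(self, image, spectrogram):
            v = nn.functional.normalize(self.vision(image), dim=1)
            a = nn.functional.normalize(self.audio(spectrogram), dim=1)
            dist = (v - a).norm(dim=1, keepdim=True)  # distance in the shared space
            return self.classify(dist)                # AVC logits

    # Positives pair a frame with audio from the same moment of the same video;
    # negatives pair it with audio drawn from a different video.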

    Alfalfa Cultivar Yield Test for South Dakota: 2000 Report

    The South Dakota Alfalfa Cultivar Yield Test reports relative forage production characteristics for available cultivars at several locations in South Dakota. Cultivars are entered in the test by seed companies and public breeders at their own discretion. A list of cultivars and companies is in Table 8 at the end of this circular.