
    A submillimetre survey of the star-formation history of radio galaxies

    We present the results of the first major systematic submillimetre survey of radio galaxies spanning the redshift range 1 < z < 5. The primary aim of this work is to elucidate the star-formation history of this sub-class of elliptical galaxies by tracing the cosmological evolution of dust mass. Using SCUBA on the JCMT we have obtained 850-micron photometry of 47 radio galaxies to a consistent rms depth of 1 mJy, and have detected dust emission in 14 cases. The radio galaxy targets have been selected from a series of low-frequency radio surveys of increasing depth (3CRR, 6CE, etc.), in order to allow us to separate the effects of increasing redshift and increasing radio power on submillimetre luminosity. Although the dynamic range of our study is inevitably small, we find clear evidence that the typical submillimetre luminosity (and hence dust mass) of a powerful radio galaxy is a strongly increasing function of redshift; the detection rate rises from ~15 per cent at z < 2.5 to ~75 per cent at z > 2.5, and the average submillimetre luminosity rises as (1+z)^3 out to z ~ 4. Moreover, our extensive sample allows us to argue that this behaviour is not driven by underlying correlations with other radio galaxy properties such as radio power, radio spectral index, or radio source size/age. Although radio selection may introduce other more subtle biases, the redshift distribution of our detected objects is in fact consistent with the most recent estimates of the redshift distribution of comparably bright submillimetre sources discovered in blank-field surveys. The evolution of submillimetre luminosity found here for radio galaxies may thus be representative of massive ellipticals in general. Comment: 31 pages - 10 figures in main text, 3 pages of figures in appendix. This revised version has been re-structured, but the analysis and conclusions have not changed. Accepted for publication in MNRAS.
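
    As a back-of-envelope illustration of the quoted (1+z)^3 scaling (plain arithmetic, not code from the paper), the implied ratio of average submillimetre luminosity between a z = 4 and a z = 1 radio galaxy is about 16:

```python
# Back-of-envelope check of the quoted L ~ (1+z)^3 scaling (not from the paper).
def luminosity_ratio(z_low, z_high):
    """Implied ratio of average submillimetre luminosity, assuming L ~ (1+z)^3."""
    return ((1 + z_high) / (1 + z_low)) ** 3

print(luminosity_ratio(1.0, 4.0))  # (5/2)^3 = 15.625
```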

    Towards the automated reduction and calibration of SCUBA data from the James Clerk Maxwell Telescope

    The Submillimetre Common-User Bolometer Array (SCUBA) instrument has been operating on the James Clerk Maxwell Telescope (JCMT) since 1997. The data archive is now sufficiently large that it can be used to investigate instrumental properties and the variability of astronomical sources. This paper describes the automated calibration and reduction scheme used to process the archive data, with particular emphasis on `jiggle-map' observations of compact sources. We demonstrate the validity of our automated approach at both 850 and 450 microns and apply it to several of the JCMT secondary flux calibrators. We determine light curves for the variable sources IRC+10216 and OH231.8. This automation is made possible by using ORAC-DR, a flexible and extensible data reduction pipeline that is used on UKIRT and the JCMT. Comment: 9 pages, 8 figures, accepted for publication in Monthly Notices of the Royal Astronomical Society.
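
    The sketch below illustrates the kind of per-night flux calibration such a pipeline automates for jiggle-map photometry; the function names, data layout and calibrator flux are hypothetical stand-ins, not the ORAC-DR API:

```python
# Hypothetical sketch of automated flux calibration for jiggle-map photometry.
# Function names, data layout and the calibrator flux are illustrative only;
# this is not the ORAC-DR API.
def calibrate_night(calibrator_obs, target_obs, known_flux_jy):
    """Derive a nightly flux conversion factor (FCF) from a standard
    calibrator and apply it to target observations (signal in volts)."""
    fcf = known_flux_jy / calibrator_obs["peak_signal_volts"]  # Jy per volt
    return [{"name": obs["name"],
             "flux_jy": obs["peak_signal_volts"] * fcf}
            for obs in target_obs]

# Example with an assumed 850-micron flux for a secondary calibrator.
cal = {"name": "CRL618", "peak_signal_volts": 0.82}
targets = [{"name": "IRC+10216", "peak_signal_volts": 5.1}]
print(calibrate_night(cal, targets, known_flux_jy=4.56))
```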

    Chelator free gallium-68 radiolabelling of silica coated iron oxide nanorods via surface interactions

    The commercial availability of combined magnetic resonance imaging (MRI)/positron emission tomography (PET) scanners for clinical use has increased demand for easily prepared agents which offer signal or contrast in both modalities. Herein we describe a new class of silica-coated iron oxide nanorods (NRs) coated with polyethylene glycol (PEG) and/or a tetraazamacrocyclic chelator (DO3A). Studies of the coated NRs validate their composition and confirm their properties as in vivo T₂ MRI contrast agents. Radiolabelling studies with the positron-emitting radioisotope gallium-68 (t₁/₂ = 68 min) demonstrate that, in the presence of the silica coating, the macrocyclic chelator is not required for the preparation of highly stable radiometal-NR constructs. In vivo PET-CT and MR imaging studies show the expected high liver uptake of gallium-68-radiolabelled nanorods with no significant release of gallium-68 ions, validating this simple, chelator-free method for labelling iron oxide NRs with a radiometal, which can be used for high-sensitivity liver imaging.
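
    For context on why rapid, chelator-free labelling matters for this isotope, the standard decay arithmetic below (not from the paper) shows how quickly gallium-68 activity is lost given t₁/₂ = 68 min:

```python
import math

# Standard radioactive-decay arithmetic (not from the paper): fraction of
# gallium-68 activity remaining after a delay, given t1/2 = 68 min.
T_HALF_MIN = 68.0
decay_const = math.log(2) / T_HALF_MIN  # lambda, per minute

def fraction_remaining(t_min):
    return math.exp(-decay_const * t_min)

# After a 60-minute preparation-plus-uptake period, only ~54% of the starting
# activity remains, which is why fast labelling chemistry is attractive.
print(f"{fraction_remaining(60):.2f}")
```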

    Effects of climate-induced changes in isoprene emissions after the eruption of Mount Pinatubo

    In the 1990s the rates of increase of greenhouse gas concentrations, most notably of methane, were observed to change, for reasons that have yet to be fully determined. This period included the eruption of Mt. Pinatubo and an El Niño warm event, both of which affect biogeochemical processes through changes in temperature, precipitation and radiation. We examine the impact of these changes in climate on global isoprene emissions and the effect these climate-dependent emissions have on the hydroxyl radical, OH, the dominant sink for methane. We model a reduction of isoprene emissions in the early 1990s, with a maximum decrease of 40 Tg(C)/yr in late 1992 and early 1993, a change of 9%. This reduction is caused by the cooler, drier conditions following the eruption of Mt. Pinatubo. Isoprene emissions are reduced both directly, by changes in temperature and a soil-moisture-dependent suppression factor, and indirectly, through reductions in the total biomass. The reduction in isoprene emissions causes increases in tropospheric OH which lead to an increased sink for methane of up to 5 Tg(CH4)/yr, comparable to estimated source changes over the time period studied. Many uncertainties remain in the emission and oxidation of isoprene which may affect the exact size of this effect, but its magnitude is large enough that it should remain important.
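
    A quick consistency check of the quoted figures (plain arithmetic, not model code): a 40 Tg(C)/yr decrease amounting to a 9% change implies a global isoprene source of roughly 440 Tg(C)/yr:

```python
# Consistency check of the quoted numbers (simple arithmetic, not model code).
reduction_tg_c = 40.0     # peak decrease in isoprene emissions, Tg(C)/yr
fractional_change = 0.09  # quoted 9% change

baseline = reduction_tg_c / fractional_change
print(f"Implied global isoprene source: ~{baseline:.0f} Tg(C)/yr")  # ~444
```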

    Edge Detection by Adaptive Splitting II. The Three-Dimensional Case

    In Llanas and Lantarón, J. Sci. Comput. 46, 485–518 (2011) we proposed an algorithm (EDAS-d) to approximate the jump discontinuity set of functions defined on subsets of ℝ^d. This procedure is based on adaptive splitting of the domain of the function, guided by the value of an average integral. The earlier study was limited to the 1D and 2D versions of the algorithm. In this paper we address the three-dimensional problem. We prove an integral inequality (in the case d = 3) which constitutes the basis of EDAS-3. We have performed detailed computational experiments demonstrating effective edge detection in 3D function models with different interface topologies. The appealing properties of EDAS-1 and EDAS-2 carry over to the 3D case.
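
    A minimal one-dimensional sketch of the adaptive-splitting strategy may help fix ideas: intervals whose two halves have clearly different average values are bisected recursively until the discontinuity is localised. This illustrates the general idea only, not the authors' EDAS-3 algorithm or its 3D integral inequality:

```python
import numpy as np

# Simplified 1D sketch of the adaptive-splitting idea behind EDAS:
# recursively bisect intervals whose average-value jump indicator is large,
# homing in on discontinuities. Illustrative only; not the EDAS-3 algorithm.
def detect_jumps(f, a, b, tol=0.1, min_width=1e-4):
    mid = 0.5 * (a + b)
    # Coarse sample averages on each half (a stand-in for the average
    # integrals that guide EDAS).
    left = np.mean(f(np.linspace(a, mid, 16)))
    right = np.mean(f(np.linspace(mid, b, 16)))
    if abs(right - left) < tol:   # no evidence of a jump in this interval
        return []
    if b - a < min_width:         # interval small enough: report an edge
        return [mid]
    return (detect_jumps(f, a, mid, tol, min_width) +
            detect_jumps(f, mid, b, tol, min_width))

step = lambda x: np.where(x < 0.3, 0.0, 1.0)  # jump at x = 0.3
print(detect_jumps(step, 0.0, 1.0))           # ~[0.3]
```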

    Use of non-Gaussian time-of-flight kernels for image reconstruction of Monte Carlo simulated data of ultra-fast PET scanners

    Introduction: Time-of-flight (TOF) positron emission tomography (PET) scanners can provide significant benefits by improving the noise properties of reconstructed images. In order to achieve this, the timing response of the scanner needs to be modelled as part of the reconstruction process. This is currently achieved using Gaussian TOF kernels. However, the timing measurements do not necessarily follow a Gaussian distribution. At ultra-fast timing resolutions, the depth of interaction of the γ-photon and the photon travel spread (PTS) in the crystal volume become increasingly significant factors for the timing performance. The PTS of a single photon can be better approximated by a truncated exponential distribution. Therefore, we computed the corresponding TOF kernel as a modified Laplace distribution for long crystals. The obtained coincidence timing response (CTR) kernels could be more appropriate for modelling the joint probability of the two in-coincidence γ-photons. In this paper, we investigate the impact of using a CTR kernel vs. Gaussian kernels in TOF reconstruction using Monte Carlo generated data. Materials and methods: The geometry and physics of a PET scanner with two timing configurations, (a) an idealised timing resolution, in which only the PTS contributed to the CTR, and (b) a range of ultra-fast timings, were simulated. In order to assess the role of the crystal thickness, different crystal lengths were considered. The evaluation took place in terms of the Kullback–Leibler (K-L) distance between the proposed model and the simulated timing response, contrast recovery coefficient (CRC) and spatial resolution. The reconstructions were performed using the STIR image reconstruction toolbox. Results: Results for the idealised scanner showed that the CTR kernel was in excellent agreement with the simulated time differences; in terms of K-L distance it outperformed a fitted normal distribution for all tested crystal sizes. For the ultra-fast configurations, a convolution kernel between the CTR and a Gaussian showed the best agreement with the simulated data below 40 ps timing resolution. In terms of CRC, the CTR kernel demonstrated improvements, with values up to 3.8% better CRC for the thickest crystal. In terms of spatial resolution, evaluated at the 60th iteration, the use of the CTR kernel showed a modest improvement in peak-to-valley ratios of up to 1% for the 10-mm crystal, while for larger crystals a clear trend was not observed. In addition, we showed that edge artefacts can appear in the reconstructed images when the timing kernel used for the reconstruction is not carefully optimised; further iterations can help suppress these edge artefacts.
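
    The sketch below illustrates the modelling idea of replacing a Gaussian TOF kernel with a Laplace-type one and scoring the mismatch with a K-L distance; the kernel widths are arbitrary, and the paper's exact modified-Laplace form and fitted parameters are not reproduced:

```python
import numpy as np

# Illustrative comparison of a Laplace-type TOF kernel with a Gaussian one,
# scored by a (discretised) Kullback-Leibler distance. A sketch of the
# modelling idea only; not the paper's exact modified-Laplace kernel.
t = np.linspace(-200e-12, 200e-12, 2001)  # time differences, seconds
dt = t[1] - t[0]

def gaussian(t, fwhm):
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
    g = np.exp(-0.5 * (t / sigma) ** 2)
    return g / (g.sum() * dt)             # normalise to a density

def laplace(t, b):
    l = np.exp(-np.abs(t) / b)
    return l / (l.sum() * dt)

def kl_distance(p, q):
    mask = (p > 0) & (q > 0)
    return np.sum(p[mask] * np.log(p[mask] / q[mask])) * dt

p = laplace(t, b=20e-12)      # stand-in for a photon-travel-spread kernel
q = gaussian(t, fwhm=40e-12)  # conventional Gaussian TOF kernel
print(f"K-L(laplace || gaussian) = {kl_distance(p, q):.3f}")
```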

    Quasi-Newton methods for atmospheric chemistry simulations: implementation in UKCA UM vn10.8

    A key and expensive part of coupled atmospheric chemistry–climate model simulations is the integration of gas-phase chemistry, which involves dozens of species and hundreds of reactions. These species and reactions form a highly coupled network of differential equations (DEs). There exist orders of magnitude variability in the lifetimes of the different species present in the atmosphere, and so solving these DEs to obtain robust numerical solutions poses a stiff problem. With newer models having more species and increased complexity, it is becoming increasingly important to have chemistry-solving schemes that reduce runtime but maintain accuracy. While a sound way to handle stiff systems is to use implicit DE solvers, the computational costs for such solvers are high due to internal iterative algorithms (e.g. Newton–Raphson methods). Here, we propose an approach for implicit DE solvers that improves their convergence speed and robustness with relatively small modification to the code. We achieve this by blending the existing Newton–Raphson (NR) method with quasi-Newton (QN) methods, whereby the QN routine is called only on selected iterations of the solver. We test our approach with numerical experiments on the UK Chemistry and Aerosol (UKCA) model, part of the UK Met Office Unified Model suite, run in both an idealised box-model environment and under realistic 3-D atmospheric conditions. The box-model tests reveal that the proposed method reduces the time spent in the solver routines significantly, with each QN call costing 27% of a call to the full NR routine. A series of experiments over a range of chemical environments was conducted with the box model to find the optimal iteration steps at which to call the QN routine, such that the total number of NR iterations is reduced as much as possible whilst minimising the chance of instabilities and maintaining solver accuracy. The 3-D simulations show that our moderate modification, by means of a blended method for the chemistry solver, speeds up the chemistry routines by around 13%, resulting in a net improvement in overall runtime of the full model of approximately 3% with negligible loss in accuracy. The blended QN method also improves the robustness of the solver, reducing the number of grid cells which fail to converge after 50 iterations by 40%. The relative differences in chemical concentrations between the control run and that using the blended QN method are of order ∼10⁻⁷ for longer-lived species, such as ozone, and below the threshold for solver convergence (10⁻⁴) almost everywhere for shorter-lived species such as the hydroxyl radical.
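
    A minimal sketch of the blending idea, under the assumption that QN steps simply reuse the last evaluated Jacobian while NR steps refresh it; the solver, function names and toy system below are illustrative, not UKCA/UM code:

```python
import numpy as np

# Sketch of blending Newton-Raphson (NR) with quasi-Newton (QN) iterations:
# full NR steps re-evaluate the Jacobian, while on selected iterations a
# cheaper QN step reuses the previous (stale) Jacobian. Illustrative only.
def blended_newton(f, jac, x0, qn_iters=(2, 4), tol=1e-10, max_iter=50):
    x = x0.copy()
    J = jac(x)
    for k in range(max_iter):
        r = f(x)
        if np.linalg.norm(r) < tol:
            return x, k
        if k not in qn_iters:            # full NR step: refresh the Jacobian
            J = jac(x)
        x = x - np.linalg.solve(J, r)    # QN steps reuse the stale J
    raise RuntimeError("no convergence")

# Toy two-species nonlinear system (hypothetical, for demonstration only).
f = lambda x: np.array([1e4 * (x[1] - x[0] ** 2), x[0] + x[1] - 2.0])
jac = lambda x: np.array([[-2e4 * x[0], 1e4], [1.0, 1.0]])
x, iters = blended_newton(f, jac, np.array([0.5, 1.5]))
print(x, iters)  # converges to (1, 1)
```

    Skipping the Jacobian evaluation and factorisation on the QN iterations is where the savings come from; the trade-off, as the abstract notes, is choosing the QN iterations so that stale-Jacobian steps do not destabilise the solve.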