
    Large-Eddy Simulation closures of passive scalar turbulence: a systematic approach

    The issue of parameterizing small-scale ("subgrid") turbulence is addressed in the context of passive scalar transport. We focus on the Kraichnan advection model, which lends itself to an analytical investigation of the closure problem. We systematically derive the dynamical equations that rule the evolution of the coarse-grained scalar field. At the lowest-order approximation in l/r, l being the characteristic scale of the filter defining the coarse-grained scalar field and r the inertial-range separation, we recover the classical eddy-diffusivity parameterization of small scales. At the next-to-leading order a dynamical closure is obtained. The latter outperforms the classical model and is therefore a natural candidate for subgrid modelling of scalar transport in generic turbulent flows. Comment: 10 LaTeX pages, 1 PS figure. Changes: comments added below previous (3.10); previous (3.16) has been corrected; minor changes in the conclusion.
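    For orientation, a minimal sketch of the lowest-order (eddy-diffusivity) closure the abstract refers to, written in standard notation that is not taken from the paper: the filtered scalar obeys an advection-diffusion equation in which the unresolved flux is absorbed into an enhanced diffusivity.

```latex
% Coarse-grained passive scalar \bar\theta at filter scale \ell.
% The subgrid flux \tau_i = \overline{v_i\theta} - \bar v_i\,\bar\theta is closed,
% at lowest order in \ell/r, by an eddy diffusivity (illustrative notation):
\partial_t \bar\theta + \bar v_i\,\partial_i \bar\theta
  = \partial_i\!\left[(\kappa_0 + \kappa_{\mathrm{eddy}})\,\partial_i \bar\theta\right],
\qquad
\tau_i \simeq -\,\kappa_{\mathrm{eddy}}\,\partial_i \bar\theta .
```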

    The radio spectra of reddened 2MASS QSOs: evidence for young radio jets

    Multifrequency radio continuum observations (1.4-22 GHz) of a sample of reddened QSOs are presented. We find a high incidence (13/16) of radio spectral properties, such as low-frequency turnovers, high-frequency spectral breaks or steep power-law slopes, similar to those observed in powerful compact steep spectrum (CSS) and gigahertz-peaked spectrum (GPS) sources. The radio data are consistent with relatively young radio jets with synchrotron ages <1e6-1e7 yr. This calculation is limited by the lack of high-resolution (milli-arcsec) radio observations. For the one source in the sample for which such data are available, a much younger radio age is determined, <2e3 yr, similar to those of GPS/CSS sources. These findings are consistent with claims that reddened QSOs are young systems captured at the first stages of the growth of their supermassive black holes. They also suggest that expanding radio lobes may be an important feedback mode at the early stages of AGN evolution. Comment: 9 pages, to appear in MNRAS
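    As a rough illustration of how synchrotron ages of this order follow from a spectral break, one commonly quoted spectral-ageing relation can be evaluated as below. The constants and example values are assumptions for illustration, not numbers from the paper.

```python
import math

def synchrotron_age_myr(b_field_uG, nu_break_GHz, z):
    """Commonly used spectral-ageing estimate:
    t_syn [Myr] = 1590 * sqrt(B) / ((B^2 + B_IC^2) * sqrt((1+z) * nu_break)),
    with B in microgauss and nu_break in GHz. B_IC is the magnetic field
    equivalent to the CMB energy density at redshift z."""
    b_ic = 3.25 * (1.0 + z) ** 2  # microgauss
    return 1590.0 * math.sqrt(b_field_uG) / (
        (b_field_uG ** 2 + b_ic ** 2) * math.sqrt((1.0 + z) * nu_break_GHz)
    )

# Illustrative values only: a compact source with a ~100 uG field and a
# spectral break near 10 GHz at z = 0.5 gives an age well below 1 Myr.
print(f"{synchrotron_age_myr(100.0, 10.0, 0.5):.2f} Myr")
```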

    Do fiscal imbalances deteriorate sovereign debt ratings?

    We use the sovereign debt rating estimations of Afonso, Gomes and Rother (2009, 2011) for Fitch, Moody's, and Standard & Poor's to assess to what extent recent fiscal imbalances are being reflected in sovereign debt ratings. Using macroeconomic and fiscal data up to 2010, together with macroeconomic and fiscal projections, we obtain the expected rating for several OECD countries. The answer to the title question is yes, but to a degree that differs across countries: our average model predictions point to a heterogeneous behaviour of rating agencies across countries.
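    A minimal sketch of the kind of exercise the abstract describes: ratings are mapped to a numerical scale, regressed on macroeconomic and fiscal variables, and the fitted model is then fed projected fiscal data to produce an expected rating. The variables, scale, and specification below are illustrative assumptions, not those of Afonso, Gomes and Rother.

```python
import numpy as np

# Illustrative cross-section: numeric rating (e.g. AAA = 17 on a 17-point scale)
# against GDP growth, government debt/GDP and budget balance/GDP (made-up values).
X = np.array([
    [2.1,   60.0, -2.0],
    [0.5,   95.0, -7.5],
    [1.8,   70.0, -3.0],
    [-1.0, 120.0, -9.0],
    [3.0,   40.0, -1.0],
])
y = np.array([17.0, 12.0, 15.0, 9.0, 17.0])  # observed numeric ratings

# Fit a simple linear rating equation by least squares (constant included).
A = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

# Expected rating for a country whose projected fiscal position is
# growth 1.0%, debt 100% of GDP, deficit 8% of GDP (hypothetical projection).
projection = np.array([1.0, 1.0, 100.0, -8.0])
print("expected numeric rating:", projection @ beta)
```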

    The Phoenix Deep Survey: The 1.4 GHz microJansky catalogue

    The initial Phoenix Deep Survey (PDS) observations with the Australia Telescope Compact Array have been supplemented by additional 1.4 GHz observations over the past few years. Here we present details of the construction of a new mosaic image covering an area of 4.56 square degrees, an investigation of the reliability of the source measurements, and the 1.4 GHz source counts for the compiled radio catalogue. The mosaic achieves a 1-sigma rms noise of 12 microJy at its most sensitive, and a homogeneous radio-selected catalogue of over 2000 sources reaching flux densities as faint as 60 microJy has been compiled. The source parameter measurements are found to be consistent with the uncertainties expected from the image noise levels and the Gaussian source-fitting procedure. A radio-selected sample avoids the obscuration complications associated with optically selected samples, and by utilising complementary PDS observations, including multicolour optical, near-infrared and spectroscopic data, this radio catalogue will be used in a detailed investigation of the evolution of star formation spanning the redshift range 0 < z < 1. The homogeneity of the catalogue ensures that a consistent picture of galaxy evolution can be developed over the full cosmologically significant redshift range of interest. The 1.4 GHz mosaic image and the source catalogue are available on the web at http://www.atnf.csiro.au/~ahopkins/phoenix/ or from the authors by request. Comment: 16 pages, 11 figures, 4 tables. Accepted for publication by A
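    For context, the 1.4 GHz source counts mentioned above are conventionally reported as Euclidean-normalised differential counts, dN/dS scaled by S^2.5. A small sketch of that computation from a flux-density catalogue follows; the binning and the single effective survey area are simplified assumptions, not the survey's actual procedure.

```python
import numpy as np

def euclidean_normalised_counts(flux_jy, area_sr, bin_edges_jy):
    """Euclidean-normalised differential source counts S^2.5 dN/dS from a list
    of flux densities (Jy) over a survey area (steradians). A single effective
    area is assumed for every bin, ignoring the varying mosaic sensitivity."""
    counts, edges = np.histogram(flux_jy, bins=bin_edges_jy)
    widths = np.diff(edges)                    # dS per bin (Jy)
    centres = np.sqrt(edges[:-1] * edges[1:])  # geometric bin centres
    dn_ds = counts / (widths * area_sr)        # sources / Jy / sr
    return centres, centres ** 2.5 * dn_ds     # Jy^1.5 / sr

# Toy usage with made-up flux densities above 60 microJy.
rng = np.random.default_rng(0)
fluxes = 6e-5 * (1.0 - rng.random(2000)) ** -0.8
area = 4.56 * (np.pi / 180.0) ** 2             # 4.56 deg^2 in steradians
s, norm_counts = euclidean_normalised_counts(fluxes, area, np.logspace(-4.2, -2, 12))
```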

    Observation of Microlensing towards the Galactic Spiral Arms. EROS II 2 year survey

    We present the analysis of the light curves of 8.5 million stars observed during two seasons by EROS (Experience de Recherche d'Objets Sombres) in the Galactic plane away from the bulge. Three stars have been found that exhibit luminosity variations compatible with gravitational microlensing effects due to unseen objects. The corresponding optical depth, averaged over four directions, is 0.38 (+0.53, -0.15) x 10^{-6}. All three candidates have long Einstein radius crossing times (~70 to 100 days). For one of them, the lack of evidence for a parallax or a source-size effect enabled us to constrain the lens-source geometric configuration. Another candidate displays a modulation of the magnification, which is compatible with the lensing of a binary source. The interpretation of the optical depths inferred from these observations is hindered by the imperfect knowledge of the distance to the target stars. Our measurements are compatible with expectations from simple Galactic models under reasonable assumptions on the target distances. Comment: 11 pages, 13 figures, accepted by A&A in Aug 9
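    For readers unfamiliar with how an optical depth of a few 10^{-7} follows from three events among 8.5 million stars, the standard estimator sums the Einstein radius crossing times of the detected events, weighted by the detection efficiency, over the total exposure. The efficiencies and timescales below are illustrative assumptions, not the EROS numbers.

```python
import math

def optical_depth(t_E_days, efficiency, n_stars, t_obs_days):
    """Standard microlensing optical depth estimator:
    tau = pi / (2 * N_stars * T_obs) * sum_i t_E,i / eps(t_E,i),
    where t_E are Einstein radius crossing times and eps the detection
    efficiency at each timescale."""
    exposure = n_stars * t_obs_days  # star-days
    return math.pi / (2.0 * exposure) * sum(
        t / e for t, e in zip(t_E_days, efficiency)
    )

# Illustrative call: three long events over two observing seasons (~2 years),
# with assumed detection efficiencies of order 15 per cent.
tau = optical_depth(
    t_E_days=[70.0, 85.0, 100.0],
    efficiency=[0.15, 0.15, 0.15],
    n_stars=8.5e6,
    t_obs_days=730.0,
)
print(f"tau ~ {tau:.2e}")  # comes out at a few 1e-7, the order quoted above
```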

    Fast Image Recovery Using Variable Splitting and Constrained Optimization

    We propose a new fast algorithm for solving one of the standard formulations of image restoration and reconstruction: an unconstrained optimization problem whose objective includes an ℓ2 data-fidelity term and a non-smooth regularizer. This formulation accommodates both wavelet-based regularization (with orthogonal or frame-based representations) and total-variation regularization. Our approach uses variable splitting to obtain an equivalent constrained optimization formulation, which is then addressed with an augmented Lagrangian method. The proposed algorithm is an instance of the so-called "alternating direction method of multipliers" (ADMM), for which convergence has been proved. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is faster than the current state-of-the-art methods. Comment: Submitted; 11 pages, 7 figures, 6 tables
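    A minimal sketch of the variable-splitting idea described above, written for a generic problem min_x 0.5*||Ax - y||^2 + lam*||x||_1: the split z = x turns the problem into a constrained one, and ADMM alternates a quadratic x-update, a proximal (soft-thresholding) z-update, and a dual update. This illustrates the technique only; the operator, regularizer, and parameters are assumptions, not the authors' exact algorithm.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1(A, y, lam=0.1, mu=1.0, n_iter=200):
    """ADMM for min_x 0.5*||Ax - y||^2 + lam*||x||_1 via the splitting z = x.
    mu is the augmented Lagrangian (penalty) parameter."""
    m, n = A.shape
    x = np.zeros(n)
    z = np.zeros(n)
    d = np.zeros(n)                  # scaled dual variable
    Aty = A.T @ y
    H = A.T @ A + mu * np.eye(n)     # x-update matrix, formed once
    for _ in range(n_iter):
        x = np.linalg.solve(H, Aty + mu * (z - d))  # quadratic (data) step
        z = soft_threshold(x + d, lam / mu)         # proximal (regularizer) step
        d = d + x - z                               # dual ascent step
    return z

# Toy usage on a random compressive system with a sparse ground truth.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x_true = np.zeros(200)
x_true[rng.choice(200, 10, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = admm_l1(A, y, lam=0.05)
```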

    Scene-adapted plug-and-play algorithm with convergence guarantees

    Recent frameworks, such as the so-called plug-and-play approach, allow us to leverage developments in image denoising to tackle other, more involved, problems in image processing. As the name suggests, state-of-the-art denoisers are plugged into an iterative algorithm that alternates between a denoising step and the inversion of the observation operator. While these tools offer flexibility, the convergence of the resulting algorithm may be difficult to analyse. In this paper, we plug a state-of-the-art denoiser, based on a Gaussian mixture model, into the iterations of an alternating direction method of multipliers and prove that the algorithm is guaranteed to converge. Moreover, we build upon the concept of scene-adapted priors, learning a model targeted to the specific scene being imaged, and apply the proposed method to the hyperspectral sharpening problem.
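    Relative to the plain ADMM sketch above, the plug-and-play variant simply replaces the proximal step of the regularizer with a call to an off-the-shelf denoiser. A schematic of that substitution, with a hypothetical denoise() callable standing in for the Gaussian-mixture denoiser; the structure is generic, not the paper's exact algorithm.

```python
import numpy as np

def pnp_admm(A, y, denoise, mu=1.0, n_iter=50):
    """Plug-and-play ADMM sketch: the z-update is a black-box denoiser call
    instead of an explicit proximal operator. `denoise(v, sigma)` is any
    user-supplied denoiser mapping a noisy estimate v to a cleaned one."""
    m, n = A.shape
    x = np.zeros(n)
    z = np.zeros(n)
    d = np.zeros(n)
    H = A.T @ A + mu * np.eye(n)
    Aty = A.T @ y
    for _ in range(n_iter):
        x = np.linalg.solve(H, Aty + mu * (z - d))   # data-fidelity step
        z = denoise(x + d, sigma=np.sqrt(1.0 / mu))  # "prior" step: plugged-in denoiser
        d = d + x - z                                # dual update
    return z
```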

    Saving phase: Injectivity and stability for phase retrieval

    Recent advances in convex optimization have led to new strides in the phase retrieval problem over finite-dimensional vector spaces. However, certain fundamental questions remain: What sorts of measurement vectors uniquely determine every signal up to a global phase factor, and how many are needed to do so? Furthermore, which measurement ensembles lend stability? This paper presents several results that address each of these questions. We begin by characterizing injectivity, and we identify that the complement property is indeed a necessary condition in the complex case. We then pose a conjecture that 4M-4 generic measurement vectors are both necessary and sufficient for injectivity in M dimensions, and we prove this conjecture in the special cases where M = 2, 3. Next, we shift our attention to stability, in both the worst and average cases. Here, we characterize worst-case stability in the real case by introducing a numerical version of the complement property. This new property bears some resemblance to the restricted isometry property of compressed sensing and can be used to derive a sharp lower Lipschitz bound on the intensity measurement mapping. Localized frames are shown to lack this property (suggesting instability), whereas Gaussian random measurements are shown to satisfy this property with high probability. We conclude by presenting results that use a stochastic noise model in both the real and complex cases, and we leverage Cramér-Rao lower bounds to identify stability with stronger versions of the injectivity characterizations. Comment: 22 pages
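    As a reference point for the injectivity discussion, the intensity measurement map and the complement property can be stated as follows; the notation is standard and not necessarily the paper's.

```latex
% Intensity measurements of a signal x from vectors \varphi_1,\dots,\varphi_N:
\mathcal{A}(x) = \bigl(\,|\langle x,\varphi_n\rangle|^2\,\bigr)_{n=1}^{N}.

% Complement property: for every subset S \subseteq \{1,\dots,N\}, either
% \{\varphi_n\}_{n\in S} or \{\varphi_n\}_{n\in S^c} spans the signal space.
% In the real case this is equivalent to injectivity of \mathcal{A} modulo a
% global sign; in the complex case it is a necessary condition (modulo a
% global phase), as discussed in the abstract above.
```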