
    Application of the inhomogeneous Lippmann-Schwinger equation to inverse scattering problems

    In this paper we present a hybrid approach to the numerical solution of two-dimensional electromagnetic inverse scattering problems, in which the unknown scatterer is hosted by a possibly inhomogeneous background. The approach is 'hybrid' in that it merges a qualitative and a quantitative method to optimize the way the a priori information on the background is exploited within the inversion procedure, thus improving the quality of the reconstruction and reducing the amount of data necessary for a satisfactory result. In the qualitative step, this a priori knowledge is used to implement the linear sampling method in its near-field formulation for an inhomogeneous background, in order to identify the region where the scatterer is located. The same a priori information is also encoded in the quantitative step by extending and applying the contrast source inversion method to what we call the 'inhomogeneous Lippmann-Schwinger equation': the latter is a generalization of the classical Lippmann-Schwinger equation to the case of an inhomogeneous background, and in our paper it is deduced from the differential formulation of the direct scattering problem in order to provide the reconstruction algorithm with an appropriate theoretical basis. The point values of the refractive index are then computed only in the region identified by the linear sampling method in the previous step. The effectiveness of this hybrid approach is supported by the numerical simulations presented at the end of the paper. Comment: accepted in SIAM Journal on Applied Mathematics.
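    For orientation, the classical Lippmann-Schwinger equation that the paper generalizes reads u = u_inc + k^2 * integral G(x,y) m(y) u(y) dy, with contrast m = n^2 - 1. A minimal sketch of its direct discretization in one dimension (illustrative wavenumber, grid and contrast, not the paper's inhomogeneous-background formulation) is:

```python
import numpy as np

# Classical 1-D Lippmann-Schwinger equation, discretized on a uniform
# grid: u = u_inc + k^2 * sum_y G(x, y) m(y) u(y) h, with contrast
# m = n^2 - 1. All parameters below are illustrative.

k = 2.0 * np.pi          # wavenumber
N = 200                  # grid points
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]

# Compact scatterer: refractive-index contrast on [0.4, 0.6]
m = np.where(np.abs(x - 0.5) < 0.1, 0.3, 0.0)

# 1-D Helmholtz Green's function G(x, y) = i/(2k) * exp(i k |x - y|)
G = (1j / (2.0 * k)) * np.exp(1j * k * np.abs(x[:, None] - x[None, :]))

u_inc = np.exp(1j * k * x)                       # incident plane wave
A = np.eye(N) - k**2 * G * m[None, :] * h        # discretized LS operator
u = np.linalg.solve(A, u_inc)                    # total field
```

    With zero contrast the system matrix reduces to the identity and the total field coincides with the incident field, which is a quick sanity check on the discretization.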

    Solar hard X-ray imaging by means of Compressed Sensing and Finite Isotropic Wavelet Transform

    This paper shows that compressed sensing, realized by means of regularized deconvolution and the Finite Isotropic Wavelet Transform, is effective and reliable in hard X-ray solar imaging. The method utilizes the Finite Isotropic Wavelet Transform with the Meyer function as the mother wavelet. Compressed sensing is then realized by optimizing a sparsity-promoting regularized objective function by means of the Fast Iterative Shrinkage-Thresholding Algorithm, and the regularization parameter is selected by means of the Miller criterion. The method is applied to both synthetic data mimicking Spectrometer/Telescope for Imaging X-rays (STIX) measurements and experimental observations provided by the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI). The performance of the method is compared with the results provided by standard visibility-based reconstruction methods. The results show that the application of the sparsity constraint and the use of a continuous, isotropic framework for the wavelet transform provide notable spatial accuracy and significantly reduce the ringing effects due to the instrument point spread functions.
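    The core of the approach is minimizing a sparsity-promoting l1-regularized least-squares objective with FISTA. A generic sketch of that algorithm (the operator, data and regularization parameter below are placeholders, not the actual visibility-based imaging operator) might look like:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam, n_iter=200):
    """Minimise 0.5*||A x - b||^2 + lam*||x||_1 with FISTA."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)           # gradient of the smooth term at y
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x

# When A is the identity, FISTA converges to the soft-thresholded data:
x_hat = fista(np.eye(4), np.array([1.0, -0.05, 0.5, -2.0]), lam=0.1)
```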

    Fast spectral fitting of hard X-ray bremsstrahlung from truncated power-law electron spectra

    Context: Hard X-ray bremsstrahlung continuum spectra, such as those from solar flares, are commonly described in terms of power-law fits, either to the photon spectra themselves or to the electron spectra responsible for them. In applications, various approximate relations between electron and photon spectral indices are often used at energies both above and below electron low-energy cutoffs. Aims: We examine the form of the exact relationships in various situations and for various cross-sections, showing that the empirical relations sometimes used can be highly misleading, especially at energies below the low-energy cutoff, and we consider how to improve fitting procedures. Methods: We obtain expressions for photon spectra from single, double and truncated power-law electron spectra for a variety of cross-sections and for the thin- and thick-target models, together with simple analytic expressions for the non-relativistic Bethe-Heitler case. Results: We show that below the low-energy cutoff the Kramers and other constant-spectral-index forms commonly used are very poor approximations to the accurate results, whereas our analytical forms are a good match; above a low-energy cutoff, the Kramers and non-relativistic Bethe-Heitler results match the accurate results reasonably well up to energies around 100 keV. Conclusions: Analytical forms of the non-relativistic Bethe-Heitler photon spectra from general power-law electron spectra are a good match to exact results for both thin and thick targets, and they enable much faster spectral fitting than evaluation of the full spectral integrations.
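    The index relations discussed above can be checked numerically for the simplest case: for a thin-target power-law electron flux F(E) ~ E^(-delta) with the Kramers cross-section, the photon index tends to delta + 1 above the cutoff and to 1 below it. A sketch with illustrative parameters:

```python
import numpy as np

# Thin-target photon spectrum from a power-law electron flux
# F(E) ~ E^(-delta) above a low-energy cutoff E_c, using the Kramers
# cross-section Q(eps, E) ~ 1/(eps * E). All values are illustrative.

delta, E_c, E_max = 4.0, 20.0, 1.0e4   # electron index, cutoff, upper limit [keV]

def photon_flux(eps, n_grid=4000):
    """I(eps) = integral over E of F(E) * Q(eps, E), trapezoid rule."""
    E = np.geomspace(max(eps, E_c), E_max, n_grid)
    integrand = E ** (-delta) / (eps * E)
    return 0.5 * np.sum((integrand[1:] + integrand[:-1]) * np.diff(E))

# Local photon spectral index gamma = -d ln I / d ln eps above the cutoff
eps1, eps2 = 40.0, 44.0
gamma = -np.log(photon_flux(eps2) / photon_flux(eps1)) / np.log(eps2 / eps1)
# For the Kramers thin target, gamma approaches delta + 1 above the cutoff
```

    Below the cutoff the integration range no longer depends on eps, so I(eps) scales as 1/eps and the local photon index flattens to 1, illustrating why a constant-index extrapolation fails there.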

    Feature augmentation for the inversion of the Fourier transform with limited data

    We investigate an interpolation/extrapolation method that, given scattered observations of the Fourier transform, approximates its inverse. The interpolation algorithm takes advantage of modeling the available data via a shape-driven interpolation based on variably scaled kernels (VSKs), whose implementation is here tailored to inverse problems. The so-constructed interpolants are used as inputs for a standard iterative inversion scheme. After providing theoretical results concerning the spectrum of the VSK collocation matrix, we test the method on astrophysical imaging benchmarks.
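    The VSK idea is to evaluate a standard radial kernel on augmented points (x, psi(x)), where psi is a shape-driven scale function. A minimal one-dimensional sketch (the kernel, psi and the target function below are hypothetical choices for illustration, not the paper's setup):

```python
import numpy as np

# Variably scaled kernel (VSK) interpolation: a radial kernel is applied
# to points augmented with a scale function psi, i.e. in R^(d+1).

def gaussian(r, shape=5.0):
    return np.exp(-(shape * r) ** 2)

def vsk_matrix(x, centers, psi):
    X = np.column_stack([x, psi(x)])              # augmented evaluation points
    C = np.column_stack([centers, psi(centers)])  # augmented centers
    r = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)
    return gaussian(r)

psi = lambda t: 0.5 * np.tanh(5.0 * t)   # hypothetical: encodes a kink at 0
target = lambda t: np.sign(t) * t ** 2   # function with a derivative jump

nodes = np.linspace(-1.0, 1.0, 15)
coeffs = np.linalg.solve(vsk_matrix(nodes, nodes, psi), target(nodes))
x_eval = np.linspace(-1.0, 1.0, 201)
vsk_interp = vsk_matrix(x_eval, nodes, psi) @ coeffs
```

    Choosing psi so that it mirrors known features of the target (here, the kink location) is what makes the interpolation shape-driven.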

    Determination of the Acceleration Region Size in a Loop-structured Solar Flare

    In order to study the acceleration and propagation of bremsstrahlung-producing electrons in solar flares, we analyze the evolution of the flare loop size with respect to energy at a variety of times. A GOES M3.7 loop-structured flare starting around 23:55 on 2002 April 14 is studied in detail using \textit{Ramaty High Energy Solar Spectroscopic Imager} (\textit{RHESSI}) observations. We construct photon and mean-electron-flux maps in 2-keV energy bins by processing observationally deduced photon and electron visibilities, respectively, through several image-processing methods: a visibility-based forward-fit (FWD) algorithm, a maximum entropy (MEM) procedure and the uv-smooth (UVS) approach. We estimate the sizes of elongated flares (i.e., the length and width of flaring loops) by calculating the second normalized moments of the intensity in any given map. Employing a collisional model with an extended acceleration region, we fit the loop lengths as a function of energy in both the photon and electron domains. The resulting fitting parameters allow us to estimate the extent of the acceleration region, which is between $\sim 13$ arcsec and $\sim 19$ arcsec. Both the forward-fit and uv-smooth algorithms provide substantially similar results, with a systematically better fit in the electron domain. The consistency of the estimates from these methods provides strong support that the model can reliably determine geometric parameters of the acceleration region. The acceleration region is estimated to be a substantial fraction ($\sim 1/2$) of the loop extent, indicating that this dense flaring loop incorporates both acceleration and transport of electrons, with concurrent thick-target bremsstrahlung emission. Comment: 8 pages, 5 figures, accepted to Astronomy and Astrophysics journal.
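    Estimating a source's length and width from the second normalized moments of an intensity map can be sketched on a synthetic anisotropic Gaussian source (the sigmas and grid below are illustrative, not the RHESSI maps):

```python
import numpy as np

# Length/width of an elongated source from second normalized moments:
# build the intensity-weighted covariance matrix and take the square
# roots of its eigenvalues as the principal standard deviations.

def moments_size(img, x, y):
    """Return (sigma_major, sigma_minor) of an intensity map."""
    X, Y = np.meshgrid(x, y)
    w = img / img.sum()                        # normalized intensity weights
    cx, cy = (w * X).sum(), (w * Y).sum()      # centroid
    cov = np.array([
        [(w * (X - cx) ** 2).sum(), (w * (X - cx) * (Y - cy)).sum()],
        [(w * (X - cx) * (Y - cy)).sum(), (w * (Y - cy) ** 2).sum()],
    ])
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    return np.sqrt(evals)

# Synthetic loop: anisotropic Gaussian with sigma = (3, 1) arcsec
x = y = np.linspace(-10.0, 10.0, 201)
X, Y = np.meshgrid(x, y)
img = np.exp(-(X**2 / (2 * 3.0**2) + Y**2 / (2 * 1.0**2)))
sig_maj, sig_min = moments_size(img, x, y)
```

    Working with eigenvalues of the covariance matrix rather than the raw x/y moments makes the estimate independent of the loop's orientation on the map.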

    Count-based imaging model for the Spectrometer/Telescope for Imaging X-rays (STIX) in Solar Orbiter

    The Spectrometer/Telescope for Imaging X-rays (STIX) will study solar flares across the hard X-ray window provided by the Solar Orbiter mission. Similarly to the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI), STIX is a visibility-based imaging instrument, which will require Fourier-based image reconstruction methods. However, in this paper we show that, as for RHESSI, count-based imaging is also possible for STIX. Specifically, we introduce and illustrate a mathematical model that mimics the STIX data formation process as a projection of the incoming photon flux onto a vector of 120 count components. We then test the reliability of expectation maximization for image reconstruction in the case of several simulated configurations typical of flare morphology.
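    Expectation maximization for count data is typically realized with the multiplicative ML-EM (Richardson-Lucy type) update. In this sketch the imaging operator is a hypothetical random nonnegative matrix standing in for the 120-component STIX response, and the data are noiseless:

```python
import numpy as np

# ML-EM iteration for a count model c = H f, with H nonnegative:
# f <- f * (H^T (c / (H f))) / (H^T 1). Operator and source are
# placeholders, not the actual STIX count formation model.

rng = np.random.default_rng(0)
n_pix, n_counts = 64, 120
H = rng.uniform(0.0, 1.0, size=(n_counts, n_pix))

f_true = np.zeros(n_pix)
f_true[20:25] = 10.0                   # compact flaring source
c = H @ f_true                         # noiseless expected counts

f = np.ones(n_pix)                     # strictly positive initial guess
sens = H.sum(axis=0)                   # sensitivity term H^T 1
for _ in range(500):
    f *= (H.T @ (c / (H @ f))) / sens  # multiplicative EM update
```

    The multiplicative form keeps the iterates nonnegative automatically, which is the main reason EM is attractive for photon-counting instruments.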

    Culverted rivers in the historic center of Genoa (Italy) as an emblematic case of human pressure and fluvial landscape changes

    The city of Genoa is internationally known for its recurrent floods, mainly related to the Bisagno River. The high risk is linked to the meteo-hydrological hazard and to urbanisation in hazardous areas, and consequently to the high exposure of the elements at risk. The present research concerns the hydrographic network that characterises the historic centre of Genoa, i.e. the natural amphitheatre bordered by the Polcevera valley to the W and the Bisagno valley to the E. In this area of just 8.5 km2 there are eight catchments ranging from 0.49 km2 to 2.36 km2 in size: from W to E we recognise the basins of the San Bartolomeo, San Lazzaro, San Teodoro, Lagaccio, Sant'Ugo, Carbonara, Sant'Anna and Torbido streams. These watercourses have been subject to anthropic modifications since the Middle Ages, sometimes with significant diversions, rectifications and channelling; today the network appears almost entirely artificial, flowing under the streets and buildings of the historic centre. The names of some alleys recall their presence, which is otherwise not perceptible. Only the upper basins of the Lagaccio and San Lazzaro streams still have a watercourse with a natural riverbed, although the area is significantly urbanised. The construction of these culverts and the modifications they have undergone over the following centuries, up to very recent times, due to progressive urbanisation have led to a reduction in the hydraulic cross-section, which can result in flow under pressure and a consequent flooding hazard. Therefore, better geographic knowledge of these culverted streams in the historic city of Genoa is crucial for hazard and risk assessments and for the planning of related hydraulic risk reduction activities.

    Highly Automated Dipole EStimation (HADES)

    Automatic estimation of current dipoles from biomagnetic data is still a problematic task. This is due not only to the ill-posedness of the inverse problem but also to two intrinsic difficulties introduced by the dipolar model: the unknown number of sources and the nonlinear relationship between the source locations and the data. Recently, we have developed a new Bayesian approach, particle filtering, based on dynamical tracking of the dipole constellation. Contrary to many dipole-based methods, particle filtering does not assume stationarity of the source configuration: the number of dipoles and their positions are estimated and updated dynamically during the course of the MEG sequence. We have now developed a Matlab-based graphical user interface that allows non-expert users to perform automatic dipole estimation from MEG data with particle filtering. In the present paper, we describe the main features of the software and show the analysis of both a synthetic and an experimental data set.
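    The kind of bootstrap particle filter underlying this approach can be sketched on a one-dimensional toy tracking problem; the real method tracks a whole dipole constellation in 3-D, and every parameter here is illustrative:

```python
import numpy as np

# Bootstrap particle filter: predict with the dynamics, weight by the
# likelihood of the observation, estimate, then resample. A 1-D random
# walk stands in for a time-varying dipole parameter.

rng = np.random.default_rng(1)
n_steps, n_particles = 50, 2000
sigma_proc, sigma_obs = 0.1, 0.2

truth = np.cumsum(rng.normal(0.0, sigma_proc, n_steps))   # hidden state
obs = truth + rng.normal(0.0, sigma_obs, n_steps)         # noisy data

particles = rng.normal(0.0, 1.0, n_particles)
estimates = []
for z in obs:
    particles += rng.normal(0.0, sigma_proc, n_particles)  # predict
    w = np.exp(-0.5 * ((z - particles) / sigma_obs) ** 2)  # likelihood weights
    w /= w.sum()
    estimates.append(np.dot(w, particles))                 # posterior mean
    idx = rng.choice(n_particles, n_particles, p=w)        # resample
    particles = particles[idx]
estimates = np.asarray(estimates)
```

    Because the state evolves at every step, the filter naturally handles non-stationary source configurations, which is the property the abstract emphasizes.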

    Numerical and experimental study on metamaterials featuring acoustical and thermal properties

    Metamaterials can be defined as materials which, owing to their peculiar composition or structure, exhibit characteristics not normally found in nature. "Multifunctional" metamaterials could be used to optimise different characteristics at the same time. In this paper the authors apply them to the thermal and acoustic optimization of external building walls. Thermal optimization consists in obtaining a low transmittance, important in winter, and a low periodic thermal transmittance, important in summer. Acoustic optimization consists in obtaining a high sound transmission loss, to comply with legal requirements, and, where possible, a good sound absorption coefficient. In this way it should be possible to enhance comfort conditions in buildings and to reduce the energy demand for winter heating and summer cooling. The proposed solution consists of several layers with different suitable characteristics; the sequence of the layers has been chosen with particular care. The thermal analysis has been performed by means of a self-developed code based on the ISO 13786 standard. The acoustic behaviour of the single layers has been determined following the procedure given by the ASTM E2611-09 standard, using a four-microphone impedance tube, and the transfer matrix method has been used for the complete assembly. This preliminary combined study showed encouraging results.
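    The transfer matrix method used for the complete assembly chains per-layer 2x2 matrices by multiplication. A normal-incidence sketch with hypothetical lossless equivalent-fluid layers (not the measured ASTM E2611-09 data) is:

```python
import numpy as np

# Transfer-matrix method for a multilayer wall: each fluid-equivalent
# layer contributes a 2x2 matrix, and the assembly matrix is their
# product. Layer data below are hypothetical, for illustration only.

def layer_matrix(thickness, rho, c, f):
    k = 2.0 * np.pi * f / c           # wavenumber in the layer
    Z = rho * c                       # characteristic impedance
    return np.array([
        [np.cos(k * thickness), 1j * Z * np.sin(k * thickness)],
        [1j * np.sin(k * thickness) / Z, np.cos(k * thickness)],
    ])

f = 1000.0                            # frequency [Hz]
rho0, c0 = 1.21, 343.0                # air
Z0 = rho0 * c0

# Hypothetical 3-layer assembly: (thickness m, density kg/m^3, speed m/s)
layers = [(0.012, 700.0, 2500.0), (0.05, 40.0, 340.0), (0.012, 700.0, 2500.0)]

T = np.eye(2, dtype=complex)
for d, rho, c in layers:
    T = T @ layer_matrix(d, rho, c, f)

# Normal-incidence transmission coefficient and transmission loss [dB]
t = 2.0 / (T[0, 0] + T[0, 1] / Z0 + T[1, 0] * Z0 + T[1, 1])
TL = -20.0 * np.log10(np.abs(t))
```

    A useful sanity check is that a single layer of air, whose impedance matches the surrounding medium, yields a transmission loss of zero.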