388 research outputs found
No temperature fluctuations in the giant HII region H 1013
While collisionally excited lines in HII regions allow one to easily probe
the chemical composition of the interstellar medium in galaxies, the possible
presence of significant temperature fluctuations casts doubt on the derived
abundances. To provide new insights into this question, we have carried out a
detailed study of a giant HII region, H 1013, located in the galaxy M101, for
which many observational data exist and which has been claimed to harbour
temperature fluctuations at a level of t^2 = 0.03-0.06. We have first
complemented the already available optical observational datasets with a
mid-infrared spectrum obtained with the Spitzer Space Telescope. Combined with
optical data, this spectrum provides unprecedented information on the
temperature structure of this giant HII region. A preliminary analysis based on
empirical temperature diagnostics suggests that temperature fluctuations should
be quite weak. We have then performed a detailed modelling using the pyCloudy
package based on the photoionization code Cloudy. We have been able to produce
photoionization models constrained by the observed Hb surface brightness
distribution and by the known properties of the ionizing stellar population
that can account for most of the line ratios within their uncertainties. Since
the observational constraints are both strong and numerous, this argues against
the presence of significant temperature fluctuations in H 1013. The oxygen
abundance of our best model is 12 + log O/H = 8.57, as opposed to the values of
8.73 and 8.93 advocated by Esteban et al. (2009) and Bresolin (2007),
respectively, based on the significant temperature fluctuations they derived.
However, our model is not able to reproduce the intensities of the oxygen
recombination lines. This cannot be attributed to observational uncertainties
and requires an explanation other than temperature fluctuations. Comment: accepted in Astronomy & Astrophysics
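The t^2 parameter quoted above follows Peimbert's temperature-fluctuation formalism: T0 is the emission-weighted mean temperature over the nebular volume and t^2 its normalized variance. A minimal numerical sketch, using invented toy cell values rather than actual H 1013 data:

```python
# Peimbert's temperature-fluctuation formalism, discretized over
# toy nebular cells. Temperatures (K) and weights (n_e * n_ion * dV)
# are illustrative values, not measurements of H 1013.

def t_squared(temperatures, weights):
    """Return (T0, t2): the weighted mean temperature and the
    normalized variance t^2 = <(T - T0)^2> / T0^2."""
    wsum = sum(weights)
    t0 = sum(t * w for t, w in zip(temperatures, weights)) / wsum
    var = sum((t - t0) ** 2 * w
              for t, w in zip(temperatures, weights)) / wsum
    return t0, var / t0 ** 2

# Toy nebula: most cells near 8000 K plus a small hot pocket.
temps = [7800.0, 8000.0, 8200.0, 9500.0]
weights = [1.0, 2.0, 1.0, 0.2]
t0, t2 = t_squared(temps, weights)
```

Even the hot pocket in this toy example yields t^2 of only ~0.002, an order of magnitude below the 0.03-0.06 levels previously claimed for H 1013.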
Photoionization models of the CALIFA HII regions. I. Hybrid models
Photoionization models of HII regions require as input a description of the
ionizing SED and of the gas distribution, in terms of ionization parameter U
and chemical abundances (e.g. O/H and N/O). A strong degeneracy exists between
the hardness of the SED and U, which in turn leads to high uncertainties in the
determination of the other parameters, including abundances. One way to resolve
the degeneracy is to fix one of the parameters using additional information.
For each of the ~ 20000 sources of the CALIFA HII regions catalog, a grid of
photoionization models is computed assuming that the ionizing SED is described by
the underlying stellar population obtained from spectral synthesis modeling.
The ionizing SED is then defined as the sum of various stellar bursts of
different ages and metallicities. This solves the degeneracy between the shape
of the ionizing SED and U. The nebular metallicity (associated with O/H) is
defined using the classical strong-line method O3N2 (which gives our models
their "hybrid" status). The remaining free parameters are the abundance ratio
N/O and the ionization parameter U, which are determined by searching for the
model that fits [NII]/Ha and [OIII]/Hb. The models are also selected to fit
[OII]/Hb. This process leads to a set of ~ 3200 models that reproduce
simultaneously the three observations.
We find that the regions associated with young stellar bursts leak ionizing
photons, with a median escape fraction of 80%. The set of photoionization
models satisfactorily reproduces the electron
temperature derived from the [OIII]4363/5007 line ratio. We determine new
relations between the ionization parameter U and the [OII]/[OIII] or
[SII]/[SIII] line ratios. New relations between N/O and O/H and between U and
O/H are also determined.
All the models are publicly available on the 3MdB database. Comment: Accepted for publication in A&A
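The strong-line step that fixes the nebular metallicity can be illustrated with the widely used Pettini & Pagel (2004) O3N2 calibration; this is a generic sketch of the method, not necessarily the exact calibration adopted for the CALIFA grid:

```python
import math

def o3n2_metallicity(oiii_hb, nii_ha):
    """12 + log(O/H) from the O3N2 index (Pettini & Pagel 2004):
    O3N2 = log10((OIII 5007 / Hb) / (NII 6584 / Ha)),
    12 + log(O/H) = 8.73 - 0.32 * O3N2 (valid for -1 < O3N2 < 1.9)."""
    o3n2 = math.log10(oiii_hb / nii_ha)
    return 8.73 - 0.32 * o3n2

# Example region: [OIII]/Hb = 3.0, [NII]/Ha = 0.1 -> O3N2 ~ 1.48
oh = o3n2_metallicity(3.0, 0.1)
```

With the metallicity pinned this way, only N/O and U remain free when fitting the observed [NII]/Ha and [OIII]/Hb ratios.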
Three-dimensional photoionization modelling of the planetary nebula NGC 3918
The three-dimensional Monte Carlo photoionization code Mocassin has been applied to construct a realistic model of the planetary nebula NGC 3918. Three different geometric models were tried. The effects of the interaction of the diffuse fields coming from two adjacent regions of different densities were investigated and found to be non-negligible, even for the relatively uncomplicated case of a biconical geometry; the ionization structure of low-ionization species near the boundaries is particularly affected. All three models provided acceptable matches to the integrated nebular optical and ultraviolet spectrum. Large discrepancies were found between all of the model predictions of infrared fine-structure line fluxes and the ISO SWS measurements; this is largely due to an offset of ~14 arcsec from the centre of the nebula that affected all of the ISO observations of NGC 3918. For each model, we also produced projected emission-line maps and position-velocity diagrams from synthetic long-slit spectra, which could be compared to recent HST images and ground-based long-slit echelle spectra. This comparison showed that the spindle-like model B provided the best match to the observations. We have therefore shown that although the integrated emission-line spectrum of NGC 3918 can be reproduced by all three of the three-dimensional models investigated in this work, the capability of creating projected emission-line maps and position-velocity diagrams from synthetic long-slit spectra was crucial in allowing us to constrain the structure of this object.
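The core of a Monte Carlo photoionization code such as Mocassin is the propagation of photon packets, whose distance to the next interaction is drawn by inverting the exponential attenuation law. A toy sketch with constant opacity (an illustration of the sampling principle, not Mocassin's actual interface):

```python
import math
import random

def sample_path_length(kappa, rng):
    """Distance to the next interaction for a photon packet in a
    medium of constant opacity kappa (per unit length): draw an
    optical depth tau = -ln(1 - xi), xi uniform in [0, 1), then
    convert it to a physical path length tau / kappa."""
    tau = -math.log(1.0 - rng.random())
    return tau / kappa

rng = random.Random(42)
kappa = 2.0  # toy opacity
paths = [sample_path_length(kappa, rng) for _ in range(100_000)]
mean_path = sum(paths) / len(paths)  # converges to 1/kappa = 0.5
```

In a full 3-D code the opacity varies cell by cell, so the sampled optical depth is accumulated along the packet's ray through the density grid rather than divided by a single kappa.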
Karyotype of the striped skunk, Mephitis mephitis
On the computation of interstellar extinction in photoionized nebulae
Ueta & Otsuka (2021) proposed a method, the "Proper Plasma Analysis
Practice" (PPAP), to analyze spectroscopic data of ionized nebulae. The method is
based on a coherent and simultaneous determination of the reddening correction
and physical conditions in the nebulae. The same authors (Ueta & Otsuka 2022,
UO22) reanalyzed the results of Galera-Rosillo et al. (2022, GR22) on nine of
the brightest planetary nebulae in M31. They claim that, if standard values of
the physical conditions are used to compute the extinction instead of their
proposed method, the extinction correction is underestimated by more than 50% and
hence, ionic and elemental abundance determinations, especially the N/O ratio,
are incorrect. Several tests were performed to assess the accuracy of the
results of GR22, when determining: i) the extinction coefficient, ii) the
electron temperature and density, and iii) the ionic abundances. In the latter
case, the N+/H+ ionic abundance was recalculated using both Halpha and Hbeta as
the reference H I emissivity. The analysis shows that the errors introduced by
GR22's adoption of standard values for the plasma conditions are small, within
their quoted uncertainties. On the other hand, the interstellar extinction in
UO22 is found to be overestimated for five of the nine nebulae considered. This
propagates into their analysis of the properties of the nebulae and their
progenitors. The Python notebook used to generate all the results presented in
this paper is publicly available in a GitHub repository. The results from GR22
are shown to be valid, and the conclusions of the paper hold firmly. Although the
PPAP is, in principle, a recommended practice, we insist that it is equally
important to critically assess which H I lines are to be included in the
determination of the interstellar extinction coefficient, and to assert that
physical results are obtained for the dereddened line ratios. Comment: Accepted for publication in A&A Letters
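The extinction determination at issue can be sketched with the standard Balmer-decrement estimate of c(Hbeta). The convention below (curve normalized to f(Hbeta) = 0, a typical f(Halpha) ~ -0.32, case B intrinsic ratio 2.86) is one common choice; normalizations and curve values differ between papers:

```python
import math

def c_hbeta(ha_hb_obs, ha_hb_int=2.86, f_ha=-0.32):
    """Logarithmic extinction at Hbeta from the Balmer decrement,
    assuming I(lam)/I(Hb) = F(lam)/F(Hb) * 10**(c * f(lam)) with
    f(Hb) = 0; f_ha ~ -0.32 is a typical reddening-curve value."""
    return math.log10(ha_hb_obs / ha_hb_int) / (-f_ha)

def deredden(ratio_obs, c, f_lam):
    """Extinction-correct an observed line ratio (relative to Hbeta)."""
    return ratio_obs * 10.0 ** (c * f_lam)

c = c_hbeta(3.5)  # observed Ha/Hb = 3.5 -> c(Hb) ~ 0.27
```

The sensitivity of c(Hbeta) to the assumed intrinsic ratio (which varies with electron temperature and density) is precisely why the choice of plasma conditions, and of which H I lines to include, matters in this debate.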
Optimization of Planck/LFI on--board data handling
To assess stability against 1/f noise, the Low Frequency Instrument (LFI)
onboard the Planck mission will acquire data at a rate much higher than the
data rate allowed by its telemetry bandwidth of 35.5 kbps. The data are
processed by an onboard pipeline, followed on the ground by a reversing step.
This paper illustrates the LFI scientific onboard processing used to fit the
allowed data rate. This is a lossy process tuned using a set of 5 parameters
(Naver, r1, r2, q, O) for each of the 44 LFI detectors. The paper quantifies the level
of distortion introduced by the onboard processing, EpsilonQ, as a function of
these parameters. It describes the method of optimizing the onboard processing
chain. The tuning procedure is based on an optimization algorithm applied to
unprocessed and uncompressed raw data provided by simulations, by pre-launch
tests, or by LFI operating in diagnostic mode. All the needed
optimization steps are performed by an automated tool, OCA2, which ends with
optimized parameters and produces a set of statistical indicators, among them
the compression rate Cr and EpsilonQ. For Planck/LFI the requirements are Cr =
2.4 and EpsilonQ <= 10% of the rms of the instrumental white noise. To speed up
the process an analytical model is developed that is able to extract most of
the relevant information on EpsilonQ and Cr as a function of the signal
statistics and the processing parameters. This model will be of interest for
the instrument data analysis. The method was applied during ground tests when
the instrument was operating in conditions representative of flight. Optimized
parameters were obtained and the performance was verified: the required
data rate of 35.5 kbps was achieved while keeping EpsilonQ at a level of
3.8% of the white-noise rms, well within the requirements. Comment: 51 pages, 13 figures, 3 tables, pdflatex, needs JINST.csl, graphicx,
txfonts, rotating; Issue 1.0, 10 Nov 2009; Sub. to JINST 23 Jun 09, Accepted
10 Nov 09, Pub.: 29 Dec 09; This is a preprint, not the final version
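The role of the quantization step q in the distortion EpsilonQ can be illustrated with the classical result that uniform quantization with step q adds noise of rms q/sqrt(12). A toy sketch (the real onboard chain with Naver averaging and r1, r2 mixing is simplified away here):

```python
import math
import random

def quantize(samples, q, offset=0.0):
    """Round samples onto a grid of step q shifted by offset,
    mimicking a lossy (value - O) / q requantization step."""
    return [round((s - offset) / q) * q + offset for s in samples]

rng = random.Random(1)
sigma = 1.0  # white-noise rms
data = [rng.gauss(0.0, sigma) for _ in range(200_000)]
deq = quantize(data, q=0.3)
err = [a - b for a, b in zip(data, deq)]
err_rms = math.sqrt(sum(e * e for e in err) / len(err))
# expected: q / sqrt(12) ~ 0.087, i.e. ~8.7% of the white-noise rms
```

This is the trade the tuning explores: a larger q improves the compression rate Cr but pushes EpsilonQ toward the 10% requirement.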
Copper-enriched diamond-like carbon coatings promote regeneration at the bone-implant interface
There have been several attempts to design innovative biomaterials as surface coatings to enhance the biological performance of biomedical implants. The objective of this study was to design a multifunctional Cu/a-C:H thin coating deposited on the Ti-6Al-4V alloy (TC4) via magnetron sputtering in the presence of Ar and CH4 for applications in bone implants. The impact of Cu content and sp2/sp3 ratio on the internal stress, corrosion behavior, mechanical properties, tribological performance, and biocompatibility of the resulting biomaterial was discussed. X-ray photoelectron spectroscopy (XPS) revealed that the sp2/sp3 ratio of the coating increased for samples with higher Cu contents. The internal stress of the Cu/a-C:H thin bio-films decreased with increasing Cu content and sp2/sp3 ratio. By contrast, Young's modulus, the H3/E2 ratio, and hardness exhibited no significant change with increasing Cu content and sp2/sp3 ratio. However, there was an optimum Cu content (36.8 wt.%) and sp2/sp3 ratio (4.7) at which a Cu/a-C:H coating with higher hardness and better tribological properties could be obtained. Electrochemical impedance spectroscopy results showed a significant improvement in the corrosion resistance of the Ti-6Al-4V alloy after deposition of the Cu/a-C:H thin coating at an optimum Ar/CH4 ratio. Furthermore, Cu/a-C:H thin coatings with higher Cu contents showed better antibacterial properties and higher angiogenesis and osteogenesis activities. The coated samples inhibited bacterial growth compared to the uncoated sample (p < 0.05). Such a coating composition can stimulate angiogenesis and osteogenesis and control the host response, thereby increasing the success rate of implants. Moreover, Cu/a-C:H thin films encouraged the development of blood vessels on the surface of the titanium alloy, with the density of blood vessels increasing with the Cu content of the films.
It is speculated that such a coating can be a promising candidate for enhancing osseointegration.
Off-line radiometric analysis of Planck/LFI data
The Planck Low Frequency Instrument (LFI) is an array of 22
pseudo-correlation radiometers on-board the Planck satellite to measure
temperature and polarization anisotropies in the Cosmic Microwave Background
(CMB) in three frequency bands (30, 44 and 70 GHz). To calibrate and verify the
performances of the LFI, a software suite named LIFE has been developed. Its
aims are to provide a common platform to use for analyzing the results of the
tests performed on the single components of the instrument (RCAs, Radiometric
Chain Assemblies) and on the integrated Radiometric Array Assembly (RAA).
Moreover, its analysis tools are designed to be used during flight as well,
to produce periodic reports on the status of the instrument. The LIFE suite has
been developed using a multi-layered, cross-platform approach. It implements a
number of analysis modules written in RSI IDL, each accessing the data through
a portable and heavily optimized library of functions written in C and C++. One
of the most important features of LIFE is its ability to run the same data
analysis codes both using ground test data and real flight data as input. The
LIFE software suite has been successfully used during the RCA/RAA tests and the
Planck Integrated System Tests. Moreover, the software has also passed the
verification for its in-flight use during the System Operations Verification
Tests, held in October 2008. Comment: Planck LFI technical papers published by JINST:
http://www.iop.org/EJ/journal/-page=extra.proc5/1748-022