
    Infrared Helium-Hydrogen Line Ratios as a Measure of Stellar Effective Temperature

    We have observed a large sample of compact planetary nebulae in the near-infrared to determine how the 2^1P-2^1S HeI line at 2.058um varies as a function of stellar effective temperature, Teff. The ratio of this line to HI Br gamma at 2.166um has often been used as a measure of the highest Teff present in a stellar cluster, and hence as a test of whether there is a cut-off at high masses in the stellar initial mass function. However, recent photoionisation modelling has revealed that the behaviour of this line is more complex than previously anticipated. Our work shows that in most respects the photoionisation models are correct. In particular, we confirm the weakening of the 2^1P-2^1S line as Teff increases beyond 40000K. However, in many cases the models underpredict the observed ratio when we consider the detailed physical conditions in the individual planetary nebulae. Furthermore, there is evidence of significant 2^1P-2^1S HeI line emission even in planetary nebulae with very hot (Teff>100000K) central stars. It is clear from our work that this ratio cannot be considered a reliable measure of effective temperature on its own. Comment: 24 pages, 11 figures (in 62 separate postscript files). Accepted for publication in Monthly Notices of the Royal Astronomical Society.

    Near Infrared Spectra of Compact Planetary Nebulae

    This paper continues our study of the behaviour of near infrared helium recombination lines in planetary nebulae. We find that the 1.7007um 4^3D-3^3P HeI line is a good measure of the HeI recombination rate, since it varies smoothly with the effective temperature of the central star. We were unable to reproduce the observed data using detailed photoionisation models at both low and high effective temperatures, but plausible explanations for the discrepancy exist in both regimes. We therefore conclude that this line could be used as an indicator of effective temperature in obscured nebulae. We also characterised the nature of the molecular hydrogen emission present in a smaller subset of our sample. The results are consistent with previous data indicating that ultraviolet excitation, rather than shocks, is the main cause of the molecular hydrogen emission in planetary nebulae. Comment: Accepted for publication in MNRAS.

    IR Dust Bubbles: Probing the Detailed Structure and Young Massive Stellar Populations of Galactic HII Regions

    We present an analysis of wind-blown, parsec-sized, mid-infrared bubbles and associated star formation using the GLIMPSE/IRAC, MIPSGAL/MIPS and MAGPIS/VLA surveys. Three bubbles from the Churchwell et al. (2006) catalog were selected. The relative distributions of the ionized gas (based on 20 cm emission), PAH emission (based on 8 um and 5.8 um emission and the lack of 4.5 um emission) and hot dust (24 um emission) are compared. At the center of each bubble there is a region containing ionized gas and hot dust, surrounded by PAHs. We identify the likely source(s) of the stellar wind and ionizing flux producing each bubble based upon SED fitting to numerical hot stellar photosphere models. Candidate YSOs are also identified using SED fitting, including several sites of possible triggered star formation. Comment: 37 pages, 17 figures.

    On partial order semantics for SAT/SMT-based symbolic encodings of weak memory concurrency

    Concurrent systems are notoriously difficult to analyze, and technological advances such as weak memory architectures greatly compound this problem. This has renewed interest in partial order semantics as a theoretical foundation for formal verification techniques. Among these, symbolic techniques have been shown to be particularly effective at finding concurrency-related bugs because they can leverage highly optimized decision procedures such as SAT/SMT solvers. This paper gives new fundamental results on partial order semantics for SAT/SMT-based symbolic encodings of weak memory concurrency. In particular, we give the theoretical basis for a decision procedure that can handle a fragment of concurrent programs endowed with least fixed point operators. In addition, we show that a certain partial order semantics of relaxed sequential consistency is equivalent to the conjunction of three extensively studied weak memory axioms by Alglave et al. An important consequence of this equivalence is an asymptotically smaller symbolic encoding for bounded model checking, which has only a quadratic number of partial order constraints compared to the state-of-the-art cubic-size encoding. Comment: 15 pages, 3 figures.
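
    The quadratic-versus-cubic gap in encoding size can be illustrated by simply counting constraints. The sketch below is a generic illustration, not the paper's actual encoding: the function names and the `ord(i, j)` ordering variables are assumptions made for the example. It contrasts one Boolean ordering variable per unordered pair of events against the full set of transitivity clauses a naive encoding would need.

```python
from itertools import combinations

def pairwise_order_vars(n):
    """Quadratic layer: one Boolean ordering variable per unordered
    pair of events; antisymmetry is implicit in sharing one variable
    between ord(i, j) and ord(j, i)."""
    return [("ord", i, j) for i, j in combinations(range(n), 2)]

def transitivity_clauses(n):
    """Cubic layer: for every triple of distinct events (i, j, k),
    the clause ord(i, j) AND ord(j, k) -> ord(i, k)."""
    return [(i, j, k)
            for i in range(n) for j in range(n) for k in range(n)
            if i != j and j != k and i != k]

# The pairwise layer grows as n*(n-1)/2, the transitivity layer
# as n*(n-1)*(n-2), which dominates the encoding for large n.
for n in (4, 8, 16):
    print(n, len(pairwise_order_vars(n)), len(transitivity_clauses(n)))
```

    Dropping the cubic transitivity layer (when the semantics is shown to make it redundant, as the equivalence above allows) is what shrinks the encoding to quadratic size.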

    Some Further Results for the Stationary Points and Dynamics of Supercooled Liquids

    We present some new theoretical and computational results for the stationary points of bulk systems. First we demonstrate how the potential energy surface can be partitioned into catchment basins associated with every stationary point using a combination of Newton-Raphson and eigenvector-following techniques. Numerical results are presented for a 256-atom supercell representation of a binary Lennard-Jones system. We then derive analytical formulae for the number of stationary points as a function of both system size and the Hessian index, using a framework based upon weakly interacting subsystems. This analysis reveals a simple relation between the total number of stationary points, the number of local minima, and the number of transition states connected on average to each minimum. Finally we calculate two measures of localisation for the displacements corresponding to Hessian eigenvectors in samples of stationary points obtained from the Newton-Raphson-based geometry optimisation scheme. Systematic differences are found between the properties of eigenvectors corresponding to positive and negative Hessian eigenvalues, and localised character is most pronounced for stationary points with low values of the Hessian index. Comment: 16 pages, 2 figures.
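
    The Newton-Raphson step used to locate stationary points (minima and transition states alike) can be sketched on a toy surface. This is a schematic two-dimensional double well, not the paper's 256-atom binary Lennard-Jones supercell, and all function names are illustrative:

```python
def grad_hess(x, y):
    """Gradient and Hessian of a toy double-well surface
    V(x, y) = (x^2 - 1)^2 + y^2 (illustrative, not a real potential)."""
    g = (4.0 * x * (x * x - 1.0), 2.0 * y)
    h = ((12.0 * x * x - 4.0, 0.0), (0.0, 2.0))
    return g, h

def newton_stationary(x, y, steps=50):
    """Newton-Raphson on the gradient, (x, y) <- (x, y) - H^-1 g,
    converges to the nearest stationary point, minimum or saddle."""
    for _ in range(steps):
        (gx, gy), ((a, b), (c, d)) = grad_hess(x, y)
        det = a * d - b * c
        x -= (d * gx - b * gy) / det
        y -= (a * gy - c * gx) / det
    return x, y

def hessian_index(x, y):
    """Number of negative Hessian eigenvalues: 0 for a minimum,
    1 for a transition state (the toy Hessian is diagonal)."""
    _, ((a, _), (_, d)) = grad_hess(x, y)
    return sum(1 for eigenvalue in (a, d) if eigenvalue < 0)
```

    Starting near (1, 0) the iteration lands on a minimum (Hessian index 0); starting near the origin it converges to the saddle at (0, 0) with index 1, which is why the Hessian index rather than the function value is used to classify the stationary points found.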

    The influence of psychological factors on pre-employment activities in the unemployed

    Structural relationships among latent and economic deprivation, employment commitment, personal resources, and pre-employment activities are examined using a cross-sectional survey of the unemployed. The dependent variable, pre-employment activities, constitutes some of the main activities (other than their daily chores) that the unemployed engage in, including job-seeking, training, volunteer or unpaid work, and leisure activities. The research draws on concepts from Jahoda's latent deprivation theory, Fryer's agency restriction theory, and expectancy value theory. Latent and economic deprivation, employment commitment, and personal resources are expected to directly predict the type of pre-employment activities the unemployed engage in. Latent deprivation is an endogenous construct measured by time structure, enforced activity, social contact, collective purpose, and social status. Measures of personal resources include job-search self-efficacy, self-esteem, affective disposition, and psychological wellbeing. Significant interactions between the predictor variables are also hypothesised. For example, unemployed individuals with higher perceived latent and economic deprivation and higher employment commitment are expected to engage more frequently in employment-related activities (e.g., job-seeking, training, and unpaid work participation). Supplementary hypotheses are framed to test the relative importance of each of the predictor variables. Hypotheses are tested using structural equation modelling. This study is the first stage of a longitudinal study designed to identify psychological factors that influence employment outcomes in the unemployed. Findings from the study will identify psychological barriers to active economic and social participation in the workforce that can be targeted for intervention programs for the unemployed.

    Constraints in the circumstellar density distribution of massive Young Stellar Objects

    We use a Monte Carlo code to generate synthetic near-IR reflection nebulae that resemble those (normally associated with a bipolar outflow cavity) seen towards massive young stellar objects (YSOs). The 2D axisymmetric calculations use an analytic expression for a flattened infalling rotating envelope with a bipolar cavity representing an outflow. We are interested in which aspects of the circumstellar density distribution can be constrained by observations of these reflection nebulae. We therefore keep the line of sight optical depth constant in the model grid, as this is often constrained independently by observations. It is found that envelopes with density distributions corresponding to mass infall rates of ~10^-4 Msun yr^-1 (for an envelope radius of 4700 AU) seen at an inclination angle of ~45 degrees approximately reproduce the morphology and extension of the sub-arcsecond nebulae observed in massive YSOs. Based on the flux ratio between the approaching and receding lobes of the nebula, we can constrain the system inclination angle. The cavity opening angle is well constrained by the nebula opening angle. Our simulations indicate that to constrain the outflow cavity shape and the degree of flattening in the envelope, near-IR imaging with higher resolution and dynamic range than speckle imaging on 4 m-class telescopes is needed. The radiative transfer code is also used to simulate the near-IR sub-arcsecond nebula seen in Mon R2 IRS3. We find indications of a shallower opacity law in this massive YSO than in the interstellar medium, or possibly a sharp drop in the envelope density distribution at distances of ~1000 AU from the illuminating source.
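
    The Monte Carlo scattering approach can be illustrated with something far simpler than the 2D axisymmetric envelope code: isotropic scattering of photons through a plane-parallel slab. The function name and parameters below are assumptions for illustration, not the authors' code.

```python
import math
import random

def slab_escape_fraction(tau, n_photons=5000, seed=42):
    """Toy plane-parallel Monte Carlo: photons enter a purely
    scattering slab of total vertical optical depth tau moving
    straight in (mu = 1), travel exponentially sampled free paths,
    and rescatter isotropically until they leave through either
    face. Returns the fraction escaping through the far side."""
    random.seed(seed)
    escaped = 0
    for _ in range(n_photons):
        z, mu = 0.0, 1.0  # depth in optical-depth units, direction cosine
        while True:
            # Free path from the exponential distribution;
            # 1 - random() lies in (0, 1], avoiding log(0).
            z += mu * -math.log(1.0 - random.random())
            if z >= tau:
                escaped += 1  # transmitted through the far face
                break
            if z <= 0.0:
                break  # back-scattered out of the illuminated face
            mu = 2.0 * random.random() - 1.0  # isotropic rescattering
    return escaped / n_photons
```

    Raising tau lowers the escape fraction, the same optical-depth bookkeeping that motivates holding the line-of-sight optical depth fixed across the model grid above.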

    Helium and Hydrogen Line Ratios and The Stellar Content of Compact HII Regions

    We present observations and models of the behaviour of the HI and HeI lines between 1.6 and 2.2um in a small sample of compact HII regions. As in our previous papers on planetary nebulae, we find that the `pure' 1.7007um 4^3D-3^3P and 2.16475um 7^(3,1)G-4^(3,1)F HeI recombination lines behave approximately as expected as the effective temperature of the central exciting star(s) increases. However, the 2.058um 2^1P-2^1S HeI line does not behave as the model predicts, or as seen in planetary nebulae. Both the models and the planetary nebulae show a decrease in the HeI 2^1P-2^1S/HI Br gamma ratio above an effective temperature of 40000K; the compact HII regions show no such decrease. The problem with this line ratio is probably that the photoionisation model does not account correctly for the high densities seen in these HII regions, so that we are seeing more collisional excitation of the 2^1P level than the model predicts. It may also reflect some deeper problem in the assumed model stellar atmospheres. In any event, although the normal HeI recombination lines can be used to place constraints on the temperature of the hottest star present, the HeI 2^1P-2^1S/HI Br gamma ratio should not be used for this purpose in either Galactic HII regions or in starburst galaxies, and conclusions from previous work using this ratio should be regarded with extreme caution. We also show that the combination of the near infrared `pure' recombination line ratios with mid-infrared forbidden line data provides a good discriminant of the form of the far ultraviolet spectral energy distribution of the exciting star(s). From this we conclude that CoStar models are a poor match to the available data for our sources, though the more recent WM-basic models are a better fit. Comment: Accepted for publication in MNRAS.

    Sampling and sensitivity analyses tools (SaSAT) for computational modelling

    SaSAT (Sampling and Sensitivity Analysis Tools) is a user-friendly software package for applying uncertainty and sensitivity analyses to mathematical and computational models of arbitrary complexity and context. The toolbox is built in Matlab®, a numerical mathematical software package, and utilises algorithms contained in the Matlab® Statistics Toolbox. However, Matlab® is not required to use SaSAT, as the software package is provided as an executable file with all the necessary supplementary files. The SaSAT package is also designed to work seamlessly with Microsoft Excel, but no functionality is forfeited if that software is not available. A comprehensive suite of tools is provided to enable the following tasks to be easily performed: efficient and equitable sampling of parameter space by various methodologies; calculation of correlation coefficients; regression analysis; factor prioritisation; and graphical output of results, including response surfaces, tornado plots, and scatterplots. Use of SaSAT is exemplified by application to a simple epidemic model. To our knowledge, a number of the methods available in SaSAT for performing sensitivity analyses have not previously been used in epidemiological modelling, and their usefulness in this context is demonstrated.
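
    The core workflow (stratified sampling of parameter space, then correlating each parameter with the model output) can be sketched in a few lines. This is a minimal stdlib sketch of Latin hypercube sampling plus a Pearson coefficient, not SaSAT itself; the function names and the toy R0 = beta/gamma "model" are assumptions for illustration.

```python
import math
import random
import statistics

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin hypercube sampling: split each parameter's range into
    n_samples equal strata, draw once per stratum, and shuffle the
    strata independently for each parameter."""
    rng = random.Random(seed)
    columns = []
    for lo, hi in bounds:
        strata = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(strata)
        columns.append([lo + u * (hi - lo) for u in strata])
    return list(zip(*columns))  # rows = samples, columns = parameters

def pearson(xs, ys):
    """Sample correlation coefficient, the simplest of the available
    sensitivity measures."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical toy epidemic quantity: R0 = beta / gamma.
samples = latin_hypercube(200, [(0.1, 1.0), (0.05, 0.5)], seed=1)
r0 = [beta / gamma for beta, gamma in samples]
print(pearson([s[0] for s in samples], r0))  # positive: beta raises R0
print(pearson([s[1] for s in samples], r0))  # negative: gamma lowers R0
```

    The signs and magnitudes of such coefficients are what tools like tornado plots then visualise.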

    Improving sentiment analysis through ensemble learning of meta-level features

    In this research, the well-known microblogging site, Twitter, was used for a sentiment analysis investigation. We propose an ensemble learning approach based on the meta-level features of seven existing lexicon resources for automated polarity sentiment classification. The ensemble employs four base learners (a Two-Class Support Vector Machine, a Two-Class Bayes Point Machine, a Two-Class Logistic Regression and a Two-Class Decision Forest) for the classification task. Three different labelled Twitter datasets were used to evaluate the effectiveness of this approach to sentiment analysis. Our experiments show that, based on a combination of existing lexicon resources, the ensemble learners minimise the error rate by avoiding poor selections from stand-alone classifiers.
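
    The underlying idea (combine per-lexicon polarity signals so that no single weak resource dominates) can be shown with a minimal pure-Python analogue. The two tiny lexicons and the majority-vote rule below are assumptions for illustration, not the paper's seven lexicon resources or its four trained base learners:

```python
def lexicon_score(tokens, lexicon):
    """Meta-level feature: net polarity of a text under one lexicon."""
    return sum(lexicon.get(token, 0) for token in tokens)

def ensemble_predict(text, lexicons):
    """Majority vote over per-lexicon polarity decisions; a tied vote
    falls back to the summed score across all lexicons. Non-positive
    scores count as a negative vote in this toy sketch."""
    tokens = text.lower().split()
    votes = [1 if lexicon_score(tokens, lex) > 0 else -1 for lex in lexicons]
    if sum(votes) != 0:
        return "positive" if sum(votes) > 0 else "negative"
    overall = sum(lexicon_score(tokens, lex) for lex in lexicons)
    return "positive" if overall >= 0 else "negative"

# Two tiny hypothetical lexicons standing in for the real resources.
LEX_A = {"good": 1, "bad": -1, "love": 2}
LEX_B = {"great": 1, "awful": -2, "love": 1}
print(ensemble_predict("i love this", [LEX_A, LEX_B]))            # positive
print(ensemble_predict("this is awful and bad", [LEX_A, LEX_B]))  # negative
```

    The paper's ensemble replaces this hand-written vote with trained classifiers over the lexicon scores, but the error-averaging intuition is the same.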