
    Age and Mass for 920 LMC Clusters Derived from 100 Million Monte Carlo Simulations

    We present new age and mass estimates for 920 stellar clusters in the Large Magellanic Cloud (LMC) based on previously published broad-band photometry and the stellar cluster analysis package MASSCLEANage. Expressed in the generic fitting formula d^{2}N/(dM dt) \propto M^{\alpha} t^{\beta}, the distribution of observed clusters is described by \alpha = -1.5 to -1.6 and \beta = -2.1 to -2.2. For 288 of these clusters, ages have recently been determined from stellar photometric color-magnitude diagrams, allowing us to gauge the confidence of our ages. The results look very promising, opening up the possibility that this sample of 920 clusters, with reliable and consistent age, mass, and photometric measures, might be used to constrain important characteristics of the stellar cluster population in the LMC. We also investigate a traditional age determination method that uses a \chi^2 minimization routine to fit observed cluster colors to standard infinite-mass-limit simple stellar population models. This reveals serious defects in the cluster age distribution derived with this method. Because of the variation of the U, B, V, R colors, the traditional \chi^2 minimization method will always produce an overdensity of younger and older clusters, with an underdensity of clusters in the log(age/yr) = [7.0, 7.5] range. Finally, we present a unique simulation aimed at illustrating and constraining the fading limit in observed cluster distributions that includes the complex effects of stochastic variations in the observed properties of stellar clusters. Comment: Accepted for publication in The Astrophysical Journal; 37 pages, 18 figures.
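    As a rough illustration of that fitting formula, the sketch below (Python, using synthetic placeholder data rather than the LMC catalogue, and independent of MASSCLEANage) bins a mass-age catalogue and recovers \alpha and \beta with a least-squares plane fit in log space.

```python
# Minimal sketch: recover the exponents of d^2N/(dM dt) ~ M^alpha * t^beta
# from a catalogue of cluster masses and ages by fitting a plane to the
# binned, log-scaled number density. All values are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Draw a toy catalogue with alpha = -1.6 and beta = -2.1 (illustrative only).
alpha_true, beta_true = -1.6, -2.1
mass = 1e2 * (1 - rng.random(5000)) ** (1.0 / (alpha_true + 1))  # masses above 1e2 Msun
age = 1e6 * (1 - rng.random(5000)) ** (1.0 / (beta_true + 1))    # ages above 1 Myr

# Bin in log M and log t, then convert counts to a density per dM dt.
m_edges = np.logspace(2, 5, 16)
t_edges = np.logspace(6, 9, 16)
counts, _, _ = np.histogram2d(mass, age, bins=[m_edges, t_edges])
density = counts / (np.diff(m_edges)[:, None] * np.diff(t_edges)[None, :])

# Fit log(density) = c + alpha*log(M) + beta*log(t) over the non-empty bins.
m_cen = np.sqrt(m_edges[:-1] * m_edges[1:])
t_cen = np.sqrt(t_edges[:-1] * t_edges[1:])
M, T = np.meshgrid(m_cen, t_cen, indexing="ij")
ok = density > 0
A = np.column_stack([np.ones(ok.sum()), np.log10(M[ok]), np.log10(T[ok])])
coeffs, *_ = np.linalg.lstsq(A, np.log10(density[ok]), rcond=None)
print(f"alpha ~ {coeffs[1]:.2f}, beta ~ {coeffs[2]:.2f}")
```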

    A First Look at Data Center Network Condition Through the Eyes of PTPmesh

    © 2018 IFIP. Increased network latency and packet loss can substantially affect application performance. Due to the scale of data centers, custom network monitoring tools have been developed to measure network latency and packet loss. In our previous work, we used the Precision Time Protocol (PTP) to measure one-way delay and to quantify packet loss ratios, and we proposed PTPmesh as a cloud network monitoring tool. In this work, we provide a better understanding of how to exploit the measurement data offered by PTPmesh and present a detailed analysis of PTPmesh measurements collected in ten data centers from three cloud providers. Our findings reveal different latency, latency variance, and packet loss characteristics across data centers. Through our analysis, we showcase the strengths and limitations of PTPmesh as a cloud network monitoring tool. To foster further research in this area, we make our dataset available.
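    For reference, the clock-offset and one-way-delay arithmetic of the standard two-step PTP exchange is summarised in the sketch below; the class and field names are illustrative, not the PTPmesh or PTPd API.

```python
# Minimal sketch of the IEEE 1588 two-step exchange arithmetic; names are illustrative.
from dataclasses import dataclass

@dataclass
class PtpExchange:
    t1: float  # master sends Sync
    t2: float  # slave receives Sync
    t3: float  # slave sends Delay_Req
    t4: float  # master receives Delay_Req

def offset_and_delay(x: PtpExchange) -> tuple[float, float]:
    """Return (clock offset, mean one-way path delay) in the units of the inputs.

    Assumes symmetric forward and reverse paths, the usual PTP approximation;
    real path asymmetry shows up as an error in the offset estimate.
    """
    offset = ((x.t2 - x.t1) - (x.t4 - x.t3)) / 2.0
    mean_path_delay = ((x.t2 - x.t1) + (x.t4 - x.t3)) / 2.0
    return offset, mean_path_delay

# Example: timestamps in microseconds with a 5 us clock offset and a 20 us one-way delay.
print(offset_and_delay(PtpExchange(t1=0.0, t2=25.0, t3=100.0, t4=115.0)))  # (5.0, 20.0)
```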

    PTPmesh: Data Center Network Latency Measurements Using PTP

    Many data center applications are latency-sensitive. Continuously monitoring network latency and reacting to congestion on a network path is important to ensure that application performance does not suffer. We show how to use the Precision Time Protocol (PTP) to infer network latency and packet loss in data centers, and we conduct network latency and packet loss measurements in data centers from different cloud providers using PTPd, an open-source software implementation of PTP.
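    As a complementary illustration, the sketch below estimates a packet-loss ratio from gaps in the sequence numbers of received Sync messages. It assumes a fixed Sync rate and no sequence-number wraparound, and it illustrates the idea rather than the PTPmesh implementation.

```python
# Minimal sketch: infer a loss ratio from gaps in the sequenceId values of
# received PTP Sync messages (assumes no 16-bit wraparound in the window).
from typing import Sequence

def sync_loss_ratio(seq_ids: Sequence[int]) -> float:
    """Fraction of Sync messages presumed lost, based on sequence-number gaps."""
    if len(seq_ids) < 2:
        return 0.0
    expected = seq_ids[-1] - seq_ids[0] + 1   # messages the master sent in this window
    return max(0.0, (expected - len(seq_ids)) / expected)

print(sync_loss_ratio([10, 11, 12, 14, 15, 18]))  # 3 of 9 missing -> ~0.33
```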

    Simulating the impact of dust cooling on the statistical properties of the intracluster medium

    From the first stages of star and galaxy formation, non-gravitational processes such as ram-pressure stripping, supernovae, galactic winds, AGN, and galaxy-galaxy mergers lead to the enrichment of the IGM in stars, metals, and dust via the ejection of galactic material into the IGM. We now know that these processes, alongside gravitation, shape the formation and evolution of structures. We present here hydrodynamic simulations of structure formation that implement the effect of dust cooling on large-scale structure formation. We focus on the scale of galaxy clusters and study their statistical properties. Here we present our results on the T_X-M and L_X-M scaling relations, which exhibit changes in both slope and normalization when dust cooling is added to the standard radiative cooling model. For example, the normalization of the T_X-M relation changes by at most 2% at M = 10^{14} M_\odot, whereas the normalization of the L_X-T_X relation changes by as much as 10% at T_X = 1 keV for models that include dust cooling. Our study shows that dust cooling is an additional non-gravitational process that contributes to shaping the thermodynamical state of the hot ICM gas. Comment: 11 pages, 4 figures; ASR in press.
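    As an illustration of how such scaling relations are quantified, the sketch below fits the slope and normalization of a T_X-M relation in log space; the input values are placeholders rather than simulation outputs, and comparing models with and without dust cooling would amount to running the same fit on each cluster sample.

```python
# Minimal sketch: log-log fit of a cluster scaling relation such as T_X-M.
# The mass and temperature arrays are placeholder values, not simulation data.
import numpy as np

def fit_scaling_relation(x, y, x_pivot):
    """Fit y = A * (x / x_pivot)^slope in log space; return (A, slope)."""
    logx = np.log10(np.asarray(x) / x_pivot)
    logy = np.log10(np.asarray(y))
    slope, logA = np.polyfit(logx, logy, 1)
    return 10.0 ** logA, slope

mass = np.array([5e13, 1e14, 3e14, 8e14])   # Msun (placeholders)
temp = np.array([1.4, 2.1, 4.0, 7.5])       # keV  (placeholders)
A, slope = fit_scaling_relation(mass, temp, x_pivot=1e14)
print(f"T_X ~ {A:.2f} keV * (M / 1e14 Msun)^{slope:.2f}")
```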

    The dust SED of dwarf galaxies

    Context. High-resolution data from Spitzer, Herschel, and Planck allow us to probe the entire spectral energy distribution (SED) of morphologically separated components of the dust emission from nearby galaxies and allow a more detailed comparison between data and models. Aims. We wish to establish the physical origin of dust heating and emission based on radiation transfer models that self-consistently connect the emission components from diffuse dust and the dust in massive star-forming regions. Methods. NGC 4214 is a nearby dwarf galaxy with a large set of ancillary data, ranging from the ultraviolet (UV) to the radio, including maps from Spitzer and Herschel and detections from Planck. We mapped this galaxy with MAMBO at 1.2 mm at the IRAM 30m telescope. We extracted separate dust emission components for the HII regions (plus their associated PDRs on pc scales) and for the diffuse dust (on kpc scales). We analysed the full UV to FIR/submm SED of the galaxy using a radiation transfer model that self-consistently treats the dust emission from the diffuse component and from star-forming (SF) complexes, considering the illumination of diffuse dust both by the distributed stellar populations and by escaping light from the HII regions. While maintaining consistency within the framework of this model, we additionally used a model that provides a detailed description of the dust emission from the HII regions and their surrounding PDRs on pc scales. Thanks to the large amount of available data and the many previous studies of NGC 4214, very few free parameters remained in the model fitting process. Results. We achieve a satisfactory fit for the emission from HII + PDR regions on pc scales, with the exception of the emission at 8 μm, which is underpredicted by the model. For the diffuse emission we achieve a good fit if we assume that about 40-65% of the emission escaping the HII + PDR regions is able to leave the galaxy without passing through the diffuse ISM, which is not an unlikely scenario for a dwarf galaxy that has recently undergone a nuclear starburst. We determine a gas-to-dust mass ratio of 350-470, which is close to the expected value based on the metallicity. © 2012 ESO
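    As a far simpler illustration than the radiation transfer modelling used here, the sketch below applies the standard optically thin single-band dust-mass estimate, M_dust = S_nu D^2 / (kappa_nu B_nu(T_dust)), to show how a submm flux such as the 1.2 mm MAMBO measurement constrains a dust mass; all numerical inputs are placeholders.

```python
# Minimal sketch: optically thin dust mass from a single submm flux,
# M_dust = S_nu * D^2 / (kappa_nu * B_nu(T_dust)). Placeholder inputs only;
# this is not the radiation-transfer model used in the paper.
import numpy as np

h, c, k_B = 6.626e-34, 2.998e8, 1.381e-23  # SI constants

def planck_nu(nu_hz, T_k):
    """Planck function B_nu in W m^-2 Hz^-1 sr^-1."""
    return (2 * h * nu_hz**3 / c**2) / np.expm1(h * nu_hz / (k_B * T_k))

def dust_mass_kg(flux_jy, dist_m, nu_hz, T_dust, kappa_m2_per_kg):
    """Optically thin dust mass from a single-band flux density."""
    S_si = flux_jy * 1e-26  # Jy -> W m^-2 Hz^-1
    return S_si * dist_m**2 / (kappa_m2_per_kg * planck_nu(nu_hz, T_dust))

# Placeholder example: 0.1 Jy at 1.2 mm, D = 3 Mpc, T_dust = 20 K, kappa = 0.05 m^2/kg.
Mpc, M_sun = 3.086e22, 1.989e30
m_dust = dust_mass_kg(0.1, 3 * Mpc, c / 1.2e-3, 20.0, 0.05)
print(f"M_dust ~ {m_dust / M_sun:.2e} Msun")
```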

    The case for retraining of ML models for IoT device identification at the edge

    Internet-of-Things (IoT) devices are known to be the source of many security problems, and as such they would greatly benefit from automated management. This requires robustly identifying devices so that appropriate network security policies can be applied. We address this challenge by exploring how to accurately identify IoT devices based on their network behavior, using resources available at the edge of the network. In this paper, we compare the accuracy of five different machine learning models (tree-based and neural network-based) for identifying IoT devices, using packet trace data from a large IoT test-bed, and show that all models need to be updated over time to avoid significant degradation in accuracy. In order to effectively update the models, we find that it is necessary to use data gathered from the deployment environment, e.g., the household. We therefore evaluate our approach using hardware resources and data sources representative of those that would be available at the edge of the network, such as in an IoT deployment. We show that updating neural network-based models at the edge is feasible, as they require low computational and memory resources and their structure is amenable to being updated. Our results show that it is possible to achieve device identification and categorization with over 80% and 90% accuracy, respectively, at the edge.
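    As a hypothetical illustration of such an edge update loop (not the paper's models, features, or data), the sketch below trains a small scikit-learn MLP on an initial window of per-flow features and then refreshes it incrementally with locally gathered batches via partial_fit, the kind of low-cost update that is feasible on edge hardware.

```python
# Minimal sketch of incrementally updating a small neural-network classifier
# at the edge. Features and labels are random placeholders, not real IoT traces.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_features, n_devices = 16, 5  # e.g. per-flow packet size/timing statistics

# Initial training window (would come from the IoT test-bed traces).
X0 = rng.normal(size=(2000, n_features))
y0 = rng.integers(0, n_devices, size=2000)
scaler = StandardScaler().fit(X0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=200, random_state=0)
clf.fit(scaler.transform(X0), y0)

# Later, at the edge: small batches gathered in the deployment environment
# (e.g. the household) update the existing weights in place.
for _ in range(10):
    X_new = rng.normal(size=(64, n_features))
    y_new = rng.integers(0, n_devices, size=64)
    clf.partial_fit(scaler.transform(X_new), y_new)

X_test = rng.normal(size=(200, n_features))
y_test = rng.integers(0, n_devices, size=200)
print("accuracy on a held-out batch:", clf.score(scaler.transform(X_test), y_test))
```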