Age and Mass for 920 LMC Clusters Derived from 100 Million Monte Carlo Simulations
We present new age and mass estimates for 920 stellar clusters in the Large
Magellanic Cloud (LMC) based on previously published broad-band photometry and
the stellar cluster analysis package, MASSCLEANage. Expressed in the generic
fitting formula, d^2N/(dM dt) ~ M^{\alpha} t^{\beta}, the distribution of
observed clusters is described by \alpha = -1.5 to -1.6 and \beta = -2.1 to
-2.2. For 288 of these clusters, ages have recently been determined based on
stellar photometric color-magnitude diagrams, allowing us to gauge the
confidence of our ages. The results look very promising, opening up the
possibility that this sample of 920 clusters, with reliable and consistent age,
mass and photometric measures, might be used to constrain important
characteristics about the stellar cluster population in the LMC. We also
investigate a traditional age determination method that uses a \chi^2
minimization routine to fit observed cluster colors to standard infinite mass
limit simple stellar population models. This reveals serious defects in the
derived cluster age distribution using this method. The traditional \chi^2
minimization method, due to the variation of U,B,V,R colors, will always
produce an overdensity of younger and older clusters, with an underdensity of
clusters in the log(age/yr)=[7.0,7.5] range. Finally, we present a unique
simulation aimed at illustrating and constraining the fading limit in observed
cluster distributions that includes the complex effects of stochastic
variations in the observed properties of stellar clusters.
Comment: Accepted for publication in The Astrophysical Journal, 37 pages, 18 figures
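The χ² color-fitting approach discussed above can be illustrated with a short grid search. The Python sketch below is purely illustrative: it is not MASSCLEANage, and the SSP color tracks it uses are placeholders rather than real model values. It simply picks, for one cluster, the model age whose U-B and B-V colors minimize χ²:

import numpy as np

# Placeholder grid of infinite-mass-limit SSP model colors, one entry per log(age/yr).
log_ages = np.arange(6.6, 9.6, 0.1)
model_ub = np.linspace(-1.0, 0.3, log_ages.size)   # placeholder U-B track
model_bv = np.linspace(-0.2, 0.9, log_ages.size)   # placeholder B-V track

def fit_age_chi2(obs_ub, obs_bv, err_ub=0.05, err_bv=0.05):
    """Return the log(age/yr) whose SSP colors minimize chi^2 for one cluster."""
    chi2 = ((obs_ub - model_ub) / err_ub) ** 2 + ((obs_bv - model_bv) / err_bv) ** 2
    return log_ages[np.argmin(chi2)]

print(fit_age_chi2(obs_ub=-0.4, obs_bv=0.1))

Because observed cluster colors fluctuate stochastically around the model tracks, a grid fit of this kind can pile clusters up at particular ages, which is the bias the abstract describes.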
Characterizing the impact of network latency on cloud-based applications’ performance
Businesses and individuals run increasing numbers of applications in the cloud. The performance of an application running in the cloud depends both on conditions in the data center and on the resources committed to the application. Small network delays may lead to significant performance degradation, which affects both the user’s cost and the service provider’s resource usage, power consumption and data center efficiency. In this work, we quantify the effect of network latency on several typical cloud workloads, varying in complexity and use cases. Our results show that different applications are affected by network latency to differing degrees. These insights into the effect of network latency on different applications have ramifications for workload placement and physical host sharing when trying to reach performance targets.
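One common way to study such effects in a controlled manner is to inject artificial delay with Linux tc/netem while running a workload. The sketch below only shows that general idea; the interface name, delay values and run_workload.sh command are placeholders, it requires root privileges, and it is not the measurement harness used in the study:

import subprocess

IFACE = "eth0"                   # placeholder network interface
DELAYS_US = [0, 50, 200, 1000]   # artificial one-way delays to test, in microseconds

def run_benchmark():
    # Placeholder for the cloud workload under test.
    return subprocess.run(["./run_workload.sh"], capture_output=True, text=True)

for delay in DELAYS_US:
    # Install (or update) a netem qdisc that adds a fixed delay on egress.
    subprocess.run(["tc", "qdisc", "replace", "dev", IFACE, "root",
                    "netem", "delay", f"{delay}us"], check=True)
    result = run_benchmark()
    print(delay, result.returncode)

# Remove the netem qdisc afterwards.
subprocess.run(["tc", "qdisc", "del", "dev", IFACE, "root"], check=True)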
Enabling fast hierarchical heavy hitter detection using programmable data planes
Measuring and monitoring network traffic is a fundamental aspect of network management. This poster is a first step towards an SDN solution that uses an event-triggered approach to support advanced monitoring capabilities in the data plane. Leveraging P4 programmability, we built a solution that informs a remote controller about detected hierarchical heavy hitters, thus minimizing control-plane overheads.
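For context, a hierarchical heavy hitter (HHH) is a prefix in the IP address hierarchy whose aggregated traffic exceeds a threshold. The Python sketch below is a simplified, control-plane-only illustration of that idea; the poster's contribution is a P4 data-plane implementation, which is not reproduced here, and a full HHH definition would additionally discount traffic already attributed to heavy-hitter descendants:

from collections import defaultdict
from ipaddress import ip_network

THRESHOLD = 1000  # bytes; placeholder detection threshold

def hierarchical_heavy_hitters(flows, prefix_lengths=(32, 24, 16, 8)):
    """flows: iterable of (source_ip, byte_count) pairs.
    Returns every prefix, at each level of the hierarchy, whose
    aggregated byte count meets the threshold."""
    counts = defaultdict(int)
    for ip, nbytes in flows:
        for plen in prefix_lengths:
            counts[ip_network(f"{ip}/{plen}", strict=False)] += nbytes
    return {prefix: count for prefix, count in counts.items() if count >= THRESHOLD}

# Toy example: two /32 flows that are individually small but heavy as a /24.
flows = [("10.0.1.5", 600), ("10.0.1.9", 700), ("10.0.2.1", 200)]
print(hierarchical_heavy_hitters(flows))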
Omniscient: Towards realizing near real-time data center network traffic maps
To make measurement-based placement decisions, an optimiser must be well informed. Currently, it is difficult to assign routes or resource commitments to network paths in data centers, because applications do not declare what is carried within each flow. We propose to provide insight into the traffic that traverses each network link, realizing a near real-time map of a data center’s network traffic. We present Omniscient, a system that aims to increase visibility into data center network traffic by computing link utilization broken down by application instance, using OpenFlow statistics in an SDN-enabled data center. The goal of the system is to inform application instance placement and redundant network path assignment in order to improve application performance and resource utilization.
Diana Andreea Popescu is supported by the EU FP7 ITN METRICS (607728) project.
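As a rough illustration of the mechanism, a controller can poll cumulative OpenFlow flow counters periodically and turn byte deltas into per-application link utilization. The Python sketch below assumes two successive statistics snapshots and a flow-to-application mapping function are already available; it is a schematic of the idea, not Omniscient's implementation:

from collections import defaultdict

def per_app_link_utilization(prev_stats, curr_stats, flow_to_app, interval_s):
    """prev_stats/curr_stats: {(switch, port, flow_id): cumulative_bytes} from two
    successive OpenFlow statistics polls, interval_s seconds apart.
    flow_to_app maps a flow_id to an application instance label."""
    usage = defaultdict(float)  # (switch, port, app) -> bits per second
    for key, nbytes in curr_stats.items():
        switch, port, flow_id = key
        delta = nbytes - prev_stats.get(key, 0)
        usage[(switch, port, flow_to_app(flow_id))] += 8 * delta / interval_s
    return dict(usage)

# Toy example with made-up counters and a trivial flow-to-application mapping.
prev = {("s1", 1, "f1"): 1_000, ("s1", 1, "f2"): 500}
curr = {("s1", 1, "f1"): 6_000, ("s1", 1, "f2"): 900}
print(per_app_link_utilization(prev, curr, lambda flow_id: "app-A", interval_s=5.0))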
A First Look at Data Center Network Condition Through the Eyes of PTPmesh
Increased network latency and packet loss can substantially affect application performance. Due to the scale of data centers, custom network monitoring tools have been developed to measure network latency and packet loss. In our previous work, we used the Precision Time Protocol (PTP) to measure one-way delay and to quantify packet loss ratios, and we proposed PTPmesh as a cloud network monitoring tool. In this work, we provide a better understanding of how to exploit the measurement data offered by PTPmesh and present a detailed analysis of PTPmesh measurements collected in ten data centers from three cloud providers. Our findings reveal different latency, latency variance and packet loss characteristics across data centers. Through our analysis, we showcase the strengths and limitations of PTPmesh as a cloud network monitoring tool. To foster further research in this area, we make our dataset available.
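The per-data-center characteristics mentioned here (latency, latency variance, packet loss) can be summarized from one-way delay samples in a few lines. The sketch below assumes a list of one-way delay samples in microseconds and message counts for one host pair; it only illustrates the kind of analysis performed, not the PTPmesh tooling itself:

import statistics

def summarize(delays_us, messages_expected, messages_received):
    """delays_us: one-way delay samples (microseconds) for one host pair."""
    ordered = sorted(delays_us)
    return {
        "median_us": statistics.median(ordered),
        "p99_us": ordered[int(0.99 * (len(ordered) - 1))],
        "variance_us2": statistics.pvariance(ordered),
        "loss_ratio": 1 - messages_received / messages_expected,
    }

print(summarize([120, 135, 128, 400, 122], messages_expected=1000, messages_received=997))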
PTPmesh: Data Center Network Latency Measurements Using PTP
Many data center applications are latency-sensitive. Continuously monitoring network latency and reacting to congestion on a network path is important to ensure that application performance does not suffer. We show how to use the Precision Time Protocol (PTP) to infer network latency and packet loss in data centers, and we conduct network latency and packet loss measurements in data centers from different cloud providers, using PTPd, an open-source software implementation of PTP.
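For reference, PTP's delay request-response exchange yields four timestamps: t1 (Sync sent, master clock), t2 (Sync received, slave clock), t3 (Delay_Req sent, slave clock) and t4 (Delay_Req received, master clock). The standard arithmetic for clock offset and mean path delay, which daemons such as PTPd compute internally, is sketched below in Python for illustration:

def ptp_offset_and_delay(t1, t2, t3, t4):
    """Standard PTP computation, assuming a symmetric path delay.
    Returns (slave clock offset from master, mean one-way path delay)."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, mean_path_delay

# Toy example: true offset of 5 us and a symmetric one-way delay of 100 us.
print(ptp_offset_and_delay(t1=0, t2=105, t3=200, t4=295))
# -> (5.0, 100.0)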
Simulating the impact of dust cooling on the statistical properties of the intracluster medium
From the first stages of star and galaxy formation, non-gravitational processes such as ram pressure stripping, supernovae, galactic winds, AGN and galaxy-galaxy mergers lead to the enrichment of the IGM in stars, metals and dust, via the ejection of galactic material into the IGM. We now know that these processes shape, side by side with gravitation, the formation and evolution of structures. We present here hydrodynamic simulations of structure formation that implement the effect of dust cooling on large-scale structure formation. We focus on the scale of galaxy clusters and study their statistical properties. We present our results on the cluster scaling relations, which exhibit changes in both slope and normalization when cooling by dust is added to the standard radiative cooling model. For example, the normalization of one relation changes by at most 2% at a fixed mass, whereas that of another changes by as much as 10% at a fixed temperature, for models that include dust cooling. Our study shows that dust is an additional non-gravitational process that contributes to shaping the thermodynamic state of the hot ICM gas.
Comment: 11 pages, 4 figures, ASR, in press
The dust SED of dwarf galaxies
Context. High-resolution data from Spitzer, Herschel, and Planck allow us to probe the entire spectral energy distribution (SED) of morphologically separated components of the dust emission from nearby galaxies and allow a more detailed comparison between data and models. Aims. We wish to establish the physical origin of dust heating and emission based on radiation transfer models that self-consistently connect the emission components from diffuse dust and the dust in massive star-forming regions. Methods. NGC 4214 is a nearby dwarf galaxy with a large set of ancillary data, ranging from the ultraviolet (UV) to radio, including maps from Spitzer and Herschel and detections from Planck. We mapped this galaxy with MAMBO at 1.2 mm at the IRAM 30 m telescope. We extracted separate dust emission components for the HII regions (plus their associated PDRs on pc scales) and for the diffuse dust (on kpc scales). We analysed the full UV to FIR/submm SED of the galaxy using a radiation transfer model that self-consistently treats the dust emission from the diffuse component and from star-forming (SF) complexes, considering the illumination of diffuse dust both by the distributed stellar populations and by light escaping from the HII regions. While maintaining consistency within the framework of this model, we additionally used a model that provides a detailed description of the dust emission from the HII regions and their surrounding PDRs on pc scales. Thanks to the large amount of available data and the many previous studies of NGC 4214, very few free parameters remained in the model fitting process. Results. We achieve a satisfactory fit for the emission from HII + PDR regions on pc scales, with the exception of the emission at 8 μm, which is underpredicted by the model. For the diffuse emission we achieve a good fit if we assume that about 40-65% of the emission escaping the HII + PDR regions is able to leave the galaxy without passing through a diffuse ISM, which is not an unlikely scenario for a dwarf galaxy that has recently undergone a nuclear starburst. We determine a gas-to-dust mass ratio of 350-470, which is close to the expected value based on the metallicity.
The case for retraining of ML models for IoT device identification at the edge
Internet-of-Things (IoT) devices are known to be the source of many security problems, and as such they would greatly benefit from automated management. This requires robustly identifying devices so that appropriate network security policies can be applied. We address this challenge by exploring how to accurately identify IoT devices based on their network behavior, using resources available at the edge of the network. In this paper, we compare the accuracy of five different machine learning models (tree-based and neural network-based) for identifying IoT devices by using packet trace data from a large IoT test-bed, showing that all models need to be updated over time to avoid significant degradation in accuracy. In order to effectively update the models, we find that it is necessary to use data gathered from the deployment environment, e.g., the household. We therefore evaluate our approach using hardware resources and data sources representative of those that would be available at the edge of the network, such as in an IoT deployment. We show that updating neural network-based models at the edge is feasible, as they require low computational and memory resources and their structure is amenable to being updated. Our results show that it is possible to achieve device identification and categorization with over 80% and over 90% accuracy, respectively, at the edge.
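A minimal sketch of the kind of incremental update at the edge discussed here, using scikit-learn's MLPClassifier and partial_fit; the feature vectors, label set and network size below are placeholders, not the models or data from the paper:

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Placeholder per-flow features and device-class labels (5 classes, 16 features).
X_initial, y_initial = rng.normal(size=(200, 16)), rng.integers(0, 5, 200)
X_local, y_local = rng.normal(size=(50, 16)), rng.integers(0, 5, 50)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300)
clf.fit(X_initial, y_initial)        # initial training, e.g. offline on test-bed data

# Later, at the edge: update the same model with locally gathered traffic data.
clf.partial_fit(X_local, y_local)
print(clf.score(X_local, y_local))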