
    Signatures of very massive stars: supercollapsars and their cosmological rate

    We compute the rate of supercollapsars by using cosmological N-body, hydro, chemistry simulations of structure formation, following detailed stellar evolution according to proper yields (for He, C, N, O, Si, S, Fe, Mg, Ca, Ne, etc.) and lifetimes for stars of different masses and metallicities, and for different stellar populations (population III and population II–I). We find that supercollapsars are usually associated with dense, collapsing gas with little metal pollution and with abundances dominated by oxygen. The resulting supercollapsar rate is about 10⁻² yr⁻¹ sr⁻¹ at redshift z = 0, and their contribution to the total rate is < 0.1 per cent, which explains why they have never been detected so far. Expected rates at redshift z ≃ 6 are of the order of ∼10⁻³ yr⁻¹ sr⁻¹ and decrease further at higher z. Because of the strong metal enrichment by massive, short-lived stars, only ∼1 supercollapsar generation is possible in the same star-forming region. Given their sensitivity to the high-mass end of the primordial stellar mass function, supercollapsars are suitable candidates to probe pristine population III star formation and stellar evolution at low metallicities.
    Comment: 6 pages; accepted, MNRAS. "Apri la mente a quel ch'io ti paleso" ("Open your mind to what I disclose to you", Par. V, 40)

    Model Order Selection Rules For Covariance Structure Classification

    The adaptive classification of the interference covariance matrix structure for radar signal processing applications is addressed in this paper. This represents a key issue because many detection architectures are synthesized assuming a specific covariance structure, which may not coincide with the actual one due to the joint action of system and environment uncertainties. The considered classification problem is cast in terms of a multiple-hypothesis test with some nested alternatives, and the theory of Model Order Selection (MOS) is exploited to devise suitable decision rules. Several MOS techniques, such as the Akaike, Takeuchi, and Bayesian information criteria, are adopted, and the corresponding merits and drawbacks are discussed. At the analysis stage, illustrative examples for the probability of correct model selection are presented, showing the effectiveness of the proposed rules.
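The information criteria named here all follow one recipe: penalize the maximized log-likelihood by a complexity term and select the covariance structure with the lowest score. The sketch below illustrates the idea on synthetic Gaussian data with three nested candidate structures; the data, candidate set, and parameter counts are illustrative assumptions, not the paper's radar setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 2
# True covariance is diagonal but not proportional to the identity
x = rng.multivariate_normal(np.zeros(d), np.diag([1.0, 3.0]), size=n)
C = x.T @ x / n                      # sample covariance (zero-mean model)

def gauss_loglik(S):
    """Zero-mean Gaussian log-likelihood of the whole sample under covariance S."""
    _, logdet = np.linalg.slogdet(S)
    return -0.5 * n * (d * np.log(2 * np.pi) + logdet
                       + np.trace(np.linalg.solve(S, C)))

# Candidate structures: (MLE covariance under that structure, free parameters)
candidates = {
    "scaled identity": (np.trace(C) / d * np.eye(d), 1),
    "diagonal":        (np.diag(np.diag(C)), d),
    "full":            (C, d * (d + 1) // 2),
}

scores = {}
for name, (S, k) in candidates.items():
    ll = gauss_loglik(S)
    scores[name] = {"AIC": -2 * ll + 2 * k, "BIC": -2 * ll + k * np.log(n)}
    print(name, scores[name])

best = min(scores, key=lambda m: scores[m]["BIC"])
print("selected structure:", best)
```

Because the true covariance is diagonal, the extra log-likelihood bought by the richer structures must outrun the penalty term, which is exactly the trade-off the paper's decision rules formalize.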

    Rainsford Island Shoreline Evolution Study (RISES)

    RISES conducted a shoreline change study in order to accurately map, quantify, and predict trends in shoreline evolution on Rainsford Island from 1890 to 2008. It employed geographic information systems (GIS) and analytical statistical techniques to identify coastal hazard zones vulnerable to coastal erosion, rising sea levels, and storm surges. The 11-acre Rainsford Island, located in Boston Harbor, Massachusetts, consists of two eroded drumlins connected by a low-lying spit. Settled by Europeans in 1636, the Island was later used as the Harbor’s main quarantine station. Previous archeological surveys have identified numerous historically sensitive sites dating to before the Revolutionary War period, including a large cemetery. Multiple data sources were integrated within a GIS, including historical maps, aerial photographs, and Light Detection and Ranging (LIDAR) data. The United States Geological Survey’s (USGS) Digital Shoreline Analysis System (DSAS) was utilized to determine rate-of-change statistics and distances. A comparison analysis was carried out between datasets to determine the change in area above the high water line (HWL). RISES used two proxies to delineate shoreline positions and one to delineate vegetated areas. The main shoreline indicator was the visually discernible high water line (HWL). A tidal datum/LIDAR-derived mean high water (MHW) shoreline was also developed. Lastly, the visually discernible vegetation line was used to delineate vegetated areas. The results show that 14% of the Island was eroded during the study period, with the largest losses coming between 1970 and 1992. There has been 60 m of accretion, at a rate of 0.83 m/y, within the West Cove. The spit connecting the two drumlins has migrated southeast by 17 m at a rate of 0.33 m/y, resulting in erosion along its northern side and accretion along its southern side. The southeast beach on the northern drumlin eroded 43 m at a rate of 0.59 m/y.
All other areas of the Island remained stable. Predictive modeling indicates that 26% of the Island would become inundated with 1 m of sea-level rise, including the area containing the cemetery. The northern beaches and the cemetery area on the southern drumlin have been identified as coastal hazard zones.
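DSAS-style rate-of-change statistics reduce to simple estimators along each cross-shore transect: the end-point rate (net shoreline displacement between the oldest and newest surveys divided by elapsed time) and a linear-regression rate fitted through all survey dates. A minimal sketch with invented transect positions (the numbers are illustrative, not RISES measurements):

```python
import numpy as np

# Shoreline position (metres seaward of a fixed baseline) along one
# hypothetical transect; survey years and distances are invented examples.
years     = np.array([1890.0, 1944.0, 1970.0, 1992.0, 2008.0])
positions = np.array([52.0, 49.5, 47.0, 38.0, 35.0])

# End Point Rate: net change between the oldest and newest shorelines
epr = (positions[-1] - positions[0]) / (years[-1] - years[0])

# Linear Regression Rate: slope of a least-squares fit through all surveys
lrr = np.polyfit(years, positions, 1)[0]

print(f"EPR = {epr:.3f} m/y, LRR = {lrr:.3f} m/y")  # negative sign = erosion
```

The regression rate uses every survey and so is less sensitive to an anomalous single shoreline than the end-point rate, which is why DSAS reports both.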

    A cellular automaton for the factor of safety field in landslides modeling

    Landslide inventories show that the statistical distribution of the area of recorded events is well described by a power law over a range of decades. To understand these distributions, we consider a cellular automaton to model a time- and position-dependent factor of safety. The model is able to reproduce the complex structure of landslide distributions, as experimentally reported. In particular, we investigate the role of the rate of change of the system's dynamical variables, induced by an external drive, on landslide modeling and its implications for hazard assessment. As the rate is increased, the model has a crossover from a critical regime with power laws to non-power-law behavior. We suggest that the detection of patterns of correlated domains in monitored regions can be crucial to identify the response of the system to perturbations, i.e., for hazard assessment.
    Comment: 4 pages, 3 figures
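A driven threshold automaton of this general kind can be sketched in a few lines: each cell carries an inverse factor of safety that grows under a slow external drive, and any cell crossing the failure threshold topples, transferring part of its load to its neighbours; the number of topplings triggered by one driving step is the landslide size. The update rule, thresholds, and parameter values below are illustrative assumptions, not the paper's calibrated model.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 32
# Inverse factor of safety e = 1/FS on each cell; a cell fails when e >= 1
e = rng.uniform(0.0, 0.8, size=(L, L))
nu = 0.01    # driving rate: uniform increase of e per step (assumed value)
f = 0.2      # fraction of a failed cell's load sent to each of 4 neighbours

def relax(e):
    """Topple all unstable cells, redistributing load; return avalanche size."""
    size = 0
    while True:
        unstable = np.argwhere(e >= 1.0)
        if len(unstable) == 0:
            return size
        for i, j in unstable:
            load = e[i, j]
            e[i, j] = 0.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < L and 0 <= nj < L:  # open boundaries dissipate
                    e[ni, nj] += f * load
            size += 1

sizes = []
for _ in range(2000):
    e += nu                  # slow external drive (rainfall, erosion, ...)
    s = relax(e)
    if s > 0:
        sizes.append(s)
print(f"{len(sizes)} avalanches, largest size {max(sizes)}")
```

With 20% of the load dissipated at every toppling (plus boundary losses) each avalanche is guaranteed to terminate; raising the drive rate `nu` is the knob whose effect on the size distribution the abstract discusses.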

    Treatment of end-of-life concrete in an innovative heating-air classification system for circular cement-based products

    A stronger commitment towards Green Building and the circular economy, in response to environmental concerns and economic trends, is evident in modern industrial cement and concrete production processes. The critical demand for an overall reduction in the environmental impact of the construction sector can be met through the consumption of high-grade supplementary raw materials. Advanced solutions are under development in current research activities that will be capable of up-cycling larger quantities of valuable raw materials from the fine fractions of End-of-Life (EoL) concrete waste. New technology, in particular the Heating-Air classification System (HAS), simultaneously applies a combination of heating and separation processes within a fluidized bed-like chamber under controlled temperatures (approximately 600 °C) and treatment times (25–40 s). In that process, moisture and contaminants are removed from the EoL fine concrete aggregates (0–4 mm), yielding improved fine fractions and ultrafine recycled concrete particles (<0.125 mm), consisting mainly of hydrated cement, thereby adding value to finer EoL concrete fractions. In this study, two types of ultrafine recycled concrete (from either siliceous or limestone EoL concrete waste) are treated in a pilot HAS technology for their conversion into Supplementary Cementitious Material (SCM). The physico-chemical effects of the ultrafine recycled concrete particles and their potential use as SCM in new cement-based products are assessed by employing substitutions of up to 10% of the conventional binder. The environmental viability of their use as SCM is then evaluated in a Life Cycle Assessment (LCA). The results demonstrated accelerated hydration kinetics of the mortars that incorporated these SCMs at early ages and higher mechanical strengths at all curing ages. Optimal substitutions were established at 5%.
The results suggested that the overall environmental impact could be reduced by up to 5% when employing the ultrafine recycled concrete particles as SCM in circular cement-based products, reducing greenhouse gas emissions by as much as 41 kg CO2 eq./ton of cement (i.e. 80 million tons CO2 eq./year). Finally, the environmental impacts were reduced even further by running the HAS on biofuel rather than fossil fuel.
The authors of the present paper, prepared in the framework of the Project VEEP "Cost-Effective Recycling of C&DW in High Added Value Energy Efficient Prefabricated Concrete Components for Massive Retrofitting of our Built Environment", wish to acknowledge the European Commission for its support. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 723582. This paper reflects only the author’s view and the European Commission is not responsible for any use that may be made of the information it contains. The authors are also grateful to the Spanish Ministry of Science, Innovation and Universities (MICIU) and the European Regional Development Fund (FEDER) for funding this line of research (RTI2018-097074-B-C21).

    A workload-aware energy model for virtual machine migration

    Energy consumption has become a significant issue for data centres. Assessing their consumption requires precise and detailed models. In recent years, many models have been proposed, but most of them either do not consider the energy consumption related to virtual machine migration or do not consider the variation of the workload on (1) the virtual machines (VMs) and (2) the physical machines hosting the VMs. In this paper, we show that omitting migration and workload variation from the models could lead to misleading consumption estimates. We then propose a new model for data centre energy consumption that takes into account the previously omitted model parameters and provides accurate energy consumption predictions for paravirtualised virtual machines running on homogeneous hosts. The new model's accuracy is evaluated with a comprehensive set of operational scenarios. Using these scenarios, we present a comparative analysis of our model against similar state-of-the-art models for the energy consumption of VM migration, showing an improvement of up to 24% in prediction accuracy. © 2015 IEEE
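A workload-aware estimate of this kind couples a utilisation-dependent host power model with the duration of the pre-copy memory transfer, which itself depends on the VM's footprint and dirty-page rate. The sketch below is a hypothetical illustration of that structure; the linear power model, parameter names, and values are assumptions, not the paper's fitted model.

```python
def host_power(u, p_idle=100.0, p_max=200.0):
    """Linear host power model (watts) vs CPU utilisation u in [0, 1]."""
    return p_idle + (p_max - p_idle) * u

def migration_energy(mem_gb, dirty_rate_gbps, bw_gbps, u_src, u_dst):
    """Energy (joules) drawn on both hosts during one pre-copy migration."""
    # Memory re-dirtied during the transfer eats into the usable bandwidth
    eff_bw = bw_gbps - dirty_rate_gbps
    if eff_bw <= 0:
        raise ValueError("migration cannot converge: dirty rate >= bandwidth")
    t_mig = mem_gb / eff_bw                       # seconds (GB / (GB/s))
    # Source and destination hosts both draw power for the whole migration
    return (host_power(u_src) + host_power(u_dst)) * t_mig

e_mig = migration_energy(mem_gb=4.0, dirty_rate_gbps=0.2, bw_gbps=1.2,
                         u_src=0.6, u_dst=0.3)
print(f"estimated migration energy: {e_mig:.0f} J")
```

The utilisation arguments are where workload awareness enters: the same 4 GB migration costs more energy on busy hosts than on idle ones, which is precisely the effect the paper argues cannot be omitted.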

    The impact of primordial supersonic flows on early structure formation, reionization and the lowest-mass dwarf galaxies

    Tseliakhovich and Hirata recently discovered that higher-order corrections to cosmological linear perturbation theory lead to supersonic coherent baryonic flows just after recombination (i.e. z ≈ 1020), with rms velocities of ∼30 km s⁻¹ relative to the underlying dark matter distribution, on comoving scales of ≲3 Mpc h⁻¹. To study the impact of these coherent flows, we performed high-resolution N-body plus smoothed particle hydrodynamics simulations in boxes of 5.0 and 0.7 Mpc h⁻¹, for bulk-flow velocities of 0 (as reference), 30 and 60 km s⁻¹. The simulations follow the evolution of cosmic structures by taking into account detailed, primordial, non-equilibrium gas chemistry (i.e. H, He, H2, HD, HeH, etc.), cooling, star formation and feedback effects from stellar evolution. We find that these bulk flows suppress star formation in low-mass haloes (i.e. Mvir ≲ 10⁸ M⊙ until z ∼ 13), lower the abundance of the first objects by ∼1–20 per cent and, as a consequence, delay the cosmic star formation history by ∼2 × 10⁷ yr. The gas fractions in individual objects can change by up to a factor of 2 at very early times. Coherent bulk flow therefore has implications for (i) star formation in the lowest-mass haloes (e.g. dSphs); (ii) the start of reionization, by suppressing it in some patches of the Universe; and (iii) the heating (i.e. spin temperature) of neutral hydrogen. We speculate that the patchy nature of reionization and heating on several-Mpc scales could lead to enhanced differences in the H I spin temperature, giving rise to stronger variations in the H I brightness temperatures during the late dark ages.

    The Overlooked Potential of Generalized Linear Models in Astronomy - I: Binomial Regression

    Revealing hidden patterns in astronomical data is often the path to fundamental scientific breakthroughs; meanwhile, the complexity of scientific inquiry increases as more subtle relationships are sought. Contemporary data analysis problems often elude the capabilities of classical statistical techniques, suggesting the use of cutting-edge statistical methods. In this light, astronomers have overlooked a whole family of statistical techniques for exploratory data analysis and robust regression: the so-called Generalized Linear Models (GLMs). In this paper -- the first in a series aimed at illustrating the power of these methods in astronomical applications -- we elucidate the potential of a particular class of GLMs for handling binary/binomial data, the so-called logit and probit regression techniques, from both a maximum likelihood and a Bayesian perspective. As a case in point, we present the use of these GLMs to explore the conditions of star formation activity and metal enrichment in primordial minihaloes from cosmological hydro-simulations including detailed chemistry, gas physics, and stellar feedback. We predict that for a dark minihalo with metallicity ≈ 1.3 × 10⁻⁴ Z⊙, an increase of 1.2 × 10⁻² in the gas molecular fraction increases the probability of star formation occurrence by a factor of 75%. Finally, we highlight the use of receiver operating characteristic curves as a diagnostic for binary classifiers, and ultimately we use these to demonstrate the competitive predictive performance of GLMs against the popular technique of artificial neural networks.
    Comment: 20 pages, 10 figures, 3 tables, accepted for publication in Astronomy and Computing
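A logit GLM of the kind described models the log-odds of a binary outcome as a linear function of the predictors and is classically fitted by iteratively reweighted least squares (Newton's method). A minimal sketch on synthetic data; the coefficients and the "molecular fraction" covariate are invented stand-ins, not the paper's simulation catalogue.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in data: a binary star-formation flag whose probability
# depends on a hypothetical gas molecular fraction through a logit link.
n = 1000
x = rng.uniform(0.0, 0.5, n)                     # molecular fraction
true_beta = np.array([-3.0, 15.0])
p_true = 1.0 / (1.0 + np.exp(-(true_beta[0] + true_beta[1] * x)))
y = (rng.random(n) < p_true).astype(float)       # star formation occurred?

# Fit the logit GLM by iteratively reweighted least squares
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):
    eta = X @ beta
    mu = 1.0 / (1.0 + np.exp(-eta))              # mean response (probability)
    W = mu * (1.0 - mu)                          # GLM working weights
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - mu))

print("fitted coefficients:", beta)              # should approach true_beta
```

The fitted slope then turns a covariate change into an odds ratio: a shift Δx in the molecular fraction multiplies the odds of star formation by exp(β₁Δx), which is how a statement like "an increase of 1.2 × 10⁻² in the molecular fraction raises the probability of star formation" is read off a logit fit.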