Imbricated slip rate processes during slow slip transients imaged by low-frequency earthquakes
Low Frequency Earthquakes (LFEs) often occur in conjunction with transient strain episodes, or Slow Slip Events (SSEs), in subduction zones. Their focal mechanisms and locations, consistent with shear failure on the plate interface, argue for a model in which LFEs are discrete dynamic ruptures within an otherwise slowly slipping interface. SSEs are mostly observed by surface geodetic instruments with limited resolution, and it is likely that only the largest ones are detected. The time synchronization of LFEs and SSEs suggests that we could use the recorded LFEs to constrain the evolution of SSEs, and notably of the geodetically undetected small ones. However, inferring slow slip rate from the temporal evolution of LFE activity is complicated by the strong temporal clustering of LFEs. Here we apply dedicated statistical tools to retrieve the temporal evolution of SSE slip rates from the time history of LFE occurrences in two subduction zones, Mexico and Cascadia, and in the deep portion of the San Andreas fault at Parkfield. We find temporal characteristics of LFEs that are similar across these three different regions. The longer-term episodic slip transients present in these datasets show a slip rate that decays with time after the passage of the SSE front, possibly as t^(-1/4). They are composed of multiple short-term transients with steeper slip-rate decay, as t^(-α) with α between 1.4 and 2. We also find that the maximum slip rate of SSEs has a continuous distribution. Our results indicate that creeping faults host intermittent deformation at various scales, resulting from the imbricated occurrence of numerous slow slip events of various amplitudes.
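As a rough illustration of the "imbricated" picture described above (a toy sketch, not the authors' statistical method), the snippet below superposes short-term power-law slip-rate transients, decaying as t^(-α), on a longer-term t^(-1/4) envelope; all amplitudes, onset times, and exponents are assumed for illustration.

```python
import numpy as np

def slip_rate(t, onsets, alpha=1.7, envelope_exponent=0.25, v0=1.0):
    """Toy 'imbricated' slip-rate history (arbitrary units).

    A long-term transient decaying as t**-envelope_exponent after the SSE
    front passes at t = 0, plus short-term transients starting at `onsets`
    that decay more steeply as (t - onset)**-alpha.  Purely illustrative.
    """
    t = np.asarray(t, dtype=float)
    rate = v0 * np.where(t > 0, t, np.inf) ** (-envelope_exponent)
    for t0 in onsets:
        dt = t - t0
        rate += 0.3 * v0 * np.where(dt > 0, dt, np.inf) ** (-alpha)
    return rate

t = np.linspace(0.01, 30.0, 3000)              # days after the SSE front
print(slip_rate(t, onsets=[2.0, 7.5, 15.0])[:5])
```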
Small Solar Panels Can Drastically Reduce the Carbon Footprint of Radio Access Networks
The limited power requirements of new generations of base stations (BSs) make the use of renewable energy sources, solar in particular, extremely attractive for mobile network operators. Exploiting solar energy implies a reduction of the network operation cost as well as of the carbon footprint of radio access networks, but previous research works indicate that the solar panel area necessary to power a standard macro BS is large, so large as to make the solar panel deployment problematic, especially within urban areas. In this paper we use a modeling approach based on Markov reward processes to investigate the possibility of combining small-area solar panels with a connection to the power grid to run a macro BS. By so doing, it is possible to increase the amount of renewable energy used to run a radio access network, while also reducing the cost incurred by the network operator to power its base stations. We assume that energy is drawn from the power grid only when needed to keep the BS operational, or during the night, which corresponds to the period with the lowest electricity price. This has advantages in terms of both cost and carbon footprint. We show that solar panels of the order of 1-2 kW peak, i.e., with a surface of about 5-10 m², combined with limited-capacity energy storage (of the order of 10-15 kWh, corresponding to about 3-5 car batteries) and a smart energy management policy, can lead to an effective exploitation of renewable energy.
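A minimal sketch of the kind of Markov reward model alluded to above, under assumed discretizations: states track the battery charge level, transitions follow an assumed distribution of the hourly solar-minus-load balance, and the reward counts energy drawn from the grid when the battery is empty. The state space, probabilities, and rewards are illustrative, not those of the paper.

```python
import numpy as np

# Battery discretized into levels 0..N (e.g. N=10 -> 1 kWh steps for a 10 kWh pack).
N = 10
P = np.zeros((N + 1, N + 1))     # transition matrix over battery levels
grid_kwh = np.zeros(N + 1)       # reward: expected grid energy drawn in a step

# Assumed per-step net energy balance (solar minus BS load), in battery levels.
net_prob = {-2: 0.25, -1: 0.35, 0: 0.15, 1: 0.15, 2: 0.10}

for level in range(N + 1):
    for delta, p in net_prob.items():
        new = level + delta
        if new < 0:
            # Battery empty: the shortfall is covered by the grid to keep the BS on.
            grid_kwh[level] += p * (-new)
            new = 0
        new = min(new, N)        # surplus beyond capacity is lost
        P[level, new] += p

# Stationary distribution of battery levels (left eigenvector of P for eigenvalue 1).
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()

print("expected grid energy per step (battery-level units):", pi @ grid_kwh)
```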
Unified Scaling Law for Earthquakes
We show that the distribution of waiting times between earthquakes occurring in California obeys a simple unified scaling law valid from tens of seconds to tens of years, see Eq. (1) and Fig. 4. The short-time clustering, commonly referred to as aftershocks, is nothing but the short-time limit of the general hierarchical properties of earthquakes. There is no unique operational way of distinguishing between main shocks and aftershocks. In the unified law, the Gutenberg-Richter b-value, the exponent -1 of the Omori law for aftershocks, and the fractal dimension d_f of earthquakes appear as critical indices.
Comment: 4 pages, 4 figures
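For concreteness, a scaling collapse of the kind the abstract describes can be written as below; the abstract's own Eq. (1) is not reproduced here, so the exact form, with magnitude cutoff m, region of linear size L, constant c, and scaling function f, should be read as an assumed illustration of how the b-value, the Omori exponent, and d_f enter as critical indices.

```latex
% Assumed illustrative form of a unified scaling law for waiting times T
% between earthquakes above magnitude m within a region of linear size L:
% the T^{-1} prefactor reflects the Omori exponent, b is the
% Gutenberg-Richter b-value, and d_f is the fractal dimension of epicenters.
\begin{equation}
  P_{m,L}(T) \;\sim\; T^{-1}\, f\!\left( c\, T\, 10^{-b m} L^{d_f} \right)
\end{equation}
```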
On the Use of Small Solar Panels and Small Batteries to Reduce the RAN Carbon Footprint
The limited power requirements of new generations of base stations make the use of renewable energy sources, solar in particular, extremely attractive for mobile network operators. Exploiting solar energy implies a reduction of the network operation cost as well as of the carbon footprint of radio access networks. However, previous research works indicate that the area of the solar panels necessary to power a standard macro base station (BS) is large, making the solar panel deployment problematic, especially within urban areas. In this paper we use a modeling approach based on Markov reward processes to investigate the possibility of combining a connection to the power grid with small-area solar panels and small batteries to run a macro base station. By so doing, it is possible to exploit a significant fraction of renewable energy to run a radio access network, while also reducing the cost incurred by the network operator to power its base stations. We assume that energy is drawn from the power grid only when needed to keep the BS operational, or during the night, which corresponds to the period with the lowest electricity price. The proposed energy management policies have advantages in terms of both cost and carbon footprint. Our results show that solar panels of the order of 1-2 kW peak, i.e., with a surface of about 5-10 m², combined with limited-capacity energy storage (of the order of 1-5 kWh, corresponding to about 1-2 car batteries) and a smart energy management policy, can lead to an effective exploitation of renewable energy.
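To complement the abstract above, here is a toy time-stepped simulation of the kind of "smart" policy it describes: draw grid power only when the battery cannot cover the load, or at night when electricity is assumed cheapest. The hourly solar profile, load, tariff window, and battery size are all assumed for illustration, not taken from the paper.

```python
# Toy hourly simulation of a grid-assisted, solar-powered base station (illustrative numbers).
SOLAR_KW = [0, 0, 0, 0, 0, 0.1, 0.4, 0.8, 1.2, 1.5, 1.7, 1.8,
            1.8, 1.7, 1.5, 1.2, 0.8, 0.4, 0.1, 0, 0, 0, 0, 0]   # assumed ~1.8 kWp panel
LOAD_KW = 0.8                     # assumed constant BS power draw
BATTERY_KWH = 3.0                 # assumed small storage
NIGHT_HOURS = set(range(0, 6))    # assumed cheap-tariff window

def run_day(battery=1.0):
    grid_kwh = 0.0
    for hour in range(24):
        battery += SOLAR_KW[hour] - LOAD_KW      # net energy balance this hour
        if hour in NIGHT_HOURS:
            grid_kwh += BATTERY_KWH - battery    # cheap tariff: fill the battery
            battery = BATTERY_KWH
        elif battery < 0.0:
            grid_kwh += -battery                 # grid covers the shortfall
            battery = 0.0
        battery = min(battery, BATTERY_KWH)      # surplus solar is lost
    return grid_kwh

print(f"grid energy over one day: {run_day():.1f} kWh")
```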
Confluence reduction for Markov automata
Markov automata are a novel formalism for specifying systems exhibiting nondeterminism, probabilistic choices and Markovian rates. Recently, the process algebra MAPA was introduced to efficiently model such systems. As always, the state space explosion threatens the analysability of the models generated by such specifications. We therefore introduce confluence reduction for Markov automata, a powerful reduction technique to keep these models small. We define the notion of confluence directly on Markov automata, and discuss how to syntactically detect confluence at the level of the MAPA language as well. That way, Markov automata generated by MAPA specifications can be reduced on-the-fly while preserving divergence-sensitive branching bisimulation. Three case studies demonstrate the significance of our approach, with reductions in analysis time of up to an order of magnitude.
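As a small illustration of the formalism itself (not of the confluence-reduction technique), a Markov automaton state can offer both nondeterministic action transitions, each leading to a probability distribution over successor states, and Markovian transitions with exponential rates. A minimal sketch with assumed state and action names:

```python
from dataclasses import dataclass, field

@dataclass
class MarkovAutomaton:
    """Tiny illustrative encoding of a Markov automaton (names are assumptions)."""
    states: set[str] = field(default_factory=set)
    # Nondeterministic choices: state -> list of (action, {successor: probability}).
    probabilistic: dict[str, list[tuple[str, dict[str, float]]]] = field(default_factory=dict)
    # Markovian transitions: state -> {successor: exponential rate}.
    markovian: dict[str, dict[str, float]] = field(default_factory=dict)

ma = MarkovAutomaton(states={"s0", "s1", "s2"})
ma.probabilistic["s0"] = [("send", {"s1": 0.9, "s2": 0.1}),   # one nondeterministic option
                          ("idle", {"s0": 1.0})]              # another option from s0
ma.markovian["s1"] = {"s0": 2.5}                              # exponential delay, rate 2.5

for action, dist in ma.probabilistic["s0"]:
    assert abs(sum(dist.values()) - 1.0) < 1e-9, f"{action}: not a distribution"
print("s0 offers", len(ma.probabilistic["s0"]), "nondeterministic choices")
```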
The Progenitors of Local Ultra-massive Galaxies Across Cosmic Time: from Dusty Star-bursting to Quiescent Stellar Populations
Using the UltraVISTA catalogs, we investigate the evolution in the 11.4 Gyr since z ~ 3 of the progenitors of local ultra-massive galaxies (UMGs), providing a complete and consistent picture of how the most massive galaxies at z ~ 0 have assembled. By selecting the progenitors with a semi-empirical approach using abundance matching, we infer a growth in stellar mass of 0.56 dex, 0.45 dex, and 0.27 dex from z ~ 3, z ~ 2, and z ~ 1, respectively, to z ~ 0. At z < 1, the progenitors of UMGs constitute a homogeneous population of only quiescent galaxies with old stellar populations. At z > 1, the contribution from star-forming galaxies progressively increases, with the progenitors at the highest redshifts probed being dominated by massive, dusty (1-2.2 mag of extinction), star-forming (SFR of 100-400 M⊙ yr⁻¹) galaxies with a large range in stellar ages. At these redshifts, 15% of the progenitors are quiescent, with properties typical of post-starburst galaxies with little dust extinction and a strong Balmer break, and showing a large scatter in color. Our findings indicate that at least half of the stellar content of local UMGs was assembled at z > 1, whereas the remainder was assembled via merging from z ~ 1 to the present. Most of the quenching of the star-forming progenitors happened between z ~ 3 and z ~ 1, in good agreement with the typical formation redshift and scatter in age of UMGs as derived from their fossil records. The progenitors of local UMGs, including the star-forming ones, never lived on the blue cloud since z ~ 3. We propose an alternative path for the formation of local UMGs that refines previously proposed pictures and that is fully consistent with our findings.
Comment: 20 pages, 15 figures (6 of which in appendix); accepted for publication in the Astrophysical Journal
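As a schematic illustration of the abundance-matching idea used to select progenitors (a sketch under assumed catalogs and numbers, not the paper's actual selection), one can follow galaxies at an approximately fixed cumulative number density across redshift: rank galaxies by stellar mass in each catalog and find the mass at which the cumulative count per unit volume matches an adopted density of local UMGs.

```python
import numpy as np

def mass_at_number_density(log_masses, volume_mpc3, target_density):
    """Stellar mass (log10 Msun) above which the cumulative number density
    equals `target_density` [Mpc^-3].  Inputs are assumed catalog values."""
    log_masses = np.sort(np.asarray(log_masses))[::-1]       # most massive first
    cumulative = np.arange(1, log_masses.size + 1) / volume_mpc3
    idx = np.searchsorted(cumulative, target_density)
    return log_masses[min(idx, log_masses.size - 1)]

# Toy catalogs at two epochs (log10 stellar masses, assumed values).
rng = np.random.default_rng(0)
cat_low_z = rng.normal(10.8, 0.5, 50_000)
cat_high_z = rng.normal(10.3, 0.5, 50_000)
density = 1e-5                                   # assumed UMG number density [Mpc^-3]

m_low = mass_at_number_density(cat_low_z, 1e6, density)
m_high = mass_at_number_density(cat_high_z, 1e6, density)
print(f"implied progenitor mass growth: {m_low - m_high:.2f} dex")
```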
Heavy metal load and effects on biochemical properties in urban soils of a medium-sized city, Ancona, Italy
Urban soils are often mixed with extraneous materials and show a high spatial variability, which makes them very different from their agricultural or natural counterparts. The soils of 18 localities of a medium-sized city (Ancona, Italy) were analysed for their main physicochemical and biological properties, and for chromium (Cr), copper (Cu), cobalt (Co), lead (Pb), nickel (Ni), zinc (Zn), and mercury (Hg) total content, distribution among particle-size fractions, and extractability. Because of the absence of thresholds defining a hot spot for heavy metal pollution in urban soils, we defined a "threshold of attention" (ToA) for each heavy metal, aiming to single out hot-spot soils where intervention is most urgent to mitigate or avoid potential environmental concerns. In several city locations, the soil displayed sub-alkaline pH, large contents of clay-size particles, and higher TOC, total N, and available P with respect to the surrounding rural areas, together with high contents of total heavy metals but low availability. The C biomass, basal respiration, qCO2, and enzyme activities were compared to those detected in the nearby rural soils, and the results suggested that the heavy metal content has not substantially compromised the soil ecological services. We conclude that the ToA can be considered a valuable tool to highlight soil hot spots, especially for cities with a long material history, and, for a proper risk assessment in urban soils, we suggest considering the content of available heavy metals (rather than the total content) and soil functions.
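The abstract does not give a quantitative definition of the "threshold of attention" (ToA), so the snippet below only illustrates the general idea with an assumed definition (rural mean plus two standard deviations): flag as a hot spot any urban soil whose total content of a metal exceeds a baseline derived from nearby rural soils. All names and values are hypothetical.

```python
# Illustrative hot-spot flagging; this ToA definition is an assumption,
# not the one used in the paper.
from statistics import mean, stdev

def threshold_of_attention(rural_values_mg_kg):
    return mean(rural_values_mg_kg) + 2 * stdev(rural_values_mg_kg)

rural_pb = [18, 22, 25, 20, 24, 19]                      # assumed rural Pb contents, mg/kg
urban_pb = {"site_03": 95, "site_07": 41, "site_12": 130}  # assumed urban Pb contents, mg/kg

toa = threshold_of_attention(rural_pb)
hot_spots = [site for site, value in urban_pb.items() if value > toa]
print(f"Pb ToA = {toa:.1f} mg/kg; hot spots: {hot_spots}")
```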
On the Limit Performance of Floating Gossip
In this paper we investigate the limit performance of Floating Gossip, a new, fully distributed Gossip Learning scheme that relies on Floating Content to implement location-based probabilistic evolution of machine learning models in an infrastructure-less manner. We consider dynamic scenarios where continuous learning is necessary, and we adopt a mean field approach to investigate the limit performance of Floating Gossip in terms of the amount of data that users can incorporate into their models, as a function of the main system parameters. Unlike existing approaches, in which either the communication or the computing aspects of Gossip Learning are analyzed and optimized, our approach accounts for the compound impact of both aspects. We validate our results through detailed simulations, which confirm their good accuracy. Our model shows that Floating Gossip can be very effective in implementing continuous training and updating of machine learning models in a cooperative manner, based on opportunistic exchanges among moving users.
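As a toy illustration of the mean-field flavor of such an analysis (assumed dynamics and parameters, not the paper's model), one can track the average fraction of the available data reflected in a typical user's model when users opportunistically merge models at one rate and ingest fresh local data at another:

```python
import numpy as np

def coverage(t_max=20.0, dt=0.01, contact_rate=0.5, ingest_rate=0.05, x0=0.01):
    """Euler integration of a toy mean-field ODE for the average fraction x(t)
    of the available data reflected in a user's model (assumed dynamics):
        dx/dt = contact_rate * x * (1 - x)   # gains from opportunistic model merges
              + ingest_rate * (1 - x)        # gains from fresh local data
    """
    steps = int(t_max / dt)
    x = np.empty(steps)
    x[0] = x0
    for k in range(1, steps):
        dx = contact_rate * x[k-1] * (1 - x[k-1]) + ingest_rate * (1 - x[k-1])
        x[k] = x[k-1] + dt * dx
    return x

x = coverage()
print(f"coverage after 20 time units: {x[-1]:.3f}")
```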