Network recovery after massive failures
This paper addresses the problem of efficiently restoring sufficient resources in a communications network to support the demand of mission-critical services after a large-scale disruption. We give a formulation of the problem as an MILP and show that it is NP-hard. We propose a polynomial-time heuristic, called Iterative Split and Prune (ISP), that decomposes the original problem recursively into smaller problems until it determines the set of network components to be restored. We performed extensive simulations by varying the topologies, the demand intensity, the number of critical services, and the disruption model. Compared to several greedy approaches, ISP performs better in terms of the number of repaired components and does not result in any demand loss. It performs very close to the optimum when the demand is low with respect to the supply network capacities, thanks to the ability of the algorithm to maximize sharing of repaired resources.
On critical service recovery after massive network failures
This paper addresses the problem of efficiently restoring sufficient resources in a communications network to support the demand of mission-critical services after a large-scale disruption. We give a formulation of the problem as a mixed integer linear program (MILP) and show that it is NP-hard. We propose a polynomial-time heuristic, called Iterative Split and Prune (ISP), that decomposes the original problem recursively into smaller problems until it determines the set of network components to be restored. ISP's decisions are guided by a new notion of demand-based centrality of nodes. We performed extensive simulations by varying the topologies, the demand intensity, the number of critical services, and the disruption model. Compared with several greedy approaches, ISP performs better in terms of the total cost of repaired components and does not result in any demand loss. It performs very close to the optimum when the demand is low with respect to the supply network capacities, thanks to the ability of the algorithm to maximize sharing of repaired resources.
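As a rough illustration of the recursive structure described in the two abstracts above, the following Python sketch mimics an iterative split-and-prune loop; the graph library, the shortest-path routing, and the demand-based centrality proxy are all illustrative assumptions, not the paper's actual definitions.

```python
# Minimal sketch of an Iterative Split and Prune (ISP)-style recursion,
# assuming a networkx graph and a list of (source, target, demand)
# triples; the split/prune rules below are simplified stand-ins.
import networkx as nx

def demand_centrality(G, demands):
    """Illustrative proxy: weight nodes by the demand routed through them."""
    score = {v: 0.0 for v in G}
    for s, t, d in demands:
        try:
            path = nx.shortest_path(G, s, t)
        except nx.NetworkXNoPath:
            continue
        for v in path:
            score[v] += d
    return score

def isp(G, demands, repaired=None):
    """Recursively pick components to restore until all demands are handled."""
    if repaired is None:
        repaired = set()
    if not demands:
        return repaired
    # Split: pick the demand whose endpoints are most demand-central.
    score = demand_centrality(G, demands)
    s, t, d = max(demands, key=lambda x: score[x[0]] + score[x[1]])
    try:
        repaired.update(nx.shortest_path(G, s, t))  # mark path for repair
    except nx.NetworkXNoPath:
        pass  # unroutable demand; a real solver would report demand loss
    # Prune: drop the handled demand and recurse on the smaller problem.
    return isp(G, [x for x in demands if x != (s, t, d)], repaired)
```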
Improved Methods of Task Assignment and Resource Allocation with Preemption in Edge Computing Systems
Edge computing has become a very popular service that enables mobile devices to run complex tasks with the help of network-based computing resources. However, edge clouds are often resource-constrained, which makes resource allocation a challenging issue. In addition, edge cloud servers must make allocation decisions with only limited information available, since the arrival of future client tasks might be impossible to predict, and the states and behavior of neighboring servers might be obscured. We focus on a distributed resource allocation method in which servers operate independently and do not communicate with each other, but interact with clients (tasks) to make allocation decisions. We follow a two-round bidding approach to assign tasks to edge cloud servers, and servers are allowed to preempt previous tasks to allocate more useful ones. We evaluate the performance of our system using realistic simulations and real-world trace data from a high-performance computing cluster. Results show that our heuristic improves system-wide performance over previous work when accounting for the time taken by each approach, achieving an effective trade-off between performance and speed.
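Under stated assumptions (each task carries a scalar value and a resource demand, each server a fixed capacity, and the lowest-value tasks are evicted first), a two-round bidding exchange with preemption might look like the Python sketch below; the class names and the utility rule are hypothetical, not the paper's protocol. In round one every reachable server quotes a bid for the task, and in round two the task commits to the highest positive bidder.

```python
# Hedged sketch of two-round bidding with preemption; illustrative only.
from dataclasses import dataclass, field

@dataclass
class Task:
    tid: int
    value: float     # usefulness of running this task
    demand: float    # resources it needs

@dataclass
class Server:
    capacity: float
    running: list = field(default_factory=list)

    def free(self):
        return self.capacity - sum(t.demand for t in self.running)

    def bid(self, task):
        """Round 1: quote the marginal value of hosting the task."""
        if task.demand <= self.free():
            return task.value
        # Preemption option: value gained minus value of evicted tasks.
        freed, lost = self.free(), 0.0
        for victim in sorted(self.running, key=lambda t: t.value):
            if freed >= task.demand:
                break
            freed += victim.demand
            lost += victim.value
        return task.value - lost if freed >= task.demand else 0.0

def assign(task, servers):
    """Round 2: the task accepts the highest positive bid."""
    best = max(servers, key=lambda s: s.bid(task))
    if best.bid(task) <= 0:
        return None
    for victim in sorted(best.running, key=lambda t: t.value):
        if task.demand <= best.free():
            break
        best.running.remove(victim)   # evict lowest-value tasks first
    best.running.append(task)
    return best
```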
Inequality, Institutions, and Informality
This paper presents theory and evidence on the determinants of the size of the informal sector. We propose a simple theoretical model in which the informal sector's size is negatively related to institutional quality and positively related to income inequality. These predictions are then empirically validated using different proxies of the size of the informal sector, income inequality, and institutional quality. The results are shown to be robust with respect to a variety of econometric specifications.
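Read as comparative statics, the claimed relationships reduce to two sign restrictions; the reduced form below is an illustrative sketch (s: informal-sector share, q: institutional quality, g: income inequality), not the paper's actual model.

```latex
% Illustrative reduced form; the functional form is assumed.
\[
  s = f(q, g), \qquad
  \frac{\partial f}{\partial q} < 0, \qquad
  \frac{\partial f}{\partial g} > 0
\]
```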
Twenty years of ground-based NDACC FTIR spectrometry at Izaña Observatory - overview and long-term comparison to other techniques
High-resolution Fourier transform infrared (FTIR) solar observations are particularly relevant for climate studies, as they allow atmospheric gaseous composition and multiple climate processes to be monitored in detail. In this context, the present paper provides an overview of 20 years of FTIR measurements taken in the framework of the NDACC (Network for the Detection of Atmospheric Composition Change) from 1999 to 2018 at the subtropical Izaña Observatory (IZO, Spain). Firstly, long-term instrumental performance is comprehensively assessed, corroborating the temporal stability and reliable instrumental characterization of the two FTIR spectrometers installed at IZO since 1999. Then, the time series of all trace gases contributing to NDACC at IZO are presented (i.e. C2H6, CH4, ClONO2, CO, HCl, HCN, H2CO, HF, HNO3, N2O, NO2, NO, O3, carbonyl sulfide (OCS), and the water vapour isotopologues H2^16O, H2^18O, and HD^16O), reviewing the major accomplishments drawn from these observations. In order to examine the quality and long-term consistency of the IZO FTIR observations, a comparison of those NDACC products for which other high-quality measurement techniques are available at IZO has been performed (i.e. CH4, CO, H2O, NO2, N2O, and O3). This quality assessment was carried out on different timescales to examine what temporal signals are captured by the FTIR records, and to what extent. After 20 years of operation, the IZO NDACC FTIR observations have been found to be very consistent and reliable over time, demonstrating great potential for climate research. Long-term NDACC FTIR data sets, such as IZO, are indispensable tools for the investigation of atmospheric composition trends, multi-year phenomena, and complex climate feedback processes, as well as for the validation of past and present space-based missions and chemistry climate models. The Izaña FTIR station has been supported by the German Bundesministerium für Wirtschaft und Energie (BMWi) via DLR under grants 50EE1711A and by the Helmholtz Society via the research program ATMO. In addition, this research was funded by the European Research Council under FP7/(2007-2013)/ERC Grant agreement nº 256961 (project MUSICA), by the Deutsche Forschungsgemeinschaft for the project MOTIV (Geschäftszeichen SCHN 1126/2-1), by the Ministerio de Economía y Competitividad from Spain through the projects CGL2012-37505 (project NOVIA) and CGL2016-80688-P (project INMENSE), and by EUMETSAT under its Fellowship Programme (project VALIASI).
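As a hint of the kind of long-term analysis such a record supports, here is a minimal Python sketch of a linear trend estimate on a monthly-mean trace-gas series; the file name and the column names are hypothetical placeholders, not the IZO archive format.

```python
# Minimal sketch: linear trend of a trace-gas record, assuming a
# monthly-mean CSV; "izo_ch4_monthly.csv", "date", and "xch4" are
# hypothetical names, not the actual IZO data products.
import numpy as np
import pandas as pd

df = pd.read_csv("izo_ch4_monthly.csv", parse_dates=["date"])
t = (df["date"] - df["date"].iloc[0]).dt.days / 365.25   # elapsed years
slope, intercept = np.polyfit(t, df["xch4"], 1)           # least-squares fit
print(f"trend: {slope:.3f} units/yr over {t.iloc[-1]:.1f} years")
```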
Impacts of climate change on plant diseases - opinions and trends
There has been a remarkable scientific output on the topic of how climate change is likely to affect plant diseases in the coming decades. This review addresses the need to synthesize this burgeoning literature by summarizing the opinions of previous reviews and the trends in recent studies on the impacts of climate change on plant health. Sudden Oak Death is used as an introductory case study: Californian forests could become even more susceptible to this emerging plant disease if spring precipitation is accompanied by warmer temperatures, although climate shifts may also affect the current synchronicity between host cambium activity and pathogen colonization rate. A summary of observed and predicted climate changes, as well as of the direct effects of climate change on pathosystems, is provided. Prediction and management of climate change effects on plant health are complicated by indirect effects and by interactions with other global change drivers. Uncertainty in models of plant disease development under climate change calls for a diversity of management strategies, from more participatory approaches to interdisciplinary science. Involvement of stakeholders and of scientists from outside plant pathology highlights the importance of trade-offs, for example in the land-sharing vs. land-sparing debate. Further research is needed on climate change and plant health in mountain, boreal, Mediterranean, and tropical regions, with multiple climate change factors and scenarios (including our responses to them, e.g. the assisted migration of plants), in relation to endophytes, viruses, and mycorrhizae, using long-term and large-scale datasets and considering various plant disease control methods.
Measurements of Higgs boson production and couplings in diboson final states with the ATLAS detector at the LHC
Measurements are presented of production properties and couplings of the recently discovered Higgs boson using the decays into boson pairs, H → γγ, H → ZZ* → 4l and H → WW* → lνlν. The results are based on the complete pp collision data sample recorded by the ATLAS experiment at the CERN Large Hadron Collider at centre-of-mass energies of √s = 7 TeV and √s = 8 TeV, corresponding to an integrated luminosity of about 25 fb⁻¹. Evidence for Higgs boson production through vector-boson fusion is reported. Results of combined fits probing Higgs boson couplings to fermions and bosons, as well as anomalous contributions to loop-induced production and decay modes, are presented. All measurements are consistent with expectations for the Standard Model Higgs boson.
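As a back-of-the-envelope illustration of combining per-channel signal strengths (the actual ATLAS results come from full profile-likelihood fits, and the numbers below are placeholders, not measurements), a Gaussian inverse-variance combination looks like this:

```python
# Hedged sketch: inverse-variance combination of per-channel signal
# strengths mu; all central values and uncertainties are placeholders.
import numpy as np

channels = {                # (mu, sigma), illustrative only
    "gamma gamma": (1.2, 0.3),
    "ZZ* -> 4l":   (1.4, 0.4),
    "WW* -> lvlv": (1.0, 0.3),
}
mus = np.array([m for m, _ in channels.values()])
sig = np.array([s for _, s in channels.values()])
w = 1.0 / sig**2                       # weight each channel by 1/sigma^2
mu_hat = np.sum(w * mus) / np.sum(w)   # combined central value
err = 1.0 / np.sqrt(np.sum(w))         # combined uncertainty
print(f"combined mu = {mu_hat:.2f} +/- {err:.2f}")
```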
Standalone vertex finding in the ATLAS muon spectrometer
A dedicated reconstruction algorithm to find decay vertices in the ATLAS muon spectrometer is presented. The algorithm searches the region just upstream of or inside the muon spectrometer volume for multi-particle vertices that originate from the decay of particles with long decay paths. The performance of the algorithm is evaluated using both a sample of simulated Higgs boson events, in which the Higgs boson decays to long-lived neutral particles that in turn decay to bb̄ final states, and pp collision data at √s = 7 TeV collected with the ATLAS detector at the LHC during 2011.
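As a toy illustration of the underlying idea, vertex finding can be viewed as spatial clustering of the points where reconstructed tracks appear to originate; in the Python sketch below, DBSCAN and the synthetic coordinates are stand-ins for the actual ATLAS pattern recognition, not the algorithm described in the paper.

```python
# Toy sketch: vertex finding as 3D clustering of track origin points.
# Synthetic data only; DBSCAN replaces the real reconstruction chain.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Two displaced decays plus scattered noise points (coordinates in mm).
vtx_a = rng.normal([0, 0, 4000], 30, size=(20, 3))
vtx_b = rng.normal([500, -200, 5500], 30, size=(15, 3))
noise = rng.uniform(-1000, 6000, size=(10, 3))
points = np.vstack([vtx_a, vtx_b, noise])

labels = DBSCAN(eps=120, min_samples=5).fit_predict(points)
for k in set(labels) - {-1}:            # -1 marks unclustered noise
    print(f"vertex candidate {k}: {np.mean(points[labels == k], axis=0)}")
```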