Testing Efficient Risk Sharing with Heterogeneous Risk Preferences: Semi-parametric Tests with an Application to Village Economies
Previous tests of efficient risk sharing have assumed that households have identical risk preferences. This assumption is equivalent to the restriction that households can pool their
resources, but cannot optimally allocate them according to individual risk preferences. In this paper, we first test the hypothesis of homogeneous risk preferences and reject it. This
result implies that previous tests should have rejected efficiency even if households are perfectly sharing risk. We then derive two
tests of efficient risk sharing that allow for heterogeneity in risk preferences. Using the two tests, we cannot reject efficient risk sharing.
Keywords: Risk Sharing, Efficiency, Heterogeneous Risk Preferences
Possible use of self-calibration to reduce systematic uncertainties in determining distance-redshift relation via gravitational radiation from merging binaries
By observing mergers of compact objects, future gravitational wave experiments
would measure the luminosity distance to a large number of sources to a high
precision but not their redshifts. Given the directional sensitivity of an
experiment, a fraction of such sources (gold plated -- GP) can be identified
optically as single objects in the direction of the source. We show that if an
approximate distance-redshift relation is known then it is possible to
statistically resolve those sources that have multiple galaxies in the beam. We
study the feasibility of using gold plated sources to iteratively resolve the
unresolved sources, obtain the self-calibrated best possible distance-redshift
relation and provide an analytical expression for the accuracy achievable. We
derive a lower limit on the total number of sources needed to achieve
this accuracy through self-calibration. We show that this limit depends
exponentially on the beam width and give estimates for various experimental
parameters representative of future gravitational wave experiments DECIGO and
BBO.
Comment: 6 pages, 2 figures, accepted for publication in PR
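The iterative scheme described above can be illustrated with a toy simulation. Everything below (the quadratic toy distance-redshift relation, the 1% noise level, the number of candidate host galaxies per beam) is an invented stand-in for the paper's actual setup, not its published analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy distance-redshift relation d(z) = z + 0.3 z^2 (coefficients low-to-high).
true_coeffs = np.array([0.0, 1.0, 0.3])

def d_of_z(z, coeffs):
    return np.polynomial.polynomial.polyval(z, coeffs)

# Gold-plated (GP) sources: both distance and redshift known.
z_gp = rng.uniform(0.1, 2.0, 50)
d_gp = d_of_z(z_gp, true_coeffs) * (1 + 0.01 * rng.standard_normal(50))

# Unresolved sources: a measured distance, but several candidate host
# galaxies (hence candidate redshifts) in the beam; the true one is among them.
z_true = rng.uniform(0.1, 2.0, 200)
d_obs = d_of_z(z_true, true_coeffs) * (1 + 0.01 * rng.standard_normal(200))
candidates = [np.append(z, rng.uniform(0.1, 2.0, 3)) for z in z_true]

# Step 1: fit an approximate relation from the GP sources alone.
coeffs = np.polynomial.polynomial.polyfit(z_gp, d_gp, 2)

# Step 2: iterate -- for each unresolved source pick the candidate redshift
# most consistent with the current relation, then refit on everything.
for _ in range(5):
    z_pick = np.array([c[np.argmin(np.abs(d_of_z(c, coeffs) - d))]
                       for c, d in zip(candidates, d_obs)])
    coeffs = np.polynomial.polynomial.polyfit(
        np.concatenate([z_gp, z_pick]), np.concatenate([d_gp, d_obs]), 2)
```

The self-calibrated fit absorbs the statistically resolved sources into the sample; the paper's analytical accuracy expression and the exponential dependence on beam width are not reproduced here.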
Estimation of Cosmological Parameters from HI Observations of Post-reionization Epoch
The emission from neutral hydrogen (HI) clouds in the post-reionization era
(z < 6), too faint to be individually detected, is present as a diffuse
background in all low frequency radio observations below 1420 MHz. The angular
and frequency fluctuations of this radiation (~ 1 mK) are an important future
probe of the large scale structures in the Universe. We show that such
observations are a very effective probe of the background cosmological model
and the perturbed Universe. In our study we focus on the possibility of
determining the redshift space distortion parameter, coordinate distance and
its derivative with redshift. Using reasonable estimates for the observational
uncertainties and configurations representative of the ongoing and upcoming
radio interferometers, we predict that the parameters can be estimated at a
precision comparable with supernova Ia observations and galaxy redshift surveys, across a
wide range in redshift that is only partially accessed by other probes. Future
HI observations of the post-reionization era present a new technique,
complementing several existing ones, to probe the expansion history and to
elucidate the nature of dark energy.
Comment: 11 pages, 5 figures
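Forecasts of this kind are commonly phrased as Fisher-matrix estimates. The sketch below shows only the generic mechanics on an invented two-parameter toy observable; it is not the paper's HI power-spectrum model, and every number in it is illustrative:

```python
import numpy as np

# Toy observable O(z) = beta * z + r0 * z^2 measured in redshift bins;
# beta and r0 stand in for the distortion parameter and coordinate distance.
z_bins = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
sigma = 0.05 * np.ones_like(z_bins)   # assumed per-bin measurement errors

fid = {"beta": 0.7, "r0": 1.0}        # fiducial parameter values

def observable(z, beta, r0):
    return beta * z + r0 * z ** 2

def deriv(name, eps=1e-5):
    # Central finite difference of the observable w.r.t. one parameter.
    up, dn = dict(fid), dict(fid)
    up[name] += eps
    dn[name] -= eps
    return (observable(z_bins, **up) - observable(z_bins, **dn)) / (2 * eps)

params = ["beta", "r0"]
F = np.array([[np.sum(deriv(a) * deriv(b) / sigma ** 2)
               for b in params] for a in params])

cov = np.linalg.inv(F)                # forecast parameter covariance
errors = np.sqrt(np.diag(cov))        # 1-sigma marginalized uncertainties
```

Inverting the Fisher matrix rather than taking 1/sqrt(F_ii) marginalizes over the other parameter, which is the quantity usually quoted in such forecasts.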
Outage-Watch: Early Prediction of Outages using Extreme Event Regularizer
Cloud services are omnipresent and critical cloud service failure is a fact
of life. In order to retain customers and prevent revenue loss, it is important
to provide high reliability guarantees for these services. One way to do this
is by predicting outages in advance, which can help in reducing the severity as
well as time to recovery. It is difficult to forecast critical failures due to
the rarity of these events. Moreover, critical failures are ill-defined in
terms of observable data. Our proposed method, Outage-Watch, defines critical
service outages as deteriorations in the Quality of Service (QoS) captured by a
set of metrics. Outage-Watch detects such outages in advance by using current
system state to predict whether the QoS metrics will cross a threshold and
initiate an extreme event. A mixture of Gaussians is used to model the
distribution of the QoS metrics for flexibility, and an extreme event
regularizer helps improve learning in the tail of the distribution. An outage
is predicted if the probability of any one of the QoS metrics crossing its
threshold changes significantly. Our evaluation on a real-world SaaS company
dataset shows that Outage-Watch significantly outperforms traditional methods
with an average AUC of 0.98. Additionally, Outage-Watch detects all the outages
exhibiting a change in service metrics and reduces the Mean Time To Detection
(MTTD) of outages by up to 88% when deployed in an enterprise cloud-service
system, demonstrating the efficacy of our proposed method.
Comment: Accepted to ESEC/FSE 202
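The mixture-plus-threshold idea can be sketched in a few lines. The two-component mixture, the metric values, and the threshold below are all hypothetical; the actual Outage-Watch model and its extreme event regularizer are learned from data:

```python
import math

def gauss_sf(x, mu, sigma):
    # Survival function of N(mu, sigma^2): P(X > x).
    return 0.5 * math.erfc((x - mu) / (sigma * math.sqrt(2.0)))

def crossing_prob(threshold, weights, means, stds):
    # P(X > t) under a Gaussian mixture: sum_k w_k * P_k(X > t).
    return sum(w * gauss_sf(threshold, m, s)
               for w, m, s in zip(weights, means, stds))

# Hypothetical latency metric (ms): a dominant normal regime plus a tail mode.
p_normal = crossing_prob(500.0, [0.95, 0.05], [100.0, 400.0], [15.0, 80.0])

# Deteriorating service: the tail component gains weight, so the predicted
# crossing probability shifts sharply -- the cue for predicting an outage.
p_degraded = crossing_prob(500.0, [0.60, 0.40], [100.0, 400.0], [15.0, 80.0])
```

A significant jump from p_normal to p_degraded between successive system states is what would trigger an early outage prediction in this toy picture.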
ESRO: Experience Assisted Service Reliability against Outages
Modern cloud services are prone to failures due to their complex
architecture, making diagnosis a critical process. Site Reliability Engineers
(SREs) spend hours leveraging multiple sources of data, including the alerts,
error logs, and domain expertise through past experiences to locate the root
cause(s). These experiences are documented as natural language text in outage
reports for previous outages. However, utilizing the raw yet rich
semi-structured information in the reports systematically is time-consuming.
Structured information, on the other hand, such as alerts that are often used
during fault diagnosis, is voluminous and requires expert knowledge to discern.
Several strategies have been proposed to use each source of data separately for
root cause analysis. In this work, we build a diagnostic service called ESRO
that recommends root causes and remediation for failures by utilizing
structured as well as semi-structured sources of data systematically. ESRO
constructs a causal graph using alerts and a knowledge graph using outage
reports, and merges them in a novel way to form a unified graph during
training. A retrieval-based mechanism is then used to search the unified graph
and rank the likely root causes and remediation techniques based on the alerts
fired during an outage at inference time. Not only the individual alerts but
also their respective importance in predicting an outage group are taken into
account during the recommendation. We evaluated our model on several cloud service outages
of a large SaaS enterprise over the course of ~2 years, and obtained an average
improvement of 27% in ROUGE scores after comparing the likely root causes
against the ground truth over state-of-the-art baselines. We further establish
the effectiveness of ESRO through qualitative analysis on multiple real outage
examples.
Comment: Accepted to the 38th IEEE/ACM International Conference on Automated Software Engineering (ASE 2023)
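The unified-graph retrieval can be caricatured with plain dictionaries. All alert names, outage groups, and mappings below are invented, and the weighted-overlap scoring is a bare-bones stand-in for ESRO's actual graph construction and retrieval mechanism:

```python
from collections import defaultdict

# Causal-graph side: which outage groups each alert has preceded in the past.
causal_edges = {
    "cpu_high":    ["db_overload"],
    "disk_full":   ["db_overload", "log_pipeline_stall"],
    "latency_p99": ["db_overload"],
}

# Knowledge-graph side: root cause and remediation mined from outage reports.
knowledge = {
    "db_overload":        ("connection pool exhaustion", "recycle pool"),
    "log_pipeline_stall": ("stale consumer offset", "reset offsets"),
}

def rank_root_causes(fired_alerts, weights=None):
    # Score each outage group by the (importance-weighted) number of fired
    # alerts pointing at it, then attach its root cause and remediation.
    weights = weights or {a: 1.0 for a in fired_alerts}
    scores = defaultdict(float)
    for alert in fired_alerts:
        for group in causal_edges.get(alert, []):
            scores[group] += weights[alert]
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [(g, *knowledge[g]) for g in ranked]

ranking = rank_root_causes(["cpu_high", "latency_p99", "disk_full"])
```

Passing a non-uniform `weights` dict is where the per-alert importance mentioned in the abstract would enter this toy ranking.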
Towards Optimizing Storage Costs on the Cloud
We study the problem of optimizing data storage and access costs on the cloud
while ensuring that the desired performance or latency is unaffected. We first
propose an optimizer that optimizes the data placement tier (on the cloud) and
the choice of compression schemes to apply, for given data partitions with
temporal access predictions. Secondly, we propose a model to learn the
compression performance of multiple algorithms across data partitions in
different formats to generate compression performance predictions on the fly,
as inputs to the optimizer. Thirdly, we propose to approach the data
partitioning problem fundamentally differently than the current default in most
data lakes where partitioning is in the form of ingestion batches. We propose
access pattern aware data partitioning and formulate an optimization problem
that optimizes the size and reading costs of partitions subject to access
patterns.
We study the various optimization problems theoretically as well as
empirically, and provide theoretical bounds as well as hardness results. We
propose a unified pipeline of cost minimization, called SCOPe, that combines the
different modules. We extensively compare the performance of our methods with
related baselines from the literature on TPC-H data as well as enterprise
datasets (ranging from GB to PB in volume) and show that SCOPe substantially
improves over the baselines. We show significant cost savings compared to
platform baselines, of the order of 50% to 83% on enterprise Data Lake datasets
that range from terabytes to petabytes in volume.
Comment: The first two authors contributed equally. 12 pages. Accepted to the International Conference on Data Engineering (ICDE) 202
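The tier-plus-compression choice at the heart of the optimizer is, in its simplest form, a constrained minimization over a small discrete space. The prices, latencies, and compression ratios below are invented placeholders, and the real SCOPe optimizer additionally uses learned compression predictions and temporal access forecasts:

```python
from itertools import product

tiers = {          # assumed $/GB-month storage price, access latency (ms)
    "hot":     (0.020, 5.0),
    "cool":    (0.010, 60.0),
    "archive": (0.002, 500.0),
}
codecs = {         # assumed compression ratio, extra decode latency (ms)
    "none": (1.0, 0.0),
    "lz4":  (2.0, 2.0),
    "zstd": (3.0, 8.0),
}

def place(partition_gb, latency_slo_ms):
    # Choose the (tier, codec) pair minimizing monthly storage cost while
    # keeping total access latency within the partition's latency SLO.
    best = None
    for (t, (price, t_lat)), (c, (ratio, c_lat)) in product(
            tiers.items(), codecs.items()):
        if t_lat + c_lat > latency_slo_ms:
            continue
        cost = price * partition_gb / ratio
        if best is None or cost < best[0]:
            best = (cost, t, c)
    return best
```

A latency-sensitive partition is forced onto the hot tier with light compression, while a rarely accessed one drifts to the archive tier with heavy compression, which is the intuition behind the access-pattern-aware formulation.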
Using Gravitational Lensing to study HI clouds at high redshift
We investigate the possibility of detecting HI emission from gravitationally
lensed HI clouds (akin to damped Lyman-α clouds) at high redshift by
carrying out deep radio observations in the fields of known cluster lenses.
Such observations will be possible with present radio telescopes only if the
lens substantially magnifies the flux of the HI emission. While at present this
holds the only possibility of detecting the HI emission from such clouds, it
has the disadvantage of being restricted to clouds that lie very close to the
caustics of the lens. We find that observations at a detection threshold of 50
micro Jy at 320 MHz (possible with the GMRT) have a greater than 20%
probability of detecting an HI cloud in the field of a cluster, provided the
clouds have HI masses in the range 5 X 10^8 M_{\odot} < M_{HI} < 2.5 X 10^{10}
M_{\odot}. The probability of detecting a cloud increases if they have larger
HI masses, except in the cases where the number of HI clouds in the cluster
field becomes very small. The probability of a detection at 610 MHz and 233 MHz
is comparable to that at 320 MHz, though a definitive statement is difficult
owing to uncertainties in the HI content at the redshifts corresponding to
these frequencies. Observations at a detection threshold of 2 micro Jy
(possible in the future with the SKA) are expected to detect a few HI clouds in
the field of every cluster provided the clouds have HI masses in the range 2 X
10^7 M_{\odot} < M_{HI} < 10^9 M_{\odot}. Even if such observations do not
result in the detection of HI clouds, they will be able to put useful
constraints on the HI content of the clouds.
Comment: 21 pages, 7 figures, minor changes in figures, accepted for publication in Ap
Nanostructured Oxide Thin Films for Sustainable Development
In the effort to emancipate mankind from fossil fuel dependence and minimize CO2 emissions, efficient transport and conversion of energy are required. Advanced materials such as superconductors and thermoelectrics are expected to play an important role in sustainable science and development. We present an overview of our recent progress on nanostructured thin films of superconducting and thermoelectric oxides. Superconducting properties of YBa2Cu3Ox and thermoelectric properties of Al-doped ZnO are described in relation to preparation techniques, experimental conditions, substrates used, structure and morphology. We especially discuss a nanoengineering approach for enhancing the energy transport and energy conversion efficiency of oxide thin films compared to their bulk counterparts.
The 3rd International Conference on Sustainable Civil Engineering Structures and Construction Materials (SCESCM 2016), Bali, Indonesia, 5-7 September 201