High Availability Cluster System for Local Disaster Recovery with Markov Modeling Approach
The need for high availability (HA) and disaster recovery (DR) is more
stringent in IT environments than in most other sectors of enterprise.
Many businesses require the availability of business-critical applications 24
hours a day, seven days a week, and can afford no data loss in the event of a
disaster. It is vital that the IT infrastructure is resilient with regard to
disruption, even site failures, and that business operations can continue
without significant impact. As a result, DR has gained great importance in IT.
Clustering multiple industry-standard servers to allow workload
sharing and failover capabilities is a low-cost approach. In this paper, we
present the availability model through Semi-Markov Process (SMP) and also
analyze the difference in downtime of the SMP model and the approximate
Continuous Time Markov Chain (CTMC) model. To obtain the system availability, we
perform numerical analysis and evaluation with the SHARPE tool.
Comment: International Journal of Computer Science Issues, IJCSI Volume 6, Issue 2, pp. 25-32, November 200
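The SMP/CTMC comparison above builds on a standard primitive: a two-state up/down CTMC whose steady-state availability is mu / (lambda + mu). The sketch below uses illustrative failure and repair rates of my own choosing, not the paper's model or parameters.

```python
# Minimal two-state CTMC availability sketch (not the paper's SMP model).
# The failure and repair rates below are illustrative assumptions.

failure_rate = 1 / 1000.0   # failures per hour (assumed)
repair_rate = 1 / 4.0       # repairs per hour, i.e. 4 h mean repair (assumed)

# Steady-state availability of a two-state up/down CTMC: A = mu / (lambda + mu)
availability = repair_rate / (failure_rate + repair_rate)

# Expected downtime over one year (8760 hours)
annual_downtime_hours = (1 - availability) * 8760

print(f"availability = {availability:.6f}")
print(f"annual downtime = {annual_downtime_hours:.2f} h")
```

With these rates the model predicts roughly 35 hours of downtime per year, which is the kind of headline number such availability analyses produce.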
Forecasting High-Dimensional Realized Volatility Matrices Using A Factor Model
Modeling and forecasting covariance matrices of asset returns play a crucial
role in finance. The availability of high frequency intraday data enables the
modeling of the realized covariance matrix directly. However, most models in
the literature suffer from the curse of dimensionality. To solve the problem,
we propose a factor model with a diagonal CAW model for the factor realized
covariance matrices. Asymptotic theory is derived for the estimated parameters.
In an extensive empirical analysis, we find that the number of parameters can
be reduced significantly. Furthermore, the proposed model maintains
performance comparable to a benchmark vector autoregressive model.
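The parameter reduction claimed above comes from the factor structure itself: a p x p covariance matrix Sigma = B Sigma_f B' + D needs far fewer free parameters than an unrestricted one. The sketch below illustrates the count with assumed sizes; it is not the paper's CAW specification.

```python
import numpy as np

# Sketch of the parameter reduction from a factor structure for a p x p
# covariance matrix: Sigma = B @ Sigma_f @ B.T + D, with k factors << p
# assets. Sizes and values are illustrative assumptions, not the paper's
# diagonal CAW model.

rng = np.random.default_rng(0)
p, k = 100, 3                                # assets, factors (assumed)
B = rng.standard_normal((p, k))              # factor loadings
Sigma_f = np.diag([2.0, 1.0, 0.5])           # diagonal factor covariance
D = np.diag(rng.uniform(0.1, 0.3, size=p))   # idiosyncratic variances

Sigma = B @ Sigma_f @ B.T + D                # implied asset covariance

# Free parameters: loadings + factor variances + idiosyncratic variances
factor_params = p * k + k + p
full_params = p * (p + 1) // 2               # unrestricted symmetric matrix
print(factor_params, full_params)            # 403 vs 5050
```

The factor form also guarantees a positive-definite Sigma by construction, which an unrestricted forecast need not.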
SINR Model with Best Server Association for High Availability Studies of Wireless Networks
The signal-to-interference-and-noise ratio (SINR) is of key importance for
the analysis and design of wireless networks. For addressing new requirements
imposed on wireless communication, in particular high availability, a highly
accurate modeling of the SINR is needed. We propose a stochastic model of the
SINR distribution where shadow fading is characterized by random variables.
Therein, the impact of shadow fading on the user association is incorporated by
modification of the distributions involved. The SINR model is capable of
describing all parts of the SINR distribution in detail, especially the left
tail, which is of interest for studies of high availability.
Comment: 11 pages, 4 figures, accepted for publication in IEEE Wireless Communications Letters
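The ingredients named above (shadow fading as random variables, best-server association) can be illustrated with a plain Monte-Carlo sketch: draw log-normal shadowing, attach the user to the strongest base station, and treat the rest as interference. All parameters below are my own illustrative assumptions, not the paper's stochastic model.

```python
import numpy as np

# Monte-Carlo sketch of downlink SINR with log-normal shadow fading and
# best-server association. Path-loss exponent, shadowing spread, distances
# and noise power are illustrative assumptions, not the paper's model.

rng = np.random.default_rng(1)
n_bs, n_trials = 5, 20000
alpha = 3.5                  # path-loss exponent (assumed)
sigma_db = 6.0               # shadowing standard deviation in dB (assumed)
noise = 1e-9                 # noise power (assumed)

d = rng.uniform(50, 500, size=(n_trials, n_bs))            # BS distances [m]
shadow = 10 ** (rng.normal(0, sigma_db, size=(n_trials, n_bs)) / 10)
rx = d ** (-alpha) * shadow                                # received powers

# Best-server association: the user attaches to the strongest base station,
# so the serving power is the row maximum and the rest interfere.
serving = rx.max(axis=1)
interference = rx.sum(axis=1) - serving
sinr = serving / (interference + noise)
sinr_db = 10 * np.log10(sinr)

print("median SINR [dB]:", np.median(sinr_db))
```

The empirical left tail of `sinr_db` is exactly the region the abstract flags as critical for high-availability studies; the shadowing draws shift which station is "best", which is the association effect the model incorporates analytically.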
Availability Analysis of Redundant and Replicated Cloud Services with Bayesian Networks
Due to the growing complexity of modern data centers, failures are not
uncommon any more. Therefore, fault tolerance mechanisms play a vital role in
fulfilling the availability requirements. Multiple availability models have
been proposed to assess compute systems, among which Bayesian network models
have gained popularity in industry and research due to their powerful modeling
formalism. In particular, this work focuses on assessing the availability of
redundant and replicated cloud computing services with Bayesian networks. So
far, research on availability has focused only on modeling either
infrastructure or communication failures in Bayesian networks, but has not
considered both simultaneously. This work addresses practical modeling
challenges of assessing the availability of large-scale redundant and
replicated services with Bayesian networks, including cascading and
common-cause failures from the surrounding infrastructure and communication
network. In order to ease the modeling task, this paper introduces a high-level
modeling formalism to build such a Bayesian network automatically. Performance
evaluations demonstrate the feasibility of the presented Bayesian network
approach to assess the availability of large-scale redundant and replicated
services. This model is not only applicable in the domain of cloud computing;
it can also be applied to general cases of local and geo-distributed systems.
Comment: 16 pages, 12 figures, journal
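The common-cause and cascading failures mentioned above can be made concrete with a tiny hand-rolled example in the spirit of a Bayesian network: a shared switch is a parent of two service replicas, and the service is up only if the switch is up and at least one replica is up. The topology and probabilities are illustrative assumptions, not the paper's model or formalism.

```python
from itertools import product

# Hand-enumerated sketch of a small availability model with a common-cause
# parent: a shared switch feeds two replicas; if the switch fails, both
# replicas become unreachable (cascading failure). Probabilities are
# illustrative assumptions.

p_switch_up = 0.999
p_replica_up_given_switch_up = 0.99   # a replica can also fail on its own

service_up = 0.0
for switch, r1, r2 in product([True, False], repeat=3):
    p = p_switch_up if switch else 1 - p_switch_up
    pr = p_replica_up_given_switch_up if switch else 0.0
    p *= (pr if r1 else 1 - pr) * (pr if r2 else 1 - pr)
    if switch and (r1 or r2):          # service needs switch + one replica
        service_up += p

print(f"service availability = {service_up:.6f}")
```

Exhaustive enumeration works only for toy models; the point of the paper's high-level formalism is precisely to build and evaluate such conditional structures automatically at scale.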
The volatility of realized volatility
Using unobservable conditional variance as a measure, latent-variable approaches, such as GARCH and stochastic-volatility models, have traditionally dominated the empirical finance literature. In recent years, with the availability of high-frequency financial market data, modeling realized volatility has become a new and innovative research direction. By constructing "observable" or realized volatility series from intraday transaction data, the use of standard time series models, such as ARFIMA models, has become a promising strategy for modeling and predicting (daily) volatility. In this paper, we show that the residuals of the commonly used time-series models for realized volatility exhibit non-Gaussianity and volatility clustering. We propose extensions to explicitly account for these properties and assess their relevance when modeling and forecasting realized volatility. In an empirical application for S&P500 index futures we show that allowing for time-varying volatility of realized volatility leads to a substantial improvement of the model's fit as well as predictive performance. Furthermore, the distributional assumption for residuals plays a crucial role in density forecasting.
Classification: C22, C51, C52, C5
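The "observable" volatility series mentioned above is built in one step: the realized variance of day t is the sum of squared intraday returns within that day. The sketch below simulates the construction; the sampling frequency and volatility level are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Sketch of constructing a daily realized-volatility series from intraday
# returns: RV_t = sum of squared intraday returns within day t. The
# simulated returns are illustrative; a real application would use
# high-frequency transaction data.

rng = np.random.default_rng(2)
n_days, n_intraday = 250, 78            # e.g. 5-minute returns, 6.5h session
true_daily_vol = 0.01                   # assumed constant daily volatility
r = rng.normal(0, true_daily_vol / np.sqrt(n_intraday),
               size=(n_days, n_intraday))

rv = (r ** 2).sum(axis=1)               # realized variance per day
realized_vol = np.sqrt(rv)              # "observable" daily volatility series

print("mean realized vol:", realized_vol.mean())
```

The resulting `realized_vol` series is what standard time-series models such as ARFIMA are then fitted to; the paper's point is that the residuals of those fits are themselves non-Gaussian and heteroskedastic.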
Interoperator fixed-mobile network sharing
We propose the novel idea of interoperator fixed-mobile network sharing,
which can be software-defined and readily deployed. We study the benefits which
the sharing brings in terms of resiliency, and show that, with the appropriate
placement of a few active nodes, the mean service downtime can be reduced more
than threefold by providing interoperator communication to as few as one
optical network unit in one hundred. The implementation of the proposed idea
can be carried out in stages when needed (the pay-as-you-grow deployment), and
in those parts of the network where high service availability is needed most,
e.g., in a business district. While performance is expected to increase, we
show that resiliency is obtained almost out of thin air by using redundant
resources of different operators. We evaluated the service availability for
87400 networks with the relative standard error of the sample mean below 1%.
Comment: 19th International Conference on Optical Network Design and Modeling (ONDM), pp. 192-197, May 201
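The "resiliency out of thin air" effect has a simple back-of-the-envelope core: with two statistically independent paths through different operators, the service is down only when both are down. The availability figure below is an illustrative assumption, not a value from the paper.

```python
# Back-of-the-envelope sketch of why interoperator redundancy cuts downtime:
# with two independent paths through different operators, the service is
# down only when both are down. The single-path availability is assumed.

a_single = 0.999                      # availability of one operator's path
downtime_single = 1 - a_single

a_shared = 1 - (1 - a_single) ** 2    # parallel redundancy across operators
downtime_shared = 1 - a_shared

print(f"downtime reduction factor: {downtime_single / downtime_shared:.0f}x")
```

Real deployments fall short of this independent-failure bound (shared conduits, correlated outages), which is why the paper measures the achievable downtime reduction over many sampled network topologies rather than assuming independence.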
ROMEO: A Plug-and-play Software Platform of Robotics-inspired Algorithms for Modeling Biomolecular Structures and Motions
Motivation: Due to the central role of protein structure in molecular
recognition, great computational efforts are devoted to modeling protein
structures and motions that mediate structural rearrangements. The size,
dimensionality, and non-linearity of the protein structure space present
outstanding challenges. Such challenges also arise in robot motion planning,
and robotics-inspired treatments of protein structure and motion are
increasingly showing high exploration capability. Encouraged by such findings,
we debut here ROMEO, which stands for Robotics prOtein Motion ExplOration
framework. ROMEO is an open-source, object-oriented platform that allows
researchers access to and reproducibility of published robotics-inspired
algorithms for modeling protein structures and motions, as well as facilitates
novel algorithmic design via its plug-and-play architecture.
Availability and implementation: ROMEO is written in C++ and is available in
GitLab (https://github.com/). This software is freely available under the
Creative Commons license (Attribution and Non-Commercial).
Contact: [email protected]
Comment: 6 pages, 5 figures
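The robotics-inspired algorithms that a platform like ROMEO packages descend from sampling-based motion planners; a minimal 2D RRT (rapidly-exploring random tree) conveys the idea. The obstacle-free unit square, step size, and seed below are illustrative assumptions, not ROMEO code.

```python
import math
import random

# Minimal 2D RRT sketch: repeatedly sample a random point, find the
# nearest tree node, and extend one fixed step toward the sample.
# Workspace, step size, and goal tolerance are illustrative assumptions.

random.seed(0)
step = 0.05
start, goal = (0.1, 0.1), (0.9, 0.9)
nodes = [start]
parent = {start: None}

for _ in range(2000):
    sample = (random.random(), random.random())
    near = min(nodes, key=lambda n: math.dist(n, sample))  # nearest node
    d = math.dist(near, sample)
    if d == 0:
        continue
    # Step from the nearest node toward the sample
    new = (near[0] + step * (sample[0] - near[0]) / d,
           near[1] + step * (sample[1] - near[1]) / d)
    nodes.append(new)
    parent[new] = near
    if math.dist(new, goal) < step:
        break

# Walk parent pointers back from the last node to recover a path
path, n = [], nodes[-1]
while n is not None:
    path.append(n)
    n = parent[n]
path.reverse()
print("tree size:", len(nodes), "path nodes:", len(path))
```

For protein conformations the same loop runs in a much higher-dimensional, non-linear space with energetic feasibility checks in place of simple collision tests, which is exactly the challenge the abstract highlights.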
Neural Audio Synthesis of Musical Notes with WaveNet Autoencoders
Generative models in vision have seen rapid progress due to algorithmic
improvements and the availability of high-quality image datasets. In this
paper, we offer contributions in both these areas to enable similar progress in
audio modeling. First, we detail a powerful new WaveNet-style autoencoder model
that conditions an autoregressive decoder on temporal codes learned from the
raw audio waveform. Second, we introduce NSynth, a large-scale and high-quality
dataset of musical notes that is an order of magnitude larger than comparable
public datasets. Using NSynth, we demonstrate improved qualitative and
quantitative performance of the WaveNet autoencoder over a well-tuned spectral
autoencoder baseline. Finally, we show that the model learns a manifold of
embeddings that allows for morphing between instruments, meaningfully
interpolating in timbre to create new types of sounds that are realistic and
expressive.
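One concrete, standard piece of the WaveNet-style pipeline is worth sketching: 8-bit mu-law companding, commonly used to quantize raw audio into 256 classes for an autoregressive softmax output. This is the standard G.711-style transform, not code from the paper.

```python
import numpy as np

# Sketch of 8-bit mu-law companding, the standard transform WaveNet-style
# models commonly use to turn raw audio in [-1, 1] into 256 integer
# classes for a categorical (softmax) output.

def mu_law_encode(x, mu=255):
    """Map a waveform in [-1, 1] to integer classes in [0, mu]."""
    y = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    return ((y + 1) / 2 * mu + 0.5).astype(np.int64)   # round to nearest

def mu_law_decode(q, mu=255):
    """Approximate inverse of mu_law_encode."""
    y = 2 * q.astype(np.float64) / mu - 1
    return np.sign(y) * ((1 + mu) ** np.abs(y) - 1) / mu

wave = np.sin(np.linspace(0, 2 * np.pi, 100))
codes = mu_law_encode(wave)
recon = mu_law_decode(codes)
print("max abs round-trip error:", np.abs(wave - recon).max())
```

The logarithmic warp allocates more of the 256 levels to small amplitudes, which matches the perceptual distribution of audio energy and keeps the categorical prediction problem tractable.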
D2D-Aware Device Caching in MmWave-Cellular Networks
In this paper, we propose a novel policy for device caching that facilitates
popular content exchange through high-rate device-to-device (D2D)
millimeter-wave (mmWave) communication. The D2D-aware caching (DAC) policy
splits the cacheable content into two content groups and distributes it
randomly to the user equipment devices (UEs), with the goal to enable D2D
connections. By exploiting the high bandwidth availability and the
directionality of mmWaves, we ensure high rates for the D2D transmissions,
while mitigating the co-channel interference that limits the throughput gains
of D2D communication in the sub-6 GHz bands. Furthermore, based on a
stochastic-geometry modeling of the network topology, we analytically derive
the offloading gain that is achieved by the proposed policy and the
distribution of the content retrieval delay considering both half- and
full-duplex mode for the D2D communication. The accuracy of the proposed
analytical framework is validated through Monte-Carlo simulations. In addition,
for a wide range of the content popularity indicator, the results show that the
proposed policy achieves higher offloading gains and lower content-retrieval
delays than existing state-of-the-art approaches.
Comment: added main body of the paper
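The "content popularity indicator" in such caching analyses is typically the exponent of a Zipf law over the content catalogue; the hit probability of caching the most popular files then follows directly. The catalogue size, cache size, and exponents below are illustrative assumptions, not the paper's setting.

```python
import numpy as np

# Sketch of the popularity side of a caching analysis: requests follow a
# Zipf law with exponent gamma over an n-file catalogue, and caching the
# C most popular files yields the hit probability below. All sizes and
# exponents are illustrative assumptions.

def zipf_hit_probability(n_files, cache_size, gamma):
    ranks = np.arange(1, n_files + 1)
    popularity = ranks ** (-gamma)
    popularity /= popularity.sum()          # normalize to a distribution
    return popularity[:cache_size].sum()    # mass of the cached head

for gamma in (0.6, 1.0, 1.4):               # popularity skew (indicator)
    hit = zipf_hit_probability(n_files=1000, cache_size=50, gamma=gamma)
    print(f"gamma={gamma}: hit probability = {hit:.3f}")
```

Sweeping gamma like this is what "a wide range of the content popularity indicator" refers to: more skewed popularity concentrates requests on few files, so a small cache offloads a large share of the traffic.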
Localized Realized Volatility Modelling
With the recent availability of high-frequency financial data, the long range dependence of volatility has regained researchers' interest and has led to the consideration of long memory models for realized volatility. The long range diagnosis of volatility, however, is usually stated for long sample periods, while for small sample sizes, such as e.g. one year, the volatility dynamics appear to be better described by short-memory processes. The ensemble of these seemingly contradictory phenomena points towards short memory models of volatility with nonstationarities, such as structural breaks or regime switches, that spuriously generate a long memory pattern (see e.g. Diebold and Inoue, 2001; Mikosch and Starica, 2004b). In this paper we adopt this view on the dependence structure of volatility and propose a localized procedure for modeling realized volatility. That is, at each point in time we determine a past interval over which volatility is approximated by a local linear process. Using S&P500 data we find that our local approach outperforms long memory type models in terms of predictability.
Keywords: Localized Autoregressive Modeling, Realized Volatility, Adaptive Procedure
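The localized idea above can be caricatured in a few lines: rather than fitting one long-memory model to the full sample, fit a short-memory AR(1) on a recent window only and forecast one step ahead. The simulated regime-switching series and window length are illustrative assumptions, not the paper's adaptive interval-selection procedure.

```python
import numpy as np

# Sketch of the "localized" idea: a short-memory AR(1) fitted on a recent
# window of a realized-volatility series whose level switched mid-sample
# (a nonstationarity that mimics spurious long memory). Series and window
# length are illustrative assumptions; the paper selects the interval
# adaptively rather than fixing it.

rng = np.random.default_rng(3)
rv = np.concatenate([0.01 + 0.002 * rng.standard_normal(300),   # regime 1
                     0.02 + 0.002 * rng.standard_normal(300)])  # regime 2

window = 60                               # local estimation window (assumed)
y, x = rv[-window:][1:], rv[-window:][:-1]

# OLS fit of rv_t = a + b * rv_{t-1} on the local window only
b = np.cov(x, y, bias=True)[0, 1] / np.var(x)
a = y.mean() - b * x.mean()
forecast = a + b * rv[-1]

print(f"local AR(1): a={a:.4f}, b={b:.3f}, forecast={forecast:.4f}")
```

Because the window lies entirely in the post-break regime, the local fit tracks the new volatility level, whereas a long-memory model estimated on the full sample would be pulled toward the pre-break regime.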