4,147 research outputs found
Deep Forecast: Deep Learning-based Spatio-Temporal Forecasting
The paper presents a spatio-temporal wind speed forecasting algorithm using
Deep Learning (DL) and, in particular, Recurrent Neural Networks (RNNs). Motivated
by recent advances in renewable energy integration and smart grids, we apply
our proposed algorithm for wind speed forecasting. Renewable energy resources
(wind and solar) are random in nature and, thus, their integration is
facilitated by accurate short-term forecasts. In our proposed framework, we
model the spatiotemporal information by a graph whose nodes are data-generating
entities and whose edges model how these nodes interact with
each other. One of the main contributions of our work is the fact that we
obtain forecasts of all nodes of the graph at the same time based on one
framework. Results of a case study on recorded time series data from a
collection of windmills in the north-east of the U.S. show that the proposed
DL-based forecasting algorithm significantly improves the short-term forecasts
compared to a set of widely used benchmark models.
Comment: Accepted to the ICML 2017 Time Series Workshop. arXiv admin note:
text overlap with arXiv:1503.0121
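The graph-based idea above can be sketched in miniature. The snippet below is an illustrative stand-in only: a linear autoregression over a toy graph replaces the paper's RNN, and the weights, graph, and data are hypothetical choices, not the paper's model. It does show the stated contribution that all nodes are forecast at the same time from one framework.

```python
# Illustrative sketch (not the paper's model): each node's next wind speed is
# predicted jointly from its own latest value and its graph neighbors' latest
# values. A simple weighted average stands in for the RNN; w_self, w_neigh,
# and the toy graph are hypothetical.

def forecast_all_nodes(history, adjacency, w_self=0.6, w_neigh=0.4):
    """One-step-ahead forecast for every node of the graph simultaneously.

    history:   {node: [older, ..., latest]} recent wind speeds per node
    adjacency: {node: [neighbor, ...]} edges of the spatial graph
    """
    forecasts = {}
    for node, series in history.items():
        own = series[-1]
        neigh = adjacency.get(node, [])
        neigh_avg = (sum(history[n][-1] for n in neigh) / len(neigh)) if neigh else own
        forecasts[node] = w_self * own + w_neigh * neigh_avg
    return forecasts

# Toy 3-node line graph: A - B - C
hist = {"A": [5.0, 6.0], "B": [7.0, 8.0], "C": [4.0, 4.0]}
adj = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
print(forecast_all_nodes(hist, adj))  # forecasts for all three nodes at once
```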
Octet and decuplet contribution to the proton self energy
Within the hadronic language of Chiral Perturbation Theory we present the
full leading-order octet baryon-meson and decuplet baryon-meson
contribution to the proton self energy and thus to its wave function
renormalization factor. By Fock-expanding the physical proton state into
its bare and hadron-cloud part, we show how each individual baryon-meson
probability depends on the average momenta of the particles in the fluctuation.
We present how the results depend on the choice of the form factor involved in
the regularization (Gaussian or Besselian) and how they depend on the cut-off
parameter. We also show how the results vary with respect to a variation of the
decuplet coupling constant. The momentum distributions of the
fluctuations are given and the fluctuations' relative probabilities are
presented.
We show that for reasonable values of the cut-off parameter, the Delta-pion
fluctuation is of the same strength as the nucleon-pion fluctuation.
Comment: 32 pages, 10 figures, fixed Ref.-format, added a Ref., fixed a couple
of irrelevant typos
On the method of likelihood-induced priors
We demonstrate that the functional form of the likelihood contains a
sufficient amount of information for constructing a prior for the unknown
parameters. We develop a four-step algorithm by invoking the information
entropy as the measure of uncertainty and show how the information gained from
coarse-graining and resolving power of the likelihood can be used to construct
the likelihood-induced priors. As a consequence, we show that if the data model
density belongs to the exponential family, the likelihood-induced prior is the
conjugate prior to the corresponding likelihood.
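The exponential-family claim above can be verified numerically in one special case. The check below is only an illustration of conjugacy for a Bernoulli likelihood with a Beta prior; the abstract's four-step construction of likelihood-induced priors is more general and is not reproduced here. The prior and data values are hypothetical.

```python
# Illustrative conjugacy check: multiplying a Beta(a, b) prior by a Bernoulli
# likelihood with k successes in n trials gives (up to normalization) a
# Beta(a + k, b + n - k) density, i.e. the posterior stays in the prior family.
from math import isclose

def beta_unnorm(theta, a, b):
    """Unnormalized Beta density."""
    return theta ** (a - 1) * (1 - theta) ** (b - 1)

def bernoulli_lik(theta, k, n):
    """Likelihood of k successes in n Bernoulli(theta) trials (no binomial factor)."""
    return theta ** k * (1 - theta) ** (n - k)

a, b = 2.0, 3.0        # hypothetical prior Beta(2, 3)
k, n = 4, 10           # hypothetical data: 4 successes in 10 trials

for theta in (0.2, 0.5, 0.7):
    posterior = beta_unnorm(theta, a, b) * bernoulli_lik(theta, k, n)
    conjugate = beta_unnorm(theta, a + k, b + n - k)
    assert isclose(posterior, conjugate)  # identical once normalizers are dropped
print("Beta prior is conjugate to the Bernoulli likelihood")
```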
Opinion Dynamics in Social Networks: A Local Interaction Game with Stubborn Agents
The process by which new ideas, innovations, and behaviors spread through a
large social network can be thought of as a networked interaction game: Each
agent obtains information from a certain number of agents in his friendship
neighborhood and adapts his idea or behavior to increase his benefit. In this
paper, we are interested in how opinions, about a certain topic, form in social
networks. We model opinions as continuous scalars ranging from 0 to 1, with 1 (0)
representing an extremely positive (negative) opinion. Each agent has an initial
opinion and incurs some cost depending on the opinions of his neighbors, his
initial opinion, and his stubbornness about his initial opinion. Agents
iteratively update their opinions based on their own initial opinions and
the observed opinions of their neighbors. The iterative update of an agent can
be viewed as a myopic cost-minimization response (i.e., the so-called best
response) to the others' actions. We study whether an equilibrium can emerge as
a result of such local interactions and how such equilibrium possibly depends
on the network structure, initial opinions of the agents, and the location of
stubborn agents and the extent of their stubbornness. We also study the
convergence speed to such equilibrium and characterize the convergence time as
a function of aforementioned factors. We also discuss the implications of such
results in a few well-known graphs such as Erdos-Renyi random graphs and
small-world graphs.
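The myopic best-response update described above can be sketched concretely. Assuming the common quadratic-cost model, where agent i pays K_i(x_i - x_i(0))^2 plus (x_i - x_j)^2 per neighbor j, the best response is a weighted average of the agent's initial opinion and the neighbors' current opinions. The graph, stubbornness values, and stopping rule below are illustrative, not the paper's experiments.

```python
# Best-response opinion dynamics with stubborn agents (quadratic-cost sketch):
# agent i's cost  K_i*(x_i - x_i0)^2 + sum_{j in N(i)} (x_i - x_j)^2
# is minimized by the weighted average computed in the loop body.

def best_response_dynamics(x0, adjacency, stubbornness, iters=200):
    x = dict(x0)
    for _ in range(iters):
        new = {}
        for i in x:
            k = stubbornness[i]
            neigh = adjacency[i]
            new[i] = (k * x0[i] + sum(x[j] for j in neigh)) / (k + len(neigh))
        x = new
    return x

# Toy path graph 0 - 1 - 2: stubborn agents at both ends, a flexible middle.
x0 = {0: 0.0, 1: 0.5, 2: 1.0}
adj = {0: [1], 1: [0, 2], 2: [1]}
stub = {0: 10.0, 1: 0.1, 2: 10.0}
eq = best_response_dynamics(x0, adj, stub)
print(eq)  # stubborn endpoints stay near 0 and 1; the middle settles between
```

The equilibrium here depends on both the locations of the stubborn agents and the extent of their stubbornness, mirroring the dependence the abstract studies.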
An Improved Bound for Minimizing the Total Weighted Completion Time of Coflows in Datacenters
In data-parallel computing frameworks, intermediate parallel data is often
produced at various stages and needs to be transferred among servers in the
datacenter network (e.g., the shuffle phase in MapReduce). A stage often cannot
start or be completed unless all the required data pieces from the preceding
stage are received. \emph{Coflow} is a recently proposed networking abstraction
to capture such communication patterns. We consider the problem of efficiently
scheduling coflows with release dates in a shared datacenter network so as to
minimize the total weighted completion time of coflows.
Several heuristics have been proposed recently to address this problem, as
well as a few polynomial-time approximation algorithms with provable
performance guarantees. Our main result in this paper is a polynomial-time
deterministic algorithm that improves the prior known results. Specifically, we
propose a deterministic algorithm with approximation ratio of , which
improves the prior best known ratio of . For the special case when all
coflows are released at time zero, our deterministic algorithm obtains
approximation ratio of which improves the prior best known ratio of .
The key ingredient of our approach is an improved linear program formulation
for sorting the coflows followed by a simple list scheduling policy. Extensive
simulation results, using both synthetic and real traffic traces, are presented
that verify the performance of our algorithm and show improvement over the
prior approaches.
Comment: 12 pages, 11 figures
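The "ordering followed by simple list scheduling" ingredient can be illustrated on a toy single-link analogue. The sketch below is not the paper's algorithm: Smith's rule (sorting by size/weight) plays the role of the LP-derived coflow ordering, and a single link stands in for the datacenter network.

```python
# Toy analogue of "order, then list-schedule": on a single link, processing
# jobs in increasing size/weight order (Smith's rule) minimizes total weighted
# completion time; the paper's LP generalizes such an ordering to coflows.

def list_schedule(jobs):
    """jobs: list of (weight, size) pairs. Returns the total weighted
    completion time when jobs run back-to-back in Smith's-rule order."""
    order = sorted(jobs, key=lambda wj: wj[1] / wj[0])  # size/weight ascending
    t, total = 0.0, 0.0
    for weight, size in order:
        t += size            # completion time of this job
        total += weight * t
    return total

jobs = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]  # hypothetical (weight, size) pairs
print(list_schedule(jobs))
```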
Medical Image Watermarking using 2D-DWT with Enhanced Security and Capacity
Teleradiology enables medical images to be transferred over the computer
networks for many purposes including clinical interpretation, diagnosis,
archive, etc. In telemedicine, medical images can be manipulated while
transferring. In addition, medical information security requirements are
specified by the legislative rules, and concerned entities must adhere to them.
In this research, we propose a new scheme based on 2-dimensional Discrete
Wavelet Transform (2D DWT) to improve the robustness and authentication of
medical images. In addition, the current research improves security and
capacity of watermarking using encryption and compression in medical images.
The evaluation is performed on a personal dataset, which contains 194 CTI and
68 MRI cases.
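The DWT-domain embedding step can be sketched in miniature. The snippet below implements a single-level Haar transform by hand and additively writes watermark bits into the low-frequency (LL) band; it is a hedged illustration only, and the paper's specific 2D-DWT scheme, encryption, and compression stages are not reproduced. The image, bits, and strength `alpha` are hypothetical.

```python
# Single-level Haar 2D DWT watermarking sketch: transform, add +/- alpha to
# LL coefficients according to the watermark bits, inverse-transform.

def haar_1d(row):
    avg = [(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)]
    det = [(row[i] - row[i + 1]) / 2 for i in range(0, len(row), 2)]
    return avg + det

def ihaar_1d(row):
    half = len(row) // 2
    out = []
    for a, d in zip(row[:half], row[half:]):
        out += [a + d, a - d]
    return out

def haar_2d(img):
    rows = [haar_1d(r) for r in img]                  # transform rows...
    cols = [haar_1d(list(c)) for c in zip(*rows)]     # ...then columns
    return [list(r) for r in zip(*cols)]

def ihaar_2d(coeffs):
    cols = [ihaar_1d(list(c)) for c in zip(*coeffs)]  # undo columns...
    return [ihaar_1d(r) for r in ([list(r) for r in zip(*cols)])]  # ...then rows

def embed(img, bits, alpha=2.0):
    c = haar_2d(img)
    h = len(img) // 2
    for k, bit in enumerate(bits):                    # bits go into the LL band
        c[k // h][k % h] += alpha if bit else -alpha
    return ihaar_2d(c)

img = [[10.0, 12.0, 14.0, 16.0]] * 4                  # toy 4x4 "image"
wm = embed(img, [1, 0, 1, 0])
restored = haar_2d(wm)                                # LL band now carries the bits
print(restored[0][0])
```

In practice a library transform (e.g. PyWavelets) and a robust embedding rule would replace this hand-rolled Haar pair; the point is only that the watermark lives in transform coefficients rather than pixels.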
On the Scalability of Reliable Data Transfer in High Speed Networks
This paper considers reliable data transfer in a high-speed network (HSN) in
which the per-connection capacity is very large. We focus on sliding window
protocols employing selective repeat for reliable data transfer and study two
reliability mechanisms based on ARQ and FEC. The question we ask is which
mechanism is more suitable for an HSN in which the scalability of reliable data
transfer in terms of receiver's buffer requirement and achievable delay and
throughput is a concern. To efficiently utilize the large bandwidth available
to a connection in an HSN, sliding window protocols require a large
transmission window. In this regime of large transmission windows, we show that
while both mechanisms achieve the same asymptotic throughput in the presence of
packet losses, their delay and buffer requirements are different. Specifically,
an FEC-based mechanism has delay and receiver's buffer requirement that are
asymptotically smaller than that of an ARQ-based selective repeat mechanism by
a factor of log W, where W is the window size of the selective repeat
mechanism. This result is then used to investigate the implications of each
reliability mechanism on protocol design in an HSN in terms of throughput,
delay, buffer requirement, and control overhead.
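The qualitative contrast between the two mechanisms can be illustrated with a toy simulation. This is not the paper's analysis: it only shows that a selective-repeat window finishes with its slowest packet (so its completion time grows with the window size), while an idealized erasure code completes once any W packets survive a single over-provisioned round. Window size, loss rate, and FEC overhead are hypothetical.

```python
# Toy round-count comparison of ARQ (selective repeat) vs. idealized FEC on a
# link that drops each packet independently with probability p.
import random

def arq_completion_rounds(W, p, rng):
    """Rounds until all W packets of a selective-repeat window arrive:
    each packet needs a geometric number of attempts, and the window
    completes with its slowest packet."""
    worst = 0
    for _ in range(W):
        rounds = 1
        while rng.random() < p:
            rounds += 1
        worst = max(worst, rounds)
    return worst

def fec_completion_rounds(W, p, rng, overhead=1.5):
    """One round of int(overhead * W) coded packets; the block decodes once
    any W survive (idealized code). A single toy repair round otherwise."""
    survivors = sum(1 for _ in range(int(overhead * W)) if rng.random() >= p)
    return 1 if survivors >= W else 2

rng = random.Random(0)
trials = 200
arq = sum(arq_completion_rounds(64, 0.2, rng) for _ in range(trials)) / trials
fec = sum(fec_completion_rounds(64, 0.2, rng) for _ in range(trials)) / trials
print(f"avg rounds: ARQ {arq:.2f}, FEC {fec:.2f}")
```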
A Theory of Auto-Scaling for Resource Reservation in Cloud Services
We consider a distributed server system consisting of a large number of
servers, each with limited capacity on multiple resources (CPU, memory, disk,
etc.). Jobs with different rewards arrive over time and require certain amounts
of resources for the duration of their service. When a job arrives, the system
must decide whether to admit it or reject it, and if admitted, in which server
to schedule the job. The objective is to maximize the expected total reward
received by the system. This problem is motivated by control of cloud computing
clusters, in which jobs are requests for Virtual Machines or Containers that
reserve resources for various services, and rewards represent service priority
of requests or price paid per time unit of service by clients. We study this
problem in an asymptotic regime where the number of servers and jobs' arrival
rates scale by a factor , as becomes large. We propose a resource
reservation policy that asymptotically achieves at least , and under
certain monotone property on jobs' rewards and resources, at least of
the optimal expected reward. The policy automatically scales the number of VM
slots for each job type as the demand changes, and decides in which servers the
slots should be created in advance, without the knowledge of traffic rates. It
effectively tracks a low-complexity greedy packing of existing jobs in the
system while maintaining only a small number, , of
reserved VM slots for high-priority jobs that pack well.
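The "low-complexity greedy packing" the policy tracks can be sketched as follows. This is an illustrative first-fit greedy only; the paper's reservation, slot-scaling, and admission mechanics are not reproduced, and the jobs, servers, and ordering rule are hypothetical.

```python
# Toy greedy packing: consider jobs in decreasing reward order and place each
# on the first server with enough residual capacity on every resource.

def greedy_pack(jobs, servers):
    """jobs: list of (reward, demand_vector); servers: list of capacity vectors.
    Returns (total_reward, placements) with placements mapping job -> server."""
    residual = [list(cap) for cap in servers]
    placements, total = {}, 0.0
    for j in sorted(range(len(jobs)), key=lambda j: -jobs[j][0]):
        reward, demand = jobs[j]
        for s, cap in enumerate(residual):
            if all(c >= d for c, d in zip(cap, demand)):
                for r in range(len(cap)):
                    cap[r] -= demand[r]   # reserve the resources on server s
                placements[j] = s
                total += reward
                break                      # job admitted; stop searching
    return total, placements

jobs = [(5.0, (2, 2)), (3.0, (1, 1)), (4.0, (2, 1))]  # (reward, (cpu, mem))
servers = [(2, 2), (2, 2)]
print(greedy_pack(jobs, servers))
```

Note how the lowest-reward job is rejected once capacity runs out, echoing the admit-or-reject decision in the abstract.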
Two globally convergent nonmonotone trust-region methods for unconstrained optimization
This paper addresses some trust-region methods equipped with nonmonotone
strategies for solving nonlinear unconstrained optimization problems. More
specifically, the importance of using nonmonotone techniques in nonlinear
optimization is motivated, then two new nonmonotone terms are proposed, and
their combinations into the traditional trust-region framework are studied. The
global convergence to first- and second-order stationary points and local
superlinear and quadratic convergence rates for both algorithms are
established. Numerical experiments on the \textsf{CUTEst} test collection of
unconstrained problems and some highly nonlinear test functions are reported,
where a comparison among state-of-the-art nonmonotone trust-region methods shows
the efficiency of the proposed nonmonotone schemes.
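The mechanism behind nonmonotone strategies can be shown in a few lines. The sketch below uses the classic "max of the last M function values" nonmonotone term; the paper proposes two new terms, which are not reproduced here, but the acceptance test has the same shape.

```python
# Nonmonotone trust-region acceptance test (classic max-type term, shown only
# to illustrate the mechanism): the actual reduction is measured against the
# worst of the last M function values rather than the latest one, so a step
# that slightly increases f can still be accepted.

def nonmonotone_accept(f_hist, f_trial, pred_reduction, M=5, eta=0.1):
    """Accept the trial step when rho = (ref - f_trial) / pred_reduction
    reaches eta, with ref the max over the last M function values."""
    ref = max(f_hist[-M:])
    rho = (ref - f_trial) / pred_reduction
    return rho >= eta

history = [10.0, 4.0, 3.0, 2.5]
print(nonmonotone_accept(history, 3.0, 2.0))  # accepted although f rose from 2.5
```

A purely monotone test (ref = f_hist[-1]) would reject this step; relaxing monotonicity in this way is what helps trust-region methods traverse highly nonlinear landscapes.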
Towards a Theory of Anonymous Networking
The problem of anonymous networking when an eavesdropper observes packet
timings in a communication network is considered. The goal is to hide the
identities of source-destination nodes, and paths of information flow in the
network. One way to achieve such an anonymity is to use mixers. Mixers are
nodes that receive packets from multiple sources and change the timing of
packets, by mixing packets at the output links, to prevent the eavesdropper
from finding sources of outgoing packets. In this paper, we consider two simple
but fundamental scenarios: double input-single output mixer and double
input-double output mixer. For the first case, we use the information-theoretic
definition of the anonymity, based on average entropy per packet, and find an
optimal mixing strategy under a strict latency constraint. For the second case,
perfect anonymity is considered, and maximal throughput strategies with perfect
anonymity are found under a strict latency constraint and an average queue
length constraint.
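The entropy-based anonymity measure can be made concrete for the double-input single-output mixer: if, from the eavesdropper's viewpoint, an outgoing packet originated at source 1 with probability p, the per-packet anonymity is the binary entropy H(p), which is maximized by even mixing. The strategy and numbers below are illustrative; the latency constraint of the abstract is omitted.

```python
# Per-packet anonymity of a two-input, one-output mixer as binary entropy:
# H(p) = -p*log2(p) - (1-p)*log2(1-p), maximal (1 bit) at p = 0.5.
from math import log2

def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0        # the eavesdropper is certain: zero anonymity
    return -p * log2(p) - (1 - p) * log2(1 - p)

for p in (0.1, 0.3, 0.5):
    print(f"p = {p}: anonymity = {binary_entropy(p):.3f} bits per packet")
```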