Joint Data Routing and Power Scheduling for Wireless Powered Communication Networks
In a wireless powered communication network (WPCN), an energy access point
(EAP) supplies the energy needs of the network nodes through radio-frequency
wave transmission, and the nodes store the received energy in their batteries
for future data transmission. In this paper, we propose an online stochastic
policy that jointly controls energy transmission from the EAP to the nodes and
data transfer among the nodes. For this purpose, we first introduce a novel
perturbed Lyapunov function to address the limitations on the energy
consumption of the nodes imposed by their batteries. Then, using the Lyapunov
optimization method, we propose a policy that is adaptive to arbitrary
channel statistics in the network. Finally, we analyze the performance of the
proposed policy theoretically, showing that it stabilizes the network and that
the average power consumption of the network under this policy is within a
bounded gap of the minimum power level required to stabilize the network.
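The drift-plus-penalty trade-off behind such Lyapunov policies can be illustrated with a toy single-queue simulation. This is a minimal sketch, not the paper's joint routing-and-charging policy: all names, arrival rates, and costs below are hypothetical. Each slot, the controller transmits only when the queue-drift reduction Q*rate outweighs V times the power cost, so a larger V trades queue backlog for lower average power.

```python
import random

def drift_plus_penalty_sim(V=10.0, slots=10_000, seed=0):
    """Toy single-queue drift-plus-penalty controller.

    Each slot, transmitting serves up to `rate` packets at power cost
    `power`. We transmit only when the drift reduction Q * rate outweighs
    the penalty V * power -- the standard Lyapunov trade-off: larger V
    means lower average power but a larger average queue backlog.
    """
    rng = random.Random(seed)
    Q = 0.0                      # data queue backlog
    total_power = 0.0
    total_queue = 0.0
    for _ in range(slots):
        arrivals = 1.0 if rng.random() < 0.3 else 0.0  # mean arrival rate 0.3
        rate = rng.uniform(0.5, 1.5)                   # random channel quality
        power = 1.0                                    # fixed transmit cost
        transmit = Q * rate >= V * power               # drift-plus-penalty rule
        served = min(Q, rate) if transmit else 0.0
        Q = Q - served + arrivals
        total_power += power if transmit else 0.0
        total_queue += Q
    return total_power / slots, total_queue / slots
```

With a large V the controller waits for good channel realizations before spending power, which lowers average power while keeping the queue stable at a higher backlog, mirroring the bounded power-backlog gap the abstract describes.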
Reconciling modern machine learning practice and the bias-variance trade-off
Breakthroughs in machine learning are rapidly changing science and society,
yet our fundamental understanding of this technology has lagged far behind.
Indeed, one of the central tenets of the field, the bias-variance trade-off,
appears to be at odds with the observed behavior of methods used in modern
machine learning practice. The bias-variance trade-off implies that a model
should balance under-fitting and over-fitting: rich enough to express
underlying structure in data, simple enough to avoid fitting spurious patterns.
However, in modern practice, very rich models such as neural networks are
trained to exactly fit (i.e., interpolate) the data. Classically, such models
would be considered over-fit, and yet they often obtain high accuracy on test
data. This apparent contradiction has raised questions about the mathematical
foundations of machine learning and their relevance to practitioners.
In this paper, we reconcile the classical understanding and the modern
practice within a unified performance curve. This "double descent" curve
subsumes the textbook U-shaped bias-variance trade-off curve by showing how
increasing model capacity beyond the point of interpolation results in improved
performance. We provide evidence for the existence and ubiquity of double
descent for a wide spectrum of models and datasets, and we posit a mechanism
for its emergence. This connection between the performance and the structure of
machine learning models delineates the limits of classical analyses, and has
implications for both the theory and practice of machine learning.
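The double-descent shape can be probed with a minimal sketch, not the paper's experiments: minimum-norm least squares on random ReLU features, sweeping the number of features past the interpolation threshold. All function names and parameter choices below are illustrative assumptions.

```python
import numpy as np

def double_descent_curve(widths, n_train=20, n_test=200, seed=0):
    """Train/test MSE of min-norm least squares on random ReLU features.

    For width < n_train the model under-parameterises; near width ~ n_train
    it interpolates noisily (the classical over-fitting peak); well past
    that, the minimum-norm interpolant is typically smoother again -- the
    "double descent" shape described in the abstract.
    """
    rng = np.random.default_rng(seed)
    x_tr = rng.uniform(-1, 1, n_train)
    x_te = np.linspace(-1, 1, n_test)
    target = lambda x: np.sin(2 * np.pi * x)
    y_tr = target(x_tr) + 0.1 * rng.standard_normal(n_train)
    y_te = target(x_te)

    train_err, test_err = [], []
    for width in widths:
        w = rng.standard_normal(width)   # fixed random feature weights
        b = rng.standard_normal(width)
        feats = lambda x: np.maximum(x[:, None] * w + b, 0.0)  # ReLU features
        # np.linalg.lstsq returns the minimum-norm solution when the
        # system is under-determined (width > n_train).
        coef, *_ = np.linalg.lstsq(feats(x_tr), y_tr, rcond=None)
        train_err.append(np.mean((feats(x_tr) @ coef - y_tr) ** 2))
        test_err.append(np.mean((feats(x_te) @ coef - y_te) ** 2))
    return np.array(train_err), np.array(test_err)
```

Plotting the returned test error against width is the experiment: the classical U-shape appears below the interpolation threshold, and a second descent typically follows beyond it.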
Iterative Row Sampling
There has been significant interest and progress recently in algorithms that
solve regression problems involving tall and thin matrices in input sparsity
time. These algorithms find a shorter equivalent of an n x d matrix, where
n >> d, which allows one to solve a poly(d)-sized problem instead. In practice,
the best performance is often obtained by invoking these routines in an
iterative fashion. We show that these iterative methods can be adapted to give
theoretical guarantees comparable to, and in some cases better than, the
current state of the art.
Our approaches are based on computing the importance of each row, known as
its leverage score, in an iterative manner. We show that alternating between
computing a short matrix estimate and finding more accurate approximate
leverage scores leads to a series of geometrically smaller instances. This
gives an algorithm that runs in input-sparsity time plus a term comparable
to the cost of solving a regression problem on the small approximation. Our
results are built upon the close connection between randomized matrix
algorithms, iterative methods, and graph sparsification.
Comment: 26 pages, 2 figures
Machine Learning for Fluid Mechanics
The field of fluid mechanics is rapidly advancing, driven by unprecedented
volumes of data from field measurements, experiments and large-scale
simulations at multiple spatiotemporal scales. Machine learning offers a wealth
of techniques to extract information from data that could be translated into
knowledge about the underlying fluid mechanics. Moreover, machine learning
algorithms can augment domain knowledge and automate tasks related to flow
control and optimization. This article presents an overview of the history,
current developments, and emerging opportunities of machine learning for fluid
mechanics. It outlines fundamental machine learning methodologies and discusses
their uses for understanding, modeling, optimizing, and controlling fluid
flows. The strengths and limitations of these methods are addressed from the
perspective of scientific inquiry that considers data as an inherent part of
modeling, experimentation, and simulation. Machine learning provides a powerful
information processing framework that can enrich, and possibly even transform,
current lines of fluid mechanics research and industrial applications.
Comment: To appear in the Annual Reviews of Fluid Mechanics, 202
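One of the data-driven techniques such reviews cover is modal decomposition of flow snapshots, e.g. proper orthogonal decomposition (POD) via the SVD. The sketch below is an illustrative example on synthetic data, not taken from the article; the function name and data are assumptions.

```python
import numpy as np

def pod_modes(snapshots, n_modes):
    """Proper orthogonal decomposition of a snapshot matrix
    (rows: spatial points, columns: time snapshots) via the SVD.

    Returns the leading spatial modes, the fraction of fluctuation
    energy each captures, and the temporal coefficients."""
    mean = snapshots.mean(axis=1, keepdims=True)       # remove the mean flow
    U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
    energy = s**2 / np.sum(s**2)                       # energy fractions
    return U[:, :n_modes], energy[:n_modes], s[:n_modes, None] * Vt[:n_modes]
```

On flow data dominated by a few coherent structures, the first handful of modes typically capture most of the fluctuation energy, which is what makes such low-dimensional representations useful for modeling and control.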
Inductive Transfer and Deep Neural Network Learning-Based Cross-Model Method for Short-Term Load Forecasting in Smart Grids
In a real-world scenario of load forecasting, it is crucial to determine the energy consumption in electrical networks. The energy consumption data exhibit high variability between historical data and newly arriving data streams. To keep the forecasting models updated with the current trends, it is important to fine-tune the models in a timely manner. This article proposes a reliable inductive transfer learning (ITL) method that uses the knowledge from existing deep learning (DL) load forecasting models to develop highly accurate ITL models at a large number of other distribution nodes while reducing model training time. An outlier-insensitive clustering-based technique is adopted to group similar distribution nodes into clusters. ITL is considered in the setting of homogeneous inductive transfer. To address the overfitting that arises with ITL, a novel weight-regularized optimization approach is implemented. The proposed cross-model methodology is evaluated on a real-world case study of 1000 distribution nodes of an electrical grid for one-day-ahead hourly forecasting. Experimental results demonstrate that overfitting and negative learning in ITL can be avoided by the dissociated weight regularization (DWR) optimizer, and that the proposed methodology reduces training time by almost 85.6% with no noticeable loss of accuracy.
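The idea of regularizing fine-tuned weights toward a transferred source model can be sketched in a generic linear form. This is not the paper's DWR optimizer or its DL architecture; the objective, function name, and parameters below are illustrative assumptions about weight-regularized transfer in general.

```python
import numpy as np

def finetune_ridge_to_source(X, y, w_src, lam):
    """Fine-tune a linear forecaster on target-node data (X, y) while
    penalising deviation from the transferred source weights w_src:

        min_w ||X w - y||^2 + lam * ||w - w_src||^2

    Setting the gradient to zero gives the closed form
        w = (X^T X + lam I)^{-1} (X^T y + lam w_src).
    With little target data, a large lam keeps w near the transferred
    model (guarding against overfitting); with plenty of data the
    target fit dominates."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y + lam * w_src)
```

The single knob lam plays the role the abstract assigns to weight regularization: it interpolates between reusing the source model as-is and retraining from the target node's data alone.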