A database management capability for Ada
The data requirements of mission-critical defense systems have been increasing dramatically. Command and control, intelligence, logistics, and even weapons systems are being required to integrate, process, and share ever-increasing volumes of information. To meet this need, systems are now being specified that incorporate database management subsystems for handling the storage and retrieval of information. A large number of the next generation of mission-critical systems are expected to contain embedded database management systems. Since the use of Ada has been mandated for most of these systems, it is important to address the issues of providing database management capabilities that can be closely coupled with Ada. A comprehensive distributed database management project has been investigated. The key deliverables of this project are three closely related prototype systems implemented in Ada. These three systems are discussed.
Riemannian Optimization for Skip-Gram Negative Sampling
The Skip-Gram Negative Sampling (SGNS) word embedding model, well known through its implementation in the word2vec software, is usually optimized by stochastic gradient descent. However, optimizing the SGNS objective can be viewed as the problem of searching for a good matrix under a low-rank constraint. The standard way to solve this type of problem is to apply the Riemannian optimization framework to optimize the SGNS objective over the manifold of matrices of the required low rank. In this paper, we propose an algorithm that optimizes the SGNS objective using Riemannian optimization and demonstrate its superiority over popular competitors, such as the original method for training SGNS and SVD over the SPPMI matrix.
Comment: 9 pages, 4 figures, ACL 201
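As a rough illustration of the low-rank idea in the abstract above, the sketch below takes one gradient step on a matrix in the ambient space and retracts it back onto the manifold of rank-d matrices via a truncated SVD. The function names and the simplified step are assumptions for illustration; the paper's actual Riemannian algorithm (tangent-space projection, choice of retraction) differs.

```python
import numpy as np

def svd_retraction(X, d):
    # Retract an ambient-space point back onto the manifold of rank-d
    # matrices by keeping only the d largest singular values
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :d] @ np.diag(s[:d]) @ Vt[:d, :]

def retraction_step(X, grad, d, lr=0.1):
    # Simplified projected-gradient step: move along the negative Euclidean
    # gradient, then retract to rank d. A proper Riemannian method would
    # first project the gradient onto the tangent space at X.
    return svd_retraction(X - lr * grad, d)
```

In a full Riemannian method, the gradient projection keeps the search direction within the tangent space, which is what makes the per-step cost depend on the rank d rather than the full matrix dimensions.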
Using Machine Learning for Handover Optimization in Vehicular Fog Computing
Smart mobility management would be an important prerequisite for future fog
computing systems. In this research, we propose a learning-based handover
optimization for the Internet of Vehicles that would assist the smooth
transition of device connections and offloaded tasks between fog nodes. To
accomplish this, we make use of machine learning algorithms to learn from
vehicle interactions with fog nodes. Our approach uses a three-layer
feed-forward neural network to predict the correct fog node at a given location
and time with 99.2% accuracy on a test set. We also implement a dual-stacked
recurrent neural network (RNN) with long short-term memory (LSTM) cells capable
of learning the latency, or cost, associated with these service requests. We
build a simulation in JAMScript using a dataset of real-world vehicle movements to generate a dataset for training these networks. We further propose using this predictive system in a smarter request-routing mechanism to minimize service interruption during handovers between fog nodes and to anticipate areas of low coverage, and we test the models' performance through a series of experiments on a held-out test set.
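A minimal sketch of the kind of three-layer feed-forward classifier the abstract describes, mapping input features to a fog-node index. The dimensions, random weights, and feature choices (location coordinates plus a time value) are all assumptions for illustration; the actual network would be trained on the vehicle-interaction dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 3 input features (x, y, time-of-day), two hidden
# layers of 32 units, and 5 candidate fog nodes -- all illustrative.
W1, b1 = rng.normal(size=(3, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 32)), np.zeros(32)
W3, b3 = rng.normal(size=(32, 5)), np.zeros(5)

def predict_fog_node(features):
    # Three-layer feed-forward pass with ReLU hidden activations;
    # the argmax over the output logits selects the predicted fog node
    h1 = np.maximum(0, features @ W1 + b1)
    h2 = np.maximum(0, h1 @ W2 + b2)
    logits = h2 @ W3 + b3
    return int(np.argmax(logits))
```

With trained weights, the same forward pass would return the fog node predicted to serve a vehicle at a given location and time.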
Emission-aware Energy Storage Scheduling for a Greener Grid
Reducing our reliance on carbon-intensive energy sources is vital for
reducing the carbon footprint of the electric grid. Although the grid is seeing
increasing deployments of clean, renewable sources of energy, a significant
portion of the grid demand is still met using traditional carbon-intensive
energy sources. In this paper, we study the problem of using energy storage
deployed in the grid to reduce the grid's carbon emissions. While energy
storage has previously been used for grid optimizations such as peak shaving
and smoothing intermittent sources, our insight is to use distributed storage
to enable utilities to reduce their reliance on their least efficient and most carbon-intensive power plants, and thereby reduce their overall emission
footprint. We formulate the problem of emission-aware scheduling of distributed
energy storage as an optimization problem, and use a robust optimization
approach that is well-suited for handling the uncertainty in load predictions,
especially in the presence of intermittent renewables such as solar and wind.
We evaluate our approach using a state-of-the-art neural network load forecasting technique and real load traces from a distribution grid with 1,341 homes. Our results show a reduction of >0.5 million kg in annual carbon emissions -- equivalent to a 23.3% drop in our electric grid's emissions.
Comment: 11 pages, 7 figures. This paper will appear in the Proceedings of the ACM International Conference on Future Energy Systems (e-Energy 20), June 2020, Australi
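To convey the emission-aware intuition only (not the paper's robust optimization formulation over uncertain load forecasts), here is a hypothetical greedy schedule that charges storage during the cleanest hours and discharges it during the dirtiest ones:

```python
def schedule_storage(carbon_intensity, capacity, rate):
    # Illustrative greedy heuristic: rank hours by grid carbon intensity,
    # charge at the fixed power rate in the lowest-carbon hours and
    # discharge in the highest-carbon hours, within the energy capacity.
    # The paper instead solves a robust optimization problem.
    hours = sorted(range(len(carbon_intensity)), key=lambda h: carbon_intensity[h])
    n = int(capacity / rate)            # number of hours to fill/empty storage
    charge_hours = set(hours[:n])       # cleanest hours
    discharge_hours = set(hours[-n:])   # dirtiest hours
    plan = []
    for h in range(len(carbon_intensity)):
        if h in charge_hours:
            plan.append(rate)           # positive = draw from grid (charge)
        elif h in discharge_hours:
            plan.append(-rate)          # negative = offset grid demand
        else:
            plan.append(0)
    return plan
```

Because discharging displaces generation from the marginal (often dirtiest) plants, even this simple shifting reduces total emissions; the robust formulation additionally guards against forecast error in the load.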
cphVB: A System for Automated Runtime Optimization and Parallelization of Vectorized Applications
Modern processor architectures, in addition to having ever more cores, also require ever more attention to memory layout in order to run at full capacity. The usefulness of most languages is declining, as their abstractions, structures, or objects are hard to map efficiently onto modern processor architectures.
The work in this paper introduces a new abstract machine framework, cphVB,
that enables vector oriented high-level programming languages to map onto a
broad range of architectures efficiently. The idea is to close the gap between
high-level languages and hardware optimized low-level implementations. By
translating high-level vector operations into an intermediate vector bytecode,
cphVB enables specialized vector engines to efficiently execute the vector
operations.
The primary success parameters are to maintain a complete abstraction from
low-level details and to provide efficient code execution across different modern processors. We evaluate the presented design through a setup that
targets multi-core CPU architectures. We evaluate the performance of the
implementation using Python implementations of well-known algorithms: a Jacobi solver, a kNN search, a shallow water simulation, and a synthetic stencil simulation. All demonstrate good performance.
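The translation idea can be sketched as follows: a high-level vector expression is lowered to a small list of intermediate instructions that a pluggable vector engine executes, here backed by NumPy. The opcode names and instruction format are invented for illustration and are not cphVB's actual bytecode.

```python
import numpy as np

# Hypothetical vector engine: maps opcodes to whole-array operations.
# A different engine (e.g. GPU-backed) could be swapped in without
# changing the bytecode, which is the abstraction cphVB aims for.
ENGINE = {"add": np.add, "mul": np.multiply}

def execute(bytecode, registers):
    # Each instruction names an opcode, an output register, and its inputs
    for opcode, out, ins in bytecode:
        registers[out] = ENGINE[opcode](*(registers[r] for r in ins))
    return registers

# The expression c = (a + b) * a lowered to two vector instructions
program = [("add", "t0", ("a", "b")), ("mul", "c", ("t0", "a"))]
regs = execute(program, {"a": np.array([1.0, 2.0]), "b": np.array([3.0, 4.0])})
```

Keeping operations whole-array in the bytecode is what lets a specialized engine choose memory layout and parallelization without the high-level language being aware of either.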