Arithmetic on a Distributed-Memory Quantum Multicomputer
We evaluate the performance of quantum arithmetic algorithms run on a
distributed quantum computer (a quantum multicomputer). We vary the node
capacity, I/O capabilities, and network topology, and examine the tradeoff
between executing gates remotely via "teleported gates" on entangled pairs
of qubits (telegate) and exchanging the relevant qubits via quantum
teleportation before executing the algorithm with local gates (teledata).
We show that the teledata approach performs better,
and that carry-ripple adders perform well when the teleportation block is
decomposed so that the key quantum operations can be parallelized. A node size
of only a few logical qubits performs adequately provided that the nodes have
two transceiver qubits. A linear network topology performs acceptably for a
broad range of system sizes and performance parameters. We therefore recommend
pursuing small, high-I/O bandwidth nodes and a simple network. Such a machine
will run Shor's algorithm for factoring large numbers efficiently.
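
As a rough illustration of the telegate-versus-teledata tradeoff described above, the following Python sketch compares the two approaches under a toy latency model. All constants and cost formulas are illustrative assumptions, not figures from the paper.

# Toy latency model for one qubit that must interact with a remote node
# n_gates times. All constants are illustrative assumptions.
EPR_CREATION  = 10.0  # create one entangled pair across the link
LOCAL_GATE    = 1.0   # one local one- or two-qubit gate
MEASUREMENT   = 1.0   # one qubit measurement
CLASSICAL_MSG = 2.0   # one-way classical message between nodes

def telegate_cost(n_gates: int) -> float:
    """Each remote gate is teleported and consumes its own EPR pair."""
    per_gate = EPR_CREATION + 2 * LOCAL_GATE + MEASUREMENT + CLASSICAL_MSG
    return n_gates * per_gate

def teledata_cost(n_gates: int) -> float:
    """Teleport the qubit across once, then apply all gates locally."""
    teleport = EPR_CREATION + 2 * LOCAL_GATE + MEASUREMENT + CLASSICAL_MSG
    return teleport + n_gates * LOCAL_GATE

for n in (1, 5, 20):
    print(n, telegate_cost(n), teledata_cost(n))

Under such a model, teledata amortizes a single teleportation across many subsequent local gates, which is consistent with the abstract's finding that the teledata approach performs better.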
MARVELO: Wireless Virtual Network Embedding for Overlay Graphs with Loops
When deploying resource-intensive signal processing applications in wireless
sensor or mesh networks, distributing processing blocks over multiple nodes
becomes promising. Such distributed applications need to solve the placement
problem (which block to run on which node), the routing problem (which link
between blocks to map on which path between nodes), and the scheduling problem
(which transmission is active when). We investigate a variant where the
application graph may contain feedback loops, and we exploit wireless networks'
inherent multicast advantage. Thus, we propose Multicast-Aware Routing for
Virtual network Embedding with Loops in Overlays (MARVELO) to find efficient
solutions for scheduling and routing under a detailed interference model. We
cast this as a mixed integer quadratically constrained optimisation problem and
provide an efficient heuristic. Simulations show that our approach handles
complex scenarios quickly.
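
To make the placement subproblem concrete, here is a hypothetical greedy heuristic in Python that assigns processing blocks to nodes by traffic-weighted hop distance. It is a toy sketch under assumed inputs, not MARVELO's mixed integer formulation or its heuristic, and it ignores scheduling and the interference model.

def greedy_placement(blocks, nodes, demand, capacity, hop_dist):
    """Assign each block to a node close to the blocks it talks to.

    blocks:   iterable of block ids, in placement order
    nodes:    iterable of node ids
    demand:   dict (block_a, block_b) -> data rate between the blocks
    capacity: dict node -> how many blocks the node can still host
    hop_dist: dict (node_a, node_b) -> hop count between the nodes
    """
    placement = {}
    for b in blocks:
        best, best_cost = None, float("inf")
        for n in nodes:
            if capacity[n] < 1:
                continue  # node is full
            # traffic-weighted distance to blocks that are already placed
            cost = sum((demand.get((b, o), 0) + demand.get((o, b), 0))
                       * hop_dist[(n, placement[o])]
                       for o in placement)
            if cost < best_cost:
                best, best_cost = n, cost
        if best is None:
            raise ValueError("not enough node capacity")
        placement[b] = best
        capacity[best] -= 1
    return placement

# Tiny example: a three-block chain on two directly connected nodes.
blocks = ["src", "filter", "sink"]
nodes = ["n1", "n2"]
demand = {("src", "filter"): 5, ("filter", "sink"): 5}
capacity = {"n1": 2, "n2": 2}
hop_dist = {(a, b): 0 if a == b else 1 for a in nodes for b in nodes}
print(greedy_placement(blocks, nodes, demand, capacity, hop_dist))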
Applicability and Challenges of Deep Reinforcement Learning for Satellite Frequency Plan Design
The study and benchmarking of Deep Reinforcement Learning (DRL) models have
become a trend in many industries, including aerospace engineering and
communications. Recent studies in these fields propose these kinds of models to
address certain complex real-time decision-making problems in which classic
approaches do not meet time requirements or fail to obtain optimal solutions.
While the good performance of DRL models has been proved for specific use cases
or scenarios, most studies do not discuss the compromises and generalizability
of such models during real operations. In this paper we explore the tradeoffs
of different elements of DRL models and how they might impact the final
performance. To that end, we choose the Frequency Plan Design (FPD) problem in
the context of multibeam satellite constellations as our use case and propose a
DRL model to address it. We identify six core elements that have a
major effect on its performance: the policy, the policy optimizer, the state,
action, and reward representations, and the training environment. We analyze
different alternatives for each of these elements and characterize their
effect. We also use multiple environments to account for different scenarios in
which we vary the dimensionality or make the environment nonstationary. Our
findings show that DRL is a potential method to address the FPD problem in real
operations, especially because of its speed in decision-making. However, no
single DRL model is able to outperform the rest in all scenarios, and the best
approach for each of the 6 core elements depends on the features of the
operation environment. While we agree on the potential of DRL to solve future
complex problems in the aerospace industry, we also reflect on the importance
of designing appropriate models and training procedures, understanding the
applicability of such models, and reporting the main performance tradeoffs.
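
As a schematic illustration of where the six core elements sit in a training loop, the sketch below uses a toy stand-in environment and a tabular policy. Class and function names are hypothetical, and the environment is deliberately trivial; it is not the paper's FPD model.

import random

class ToyFPDEnv:
    """Toy stand-in for a frequency-plan environment (the training
    environment element). States, actions, and rewards (three more
    elements) are deliberately minimal: assign one channel per beam,
    penalising adjacent beams that reuse a channel."""
    def __init__(self, n_beams=4, n_channels=3):
        self.n_beams, self.n_channels = n_beams, n_channels

    def reset(self):
        self.plan, self.beam = [None] * self.n_beams, 0
        return tuple(self.plan)  # state representation

    def step(self, action):
        clash = self.beam > 0 and self.plan[self.beam - 1] == action
        self.plan[self.beam] = action
        self.beam += 1
        reward = -1.0 if clash else 1.0  # reward representation
        return tuple(self.plan), reward, self.beam == self.n_beams

def train(env, episodes=500, epsilon=0.1, lr=0.5):
    """Tabular action values stand in for the policy; the one-step
    value update below stands in for the policy optimizer."""
    q = {}
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            if random.random() < epsilon:
                action = random.randrange(env.n_channels)
            else:
                action = max(range(env.n_channels),
                             key=lambda a: q.get((state, a), 0.0))
            state_next, reward, done = env.step(action)
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + lr * (reward - old)
            state = state_next
    return q

q = train(ToyFPDEnv())
print(len(q), "state-action values learned")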
Towards an HLA Run-time Infrastructure with Hard Real-time Capabilities
Our work takes place in the context of the HLA standard and its application to real-time systems. The HLA standard is inadequate for taking into account the constraints involved in real-time computer systems. Much work has been invested in providing real-time capabilities to Run-Time Infrastructures (RTIs) so that they can run real-time simulations. Most of these initiatives focus on major issues including QoS guarantees, knowledge of the Worst Case Transit Time (WCTT), and the scheduling services provided by the underlying operating systems. Although our ultimate objective is to achieve real-time capabilities for distributed HLA federation executions, this paper describes preliminary work focusing on achieving hard real-time properties for HLA federations running on a single computer under the Linux operating system. We propose a novel global bottom-up approach for designing real-time Run-Time Infrastructures and a formal model for validating uniprocessor, and later distributed, real-time simulation with CERTI.
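
One OS-level ingredient such a bottom-up design can rely on is the Linux real-time scheduling class; the sketch below shows how a federate process might switch itself to SCHED_FIFO from Python. This only illustrates the operating-system scheduling services mentioned above and is not CERTI's actual mechanism; it requires Linux and root privileges (or CAP_SYS_NICE).

import os

def enter_realtime(priority_offset=10):
    """Move the current process to the SCHED_FIFO real-time class at a
    high static priority, bounding its scheduling latency."""
    max_prio = os.sched_get_priority_max(os.SCHED_FIFO)
    prio = max(1, max_prio - priority_offset)
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(prio))

enter_realtime()
# ... the federate's time-constrained main loop would run here
print("SCHED_FIFO priority:", os.sched_getparam(0).sched_priority)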
Deep Learning Hydrodynamic Forecasting for Flooded Region Assessment in Near-Real-Time (DL Hydro-FRAN)
Hydrodynamic flood modeling improves hydrologic and hydraulic prediction of
storm events. However, the computationally intensive numerical solutions
required for high-resolution hydrodynamics have historically prevented their
implementation in near-real-time flood forecasting. This study examines whether
several Deep Neural Network (DNN) architectures are suitable for optimizing
hydrodynamic flood models. Several pluvial flooding events were simulated in a
low-relief high-resolution urban environment using a 2D HEC-RAS hydrodynamic
model. These simulations were assembled into a training set for the DNNs, which
were then used to forecast flooding depths and velocities. The DNNs' forecasts
were compared to the hydrodynamic flood models, and showed good agreement, with
a median RMSE of around 2 mm for cell flooding depths in the study area. The
DNNs also improved forecast computation time significantly, with the DNNs
providing forecasts between 34.2 and 72.4 times faster than conventional
hydrodynamic models. The study area showed little change between HEC-RAS' Full
Momentum Equations and Diffusion Equations; however, important numerical
stability considerations were discovered that affect equation selection and DNN
architecture configuration. Overall, the results from this study show that DNNs
can greatly optimize hydrodynamic flood modeling, and enable near-real-time
hydrodynamic flood forecasting.
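
The depth comparison reported above can be reproduced in spirit with a few lines of NumPy: compute an RMSE over all grid cells for each simulated event, then take the median across events. Array names, shapes, and the synthetic data are illustrative assumptions, not the study's data.

import numpy as np

def per_event_rmse(dnn_depths, hecras_depths):
    """RMSE per event over all cells; inputs have shape (events, cells),
    holding flooding depth in metres for every grid cell."""
    err = dnn_depths - hecras_depths
    return np.sqrt(np.mean(err ** 2, axis=1))

rng = np.random.default_rng(0)
truth = rng.uniform(0.0, 0.5, size=(10, 1000))          # stand-in HEC-RAS depths
forecast = truth + rng.normal(0.0, 0.002, truth.shape)  # surrogate with ~2 mm error
print(f"median RMSE: {np.median(per_event_rmse(forecast, truth)) * 1000:.2f} mm")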