
    Graph-based Algorithm Unfolding for Energy-aware Power Allocation in Wireless Networks

    We develop a novel graph-based trainable framework to maximize the weighted sum energy efficiency (WSEE) for power allocation in wireless communication networks. To address the non-convex nature of the problem, the proposed method consists of modular structures inspired by a classical iterative suboptimal approach and enhanced with learnable components. More precisely, we propose a deep unfolding of the successive concave approximation (SCA) method. In our unfolded SCA (USCA) framework, the originally preset parameters are now learnable via graph convolutional neural networks (GCNs) that directly exploit multi-user channel state information as the underlying graph adjacency matrix. We show the permutation equivariance of the proposed architecture, which is a desirable property for models applied to wireless network data. The USCA framework is trained through stochastic gradient descent using a progressive training strategy. The unsupervised loss is carefully devised to exploit the monotonicity of the objective under maximum power constraints. Comprehensive numerical results demonstrate its generalizability across network topologies of varying size, density, and channel distribution. Thorough comparisons illustrate the improved performance and robustness of USCA over state-of-the-art benchmarks. Comment: Published in IEEE Transactions on Wireless Communications.
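    As a rough illustration of the unsupervised, permutation-equivariant pattern the abstract describes, the sketch below trains a two-layer GCN that maps CSI magnitudes (used directly as the graph adjacency) to transmit powers in (0, p_max] by maximizing a toy WSEE. The WSEE expression, layer sizes, and training loop are illustrative assumptions; the authors' USCA additionally unfolds SCA iterations rather than predicting powers in one shot.

```python
import torch
import torch.nn as nn

def wsee(p, H, w, sigma2=1.0, p_c=1.0):
    # Toy weighted sum energy efficiency: H[i, j] is the gain from
    # transmitter j to receiver i; p_c models static circuit power.
    signal = torch.diag(H) * p
    interference = H @ p - signal
    rate = torch.log2(1.0 + signal / (sigma2 + interference))
    return (w * rate / (p_c + p)).sum()

class GCNPolicy(nn.Module):
    # Two graph-convolution layers with weights shared across nodes,
    # hence permutation equivariant with respect to user reordering.
    def __init__(self, hidden=16):
        super().__init__()
        self.l1 = nn.Linear(1, hidden)
        self.l2 = nn.Linear(hidden, 1)

    def forward(self, A, x):
        deg = A.sum(1, keepdim=True).clamp(min=1e-9)
        h = torch.relu(self.l1((A @ x) / deg))
        return torch.sigmoid(self.l2((A @ h) / deg)).squeeze(-1)

n, p_max = 8, 1.0
H = torch.rand(n, n) + torch.eye(n)  # toy CSI magnitudes as adjacency
w = torch.ones(n)
policy = GCNPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
for _ in range(200):
    p = p_max * policy(H, torch.ones(n, 1))  # powers respect the max-power cap
    loss = -wsee(p, H, w)                    # unsupervised: maximize WSEE
    opt.zero_grad()
    loss.backward()
    opt.step()
```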

    Learning to Transmit with Provable Guarantees in Wireless Federated Learning

    We propose a novel data-driven approach to allocate transmit power for federated learning (FL) over interference-limited wireless networks. The proposed method is useful in challenging scenarios where the wireless channel is changing during the FL training process and when the training data are not independent and identically distributed (non-i.i.d.) on the local devices. Intuitively, the power policy is designed to optimize the information received at the server end during the FL process under communication constraints. Ultimately, our goal is to improve the accuracy and efficiency of the global FL model being trained. The proposed power allocation policy is parameterized using a graph convolutional network, and the associated constrained optimization problem is solved through a primal-dual (PD) algorithm. Theoretically, we show that the formulated problem has zero duality gap and that, once the power policy is parameterized, optimality depends on how expressive this parameterization is. Numerically, we demonstrate that the proposed method outperforms existing baselines under different wireless channel settings and varying degrees of data heterogeneity.
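    The primal-dual machinery itself is compact. Below is a minimal sketch in which a toy objective and constraint stand in for the paper's FL-specific information measure and communication constraints; only the alternating primal ascent / projected dual ascent structure is the point here.

```python
# Minimal primal-dual loop: maximize objective(theta) s.t. constraint(theta) <= 0.
# The objective, constraint, and parameter shape are toy stand-ins.
import torch

theta = torch.zeros(4, requires_grad=True)  # policy parameters (e.g. GCN weights)
lam = torch.tensor(0.0)                     # dual variable (Lagrange multiplier)
eta_p, eta_d = 1e-2, 1e-2

def objective(th):   # stand-in for the information received at the server
    return -(th - 1.0).pow(2).sum()

def constraint(th):  # g(theta) <= 0, e.g. an average transmit-power budget
    return th.pow(2).sum() - 2.0

for _ in range(500):
    lagrangian = objective(theta) - lam * constraint(theta)
    grad, = torch.autograd.grad(lagrangian, theta)
    with torch.no_grad():
        theta += eta_p * grad                                   # primal ascent
        lam = (lam + eta_d * constraint(theta)).clamp(min=0.0)  # projected dual ascent
```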

    Mapping solar array location, size, and capacity using deep learning and overhead imagery

    The effective integration of distributed solar photovoltaic (PV) arrays into existing power grids will require access to high-quality data: the location, power capacity, and energy generation of individual solar PV installations. Unfortunately, existing methods for obtaining this data are limited in their spatial resolution and completeness. We propose a general framework for accurately and cheaply mapping individual PV arrays, and their capacities, over large geographic areas. At the core of this approach is a deep learning algorithm called SolarMapper - which we make publicly available - that can automatically map PV arrays in high-resolution overhead imagery. We estimate the performance of SolarMapper on a large dataset of overhead imagery across three cities in California. We also describe a procedure for deploying SolarMapper to new geographic regions, so that it can be utilized by others. We demonstrate the effectiveness of the proposed deployment procedure by using it to map solar arrays across the entire US state of Connecticut (CT). Using these results, we show that we achieve highly accurate estimates of the total installed PV capacity within each of CT's 168 municipal regions.
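    Downstream of the segmentation, capacity estimation reduces to converting detected PV pixel area into watts. The sketch below shows that conversion; the 150 W/m^2 nominal panel density and the 0.3 m ground sample distance are illustrative assumptions, not values from the paper.

```python
import numpy as np

def estimate_capacity_kw(mask: np.ndarray, gsd_m: float, w_per_m2: float = 150.0) -> float:
    """mask: binary PV segmentation; gsd_m: ground sample distance (m/pixel)."""
    area_m2 = mask.sum() * gsd_m ** 2
    return area_m2 * w_per_m2 / 1e3

mask = np.zeros((100, 100), dtype=np.uint8)
mask[40:60, 40:80] = 1                        # a 20x40-pixel detected array
print(estimate_capacity_kw(mask, gsd_m=0.3))  # ~10.8 kW at 0.3 m/pixel
```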

    Fast wide-field quantum sensor based on solid-state spins integrated with a SPAD array

    Achieving fast, sensitive, and parallel measurement of a large number of quantum particles is an essential task in building large-scale quantum platforms for different quantum information processing applications such as sensing, computation, simulation, and communication. Current detection schemes in experimental atomic and optical physics, based on CMOS sensors and CCD cameras, are limited by either low sensitivity or slow operational speed. Here we integrate an array of single-photon avalanche diodes (SPADs) with solid-state spin defects in diamond to build a fast wide-field quantum sensor, achieving a frame rate of up to 100 kHz. We present the design of the experimental setup to perform spatially resolved imaging of quantum systems. A few exemplary applications, including sensing of DC and AC magnetic fields, temperature, strain, local spin density, and charge dynamics, are experimentally demonstrated using a nitrogen-vacancy (NV) ensemble diamond sample. The developed photon detection array is broadly applicable to other platforms such as atom arrays trapped in optical tweezers, optical lattices, donors in silicon, and rare-earth ions in solids.
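    On the analysis side, wide-field NV magnetometry of this kind typically turns per-pixel ODMR resonance maps into a field map through the Zeeman splitting of the NV ground state. A minimal sketch using the standard NV gyromagnetic ratio; the resonance maps below are toy data, and a real pipeline would first fit the resonances from the raw SPAD frames.

```python
import numpy as np

GAMMA_NV = 28.0e9  # NV gyromagnetic ratio, Hz per tesla (approximate)

def field_from_odmr(f_plus_hz: np.ndarray, f_minus_hz: np.ndarray) -> np.ndarray:
    # Zeeman splitting of the NV ground state: f+ - f- = 2 * gamma * B_parallel.
    return (f_plus_hz - f_minus_hz) / (2.0 * GAMMA_NV)

f_minus = np.full((4, 4), 2.860e9)       # toy per-pixel resonance maps (Hz)
f_plus = np.full((4, 4), 2.880e9)
print(field_from_odmr(f_plus, f_minus))  # ~357 uT along the NV axis, per pixel
```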

    Reinforcement learning-guided long-timescale simulation of hydrogen transport in metals

    Atomic diffusion in solids is an important process in various phenomena. However, atomistic simulations of diffusion processes are confronted with the timescale problem: the accessible simulation time is usually far shorter than that of experimental interest. In this work, we develop a reinforcement-learning-based method for long-timescale simulation of diffusion processes. As a testbed, we simulate hydrogen diffusion in pure metals and in a medium-entropy alloy, CrCoNi, obtaining hydrogen diffusivities reasonably consistent with previous experiments. We also demonstrate that our method can accelerate the sampling of low-energy configurations compared to the Metropolis-Hastings algorithm, using hydrogen migration to copper (111) surface sites as an example.
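    For reference, the Metropolis-Hastings baseline the authors compare against reduces to a simple acceptance rule on hop energies. A minimal sketch over a toy site-energy landscape (in the paper, the energies come from atomistic calculations rather than random draws):

```python
import numpy as np

rng = np.random.default_rng(0)
kT = 0.025  # eV, roughly room temperature

def accept(e_current: float, e_proposed: float) -> bool:
    # Accept downhill hops always; uphill hops with Boltzmann probability.
    return rng.random() < np.exp(min(0.0, -(e_proposed - e_current) / kT))

site_energies = rng.normal(0.0, 0.1, size=50)  # toy landscape of hydrogen sites
state = 0
for _ in range(1000):
    proposal = int(rng.integers(len(site_energies)))
    if accept(site_energies[state], site_energies[proposal]):
        state = proposal
```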

    Methodology for analysis of TSV stress induced transistor variation and circuit performance

    As continued scaling becomes increasingly difficult, 3D integration with through-silicon vias (TSVs) has emerged as a viable solution to achieve higher bandwidth and power efficiency. Mechanical stress, induced by the thermal mismatch between TSVs and the silicon bulk during wafer fabrication and 3D integration, is a key constraint. In this work, we propose a complete flow to characterize the influence of TSV stress on transistor and circuit performance. First, we analyze the thermal stress contour near the silicon surface for single and multiple TSVs through both finite element analysis (FEA) and linear superposition methods. Then, the biaxial stress is converted to mobility and threshold voltage variations depending on transistor type and the geometric relation between TSVs and transistors. Next, we propose an efficient algorithm, based on a grid partition approach, to calculate the circuit variation corresponding to TSV stress. Finally, we discuss a TSV pattern optimization strategy, and employ a series of 17-stage ring oscillators in a 40 nm CMOS technology as a test case for the proposed approach.
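    The linear-superposition step has a compact vectorized form: each TSV contributes a Lame-type stress field decaying as (r_TSV/r)^2, the contributions are summed at every transistor location, and the result is mapped to a mobility shift through a piezoresistive coefficient. A minimal sketch; all constants are illustrative assumptions rather than calibrated 40 nm values.

```python
import numpy as np

SIGMA0 = 200.0   # MPa, stress magnitude at the TSV edge (assumed)
R_TSV = 2.5      # um, TSV radius (assumed)
PI_COEFF = 7e-4  # 1/MPa, lumped piezoresistive coefficient (assumed)

def radial_stress(d_um: np.ndarray) -> np.ndarray:
    # Lame-type solution: stress decays as (r_tsv / r)^2 outside the via.
    return SIGMA0 * (R_TSV / np.maximum(d_um, R_TSV)) ** 2

def mobility_shift(xy: np.ndarray, tsv_xy: np.ndarray) -> np.ndarray:
    # Superpose every TSV's contribution at every transistor location.
    d = np.linalg.norm(xy[:, None, :] - tsv_xy[None, :, :], axis=-1)
    return PI_COEFF * radial_stress(d).sum(axis=1)  # delta_mu / mu per device

transistors = np.array([[5.0, 0.0], [10.0, 10.0]])  # um coordinates
tsvs = np.array([[0.0, 0.0], [20.0, 0.0]])
print(mobility_shift(transistors, tsvs))
```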

    Statistical Modeling with the Virtual Source MOSFET Model

    A statistical extension of the ultra-compact Virtual Source (VS) MOSFET model is developed here for the first time. The characterization uses a statistical extraction technique based on the backward propagation of variance (BPV), with variability parameters derived directly from the nominal VS model. The resulting statistical VS model is extensively validated using Monte Carlo simulations, and the statistical distributions of several figures of merit for logic and memory cells are compared with those of a BSIM model from a 40-nm CMOS industrial design kit. The comparisons show almost identical distributions, with distinct run-time advantages for the statistical VS model. Additional simulations show that the statistical VS model accurately captures non-Gaussian features that are important for low-power designs. (Masdar Institute of Science and Technology)
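    At its core, BPV inverts first-order variance propagation: given measured figure-of-merit variances and a sensitivity matrix obtained by perturbing the nominal model, it solves for the underlying parameter variances. A minimal sketch with toy numbers (the VS model's actual parameters and sensitivities are not reproduced here):

```python
import numpy as np

# S[i, j] = d(FOM_i) / d(param_j), found by perturbing the nominal VS model.
S = np.array([[1.2, 0.3],
              [0.4, 0.9],
              [0.7, 0.5]])
var_fom = np.array([0.010, 0.006, 0.008])  # measured/target FOM variances

# First-order propagation for independent parameters: var_fom = (S**2) @ var_param.
var_param, *_ = np.linalg.lstsq(S**2, var_fom, rcond=None)
sigma_param = np.sqrt(np.maximum(var_param, 0.0))
print(sigma_param)  # standard deviations to assign to the model parameters
```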

    Robustness Verification of Tree-based Models

    We study the robustness verification problem for tree-based models, including decision trees, random forests (RFs), and gradient boosted decision trees (GBDTs). Formal robustness verification of decision tree ensembles involves finding the exact minimal adversarial perturbation, or a guaranteed lower bound on it. Existing approaches find the minimal adversarial perturbation by solving a mixed integer linear programming (MILP) problem, which takes exponential time and is thus impractical for large ensembles. Although this verification problem is NP-complete in general, we give a more precise complexity characterization. We show that there is a simple linear-time algorithm for verifying a single tree, and that for tree ensembles the verification problem can be cast as a max-clique problem on a multi-partite graph with bounded boxicity. For low-dimensional problems, where the boxicity can be viewed as constant, this reformulation leads to a polynomial-time algorithm. For general problems, by exploiting the boxicity of the graph, we develop an efficient multi-level verification algorithm that can give tight lower bounds on the robustness of decision tree ensembles, while allowing iterative improvement and any-time termination. On RF/GBDT models trained on 10 datasets, our algorithm is hundreds of times faster than the previous approach that requires solving MILPs, and is able to give tight robustness verification bounds on large GBDTs with hundreds of deep trees. Comment: Hongge Chen and Huan Zhang contributed equally.
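    The single-tree case conveys the geometry: each leaf of a decision tree defines an axis-aligned box, so the minimal l_inf adversarial perturbation of an input x is the smallest l_inf distance from x to any box whose leaf predicts a different label. A minimal sketch with a hand-coded leaf list (the paper's algorithm extracts the boxes from the tree in linear time):

```python
import numpy as np

# Each leaf: (lower bounds, upper bounds, predicted label), over 2 features.
leaves = [
    (np.array([-np.inf, -np.inf]), np.array([0.5, np.inf]), 0),
    (np.array([0.5, -np.inf]), np.array([np.inf, 0.3]), 0),
    (np.array([0.5, 0.3]), np.array([np.inf, np.inf]), 1),
]

def linf_distance_to_box(x, lo, hi):
    # Per-coordinate shortfall (0 if x is already inside on that axis).
    return np.max(np.maximum(np.maximum(lo - x, x - hi), 0.0))

def min_adversarial_linf(x, y):
    return min(linf_distance_to_box(x, lo, hi)
               for lo, hi, label in leaves if label != y)

x = np.array([0.2, 0.1])
print(min_adversarial_linf(x, y=0))  # 0.3: move x[0] to 0.5 and x[1] to 0.3
```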