24 research outputs found

    Antenna allocation and pricing in virtualized massive MIMO networks via Stackelberg game

    We study a resource allocation problem for the uplink of a virtualized massive multiple-input multiple-output (MIMO) system, where the antennas at the base station are priced and virtualized among the service providers (SPs). The mobile network operator (MNO), who owns the infrastructure, decides the price per antenna, and a Stackelberg game is formulated for the net profit maximization of the MNO while the minimum rate requirements of the SPs are satisfied. To solve the bi-level optimization problem of the MNO, we first derive the closed-form best responses of the SPs with respect to the pricing strategies of the MNO, so that the problem of the MNO can be reduced to a single-level optimization. Then, via transformations and approximations, we cast the MNO's problem with integer constraints into a signomial geometric program (SGP), and we propose an iterative algorithm based on successive convex approximation (SCA) to solve the SGP. Simulation results show that the proposed algorithm performs close to the global optimum. Moreover, the interactions between the MNO and the SPs in different scenarios are explored via simulations.
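
    The SCA step named in this abstract can be sketched generically. The following is a minimal, hypothetical Python illustration of successive convex approximation in difference-of-convex form, not the paper's signomial-GP algorithm: the names (sca_minimize, f1, f2) and the toy objective are assumptions for illustration only. What it shows is the pattern the abstract refers to, namely solving a sequence of convex surrogate problems built around the current iterate.

    import numpy as np
    from scipy.optimize import minimize

    def sca_minimize(f1, f2, grad_f2, x0, iters=30):
        """Minimize f1(x) - f2(x), with f1 and f2 convex, by repeatedly
        replacing f2 with its linearization at the current iterate; the
        linearized objective upper-bounds the true one (majorization)."""
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            f2x, g = f2(x), grad_f2(x)

            def surrogate(y, x=x, f2x=f2x, g=g):
                # f1(y) - [f2(x) + g^T (y - x)] >= f1(y) - f2(y), since f2 is convex
                return f1(y) - (f2x + g @ (np.asarray(y) - x))

            x = minimize(surrogate, x).x  # solve the convex subproblem
        return x

    # Toy run: minimize x^4 - 2x^2, whose minima lie at x = +/- 1.
    x_star = sca_minimize(lambda y: float(y[0] ** 4),
                          lambda y: 2.0 * float(y[0] ** 2),
                          lambda y: np.array([4.0 * y[0]]),
                          x0=[0.5])
    print(x_star)  # approaches [1.]

    In the paper's setting, each iteration would presumably instead solve a convex (geometric-program) approximation of the SGP, but the surrogate-then-resolve loop is the same.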

    Fourth NASA Goddard Conference on Mass Storage Systems and Technologies

    This report contains copies of all the technical papers received in time for publication just prior to the Fourth Goddard Conference on Mass Storage Systems and Technologies, held March 28-30, 1995, at the University of Maryland, University College Conference Center, in College Park, Maryland. This series of conferences continues to serve as a unique medium for the exchange of information on topics relating to the ingestion and management of substantial amounts of data and the attendant problems involved. This year's discussion topics include new storage technology, stability of recorded media, performance studies, storage system solutions, the National Information Infrastructure (Infobahn), the future of storage technology, and lessons learned from various projects. There will also be an update on the IEEE Mass Storage System Reference Model Version 5, on which the final vote was taken in July 1994.

    Efficient Passive Clustering and Gateway Selection in MANETs

    Passive clustering does not employ control packets to collect topological information in ad hoc networks. In our proposal, we avoid frequent changes in the cluster architecture caused by repeated election and re-election of cluster heads and gateways. Our primary objective has been to make Passive Clustering more practical by employing an optimal number of gateways and reducing the number of rebroadcast packets.

    Age of Information Optimization for Timeliness in Communication Networks

    With the emergence of technologies such as autonomous vehicular systems, holographic communications, remote surgery, and high-frequency automated trading, timeliness of information has become more important than ever. Most traditional performance metrics, such as delay or throughput, are not sufficient to measure timeliness. For that reason, age of information (AoI) has recently been introduced as a new performance metric to quantify timeliness in communication networks. In this dissertation, we consider timely update delivery problems in communication networks under various system settings. First, we introduce the concept of soft updates, where, different from the existing literature, updates are soft: they begin reducing the age immediately but drop it gradually over time. Our setting models human interactions, where updates are soft, and also social media interactions, where an update consists of viewing and digesting many small pieces of posted information of varying importance, relevance, and interest to the receiver. For a given total system duration, number of updates, and total allowed update duration, we find the optimum start times of the soft updates and their optimum durations to minimize the overall age.

    Then, we consider an information updating system where not only the timeliness but also the quality of the updates is important. Here, we use distortion as a proxy for quality and model distortion as a decreasing function of the processing time spent generating the updates. Processing longer at the transmitter results in a better-quality (lower-distortion) update, but it causes the update to age in the process. We determine age-optimal policies by characterizing the update request times at the receiver and the update processing times at the transmitter, subject to constant or age-dependent distortion constraints on each update. Next, unlike most of the existing AoI literature, where transmission times follow a given distribution, we design the transmission times through source coding schemes by assigning a codeword length to each status update. To further improve timeliness, we propose selective encoding schemes where only the most probable updates are transmitted. For the remaining, least probable updates, we consider schemes where these updates are never sent, randomly sent, or sent via an empty symbol. For all these encoding schemes, we determine the optimal number of encoded updates and their corresponding age-optimal real-valued codeword lengths to minimize the average age at the receiver. Then, we study the generation of partial updates, which carry less information than the original updates but have shorter transmission times. Our aim is to find the age-optimal partial update generation process and the corresponding age-optimal real-valued codeword lengths for the partial updates while maintaining a desired level of fidelity between the original and partial updates.

    Next, we consider information freshness in a cache updating system consisting of a source, cache(s), and a user. Here, the user may receive an outdated file depending on the freshness status of the file at the cache. We characterize the binary freshness metric at the end user and propose an alternating-maximization-based method to optimize the overall freshness at the end user, subject to total update rate constraints at the cache(s) and the user. Then, we study a caching system with a limited storage capacity at the cache. Here, the user either gets files from the cache, which can sometimes be outdated, or gets fresh files directly from the source at the expense of additional transmission times, which inherently decrease freshness. We show that when the total update rate and the storage capacity at the cache are limited, it is optimal to get the frequently changing files, and the files with relatively small transmission times, directly from the source and to store the remaining files at the cache. Next, we focus on information freshness in structured gossip networks, where, in addition to the updates obtained from the source, the end nodes share their local versions of the updates via gossiping to further improve freshness. Using a stochastic hybrid systems (SHS) approach, we determine the information freshness in arbitrarily connected gossip networks. As the number of nodes grows large, we find the scaling of information freshness in disconnected, ring, and fully connected network topologies. Further, we consider clustered gossip networks, where multiple clusters of structured gossip networks are connected to the source through cluster heads, and find the optimal cluster sizes numerically.

    Then, we consider the problem of timely tracking of multiple counting random processes via exponential (Poisson) inter-sampling times, subject to a total sampling rate constraint. A specific example is how a citation index such as Google Scholar should update the citation counts of individual researchers to keep the entire index as up-to-date as possible. We model the citation arrival profile of each researcher as a counting process with a different mean, and take the long-term average difference between the actual citation numbers and the citation numbers according to the latest updates as a measure of timeliness. We show that, to minimize this difference metric, Google Scholar should allocate its total update capacity to researchers in proportion to the square roots of their mean citation arrival rates (see the sketch after this abstract). Next, we consider the problem of timely tracking of multiple binary random processes via rate-limited Poisson sampling. As a specific example, we consider timely tracking of the infection status (e.g., COVID-19) of individuals in a population. Here, a health care provider wants to detect infected and recovered people as quickly as possible. We measure the timeliness of the tracking process as the long-term average difference between the actual infection status of people and their real-time estimates at the health care provider, which are based on the most recent test results. For given infection and recovery rates of individuals, we find the Poisson testing rates for individuals that minimize this difference. We observe that when the total test rate is limited, instead of testing everyone, only a portion of the population should be tested. Finally, we consider a communication system with multiple information sources that generate binary status updates, which in practical applications may indicate an anomaly (e.g., fire) or an infection status (e.g., COVID-19). Each node exhibits an anomaly or infection with probability p. To send the updates generated by these sources as timely as possible, we propose a group updating method inspired by group testing, but with the goal of minimizing the overall average age as opposed to the average number of tests (updates). We show that when the probability p is small, the group updating method achieves a lower average age than sequential updating methods.
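
    The square-root allocation rule stated in the abstract admits a short numerical sketch. The following hypothetical Python snippet (the function name sqrt_allocation and the example rates are illustrative assumptions, not the dissertation's code) splits a total update-rate budget across counting processes in proportion to the square roots of their mean arrival rates:

    import numpy as np

    def sqrt_allocation(mu, total_rate):
        """Split a total sampling-rate budget across counting processes
        in proportion to the square roots of their mean arrival rates."""
        w = np.sqrt(np.asarray(mu, dtype=float))
        return total_rate * w / w.sum()

    # Example: mean citation arrival rates 1, 4, and 16 with a budget of
    # 10 updates per unit time yield rates proportional to [1, 2, 4]:
    print(sqrt_allocation([1.0, 4.0, 16.0], 10.0))  # ~[1.43, 2.86, 5.71]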

    Volume I. Introduction to DUNE

    The preponderance of matter over antimatter in the early universe, the dynamics of the supernovae that produced the heavy elements necessary for life, and whether protons eventually decay: these mysteries at the forefront of particle physics and astrophysics are key to understanding the early evolution of our universe, its current state, and its eventual fate. The Deep Underground Neutrino Experiment (DUNE) is an international world-class experiment dedicated to addressing these questions as it searches for leptonic charge-parity symmetry violation, stands ready to capture supernova neutrino bursts, and seeks to observe nucleon decay as a signature of a grand unified theory underlying the standard model. The DUNE far detector technical design report (TDR) describes the DUNE physics program and the technical designs of the single- and dual-phase DUNE liquid argon TPC far detector modules. This TDR is intended to justify the technical choices for the far detector that flow down from the high-level physics goals through requirements at all levels of the Project. Volume I contains an executive summary that introduces the DUNE science program, the far detector and the strategy for its modular designs, and the organization and management of the Project. The remainder of Volume I provides more detail on the science program that drives the choice of detector technologies and on the technologies themselves. It also introduces the designs for the DUNE near detector and the DUNE computing model, for which DUNE is planning design reports. Volume II of this TDR describes DUNE's physics program in detail. Volume III describes the technical coordination required for the far detector design, construction, installation, and integration, and its organizational structure. Volume IV describes the single-phase far detector technology. A planned Volume V will describe the dual-phase technology.
