
    Sharing of Unlicensed Spectrum by Strategic Operators

    Facing the challenge of meeting ever-increasing demand for wireless data, the industry is striving to exploit large swaths of spectrum which anyone can use for free without having to obtain a license. Major standards bodies are currently considering a proposal to retool and deploy Long Term Evolution (LTE) technologies in unlicensed bands below 6 GHz. This paper studies the fundamental questions of whether and how the unlicensed spectrum can be shared by intrinsically strategic operators without suffering from the tragedy of the commons. A class of general utility functions is considered. The spectrum sharing problem is formulated as a repeated game over a sequence of time slots. It is first shown that a simple static sharing scheme allows a given set of operators to reach a subgame perfect Nash equilibrium for mutually beneficial sharing. The question of how many operators will choose to enter the market is also addressed by studying an entry game. A sharing scheme which allows dynamic spectrum borrowing and lending between operators is then proposed to address time-varying traffic and is proved to achieve a perfect Bayesian equilibrium. Numerical results show that the proposed dynamic sharing scheme outperforms static sharing, which in turn achieves much higher revenue than uncoordinated full-spectrum sharing. Implications of the results for the standardization and deployment of LTE in unlicensed bands (LTE-U) are also discussed.
    Comment: To appear in the IEEE Journal on Selected Areas in Communications, Special Issue on Game Theory for Network
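    The headline comparison in this abstract, coordinated static sharing versus uncoordinated full-spectrum use, can be illustrated with a toy throughput model. The sketch below is not the paper's utility class: the Shannon-rate interference model, the bandwidth W, and the SNR value g are illustrative assumptions only.

```python
# Toy comparison of static (orthogonal) sharing vs. uncoordinated full-spectrum
# sharing for K operators in a band of width W. The rate model (Shannon capacity
# with the other operators treated as interference) is an illustrative assumption,
# not the general utility class analysed in the paper.
import math

def static_rate(W, K, g):
    """Each operator gets an exclusive W/K slice, so there is no cross-operator interference."""
    return (W / K) * math.log2(1 + g)

def uncoordinated_rate(W, K, g):
    """Every operator transmits over the full band; the other K-1 operators act as interference."""
    return W * math.log2(1 + g / (1 + (K - 1) * g))

W, g = 20.0, 10.0  # hypothetical 20 MHz band and linear SNR of 10
for K in (2, 4, 8):
    print(f"K={K}: static={static_rate(W, K, g):6.2f}  "
          f"uncoordinated={uncoordinated_rate(W, K, g):6.2f}  (toy rate units)")
```

    In this toy model the partitioned (static) allocation dominates uncoordinated sharing as K grows, which is the tragedy-of-the-commons effect the paper's coordinated schemes are designed to avoid.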

    Applications of time-series generative models and inference techniques

    In this dissertation, we apply deep generative modelling, amortised inference and reinforcement learning methods to real-world, practical phenomena, and we ask whether these techniques can be used to predict complex system dynamics, model biologically plausible behaviour, and guide decision making. In the past, probabilistic modelling and Bayesian inference techniques have been successfully applied in a wide array of fields, achieving success in financial market prediction, robotics, and the natural sciences. However, the use of generative models in these contexts has usually required a rigid set of linearity constraints or assumptions about the distributions used for modelling. Furthermore, inference in non-linear models can be very difficult to scale to high dimensions. In recent years, deep learning has been a key innovation in enabling non-linear function approximation. When applied to probabilistic modelling, deep non-linear models have significantly improved the generative capabilities of computer vision models. While this is an important step towards general artificial intelligence, there remains a gap between the successes of these early single-time-step deep generative models and the temporal models that will be required to deploy machine learning in the real world. We posit that deep non-linear time-series models and sequential inference are useful in a number of these complex domains. To test this hypothesis, we made methodological developments related to model learning and approximate inference, and we present experimental results that address several questions about the application of deep generative models.
    First, can we train a deep temporal model of complex dynamics to perform sufficiently accurate inference and prediction at run time? Here, "sufficient accuracy" means that the predictions and inferences made using our model lead to stronger performance on some downstream real-time task than a heuristic approach provides. Specifically, we model the hardware performance of a large compute cluster with a deep generative model and use that model to tackle the downstream task of improving the overall throughput of the cluster. This question is relevant to a range of similar applications that may use such modelling techniques to intervene in real time. For example, we may be interested in applying generative modelling and inference to design better trading algorithms with the goal of increasing returns, or we may wish to use a deep generative epidemiological model to determine government policies that help prevent the spread of disease. Simply put, we ask: are deep generative models powerful enough to be useful?
    Next, are deep state-space models important for the generative quality of animal-like behaviour? Given a perceptual dataset of animal behaviour, such as camera views of fruit-fly interactions or collections of human handwriting samples, can a deep generative model capture the latent variability underlying such behaviour? As a step towards artificial intelligence that mirrors humans and other biological organisms, we must assess whether deep generative modelling is a viable approach to capturing what may be one of the most stochastic and challenging phenomena to model.
    Finally, is inference a useful perspective on decision making and reinforcement learning? If so, can we improve the uncertainty estimation of the quantities used in classic reinforcement learning to take further advantage of an inference perspective? Answering these questions may help us determine whether a "Reinforcement Learning as Inference" framework, coupled with a distributional estimate of the sum of future rewards, can lead to better decision making in the control setting.
    Although our findings are positive on all of these questions, each comes with caveats. First, deep generative models must be accurate to be useful for downstream tasks. Second, modelling biologically plausible behaviour is difficult without additional partial supervision in the latent space. Third, while we have made orthogonal progress in using the inference perspective for policy learning and in leveraging a distributional estimate in reinforcement learning, it remains unclear how best to combine these two approaches. This thesis presents the progress made in tackling these challenges.
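    The class of models discussed in this abstract can be made concrete with a minimal deep state-space sketch: a latent state evolves through a learned non-linear transition network and emits observations through a learned emission network. The layer sizes and the diagonal-Gaussian parameterisation below are illustrative assumptions, not the models developed in the thesis.

```python
# Minimal sketch of a deep (non-linear) state-space generative model: a latent
# state z_t evolves via a learned transition network and emits an observation x_t
# via a learned emission network. Sizes and distributions are assumptions.
import torch
import torch.nn as nn

class DeepStateSpaceModel(nn.Module):
    def __init__(self, latent_dim=8, obs_dim=16, hidden=64):
        super().__init__()
        # p(z_t | z_{t-1}): mean and log-variance of a diagonal Gaussian
        self.transition = nn.Sequential(nn.Linear(latent_dim, hidden), nn.Tanh(),
                                        nn.Linear(hidden, 2 * latent_dim))
        # p(x_t | z_t): observation mean (unit variance kept implicit for brevity)
        self.emission = nn.Sequential(nn.Linear(latent_dim, hidden), nn.Tanh(),
                                      nn.Linear(hidden, obs_dim))

    def step(self, z_prev):
        mu, logvar = self.transition(z_prev).chunk(2, dim=-1)
        z_t = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterised sample
        x_t = self.emission(z_t)
        return z_t, x_t

model = DeepStateSpaceModel()
z = torch.zeros(1, 8)          # initial latent state
trajectory = []
for _ in range(10):            # roll the generative model forward for 10 time steps
    z, x = model.step(z)
    trajectory.append(x)
```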

    Robust computation of linear models by convex relaxation

    Consider a dataset of vector-valued observations that consists of noisy inliers, which are explained well by a low-dimensional subspace, along with some number of outliers. This work describes a convex optimization problem, called REAPER, that can reliably fit a low-dimensional model to this type of data. This approach parameterizes linear subspaces using orthogonal projectors, and it uses a relaxation of the set of orthogonal projectors to reach the convex formulation. The paper provides an efficient algorithm for solving the REAPER problem, and it documents numerical experiments which confirm that REAPER can dependably find linear structure in synthetic and natural data. In addition, when the inliers lie near a low-dimensional subspace, there is a rigorous theory that describes when REAPER can approximate this subspace.
    Comment: Formerly titled "Robust computation of linear models, or How to find a needle in a haystack"
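    One plausible reading of the relaxation described in this abstract is a convex program over symmetric matrices P that lie between 0 and the identity and have trace d (a relaxation of rank-d orthogonal projectors), minimizing the total unexplained residual. The CVXPY sketch below follows that reading; the synthetic data, dimensions, and default solver are assumptions, and the paper's own efficient algorithm and exact formulation should be consulted rather than this generic solve.

```python
# Sketch of a convex program in the spirit of REAPER: fit a d-dimensional subspace
# to noisy inliers plus outliers by optimising over a relaxation of the set of
# rank-d orthogonal projectors (symmetric P with 0 <= P <= I and trace P = d).
# Objective, constraints, and data are illustrative assumptions, not the paper's code.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
d, D, n_in, n_out = 2, 10, 80, 20
basis = np.linalg.qr(rng.standard_normal((D, d)))[0]            # true subspace basis
inliers = basis @ rng.standard_normal((d, n_in)) + 0.05 * rng.standard_normal((D, n_in))
outliers = rng.standard_normal((D, n_out))
X = np.hstack([inliers, outliers]).T                            # rows are observations

P = cp.Variable((D, D), symmetric=True)
residuals = cp.norm(X - X @ P, axis=1)                          # ||x_i - P x_i|| per point
prob = cp.Problem(cp.Minimize(cp.sum(residuals)),
                  [P >> 0, np.eye(D) - P >> 0, cp.trace(P) == d])
prob.solve()
# A subspace estimate can be read off from the top-d eigenvectors of the solution P.
```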

    Multi-Scale Matrix Sampling and Sublinear-Time PageRank Computation

    A fundamental problem arising in many applications in Web science and social network analysis is, given an arbitrary approximation factor $c > 1$, to output a set $S$ of nodes that with high probability contains all nodes of PageRank at least $\Delta$, and no node of PageRank smaller than $\Delta/c$. We call this problem {\sc SignificantPageRanks}. We develop a nearly optimal, local algorithm for the problem with runtime complexity $\tilde{O}(n/\Delta)$ on networks with $n$ nodes. We show that any algorithm for solving this problem must have runtime $\Omega(n/\Delta)$, rendering our algorithm optimal up to logarithmic factors. Our algorithm comes with two main technical contributions. The first is a multi-scale sampling scheme for a basic matrix problem that could be of interest in its own right. In the abstract matrix problem it is assumed that one can access an unknown {\em right-stochastic matrix} by querying its rows, where the cost of a query and the accuracy of the answers depend on a precision parameter $\epsilon$. At a cost proportional to $1/\epsilon$, the query returns a list of $O(1/\epsilon)$ entries and their indices that provide an $\epsilon$-precision approximation of the row. Our task is to find a set that contains all columns whose sum is at least $\Delta$, and omits any column whose sum is less than $\Delta/c$. Our multi-scale sampling scheme solves this problem with cost $\tilde{O}(n/\Delta)$, while traditional sampling algorithms would take time $\Theta((n/\Delta)^2)$. Our second main technical contribution is a new local algorithm for approximating personalized PageRank, which is more robust than the earlier ones developed in \cite{JehW03,AndersenCL06} and is highly efficient, particularly for networks with large in-degrees or out-degrees. Together with our multi-scale sampling scheme, we are able to solve the {\sc SignificantPageRanks} problem optimally.
    Comment: Accepted to Internet Mathematics journal for publication. An extended abstract of this paper appeared in WAW 2012 under the title "A Sublinear Time Algorithm for PageRank Computations"
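    As background for the personalized PageRank component, the sketch below shows the classic local "push"-style approximation in the spirit of the earlier algorithms the abstract cites. It is not the paper's more robust local algorithm, and the teleport probability alpha, the tolerance eps, and the handling of dangling nodes are simplifying assumptions.

```python
# Background sketch: local "push" approximation of personalized PageRank from a
# single seed node. Mass is moved from a residual vector r into the estimate p
# whenever a node's residual is large relative to its out-degree.
from collections import defaultdict

def approximate_ppr(graph, seed, alpha=0.15, eps=1e-4):
    """graph: dict node -> list of out-neighbours. Returns an approximate PPR vector."""
    p = defaultdict(float)                  # current PageRank estimate
    r = defaultdict(float, {seed: 1.0})     # residual probability mass still to push
    queue = [seed]
    while queue:
        u = queue.pop()
        out = graph.get(u, [])
        if not out or r[u] < eps * len(out):
            continue                        # dangling node or residual too small
        push = r[u]
        p[u] += alpha * push                # keep the teleport fraction at u
        r[u] = 0.0
        share = (1 - alpha) * push / len(out)
        for v in out:                       # spread the rest to out-neighbours
            threshold = eps * max(len(graph.get(v, [])), 1)
            was_below = r[v] < threshold
            r[v] += share
            if was_below and r[v] >= threshold:
                queue.append(v)
    return dict(p)

toy = {0: [1, 2], 1: [2], 2: [0]}
print(approximate_ppr(toy, seed=0))
```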

    Color-octet scalars at the LHC

    Color-octet scalars, if present at the TeV scale, will be produced in abundance at the LHC. We discuss in some detail the phenomenology of scalars in the (8,2)_{1/2} representation, recently identified by Manohar and Wise as an addition to the standard-model Higgs sector consistent with the principle of minimal flavor violation. Couplings of this multiplet to the Higgs lift the mass degeneracy among its states, possibly allowing for two-body decays of a heavier colored scalar to a lighter one and a gauge boson. We perform a renormalization group analysis of these couplings and find that limits from Tevatron searches leave little room for these decays. This fact, and the assumption of minimal flavor violation, lead us to study the case where the octets decay to the heaviest kinematically accessible fermion pairs. Focusing on pair-production events leading to (t t-bar t t-bar), (b b-bar b b-bar), and (b b-bar t t-bar) final states, we find that discovery at the LHC should be possible up to masses exceeding 1 TeV.
    Comment: 15 pages, 6 figures; corrected typos and added discussion of decays to b b-bar