288 research outputs found

    Living on the Edge: The Role of Proactive Caching in 5G Wireless Networks

    Full text link
    This article explores one of the key enablers of beyond-4G wireless networks leveraging small cell network deployments, namely proactive caching. Endowed with predictive capabilities and harnessing recent developments in storage, context awareness and social networks, peak traffic demands can be substantially reduced by proactively serving predictable user demands via caching at base stations and users' devices. To show the effectiveness of proactive caching, we examine two case studies which exploit the spatial and social structure of the network, where proactive caching plays a crucial role. First, in order to alleviate backhaul congestion, we propose a mechanism whereby files are proactively cached during off-peak periods based on file popularity and correlations among user and file patterns. Second, leveraging social networks and device-to-device (D2D) communications, we propose a procedure that exploits the social structure of the network by predicting the set of influential users to (proactively) cache strategic contents and disseminate them to their social ties via D2D communications. Exploiting this proactive caching paradigm, numerical results show that important gains can be obtained for each case study, with backhaul savings and a higher ratio of satisfied users of up to 22% and 26%, respectively. Higher gains can be further obtained by increasing the storage capability at the network edge. Comment: accepted for publication in IEEE Communications Magazine
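The first case study's idea, caching the most popular files at the edge during off-peak hours so that peak-hour requests are served locally instead of over the backhaul, can be illustrated with a minimal simulation. The Zipf popularity law, cache size and request count below are illustrative assumptions, not parameters from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: file popularity follows a Zipf-like law, a common
# assumption in caching studies (not a figure from the article).
n_files, cache_size, n_requests = 1000, 100, 10_000
ranks = np.arange(1, n_files + 1)
popularity = 1.0 / ranks          # Zipf exponent 1
popularity /= popularity.sum()

# Proactively cache the most popular files during off-peak hours.
cached = set(np.argsort(popularity)[::-1][:cache_size])

# Peak-hour requests: a hit is served from the edge cache, a miss
# goes over the backhaul.
requests = rng.choice(n_files, size=n_requests, p=popularity)
hits = sum(r in cached for r in requests)
print(f"fraction of peak requests served from the edge: {hits / n_requests:.1%}")
```

Even this toy setup shows the skewed-popularity effect the article exploits: caching 10% of the catalog absorbs well over half of the requests.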

    Bayesian Dictionary Learning for Single and Coupled Feature Spaces

    Get PDF
    Over-complete bases offer the flexibility to represent a much wider range of signals with more elementary basis atoms than the signal dimension. The use of over-complete dictionaries for sparse representation has recently become a trend and is increasingly recognized as providing high performance for applications such as denoising, image super-resolution, inpainting, compression, blind source separation and linear unmixing. This dissertation studies dictionary learning for single or coupled feature spaces and its application to image restoration tasks. A Bayesian strategy using a beta process prior is applied to solve both problems. First, we illustrate how to generalize the existing beta process dictionary learning method (BP) to learn a dictionary for a single feature space. The advantage of this approach is that the number of dictionary atoms and their relative importance may be inferred non-parametrically. Next, we propose a new beta process joint dictionary learning method (BP-JDL) for coupled feature spaces, where the learned dictionaries also reflect the relationship between the two spaces. Compared to previous coupled feature space dictionary learning algorithms, our algorithm not only provides dictionaries customized to each feature space, but also yields a more consistent and accurate mapping between the two feature spaces. This is due to the unique property of the beta process model that the sparse representation can be decomposed into values and dictionary atom indicators. The proposed algorithm is able to learn sparse representations that correspond to the same dictionary atoms with the same sparsity but different values in coupled feature spaces, thus ensuring a consistent and accurate mapping between them. Two applications, single image super-resolution and inverse halftoning, are chosen to evaluate the performance of the proposed Bayesian approach. In both cases, the Bayesian approach, whether for a single feature space or coupled feature spaces, outperforms state-of-the-art methods in comparative domains.

    The State of the Art of Information Integration in Space Applications

    Get PDF
    This paper aims to present a comprehensive survey on information integration (II) in space informatics. With the ever-increasing scale and dynamics of complex space systems, II has become essential in dealing with the complexity, changes, dynamics, and uncertainties of space systems. The applications of space II (SII) must address some distinctive functional requirements (FRs) of heterogeneity, networking, communication, security, latency, and resilience, yet few works thoroughly examine recent advances in SII. This survey helps to gain an understanding of the state of the art of SII, in the sense that (1) technical drivers for SII are discussed and classified; (2) existing works in space system development are analyzed in terms of their contributions to the space economy, divisions, activities, and missions; (3) enabling space information technologies are explored in the aspects of sensing, communication, networking, data analysis, and system integration; (4) the importance of first-time right (FTR) for the implementation of a space system is emphasized, the limitations of the digital twin (DT-I) as a technological enabler are discussed, and the concept of a digital triad (DT-II) is introduced as an information platform to overcome these limitations, together with a list of fundamental design principles; (5) research challenges and opportunities are discussed to promote SII and advance space informatics in the future.

    Nonparametric Bayesian Models

    Get PDF
    The analysis of real-world problems often requires robust and flexible models that can accurately represent the structure in the data. Nonparametric Bayesian priors allow the construction of such models, which can be used for complex real-world data. Nonparametric models, despite their name, can be defined as models that have infinitely many parameters. This thesis is about two types of nonparametric models. The first type is the latent class model (i.e. a mixture model) with infinitely many classes, which we construct using Dirichlet process mixtures (DPM). The second is the discrete latent feature model with infinitely many features, for which we use the Indian buffet process (IBP), a generalization of the DPM. Analytical inference is not possible in the models discussed in this thesis. The use of conjugate priors can often make inference somewhat more tractable, but for a given model the family of conjugate priors may not always be rich enough. Methodologically this thesis relies on Markov chain Monte Carlo (MCMC) techniques for inference, especially those which can be used in the absence of conjugacy. Chapter 2 introduces the basic terminology and notation used in the thesis. Chapter 3 presents the Dirichlet process (DP) and some infinite latent class models which use the DP as a prior. We first summarize different approaches for defining the DP, and describe several established MCMC algorithms for inference on DPM models. The Dirichlet process mixture of Gaussians (DPMoG) model has been extensively used for density estimation. We present an empirical comparison of conjugate and conditionally conjugate priors in the DPMoG, demonstrating that the latter can give better density estimates without significant additional computational cost.
    The mixtures of factor analyzers (MFA) model allows data to be modeled as a mixture of Gaussians with a reduced parametrization. We present the formulation of a nonparametric form of the MFA model, the Dirichlet process MFA (DPMFA). We utilize the DPMFA for clustering the action potentials of different neurons from extracellular recordings, a problem known as spike sorting. Chapter 4 presents the IBP and some infinite latent feature models which use the IBP as a prior. The IBP is a distribution over binary matrices with infinitely many columns. We describe different approaches for defining the distribution and present new MCMC techniques that can be used for inference on models which use it as a prior. Empirical results on a conjugate model show that the new methods perform as well as the established method of Gibbs sampling, but without the requirement for conjugacy. We demonstrate the performance of a non-conjugate IBP model by successfully learning the latent features of handwritten digits. Finally, we formulate a nonparametric version of the elimination-by-aspects (EBA) choice model using the IBP, and show that it can make accurate predictions about people's choices in a paired comparison task.
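The Indian buffet process described above can be simulated directly from its restaurant metaphor: customer i takes each previously sampled dish k with probability m_k/i (where m_k is the number of earlier customers who took it), then tries Poisson(alpha/i) new dishes. The sketch below implements this standard generative construction; the concentration value and customer count are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_ibp(n_customers, alpha):
    """Sample a binary feature matrix Z from the Indian buffet process prior.

    Rows are customers (data points), columns are dishes (latent features);
    the number of columns is random and unbounded a priori.
    """
    counts = []                       # m_k: how many customers took dish k
    rows = []
    for i in range(1, n_customers + 1):
        # Take each existing dish with probability m_k / i.
        row = [int(rng.random() < m / i) for m in counts]
        counts = [m + t for m, t in zip(counts, row)]
        # Then take a Poisson(alpha / i) number of brand-new dishes.
        n_new = rng.poisson(alpha / i)
        row += [1] * n_new
        counts += [1] * n_new
        rows.append(row)
    K = len(counts)
    Z = np.zeros((n_customers, K), dtype=int)
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row         # earlier rows are padded with zeros
    return Z

Z = sample_ibp(10, alpha=2.0)
print(Z.shape)                        # (10, K) for some random K
```

The expected number of columns grows as alpha times the harmonic number of the customer count, so the matrix stays finite for any finite dataset even though the prior places no upper bound on it.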
