1,691 research outputs found

    Content-Aware User Clustering and Caching in Wireless Small Cell Networks

    Full text link
    In this paper, the problem of content-aware user clustering and content caching in wireless small cell networks is studied. In particular, a service delay minimization problem is formulated, aiming at optimally caching contents at the small cell base stations (SCBSs). To solve this optimization problem, we decouple it into two interrelated subproblems. First, a clustering algorithm is proposed that groups users with similar content popularity, so that, whenever possible, similar users are associated with the same SCBS. Second, a reinforcement learning algorithm is proposed that enables each SCBS to learn the popularity distribution of the contents requested by its group of users and to optimize its caching strategy accordingly. Simulation results show that, by exploiting the correlated popularity patterns of different users, the proposed scheme reduces the service delay by 42% and 27%, while achieving a higher offloading gain of up to 280% and 90%, respectively, compared to random caching and unclustered learning schemes.
    Comment: In the IEEE 11th International Symposium on Wireless Communication Systems (ISWCS) 2014
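    The abstract does not spell out the exact clustering or learning rules, so the sketch below is only a minimal illustration of the two-step idea under assumed stand-ins: users are grouped by their content-popularity vectors with plain k-means, and each SCBS then caches the most-requested contents of its assigned group, a greedy count-based substitute for the paper's reinforcement-learning caching step. All variable names, problem sizes, and the synthetic popularity data are hypothetical.

```python
# Illustrative sketch only: generic k-means clustering of users' popularity
# vectors plus a count-based per-SCBS cache, standing in for the abstract's
# clustering and reinforcement-learning steps. All parameters are made up.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_contents, n_scbs, cache_size = 60, 100, 4, 10

# Hypothetical data: each user's empirical content-popularity distribution.
user_popularity = rng.dirichlet(np.ones(n_contents) * 0.3, size=n_users)

def kmeans(X, k, iters=50):
    """Plain k-means: group users with similar popularity profiles."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each user to the nearest cluster center.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Step 1: cluster users so that similar users are served by the same SCBS.
labels = kmeans(user_popularity, n_scbs)

# Step 2: each SCBS estimates its group's popularity from observed requests
# and caches the top contents (greedy stand-in for the RL caching policy).
caches = {}
for s in range(n_scbs):
    members = np.where(labels == s)[0]
    if members.size == 0:          # guard against an empty cluster in this toy setup
        caches[s] = np.arange(cache_size)
        continue
    requests = np.concatenate([
        rng.choice(n_contents, size=200, p=user_popularity[u]) for u in members
    ])
    estimated_popularity = np.bincount(requests, minlength=n_contents)
    caches[s] = np.argsort(estimated_popularity)[-cache_size:]

# Rough check: how often a new request hits the serving SCBS's cache.
hit_rate = np.mean([
    rng.choice(n_contents, p=user_popularity[u]) in caches[labels[u]]
    for u in range(n_users) for _ in range(50)
])
print(f"cache hit rate with clustered caching: {hit_rate:.2f}")
```

    In this toy setup the clustered caches hit far more often than a single popularity estimate shared by all users would, which is the intuition behind the reported delay and offloading gains.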

    Towards Explainable and Language-Agnostic LLMs: Symbolic Reverse Engineering of Language at Scale

    Full text link
    Large language models (LLMs) have achieved a milestone that undeniably changed many held beliefs in artificial intelligence (AI). However, many limitations of these LLMs remain when it comes to true language understanding, limitations that are a byproduct of the underlying architecture of deep neural networks. Moreover, due to their subsymbolic nature, whatever knowledge these models acquire about how language works will always be buried in billions of microfeatures (weights), none of which is meaningful on its own, making such models hopelessly unexplainable. To address these limitations, we suggest combining the strength of symbolic representations with what we believe to be the key to the success of LLMs, namely a successful bottom-up reverse engineering of language at scale. As such, we argue for a bottom-up reverse engineering of language in a symbolic setting. Hints at what this project amounts to have been suggested by several authors, and we discuss here in some detail how it could be accomplished.
    Comment: Draft, preprint