
    AI-native Interconnect Framework for Integration of Large Language Model Technologies in 6G Systems

    The evolution towards 6G architecture promises a transformative shift in communication networks, with artificial intelligence (AI) playing a pivotal role. This paper examines the seamless integration of Large Language Models (LLMs) and Generalized Pretrained Transformers (GPTs) within 6G systems. Their ability to grasp intent, strategize, and execute intricate commands will be pivotal in redefining network functionalities and interactions. Central to this is the AI Interconnect framework, designed to facilitate AI-centric operations within the network. Building on the continuously evolving state of the art, we present a new architectural perspective for the upcoming generation of mobile networks, in which LLMs and GPTs take center stage alongside traditional pre-generative AI and machine learning (ML) algorithms. This union melds tried-and-tested methods with transformative AI technologies. Along with a conceptual overview of this evolution, we examine the practical applications arising from such an integration. Through this paper, we envisage a symbiotic integration in which AI becomes the cornerstone of the next-generation communication paradigm, offering insights into the structural and functional facets of an AI-native 6G network.

    A subjective model for trustworthiness evaluation in the social Internet of Things

    The integration of social networking concepts into the Internet of Things (IoT) has led to the so-called Social Internet of Things (SIoT) paradigm, in which objects are capable of establishing social relationships autonomously with respect to their owners. The benefit is improved scalability of information/service discovery when the SIoT comprises huge numbers of heterogeneous nodes, similarly to what happens in social networks among humans. In this paper we focus on the problem of understanding how the information provided by other members of the SIoT should be processed so as to build a reliable system on the basis of the behavior of the objects. We define a subjective model for the management of trustworthiness that builds upon solutions proposed for P2P networks. Each node computes the trustworthiness of its friends on the basis of its own experience and of the opinions of the friends it has in common with the potential service providers. We employ a feedback system and combine the credibility and centrality of the nodes to evaluate the trust level. Preliminary simulations show the benefits of the proposed model in isolating almost all malicious nodes in the network.
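    The trust computation the abstract describes — blending a node's own experience with common friends' opinions, weighted by credibility and centrality — can be sketched as follows. This is an illustrative toy, not the paper's model; the function name, the `alpha` blending parameter, and the multiplicative weighting of credibility and centrality are assumptions.

```python
def trust_score(direct_experience, friend_opinions, alpha=0.5):
    """Toy subjective-trust sketch (hypothetical, not the paper's formula).

    direct_experience: value in [0, 1] from the node's own transactions.
    friend_opinions: list of (opinion, credibility, centrality) triples
        from friends in common with the potential service provider.
    alpha: assumed weight given to direct experience over recommendations.
    """
    if not friend_opinions:
        return direct_experience
    # Weight each common friend's opinion by that friend's credibility
    # and centrality, the two factors the abstract says are combined.
    weights = [cred * cent for _, cred, cent in friend_opinions]
    total = sum(weights)
    if total == 0:
        indirect = direct_experience  # no credible recommenders: fall back
    else:
        indirect = sum(op[0] * w for op, w in zip(friend_opinions, weights)) / total
    # Blend direct experience with the weighted indirect opinion.
    return alpha * direct_experience + (1 - alpha) * indirect
```

A low-credibility or peripheral friend thus contributes little to the aggregate, which is one way such a scheme can marginalize malicious recommenders.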

    Towards Message Brokers for Generative AI: Survey, Challenges, and Opportunities

    In today's digital world, Generative Artificial Intelligence (GenAI) such as Large Language Models (LLMs) is becoming increasingly prevalent, extending its reach across diverse applications. This surge in adoption has sparked a significant increase in demand for data-centric GenAI models, highlighting the necessity for robust data communication infrastructures. Central to this need are message brokers, which serve as essential channels for data transfer within various system components. This survey provides a comprehensive analysis of traditional and modern message brokers, offering a comparative study of prevalent platforms. Our study considers numerous criteria including, but not limited to, open-source availability, integrated monitoring tools, message prioritization mechanisms, capabilities for parallel processing, reliability, distribution and clustering functionalities, authentication processes, data persistence strategies, fault tolerance, and scalability. Furthermore, we explore the intrinsic constraints that the design and operation of each message broker might impose, recognizing that these limitations are crucial in understanding their real-world applicability. Finally, this study examines the enhancement of message broker mechanisms specifically for GenAI contexts, emphasizing the criticality of developing a versatile message broker framework. Such a framework would be poised for quick adaptation, catering to the dynamic and growing demands of GenAI in the foreseeable future. Through this dual-pronged approach, we intend to contribute a foundational compendium that can guide future innovations and infrastructural advancements in the realm of GenAI data communication.
    Comment: 20 pages, 181 references, 7 figures, 5 tables
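    Two of the surveyed criteria — topic-based routing and message prioritization — can be illustrated with a minimal in-process sketch. This toy class is an assumption for illustration only: it has none of the persistence, clustering, or fault-tolerance features real brokers provide, and lower priority numbers dequeue first purely by this sketch's convention.

```python
import queue
from collections import defaultdict

class MiniBroker:
    """Toy in-process broker: per-topic queues with message prioritization.

    Hypothetical illustration of two survey criteria; not a real broker
    (no persistence, distribution, authentication, or fault tolerance).
    """
    def __init__(self):
        self._topics = defaultdict(queue.PriorityQueue)
        self._seq = 0  # tie-breaker so equal-priority messages stay FIFO

    def publish(self, topic, message, priority=10):
        # In this sketch, a lower number means higher priority.
        self._topics[topic].put((priority, self._seq, message))
        self._seq += 1

    def consume(self, topic):
        # Return the highest-priority pending message, or None if empty.
        try:
            _, _, message = self._topics[topic].get_nowait()
            return message
        except queue.Empty:
            return None
```

Production brokers layer acknowledgements, consumer groups, and replication on top of this basic routing core, which is where the survey's comparative criteria come into play.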

    A non-deterministic approach to forecasting the trophic evolution of lakes

    Limnologists have long recognized that one of the goals of their discipline is to increase its predictive capability. In recent years, the role of prediction in applied ecology has escalated, mainly due to man's increased ability to change the biosphere. Such alterations often came with unplanned and noticeably negative side effects mushrooming from lack of proper attention to long-term consequences. Regression analysis of common limnological parameters has been successfully applied to develop predictive models relating the variability of limnological parameters to specific key causes. These approaches, though, are biased by the requirement of an a priori cause-effect assumption, oftentimes difficult to find in the complex, nonlinear relationships entangling ecological data. A set of quantitative tools that can help address current environmental challenges while avoiding such restrictions is currently being researched and developed within the framework of ecological informatics. One of these approaches, attempting to model the relationship between a set of inputs and known outputs, is based on genetic algorithms and programming (GP). This stochastic optimization tool is based on the process of evolution in natural systems and was inspired by a direct analogy to sexual reproduction and Charles Darwin's principle of natural selection. GP works through genetic algorithms that use selection and recombination operators to generate a population of equations. Thanks to a 25-year-long time series of regular limnological data, the deep, large, oligotrophic Lake Maggiore (Northern Italy) is the ideal case study to test the predictive ability of GP. Testing of GP on the multi-year data series of this lake has allowed us to verify the forecasting efficacy of the models emerging from GP application. In addition, this non-deterministic approach led to the discovery of non-obvious relationships between variables and enabled the formulation of new stochastic models.
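    The selection-and-recombination loop the abstract describes can be sketched as a toy genetic algorithm. As an assumption for brevity, candidate "equations" here are reduced to linear models y = a*x + b with a two-gene genome, whereas real GP evolves full expression trees; the population size, tournament selection, and mutation scale are likewise illustrative choices, not the paper's configuration.

```python
import random

def evolve(xs, ys, pop_size=50, generations=100, seed=0):
    """Toy GA sketch: evolve (a, b) in y = a*x + b by selection,
    recombination, and mutation. Hypothetical simplification of GP,
    which evolves whole equation trees rather than coefficient pairs."""
    rng = random.Random(seed)

    def fitness(ind):
        a, b = ind
        # Higher fitness = lower squared prediction error.
        return -sum((a * x + b - y) ** 2 for x, y in zip(xs, ys))

    # Random initial population of candidate equations.
    pop = [(rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Tournament selection: the fitter of two random candidates.
            return max(rng.sample(pop, 2), key=fitness)
        nxt = []
        for _ in range(pop_size):
            p1, p2 = select(), select()
            # Recombination: each gene comes from either parent ...
            child = (p1[0] if rng.random() < 0.5 else p2[0],
                     p1[1] if rng.random() < 0.5 else p2[1])
            # ... followed by a small Gaussian mutation.
            nxt.append((child[0] + rng.gauss(0, 0.1),
                        child[1] + rng.gauss(0, 0.1)))
        pop = nxt
    return max(pop, key=fitness)
```

Run on data generated from y = 2x + 1, the best surviving candidate should approximate those coefficients, mirroring how GP recovers predictive equations from a limnological time series.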

    Data Processing for Atmospheric Phase Interferometers

    This paper presents a detailed discussion of calibration procedures used to analyze data recorded from a two-element atmospheric phase interferometer (API) deployed at Goldstone, California. In addition, we describe the data products derived from those measurements that can be used for site intercomparison and atmospheric modeling. Simulated data are used to demonstrate the effectiveness of the proposed algorithm and to validate our procedure. A study of the effect of block size filtering is presented to justify our process for isolating atmospheric fluctuation phenomena from other system-induced effects (e.g., satellite motion, thermal drift). A simulated 24-hour interferometer phase data time series is analyzed to illustrate the step-by-step calibration procedure and desired data products.
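    The idea behind block-size filtering — removing slow system-induced trends so that short-term atmospheric fluctuations remain — can be sketched with a simple block-mean subtraction. This is a hypothetical stand-in for the paper's calibration pipeline, not its actual algorithm; the function name and the choice of per-block mean removal are assumptions.

```python
def detrend_blocks(phases, block_size):
    """Toy block filter: subtract each block's mean phase so slow drifts
    (e.g., satellite motion, thermal drift) are suppressed while faster
    atmospheric fluctuations within each block are preserved.

    Illustrative sketch only; the block size controls which timescales
    are treated as 'slow' and removed."""
    out = []
    for i in range(0, len(phases), block_size):
        block = phases[i:i + block_size]
        mean = sum(block) / len(block)
        out.extend(p - mean for p in block)
    return out
```

A smaller block size removes progressively faster trends, which is why the choice of block size must be justified against the timescales of the atmospheric phenomena being isolated.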