11,532 research outputs found

    A Reference Model for Collaborative Business Intelligence Virtual Assistants

    Collaborative Business Analysis (CBA) is a methodology that brings together different stakeholders, including business users, analysts, and technical specialists, to collaboratively analyze data and gain insights into business operations. The primary objective of CBA is to encourage knowledge sharing and collaboration between the different groups involved in business analysis, as this can lead to a more comprehensive understanding of the data and better decision-making. CBA typically involves a range of activities, including data gathering and analysis, brainstorming, problem-solving, decision-making, and knowledge sharing. These activities may take place through various channels, such as in-person meetings, virtual collaboration tools, or online forums. This paper deals with virtual collaboration tools as an important part of a Business Intelligence (BI) platform. Collaborative Business Intelligence (CBI) tools are becoming more user-friendly, accessible, and flexible, allowing users to customize their experience and adapt the tools to their specific needs. The goal of a virtual assistant is to make data exploration accessible to a wider range of users and to reduce the time and effort required for data analysis. The paper describes a unified business intelligence semantic model, coupled with a data warehouse and a collaborative unit that employs data mining technology. Moreover, we propose a virtual assistant for CBI and a reference model of virtual tools for CBI, which consists of three components: conversational, data exploration, and recommendation agents. We believe that separating these three functional tasks makes it possible to structure the CBI problem and to apply relevant and productive models for human-like dialogue, text-to-command translation, and recommendation simultaneously. This three-part approach forms the basis for a virtual collaboration tool. CBI brings together people, processes, and technology so that everyone can share and leverage collective expertise, knowledge, and data to gain valuable insights for better decisions, allowing organizations to respond more quickly and effectively to changes in the market or in internal operations and to improve progress.
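    A minimal sketch of how the three-component reference model could be composed is shown below. All class and method names (ConversationalAgent, DataExplorationAgent, RecommendationAgent, CBIVirtualAssistant, handle) are illustrative assumptions, not part of the paper; the agents are stubs standing in for the dialogue, query, and recommendation models the abstract mentions.

```python
# Illustrative sketch of a three-agent CBI virtual assistant
# (conversational, data exploration, recommendation agents).
# All names and behaviors here are hypothetical placeholders.
from dataclasses import dataclass, field


@dataclass
class ConversationalAgent:
    """Turns a user utterance into a structured analytical intent."""
    def parse(self, utterance: str) -> dict:
        # Placeholder intent extraction; a real system would use an NLU model.
        return {"intent": "explore", "query": utterance}


@dataclass
class DataExplorationAgent:
    """Translates an intent into a query against the BI semantic model."""
    def run(self, intent: dict) -> list[dict]:
        # Placeholder: a real agent would issue queries against the warehouse.
        return [{"metric": "revenue", "value": 42}]


@dataclass
class RecommendationAgent:
    """Suggests follow-up analyses based on results and shared knowledge."""
    def suggest(self, results: list[dict]) -> list[str]:
        return [f"Drill down on {row['metric']}" for row in results]


@dataclass
class CBIVirtualAssistant:
    conversational: ConversationalAgent = field(default_factory=ConversationalAgent)
    exploration: DataExplorationAgent = field(default_factory=DataExplorationAgent)
    recommendation: RecommendationAgent = field(default_factory=RecommendationAgent)

    def handle(self, utterance: str) -> tuple[list[dict], list[str]]:
        # Dialogue -> data exploration -> recommendation, as in the reference model.
        intent = self.conversational.parse(utterance)
        results = self.exploration.run(intent)
        return results, self.recommendation.suggest(results)


if __name__ == "__main__":
    assistant = CBIVirtualAssistant()
    print(assistant.handle("Show revenue by region for Q3"))
```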

    FedCache: A Knowledge Cache-driven Federated Learning Architecture for Personalized Edge Intelligence

    Edge Intelligence (EI) allows Artificial Intelligence (AI) applications to run at the edge, where data analysis and decision-making can be performed in real time and close to data sources. To protect data privacy and unify data silos among end devices in EI, Federated Learning (FL) has been proposed for collaborative training of shared AI models across devices without compromising data privacy. However, prevailing FL approaches cannot guarantee model generalization and adaptation on heterogeneous clients. Recently, Personalized Federated Learning (PFL) has attracted growing attention in EI, as it enables a productive balance between the device-specific training requirements and the globally generalized optimization objectives needed for satisfactory performance. However, most existing PFL methods are based on the Parameters Interaction-based Architecture (PIA), represented by FedAvg, which incurs unaffordable communication burdens due to large-scale parameter transmission between devices and the edge server. In contrast, the Logits Interaction-based Architecture (LIA) updates model parameters via logits transfer and, compared with PIA, offers lightweight communication and support for heterogeneous on-device models. Nevertheless, previous LIA methods attempt to achieve satisfactory performance either by relying on unrealistic public datasets or by increasing communication overhead to transmit additional information beyond logits. To tackle this dilemma, we propose a knowledge cache-driven PFL architecture, named FedCache, which maintains a knowledge cache on the server for fetching personalized knowledge from the samples whose hashes are similar to each given on-device sample. During the training phase, ensemble distillation is applied to on-device models for constructive optimization with personalized knowledge transferred from the server-side knowledge cache.
    Comment: 14 pages, 6 figures, 9 tables. arXiv admin note: text overlap with arXiv:2301.0038
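    The following is a minimal sketch of the knowledge-cache idea described above, assuming a simple random-projection sign hash and Hamming-distance matching: the server caches per-sample logits keyed by a compact hash, and a device fetches knowledge from the cached entries whose hashes are closest to its own sample's hash, then uses the fetched logits as a distillation target. The hashing scheme, class and method names (KnowledgeCache, put, fetch), and the k-nearest-hash averaging are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of a server-side knowledge cache keyed by sample hashes.
# Hashing scheme and all names are assumptions, not the FedCache implementation.
import numpy as np


class KnowledgeCache:
    def __init__(self, feature_dim: int, hash_bits: int = 32, seed: int = 0):
        rng = np.random.default_rng(seed)
        # Random-projection sign hash: one compact binary code per sample.
        self.projection = rng.standard_normal((feature_dim, hash_bits))
        self.codes: list[np.ndarray] = []   # binary hash codes of cached samples
        self.logits: list[np.ndarray] = []  # corresponding cached logits

    def _hash(self, features: np.ndarray) -> np.ndarray:
        return (features @ self.projection > 0).astype(np.uint8)

    def put(self, features: np.ndarray, logits: np.ndarray) -> None:
        """Store a sample's logits on the server under its hash code."""
        self.codes.append(self._hash(features))
        self.logits.append(logits)

    def fetch(self, features: np.ndarray, k: int = 3) -> np.ndarray:
        """Return averaged logits of the k cached samples with closest hashes."""
        query = self._hash(features)
        dists = np.array([np.count_nonzero(code != query) for code in self.codes])
        nearest = np.argsort(dists)[:k]
        return np.mean([self.logits[i] for i in nearest], axis=0)


if __name__ == "__main__":
    cache = KnowledgeCache(feature_dim=16)
    rng = np.random.default_rng(1)
    # Devices upload (sample hash, logits) pairs to the server-side cache.
    for _ in range(10):
        x = rng.standard_normal(16)
        cache.put(x, rng.standard_normal(5))
    # A device fetches personalized knowledge for a new local sample and would
    # use it as the teacher signal for ensemble distillation of its local model.
    target_logits = cache.fetch(rng.standard_normal(16))
    print(target_logits)
```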