
    Split Federated Learning for 6G Enabled-Networks: Requirements, Challenges and Future Directions

    Sixth-generation (6G) networks are expected to intelligently support a wide range of smart services and innovative applications. Such a context calls for heavy use of Machine Learning (ML) techniques, particularly Deep Learning (DL), to foster innovation and ease the deployment of intelligent network functions and operations that can fulfill the various requirements of the envisioned 6G services. Specifically, collaborative ML/DL deploys a set of distributed agents that train learning models together without sharing their data, thus improving data privacy and reducing time and communication overhead. This work provides a comprehensive study of how collaborative learning can be effectively deployed over 6G wireless networks. In particular, our study focuses on Split Federated Learning (SFL), a recently emerged technique that promises better performance than existing collaborative learning approaches. We first provide an overview of three emerging collaborative learning paradigms, namely federated learning, split learning, and split federated learning, as well as of 6G networks, their main vision, and the timeline of key developments. We then highlight the need for split federated learning in upcoming 6G networks across every aspect, including 6G technologies (e.g., intelligent physical layer, intelligent edge computing, zero-touch network management, intelligent resource management) and 6G use cases (e.g., smart grid 2.0, Industry 5.0, connected and autonomous systems). Furthermore, we review existing datasets and frameworks that can help in implementing SFL for 6G networks. We finally identify key technical challenges, open issues, and future research directions related to SFL-enabled 6G networks.
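
    To make the split federated learning idea concrete, the following is a minimal sketch in Python/NumPy, assuming a two-part linear model, synthetic data, and a mean-squared-error loss; the layer sizes, learning rate, and round structure are illustrative choices, not the protocol of any specific SFL paper. Each client keeps its raw data local and sends only cut-layer activations ("smashed data") to the main server, while a fed server periodically averages the client-side weights.

        # Minimal Split Federated Learning (SFL) sketch -- illustrative only.
        # Clients hold the front part of the model, the server holds the back,
        # and client-side weights are federated-averaged after each round.
        import numpy as np

        rng = np.random.default_rng(0)
        d_in, d_hid, n_clients, lr = 8, 4, 3, 0.01

        W_server = rng.normal(size=(d_hid, 1))   # server-side model part
        W_clients = [rng.normal(size=(d_in, d_hid)) for _ in range(n_clients)]

        # Synthetic local datasets: each client's (x, y) never leaves the client.
        data = [(rng.normal(size=(32, d_in)), rng.normal(size=(32, 1)))
                for _ in range(n_clients)]

        for _ in range(100):
            for k, (x, y) in enumerate(data):
                a = x @ W_clients[k]                 # client forward pass to the cut layer
                y_hat = a @ W_server                 # server completes the forward pass
                g_out = 2.0 * (y_hat - y) / len(x)   # MSE gradient at the output
                g_server = a.T @ g_out               # gradient w.r.t. server weights
                g_smashed = g_out @ W_server.T       # gradient returned to the client
                W_server -= lr * g_server
                W_clients[k] -= lr * (x.T @ g_smashed)  # local client backpropagation
            # Fed server averages client-side weights (the "federated" step of SFL).
            avg = sum(W_clients) / n_clients
            W_clients = [avg.copy() for _ in range(n_clients)]

    The point of the sketch is the communication pattern: only activations, their gradients, and client-side weights cross the network, never the raw training data.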

    Performance Improvements of EventIndex Distributed System at CERN

    The work presented in this thesis is framed within the EventIndex project of the ATLAS experiment, a large particle detector at the LHC (Large Hadron Collider) at CERN. The objective of the project is to catalog all the particle collisions, or events, recorded at the ATLAS detector, as well as those simulated, over the lifetime of the experiment. With this catalog, data can be characterized at event granularity, which is important for end users searching for and locating events. Automated checks can also be performed along the data recording and reprocessing chain, in order to verify its correctness and optimize future processing. Given the rise in production rates and total data volume expected for Run 3 (2022-2025) and the HL-LHC (late 2020s), a scalable system that also simplifies previous implementations is required. This thesis presents contributions to the project in the areas of distributed data collection, storage of massive volumes of data, and data access. A small amount of information (metadata) per event is indexed at CERN (Tier-0) and, in a distributed fashion, on the grid across all the computing centers of the ATLAS experiment (10 Tier-1 and around 70 Tier-2 sites). We present a new pull model for data collection on the grid that uses an object store as temporary storage, from which data can be dynamically selected and retrieved for ingestion into the final backend. We also present contributions to a new single large data store based on Big Data technologies such as HBase/Phoenix, able to sustain the required data ingestion rates and total volume, and which simplifies the architecture and resolves the limitations of the previous hybrid solutions. Finally, we present a Spark-based computing framework and tools for data access that address analytic workloads over large amounts of data, such as computing the overlaps between events of different datasets or detecting duplicate events.
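
    As an illustration of the Spark-based analytic workloads mentioned above, the following is a hedged PySpark sketch of duplicate-event detection and dataset-overlap computation over indexed event metadata; the input path and the column names (dataset, run_number, event_number) are hypothetical placeholders, not the actual EventIndex schema.

        # Hedged sketch of EventIndex-style analytics in PySpark; the schema
        # and paths are illustrative assumptions.
        from pyspark.sql import SparkSession
        from pyspark.sql import functions as F

        spark = SparkSession.builder.appName("eventindex-analytics").getOrCreate()

        # Event metadata, e.g. exported from the HBase/Phoenix store (hypothetical path).
        events = spark.read.parquet("/data/eventindex/metadata")

        # Duplicates: the same (run, event) pair indexed more than once in a dataset.
        duplicates = (events
            .groupBy("dataset", "run_number", "event_number")
            .count()
            .filter(F.col("count") > 1))

        # Overlap between two datasets: events present in both.
        a = events.filter(F.col("dataset") == "dataset_A").select("run_number", "event_number")
        b = events.filter(F.col("dataset") == "dataset_B").select("run_number", "event_number")

        print("duplicate groups:", duplicates.count())
        print("overlap:", a.intersect(b).count())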

    Using Foresight to develop eHealth intervention implementation strategy

    One of the key focus areas of the National Dementia Strategy, released by the Canadian government in 2019, is improving informal caregivers' quality of life through better support. While an array of services is available to support them, it is usually up to caregivers to find those services, and navigating a fragmented health and social support system can be challenging, time-consuming, frustrating, and often ineffective. Innovative approaches and eHealth interventions that provide easy, timely, and need-based access to knowledge resources enhance and safeguard care capacity among informal caregivers, reducing stress and depression levels, delaying nursing home placements, and improving mood and quality of life (Brodaty & Donkin, 2009). Innovations in technology are becoming a crucial element in improving support for and the well-being of family caregivers, but a number of social, cultural, ethical, and technical issues complicate the rapid emergence of new technologies, affecting their adoption, implementation, and scalability. Using a participatory foresight approach, this research project speculates on futures 15 years from now to explore and envision an implementation model for eHealth services for informal dementia caregivers in Ontario. At a time when technology innovations present significant challenges and opportunities, the purpose is to identify leverage points that will inspire and inform organizations, developers, researchers, healthcare providers, and innovators interested in translating knowledge into practice by designing sustainable and resilient eHealth interventions. This has been accomplished by understanding the needs of informal caregivers, the implications of emerging technologies, and the factors affecting the implementation of eHealth solutions that support informal caregivers.

    24th Nordic Conference on Computational Linguistics (NoDaLiDa)


    Machine learning as a service for high energy physics (MLaaS4HEP): a service for ML-based data analyses

    With the CERN LHC program underway, there has been an acceleration of data growth in the High Energy Physics (HEP) field, and the use of Machine Learning (ML) in HEP will be critical during the HL-LHC program, when the data produced will reach the exascale. ML techniques have been used successfully in many areas of HEP; nevertheless, developing an ML project and implementing it for production use is a highly time-consuming task that requires specific skills. Complicating this scenario is the fact that HEP data is stored in the ROOT data format, which is largely unknown outside of the HEP community. The work presented in this thesis focuses on the development of a Machine Learning as a Service (MLaaS) solution for HEP, aiming to provide a cloud service that allows HEP users to run ML pipelines via HTTP calls. These pipelines are executed by the MLaaS4HEP framework, which can read data, process data, and train ML models directly from ROOT files of arbitrary size held in local or distributed data sources. Such a solution provides HEP users who are not ML experts with a tool that allows them to apply ML techniques in their analyses in a streamlined manner. Over the years, the MLaaS4HEP framework has been developed, validated, and tested, and new features have been added. A first MLaaS solution was developed by automating the deployment of a platform equipped with the MLaaS4HEP framework. A service with APIs was then developed, so that a user, after being authenticated and authorized, can submit MLaaS4HEP workflows that produce trained ML models ready for the inference phase. A working prototype of this service currently runs on a virtual machine of INFN-Cloud and meets the requirements to be added to the INFN Cloud portfolio of services.
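
    To make the service pattern concrete, the sketch below shows how a client might submit an MLaaS4HEP workflow over HTTP; the host, endpoints, payload fields, and response format are hypothetical illustrations of the flow described above, not the prototype's actual API.

        # Hedged sketch of submitting an ML pipeline to an MLaaS service; the
        # URL, endpoints, and payload fields are assumptions, not the real API.
        import requests

        BASE_URL = "https://mlaas.example.infn.it"   # hypothetical service host
        TOKEN = "..."                                # token obtained after authentication

        workflow = {
            # ROOT files of arbitrary size, from local or distributed sources
            "files": ["root://eospublic.cern.ch//eos/opendata/sample.root"],
            "labels": "target",                      # branch used as the training label
            "model": "pytorch_classifier",           # hypothetical registered model name
            "params": {"epochs": 5, "batch_size": 256},
        }

        resp = requests.post(f"{BASE_URL}/submit", json=workflow,
                             headers={"Authorization": f"Bearer {TOKEN}"},
                             timeout=30)
        resp.raise_for_status()
        job_id = resp.json()["job_id"]               # hypothetical response field

        # Poll until the trained model is ready for the inference phase.
        status = requests.get(f"{BASE_URL}/status/{job_id}",
                              headers={"Authorization": f"Bearer {TOKEN}"},
                              timeout=30).json()
        print(status)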

    Cyber-Human Systems, Space Technologies, and Threats

    CYBER-HUMAN SYSTEMS, SPACE TECHNOLOGIES, AND THREATS is our eighth textbook in a series covering the world of UASs / CUAS / UUVs / SPACE. Other textbooks in our series are Space Systems Emerging Technologies and Operations; Drone Delivery of CBNRECy – DEW Weapons: Emerging Threats of Mini-Weapons of Mass Destruction and Disruption (WMDD); Disruptive Technologies with applications in Airline, Marine, Defense Industries; Unmanned Vehicle Systems & Operations On Air, Sea, Land; Counter Unmanned Aircraft Systems Technologies and Operations; Unmanned Aircraft Systems in the Cyber Domain: Protecting USA’s Advanced Air Assets, 2nd edition; and Unmanned Aircraft Systems (UAS) in the Cyber Domain: Protecting USA’s Advanced Air Assets, 1st edition. Our previous seven titles have received considerable global recognition in the field. (Nichols & Carter, 2022) (Nichols, et al., 2021) (Nichols R. K., et al., 2020) (Nichols R. , et al., 2020) (Nichols R. , et al., 2019) (Nichols R. K., 2018) (Nichols R. K., et al., 2022)

    PICT-DPA: A Quality-Compliance Data Processing Architecture to Improve the Performance of Integrated Emergency Care Clinical Decision Support System

    The Emergency Care System (ECS) is a critical component of health care systems, providing acute resuscitation and life-saving care. Because it is a time-sensitive care operation system, any delay or mistake in the decision-making of these emergency care (EC) functions can create additional risks of adverse events and clinical incidents. The Emergency Care Clinical Decision Support System (EC-CDSS) has been shown to improve the quality of the aforementioned EC functions. However, the literature is scarce on how to implement and evaluate the EC-CDSS with regard to the improvement of patient health outcomes (PHOs), the ultimate goal of ECS. The reasons are twofold: 1) a lack of clear connections between the implementation of EC-CDSS and PHOs, because the relevant quality attributes are unknown; and 2) a lack of clear identification of stakeholders and their decision processes. Both lead to the lack of a data processing architecture for an integrated EC-CDSS that can fulfill all quality attributes while satisfying all stakeholders’ information needs, with the goal of improving PHOs. This dissertation identified quality attributes (PICT: Performance of the decision support, Interoperability, Cost, and Timeliness) and stakeholders through a systematic literature review, and designed a new data processing architecture for EC-CDSS, called PICT-DPA, through design science research. PICT-DPA was evaluated with a prototype of an integrated PICT-DPA EC-CDSS, called PICTEDS, and a semi-structured user interview. The evaluation results demonstrated that PICT-DPA can improve the quality attributes of EC-CDSS while satisfying stakeholders’ information needs. This dissertation makes theoretical contributions through the identification of the quality attributes (with related metrics) and stakeholders of EC-CDSS, and through the PICT Quality Attribute model, which explains how EC-CDSSs may improve PHOs via the relationships between each quality attribute and PHOs. It also makes practical contributions by showing how quality attributes with metrics and varied stakeholders can guide the design, implementation, and evaluation of any EC-CDSS, and how the data processing architecture is general enough to guide the design of other decision support systems with similar quality attribute requirements.