5,899 research outputs found
5Growth: An end-to-end service platform for automated deployment and management of vertical services over 5G networks
This article introduces the key innovations of the 5Growth service platform to empower vertical industries with an AI-driven automated 5G end-to-end slicing solution that allows industries to achieve their service requirements. Specifically, we present multiple vertical pilots (Industry 4.0, transportation, and energy), identify the key 5G requirements to enable them, and analyze existing technical and functional gaps as compared to current solutions. Based on the identified gaps, we propose a set of innovations to address them with: (i) support of 3GPP-based RAN slices by introducing a RAN slicing model and providing automated RAN orchestration and control; (ii) an AI-driven closed loop for automated service management with service level agreement assurance; and (iii) multi-domain solutions to expand service offerings by aggregating services and resources from different provider domains and also enable the integration of private 5G networks with public networks. This work has been partially supported by the EC H2020 5GPPP 5Growth project (Grant 856709).
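The AI-driven closed loop with SLA assurance mentioned in (ii) can be illustrated with a minimal monitor-decide-act sketch. All names here (the Slice class, the scale-out action, the 10 ms latency target) are hypothetical placeholders, not part of the 5Growth platform's actual API:

```python
# Minimal closed-loop sketch: monitor a slice KPI, compare it against an SLA
# target, and trigger a scaling action when the SLA is breached.
from dataclasses import dataclass

@dataclass
class Slice:
    name: str
    latency_ms: float      # latest measured KPI
    sla_latency_ms: float  # SLA target
    replicas: int = 1

def control_step(s: Slice) -> str:
    """One iteration of the monitor -> decide -> act loop."""
    if s.latency_ms > s.sla_latency_ms:  # SLA breached: scale out
        s.replicas += 1
        return "scale_out"
    return "no_action"

s = Slice("industry40-urllc", latency_ms=12.0, sla_latency_ms=10.0)
action = control_step(s)
```

A real platform would close the loop continuously, feeding fresh KPI measurements into each control step.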
Progress and prospects for accelerating materials science with automated and autonomous workflows
Accelerating materials research by integrating automation with artificial intelligence is increasingly recognized as a grand scientific challenge to discover and develop materials for emerging and future technologies. While the solid-state materials science community has demonstrated a broad range of high-throughput methods and effectively leveraged computational techniques to accelerate individual research tasks, revolutionary acceleration of materials discovery has yet to be fully realized. This perspective review presents a framework and ontology to outline a materials experiment lifecycle and visualize materials discovery workflows, providing a context for mapping the realized levels of automation and the next generation of autonomous loops in terms of scientific and automation complexity. Expanding autonomous loops to encompass larger portions of complex workflows will require integration of a range of experimental techniques as well as automation of expert decisions, including subtle reasoning about data quality, responses to unexpected data, and model design. Recent demonstrations of workflows that integrate multiple techniques and include autonomous loops, combined with emerging advancements in artificial intelligence and high-throughput experimentation, signal the imminence of a revolution in materials discovery.
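The autonomous loop the review describes, where a model proposes the next experiment, an instrument runs it, and the result feeds back into the model, can be sketched in a toy form. The quadratic objective stands in for a real measurement; everything here is illustrative, not from the paper:

```python
# Toy autonomous experiment loop: propose -> measure -> update, repeated.
import random

def measure(x: float) -> float:
    # Stand-in for an automated experiment (e.g. a measured material property).
    return -(x - 3.0) ** 2

def autonomous_loop(n_iters: int = 50, seed: int = 0) -> float:
    rng = random.Random(seed)
    best_x, best_y = 0.0, measure(0.0)
    for _ in range(n_iters):
        x = best_x + rng.uniform(-1, 1)  # model proposes the next experiment
        y = measure(x)                   # instrument runs it
        if y > best_y:                   # result updates the model's belief
            best_x, best_y = x, y
    return best_x

best = autonomous_loop()
```

Real autonomous workflows replace the random proposal step with an active-learning model (e.g. Bayesian optimization) and the `measure` function with robotic synthesis and characterization.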
OmniForce: On Human-Centered, Large Model Empowered and Cloud-Edge Collaborative AutoML System
Automated machine learning (AutoML) seeks to build ML models with minimal human effort. While considerable research has been conducted in the area of AutoML in general, aiming to take humans out of the loop when building artificial intelligence (AI) applications, scant literature has focused on how AutoML works well in open-environment scenarios such as the process of training and updating large models, industrial supply chains or the industrial metaverse, where people often face open-loop problems during the search process: they must continuously collect data, update data and models, satisfy the requirements of the development and deployment environment, support massive devices, modify evaluation metrics, etc. Addressing the open-environment issue with pure data-driven approaches requires considerable data, computing resources, and effort from dedicated data engineers, making current AutoML systems and platforms inefficient and computationally intractable. Human-computer interaction is a practical and feasible way to tackle the problem of open-environment AI. In this paper, we introduce OmniForce, a human-centered AutoML (HAML) system that yields both human-assisted ML and ML-assisted human techniques, to put an AutoML system into practice and build adaptive AI in open-environment scenarios. Specifically, we present OmniForce in terms of ML version management; pipeline-driven development and deployment collaborations; a flexible search strategy framework; and widely provisioned and crowdsourced application algorithms, including large models. Furthermore, the (large) models constructed by OmniForce can be automatically turned into remote services in a few minutes; this process is dubbed model as a service (MaaS). Experimental results obtained in multiple search spaces and real-world use cases demonstrate the efficacy and efficiency of OmniForce.
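The model-as-a-service (MaaS) idea, registering a trained model under a version and exposing the latest one through a generic serving interface, can be sketched minimally. The registry class and its methods are invented for illustration; they are not OmniForce's actual API:

```python
# Minimal MaaS sketch: a version-aware model registry that serves the latest
# registered model as a callable "service endpoint".
from typing import Callable, Dict, Tuple

class ModelRegistry:
    """Tracks model versions and serves the newest one for a given name."""
    def __init__(self) -> None:
        self._models: Dict[Tuple[str, int], Callable] = {}

    def register(self, name: str, version: int, model: Callable) -> None:
        self._models[(name, version)] = model

    def serve(self, name: str) -> Callable:
        # Pick the highest registered version, mimicking automatic deployment.
        version = max(v for (n, v) in self._models if n == name)
        return self._models[(name, version)]

registry = ModelRegistry()
registry.register("classifier", 1, lambda x: "cat")
registry.register("classifier", 2, lambda x: "dog")  # updated model
predict = registry.serve("classifier")               # serves version 2
```

In a production setting `serve` would stand up a network endpoint (e.g. an HTTP route) rather than return an in-process callable.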
Autonomic care platform for optimizing query performance
Background: As the amount of information in electronic health care systems increases, data operations become more complicated and time-consuming. Intensive Care platforms require timely processing of data retrievals to guarantee the continuous display of recent patient data. Physicians and nurses rely on this data for their decision making. Manual optimization of query executions has become difficult to handle due to the increased number of queries across multiple sources. Hence, more automated management is necessary to increase the performance of database queries. The autonomic computing paradigm promises an approach in which the system adapts itself and acts as a self-managing entity, thereby limiting human intervention. Despite the usage of autonomic control loops in network and software systems, this approach has not so far been applied to health information systems.
Methods: We extend the COSARA architecture, an infection surveillance and antibiotic management service platform for the Intensive Care Unit (ICU), with self-managed components to increase the performance of data retrievals. We used real-life ICU COSARA queries to analyse slow performance and measure the impact of optimizations. Each day more than 2 million COSARA queries are executed. Three control loops, which monitor the executions and take action, have been proposed: reactive, deliberative and reflective control loops. We focus on improvements of the execution time of microbiology queries directly related to the visual displays of patients' data on the bedside screens.
Results: The results show that autonomic control loops are beneficial for optimizing data executions in the ICU. Applying the reactive control loop reduces the average execution time of microbiology results by 8.61%. The combined application of the reactive and deliberative control loops yields an average query-time reduction of 10.92%, and the combination of reactive, deliberative and reflective control loops provides a reduction of 13.04%.
Conclusions: We found that a controlled reduction of query executions improves performance for the end user. The implementation of autonomic control loops in an existing health platform, COSARA, has a positive effect on timely data visualization for physicians and nurses.
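The reactive control loop's core idea, watching query execution times and suppressing redundant re-executions when the underlying data has not changed, can be sketched as follows. The class, threshold, and caching policy are hypothetical illustrations, not COSARA's actual implementation:

```python
# Reactive control-loop sketch: monitor execution time and skip redundant
# re-executions of queries whose source data version is unchanged.
import time

class ReactiveQueryLoop:
    def __init__(self, slow_threshold_s: float = 0.5):
        self.slow_threshold_s = slow_threshold_s
        self.cache = {}  # query -> (data_version, result)

    def execute(self, query, data_version, run_fn):
        cached = self.cache.get(query)
        if cached and cached[0] == data_version:
            return cached[1], "cache_hit"          # act: skip re-execution
        start = time.perf_counter()
        result = run_fn()
        elapsed = time.perf_counter() - start      # monitor: measure runtime
        status = "slow" if elapsed > self.slow_threshold_s else "ok"
        self.cache[query] = (data_version, result)
        return result, status

loop = ReactiveQueryLoop()
r1, s1 = loop.execute("SELECT * FROM microbiology", 1, lambda: [42])
r2, s2 = loop.execute("SELECT * FROM microbiology", 1, lambda: [42])
```

Deliberative and reflective loops would sit above this one, reasoning over accumulated timing statistics rather than reacting to single executions.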
Flight Simulator Model Integration for Supporting Pilot-in-the-Loop Testing in Model-Based Rotorcraft Design
Model-Based Design (MBD) enables iterative design practices and boosts the agility of air vehicle development programs. Flight simulators are extensively employed in these programs for evaluating the handling qualities of the designed platforms. In order to keep up with the agility provided by MBD, the integration of air vehicle models in fairly complex flight simulators needs to be addressed. The AVES Software Development Kit (SDK), the simulation software suite of the DLR Air Vehicle Simulator (AVES), enables tackling model integration starting from the modeler's desktop. Additionally, 2Simulate, the enabling real-time simulation infrastructure of the AVES SDK, provides an automated model integration workflow for MATLAB/Simulink models using Simulink Coder code generation facilities. This paper presents the successful employment of the AVES SDK and the 2Simulate model integration workflow for addressing integration challenges for Pilot-in-the-Loop Testing in AVES.
Machine Learning for Microcontroller-Class Hardware -- A Review
Advancements in machine learning have opened a new opportunity to bring intelligence to low-end Internet-of-Things nodes such as microcontrollers. Conventional machine learning deployments have high memory and compute footprints, hindering their direct deployment on ultra-resource-constrained microcontrollers. This paper highlights the unique requirements of enabling onboard machine learning for microcontroller-class devices. Researchers use a specialized model development workflow for resource-limited applications to ensure that the compute and latency budget is within the device limits while still maintaining the desired performance. We characterize a closed-loop, widely applicable workflow of machine learning model development for microcontroller-class devices and show that several classes of applications adopt a specific instance of it. We present both qualitative and numerical insights into different stages of model development by showcasing several use cases. Finally, we identify the open research challenges and unsolved questions demanding careful consideration moving forward.

Comment: Accepted for publication at IEEE Sensors Journal.
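The closed-loop development workflow characterized above, shrinking a model until it fits the device's memory budget while re-checking performance each round, can be sketched schematically. The numbers and the halving "compress" step are illustrative placeholders, not the paper's actual procedure:

```python
# Schematic closed-loop workflow: compress the model (e.g. quantization or
# pruning) until its footprint fits the microcontroller's memory budget.
def fits_budget(model_kb: float, budget_kb: float) -> bool:
    return model_kb <= budget_kb

def development_loop(model_kb: float, budget_kb: float, max_rounds: int = 10):
    rounds = 0
    while not fits_budget(model_kb, budget_kb) and rounds < max_rounds:
        model_kb /= 2  # compress: quantize weights / prune channels
        rounds += 1    # in practice, accuracy is re-evaluated here
    return model_kb, rounds

# Example: a 2 MB model must fit a 256 KB flash budget.
size_kb, rounds = development_loop(model_kb=2048.0, budget_kb=256.0)
```

The loop closes when the accuracy check fails: the developer then revisits the architecture or training step rather than compressing further.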
SliceOps: Explainable MLOps for Streamlined Automation-Native 6G Networks
Sixth-generation (6G) network slicing is the backbone of future communications systems. It inaugurates the era of extreme ultra-reliable and low-latency communication (xURLLC) and pervades the digitalization of various vertical immersive use cases. Since 6G inherently underpins artificial intelligence (AI), we propose a systematic and standalone slice termed SliceOps that is natively embedded in the 6G architecture and gathers and manages the whole AI lifecycle by monitoring, re-training, and deploying machine learning (ML) models as a service for the 6G slices. By leveraging machine learning operations (MLOps) in conjunction with eXplainable AI (XAI), SliceOps strives to cope with the opaqueness of black-box AI, using explanation-guided reinforcement learning (XRL) to fulfill transparency, trustworthiness, and interpretability in the network slicing ecosystem. This article starts by elaborating on the architectural and algorithmic aspects of SliceOps. The operation of the deployed cloud-native SliceOps is then exemplified via a latency-aware resource allocation problem. The deep RL (DRL)-based SliceOps agents within slices provide AI services that aim to allocate optimal radio resources and impede service-quality degradation. Simulation results demonstrate the effectiveness of SliceOps-driven slicing. The article afterward discusses the SliceOps challenges and limitations. Finally, the key open research directions corresponding to the proposed approach are identified.

Comment: 8 pages, 6 figures.
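The latency-aware resource allocation performed by the DRL-based agents can be illustrated with a toy epsilon-greedy learner in place of a full DRL agent: it learns which radio-resource allocation keeps a slice's latency lowest. The reward model and the action set are invented for illustration only:

```python
# Toy epsilon-greedy agent standing in for a DRL-based SliceOps agent.
import random

def latency_ms(prbs: int) -> float:
    # Pretend environment: more physical resource blocks -> lower latency.
    return 100.0 / prbs

def train_agent(actions=(10, 20, 50), episodes=200, eps=0.1, seed=1):
    rng = random.Random(seed)
    q = {a: 0.0 for a in actions}  # value estimate per allocation
    n = {a: 0 for a in actions}
    for _ in range(episodes):
        # Explore with probability eps, otherwise exploit the best estimate.
        a = rng.choice(actions) if rng.random() < eps else max(q, key=q.get)
        reward = -latency_ms(a)          # lower latency -> higher reward
        n[a] += 1
        q[a] += (reward - q[a]) / n[a]   # incremental mean update
    return max(q, key=q.get)

best_action = train_agent()
```

A real agent would observe a richer state (traffic load, interference, SLA headroom) and use a neural value function, but the monitor-allocate-reward loop is the same.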
Recent Advances in Machine Learning for Network Automation in the O-RAN
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY), https://creativecommons.org/licenses/by/4.0/

The evolution of network technologies has witnessed a paradigm shift toward open and intelligent networks, with the Open Radio Access Network (O-RAN) architecture emerging as a promising solution. O-RAN introduces disaggregation and virtualization, enabling network operators to deploy multi-vendor and interoperable solutions. However, managing and automating the complex O-RAN ecosystem presents numerous challenges. To address this, machine learning (ML) techniques have gained considerable attention in recent years, offering promising avenues for network automation in O-RAN. This paper presents a comprehensive survey of current research efforts on network automation using ML in O-RAN. We begin by providing an overview of the O-RAN architecture and its key components, highlighting the need for automation. Subsequently, we delve into O-RAN support for ML techniques. The survey then explores challenges in network automation using ML within the O-RAN environment, followed by the existing research studies discussing the application of ML algorithms and frameworks for network automation in O-RAN. The survey further discusses research opportunities by identifying important aspects where ML techniques can provide benefits.

Peer reviewed
Internet of robotic things : converging sensing/actuating, hypoconnectivity, artificial intelligence and IoT Platforms
The Internet of Things (IoT) concept is evolving rapidly and influencing new developments in various application domains, such as the Internet of Mobile Things (IoMT), the Autonomous Internet of Things (A-IoT), the Autonomous System of Things (ASoT), the Internet of Autonomous Things (IoAT), the Internet of Things Clouds (IoT-C) and the Internet of Robotic Things (IoRT), all of which are progressing by using IoT technology. The IoT influence represents new development and deployment challenges in different areas, such as seamless platform integration, context-based cognitive network integration, new mobile sensor/actuator network paradigms, things identification (addressing and naming in IoT), dynamic things discoverability and many others. The IoRT represents new convergence challenges that need to be addressed: on one side, the programmability and communication of multiple heterogeneous mobile/autonomous/robotic things for cooperation, along with their coordination, configuration, exchange of information, security, safety and protection. Developments in IoT heterogeneous parallel processing/communication and dynamic systems based on parallelism and concurrency require new ideas for integrating the intelligent "devices", collaborative robots (COBOTs), into IoT applications. Dynamic maintainability, self-healing, self-repair of resources, changing resource state, (re-)configuration and context-based IoT systems for service implementation and integration with IoT network service composition are of paramount importance when new "cognitive devices" become active participants in IoT applications. This chapter aims to provide an overview of the IoRT concept, technologies, architectures and applications, and comprehensive coverage of future challenges, developments and applications.
- …