Reinforcement Learning for Automatic Test Case Prioritization and Selection in Continuous Integration
Testing in Continuous Integration (CI) involves test case prioritization,
selection, and execution at each cycle. Selecting the most promising test cases
to detect bugs is hard when the impact of committed code changes is
uncertain, or when traceability links between code and tests are not
available. This paper introduces Retecs, a new method for automatically
learning test case selection and prioritization in CI with the goal of minimizing
the round-trip time between code commits and developer feedback on failed test
cases. The Retecs method uses reinforcement learning to select and prioritize
test cases according to their duration, time since last execution, and failure
history. In a constantly changing environment, where new test cases are created
and obsolete test cases are deleted, the Retecs method learns to prioritize
error-prone test cases higher under guidance of a reward function and by
observing previous CI cycles. By applying Retecs on data extracted from three
industrial case studies, we show for the first time that reinforcement learning
enables fruitful automatic adaptive test case selection and prioritization in
CI and regression testing.

Comment: Spieker, H., Gotlieb, A., Marijan, D., & Mossige, M. (2017).
Reinforcement Learning for Automatic Test Case Prioritization and Selection
in Continuous Integration. In Proceedings of the 26th International Symposium
on Software Testing and Analysis (ISSTA'17) (pp. 12-22). ACM.
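The reward-driven prioritization idea described above can be sketched minimally as follows. This is a hypothetical illustration, not the paper's exact formulation: the feature choices, linear scoring, and update rule are assumptions standing in for the reinforcement learning agent.

```python
# Minimal sketch (assumed, illustrative) of reward-driven test prioritization
# in the spirit of Retecs: each test case is scored from its duration, the
# number of CI cycles since it last ran, and its recent failure count; the
# weights are nudged by a reward signal after each cycle.

def features(test):
    # Shorter tests, long-unexecuted tests, and recently failing tests
    # all get larger feature values.
    return [1.0 / (1.0 + test["duration"]),
            test["cycles_since_run"],
            test["recent_failures"]]

def score(weights, test):
    return sum(w * f for w, f in zip(weights, features(test)))

def prioritize(weights, tests):
    # Highest score runs first.
    return sorted(tests, key=lambda t: score(weights, t), reverse=True)

def update(weights, test, reward, lr=0.1):
    # Reinforce the features of tests whose high ranking paid off
    # (reward > 0 when the test found a failure).
    return [w + lr * reward * f for w, f in zip(weights, features(test))]

weights = [0.0, 0.0, 0.0]
suite = [
    {"name": "t1", "duration": 5, "cycles_since_run": 1, "recent_failures": 3},
    {"name": "t2", "duration": 1, "cycles_since_run": 4, "recent_failures": 0},
]
ranked = prioritize(weights, suite)
# Simulate one CI cycle: t1 fails again, so its features are reinforced.
weights = update(weights, suite[0], reward=1.0)
assert score(weights, suite[0]) > score(weights, suite[1])
```

After a few such cycles the failure-history feature dominates the ordering, which is the adaptive behavior the abstract describes.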
Usage of Network Simulators in Machine-Learning-Assisted 5G/6G Networks
Machine Learning (ML) is expected to be an important driver of future
communications due to its anticipated performance on complex problems.
However, the application of ML to networking systems raises concerns
among network operators and other stakeholders, especially regarding
trustworthiness and reliability. In this paper, we examine the role of network
simulators for bridging the gap between ML and communications systems. In
particular, we present an architectural integration of simulators in ML-aware
networks for training, testing, and validating ML models before being applied
to the operative network. Moreover, we provide insights on the main challenges
resulting from this integration and discuss how they can be overcome.
Finally, we illustrate the integration of network simulators into
ML-assisted communications through a proof-of-concept testbed implementation of
a residential Wi-Fi network.
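The simulator-in-the-loop workflow described above can be sketched as a promotion gate: a candidate ML model is exercised against a network simulator and only deployed to the operative network if its error stays within tolerance. The toy simulator, model, and threshold below are assumptions for illustration, not a real ns-3 or Wi-Fi testbed.

```python
# Illustrative sketch (assumed) of validating an ML model against a network
# simulator before applying it to the operative network.

def simulate_throughput(load):
    # Toy "simulator": throughput saturates as offered load grows.
    return load / (1.0 + 0.1 * load)

def candidate_model(load):
    # Toy "ML model": a linear approximation learned elsewhere.
    return 0.8 * load

def validate(model, simulator, loads, tolerance=0.5):
    # Compare model predictions with simulator outputs over test loads.
    errors = [abs(model(x) - simulator(x)) for x in loads]
    return max(errors) <= tolerance

loads = [0.5, 1.0, 2.0]
if validate(candidate_model, simulate_throughput, loads):
    print("model promoted to operative network")
else:
    print("model rejected; retrain against simulator traces")
```

The same gate structure extends naturally to training and testing phases, which is the architectural role the paper assigns to simulators.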
Scenarios for the development of smart grids in the UK: literature review
Smart grids are expected to play a central role in any transition to a low-carbon energy future, and much research is currently underway on practically every area of smart grids. However, it is evident that even basic aspects, such as theoretical and operational definitions, are yet to be agreed upon and clearly defined. Some aspects (efficient management of supply, including intermittent supply, two-way communication between the producer and user of electricity, use of IT technology to respond to and manage demand, and ensuring safe and secure electricity distribution) are more commonly accepted than others (such as smart meters) in defining what comprises a smart grid.
It is clear that smart grid developments enjoy political and financial support both at UK and EU levels, and from the majority of related industries. The reasons for this vary and include the hope that smart grids will facilitate the achievement of carbon reduction targets, create new employment opportunities, and reduce costs relevant to energy generation (fewer power stations) and distribution (fewer losses and better stability). However, smart grid development depends on additional factors, beyond the energy industry. These relate to issues of public acceptability of relevant technologies and associated risks (e.g. data safety, privacy, cyber security), pricing, competition, and regulation; implying the involvement of a wide range of players such as the industry, regulators and consumers.
The above constitute a complex set of variables and actors, and interactions between them. To explore possible pathways for smart grid deployment, scenarios are particularly well suited, as they can incorporate several parameters and variables into a coherent storyline. Scenarios have been previously used in the context of smart grids, but have traditionally focused on factors such as economic growth or policy evolution. Important additional socio-technical aspects of smart grids emerge from the literature review in this report and therefore need to be incorporated into our scenarios. These can be grouped into four (interlinked) main categories: supply side aspects, demand side aspects, policy and regulation, and technical aspects.
A Survey on Explainable AI for 6G O-RAN: Architecture, Use Cases, Challenges and Research Directions
The recent O-RAN specifications promote the evolution of RAN architecture by
function disaggregation, adoption of open interfaces, and instantiation of a
hierarchical closed-loop control architecture managed by RAN Intelligent
Controller (RIC) entities. This paves the way for novel data-driven network
management approaches based on programmable logic. Aided by Artificial
Intelligence (AI) and Machine Learning (ML), novel solutions targeting
traditionally unsolved RAN management issues can be devised. Nevertheless, the
adoption of such smart and autonomous systems is limited by the current
inability of human operators to understand the decision process of such AI/ML
solutions, affecting their trust in such novel tools. eXplainable AI (XAI) aims
at solving this issue, enabling human users to better understand and
effectively manage the emerging generation of artificially intelligent schemes,
reducing the human-to-machine barrier. In this survey, we provide a summary of
the XAI methods and metrics before studying their deployment over the O-RAN
Alliance RAN architecture along with its main building blocks. We then present
various use-cases and discuss the automation of XAI pipelines for O-RAN as well
as the underlying security aspects. We also review some projects/standards that
tackle this area. Finally, we identify different challenges and research
directions that may arise from the heavy adoption of AI/ML decision entities in
this context, focusing on how XAI can help to interpret, understand, and
improve trust in O-RAN operational networks.

Comment: 33 pages, 13 figures
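One family of XAI methods covered by such surveys is model-agnostic feature attribution. A minimal occlusion-style sketch is shown below; the "RAN KPI" model and feature names are illustrative assumptions, not O-RAN interfaces or a specific method from the survey.

```python
# Illustrative sketch (assumed) of occlusion-based feature attribution:
# replace each input feature with a baseline value and measure how much the
# black-box prediction changes.

def kpi_model(features):
    # Toy black-box predictor of, say, cell throughput.
    prb_util, users, interference = features
    return 100.0 - 50.0 * prb_util - 2.0 * users - 10.0 * interference

def attribution(model, x, baseline):
    ref = model(x)
    scores = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline[i]  # occlude feature i
        scores.append(abs(ref - model(perturbed)))
    return scores

x = [0.5, 20, 0.25]   # moderate PRB utilisation, 20 users, some interference
baseline = [0.0, 0, 0.0]
scores = attribution(kpi_model, x, baseline)
# The largest score marks the feature driving the prediction most;
# here the user count dominates.
```

Attributions like these are what would let a human operator sanity-check an AI/ML control decision in a RIC, which is the trust gap the survey targets.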
Adoption of Big Data and AI methods to manage medication administration and intensive care environments
Artificial Intelligence (AI) has proven to be very helpful in different areas, including the
medical field. One important parameter for healthcare professionals’ decision-making
process is blood pressure, specifically mean arterial pressure (MAP). The application
of AI in medicine, more specifically in Intensive Care Units (ICU) has the potential to
improve the efficiency of healthcare and boost telemedicine operations with access to
real-time predictions from remote locations. Operations that once required
the presence of a healthcare professional can be performed at a distance,
which, in the face of the recent COVID-19 pandemic, proved to be crucial.
This dissertation presents a solution to develop an AI system capable of accurately
predicting MAP values. Many ICU patients suffer from sepsis or septic shock, and they
can be identified by the need for vasopressors, such as noradrenaline, to keep their MAP
above 65 mm Hg. The presented solution facilitates early interventions, thereby minimising
the risk to patients.
This study reviews various machine learning (ML) models, training them to
predict MAP values. One challenge is to compare how the different models
behave during training and to choose the most promising one for testing in a
controlled environment. The dataset used to train the models contains data
identical to that generated by bedside monitors, which ensures that the
models' predictions align with real-world scenarios. The generated medical
data is processed by a separate component that performs data cleaning, after
which it is directed to the application responsible for loading and
classifying the data and applying the ML model. To increase trust between
healthcare professionals and the system, the solution is also intended to
provide insights into how its results are achieved.
The solution was integrated, for validation, with one of the telemedicine hubs deployed
by the European project ICU4Covid through its CPS4TIC component.
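The kind of MAP forecasting described above can be sketched as a short-window trend fit with an alert at the 65 mmHg vasopressor threshold. The window length, least-squares model, and threshold handling below are illustrative assumptions, not the dissertation's actual models.

```python
# Illustrative sketch (assumed): least-squares trend over recent MAP
# readings, with an alert when the forecast drops below 65 mmHg.

def fit_trend(values):
    # Ordinary least squares for y = a + b*t over t = 0..n-1.
    n = len(values)
    t_mean = (n - 1) / 2.0
    y_mean = sum(values) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(values))
    den = sum((t - t_mean) ** 2 for t in range(n))
    b = num / den
    a = y_mean - b * t_mean
    return a, b

def forecast_map(window, steps_ahead=1):
    a, b = fit_trend(window)
    return a + b * (len(window) - 1 + steps_ahead)

readings = [78, 75, 73, 70, 68]       # MAP in mmHg, most recent last
predicted = forecast_map(readings, steps_ahead=2)
if predicted < 65:
    print(f"alert: predicted MAP {predicted:.1f} mmHg below 65 mmHg")
```

An early alert of this shape is what enables the early interventions the abstract mentions; a production system would use richer features from the bedside monitors.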
RLOps: Development Life-cycle of Reinforcement Learning Aided Open RAN
Radio access network (RAN) technologies continue to witness massive growth,
with Open RAN gaining the most recent momentum. In the O-RAN specifications,
the RAN intelligent controller (RIC) serves as an automation host. This article
introduces principles for machine learning (ML), in particular, reinforcement
learning (RL) relevant for the O-RAN stack. Furthermore, we review
state-of-the-art research in wireless networks and cast it onto the RAN
framework and the hierarchy of the O-RAN architecture. We provide a taxonomy of
the challenges faced by ML/RL models throughout the development life-cycle:
from the system specification to production deployment (data acquisition, model
design, testing and management, etc.). To address the challenges, we integrate
a set of existing MLOps principles with unique characteristics when RL agents
are considered. This paper discusses a systematic pipeline for life-cycle
model development, testing and validation, termed RLOps. We discuss all fundamental
parts of RLOps, which include: model specification, development and
distillation, production environment serving, operations monitoring,
safety/security and data engineering platform. Based on these principles, we
propose the best practices for RLOps to achieve an automated and reproducible
model development process.

Comment: 17 pages, 6 figures
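The staged life-cycle described above can be sketched as a sequence of promotion gates a candidate RL policy must pass. The stage names follow the abstract; the checks themselves and the policy dictionary fields are illustrative assumptions.

```python
# Illustrative sketch (assumed) of an RLOps-style pipeline: each life-cycle
# stage is a gate the candidate RL policy must pass before promotion.

def specify(policy):
    # Specification: state and action spaces must be defined.
    return "action_space" in policy and "state_space" in policy

def develop(policy):
    # Development: a trained artifact must exist.
    return policy.get("trained", False)

def validate(policy):
    # Validation: e.g. evaluated return in a simulated RAN environment
    # must reach the target.
    return policy.get("eval_return", 0.0) >= policy.get("target_return", 0.0)

def monitor(policy):
    # Operations monitoring: e.g. no safety-constraint violations observed.
    return policy.get("safety_violations", 0) == 0

PIPELINE = [("specification", specify),
            ("development", develop),
            ("validation", validate),
            ("monitoring", monitor)]

def run_rlops(policy):
    for stage, check in PIPELINE:
        if not check(policy):
            return f"blocked at {stage}"
    return "promoted to production"

policy = {"action_space": 8, "state_space": 32, "trained": True,
          "eval_return": 1.2, "target_return": 1.0, "safety_violations": 0}
```

Encoding each stage as an explicit, automatable check is what makes the process reproducible, which is the goal the abstract states.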