Quality Assurance in MLOps Setting: An Industrial Perspective
Today, machine learning (ML) is widely used in industry to provide the core
functionality of production systems. However, it is practically always used in
production systems as part of a larger end-to-end software system that is made
up of several other components in addition to the ML model. Due to production
demand and time constraints, automated software engineering practices are
highly applicable. The increased use of automated ML software engineering
practices in industries such as manufacturing and utilities requires an
automated Quality Assurance (QA) approach as an integral part of ML software.
Here, QA helps reduce risk by offering an objective perspective on the software
task. Although conventional software engineering has automated tools for QA
data analysis for data-driven ML, the use of QA practices for ML in operation
(MLOps) is lacking. This paper examines the QA challenges that arise in
industrial MLOps and conceptualizes modular strategies to deal with data
integrity and Data Quality (DQ). The paper is accompanied by real industrial
use-cases from industrial partners. The paper also presents several challenges
that may serve as a basis for future studies.
Comment: Accepted at ISE 2022, part of the 29th Asia-Pacific Software Engineering Conference (APSEC 2022).
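The modular data-integrity and DQ strategies the abstract mentions could take the shape of an automated gate in the pipeline. The following is a minimal illustrative sketch, not the paper's actual approach; the check names, fields, and thresholds are all hypothetical.

```python
# Hypothetical sketch of a modular data-quality (DQ) gate for an MLOps
# pipeline; field names and bounds are illustrative, not from the paper.
from dataclasses import dataclass
from typing import List

@dataclass
class DQCheck:
    name: str
    passed: bool
    detail: str

def check_completeness(rows: List[dict], required: List[str]) -> DQCheck:
    """Flag records missing any required field (data-integrity check)."""
    missing = sum(1 for r in rows if any(r.get(k) is None for k in required))
    return DQCheck("completeness", missing == 0, f"{missing} incomplete rows")

def check_range(rows: List[dict], field: str, lo: float, hi: float) -> DQCheck:
    """Flag out-of-range sensor readings (plausibility check)."""
    bad = sum(1 for r in rows
              if r.get(field) is not None and not (lo <= r[field] <= hi))
    return DQCheck(f"range[{field}]", bad == 0, f"{bad} out-of-range values")

def dq_gate(rows: List[dict]):
    """Run all modular checks; the pipeline proceeds only if all pass."""
    checks = [
        check_completeness(rows, ["sensor_id", "temp_c"]),
        check_range(rows, "temp_c", -40.0, 125.0),
    ]
    return all(c.passed for c in checks), checks

readings = [
    {"sensor_id": "s1", "temp_c": 21.5},
    {"sensor_id": "s2", "temp_c": 300.0},   # implausible reading
    {"sensor_id": "s3", "temp_c": None},    # missing value
]
ok, report = dq_gate(readings)
```

Because each check returns a structured result rather than raising, the gate can report every violation at once, which fits the objective-perspective role the abstract assigns to QA.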
Lightweight and static verification of UML executable models
Executable models play a key role in many software development methods by facilitating the (semi)automatic implementation and execution of the software system under development. This is possible because executable models promote a complete and fine-grained specification of the system behaviour. In this context, where models are the basis of the whole development process, the quality of the models has a high impact on the final quality of the software systems derived from them. Therefore, methods to verify the correctness of executable models are crucial; otherwise, the quality of the executable models (and in turn the quality of the final system generated from them) is compromised. In this paper a lightweight and static verification method to assess the correctness of executable models is proposed. The method checks whether the operations defined as part of the behavioural model can be executed without breaking the integrity of the structural model, and returns meaningful feedback that helps repair the detected inconsistencies.
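The kind of static check described, analysing an operation's effects against the structural model without executing it, can be sketched roughly as follows. This is an illustrative simplification and not the paper's method; the association, operation, and effect representations are invented for the example.

```python
# Illustrative sketch (not the paper's actual technique): statically flag
# operations whose effects may violate a multiplicity lower bound in the
# structural model, without executing them.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Association:
    name: str
    lower: int   # minimum links each source object must keep
    upper: int

@dataclass
class Operation:
    name: str
    effects: List[Tuple[str, str]]   # ("link" | "unlink", association name)

def verify(op: Operation, assocs: List[Association]) -> List[str]:
    """Return the integrity constraints the operation *may* break."""
    by_name = {a.name: a for a in assocs}
    issues = []
    for kind, aname in op.effects:
        a = by_name[aname]
        # An unlink with no compensating link in the same operation can
        # leave an object below the association's lower bound.
        if kind == "unlink" and a.lower > 0 and ("link", aname) not in op.effects:
            issues.append(f"{op.name}: unlink on '{aname}' may break "
                          f"lower bound {a.lower}")
    return issues

worksFor = Association("worksFor", lower=1, upper=1)
fire = Operation("fire", [("unlink", "worksFor")])                       # unsafe
transfer = Operation("transfer", [("unlink", "worksFor"),
                                  ("link", "worksFor")])                 # safe
```

Here `verify(fire, [worksFor])` reports a potential violation while `verify(transfer, [worksFor])` is clean, mirroring the kind of meaningful feedback the abstract describes.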
On the Role of Primary and Secondary Assets in Adaptive Security: An Application in Smart Grids
Adaptive security aims to protect valuable assets
managed by a system, by applying a varying set of security
controls. Engineering adaptive security is not an easy task. A
set of effective security countermeasures should be identified.
These countermeasures should not only be applied to (primary)
assets that customers desire to protect, but also to other
(secondary) assets that can be exploited by attackers to harm
the primary assets. Another challenge arises when assets vary
dynamically at runtime. To accommodate these variabilities, it
is necessary to monitor changes in assets, and apply the most
appropriate countermeasures at runtime. The paper provides
three main contributions for engineering adaptive security.
First, it proposes a modeling notation to represent primary
and secondary assets, along with their variability. Second,
it describes how to use the extended models in engineering
security requirements and designing required monitoring functions.
Third, the paper illustrates our approach through a set
of adaptive security scenarios in the customer domain of a
smart grid. We suggest that modeling secondary assets aids
the deployment of countermeasures and, in combination with
a representation of asset variability, facilitates the design of
monitoring functions.
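The distinction between primary and secondary assets, and the runtime re-selection of countermeasures as assets vary, could be captured in a structure like the one below. This is a hypothetical sketch; the asset names, countermeasures, and smart-grid details are invented for illustration and are not the paper's notation.

```python
# Hypothetical sketch of primary vs. secondary assets with runtime
# countermeasure selection; all names are illustrative.
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Asset:
    name: str
    primary: bool
    protects: List[str] = field(default_factory=list)   # primary assets reachable through this asset
    countermeasures: List[str] = field(default_factory=list)

def active_countermeasures(assets_present: List[Asset]) -> Set[str]:
    """Select countermeasures for the assets currently present, including
    secondary assets an attacker could exploit to harm a primary asset."""
    selected: Set[str] = set()
    for a in assets_present:
        # A secondary asset matters only if it exposes some primary asset.
        if a.primary or a.protects:
            selected.update(a.countermeasures)
    return selected

meter_data = Asset("meter_data", primary=True, countermeasures=["encrypt"])
home_lan = Asset("home_lan", primary=False, protects=["meter_data"],
                 countermeasures=["segment_network"])

# As assets appear or disappear at runtime, re-evaluate the set:
cms = active_countermeasures([meter_data, home_lan])
```

Re-running the selection whenever monitoring detects an asset change is what makes the security adaptive: removing `home_lan` from the present set would drop its countermeasure automatically.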
Assisted assignment of automotive safety requirements
ISO 26262, a functional-safety standard, uses Automotive Safety Integrity Levels (ASILs) to assign safety requirements to automotive-system elements. System designers initially assign ASILs to system-level hazards and then allocate them to elements of the refined system architecture. Through ASIL decomposition, designers can divide a function's safety requirements among multiple components. However, in practice, manual ASIL decomposition is difficult and produces varying results. To overcome this problem, a new tool automates ASIL allocation and decomposition. It supports the system and software engineering life cycle by enabling users to efficiently allocate safety requirements regarding systematic failures in the design of critical embedded computer systems. The tool is applicable to industries with a similar concept of safety integrity levels.
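The decomposition schemes the tool automates come from ISO 26262-9, which fixes the permitted ways to split a requirement's ASIL across sufficiently independent elements. The sketch below encodes those schemes; it is a simplified illustration (it checks a single split, ignoring the independence evidence the standard also requires) and is not the described tool.

```python
# Sketch of the ASIL decomposition schemes from ISO 26262-9.
# Simplified illustration: real decomposition also requires evidence of
# sufficient independence between the two elements.

# Permitted splits: original ASIL -> allowed (part1, part2) pairs.
VALID = {
    "D": [("C", "A"), ("B", "B"), ("D", "QM")],
    "C": [("B", "A"), ("C", "QM")],
    "B": [("A", "A"), ("B", "QM")],
    "A": [("A", "QM")],
}

def can_decompose(asil: str, part1: str, part2: str) -> bool:
    """True if a requirement at `asil` may be divided between two
    independent elements carrying `part1` and `part2`."""
    pairs = VALID.get(asil, [])
    return (part1, part2) in pairs or (part2, part1) in pairs
```

For example, an ASIL D requirement may be split into two ASIL B parts, but not into two ASIL A parts, which is exactly the kind of rule that makes manual decomposition error-prone and worth automating.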