Search-based Software Testing Driven by Automatically Generated and Manually Defined Fitness Functions
Search-based software testing (SBST) typically relies on fitness functions to
guide the search exploration toward software failures. There are two main
techniques to define fitness functions: (a) automated fitness function
computation from the specification of the system requirements and (b) manual
fitness function design. Both techniques have advantages. The former uses
information from the system requirements to guide the search toward portions of
the input domain that are more likely to contain failures. The latter uses the
engineers' domain knowledge. We propose ATheNA, a novel SBST framework that
combines fitness functions that are automatically generated from requirements
specifications and manually defined by engineers. We design and implement
ATheNA-S, an instance of ATheNA that targets Simulink models. We evaluate
ATheNA-S by considering a large set of models and requirements from different
domains. We compare our solution with an SBST baseline tool that supports
automatically generated fitness functions, and another one that supports
manually defined fitness functions. Our results show that ATheNA-S generates
more failure-revealing test cases than the baseline tools, and that the
difference in runtime between ATheNA-S and the baseline tools is not
statistically significant. We also assess whether ATheNA-S could generate
failure-revealing test cases when applied to a large case study from the
automotive domain. Our results show that ATheNA-S successfully revealed a
requirement violation in our case study.
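ATheNA's central idea, blending an automatically derived fitness function with a manually defined one, can be illustrated with a minimal sketch. The requirement, the oscillation heuristic, and the weighted-sum combination below are hypothetical simplifications for illustration, not ATheNA's actual implementation.

```python
def auto_fitness(trace, threshold):
    """Automatically derived robustness for a hypothetical requirement
    'the signal must stay below threshold': the smallest margin to a
    violation. Lower values mean the search is closer to a failure."""
    return min(threshold - v for v in trace)

def manual_fitness(trace):
    """Hypothetical engineer-defined fitness encoding domain knowledge,
    e.g. 'failures tend to occur when the signal oscillates strongly'.
    More negative values reward stronger oscillation."""
    swings = [abs(b - a) for a, b in zip(trace, trace[1:])]
    return -max(swings) if swings else 0.0

def combined_fitness(trace, threshold, weight=0.5):
    """One simple way to blend the two signals (a weighted sum);
    the actual combination strategy is described in the ATheNA paper."""
    return weight * auto_fitness(trace, threshold) \
        + (1 - weight) * manual_fitness(trace)

# The search minimizes the combined fitness: candidate inputs that both
# approach the requirement boundary and oscillate strongly are favored.
trace = [0.1, 0.8, 0.2, 0.9, 0.3]
print(combined_fitness(trace, threshold=1.0))
```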
SAFER-HRC: Safety analysis through formal vERification in human-robot collaboration
Whereas in classic robotic applications there is a clear segregation between robots and operators, novel robotic and cyber-physical systems have evolved in size and functionality to include collaboration with human operators within common workspaces. This new application field, often referred to as Human-Robot Collaboration (HRC), raises new challenges in guaranteeing system safety due to the presence of operators. We present an innovative methodology, called SAFER-HRC, centered around our logic language TRIO and the companion bounded satisfiability checker Zot, to assess the safety risks in an HRC application. The methodology starts from a generic modular model and customizes it for the target system; it then analyses hazards according to known standards to study the safety of the collaborative environment.
Statistical Model Checking of Human-Robot Interaction Scenarios
Robots are soon going to be deployed in non-industrial environments. Before
society can take such a step, it is necessary to endow complex robotic systems
with mechanisms that make them reliable enough to operate in situations where
the human factor is predominant. This calls for the development of robotic
frameworks that can soundly guarantee that a collection of properties is
verified at all times during operation. While developing a mission plan, robots
should take into account factors such as human physiology. In this paper, we
present an example of how a robotic application that involves human interaction
can be modeled through hybrid automata, and analyzed by using statistical
model-checking. We exploit statistical techniques to determine the probability
with which some properties are verified, thus easing the state-space explosion
problem. The analysis is performed using the Uppaal tool. In addition, we used
Uppaal to run simulations that allowed us to show non-trivial time dynamics
that describe the behavior of the real system, including human-related
variables. Overall, this process allows developers to gain useful insights into
their application and to make decisions about how to improve it to balance
efficiency and user satisfaction. (In Proceedings AREA 2020, arXiv:2007.1126)
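The core of the statistical model-checking approach described above, estimating the probability that a property holds by sampling many simulation runs instead of exhaustively exploring the state space, can be sketched as follows. The toy model (task completion under accumulating operator fatigue) is a hypothetical stand-in for one trace of the paper's hybrid-automata network; the actual models are analyzed with the Uppaal tool.

```python
import random

def simulate_run(max_steps=100):
    """Toy stochastic model: on each step the task may complete, while
    human fatigue accumulates; too much fatigue means failure. Stands in
    for a single simulation trace of a hybrid-automata network."""
    fatigue = 0.0
    for _ in range(max_steps):
        fatigue += random.uniform(0.0, 0.02)  # fatigue accrues over time
        if random.random() < 0.05:            # chance the task completes now
            return True                       # property "mission succeeds" holds
        if fatigue > 1.0:
            return False                      # operator too fatigued: failure
    return False                              # timed out

def estimate_probability(n_runs=10_000):
    """Monte Carlo estimate of P(property holds): the essence of
    statistical model checking is replacing exhaustive state-space
    exploration with many independent simulations."""
    successes = sum(simulate_run() for _ in range(n_runs))
    return successes / n_runs

if __name__ == "__main__":
    p = estimate_probability()
    print(f"Estimated probability the mission succeeds: {p:.3f}")
```

More runs tighten the estimate (by standard concentration bounds), which is how tools such as Uppaal SMC trade precision for tractability.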
Mission Specification Patterns for Mobile Robots: Providing Support for Quantitative Properties
With many applications across domains as diverse as logistics, healthcare, and agriculture, service robots are in increasingly high demand. Nevertheless, the designers of these robots often struggle with specifying their tasks in a way that is both human-understandable and sufficiently precise to enable automated verification and planning of robotic missions. Recent research has addressed this problem for the functional aspects of robotic missions through the use of mission specification patterns. These patterns support the definition of robotic missions involving, for instance, the patrolling of a perimeter, the avoidance of unsafe locations within an area, or reacting to specific events. Our paper introduces a catalog of QUantitAtive RoboTic mission spEcificaTion patterns (QUARTET) that tackles the complementary and equally important challenge of specifying the reliability, performance, resource use, and other key quantitative properties of robotic missions. Identified using a methodology that included the analysis of 73 research papers published in 17 leading software engineering and robotics venues between 2014 and 2021, our 22 QUARTET patterns are defined in a tool-supported domain-specific language. As such, QUARTET enables: (i) the precise definition of quantitative robotic-mission requirements; and (ii) the translation of these requirements into probabilistic reward computation tree logic (PRCTL), and thus their formal verification and the automated planning of robotic missions. We demonstrate the applicability of QUARTET by showing that it supports the specification of over 95% of the quantitative robotic mission requirements from a systematically selected set of recent research papers, of which 75% can be automatically translated into PRCTL for the purposes of verification through model checking and mission planning.
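As an illustration of the kind of translation such patterns automate, a quantitative requirement like "the robot eventually completes its patrol with an expected accumulated energy cost of at most 10 units" could be rendered in a PRCTL-style reward notation roughly as below. This example is hypothetical and not taken from the QUARTET catalog; the catalog's own patterns and translations are defined in the paper's domain-specific language.

```latex
% Hypothetical PRCTL-style reward property (illustrative only):
% expected accumulated reward (energy) until patrol_done is at most 10
\mathcal{E}_{\leq 10}\,[\,\Diamond\, \mathit{patrol\_done}\,]
```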
A Deployment Framework for Formally Verified Human-Robot Interactions
In the future, assistive robots will spread to everyday settings and regularly interact with humans. This paper introduces a deployment approach for assistive robotic applications where human-robot interaction is the main element. The deployment infrastructure hinges on a model-to-code transformation technique and a ROS-based middleware layer and enables deployment in real life or simulation in a virtual environment. The approach fits into a model-driven framework for the formal verification of interactive scenarios. At design time, the application analyst estimates the most likely outcome of the robotic mission through Statistical Model Checking of a Stochastic Hybrid Automata network modeling the scenario. We introduce an innovative approach to convert a specific subset of Stochastic Hybrid Automata into executable code to control the robot and respond to human actions. Deploying or simulating the application allows analysts to validate the results obtained at design time or to refine the formal model based on runs in the real or the virtual scene. The methodology’s effectiveness is tested via simulation of use cases from the healthcare setting, which can significantly benefit from this kind of approach thanks to its innovative features related to human physiology and autonomous behavior.
Formally-based Model-Driven Development of Collaborative Robotic Applications
The development of Human-Robot Collaborative (HRC) systems faces many challenges. First, HRC systems should be adaptable and re-configurable to support fast production changes. However, in the development of HRC applications, safety considerations are of paramount importance, as are classical activities such as task programming and deployment. Hence, the reconfiguration and reprogramming of executing tasks might also be necessary to fulfill the desired safety requirements. Model-based software engineering is a suitable means for agile task programming and reconfiguration. We propose a model-based design-to-deployment toolchain that simplifies the routine of updating or modifying tasks. This toolchain relies on (i) UML profiles for quick model design, (ii) formal verification for an exhaustive search for unsafe situations (caused by intended or unintended human behavior) within the model, and (iii) trans-coding tools for automating the development process. The toolchain has been evaluated on a few realistic case studies. In this paper, we show a couple of them to illustrate the applicability of the approach.
Model-driven Risk Analysis for the Design of Safe Collaborative Robotic Applications
In human-robot collaboration (HRC), humans and robots share the same workspace while executing hybrid tasks. Their close proximity increases the possibility of contacts that could potentially be dangerous. Hence, physical safety and risk analysis become of utmost importance during system design. In this paper, we propose a tool-supported interactive technique that helps designers build safe HRC systems by performing iterative risk analysis and suggesting risk reduction measures (RRMs) to mitigate unsafe physical contacts.
Proceedings of the Second Workshop on Agents and Robots for reliable Engineered Autonomy
This volume contains the proceedings of the Second Workshop on Agents and
Robots for reliable Engineered Autonomy (AREA 2022), co-located with the 31st
International Joint Conference on Artificial Intelligence and the 25th European
Conference on Artificial Intelligence (IJCAI-ECAI 2022). The AREA workshop
brings together researchers from the autonomous agents, software engineering,
and robotics communities, as combining knowledge from these research areas
may lead to innovative approaches that solve complex problems related to the
verification and validation of autonomous robotic systems.