The explosion in the performance of Machine Learning (ML) and the potential
of its applications strongly encourage its use in industrial systems,
including for critical functions such as decision-making in
autonomous systems. While the AI community is well aware of the need to ensure
the trustworthiness of AI-based applications, it still sets aside too readily
the issue of safety and its corollary, regulation and standards, without which
no level of safety can be certified, whether the systems are mildly or highly
critical. The process of developing and qualifying
safety-critical software and systems in regulated industries such as aerospace,
nuclear power stations, railways, or the automotive industry has long been
rationalized and mastered. These industries rely on well-defined standards,
regulatory frameworks and processes, as well as formal techniques, to assess
and demonstrate the quality and safety of the systems and software they develop.
However, the low level of formalization of specifications and the uncertainties
and opacity of machine learning-based components make it difficult to validate
and verify them using most traditional critical systems engineering methods.
This raises the question of qualification standards, and therefore of
regulations adapted to AI. With the AI Act, the European Commission has laid
the foundations for moving forward and building solid approaches to the
integration of AI-based applications that are safe and trustworthy and that
respect European ethical values. The question then becomes: "How can we rise
to the challenge of certification and propose methods and tools for trusted
artificial intelligence?"