TECoSA Seminar – Addressing Uncertainty in the Safety Assurance of Machine Learning
April 6, 2023, 15:00 – 16:00
We aim to bring you a TECoSA Seminar on the first Thursday of each month during term-time. For Spring 2023, the talks will be online or hybrid. All are welcome to attend, and we look forward to some lively discussions. Members can accept the invitations; non-members can email firstname.lastname@example.org to register.
Our April seminar is with Prof. Simon Burton, Scientific Director at Fraunhofer IKS. This will be a Zoom-only event, jointly organized by TECoSA and Digital Futures. The Zoom link is https://kth-se.zoom.us/j/66857695267.
ABSTRACT: There is increasing interest in the application of machine learning (ML) technologies to safety-critical cyber-physical systems, with the promise of increased levels of autonomy due to their potential for solving complex perception and planning tasks. However, demonstrating the safety of ML is seen as one of the most challenging hurdles to its widespread deployment for such applications. In this presentation I explore the factors that make the safety assurance of ML such a challenging task. In particular, I address the impact of uncertainty on the confidence in ML safety assurance arguments. I show how this uncertainty is related to complexity in the ML models as well as the inherent complexity of the tasks that they are designed to implement. Based on definitions of the categories and severity of uncertainty, as well as an exemplary assurance argument structure, I examine possible defeaters to the assurance claims and, consequently, how the assurance argument can be made more convincing. The analysis combines an understanding of insufficiencies in machine learning models, their causes and mitigating measures, with a systematic analysis of the types of asserted context, asserted evidence and asserted inference within the assurance argument.
This leads to a systematic identification of requirements on the assurance argument structure as well as supporting evidence. A combination of qualitative arguments and quantitative evidence is required to build a robust argument for safety-related properties of ML functions, one that is continuously refined to reduce residual and emerging uncertainties in the arguments after the function has been deployed.
The presentation ends with an outlook on both developments in the standardisation of the safety of AI/ML, in particular ISO PAS 8800 Road Vehicles – Safety and AI, and open research topics.
BIO: Prof. Dr. Simon Burton graduated in computer science from the University of York, where he also earned his PhD on the verification of safety-critical software in 2001. Simon has a background in a number of industries but has spent the last two decades mainly focusing on the automotive sector, working in research and development projects as well as leading consulting, engineering service and product organisations. Most recently, he held the role of Director of Vehicle Systems Safety at Robert Bosch GmbH where, amongst other things, his efforts were focused on developing strategies for ensuring the safety of automated driving systems.
In September 2020, he joined Fraunhofer IKS in the role of Scientific Director, where he steers research strategy into "safe intelligence". His personal research interests include the safety assurance of complex, autonomous systems and the safety of machine learning. In addition to his role at Fraunhofer IKS, he is an honorary visiting professor at the University of York, where he supports a number of research activities and interdisciplinary collaborations. He is also an active member of various standardisation committees and is the convenor of the ISO working group ISO/TC 22/SC 32/WG 14, responsible for developing an international standard on safety and AI for road vehicles.