Challenges and goals
Edge computing opens an opportunity for real-time processing of large amounts of data and real-time decision making in a wide range of systems, including multi-agent and human-in-the-loop systems. Typical applications of edge computing are associated with a high degree of uncertainty, not only due to the complexity of the application scenario itself, but also because machine learning algorithms (which are otherwise well suited to implementation at the edge) do not come with performance guarantees.
Applying worst-case design principles in such settings is not a viable approach. We need:
- new techniques to analyze safety in systems that use machine learning as one of their key computational concepts
- new safety architectures, safety monitors and risk and reliability models to enhance safety at runtime
- new techniques to enhance safety in human-in-the-loop systems

Furthermore, we need to deploy safety assurance models to evaluate the overall safety achieved through the three above-mentioned approaches.
Tasks and Methodologies
The project will focus on three tightly coupled objectives.
The first objective of the project focuses on robustifying machine learning algorithms. Domain-specific classification through supervised machine learning allows features to be detected more reliably and reduces the likelihood of false positives and negatives. We will develop novel methods and domain-specific modeling languages that allow engineers to declaratively express probabilistic models: to state what the model means, without specifying how it will be checked or executed.
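To illustrate the separation of declaration from execution, the sketch below states a probabilistic model as plain data (priors and likelihoods for a hypothetical obstacle detector) and leaves the inference strategy entirely to a generic routine. All names and probabilities here are invented for illustration; a real domain-specific language would provide richer model classes and compilation to efficient inference.

```python
import math

# Hypothetical declarative model spec: it states *what* the probabilistic
# relationships are, not *how* they are evaluated.
MODEL = {
    "classes": {"obstacle": 0.3, "clear": 0.7},          # prior P(class)
    "likelihoods": {                                     # P(feature=True | class)
        "lidar_hit":   {"obstacle": 0.9, "clear": 0.1},
        "camera_blob": {"obstacle": 0.8, "clear": 0.2},
    },
}

def classify(model, observations):
    """Generic inference: naive-Bayes posterior over the declared classes."""
    scores = {}
    for cls, prior in model["classes"].items():
        log_p = math.log(prior)
        for feature, value in observations.items():
            p = model["likelihoods"][feature][cls]
            log_p += math.log(p if value else 1.0 - p)
        scores[cls] = log_p
    total = sum(math.exp(s) for s in scores.values())
    return {cls: math.exp(s) / total for cls, s in scores.items()}

posterior = classify(MODEL, {"lidar_hit": True, "camera_blob": True})
```

Because the model is declarative, the same specification could be handed to a different backend, e.g. exact enumeration, sampling, or compilation to edge hardware, without changing what the engineer wrote.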
Furthermore, we will focus on reinforcement learning, one of the most promising approaches to decision making under uncertainty, in a safety-critical context. We intend to overcome the challenge of ensuring safe exploration in the physical world by using correct-by-synthesis methods based on probabilistic model checking to suppress decisions that could lead to unsafe states. The research challenges include both theoretical aspects (learning methods and the semantics of modeling languages) and practical aspects (efficient compilation and runtime systems).
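The suppression of unsafe decisions can be sketched as a shield sitting between the learning policy and the environment. In the toy example below, the table of allowed actions is hand-written; in the envisioned approach it would be synthesized by a probabilistic model checker from a formal model of the environment. The grid world, state numbering, and unsafe set are illustrative assumptions.

```python
import random

# Toy 1-D grid world: states 0..9, where state 9 is unsafe (e.g., a collision zone).
UNSAFE = {9}
ACTIONS = {"left": -1, "right": +1}

def safe_actions(state):
    """Stand-in for a synthesized shield: allow only actions whose
    successor state is safe. Real shields would be computed offline
    by probabilistic model checking over an MDP model."""
    return [a for a, d in ACTIONS.items()
            if min(max(state + d, 0), 9) not in UNSAFE]

def shielded_step(state, policy):
    """Let the learning policy propose an action, but override it
    whenever the shield predicts an unsafe successor state."""
    allowed = safe_actions(state)
    proposed = policy(state)
    action = proposed if proposed in allowed else random.choice(allowed)
    return min(max(state + ACTIONS[action], 0), 9), action

# Even a purely random exploration policy never enters the unsafe state.
state = 5
for _ in range(100):
    state, _ = shielded_step(state, lambda s: random.choice(list(ACTIONS)))
```

The key property is that safety is enforced independently of what the learner proposes, so exploration remains free within the safe region while unsafe excursions are impossible by construction.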
The second objective relates to schemes by which safety-critical applications based on machine learning can be externally monitored to introduce further safety-enhancing features. We will develop safety monitors for edge-based systems and applications that can reason about certain safety properties of the system and, if critical behavior is detected, potentially throttle the system down. Such monitoring and risk-reducing approaches must be accompanied by safety architectures that control the system's modes of operation, taking the edge and its context into account for proper error handling, including graceful degradation. Besides devising these mechanisms, we aim to validate their fault tolerance, e.g., through systematic fault injection in simulations. This objective will leverage system modeling, including fault models (the so-called fault hypothesis) and structural and behavioral models of reliability.
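A minimal sketch of such a monitor is shown below, assuming a hypothetical distance invariant and three modes of operation (nominal, degraded, safe stop). The thresholds, mode names, and latching behavior are illustrative choices, not a prescribed design.

```python
from enum import Enum

class Mode(Enum):
    NOMINAL = "nominal"
    DEGRADED = "degraded"     # graceful degradation: reduced speed
    SAFE_STOP = "safe_stop"   # latched: requires manual reset

class SafetyMonitor:
    """Hypothetical runtime monitor: checks a distance invariant on every
    sensor sample and throttles the controlled system when it is violated."""
    def __init__(self, warn_m=2.0, critical_m=0.5):
        self.warn_m, self.critical_m = warn_m, critical_m
        self.mode = Mode.NOMINAL

    def check(self, obstacle_distance_m):
        if obstacle_distance_m < self.critical_m:
            self.mode = Mode.SAFE_STOP                  # latch the stop
        elif obstacle_distance_m < self.warn_m and self.mode is not Mode.SAFE_STOP:
            self.mode = Mode.DEGRADED
        elif self.mode is Mode.DEGRADED:
            self.mode = Mode.NOMINAL                    # recover when clear
        return self.mode

    def speed_limit(self, nominal_mps):
        """The mode determines how hard the system is throttled."""
        return {Mode.NOMINAL: nominal_mps,
                Mode.DEGRADED: nominal_mps * 0.25,
                Mode.SAFE_STOP: 0.0}[self.mode]

monitor = SafetyMonitor()
monitor.check(3.0)                # stays NOMINAL
monitor.check(1.0)                # enters DEGRADED
print(monitor.speed_limit(2.0))   # 0.5
```

Fault-injection validation, as mentioned above, would then amount to feeding the monitor corrupted or delayed sensor samples in simulation and checking that the mode transitions still uphold the safety property.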
Finally, the third objective comprises investigating the relationship between safety and system usability, in particular when the safety-critical context includes interaction with humans. We are interested in the relationship between providing safety-related feedback and the human perception of the system. Safe systems are not necessarily perceived as safe, whereas unsafe systems might nevertheless be perceived as safe, depending on the type, form, and structure of the perceptual feedback provided. Understanding these trade-offs is vital to good system design.
In addition, with respect to human-robot collaboration, safety can be drastically increased by real-time detection of human actions and recognition of human intentions. However, capturing multimodal signals from the human that can feed such representations is far from trivial. We are interested in the effectiveness of different approaches and in how they can increase safety and usability in edge systems and applications.
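One common way to combine such multimodal signals is late fusion: each modality produces its own distribution over intentions, and the distributions are merged with weights reflecting per-modality reliability. The modalities, intention labels, probabilities, and weights below are all invented for illustration; they stand in for the outputs of real per-modality recognizers.

```python
# Hypothetical late-fusion sketch for intention recognition.
INTENTIONS = ["handover", "retreat", "idle"]

def fuse(posteriors, weights):
    """Weighted log-linear fusion of per-modality distributions:
    each modality's posterior is raised to its reliability weight,
    multiplied across modalities, and renormalized."""
    fused = {}
    for intent in INTENTIONS:
        score = 1.0
        for modality, dist in posteriors.items():
            score *= dist[intent] ** weights[modality]
        fused[intent] = score
    total = sum(fused.values())
    return {i: s / total for i, s in fused.items()}

# Assumed outputs of two per-modality recognizers (gaze and gesture).
posteriors = {
    "gaze":    {"handover": 0.6, "retreat": 0.1, "idle": 0.3},
    "gesture": {"handover": 0.7, "retreat": 0.2, "idle": 0.1},
}
weights = {"gaze": 0.4, "gesture": 0.6}  # gesture tracker assumed more reliable

print(fuse(posteriors, weights))
```

A design question this raises, directly relevant to the objective above, is how the fusion weights themselves should adapt when one modality degrades, e.g., gaze tracking failing under occlusion.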
Focus area manager
Formal methods, Artificial Intelligence
Focus area co-manager
Human-machine interaction, Machine learning
Focus area co-manager
Systems & safety engineering, Embedded control systems
Focus area co-manager
Programming models, Security SW Eng., Machine learning