Workshop on Safety Verification, 22 Nov 2024, Tutorial Room 21, The Arc
Date: Friday, 22 Nov
Time: 2 pm to 6 pm
Venue: Tutorial Room 21, The Arc, https://maps.ntu.edu.sg/#/ntu/d386ffa80e4e46f286d17f08/poi/details/c67f6b1cb45c48ffa57dfbf3
Talk Schedule:
2:00 – 2:40
Speaker: Diptarko Roy, Research Fellow, University of Oxford & University of Birmingham
Title: Stochastic Omega-Regular Verification and Control with Supermartingales
Abstract: We present a supermartingale certificate for omega-regular specifications. We use the Robbins & Siegmund convergence theorem to define supermartingale certificates for almost-sure acceptance of a Streett condition defined on a general state-space stochastic process, which we call Streett supermartingales. This enables verification and control of discrete-time stochastic dynamical models with infinite state-space under omega-regular and linear temporal logic specifications. Our proof rule generalises prior supermartingale proof rules for (almost-sure) reachability, safety, reach-avoidance, persistence and recurrence specifications, while remaining applicable to discrete-time stochastic dynamical models and probabilistic programs with discrete (countable) and continuous (uncountably infinite) state spaces and distributions. We provide a sound and relatively complete synthesis algorithm that reduces the synthesis of control parameters and Streett supermartingale proof certificates (for omega-regular objectives) to satisfiability modulo theories, for linear stochastic systems and templates, and illustrate a prototype implementation using a number of examples.
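For intuition, a minimal special case of such certificates (which the Streett construction above generalises) is the classic ranking-supermartingale condition for almost-sure reachability; the sketch below is illustrative and is not the talk's own definition.

% Ranking supermartingale for almost-sure reachability of a target set T:
% X_t is a discrete-time stochastic process on state space S,
% V : S -> R_{>=0} is the certificate, and epsilon > 0 a fixed decrease.
\[
  \mathbb{E}\big[\, V(X_{t+1}) \mid X_t = x \,\big] \;\le\; V(x) - \varepsilon
  \qquad \text{for all } x \in S \setminus T .
\]
% If such a V exists, X_t reaches T almost surely: on runs avoiding T
% forever, V(X_t) + \varepsilon t would be a nonnegative supermartingale
% forced below zero, contradicting supermartingale convergence.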
2:40 – 3:20
Speaker: Djordje Zikelic, Assistant Professor, Singapore Management University
Title: Neural Controller Synthesis and Verification with Guarantees
Abstract: Learning-based methods such as reinforcement learning are receiving increasing attention for solving challenging control tasks. However, the lack of safety assurances about learned controllers poses a significant barrier to their practical deployment. In this talk, we will present a framework for learning and/or formally verifying neural controllers. The framework is applicable to stochastic dynamical systems, thus also taking into account environment uncertainty. Given a property and a probability bound, the framework jointly learns and formally verifies a controller together with a formal certificate of the property being satisfied with at least the specified probability, both parametrized as neural networks. Certificates are martingale-like objects that can be effectively used to formally reason about stochastic systems in a fully automated fashion. We will show how the framework can be applied to solve reachability, safety, reach-avoidance and stability tasks, and present a compositional extension that allows us to reason about a more expressive logic of probabilistic properties.
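As a rough sketch of the learner-verifier loop described above (all function and parameter names here are hypothetical, not the framework's actual API):

import torch

def certificate_loss(cert, controller, dynamics, states, eps=0.01, samples=16):
    # Penalise violations of the expected-decrease condition
    # E[V(x')] <= V(x) - eps, with E[.] estimated by Monte Carlo
    # over sampled stochastic successors.
    succ_values = torch.stack([
        cert(dynamics(states, controller(states)))  # one successor batch
        for _ in range(samples)
    ])
    exp_next = succ_values.mean(dim=0)
    return torch.relu(exp_next - cert(states) + eps).mean()

def learn_verified_controller(controller, cert, dynamics, sample_states,
                              formal_verifier, steps=10000, lr=1e-3):
    # Jointly train the controller and certificate networks, then attempt
    # a formal check (e.g. via bound propagation over the networks).
    opt = torch.optim.Adam(
        list(controller.parameters()) + list(cert.parameters()), lr=lr)
    for step in range(steps):
        loss = certificate_loss(cert, controller, dynamics, sample_states())
        opt.zero_grad(); loss.backward(); opt.step()
        if step % 500 == 0 and formal_verifier(controller, cert):
            return controller, cert  # certificate formally verified
    raise RuntimeError("training ended without a verified certificate")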
3:20 – 4:00 BREAK
4:00 – 4:40
Speaker: Peixin Wang, Research Fellow, Nanyang Technological University
Title: Unifying Qualitative and Quantitative Safety Verification of DNN-Controlled Systems
Abstract: The rapid advance of deep reinforcement learning techniques enables the control of safety-critical systems by Deep Neural Networks (DNNs). This underscores the pressing need to promptly establish certified safety guarantees for such DNN-controlled systems. Most existing verification approaches are qualitative, predominantly employing reachability analysis. However, qualitative verification proves inadequate for DNN-controlled systems, whose behaviors exhibit stochastic tendencies when operating in open and adversarial environments. In this talk, we propose a novel framework unifying the qualitative and quantitative safety verification problems of DNN-controlled systems. This is achieved by formulating the verification tasks as the synthesis of valid neural barrier certificates (NBCs). Initially, the framework seeks to establish almost-sure safety guarantees through qualitative verification. In cases where qualitative verification fails, our quantitative verification method is invoked, yielding precise lower and upper bounds on probabilistic safety across both infinite and finite time horizons. To facilitate the synthesis of NBCs, we introduce their k-inductive variants. We also devise a simulation-guided approach for training NBCs, aiming to achieve tightness in computing precise certified lower and upper bounds. We prototype our approach in a tool called UniQQ and showcase its efficacy on four classic DNN-controlled systems.
If time allows, I will introduce our ongoing work, which aims to extend barrier-like functions to quantitative analysis of LTL or omega-regular properties in stochastic DNN-controlled systems.
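As background, the kind of quantitative bound a barrier certificate yields can be stated in a textbook special case as follows; this is for intuition only and is simpler than the k-inductive NBCs above.

% Probabilistic safety via a barrier certificate (Ville's inequality):
% B : S -> R_{>=0} is a nonnegative supermartingale for the system,
% i.e. E[B(X_{t+1}) | X_t = x] <= B(x) for all x, and B(x) >= 1 on
% the unsafe set U. Then, from any initial state x_0,
\[
  \Pr\big[\, \exists t \ge 0 .\ X_t \in U \,\big] \;\le\; B(x_0),
\]
% so 1 - B(x_0) is a certified lower bound on infinite-horizon safety.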
4:40 – 5:20
Speaker: Dominik Wagner, Research Fellow, Nanyang Technological University
Title: Reinforcement Learning with LTL and ω-Regular Objectives via Optimality-Preserving Translation to Average Rewards
Abstract: Linear temporal logic (LTL) and, more generally, ω-regular objectives are alternatives to the traditional discount-sum and average-reward objectives in reinforcement learning (RL), offering the advantage of greater comprehensibility and hence explainability. In this work, we study the relationship between these objectives. Our main result is that each RL problem for ω-regular objectives can be reduced to a limit-average reward problem in an optimality-preserving fashion, via (finite-memory) reward machines. Furthermore, we demonstrate the efficacy of this approach by showing that optimal policies for limit-average problems can be found asymptotically by solving a sequence of discount-sum problems approximately. Consequently, we resolve an open problem: optimal policies for LTL and ω-regular objectives can be learned asymptotically.
Time permitting, I will also talk about ongoing work on zero-shot RL, where the aim is to learn a representation of the environment such that optimal policies for a class of ω-regular objectives can be obtained without training for specific objectives.
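For readers unfamiliar with reward machines, the toy sketch below shows the general shape of such a finite-memory device (illustrative only; the talk's optimality-preserving construction is more involved).

# A minimal reward machine: finite memory states, a transition
# function over atomic-proposition labels, and a reward per step.
class RewardMachine:
    def __init__(self, initial, delta, reward):
        self.u = initial      # current memory state
        self.delta = delta    # dict: (memory state, label) -> memory state
        self.reward = reward  # dict: (memory state, label) -> float

    def step(self, label):
        r = self.reward[(self.u, label)]
        self.u = self.delta[(self.u, label)]
        return r

# Naive single-state machine paying 1 on every step where "goal" holds.
rm = RewardMachine(
    initial=0,
    delta={(0, "goal"): 0, (0, "idle"): 0},
    reward={(0, "goal"): 1.0, (0, "idle"): 0.0},
)
# Note: this naive encoding is *not* optimality-preserving for the LTL
# objective GF goal: a run may visit "goal" infinitely often yet with
# vanishing frequency, giving limit-average reward 0. Avoiding such
# mismatches is precisely the subtlety the talk's reduction addresses.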
5:20 – 5:40
Speaker: Yong Kiam Tan, Assistant Professor, Nanyang Technological University
Title: Formally Certified Approximate Model Counting
Abstract: Approximate model counting is the task of approximating the number of solutions to an input Boolean formula. The state-of-the-art approximate model counter for formulas in conjunctive normal form (CNF), ApproxMC, provides a scalable means of obtaining model counts with probably approximately correct (PAC)-style guarantees. Nevertheless, the validity of ApproxMC’s approximation relies on a careful theoretical analysis of its randomized algorithm and the correctness of its highly optimized implementation, especially the latter’s stateful interactions with an incremental CNF satisfiability solver capable of natively handling parity (XOR) constraints. We present the first certification framework for approximate model counting with formally verified guarantees on the quality of its output approximation. Our approach combines: (i) a static, once-off, formal proof of the algorithm’s PAC guarantee in the Isabelle/HOL proof assistant; and (ii) dynamic, per-run, verification of ApproxMC’s calls to an external CNF-XOR solver using proof certificates. We detail our general approach to establish a rigorous connection between these two parts of the verification, including our blueprint for turning the formalized, randomized algorithm into a verified proof checker, and our design of proof certificates for both ApproxMC and its internal CNF-XOR solving steps. Experimentally, we show that certificate generation adds little overhead to an approximate counter implementation, and that our certificate checker is able to fully certify 84.7% of instances with generated certificates when given the same time and memory limits as the counter.
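For orientation, the PAC-style guarantee being certified has the standard (epsilon, delta) form below; this restates the usual approximate-counting contract rather than quoting the paper.

% (epsilon, delta) guarantee: for an input CNF formula F with solution
% set sol(F), the counter returns a count c such that
\[
  \Pr\!\left[\; \frac{\lvert \mathrm{sol}(F) \rvert}{1+\varepsilon}
  \,\le\, c \,\le\, (1+\varepsilon)\,\lvert \mathrm{sol}(F) \rvert \;\right]
  \;\ge\; 1 - \delta ,
\]
% where epsilon is the tolerance and delta the confidence parameter.
% The once-off Isabelle/HOL proof covers the algorithm's analysis,
% while the per-run certificates check the external solver calls.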