projects
-
Lifted Model Checking for Relational MDPs
An efficient formal verification framework for safety in AI.
-
Safe Reinforcement Learning via Probabilistic Logic Shield
A deep reinforcement learning framework that ensures the safety of the learning agent by applying logical constraints to the neural policy.