1st Call for Papers: March 11, 2024
Paper submission: May 9, 2024 AoE (extended from May 2, 2024)
Author notification: May 23, 2024 AoE (extended from May 16, 2024)
Workshop: July 22-23, 2024
Deep learning has become state-of-the-art for many human-like tasks, such as computer vision or translation.
Nevertheless, the perception persists that deep neural networks cannot be applied to computer-aided verification tasks because of the complex symbolic reasoning involved.
Recently, this perception has started to shift: major advances in architectural design have enabled the successful application of deep neural networks to various formal reasoning and automatic verification tasks, including SAT and QBF solving, higher-order theorem proving, LTL satisfiability and synthesis, symbolic differentiation, autoformalization, and termination analysis.
The workshop on Deep Learning-aided Verification (DAV) aims to cover this emerging research area in all its facets.
It covers recent highlights and upcoming ideas at the intersection of computer-aided verification and deep learning research. The workshop provides a platform to bring together industry and academic researchers from both communities, to attract and motivate young talent, and to raise awareness of new technologies.
Computer-aided verification research will benefit from developing hybrid algorithms that combine the best of both worlds (efficiency and correctness), and machine learning researchers will gain novel application domains to study architectures and a model's generalization and reasoning capabilities.
Topics of interest include, but are not limited to:
Note that the workshop focuses on how to use deep learning in verification, not on verifying neural networks.
The workshop is a two-day event consisting of a mix of invited and contributed talks as well as tutorials from both research communities. In particular, we will encourage the exchange of ideas to form novel research directions and collaborations that are interesting to both communities, including work on common challenges such as acquiring large amounts of symbolic training data or developing architectural designs that ensure reliable reasoning.
Alex Sanchez-Stern — University of Massachusetts Amherst, MA, USA
Aishwarya Sivaraman — Meta Platforms, Inc., USA
Tobias Hecking — German Aerospace Center (DLR), Institute for Software Technology, Cologne, Germany
Alexander Weinert — German Aerospace Center (DLR), Institute for Software Technology, Cologne, Germany