A Systematic Review of Formal Verification Models for Safety of Autonomous AI Agents: Methods, Architectures, and Future Research Directions

A. G. Lewis
B. Horváth
R. Costa

Abstract

Autonomous AI agents are increasingly deployed in safety-critical domains such as autonomous driving, robotics, healthcare, and defense systems. These systems operate with minimal human intervention and make real-time decisions in complex and uncertain environments. Ensuring their safety, reliability, and correctness is therefore of paramount importance. Formal verification models provide mathematically rigorous techniques to analyze and guarantee system behavior against predefined specifications, offering stronger assurances than traditional testing and simulation approaches. This paper presents a systematic review of formal verification models applied to autonomous AI agents, focusing on verification methods, system architectures, and emerging research directions. It examines key approaches such as model checking, theorem proving, runtime verification, and hybrid verification frameworks. Additionally, the study explores architectural paradigms including agent-based systems, cyber-physical systems, and learning-enabled systems. A structured literature review of studies published between 2018 and 2023 is conducted, analyzing 30 significant contributions in the field. The review identifies trends such as the integration of formal methods with machine learning, the use of temporal logic for specifying safety properties, and the development of scalable verification techniques for multi-agent systems. Despite advancements, challenges remain in scalability, model accuracy, and verification of learning-based components. The findings highlight the need for hybrid verification approaches, improved tool support, and the incorporation of explainability and adaptability into verification frameworks. The paper concludes by outlining future research directions, including post-quantum verification, explainable AI safety, and real-time adaptive verification systems.
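As a concrete illustration of the runtime-verification style surveyed in this review, the sketch below monitors a bounded-response safety property of the form G(obstacle → F≤2 brake), i.e. "whenever an obstacle is detected, the agent must brake within two steps," over a finite trace of observed events. The event names, the bound, and the monitor itself are hypothetical examples, not taken from any specific paper in the review.

```python
def satisfies_bounded_response(trace, trigger="obstacle", response="brake", bound=2):
    """Check the safety property G(trigger -> F<=bound response) over a finite trace.

    `trace` is a sequence of sets of event names, one set per time step.
    An obligation opened by `trigger` must be discharged by `response`
    within `bound` steps; an obligation still open at trace end counts
    as a violation (a conservative finite-trace semantics).
    """
    open_since = None  # step index of the oldest undischarged trigger, if any
    for i, events in enumerate(trace):
        if trigger in events and open_since is None:
            open_since = i                     # open a new obligation
        if response in events:
            open_since = None                  # response discharges the obligation
        if open_since is not None and i - open_since >= bound:
            return False                       # deadline passed without a response
    return open_since is None                  # no obligation may remain open
```

For example, the trace `[{"obstacle"}, set(), {"brake"}]` satisfies the property (the brake arrives two steps after the obstacle), while `[{"obstacle"}, set(), set(), {"brake"}]` violates it. Such monitors check only the observed execution, which is what distinguishes runtime verification from the exhaustive state-space exploration of model checking.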

Article Details

How to Cite
Lewis, A. G., Horváth, B., & Costa, R. (2025). A Systematic Review of Formal Verification Models for Safety of Autonomous AI Agents: Methods, Architectures, and Future Research Directions. International Journal on Advanced Computer Theory and Engineering, 14(2), 99–107. Retrieved from https://journals.mriindia.com/index.php/ijacte/article/view/2094
Section
Articles
