Autonomous Quality Assurance: Leveraging Generative AI Agents for Functional Testing of Cloud-Native Applications
Abstract
This paper examines the transformative potential of Generative AI (GenAI) agents in functional testing for cloud-native applications. Traditional quality assurance approaches, while effective, often create significant engineering overhead and scalability challenges in distributed systems. The integration of Large Language Models (LLMs) through agentic workflows presents a novel paradigm that automates test generation, maintenance, and execution across multiple testing layers. Through analysis of architectural frameworks, implementation methodologies, and organizational impacts, findings indicate substantial improvements in both efficiency metrics and operational agility. The transformation of quality assurance roles from tactical execution to strategic oversight represents a fundamental shift in how enterprise-scale quality assurance can be conducted, suggesting that GenAI-driven testing approaches offer not merely technical optimization but a strategic competitive advantage in modern software development environments. Furthermore, the cross-domain applicability of these technologies extends beyond conventional testing boundaries into adjacent quality concerns, including security validation, performance optimization, and user experience assurance. This provides a cohesive quality framework that addresses the full spectrum of cloud-native application concerns while enabling unprecedented scalability in quality operations suited to the rising complexity of distributed architectures.
Article information
Journal
Journal of Computer Science and Technology Studies
Volume (Issue)
7 (7)
Pages
747-754
Published
Copyright
Open access

This work is licensed under a Creative Commons Attribution 4.0 International License.