Quality assurance (QA) testing is crucial for software development, but conventional methods often struggle to keep pace with rapid development cycles and complex user interfaces. Many organizations still combine manual testing with script-based automation frameworks such as Selenium or Playwright. However, a significant amount of time goes into maintaining existing test automation rather than creating new tests: traditional automation is prone to breaking with UI changes, requires specialized programming skills, and may not offer complete coverage across browsers and devices. As more organizations explore AI-driven testing, these approaches are proving insufficient.
This post examines how agentic QA automation addresses these challenges. It provides a practical example using Amazon Bedrock AgentCore Browser and Amazon Nova Act to automate testing for a sample retail application.
Benefits of agentic QA testing
Agentic AI transforms QA testing from rule-based automation to intelligent, autonomous testing systems. Unlike traditional automation that follows predefined scripts, agentic AI can observe, learn, adapt, and make decisions in real time. Key advantages include autonomous test generation through UI observation and dynamic adaptation to UI changes, which significantly reduces the maintenance overhead that often consumes QA teams’ time. These systems emulate human interaction patterns, ensuring testing occurs from a genuine user perspective rather than through rigid, scripted pathways.
AgentCore Browser for large-scale agentic QA testing
To fully leverage agentic AI testing at an enterprise scale, organizations require robust infrastructure capable of supporting intelligent, autonomous testing agents. AgentCore Browser, a built-in tool of Amazon Bedrock AgentCore, fulfills this need by offering a secure, cloud-based browser environment specifically designed for AI agents to interact with websites and applications.
AgentCore Browser includes essential enterprise security features such as session isolation, built-in observability through live viewing, AWS CloudTrail logging, and session replay capabilities. Each browser instance operates within a containerized ephemeral environment and can be shut down after use, ensuring clean testing states and optimal resource management. For large-scale QA operations, AgentCore Browser can run multiple browser sessions concurrently, allowing organizations to parallelize testing across different scenarios, environments, and user journeys simultaneously.
Agentic QA with the Amazon Nova Act SDK
The infrastructure capabilities of AgentCore Browser become even more powerful when combined with an agentic SDK like Amazon Nova Act. Amazon Nova Act is an AWS service that assists developers in building, deploying, and managing fleets of reliable AI agents for automating production UI workflows. With this SDK, developers can break down complex testing workflows into smaller, reliable commands while retaining the ability to call APIs and perform direct browser manipulation as needed. This approach allows for seamless integration of Python code throughout the testing process. Developers can interleave tests, breakpoints, and assertions directly within the agentic workflow, providing enhanced control and debugging capabilities. This combination of the AgentCore Browser cloud infrastructure with the Amazon Nova Act agentic SDK creates a comprehensive testing ecosystem that redefines how organizations approach quality assurance.
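To make the interleaving concrete, the following is a minimal sketch of a single agentic check. It follows the public Nova Act SDK pattern (`NovaAct`, `act()`, `BOOL_SCHEMA`), but the URL, prompts, and function name are illustrative assumptions, not code from the sample framework:

```python
# Hypothetical sketch: interleaving small natural-language commands with
# plain Python assertions. Assumes the nova_act SDK is installed and
# configured; the starting URL and prompts are illustrative.
def smoke_test_search(start_url):
    from nova_act import NovaAct, BOOL_SCHEMA  # deferred import for this sketch

    with NovaAct(starting_page=start_url) as nova:
        # Each act() call is one small, reliable command for the agent
        nova.act("type 'coffee maker' into the search bar and submit")
        result = nova.act(
            "Are any matching products displayed?", schema=BOOL_SCHEMA
        )
        # A plain assertion gates the agentic step, so a test runner
        # such as pytest can fail fast with a clear message
        assert result.matches_schema and result.parsed_response, (
            "search returned no visible products"
        )
```

Because the assertion is ordinary Python, breakpoints and debugging tools work exactly as they would in any other test.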
Practical implementation: Retail application testing
To illustrate this transformation, consider a retail company developing a new application. A mock retail web application was created to demonstrate the agentic QA process; it is assumed to be hosted on AWS infrastructure within a private enterprise network during the development and testing phases.
To simplify test case creation, Kiro, an AI-powered coding assistant, is used to automatically generate UI test cases by analyzing the application’s codebase. Kiro examines the application structure, reviews existing test patterns, and creates comprehensive test cases following the JSON schema format required by Amazon Nova Act. By understanding the application’s features—including navigation, search, filtering, and form submissions—Kiro generates detailed test steps with actions and expected results that are immediately executable through AgentCore Browser. This AI-assisted approach significantly accelerates test creation while ensuring comprehensive coverage. The following demonstration shows Kiro generating 15 ready-to-use test cases for the QA testing demo application.
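For illustration, a generated test case might look like the following sketch. The field names here are assumptions chosen for this post's examples, not the exact schema the framework requires:

```json
{
  "test_id": "TC-003",
  "name": "Filter products by category",
  "steps": [
    {
      "action": "Select the 'Electronics' category from the navigation menu",
      "expected_result": "Only products in the Electronics category are displayed"
    },
    {
      "action": "Sort the results by price, low to high",
      "expected_result": "Products are listed in ascending price order"
    }
  ]
}
```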
After generation, the test cases are placed in the test data directory, where pytest automatically discovers and executes them. Each JSON test file functions as an independent test that pytest can run in parallel. The framework uses pytest-xdist to distribute tests across multiple worker processes, automatically utilizing available system resources for optimal performance.
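A minimal sketch of that discovery step follows. The `test_data` directory name and the `steps` field are assumptions for this sketch; the repository's actual layout may differ:

```python
# Sketch: pytest parametrizes one test over every JSON file in the test
# data directory, so each file becomes an independent test that
# pytest-xdist can schedule on any worker process.
import json
from pathlib import Path

import pytest

# Location of the generated JSON test cases (name is an assumption)
TEST_DATA_DIR = Path("test_data")

def discover_cases(data_dir):
    """Return (case_id, parsed_case) pairs, one per JSON test file."""
    return [
        (path.stem, json.loads(path.read_text()))
        for path in sorted(Path(data_dir).glob("*.json"))
    ]

_CASES = discover_cases(TEST_DATA_DIR) if TEST_DATA_DIR.exists() else []

@pytest.mark.parametrize(
    "case", [c for _, c in _CASES], ids=[cid for cid, _ in _CASES]
)
def test_json_case(case):
    # Real execution of the steps happens through the agent; here we only
    # show the discovery and parametrization wiring.
    assert case.get("steps"), "each test case must define at least one step"
```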
During execution, each test receives its own isolated AgentCore Browser session via the Amazon Nova Act SDK. The Amazon Nova Act agent reads the test steps from the JSON file and executes them, performing actions like clicking buttons or filling forms, then validating that expected results occur. This data-driven approach allows teams to create comprehensive test suites by simply writing JSON files, eliminating the need to write Python code for each test scenario. The parallel execution architecture significantly reduces testing time. Tests that would typically run sequentially can now execute simultaneously across multiple browser sessions, with pytest managing the distribution and aggregation of results. An HTML report is automatically generated using pytest-html and the pytest-html-nova-act plugin, providing test outcomes, screenshots, and execution logs for complete visibility into the testing process.
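The per-test execution can be sketched as a small runner. Here `agent` stands for a Nova Act session bound to its own AgentCore Browser instance, but any object exposing the same `act(prompt)` interface works, which also keeps the runner unit-testable; the JSON field names (`steps`, `action`, `expected_result`) are assumptions for this sketch:

```python
import json
from pathlib import Path

def run_test_case(agent, case_path):
    """Execute one JSON test case with an agent exposing act(prompt).

    In the framework described in this post, the agent would be a Nova
    Act session attached to an isolated AgentCore Browser session; this
    sketch only assumes the act() interface.
    """
    case = json.loads(Path(case_path).read_text())
    outcomes = []
    for step in case["steps"]:
        # Perform the action, then ask the agent to check the expectation
        agent.act(step["action"])
        outcomes.append(agent.act(f"Verify that: {step['expected_result']}"))
    return outcomes
```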

A powerful capability of AgentCore Browser is its ability to run multiple browser sessions concurrently, enabling true parallel test execution at scale. When pytest distributes tests across worker processes, each test initiates its own isolated browser session in the cloud. This means an entire test suite can execute simultaneously rather than waiting for each test to complete sequentially.
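Conceptually, the isolation model looks like the following sketch. The names `run_suite` and `session_factory` are hypothetical; `session_factory` stands in for opening a dedicated AgentCore Browser session, and pytest-xdist plays the role of the thread pool in the real framework:

```python
# Illustrative sketch of the parallel pattern: every test case gets its
# own isolated session, and the cases execute concurrently.
from concurrent.futures import ThreadPoolExecutor

def run_suite(cases, session_factory, max_workers=4):
    """Run each case in its own session; return {case_id: result}.

    session_factory is a callable returning a context manager whose
    value exposes run(case) -- an assumption made for this sketch.
    """
    def run_one(case_id, case):
        # A fresh session per test guarantees a clean browser state
        with session_factory() as session:
            return case_id, session.run(case)

    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(run_one, cid, case) for cid, case in cases.items()]
        return dict(f.result() for f in futures)
```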
The AWS Management Console offers complete visibility into these parallel sessions. As shown in the following video, active browser sessions can be viewed running concurrently, allowing monitoring of their status and tracking of resource utilization in real time. This observability is crucial for understanding test execution patterns and optimizing testing infrastructure.

Beyond monitoring session status, AgentCore Browser provides live view and session replay features to observe exactly what Amazon Nova Act is doing during and after test execution. For an active browser session, the live view can be opened to watch the agent interact with the application in real time—clicking buttons, filling forms, navigating pages, and validating results. When session replay is enabled, recorded events can be reviewed by replaying the session. This allows for validation of test results even after test execution is complete. These capabilities are invaluable for debugging test failures, understanding agent behavior, and building confidence in the automated testing process.
For complete deployment instructions and access to the sample retail application code, AWS CloudFormation templates, and the pytest testing framework, refer to the accompanying GitHub repository. The repository includes the necessary components to deploy and test the application in an AWS environment.
Conclusion
This post demonstrated how AgentCore Browser facilitates parallel agentic QA testing for web applications, and how an agent built with Amazon Nova Act can execute those tests with high reliability.