Hi, I'm Philippa Merrill, one of the technical consultants here at Keysight. Having worked as an automation tester on a large software development team, I can offer some first-hand experience when it comes to test strategies, processes, and best practices. At Keysight my expertise lies mainly in system integration, particularly integrating Eggplant DAI with various CI/CD and test management tools. In this session today, we'll be going over continuous test automation: what it is, and how you can align your test strategies with it.
Let's start off with what continuous test automation actually means. It's a software testing methodology that makes use of automated tests at every point in the software delivery lifecycle. That might involve, for example, some unit tests after the developer has written some code, before they push it or create a pull request. Once that pull request is made, there may be some smoke tests to follow, then perhaps some more specific integration tests of the application, and regression testing before merging. The main goal is to ensure that the release candidate is constantly being assessed for quality and improvement as it moves through the delivery pipeline. This surfaces defects early and gives you time to fix them while the code is still fresh in the developer's mind.
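The lifecycle described above can be sketched as a series of gates, where each stage runs only if the one before it passed, so a defect stops the candidate as early as possible. This is a minimal illustration; the stage functions below are hypothetical stand-ins for whatever commands your toolchain actually provides:

```shell
#!/bin/sh
# Sketch: automated tests at every point in the delivery lifecycle.
# Each function stands in for a real test command; here they simply
# report success so the script is runnable as-is.
run_unit_tests()        { echo "unit tests passed"; }
run_smoke_tests()       { echo "smoke tests passed"; }
run_integration_tests() { echo "integration tests passed"; }
run_regression_tests()  { echo "regression tests passed"; }

# Fail fast: the && chain stops at the first failing stage, so the
# release candidate is only promoted if every gate passes.
run_unit_tests && \
run_smoke_tests && \
run_integration_tests && \
run_regression_tests && \
echo "release candidate promoted"
```

The same fail-fast shape applies whatever CI system runs it; most pipeline tools just express these gates as named stages rather than an `&&` chain.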
Considerations. What are some of the aspects to consider? In terms of benefits, continuous test automation allows testing to happen essentially in tandem with development: as soon as code is merged, it can be tested and near-instant feedback given. Traditionally, testing involves relatively siloed teams, whereby the devs mark an issue as merged or ready for testing, and a separate QA team conducts the tests. Continuous testing, on the other hand, not only speeds up the delivery of code but gives teams a common focus on quality and promotes greater communication between QA and development teams. Further, companies are increasing the breadth of their software, which means shipping things like mobile and desktop apps, cloud services, and microservices. If these can be tested as soon as they're built, issues are highlighted and fixed sooner, without a rush at the end of a release cycle to get things fixed.

In terms of drawbacks, there are some things to be aware of, such as changing mindsets. Development and QA are often very distinct teams, and it may require a cultural shift to get them working together. You must also consider which test packs to run, at which times, and how long they take to run. A few smoke tests might be sufficient for most merges, while a three-hour regression pack may not be, as it would cause bottlenecks in the pipeline.
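One common way to handle that timing question is to pick the test pack based on what triggered the run: a quick smoke pack on every merge, and the long regression pack only on a schedule such as nightly. A hedged sketch follows; the trigger values and pack names are assumptions, not any particular CI system's API:

```shell
#!/bin/sh
# Choose a test pack by trigger type, so a three-hour regression
# pack never blocks an ordinary merge. The trigger string is a
# placeholder for whatever your CI system exposes.
select_pack() {
  case "$1" in
    merge)   echo "smoke pack" ;;           # fast gate on every merge
    nightly) echo "full regression pack" ;; # long pack off the critical path
    *)       echo "unknown trigger: $1" >&2; return 1 ;;
  esac
}

select_pack "${CI_EVENT:-merge}"
```

The design point is simply that pack selection is a pipeline decision, not a property of the tests themselves, so the same suites can serve both fast and thorough runs.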
For this demo, I'm going to show you how to trigger a DAI test case using Jenkins, and have the results of that test case posted to a test management tool in Jira. We're going to kick off the "Test 20" test case associated with this regression test model, and have the results posted to Xray in Jira. The name and external ID of this test case match the Xray issue key, ID, and the test plan that it's a part of. For this, I'll be making use of two executables we have: the DAI runner and the DAI service. Over here, I have a Jenkins instance. Now, this is running locally on the machine, but it could be running from any server you like, whether that's a cloud server hosted by AWS or Azure, or an internal server. As long as the server running Jenkins is able to access your Eggplant DAI instance, whether through the same network or through port forwarding, you can use this method. If I show you the configuration before I run it, you'll see that, first of all, I have a pipeline to build my code. Only if that build is successful do I trigger the pipeline that runs my test with DAI. To trigger the run, I'm making use of an executable called the DAI runner. This links into the Eggplant DAI API and triggers your model, test case, or script run using your JSON parameters. It can be called from anywhere that is capable of running a command-line script.
This could be something like GitHub Actions or Azure DevOps pipelines, for example. Here you can see I'm simply executing a shell command to start the DAI runner, passing in the various information required by the API, such as the DAI instance, username, model, group, AI agent, and test case. Also, if you're running a high volume of tests, you need a stable test environment that can handle the load.
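A shell step along these lines captures the shape of the call. To be clear, the binary name, flag names, URL, and values below are illustrative placeholders, not the DAI runner's actual CLI surface; check your Eggplant DAI documentation for the real parameter names. The script only assembles and prints the command (a dry run), so it executes without a DAI instance:

```shell
#!/bin/sh
# Dry-run sketch of a CI shell step that triggers a DAI test run.
# Every name here (binary, flags, URL, model, test case) is a
# hypothetical placeholder for the values your instance requires;
# the real runner takes its parameters as JSON, per the talk.
DAI_URL="https://dai.example.com"
DAI_USER="ci-bot"
MODEL="Regression"
TEST_CASE="Test 20"

CMD="./dai-runner --url $DAI_URL --user $DAI_USER \
--model $MODEL --test-case '$TEST_CASE'"

# Print instead of executing, so the sketch runs anywhere.
echo "$CMD"
```

Because the trigger is just a command line, the same step drops into a GitHub Actions `run:` step or an Azure DevOps script task unchanged.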