Remember, in the end, automation is a marathon, not a sprint. It's an ongoing cycle that we want to insert as early as possible into the life cycle and then keep repeating throughout it to achieve greater and greater benefits. Set clear, achievable milestones. Don't try to boil the ocean; you can't eat the entire elephant in one bite. Define KPIs that are measurable; there's no point in creating quality KPIs that you cannot measure and cannot improve on. These are just standard rules for process improvement, nothing specific to GameDriver.
Choose what to automate based on measurable ROI. You don't want to spend 80% of your time automating 1% of your functionality. Test the things that will yield the greatest results first, then whittle down the remainder until you get to the more complex interactions.
When we talk about an automated cycle, we mean checking in code, executing builds, provisioning environments, deploying to those environments, running test cases, and analyzing the results. Everything feeds back into the changes being made, which lets us check in better code and continue the process.
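That feedback cycle can be sketched as a simple loop. Everything here is a hypothetical placeholder, not a GameDriver or CI-system API; in practice each stage would call out to your build farm, environment tooling, and test framework:

```python
# Hypothetical sketch of the check-in -> build -> provision -> deploy ->
# test -> analyze cycle. Each stage function is a placeholder that would
# normally call real CI/build/test tooling.

def check_in(change):   return True   # commit accepted into source control
def build(change):      return True   # build produced for the change
def provision(change):  return True   # test environment stood up
def deploy(change):     return True   # build deployed to the environment
def run_tests(change):  return True   # automated test cases executed
def analyze(change):    return True   # results reviewed and fed back

STAGES = [check_in, build, provision, deploy, run_tests, analyze]

def run_cycle(change):
    """Run one iteration of the cycle; a failing stage stops the run,
    and its result feeds back into the next round of changes."""
    for stage in STAGES:
        if not stage(change):
            return f"failed at {stage.__name__}"
    return "cycle complete"

print(run_cycle("fix: player spawn point"))  # -> cycle complete
```

The point of the loop shape is that the cycle is meant to repeat: each analysis result informs the next check-in.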
Some of the common test automation use cases you might be wondering about include build verification testing, where teams these days are looking to support multiple platforms with the same project. Consider Fortnite or Mihoyo's entire universe, where they don't just support PCs and consoles, but also mobile devices. It's all the same game, and so being able to write a test once and execute it anywhere becomes a huge time saver. As I said in the beginning, a tester only has two thumbs, but an automated test can be run an infinite number of times. It's only limited by your budget and time.
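The "write once, run anywhere" idea amounts to parameterizing one test body over the target platforms. This is a minimal sketch with a stubbed-out session; the platform list and the session contents are illustrative assumptions, not a real device-lab API:

```python
# Sketch of a single build verification test executed across platforms.
# The session dict stands in for a real connection to a running build.

PLATFORMS = ["pc", "playstation", "xbox", "android", "ios"]

def build_verification_test(platform):
    """Minimal smoke test: the game boots and reaches the main menu."""
    # Stub: a real harness would launch the build on the target device.
    session = {"platform": platform, "booted": True, "scene": "MainMenu"}
    assert session["booted"], f"{platform}: game failed to boot"
    assert session["scene"] == "MainMenu", f"{platform}: wrong start scene"
    return f"{platform}: ok"

results = [build_verification_test(p) for p in PLATFORMS]
print(results)
```

One test definition, five executions; adding a platform is a one-line change to the list rather than a new manual pass.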
Second, we have platform validation: are we getting the same experience? Not just build verification that the game works on each platform, but the same consistent experience across those platforms. We test for things like performance: are we getting the same look and feel on a PlayStation 4 versus a PlayStation 5? There have been many examples where that wasn't the case.
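A performance-parity check like that often reduces to comparing a captured metric against a shared baseline with a tolerance. This is a sketch under assumed numbers; the frame-rate figures and the 10% tolerance are illustrative, not real measurements:

```python
# Sketch of platform validation: compare average frame rate per platform
# against a shared baseline. All numbers here are illustrative.

BASELINE_FPS = 60.0
TOLERANCE = 0.10  # allow 10% deviation from the baseline

measured = {"ps4": 57.5, "ps5": 60.2}  # hypothetical captured averages

def within_tolerance(fps, baseline=BASELINE_FPS, tol=TOLERANCE):
    """True if the measured value is within tol of the baseline."""
    return abs(fps - baseline) / baseline <= tol

for platform, fps in measured.items():
    status = "ok" if within_tolerance(fps) else "regression"
    print(f"{platform}: {fps:.1f} fps -> {status}")
```

The same pattern extends to load times, memory use, or any other metric where "same experience" can be made measurable.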
Finally, we have LiveOps testing. More and more games these days are moving to a seasonal model, where functionality is added and changed with every release. At its core, though, the game has basic features that don't change significantly from release to release. Those are the things we can automate, freeing up our testers to do more with the limited time they have between releases and seasons.
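One way to think about that split is as simple set arithmetic: stable core features go into the automated regression suite every season, while brand-new seasonal content stays with manual exploratory testing. The feature names here are made up for illustration:

```python
# Sketch of LiveOps test planning: automate the unchanged core,
# leave new seasonal content to manual testing. Feature names are
# illustrative, not from any real game.

core_features = {"login", "matchmaking", "inventory", "store"}
season_features = {"login", "matchmaking", "inventory", "store",
                   "winter_event"}

automated = core_features & season_features   # stable basics -> regression
manual = season_features - core_features      # new this season -> testers

print(sorted(automated))  # -> ['inventory', 'login', 'matchmaking', 'store']
print(sorted(manual))     # -> ['winter_event']
```

Each season, last season's new content that survives unchanged graduates into the automated set, so the manual workload stays focused on what's genuinely new.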
You can combine these types of tests to get much greater value than if you focused your attention on any one area. But start small, as I mentioned: begin in one area and expand from there.