How do we put it all together? We've talked about a lot of different concepts today, but it's really important to know how to take this from concept to reality, making sure that you can automate your project and get a positive ROI from that experience. We see a fairly linear progression in our customers and our users, and it's basically the same progression we've seen in automated testing for the last 25 years. First, you want to define your success criteria. Sometimes people go into this headlong and say, here are the 200 tests that we've been running every single sprint, every release, and we're going to automate those from start to finish and then see how that goes. Every time that happens, there might be some benefit to doing that, but that benefit is immediately lost when major changes come down in that application. We want to avoid that. You want to define your success criteria and approach building your automation in a strategic manner.

Let's say we have ten tests that we want to run, but all of those tests share the component of login. Instead of performing the login in each of those tests directly, we would create one login test and then reuse that login test for each of the test executions being run. We don't want to modify ten tests if the login changes; we want to modify the one component that contains the login piece. We're reducing the complexity of the implementation. Now, understanding what shared components are and how to automate them so that we can work with them more effectively can be tricky. It takes a little bit of planning, but once we've done that, the benefit is that we don't need to keep revisiting the same functionality over and over again.

Then we can begin defining test cases and identifying and satisfying the dependencies. Another pitfall that we see a lot is people rushing headlong into automation without understanding the things they depend on. Do they know their input systems? Do they know which project dependencies exist so that testers can easily reproduce them? And if they don't exist, how are they going to work with the project? Sometimes we're asked, how do we work with an Oculus or Meta Quest 2 versus a HoloLens 2, and what's the configuration? How do we set those things up? Do the testers understand those dependencies and have those things working before rushing into the automation exercise themselves?
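To make the shared-component idea concrete, here's a minimal NUnit sketch in C#. The LoginSteps class and its PerformLogin method are hypothetical placeholders for whatever login steps your project actually automates (for example, through the GameDriver client); the point is that all ten tests depend on one component, so a login change is fixed in exactly one place.

```csharp
using NUnit.Framework;

// Hypothetical shared component: every test that needs a logged-in user
// calls this one helper instead of repeating the login steps.
public static class LoginSteps
{
    public static void PerformLogin(string user, string password)
    {
        // In a real project this would drive the UI (e.g. via the GameDriver
        // client); stubbed here to keep the sketch self-contained.
        TestContext.Progress.WriteLine($"Logging in as {user}");
    }
}

[TestFixture]
public class InventoryTests
{
    [SetUp]
    public void GivenALoggedInUser() => LoginSteps.PerformLogin("testUser", "testPass");

    [Test]
    public void CanOpenInventory()
    {
        // The test body exercises inventory behavior; login is reused, not duplicated.
        Assert.Pass("Inventory opened for a logged-in user.");
    }
}
```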

Next, start with basic tests. Don't try to boil the ocean. You want to start with basic tests doing basic things that we will then grow and add additional features on top of. I mentioned the login piece; that's an important one because login is in almost every application. How do we satisfy the login requirement in a way that can be reused by our test automation, so that we don't need to do it again? Then we can add additional layers on top of that. How do we register a new user? How do we register a user with different settings? How do we data-drive that so we can touch all of the permutations of our character creation customization, or something like that? It all starts from a very basic test. Then we can get into things like inputs and interactions in the UI: what are the distinct pieces of functionality that we want to test before we start putting things together? Start small, build simple tests, and then you can start to combine those things using some of the helper methods that I demonstrated in both the 3D test and, briefly, in the XR one.

Finally, start running your tests in the editor. One thing that we see often is that new teams, or teams that are starting out, will say, well, we want to test on our Android devices, or Android and iOS, or we want to test on consoles. That's great, and you'll get there, but don't start there. Start by running your tests in the editor, making sure that you can run them consistently and repeatedly, and that your components work together from one user to the next. It's great that I can run tests, but can my colleagues run tests using the same assets in the same project? Can I run those tests from the automated CI system? Then we can start to bring in more complexity like mobile devices. When we go into mobile, for example, we want to test: can I run the tests on a mobile device connected locally? Then can I run them on a mobile device connected remotely, on an Appium server or hosted in the cloud? Don't start with the cloud option, because you're going to spend a lot of cycles getting there when you might not even have consistency in the test execution itself. So start in the editor. Start small.
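Picking up the data-driving question above, here's one way to touch several permutations from a single basic test using NUnit's TestCase attribute. The user names, character classes, and the CanRegisterUserWithSettings body are illustrative assumptions, not a prescription for any particular project.

```csharp
using NUnit.Framework;

[TestFixture]
public class RegistrationTests
{
    // Hypothetical data-driven cases: each row registers a user with a
    // different settings permutation, reusing the same basic test body.
    [TestCase("newUser01", "Warrior", true)]
    [TestCase("newUser02", "Mage", false)]
    [TestCase("newUser03", "Rogue", true)]
    public void CanRegisterUserWithSettings(string userName, string characterClass, bool tutorialEnabled)
    {
        // The registration steps themselves would call into shared components
        // (login, account creation, character customization).
        Assert.That(userName, Is.Not.Empty);
        Assert.That(characterClass, Is.Not.Empty);
        TestContext.Progress.WriteLine(
            $"Registered {userName} as {characterClass}, tutorial={tutorialEnabled}");
    }
}
```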

And then you can move on to testing on real devices, wiring up automated test execution from the CI system and the command line, taking the tests that you run in the editor or from Visual Studio, like I demonstrated today, and moving those over to Jenkins, CircleCI, or an equivalent build system in order to run them headless. Then, are you getting the right information back from those tests? We talked about some of the logging, some of the assertions, and how those are handled, but each of these test frameworks, NUnit and SpecFlow, comes with a reporting component that has plug-ins for the presentation of those results. How are those wired up into your build system so that you can see which tests are passing or failing? And are we getting the right information back when they're failing, such as screenshots?

Next, you want to start creating test libraries: reusable assets that your testers can share amongst themselves and build more complex scenarios on top of. We started small, using basic tests and inputs and interactions. How can we combine those things to create much more complex scenarios? That's through the use of test libraries. Instead of having a test definition that says test order zero, we might say: here's a piece of functionality that handles login, here's another piece of test functionality that handles account creation, here's another one that handles character customization. Now we're going to combine those things in a test and pass in parameters for the different types of positive, negative, or even destructive test cases that we want to run. And then we're going to abstract the implementation details. I talked a bit about SpecFlow and Cucumber, and there are also custom frameworks, where you can hide all of the complexity of your GameDriver test beneath a simple-to-understand, easy-to-read, easy-to-implement type of test where even your non-technical testers can assemble tests using plain English. That's not unique to GameDriver; it's standard in the industry, but it's something that you can very much take advantage of using GameDriver.

And as I mentioned, automate execution from the CI, moving all of this into an automated execution flow. We don't want to just create tests. We want to create tests that run themselves, tests that run automatically, tests that the developer can run to verify that their changes didn't break anything. You might even put in certain quality gates that say: if these tests don't pass, if these core pieces of functionality don't pass, then the build fails and it goes back to the developers. Now, you might say that sounds a bit extreme, and in some cases it is, but that's exactly when we want the feedback to go to the developers. When a change breaks a test, we want to make sure that's caught early and that we're getting the feedback to the developer so we can fix it before they move on to something else and forget what they were working on.
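As a sketch of that plain-English layer, here's roughly what a SpecFlow binding could look like, assuming SpecFlow's standard [Binding]/[Given]/[Then] attributes and reusing the hypothetical LoginSteps helper from the earlier sketch; the scenario text and step wording are made up for illustration.

```csharp
using TechTalk.SpecFlow;

// Gherkin scenario this binding would back (in a .feature file):
//   Scenario: Existing player can log in
//     Given I am logged in as "testUser"
//     Then the main menu is displayed

[Binding]
public class LoginStepDefinitions
{
    [Given(@"I am logged in as ""(.*)""")]
    public void GivenIAmLoggedInAs(string user)
    {
        // Delegates to the shared test library, so non-technical testers
        // assemble scenarios in plain English while the implementation
        // details stay hidden in one place.
        LoginSteps.PerformLogin(user, "testPass");
    }

    [Then(@"the main menu is displayed")]
    public void ThenTheMainMenuIsDisplayed()
    {
        // Hypothetical check; a real step would query the scene state.
        NUnit.Framework.Assert.Pass("Main menu visible.");
    }
}
```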

Lastly, we can start focusing on things like reporting and analytics: being able to aggregate the results across multiple test runs and multiple releases to see where the hotspots are, where tests are flaky, and where things are failing a lot. How can we improve those tests? A lot of time gets spent on the creation of tests, but not enough time gets spent looking back at those tests and saying, maybe we don't need a thousand tests; maybe we need a few hundred tests with a different focus, or that run in a certain way that reduces the amount of time it takes to execute. I've been in situations where the functional automated tests would take 8 hours to run after a build, and that's not a situation you want to be in. You want your tests to execute quickly, touch the functionality that's important, and cover the most without wasting any time. So reducing that waste and repetition is key. You might find that over time you're testing the same thing a bunch of times in different places in different ways. Then you can start to expand the scope of testing. What things can we test that maybe we weren't testing before? Can we look for boundary collisions? Can we test that the map doesn't have any holes in it? Are the enemies behaving the way that they should? Those types of things, and very complex scenarios, come after all of this, after we've dealt with the basics, the things that are tying us up and eating up most of our time.
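As a rough illustration of that aggregation step, here's a small C# sketch that scans a folder of NUnit result files, one per run, and flags tests that flip between passing and failing across runs. The folder name and the reliance on NUnit3-style XML (test-case elements with fullname and result attributes) are assumptions, not part of any particular reporting tool.

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

// Sketch: find likely flaky tests by comparing outcomes across several
// NUnit result files collected from different runs.
public static class FlakyTestReport
{
    public static void Main(string[] args)
    {
        string resultsDir = args.Length > 0 ? args[0] : "TestResults";

        var outcomes = System.IO.Directory
            .EnumerateFiles(resultsDir, "*.xml")
            .SelectMany(file => XDocument.Load(file).Descendants("test-case"))
            .Select(tc => new
            {
                Name = (string)tc.Attribute("fullname"),
                Result = (string)tc.Attribute("result")
            });

        // A test with more than one distinct result across runs is a flaky candidate.
        var flaky = outcomes
            .GroupBy(o => o.Name)
            .Where(g => g.Select(o => o.Result).Distinct().Count() > 1);

        foreach (var test in flaky)
            Console.WriteLine($"Flaky across runs: {test.Key}");
    }
}
```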
