Okay. So we see there's a lot of output here, a lot of inputs that were captured by the GameDriver agent. I'm going to explain some of this, and then we're going to take a recording like this into our test class, where we'll do some magic on it and make it replayable. Now, again, the goal of the recording is not to provide a one-to-one record and replay. There are a number of reasons why that's difficult: games are often non-deterministic, and it's hard to record and replay without some modification. Our goal here is more modest: can we isolate this behavior? Can we make sure the things that were captured can be performed? In this case, I'm going to change very little. In fact, I'm only looking at where we performed inputs and making sure those are captured appropriately, like the triggers I pressed on the key and the drawer, etc.

Let's go back to this huge output file from working with the Meta Quest 2. I can save it directly: this little button here in the recorder allows me to save it. I'm going to put it on my desktop, and it creates a file there that I can then open in Visual Studio or another development environment. Now, taking a look here, I actually have a few such recordings that I've taken some time to modify, though very little, and I'll explain what was modified before I rerun this test. Essentially, the output is just the raw recording, and as you can see from this example, we have a few thousand lines of input. So there's a lot of data being captured here. (I accidentally inserted a keystroke into my editor, but that's okay.)

The first thing that's done is setting the tracked device tracking state to this value and toggling it on. What this is basically saying is that for the headset, the left device, and the right device, we're setting the tracking state. Now, if you come back to the recording for a second, you'll note that the paths to these objects were Oculus Quest center eye position, Oculus Quest center eye rotation, and so on. Each of these paths included Oculus Touch or Oculus Quest, whereas my modified version does not. What I've done is replace these with the objects that I'm going to create in order to execute these tests.

One of the challenges of testing in VR is having to always put on that headset. What we're trying to do is reduce the amount of time you spend wearing the headset, so that you can focus more on gameplay mechanics and features and eliminate the need to test very repetitive kinds of functionality: do the buttons work, does the key go into the lock, can I open the drawer, and so on. That sort of thing is not a very high-value test and can easily be automated.
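To illustrate the path substitution described above (the exact recorder output format will vary by version, and these paths are only illustrative), the edit amounts to replacing the captured device name in each recorded path with the name of the device the test will create:

```text
captured by the recorder (raw):   OculusQuest/centerEyePosition
modified for replay:              GDIOHMD/centerEyePosition
```

The rest of each recorded line, such as the value and timing, stays untouched.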

And so what I've done here is replace these paths with those of the devices I'm creating, which is done in the main test itself. Over here in our test class, we have the standard test fixture definition, where we use the usual logic to determine whether we're running locally, remotely, on a physical headset, or what have you. Then there are some things that are new in this case, because we're going to be simulating those devices now. When we do that through the GameDriver agent, the editor sees them as connected devices; there's no difference between the GameDriver simulated device and the physical device, except, obviously, for the physicality of it.

A couple of things we're doing here: I'm providing the paths in a format that I can recognize. This could be modified to suit your needs, but instead of the Oculus headset, I've got GDIOHMD, GDIO being the GameDriver.io prefix we've adopted; my left path, for my left controller, is GDIO left hand, and my right path is GDIO right hand. I've also captured the height, which I've noted from my own experience is roughly my own height in this game. It's useful for me to capture these values so that I can use them for replay later, but we're not going to worry about that for now.

We go through the same sort of connect methods that we do for everything else, and then down below, just before we're finished with the one-time setup, you'll see something interesting. We're creating devices using the GameDriver API: we're creating the headset, with the definition of the Oculus HMD, the device name GDIO Oculus HMD, and the tag GDIO HMD. This is the format used for creating devices. I'm also inserting this dev path, which is useful when testing on the physical device, because it's actually going to be looking for a little more information there versus running in standalone. But we can ignore this for now.
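As a rough sketch of what this one-time setup might look like: note that the method name, parameter order, and path constants below are illustrative assumptions based on the description above, not the exact GameDriver API, so consult the GameDriver documentation for the real signatures.

```csharp
// Hypothetical sketch only; CreateInputDevice and its parameters are assumed,
// as is the surrounding test fixture.
string headPath  = "GDIOHMD";
string leftPath  = "GDIOLeftHand";
string rightPath = "GDIORightHand";

// After connecting the API client as in any other GameDriver test,
// register simulated XR devices so the editor sees them as connected hardware:
api.CreateInputDevice("Oculus HMD", "GDIO Oculus HMD", new[] { headPath });
api.CreateInputDevice("Oculus Touch Controller", "GDIO Left Hand",  new[] { leftPath });
api.CreateInputDevice("Oculus Touch Controller", "GDIO Right Hand", new[] { rightPath });
```

The point is simply that each created device gets a layout definition, a device name, and a tag that the modified recording paths can refer to.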

We're creating the HMD, a left controller, and a right controller, and then I'm outputting those paths, again for my own purposes. And that's it. Now, you'll note there are a couple of things here that I didn't really touch on. There's the new input system and there's the legacy input; GameDriver supports both, and this test is actually used to test both of those for us, but they're very similar in their implementation. This is basically saying: if input equals "new", for testing the new Input System, then this is what we're going to do; otherwise, we just enable hooks for legacy XR, which is very simple.

Another way to create these devices is using a command called create device from description. With create device from description, we can take a JSON file from the editor itself, or one that we create from the editor, and give it a name. So rather than defining the device as we have here, if you have a bespoke device, or something very specific with features you'd like to use that aren't part of the generic implementation, you can open the editor's Window menu, then Analysis, then Input Debugger. The Input Debugger, which I'll bring into my editor view, shows us all of the connected devices: we've got unsupported devices that are connected to my computer, and we've got disconnected devices. You can see my headset here, which is still plugged in but in a disconnected state because it's no longer tracking. This is the definition of those devices. Rather than loading the generic description that I did with my create device methods, I can right-click one of these; not in this view, but down in the layouts, I can find my specific devices, under Oculus tracked devices if I recall correctly. And we've got a whole bunch to choose from: Oculus headset, Oculus headset with on-headset controls, Oculus Touch controllers, etc.
Open XR HMD, which is just a generic representation of a headset, and so on. I can take any one of these and say copy the layout to JSON. Then I'd open a tool like Visual Studio Code, create a new file, and paste in this device definition from Unity. You can see a lot of the features and controls that are defined here; we can capture this, save it as a JSON file, and then load it as my device description. This is another useful way of testing in XR. But today we're going to focus on the generic approach, so I'm going to remove this and we'll just use the generic description.

Next, we do some basic tests: are we in the right scene, can I find an object, and so on. And then the test we're really interested in here is the input test. Our focus today is on the new Input System. If you're not familiar, there's the legacy input system in Unity and then the newer Input System package, which became more widely adopted around 2020 to 2021 and is now more or less the de facto standard, where we create mappings for actions and then assign those mappings to any number of devices. This has become the standard way of working in VR now, though there are still a lot of projects out there using the legacy input system. In this case, we're just going to work with the Input System.

I've saved my Input System recording, the file output from the recorder, into a new class, and I've given it a method which accepts the API client and a path to each of the devices I'm going to use, and then it executes based on that. We're basically just taking as input the API client for our test, an HMD, a left controller, and a right controller, and then we're just taking the base recording.
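Roughly, that replay class has the following shape; this is a sketch whose class, method, and type names are assumptions based on the description above, not the actual generated code.

```csharp
// Sketch of the class that wraps the saved recording.
// The method signature mirrors the description: it accepts the API client
// plus a path for each simulated device, then replays the captured inputs.
public static class InputSystemRecording
{
    public static void Replay(ApiClient api, string hmdPath,
                              string leftPath, string rightPath)
    {
        // ...the few thousand recorded input calls go here, with the original
        // Oculus device paths substituted by hmdPath, leftPath, and rightPath...
    }
}
```

Because the device paths are parameters, the same recording can be replayed against simulated devices with any names the test fixture chooses to create.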
The only thing that was changed here, besides the substituted paths that you see, where I'm taking the left path or the head path from the declaration of this method, was the input on the grip itself. What we found is that, because of the difference between the frames per second of the headset, which runs at about 90, and of the computer we did this recording on, which is in the hundreds, we had to tweak the duration of those inputs. So if I look for gripPressed, for example, this duration needed to be increased, roughly in proportion to the frame rates; I think it went from about 2 to 3, and that was the only change. We did this, I think, two or three times, but the other two thousand or so inputs captured by the recorder weren't changed at all.

Now we're ready to run this test. Essentially, I'm going to hit play, and we're going to watch as GameDriver creates the simulated devices. It puts the editor in play mode, creates the simulated devices, and then proceeds through the same sort of actions that I performed during my recording. It's important to note that I disconnected the headset before executing this test, because the default in the editor when a headset is present is to run in the headset. You can run tests against the headset, but I do not recommend wearing it while you do so, because the movement can be very jarring and will cause nausea, I can promise you that. You can see as the recording replays that I'm moving around and looking at my hands, and yes, again, trying not to bump into my desk as I grab the key and insert it in the lock, etc. This is a pretty complex behavior that was recorded using the method we just demonstrated, performing actions with VR controllers and a headset in 3D space. And you can see it's pretty precise in terms of how we're executing it.
I mean, it looks like a user is performing these actions. Other than that, it's pretty straightforward. We're taking the same concepts that we used in the 2D and 3D examples and applying them in XR, with the addition of the recorder tool, which gives us a little more granularity in how we perform those actions, and with a different focus: it's no longer about moving a character around a room or space to perform actions, it's about moving ourselves around that room and making sure that those recorded inputs can be performed consistently in replay as well.
