Let's take a look at what it would look like for me to run this test case.
I'm going to head over to our Test Execution section.
We're going to be able to see the structure that we have already created, with the releases, and under each release I have created a cycle for the type of tests that we will be conducting under that release. In here, we can also create what we call a test suite.
And in my case, I have separated my test suites by the application that is under test. So now, very quickly, I can find all the unit tests and the application they were run against.
The same goes for all of my functional tests and the applications they were run against. And again, this comes in handy later on when we want to do all of our reporting. I have already created some functional tests and added them to this test suite. If I open up this test suite, I can see that there are five tests that make up the suite. Three of them have been executed and two of them still need to be executed.
Let's walk through a basic workflow of what a tester would do when they're ready to execute one of these tests. In this case, I am going to use my Enter Vehicle Data test. I'm going to select it and run it. The TestPad will open. Again, we have our precondition.
Let's go ahead and navigate to it. This will open up that form that we need to fill out. Let me go ahead and put those side by side so we can have a better look. And now I can go through each of these steps. In this case, we're asked to choose a vehicle make. I can go in here and choose Mercedes-Benz as an example. I can set this as passed, and I could log any documentation or evidence that I was able to pass this step, as well as log an actual result.
All right. Now I can move to the second step, and maybe I notice that there is no field to choose a vehicle model. In this case, as a tester, I can go ahead and fail that step, and maybe I want to log a defect directly from the TestPad.
I can click on my defect tab. I'll go ahead and include all of the test steps up to the step that failed and go ahead and create a new defect. This will pull up the defect tab, and I can quickly go ahead and select the severity of my defect. Let's call this major. We can select a priority; I'm going to set it to high. And we can also assign it to a specific dev or tester to keep an eye on it. In this case, this is a new defect.
Let's go ahead and save and close the defect.
And now we can see that with that step we have a defect associated that we can track and trace back to this execution. Let's go ahead and mark this overall test as failed and go ahead and save. I can close out of here, and let's close my test tab. When we're back in qTest, we can see the executions, and the latest execution that we just ran for this test appears as failed.
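The demo logs the defect through the TestPad UI, but the same kind of defect record can also be created programmatically. The sketch below is an illustration only: the site URL, token, project ID, endpoint, and especially the "properties" field IDs are assumptions based on the qTest Manager REST API, and defect fields are project-specific, so adapt it to your own project before use.

```python
import requests

# Placeholder values for illustration; none of these IDs are real.
QTEST_URL = "https://yoursite.qtestnet.com"   # hypothetical qTest site
API_TOKEN = "your-api-token"                  # personal access token (assumed)
PROJECT_ID = 12345                            # hypothetical project id


def create_defect(summary: str, description: str) -> int:
    """Create a defect in qTest and return its id.

    qTest defects are defined by project-specific fields, so the
    'properties' list below (field ids and values) is an assumption;
    look up your project's actual defect fields via the API or the
    field settings screen before using this.
    """
    payload = {
        "properties": [
            {"field_id": 101, "field_value": summary},      # Summary field (assumed id)
            {"field_id": 102, "field_value": description},  # Description field (assumed id)
        ]
    }
    resp = requests.post(
        f"{QTEST_URL}/api/v3/projects/{PROJECT_ID}/defects",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json=payload,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]


if __name__ == "__main__":
    defect_id = create_defect(
        "Vehicle model field missing on Enter Vehicle Data form",
        "Step 2 of the Enter Vehicle Data test failed: no field to choose a vehicle model.",
    )
    print("Created defect", defect_id)
```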
We also have a new defect logged within it. For the overall suite, I can now see that three out of those five tests have been executed: two have passed, one has failed, and we have two tests that we still need to execute.
We can always dive deeper into that execution. If we want to see the execution history down here, we can see that we have run this test two times. The first time, we were actually able to pass it, but the second time, we encountered the defect that we logged and traced back to this execution. Down here, we can also see the details of the latest run, broken down by step, and we can see the specific step where we found a defect. If we had logged any evidence with it, we would also see those screenshots appear in here for review, so managers are able to identify that defect as well.
Let's close out of here.
Now that we have seen how we can run a manual test case, it is worth mentioning that qTest can also centralize all of your automated executions, and we'll see how we can bring in automation results in a variety of different ways thanks to Simon, who will walk us through that process.
For now, we can already see examples of automation results that we have brought in from Jenkins and also from Postman.
These are automated executions that run outside of qTest, but we were able to orchestrate and schedule them directly from the qTest UI, as well as consume their results back into qTest automatically.
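Most teams use the built-in integrations shown here (for example the Jenkins plugin) to push results, but external runners can also report results over the qTest REST API. The sketch below is a minimal example of that idea, not a definitive implementation: the site URL, token, IDs, and the exact endpoint and payload field names are assumptions based on the qTest Manager API v3 and should be verified against the current API documentation.

```python
import requests
from datetime import datetime, timezone

# All values below are placeholders/assumptions for illustration only.
QTEST_URL = "https://yoursite.qtestnet.com"   # hypothetical qTest site
API_TOKEN = "your-api-token"                  # personal access token (assumed)
PROJECT_ID = 12345                            # hypothetical project id
TEST_RUN_ID = 67890                           # hypothetical test run id


def post_automation_result(status: str) -> None:
    """Submit one automated execution result to a qTest test run.

    The endpoint and payload fields are based on the qTest Manager
    REST API (v3) and may differ on your instance; verify them
    against the API docs before relying on this.
    """
    now = datetime.now(timezone.utc).isoformat()
    payload = {
        "status": status,            # e.g. "PASSED" or "FAILED"
        "exe_start_date": now,
        "exe_end_date": now,
        "note": "Result pushed from an external runner (e.g. a Jenkins or Postman job)",
    }
    resp = requests.post(
        f"{QTEST_URL}/api/v3/projects/{PROJECT_ID}/test-runs/{TEST_RUN_ID}/auto-test-logs",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json=payload,
        timeout=30,
    )
    resp.raise_for_status()
    print("qTest accepted the result:", resp.json().get("id"))


if __name__ == "__main__":
    post_automation_result("PASSED")
```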
This way we have a centralized tool where we can keep all of our manual executions and automated executions. Whether those are functional tests, performance tests, or unit tests, you name it, we can have all of them centralized in one easy UI. If we click on the top-level project, we can also see a brief summary of all of the testing activities that have happened in qTest.
So we can see that there are 26 different runs in this project, 24 of which have been executed and two of which are still pending execution, as well as a breakdown of those automated executions and the releases that have had tests conducted on them.
With that being said, I want to pass it over to Simon to cover automation and how we can bring those results into qTest.