Hello everyone, and welcome to this boot camp. Today we're going to cover two main topics around expanding your test automation. First, as you know, the world of software is growing in complexity, and how Eggplant solves this problem. Second, with ever-faster release cycles, how do we as a testing organization keep up?
Ethan Chung: Let's take the first topic first: increasing complexity.
Software development has changed over the last few decades. What typically used to be under the control of one organization has shifted into an incredibly complex environment, with rapid release cycles across multiple platforms. One of the first things you'll notice is the move away from waterfall and monolithic release structures, where updates generally came as one big release, whether quarterly or even yearly.
Now we're shifting to potentially daily releases, which means testing has to keep up with these cycles. Secondly, consider the actual environments you're building in. Historically, most companies managed their own environments; modules and services were all built in-house. Now, with off-the-shelf software and services available online, everything has shifted. Components can come from anywhere, from open-source projects to cloud-provider-hosted solutions.
All of this means you no longer have control of these environments for testing.
So how do we deal with this, combined with the broadened impact of delivering across platforms? Historically, applications were delivered against a single environment: you'd build something just for desktop, just for Android, or just for iOS. Now you can build one solution and deliver it on multiple platforms, whether that's a browser, a thin client, a thick client, or mobile on iOS and Android, often through one framework shared across all of them. That's fantastic from a development perspective, but it creates a scalability challenge for testing: you're chasing releases to make sure you can test all these potential outputs, across devices and OSes that are growing exponentially every single day.
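To make the scalability point concrete, here is a minimal sketch of how the test matrix multiplies when one codebase targets many platforms. The platform, version, and device names are illustrative assumptions, not from the talk:

```python
from itertools import product

# Hypothetical delivery targets for a single codebase.
platforms = ["browser", "thin client", "thick client", "iOS", "Android"]
os_versions = ["v1", "v2", "v3"]           # OS / browser versions to support
form_factors = ["phone", "tablet", "desktop"]

# Every combination is a configuration that may need validating.
combinations = list(product(platforms, os_versions, form_factors))
print(len(combinations))  # 5 * 3 * 3 = 45 configurations
```

Adding just one more OS version or form factor multiplies the total again, which is why a purely manual approach stops scaling.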
Let's talk about how we previously did this and where we're moving. Traditionally, testing has focused on a few approaches. First, manual testing. It's intelligent: you have people who can manually click through your system and recognize issues anywhere they appear, even outside what's formally being tested, and report and highlight them to the product team. However, one big problem is that this doesn't generate shared business insight: all the knowledge your testers build up tends to stay with them rather than being shared across the organization and the rest of the team. Another problem is that humans, while intelligent and fantastic at understanding complex issues, are incredibly time-consuming. You ultimately need someone at a desk during working hours, clicking through and understanding the system, which is time-consuming both to train and to maintain. And as people, we need to take breaks and rest, so there will always be a barrier to continuous delivery: you can't expect a 24-hour release cycle if you don't have 24 hours of validation that your environment is working.
The second method of testing has been automated regression tests: scripted tests that run through a pack of predefined actions. Historically, these were built around key items like epics, with journey mapping through possible user movements, and then extended based on product additions or historic bugs that had been discovered. Those historic bugs and user assumptions immediately narrow the range of testing: you're effectively testing for the bugs you expect to find, where you expect to find them, rather than testing intelligently. If you don't keep building on these scripts to cover exactly what you need, you're going to miss issues. Furthermore, these tests are generally built around a component-based focus, not around UX. If you build a new component, say a button in your application, the test may find the object and interact with it at the object layer. What it can't do is understand how the application behaves from a UX perspective. If the button is hidden off-screen, you can still interact with the object, but from a user's perspective it's broken. If the button is red on a red background, it's broken: a user can't actually see it, yet from an object perspective it still works. And it's the same problem as manual testing: this doesn't build continuous business insight to drive quality in your processes.
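The red-button-on-red-background example can be sketched in a few lines. This is a hypothetical illustration, not a real framework API: `Button` and `contrast_ratio` are made-up names, and the check uses a WCAG-style contrast ratio to stand in for a visual/UX assertion.

```python
def contrast_ratio(fg_luminance: float, bg_luminance: float) -> float:
    """WCAG-style contrast ratio between two relative luminances (0..1)."""
    lighter = max(fg_luminance, bg_luminance)
    darker = min(fg_luminance, bg_luminance)
    return (lighter + 0.05) / (darker + 0.05)

class Button:
    def __init__(self, exists: bool, fg_lum: float, bg_lum: float):
        self.exists = exists
        self.fg_lum = fg_lum
        self.bg_lum = bg_lum

    def object_layer_check(self) -> bool:
        # Component-based test: the element is in the object tree, so it "works".
        return self.exists

    def visual_check(self) -> bool:
        # UX test: red text on a red background has ~1:1 contrast and is invisible.
        return contrast_ratio(self.fg_lum, self.bg_lum) >= 3.0

red_on_red = Button(exists=True, fg_lum=0.21, bg_lum=0.21)
print(red_on_red.object_layer_check())  # True  -> object-layer test passes
print(red_on_red.visual_check())        # False -> the user cannot see the button
```

The same element passes one check and fails the other, which is exactly the gap between object-based and UX-based testing described above.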
Lastly, we have siloed monitoring and testing. The main problem here is not learning from experience: multiple siloed groups of people test without sharing information across teams. And many monitoring frameworks are built squarely around the coding language itself, so you have a chicken-and-egg chase of updating these monitoring solutions to make sure what you're monitoring stays current with the newest version.
Furthermore, without a user focus sitting between the monitoring and the automated regression tests, you can't actually test the behaviors going through the application, naturally mimicking a user moving through it. So how are we solving this problem? From Eggplant's perspective, purely on the increasing-complexity side, we're able to automate the full digital experience. The first thing we do is focus on the model: the digital twin. The digital twin maps user behaviors across these applications.
Think of it like a map. Historically, if you were giving a user directions, you would say: turn left at the church, walk down the street, take the fourth right, then turn right again. There are gaps in the detail, but that's how we did manual testing, and how we did regression testing. What we're building now is effectively Google Maps: we have a wide view of the environment, we know exactly where every road leads, and at the press of a button we can route from pin A to pin B, using our automation engine to create a path for our testing to go down. All of this interacts with our fusion engine to detect bugs and defects faster, giving you a faster mean time to identify (MTTI), and to automatically surface that as business intelligence, along with insight into how your testing takes place. This AI-driven testing approach gives you an extremely flexible testing model across your environment: leveraging our internal AI, we can effectively build these regression packs for you and optimize your user journeys, from a low-code/no-code approach all the way down to a full code language underneath for your power users. As mentioned before, one of the biggest problems of increasing complexity is the environments you currently operate in.
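The "pin A to pin B" idea above is the core of model-based test generation: represent the application as a graph of screens and let the engine compute the journey. Here is a minimal, framework-independent sketch; the screen names and graph are invented for illustration, and plain breadth-first search stands in for whatever routing the real engine uses.

```python
from collections import deque

# Hypothetical digital-twin model: each screen maps to the screens reachable from it.
app_model = {
    "login":    ["home"],
    "home":     ["search", "cart", "login"],
    "search":   ["product", "home"],
    "product":  ["cart", "search"],
    "cart":     ["checkout", "home"],
    "checkout": [],
}

def journey(model, start, goal):
    """Breadth-first search: shortest user journey from start to goal, or None."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in model[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(journey(app_model, "login", "checkout"))
# ['login', 'home', 'cart', 'checkout']
```

Because the journey is computed from the model rather than hand-scripted, adding a new screen to the graph automatically opens new testable routes, instead of requiring every affected script to be rewritten.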
Over its existence, Eggplant has seen a massive boom in technology, particularly in mobile phones. The iPhone historically used to be just one device, one system; now it comes in multiple versions with different resolutions, and Android is a plethora of different devices, brands, and OSes. The same goes for every other technology, with new browsers, OS versions, and devices coming out constantly, making the landscape incredibly fragmented. Furthermore, beyond the object layer there are APIs, closed black-box systems, even servers that physical users can't connect to. Eggplant's fusion engine effectively allows you to interact with all of these, both as a user and directly with the system itself, which allows you to test anything from Walmart's checkout systems to NASA or the U.S. Army.
That's a massively wide range of use cases, and Eggplant, as a single central point, is able to test all of them. For the next 30 seconds, I'm going to show you a few examples of Eggplant in action. As you can see here, Eggplant is handling mobile testing; it can also test thick-client applications, even mainframes, and many more physical interactions with systems.
One of the main benefits is that you no longer need an ever-growing set of complex testing tools and siloed equipment. You can have one stop for all your testing solutions within Eggplant, reducing the complexity of your testing. That way, as your environment grows more complex, there's no knock-on effect on your testing requirements.