1. What Makes a “Good” Automated Test
Dmytro lays the foundation with five essential properties:
Readable
Tests should tell a clear story:
“Given X, when Y, then Z”, without digging into helpers to understand what’s going on.
Easier to debug and reason about failures.
Fast
He explicitly pushes back on “8-hour nightly builds”:
long, monolithic suites are a bad tactic, not a badge of honor.
Aim for tests that can be run often and in parallel.
Independent & Atomic
Each test does one thing (Single Responsibility Principle for tests).
No reliance on state created by previous tests.
Repeatable
Tests should pass or fail the same way regardless of environment/order, assuming the system is the same.
Robust
Minor UI or data changes shouldn’t cause half the suite to fail.
Clean abstractions and targeted assertions help with this.
👉 As you watch, ask: Do my tests actually meet these five? Where do they fall short?
2. Clean Test Code: Fixtures & Separation of Concerns
He then moves into practical patterns and anti-patterns, starting with setup/teardown and base classes.
Anti-pattern: Tests doing too much (pre + test + cleanup)
Example (simplified):
renameBook test:
Creates a book
Opens the page
Renames the book
Verifies via API
Deletes the book
Problems:
Test name says “rename book” but it also creates and deletes.
If the rename step fails, the cleanup may never run, leaving dirty state.
Violates “do one thing” and mixes precondition, scenario, and postcondition in one method.
Same issues in deleteBook:
Duplicate setup logic: create book + open page repeated in multiple tests.
Refactor: Fixture/Base Class + Proper Pre/Post
He shows a better approach:
Create a Fixture base class:
Initialize REST client, pages, shared fields.
Provide @BeforeClass for login.
Maintain a “generated data list” to track created objects.
Each test class (CreateBookTests, ManageBookActionsTests) extends Fixture.
Move common steps into:
@BeforeMethod → per-test preconditions (e.g., create a book, open the book page).
@AfterMethod or list-based cleanup → per-test postconditions.
Important nuance:
Don’t put “delete all books” in the base class — it’s dangerous with parallel runs:
One class might be deleting books while another still needs them.
Instead:
Track data created in that test class instance and clean up only its own data.
Key takeaway:
Precondition, test scenario, and postcondition should be explicitly separated, and shared logic should live in appropriately designed fixtures — not shoehorned into every test body.
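A minimal sketch of this shape, assuming hypothetical BookClient and Book types (all names here are illustrative, not from the talk):

import java.util.ArrayList;
import java.util.List;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.BeforeMethod;

public abstract class Fixture {

    protected BookClient bookClient;       // shared REST client
    protected List<Book> generatedBooks;   // data created by this class instance only

    @BeforeClass
    public void setUpClass() {
        bookClient = new BookClient();
        bookClient.login("admin", "secret");   // shared login precondition
        generatedBooks = new ArrayList<>();
    }

    @AfterMethod
    public void cleanUpOwnData() {
        // Delete only what this instance created; safe under parallel runs
        generatedBooks.forEach(bookClient::deleteBook);
        generatedBooks.clear();
    }
}

// In its own file:
public class ManageBookActionsTests extends Fixture {

    private Book book;

    @BeforeMethod
    public void createBookAndOpenPage() {
        book = bookClient.createBook("My Book");   // per-test precondition
        generatedBooks.add(book);                  // track it for cleanup
        // ...open the book page in the UI
    }
}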
3. Atomic Tests & The Dangers of Dependencies
This section is gold if you’ve ever been bitten by dependsOnMethods or priority-based ordering.
a) One test = one behavior
He shows an example where one test:
Renames a book,
Asserts it was renamed,
Deletes the book,
Asserts it was deleted.
If rename fails:
The test stops.
You never find out whether delete works or not.
The business might decide the rename bug is “not critical,” but a deletion bug would be, and you’d miss it.
Fix:
Split into two atomic tests:
testRenameBook
testDeleteBook
Each with its own setup.
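A sketch of the split, reusing the fixture idea from above (createBook, renameBook, getBook, and bookExists are assumed helper methods):

import static org.testng.Assert.assertEquals;
import static org.testng.Assert.assertFalse;
import org.testng.annotations.Test;

public class BookActionsTests extends Fixture {

    @Test
    public void testRenameBook() {
        Book book = bookClient.createBook("Original");       // own precondition
        bookClient.renameBook(book, "Renamed");
        assertEquals(bookClient.getBook(book.getId()).getTitle(), "Renamed",
                "Book title should be updated after rename");
    }

    @Test
    public void testDeleteBook() {
        Book book = bookClient.createBook("To be deleted");  // own precondition
        bookClient.deleteBook(book);
        assertFalse(bookClient.bookExists(book.getId()),
                "Book should no longer exist after deletion");
    }
}

If one of them fails, the other still runs and still tells you something.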
b) Explicit dependencies (dependsOnMethods)
Example:
createBook
renameBook depends on createBook
deleteBook depends on renameBook
Problems:
If createBook fails: renameBook and deleteBook are skipped.
You don’t know if they’re broken or not; you just don’t run them.
Debugging:
You see “deleteBook failed/skipped,” but to reproduce you must run the whole chain.
Running deleteBook alone doesn’t work because it needs upstream state.
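The chain looks roughly like this in TestNG (bodies elided):

@Test
public void createBook() { /* ... */ }

@Test(dependsOnMethods = "createBook")
public void renameBook() { /* ... */ }   // skipped whenever createBook fails

@Test(dependsOnMethods = "renameBook")
public void deleteBook() { /* ... */ }   // skipped too, and cannot run alone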
c) Hidden dependencies via priorities
Another subtle trap:
Use of priority annotations instead of explicit dependencies:
@Test(priority = 1) createBook
@Test(priority = 2) renameBook
@Test(priority = 3) deleteBook
This looks independent, but logically, each test is relying on the previous one’s state.
Result:
Running the full class: green.
Running deleteBook alone: fails (“no book to delete”).
You must read the code to discover the hidden dependency.
Core lesson:
Tests should be atomic and self-contained. Each test must prepare its own preconditions instead of relying on the side effects of other tests or execution order.
4. Explicit Assignment & Avoiding Hidden Side Effects
He highlights a subtle but common smell:
Anti-pattern: Methods that “secretly” assign shared fields
Example:
Method createBookSmell():
Creates a book and assigns it to a shared field inside the class.
Returns void.
From the outside:
You call createBookSmell(), then later use some book field.
It’s not obvious when and where that field was set.
If another call reassigns the same field, you can easily end up using the wrong instance.
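Roughly what the smell looks like (the surrounding test class and bookClient are illustrative):

private Book book;   // shared mutable field

private void createBookSmell() {
    this.book = bookClient.createBook("My Book");   // hidden side effect: assigns the field
}                                                   // returns void, so the caller sees nothing

@Test
public void renameBook() {
    createBookSmell();                        // which book is `book` now? You must read
    bookClient.renameBook(book, "Renamed");   // the helper's internals to be sure
}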
Better pattern:
Methods that create something should return it.
Use explicit assignment in the test:
Book book = bookClient.createBook();
Benefits:
You can see exactly which book you’re working with.
No hidden state updates, easier debugging, and clearer ownership.
5. Don’t Rely on List Order (get(0) Rule)
He calls this the “get(0) rule”.
Example:
Tests that always use books.get(0), expecting the first book to be the one they just created.
Why this is fragile:
Other tests may create books too.
Sorting in the UI or API can change.
A rename or update might move your book down in the list.
He demonstrates:
One test that uses get(0) to find the book.
Another test that finds the book by a predicate (by name).
When names change, the get(0) test breaks; the predicate-based test still passes.
Core lesson:
Never rely on element position as a proxy for identity.
Always search for objects explicitly (e.g., by ID, name, or other unique property).
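A sketch of the two lookups over a plain List<Book> (getName is an assumed getter):

// Fragile: position as a proxy for identity
Book first = books.get(0);

// Robust: find the object explicitly by a unique property
Book mine = books.stream()
        .filter(b -> "My Book".equals(b.getName()))
        .findFirst()
        .orElseThrow(() -> new AssertionError("Book 'My Book' not found"));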
6. When to Use Soft Assertions
He introduces soft assertions vs. regular (“hard”) assertions.
Hard assert:
Fails on the first mismatch.
Good for simple checks.
Soft assert:
Collects all assertion failures,
Reports them together at the end.
Where soft asserts shine:
Checking many fields of a DTO
e.g., a book’s title, id, author, createdBy, etc.
With a hard assert, you see only the first mismatch and may miss other issues.
Verifying multiple UI elements on a page
Ensure that all key elements are present after a load.
With soft asserts, you see every missing element in one run.
He shows:
A page check with hard asserts → stops at first missing icon.
Same check with soft asserts → reports both missing “chapter icon” and “page icon”.
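With TestNG’s SoftAssert (org.testng.asserts.SoftAssert), that check might look like this; the page-object getters are illustrative:

SoftAssert softly = new SoftAssert();
softly.assertTrue(bookPage.isChapterIconDisplayed(), "Chapter icon should be visible");
softly.assertTrue(bookPage.isPageIconDisplayed(), "Page icon should be visible");
softly.assertAll();   // reports every collected failure in one run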
Important caution:
Make sure the page is fully loaded (no loaders, no JS still running, etc.) before using soft asserts; otherwise you’ll just collect a list of false negatives.
7. Readable Assertions, var Usage, and Assertion Messages
He ends with a few smaller but important style practices.
a) var usage
Don’t mix var and explicit types when the underlying types are different (API vs. UI, etc.) in a way that hides intent.
It’s fine to use var when the type is obvious from context, e.g.:
Book adminBook = ...
var userBook = ...
Keep readability and clarity as top priorities.
b) Avoid nesting function calls inside assertions
Anti-pattern:
assertEquals(
    someService.getBooks().stream().filter(this::isActive).findFirst().orElseThrow().getTitle(),
    "Expected Title"
);
Better:
Book activeBook = someService.getActiveBook();
String title = activeBook.getTitle();
assertEquals(title, "Expected Title", "Active book title doesn’t match");
Why:
Easier to read & debug.
You can log intermediate values.
Stack traces are clearer.
c) Always include assertion messages
Bad:
assertEquals(bookCount, 1);
→ Failure: “expected 1 but was 0” — but what is this counting?
Worse:
assertTrue(flag);
→ Failure: “expected true but was false” — no context at all.
Better:
assertEquals(bookCount, 1, "Book count after creation should be 1");
assertTrue(userLoggedIn, "User should be logged in after successful sign-in");
This saves time when reading reports and avoids diving into stack traces just to understand what the assertion was checking.
8. Why This All Matters: Team & Process Level
In his recap, Dmytro ties everything back to the bigger picture:
Maintainability
Clean code = less effort to adapt tests to new behavior.
Reliability
Atomic, independent tests = fewer flaky, order-dependent failures.
Team Collaboration
If tests follow conventions, any team member can read, debug, and fix them.
You’re not blocked by “the one person who understands that suite.”
Scalability & Speed
Stable, well-structured tests can run in parallel and be expanded safely.
Adding new tests feels like plugging into a clean system, not hacking on a mess.
He suggests practical steps:
Define coding standards as a team (and involve everyone in creating them).
Encourage continuous learning (training, workshops).
Build a strong code review culture and mentorship for both authors and reviewers.
Assign ownership for parts of the framework (e.g., UI, API, modules), so people care about quality and consistency.
And he shares resources to deepen your practice:
Books:
Clean Code – Robert C. Martin
Refactoring – Martin Fowler
Design Patterns – GoF
Blogs:
Martin Fowler
Refactoring.Guru
Conventions:
Official Java code style (Oracle) for Java projects.
