
On Automating Testing

I would like to discuss the importance of the tests in a software test suite.

In the software development world, tests are how a developer (and the whole company) knows the software is working.

This can refer to manual tests. Look at the website: is it displaying the latest blog post? That's a manual test. I have worked in places with a manual QA process much like this. It's painful for the QA people and for the developers.

A software developer wants to automate everything. Her production code automates someone else's work, so naturally she automates the tests as well. That way a large number of tests can run in a short time. Taken to the extreme, she can run the whole suite of automated tests after every small change to see whether anything broke.

Better still, she can write a small test, verify that it fails, then make it pass with a small change without breaking any other test. This is the core of Test Driven Development (TDD).
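To make the red/green cycle concrete, here is a minimal sketch using Python's built-in unittest. The slugify function and its expected behavior are hypothetical, invented just for the example.

    import re
    import unittest


    def slugify(title):
        # Step 2 (green): the smallest implementation that makes the test pass.
        return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")


    class TestSlugify(unittest.TestCase):
        # Step 1 (red): write this test first, watch it fail,
        # then write just enough code to make it pass.
        def test_title_becomes_lowercase_hyphenated(self):
            self.assertEqual(slugify("On Automating Testing"), "on-automating-testing")


    if __name__ == "__main__":
        unittest.main()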

Of course, "automated" means that the tests are code, so she stores them in a code repository along with the production code.

Which tests are the most important ones in our test suite? The tests which fail are the most important.
They keep her from writing code which would break them.

How do we keep track of which ones break most often? We could record which ones failed on our continuous build system, but she always codes TDD-style and never checks in failing code.
The tests running on the build system should always pass.

We record test failures on her laptop and collect them, together with the failures from test runs on other developers' laptops, into a centralized database of test failures.
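As one possible sketch of the laptop-side recording (this assumes pytest; the log path and record fields are made up for the example), a tiny conftest.py hook can append every test outcome to a local log:

    # conftest.py -- append each test outcome to a local JSON Lines log.
    # Assumes pytest; the log location and field names are illustrative only.
    import json
    import time
    from pathlib import Path

    LOG = Path.home() / ".test-failures" / "results.jsonl"


    def pytest_runtest_logreport(report):
        # Only record the actual test body, not setup/teardown phases.
        if report.when != "call":
            return
        LOG.parent.mkdir(parents=True, exist_ok=True)
        record = {
            "test": report.nodeid,
            "outcome": report.outcome,   # "passed", "failed", or "skipped"
            "duration": report.duration,
            "timestamp": time.time(),
        }
        with LOG.open("a") as f:
            f.write(json.dumps(record) + "\n")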

This collection allows us to run statistics on the test failure records.
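For example, assuming the records end up in a SQLite table with columns like test, outcome, and timestamp (the schema is an assumption, not something this post prescribes), the "which tests break most often" question becomes a one-line aggregate:

    # failure_stats.py -- rank tests by how often they fail.
    # Assumes a SQLite table results(test, outcome, duration, timestamp); the schema is illustrative.
    import sqlite3


    def most_frequent_failures(db_path, limit=20):
        conn = sqlite3.connect(db_path)
        rows = conn.execute(
            """
            SELECT test, COUNT(*) AS failures
            FROM results
            WHERE outcome = 'failed'
            GROUP BY test
            ORDER BY failures DESC
            LIMIT ?
            """,
            (limit,),
        ).fetchall()
        conn.close()
        return rows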

How do we collect them into a centralized database? With a dedicated collection process? By checking them into git? And if so, how do we deal with merge conflicts?
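One possible answer, purely as a sketch: let every laptop append to its own file (named after the developer or machine) so git never has to merge two writers, and have a small collector load whatever files it finds into the central database. The paths and table layout here are assumptions, not a recommendation from experience.

    # collect_results.py -- merge per-developer JSON Lines files into one SQLite database.
    # One file per developer (e.g. results/alice.jsonl) means no git merge conflicts.
    import json
    import sqlite3
    from pathlib import Path


    def collect(results_dir, db_path):
        conn = sqlite3.connect(db_path)
        with conn:  # one transaction: either all records land, or none do
            conn.execute(
                "CREATE TABLE IF NOT EXISTS results "
                "(test TEXT, outcome TEXT, duration REAL, timestamp REAL)"
            )
            for path in Path(results_dir).glob("*.jsonl"):
                for line in path.read_text().splitlines():
                    r = json.loads(line)
                    conn.execute(
                        "INSERT INTO results VALUES (?, ?, ?, ?)",
                        (r["test"], r["outcome"], r["duration"], r["timestamp"]),
                    )
        conn.close()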

Can we ask the build system (such as Maven or Bazel) to stop at the first failure?
Can we reorder the tests based on the change, running the tests most likely to fail first?
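Here is a sketch of the reordering idea, again assuming pytest and failure counts loaded from the database above; the hook is a real pytest extension point, but the ranking policy is just one possible choice.

    # conftest.py -- run historically failing tests first.
    # Assumes a dict of failure counts, e.g. loaded from the central database.
    FAILURE_COUNTS = {}  # e.g. {"tests/test_blog.py::test_latest_post": 7}


    def pytest_collection_modifyitems(session, config, items):
        # Sort in place: tests that have failed most often run first,
        # so a broken change is reported as early as possible.
        items.sort(key=lambda item: FAILURE_COUNTS.get(item.nodeid, 0), reverse=True)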

Can we deduce, based on which code lines changed, which tests are most likely to be affected?
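A sketch of that deduction, assuming we already have a per-test coverage map (coverage.py's dynamic contexts can produce one, though any source would do) and a set of changed lines parsed from git diff; the map format here is invented for the example.

    # affected_tests.py -- pick the tests whose recorded coverage touches the changed lines.
    # The coverage map format is invented: {test_id: {(filename, lineno), ...}}.
    def affected_tests(coverage_map, changed_lines):
        """
        coverage_map:  {"tests/test_blog.py::test_latest_post": {("blog.py", 42), ...}, ...}
        changed_lines: {("blog.py", 42), ("blog.py", 43)}  # e.g. parsed from `git diff`
        """
        return [
            test
            for test, covered in coverage_map.items()
            if covered & changed_lines  # any overlap means the test may be affected
        ]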

Those questions assume that test order is unimportant and that tests are deterministic about whether they pass or fail. Experience says neither of those assumptions is true. Can we detect non-deterministic (flaky) tests? Can we deduce which tests are passing (or failing) merely because of the order of the tests around them?
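A sketch of one flakiness heuristic, using the same failure records: if a test both passed and failed against the same commit, with no code change in between, it is probably non-deterministic. The commit column is an assumption; the recorder above would need to log it.

    # flaky_tests.py -- find tests that both passed and failed on the same commit.
    # Assumes the results table also records the git commit each run was made against.
    import sqlite3


    def flaky_tests(db_path):
        conn = sqlite3.connect(db_path)
        rows = conn.execute(
            """
            SELECT test, commit_hash
            FROM results
            GROUP BY test, commit_hash
            HAVING SUM(outcome = 'failed') > 0
               AND SUM(outcome = 'passed') > 0
            """
        ).fetchall()
        conn.close()
        return rows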

I like to think the answer is yes, we can do all of that, using an ACID database and a small process that updates the rows in a test-results table.
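As a sketch of what I mean (SQLite here, but any ACID store would do; the test_stats table is invented for the example), a single transaction can fold a batch of new results into per-test counters without two writers trampling each other:

    # update_stats.py -- fold new results into per-test pass/fail counters, atomically.
    # The test_stats table is invented for this sketch.
    import sqlite3


    def update_stats(db_path, new_results):
        # new_results: iterable of (test, outcome) pairs
        conn = sqlite3.connect(db_path)
        with conn:  # one transaction; rolled back automatically on any exception
            conn.execute(
                "CREATE TABLE IF NOT EXISTS test_stats "
                "(test TEXT PRIMARY KEY, passes INTEGER DEFAULT 0, failures INTEGER DEFAULT 0)"
            )
            for test, outcome in new_results:
                conn.execute(
                    "INSERT OR IGNORE INTO test_stats (test) VALUES (?)", (test,)
                )
                # Column name comes from our own code, never from user input.
                column = "failures" if outcome == "failed" else "passes"
                conn.execute(
                    f"UPDATE test_stats SET {column} = {column} + 1 WHERE test = ?",
                    (test,),
                )
        conn.close()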

Do you have ideas? Links to pages on how to improve automated testing? Links to projects that already collect failing tests? Comment below!