
Don’t Write Automated Tests

Increase quality without sinking money into low-return tests

A few years ago, I wrote a blog post about ways to approach test automation. I touched on some considerations that help make an automation project successful; however, I never explicitly dove into when other approaches may be better than automated tests. Specifically, I want to address an anti-pattern I’ve seen: writing large suites of automated tests to manage problems that have more appropriate resolutions.

Should You Automate?

I advocate for automating as many processes as feasible. At the very least, well-designed automated processes are typically more reliable than manual ones. However, not all automated processes are well designed, and an automated test is not always the best solution to a given problem. To take “shift left” as far as possible, and truly consider quality from the start of a project, recognize that an automated test may not be the best or only option in several scenarios. Ask yourself the following (non-exhaustive) questions before you write your tests:

  1. Are you writing tests to replace manual testers?
  2. Do you consistently get software that doesn’t do what you expect?
  3. Do you often have regression issues that make it to production even when you do have automated tests?
  4. Are you writing tests for vendor/off-the-shelf code?
  5. Is your biggest problem “why is system ‘x’ down”?
  6. Do you want to write tests, but your system is so complex that it doesn’t seem possible?

Replacing Manual Testers

“There are as many manual testers as developers.” “Test and release cycles are still long, with high costs and missed defects.” These are some of the reasons I’ve heard for implementing automated tests and cutting manual testers. However, replacing manual testers outright smells of a misunderstanding of what each kind of testing is good at. I’m not saying, “never cut staff!” Certainly, a large group of dedicated testers is often unnecessary when using modern development practices. However, manual tests and automated tests have different strengths.

One goal of automated tests should be to provide high levels of repeatability and consistency, demonstrating that functionality is operating exactly as the test expects (and failing when it is not). This frees your manual testers to focus on areas requiring more thought and consideration. A manager may be able to reduce staff at that point if an acceptable level of risk is being mitigated. On the other hand, having no manual testers at all (whether as a dedicated role or a shared one) increases the risk that new functionality is not behaving as expected. That may not be a big deal for a small business website whose customers mostly call in, but it is a very big deal if failures in your software can lead to deaths or financial losses.

Exploratory testing, look-and-feel testing, usability testing, and other activities performed by a manual tester let them use their investigative skills to discover information about the application. This is still extremely important. These activities can even produce automated tests (to help the tester during the activity, and afterward to help catch regression issues). Weigh risk mitigation and testing needs before deciding to throw automation at a problem in place of manual testers.

Software That Doesn’t Do What You Expect

If you regularly receive software that leaves you surprised (a feature doesn’t behave the way you wanted, a feature is missing entirely, etc.), there may be a communication or reporting challenge to overcome. Sure, we could write automated tests for every conceivable scenario as the software is developed, but there is a more important smell here. If the team and the stakeholders are not in alignment, no amount of testing will consistently help. How do we deal with this?

There are many reasons this sort of challenge can manifest, and each may need its own resolution: PBIs/specs with conflicting asks, the right people not talking to each other (whether due to interpersonal challenges or team/organization size), too many hands working on features, and so on.

Ensuring retrospectives are open and honest, leveraging reporting tools in an unobtrusive way, not ignoring people-management skills, investigating development scaling approaches, using logic modeling or other techniques for complex applications, or doing an occasional root cause analysis could be the best solution for many of these challenges. The main goal is to create an opportunity to identify and remediate why information isn’t reaching the right people at the right time, instead of trying to come up with tests that catch the problem after it has already happened.

Regression Issues

Parts of your application that have been stable or untouched suddenly break. The common mantra is “we didn’t have test coverage there,” often with the follow-up that more tests should be added to more areas. Sometimes more testing is the right answer. However, looking at what and how you’re testing is often a better first step. Maybe your suite isn’t testing what you care about, is inconsistent, or takes so long that people forget about it. Most companies do not have unlimited budgets for testing, so there are almost always tradeoffs to make.

The idea that everything must be tested is a challenge not unlike the coastline paradox. Particularly with significantly complex software, once you start looking at combinations of inputs and outputs, getting “full coverage” is essentially impossible. Machine learning may eventually close that gap, but I’ve yet to see a tool that can do so with consistent confidence. Some of the most effective tools I’ve seen in this space are built around fuzzing, a testing technique that feeds random or invalid data as inputs, which can produce interesting results.
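
As a heavily simplified illustration of that fuzzing idea, the Python sketch below hammers a made-up parse_order function with random printable strings and only flags exceptions it did not anticipate; the function, its input format, and the exception handling are assumptions for the example, not a recommendation of a particular tool.

  import random
  import string

  def parse_order(raw: str) -> dict:
      # Stand-in for the code under test; expects input like "A123,4".
      order_id, quantity = raw.split(",")
      return {"id": order_id, "quantity": int(quantity)}

  def random_input(max_len: int = 20) -> str:
      return "".join(random.choice(string.printable)
                     for _ in range(random.randint(0, max_len)))

  if __name__ == "__main__":
      for _ in range(10_000):
          candidate = random_input()
          try:
              parse_order(candidate)
          except ValueError:
              pass  # expected rejection of malformed input
          except Exception as exc:  # anything else is worth investigating
              print(f"Unexpected {type(exc).__name__} for {candidate!r}")

Dedicated fuzzers are far smarter about generating inputs and tracking code paths, but even a naive loop like this occasionally surfaces a failure mode nobody wrote a test case for.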

Taking a risk-based approach to testing and putting effort into the areas you really care about tends to yield the best ROI. That could mean a smaller, more focused test suite. It could mean focusing on consistency in the most critical areas. It may even mean shifting some testing to manual testing while automated tests are written. This could also be an area where machine learning helps analyze prior issues and focus testing, instead of blindly adding more tests to more areas.

There are other implications here as well: in a poorly designed system, you may think you have good test coverage for a feature, yet a change to that feature breaks something somewhere completely unexpected. You may start to hear things like, “Well, if we change this class, we may want to run our full regression.”

In this scenario, we likely have brittle, poorly architected code. If possible, a good refactor that adheres to best practices and tries different design approaches would be the best option. Automated testing will likely follow, but a refactor of that caliber would presumably include robust unit tests that work in tandem with API and UI tests for better coverage, rather than simply adding more tests to chase breaks in a brittle codebase.
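
As a tiny, hypothetical illustration of that tandem, once logic is extracted into a small, well-defined unit it can be tested directly instead of only through broad regression runs; the discount rules below are invented for the example and could be exercised with any Python test runner such as pytest.

  # Hypothetical example: after a refactor, pricing logic lives in one small,
  # pure function that unit tests can exercise directly.
  def discounted_total(subtotal: float, is_member: bool) -> float:
      """Apply a flat 10% member discount; never return a negative total."""
      discount = 0.10 if is_member else 0.0
      return max(round(subtotal * (1 - discount), 2), 0.0)

  def test_member_gets_ten_percent_off():
      assert discounted_total(100.0, is_member=True) == 90.0

  def test_non_member_pays_full_price():
      assert discounted_total(100.0, is_member=False) == 100.0

API and UI tests then only need to confirm that the function is wired in correctly, rather than re-proving every pricing rule end to end.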

Off-the-Shelf Software

A new project is in progress, and your folks are struggling to test an application from a vendor. There’s little domain knowledge, not enough time, and/or it’s difficult to write automated tests against. Sometimes the vendor ships breaking changes without warning, so your staff ends up testing the vendor’s application more and more frequently. The problem with this approach is that your teams often do not have a direct line to the vendor and may not be experts in the product the vendor is providing.

It’s never a bad idea to test your customizations and integrations with other software, along with some sanity checks to ensure the software works as sold (which could surface the need for additional testing). My philosophical problem with extensively testing vendor code is that if you pay a vendor, that vendor should be testing it better than you can. I’ve seen success when initial statements of work and planning include open conversations about testing expectations and responsibilities. If a vendor or implementer is not willing to test the deliverable, you might want to revisit why you picked that vendor and/or software. This can inform contract negotiations and RFPs so that one of your teams is not shouldering an additional burden for software they are not experts in (or, at the very least, is partnering with the folks who know the software being implemented).

Systems Being Inoperable

Systems go down when you don’t expect it. A transaction suddenly takes 10 seconds when you swear it used to take 3. If you’re constantly wondering about the status of a system or workflow, you could technically capture that with tests run on a regular cadence (every hour, five times a day, etc.). However, that is really a monitoring problem, and tools exist (AppDynamics, Splunk, AWS CloudWatch, Azure Monitor, etc.) to solve it. These tools can monitor your systems, proactively alert your team when performance degrades, and sometimes even offer insights into why.
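
If adopting one of those platforms isn’t in the cards yet, even a scheduled synthetic check covers the basics. The Python sketch below probes an assumed health endpoint and complains when it is slow or unreachable; the URL, latency budget, and print-based “alerting” are placeholders for whatever your stack actually uses.

  import time
  import urllib.request

  CHECK_URL = "https://example.com/health"  # assumed health endpoint
  LATENCY_BUDGET_SECONDS = 3.0              # assumed acceptable response time

  def check_once() -> None:
      start = time.monotonic()
      try:
          with urllib.request.urlopen(CHECK_URL, timeout=10) as response:
              elapsed = time.monotonic() - start
              if response.status != 200:
                  print(f"ALERT: {CHECK_URL} returned HTTP {response.status}")
              elif elapsed > LATENCY_BUDGET_SECONDS:
                  print(f"WARN: {CHECK_URL} took {elapsed:.1f}s "
                        f"(budget {LATENCY_BUDGET_SECONDS}s)")
      except Exception as exc:
          print(f"ALERT: {CHECK_URL} unreachable: {exc}")

  if __name__ == "__main__":
      check_once()  # in practice, run on a schedule and route alerts to chat or paging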

Systems Too Complex to Write Tests For

A large, integrated solution went live. Things seemed okay, but then the defects started rolling in. The architecture diagram is a tangled web of multiple systems doing complex things. It can seem daunting to write tests for complex systems. As discussed above, the deeper you investigate, the more tests you can find to write, and you still may not be testing the most important areas. From a test perspective, teams often see success by focusing on the constituent parts first and slowly building out to broader scenarios. This lets the testers learn how the system works while testing it.

However, tools and approaches are available that let us check our work before development even begins. Having a skilled analyst or developer use a model checking tool like TLA+, or using logic modeling tooling and exercises such as Critical Logic’s Model-Based Testing with IQM Studio (which also generates tests to check the implementation), can help uncover ambiguities and conflicts. (Disclosure: I used to work for Critical Logic. They have not paid me for this plug, but I had such success with their tooling and approach that I still recommend them.) These approaches aim to uncover inconsistencies and resolve errors in system logic before implementation, minimizing issues that become harder to catch later.
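
You don’t need a full specification language to get a taste of the idea. The Python sketch below brute-forces every action sequence through a toy order-workflow model and checks a single invariant before any implementation exists; the states, actions, and invariant are invented for illustration, and tools like TLA+ do this with far more power and rigor.

  from itertools import product

  # Toy workflow model: legal transitions and the actions we can attempt.
  TRANSITIONS = {
      ("new", "approve"): "approved",
      ("new", "cancel"): "cancelled",
      ("approved", "ship"): "shipped",
      ("approved", "cancel"): "cancelled",
  }
  ACTIONS = ["approve", "cancel", "ship"]

  def explore(max_depth: int = 5) -> None:
      violations = 0
      for sequence in product(ACTIONS, repeat=max_depth):
          state, was_cancelled = "new", False
          for action in sequence:
              state = TRANSITIONS.get((state, action), state)  # illegal actions are no-ops
              was_cancelled = was_cancelled or state == "cancelled"
              if was_cancelled and state == "shipped":
                  violations += 1  # invariant: a cancelled order must never ship
                  print(f"Violation found: {sequence}")
                  break
      print(f"Checked {len(ACTIONS) ** max_depth} sequences; {violations} violation(s)")

  if __name__ == "__main__":
      explore()

Even this toy exercise tends to raise useful questions (for example, whether cancelling after shipment should be legal) before a line of production code is written.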

Wrapping Up

If there are any common root themes here, they’re probably along these lines:

  1. Check your work early and often using appropriate strategies, not just with traditional testing.
  2. Make sure any testing you do follows a well-thought-out approach; don’t just throw as many tests at something as possible and assume that it will always work.
  3. Look for opportunities to improve the team and the entire code base. As the old saying goes, “you can’t inspect quality into the product.”

Tests will still play a part in the delivery process, but these points can help shift some of the burden toward avoiding issues instead of always trying to catch them.

Want More?

Check out Microsoft MVP Kevin Bost’s article, A Guide to Practical Unit Testing – Shift Left

What other methods have you used or seen that can help the team proactively avoid issues? How do you handle cases where bugs can feed back into improving the team? Let me know in the comments below!
