That which can’t be tested doesn’t exist… or must it?

That which can't be tested...

When I first read Martin Fowler’s Refactoring: Improving the Design of Existing Code, I stumbled on the phrase “That which can’t be tested doesn’t exist.”

In the years that followed, it has been something of a mantra to me, though I’ve found many things that made testing simply impractical.

What I want to do in this post is flip my earlier quote ON ITS HEAD!

Here’s a new category for ya!

In the category of not-really-a-corollary-but-it-makes-your-head-spin-to-think-of-it (dare me to add THAT to my blog categories!), Martin Weigel wrote in Restating The Wearingly Obvious, “You can’t test what doesn’t exist.”

That is flippin’ AWESOME! What he calls “wearingly obvious” he describes in much more detail as common-sense thinking that we have ignored (in his case, in the advertising industry) for over 30 years!

Here are a couple of key quotes and ideas from his writing that I see as relevant to the software industry:

  1. The idea that we can test “rough ideas” allows people to “abdicate their personal judgement and taste”, in other words, to “pass the buck” – what happens to the agile project without Metaphor or increasingly refined requirements?
  2. “Until you actually have finished work you are not testing creative solutions” – Hmm… sounds familiar (but not the same)?
  3. “Incomplete ideas evoke rough, incomplete responses from people” – even though he is writing about advertising, isn’t it important that what we test in agile is “true”, and that the reliability of our tests is related to some notion of completeness in the scenarios we anticipate?
  4. “Any research is therefore as authoritative and as fallible as the individual judgements and opinions of those involved in the [advertising] process.” Amen to that!

From Martin's post: did he identify an Archimedles?

Software development is creative work

And then he ends with a couple of really great sentences:

“There is no methodology in the world that can remove the element of uncertainty from the development of creative work. However much some might wish that to be the case.

“All creative decisions ultimately involve a leap of faith. And that requires more than simple courage. It requires people with taste and vision. People with the imagination and sensitivity to think ahead and be able to picture what has not yet been created.

“If you are unable to grasp these simple truths, if you slavishly believe rough responses to rough work to be The Truth, and treat them as directives for creative development, if you are uncomfortable with the notion of taking a leap of faith… then you are not one of those people. And you are very probably in fact, in the way.”

Wow! I hated quoting so much of his material here, but my challenge to you is to look beyond the advertising domain he is writing about and think about the implications for software development.

So how does this trigger you? Let me know with a comment.



13 Responses to That which can’t be tested doesn’t exist… or must it?

  1. bfmooz says:

    This automatically reminds me of “Begin with the end in mind”. In my experience, starting with a real vision is one of the biggest challenges we face. We can only test what we know, but a lot of times we don’t completely know what to test.

    • ken says:

      Another line of thinking is what makes a vision “real”? Sometimes we can scratch out ideas, and swear they are the greatest ideas… and if only we had acted on them ten years ago they would have been huge.

      The capacity to make an idea into something we can work on gives it real properties… for the purposes of this post, that suggests the idea can be “tested”. More pragmatically, we can debate its merits, refine our understanding, make decisions and recurrently mold it into the coordinated action needed to produce its outcomes.

      The further along we go, the more “testable” it is. Meanwhile, the more specific it is, the more fixed, permanent and objective we tend to see it… and the less debatable it seems to us. We begin to test against what must be the case and we lose some capacity to test what might otherwise be.

      And, by the way, neither situation is “good” or “bad”.

  2. bfmooz says:

    Agreed. I think the approach is quite good. I think the real key is to get people who do not understand the grand scheme of vision to understand vision. Lots of times the customer will focus on the minutiae: “I want this app to do this one thing.” They believe they have a vision, but as we start peeling back the layers we find that 90% of the rest of the customer base uses it in a different way or for a different purpose. This is the value we can add: to push our customers to what can be and not necessarily what is. This way, when we start narrowing the testable boundaries, it’s real and reflective of how the business as a whole wants to use this new thing.

    I do completely agree that neither situation is good or bad, but I do also say that in many situations starting at the details can be detrimental if not dangerous.

    • ken says:

      Ah… therein lies the rub. We really can’t get anyone to understand vision. Everybody has their own interpretation. What I can say as clear as day (from my perspective) may be confusing as hell to you.

      I mean, you didn’t ask me what an “Archimedles” is (look at the caption of the second image)… and some who read this might have thought I spelled it wrong… which still wouldn’t make sense to them, right? Could Martin have really discovered an “Archimedes”?

      What we work to do through reciprocation, recurrence and recursion (like blogging and commenting) is to refine our individual interpretations of meaning to the point we assess that we “share vision”… but we never really can. (Because we don’t share our brains, whether or not Laura thinks so. You and I know the truth!)

      Knowing that we cannot teach or truly “share” vision produces an obligation on leaders (those with positional authority, like our bosses, and those with knowledge, like the guy who administers the network) to bear additional cost in clarifying distinctions, making powerful requests, asserting that we hear things accurately, and making declarations that we have fulfilled promises… so that we can avoid the negative consequences of poor communication.

      • bfmooz says:

        Ooh…interesting. I hadn’t thought about the ownership aspect. I do like that angle.

        • ken says:

          Sure… and it’s not just convenient, either… the marketplace is indifferent to whether or not we choose to accept that everyone has their own interpretation. When business is dissatisfied with IT production, for example, they are not “wrong”. And when we fail to deliver against what we might consider an unrealistic expectation, there may be no “bad guy”, but at least one (if not two) dissatisfied parties nevertheless.

          Wherever we want to partner in an effort, we must accept a HUGE obligation to communicate… it is so much more than what people commonly think of when they say they want a “shared vision” or a “consensus”.

  3. Pingback: The immortal words of Socrates: I drank what? « kenfaw.com

  4. Steven Gordon says:

    I am particularly wary of corollaries that involve some concept of completeness for software (#2 and #3).

    The soft nature of software means it is never truly complete (except when it has died: when nobody is asking for any additional functionality, your software is dead).

    Agile and other iterative approaches take advantage of this nature of software by implementing user stories (small extensions to the functionality of the software in small time boxes). These individual user stories are usually not even complete features, but rather slices of features that allow us to get concrete feedback with short latency times and also allow the product owner to continuously steer a project using the latest information (user feedback, market conditions, development progress, etc.).

    These small increments of working software functionality are well enough defined to have acceptance criteria that can be tested (not just manually, but in an automated fashion), yet individually most are not complete functionality in any sense.

    Let’s not throw away the ability to implement small, incomplete pieces of software functionality. Just because they are incomplete does not mean they cannot be tested. They do exist, and implementing and testing them is an excellent way to grow software.
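
    To make that concrete, here is a minimal sketch in Python (the story and all of the names are invented): a thin slice of a hypothetical “filter orders by status” story, small enough to fit a time box, yet defined well enough to carry automated acceptance tests.

    ```python
    import unittest

    # Hypothetical slice of a "filter orders by status" story: just the
    # filtering rule, without the UI or persistence a full feature would add.
    def filter_orders(orders, status):
        """Return only the orders whose status matches."""
        return [o for o in orders if o["status"] == status]

    class FilterOrdersAcceptanceTest(unittest.TestCase):
        # Acceptance criterion: only matching orders come back.
        def test_returns_only_matching_orders(self):
            orders = [
                {"id": 1, "status": "open"},
                {"id": 2, "status": "shipped"},
                {"id": 3, "status": "open"},
            ]
            result = filter_orders(orders, "open")
            self.assertEqual([1, 3], [o["id"] for o in result])

        # Acceptance criterion: no matches, no results.
        def test_empty_when_nothing_matches(self):
            self.assertEqual([], filter_orders([], "open"))

    if __name__ == "__main__":
        unittest.main()
    ```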

    Steven Gordon

    • ken says:

      Steven, thanks for your comment. I agree with you completely about the concept of completeness and often have to work with customers on what is “done enough” to begin producing value in the hands of the user community. Meanwhile, even the small slices that we complete have to fit into some concrete purposes of our clients. They are as complete as they need to be, and no complete-er… and because they can be assessed as valuable, they can be tested.

      From the advertising point of view, Martin was asking more of the question of where the boundaries of the most creative vision-building meet up with the point at which we can really test an idea. These are the “real properties” that we actually require in software to write tests in the first place. As long as an idea remains short of those real properties, we could say it is just an idea… one that we might refine and debate and develop, but not one that we could really implement in software until its real properties take better shape.

    • ken says:

      Let me expand a little more on #2, because I speculate it is the one that triggered you the most. If you read into Martin’s post, he was suggesting that the notion of testing an advertising solution was really testing the thinking processes that would lead to the solution. In the domain of advertising, testing the solution means testing actual responses to advertising.

      What Martin suggested was that during creative processes in advertising, what you are testing is stimulus material… the inputs to developing the advertising program, not the program itself. Thus, he points out that we cannot test what does not exist (the program)… we are testing what goes into the program, until the program has already occurred.

      For us, I am suggesting this could be like the scenario where all of our testing of the software is successful, but the software does not actually form a “solution” in the market. Knowing what we can really assert about what we have tested leaves that possibility open.

      I apologize that I left that big hole open without this additional explanation. It made so much sense to me at the time, but I can see where I did not pull enough context of Martin’s article into my writing to make the connection.

  5. My variation is “If it is not tested, it doesn’t exist”. And what doesn’t exist will be removed from the source code control system.

    This is why you want to keep an eye on code coverage. After a heavy dose of refactoring, it is not uncommon to find classes with 0% coverage, or classes that are only used by unit tests. When I find them, I simply remove them… if nothing breaks, I just saved time and money by eliminating dead code.
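
    For example, here is a rough sketch of that habit, assuming coverage.py (its "coverage json" command writes a per-file summary to coverage.json); any coverage tool with a machine-readable report supports the same trick:

    ```python
    import json

    # Flag files with 0% coverage as dead-code candidates. Delete them,
    # re-run the full suite, and if nothing breaks they were dead weight.
    with open("coverage.json") as f:
        report = json.load(f)

    for path, data in sorted(report["files"].items()):
        if data["summary"]["percent_covered"] == 0:
            print("dead-code candidate:", path)
    ```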

    In some cases people have been so uncomfortable with the idea of removing unused “working” code that I have designated a graveyard area where unused code is moved. How many times have I resurrected code from the graveyard? Never.

    • ken says:

      Codermalfunctionerror, thanks for your comment.

      I accept your variation, and I think it is very powerful… though I had wanted to steer a little to the side of the code coverage conversation, which I have unfortunately seen used inappropriately in some environments.

      Back in 2001, my team was clocked at 6-sigma by an MIT consultant for an automated call center implementation in which we had 100% code coverage and a fast and very exhaustive automated suite of tests… it was very cool. Meanwhile, we weren’t targeting any level of code coverage at all.

      A few years later, I had the opportunity to work in a different environment with two people who had been associated with that project… and they had a standard of 80% code coverage they required of all teams. What I found on entering that environment was how much developers gamed the system to get the code coverage required for deployment… TDD was dead and the value of the test harness was questionable at best.

      Today, Salesforce.com requires at least 75% code coverage in custom logic prior to production deployment… and it is amazing the patterns I have seen in code. Developers (even those of Salesforce.com Labs) exercise the lines of code without performing assertions, test lines are commented out and left in the code base, and huge test methods that cover several scenarios at once demonstrate that the tests were written to test the code after it was written (rather than testing requirements).
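
      To make the anti-pattern concrete, here is a sketch in plain Python rather than Apex (the code under test and all of the names are invented): the first test exercises every line and inflates coverage without checking anything, while the second pins down one scenario with a real assertion.

      ```python
      import unittest

      # Hypothetical code under test.
      def apply_discount(price, percent):
          """Return the price reduced by the given percentage."""
          return price - (price * percent / 100.0)

      class CoverageGamingTest(unittest.TestCase):
          # Anti-pattern: every line runs, coverage climbs, nothing is
          # verified. A broken apply_discount would still pass this "test".
          def test_exercises_lines_only(self):
              apply_discount(100.0, 10)
              apply_discount(0.0, 50)

      class MeaningfulTest(unittest.TestCase):
          # One scenario, one real assertion: this fails when the logic does.
          def test_ten_percent_off(self):
              self.assertAlmostEqual(90.0, apply_discount(100.0, 10))

      if __name__ == "__main__":
          unittest.main()
      ```

      Both tests produce identical coverage numbers; only one of them tests anything.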

      I would call those testing anti-patterns, as would many people I know, and yet they are a byproduct of people focusing on code coverage… which I see as the natural outcome of good tests, not a target to reach by making your tests meaningless.

      So from your comment it sounds like you enforce powerful practices that allow code coverage to signal that something is wrong and needs to be explored. I love deleting untested code… and I also routinely delete people’s commented-out code (and encourage others to as well). As long as you make sure the philosophy behind code coverage is understood by your team, it really is a wonderful signal.

      –k

  6. I’ll put it bluntly: amateurs play games with coverage numbers. Professionals use coverage to verify that the tests do what they are supposed to do: passing tests are supposed to provide a reasonable degree of confidence that code works as designed.

    And if you have ever experienced the horror of a minor recoverable error snowballing into an unrecoverable error because of a defect in error processing, you will pay extra attention to catch clauses.

    For me coverage is also a design tool: if I cannot attain high code coverage, my design is likely to be faulty. Everything should be testable.
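
    For example, here is a minimal sketch (the names are invented) of both points at once: because the loader dependency is injected, the failure path is reachable from a test, so we can force the error and assert that the catch clause actually recovers. The testable design and the meaningful coverage arrive together.

    ```python
    import unittest

    # Hypothetical error processing: a defect in this except clause is
    # exactly what turns a minor failure into an unrecoverable one.
    def read_config(loader):
        """Return configuration from loader(), falling back to safe defaults."""
        try:
            return loader()
        except OSError:
            return {"retries": 3}  # documented safe default

    class CatchClauseTest(unittest.TestCase):
        # Drive the except clause on purpose and verify the recovery.
        def test_falls_back_to_defaults_when_loader_fails(self):
            def broken_loader():
                raise OSError("disk unavailable")
            self.assertEqual({"retries": 3}, read_config(broken_loader))

    if __name__ == "__main__":
        unittest.main()
    ```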
