The “real” costs of developing using agile methods

What is "money", really?

Have you considered what goes into all the agile disciplines we love and espouse?

Though I missed last Monday’s post due to Independence Day, it’s time to come back to the economics thread.

If you want to catch up, these are the first two posts:

  1. Mankiw’s “10 Principles of Economics” and information technology
  2. Principles of IT and software à la Mankiw (which includes the first principle, “People face trade-offs”)

The second of Greg Mankiw’s fundamental economic principles is:

The cost of something is what you give up to get it.

Costs, expenses and money in software development

First, my simple distinction between expenses and costs is that expenses deal with consumption and costs deal with investment.

When we bear expenses, they are for one-time use of a resource. When we accept costs, they give us access to assets.

So from a financial accounting perspective, we refer to the “cost of goods sold” (which generates gross profit) and to “general and administrative expense” (which keeps the lights on and the bills paid).

Now, we most often think of expenses and costs in terms of “money”, or more appropriately, “currency”. Currency is what we typically use in transactions, but we can use other things as well. So “money” in economic terms refers not only to currency, but to any capacity we have to transact.

So we can speak of “costs” in terms of time, money (including currency), energy and other lost opportunities. As an investment, we would want to trade those costs for something we decide is worth it.

Have you ever gone into a marathon meeting with a dozen people and wondered how much the meeting was costing for those endless hours of indecision?

What you give up to get it

I use the term “budget” during the planning games of my projects, to refer to how much effort we plan to expend in a given release or iteration, based on a rolling team velocity and the duration of the coming iteration. The budget sets constraints on the “costs” we can bear in terms of the effort available to the iteration.
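The budget calculation can be sketched roughly as follows; the three-iteration rolling window and the story-point figures are illustrative assumptions, not anything specified in the post:

```python
# A minimal sketch of the "budget" idea: effort capacity for the coming
# iteration is estimated from a rolling average of recent team velocity.
# Window size and story-point numbers here are assumptions for illustration.

def iteration_budget(recent_velocities, window=3):
    """Estimate the next iteration's effort budget (in story points)
    from a rolling average of the last `window` iterations."""
    window = min(window, len(recent_velocities))
    recent = recent_velocities[-window:]
    return sum(recent) / len(recent)

# Suppose the last four iterations completed 18, 22, 20, and 24 points:
budget = iteration_budget([18, 22, 20, 24])
print(budget)  # rolling 3-iteration average: 22.0
```

Whatever the exact formula, the point stands: the budget is a hard constraint, and every feature admitted into the iteration spends part of it.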

We work with our customers in planning what we can do during that time interval, and trade-offs (principle #1) and costs (principle #2) confront us almost immediately. We can’t add hours to a day or days to a week, yet we must get real work done in the time we have.

We have limited resources at our disposal, and a choice to include one feature in the iteration involves a choice NOT to include some other feature. We consume all the time, energy and money required to complete the first feature; those resources are no longer available for working on another feature at the same time. The first feature had better be worth it.

During development, a choice to diverge from plan due to a critical shift in priorities incurs the cost of re-coordinating perhaps the whole team… and that cost to the project is not zero.

The cost of multiple, rapid context switches by developers is also real. If a team cannot maintain direction and focus, or if a customer cannot set priorities or produce consistent, reliable requirements, there is a real cost… and it had better be worth it.

What other costs can you think of in the process of writing software? What opportunity costs come to mind? Do you think they are always accounted for? And what about agile principles and the cost of following (or not following) them?


About ken
Creative insights, passion and technical adrenaline - strategist, agile coach and marketer, providing a good life for wife of 20 years & 2 awesome teenagers!

2 Responses to The “real” costs of developing using agile methods

  1. The shorter the sprint, the higher the relative cost of sprint ceremonies.

    It has been my (curious) observation that sprint ceremonies (planning, grooming, tasking, retro etc.) appear to take almost the same amount of absolute time regardless of sprint length.

    So 1 day of planning activity would take 20% of a 1-week sprint but only 10% of a 2-week sprint. Most people seem to adore short sprints but making sprints too short can be counterproductive.
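The commenter's arithmetic can be sketched in a couple of lines; the one-day ceremony figure and the sprint lengths are just the illustrative numbers from the comment:

```python
# A small sketch of the observation above: if ceremonies take a roughly
# fixed absolute amount of time, their relative cost grows as sprints shrink.

def ceremony_overhead(ceremony_days, sprint_days):
    """Fraction of a sprint consumed by fixed-duration ceremonies."""
    return ceremony_days / sprint_days

print(ceremony_overhead(1, 5))   # 1-week sprint:  0.2 (20%)
print(ceremony_overhead(1, 10))  # 2-week sprint:  0.1 (10%)
```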

    • ken says:

      Codermalfunctionerror, that’s a great observation.

      I have been in some environments that adopt a 1-week sprint standard, and I find there is so much overhead and thrashing… perhaps they feel effective in that environment, but I prefer a longer sprint.

      To me, one week is also a short time to take on any but the most decomposed and simple tasks. When you throw in planning for the next sprint, you lose what little time you have. It is like the context switching that happens in single-CPU, multi-threaded environments: the shorter the quantum, the greater the relative overhead of context switching.

      …oh, and humans don’t switch contexts as cleanly as computers, which don’t appear to be as easily distracted or politically sensitive.

      In the context of this post, the cost of context switching could be worth it in a very volatile situation, perhaps: if things are really changing that fast (or if some tolerance has to be very tight), you may need to steer the ship in such small increments. But I haven’t been on a project that couldn’t afford two weeks for an iteration length.

      Thanks for jumping in.
