In ÆLOGICA-flavored Agile development, we approach the problem of project cost by starting with a project budget and working backwards. We adjust scope and other parameters to develop a working system or deliver desired functionality within budgetary constraints. This stands in contrast to the traditional error-prone model of detailed up-front cost estimation.
With internal development, we wanted to understand and analyze the cost of different features. Rather than forward-looking estimation, this requires backward-looking cost accounting. We wanted to know, “What did this feature actually cost us?” or “How much did we spend on this module of the system?” Even if we did detailed up-front estimation, this type of retrospective budget-vs-actual analysis would be desirable. It is, however, rarely performed because the record keeping is almost always inadequate to the task.
We do not currently associate timekeeping records with individual features. We do not possess the granularity of data that would make answering these questions straightforward. An additional caveat is that the team devoted to these internal projects varies from week to week and month to month with changes to our staffing and client-project load. However, we do have a measure of relative complexity in the form of point-scores of different features, and we do know the time logged.
We devised a method of calculating an imputed cost of a feature using 4 variables:
- Date range (Iteration #)
- Total time logged for that Iteration
- Feature ID #
- Point score for that feature
In our case, the feature ID #, point score, and iteration start and end dates were logged in a report from Pivotal Tracker, while the dates and hours were available only in a report from Harvest, the tool we use to log time.
The first hurdle was determining a common variable to link the 2 reports together. Because we don’t log our time by the feature – as that would surely disrupt flow – we instead summed the hours for that week and used the date range to determine which features were completed that week and their corresponding point scores. We don’t have enough information to tell us how much time was spent on a specific feature but we know the total hours spent on that project for that week and which features were accepted that week.
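The linking step above can be sketched in a few lines of Python. The data shapes and dates here are illustrative stand-ins for the two exported reports, not the actual exports; the ISO week number serves as the common key joining them.

```python
from datetime import date

# Hypothetical rows from the Harvest export: one (entry date, hours) per time entry.
harvest_entries = [
    (date(2011, 3, 7), 6.5),
    (date(2011, 3, 9), 8.0),
    (date(2011, 3, 16), 7.0),
]

# Hypothetical rows from the Pivotal Tracker export:
# (feature ID #, point score, acceptance date).
tracker_features = [
    (1001, 2, date(2011, 3, 9)),
    (1002, 3, date(2011, 3, 10)),
    (1003, 1, date(2011, 3, 16)),
]

def week_of(d):
    """The ISO (year, week) pair is the common variable linking the two reports."""
    iso = d.isocalendar()
    return (iso[0], iso[1])

# Total hours logged per week, summed from the time report.
hours_by_week = {}
for d, hours in harvest_entries:
    hours_by_week[week_of(d)] = hours_by_week.get(week_of(d), 0.0) + hours

# Features accepted per week, with their point scores, from the tracker report.
features_by_week = {}
for fid, points, accepted in tracker_features:
    features_by_week.setdefault(week_of(accepted), []).append((fid, points))
```

With both dictionaries keyed by week, each week's total hours sits alongside the features accepted in that same week, which is exactly the pairing the worksheet needs.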
With the feature ID #, total hours for the week, and point score for each feature in one worksheet, I calculated the imputed hours: each feature's share of the week's total hours, allocated in proportion to its point score.
The imputed cost of a feature is just the imputed hours multiplied by an average cost per hour.
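The allocation and costing steps can be sketched as follows. The hours, point values, and hourly rate are illustrative, and the proportional-by-points split reflects the linear point-to-time assumption discussed below.

```python
# Illustrative inputs for one iteration week: total hours from the time
# report, and (feature ID #, points) pairs for features accepted that week.
total_hours = 40.0
accepted = [(1001, 2), (1002, 3), (1003, 5)]
avg_cost_per_hour = 100.0  # illustrative blended rate, not a real figure

total_points = sum(points for _, points in accepted)

# Imputed hours: each feature gets a share of the week's hours
# proportional to its point score.
imputed_hours = {fid: total_hours * points / total_points
                 for fid, points in accepted}

# Imputed cost: imputed hours multiplied by the average cost per hour.
imputed_cost = {fid: hours * avg_cost_per_hour
                for fid, hours in imputed_hours.items()}
```

With these numbers, the 5-point feature absorbs half the week's hours (20.0) and half the week's cost, while the 2-point feature absorbs a fifth. Note that the hours-per-point ratio is recomputed per iteration, not carried across the project.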
Certainly, we make a fairly large assumption that each point linearly translates into time and that all developers who worked on the project average out in terms of cost and productivity. Mis-estimation of the point score is common for individual stories but averages out in the aggregate. This technique does not handle features spanning multiple iterations where the bulk of the work may have been done in one week yet acceptance followed in a later iteration. Importantly, while we essentially translate points into time, this is only meaningful within each iteration and we do not attempt to arrive at a cost-per-point figure that could be applied to the whole project.
Despite these deficiencies, we argue that in the aggregate the technique will yield useful insights. By summing up individual feature costs by related functional area, the aggregate data will paint a more accurate picture of where time and money were spent.
The deficiencies point the way toward more sophisticated techniques utilizing a database and analytical programming that could yield even better data without imposing additional bookkeeping overhead on developer staff.
Stay tuned for further results and read more about our ÆLOGICA-Flavored-Agile.