
Version 48 (modified by karsten, 6 years ago)


The Agile/Scrum Process

The goal of the Agile process is to provide a way to measure work output, to schedule resources quickly and in response to changing requirements and conditions, and to aid in forecasting how much development capacity is available to complete a set of tasks. There are many variants of Agile, the most popular of which is Scrum.

In Scrum, development is broken down into Iterations and days, and work is performed on tickets. The set of tickets to be done in an Iteration is determined at the iteration planning meeting at the start of that Iteration. Any new tickets that must be added to the iteration after the iteration meeting are called "Fires". Each day, each developer reports which tickets they worked on yesterday, which tickets they plan to do today, and then lists any roadblocks or issues they may have.

The amount of work involved in each ticket is measured in units called "Points". Points are preferred over "Hours" because estimating in Hours ends up causing all manner of ego games and pushback in terms of workload, effort, work rate, and overtime, and can even lead to billing disputes with funders.

Despite each developer having different areas of expertise, with practice it should still be possible to estimate Points for other people's project tickets. If we wish to do this, there is a distributed consensus process for assigning Points to tickets. It is called Planning Poker. We are not yet using this process.

The Tor Agile Process

The Tor Agile Process is not much different from Scrum, except that we do not hold any in-person meetings. The three questions of the daily Scrum meeting can be answered over email, and the Iteration Planning meeting can occur over IRC.

Because Tor has multiple projects that have different developers, we will have multiple iterations running concurrently. It is up to the individual developer to decide how to schedule their time among various projects each iteration. In order to do this, we recommend also tracking your own personal output as if it were its own iteration, so that you have an idea of the total amount of development output you are able to contribute to each project.

Benefits of this Process

Agile seeks to balance the needs of the funder/client/users with the needs of the development team. The needs of the funder, client, and users can often be incompletely specified, subject to change, or subject to change in priority. New funders can appear and demand new things. New, serious issues can also arise without warning and must be dealt with immediately.

The development team, by contrast, requires relative stability of focus and clear direction, yet needs to be able to respond quickly to inevitable serious issues, and yet still must accommodate the wishes of the funders/clients. The development team also has finite resources, and understanding the nature of these resources and how they get consumed, get impeded, and are increased is key to meeting the needs of funders, clients, and users in a predictable, desirable time frame.

The core ideas of the Scrum system exist to provide ways to protect developers from undue distraction and extreme changes in direction; to set a predictable, steady, sustainable pace for development; to enable managers to communicate progress to clients/users; and to quickly address distractions and barriers to development progress. Scrum seeks to do this through both process and metrics.

The process includes the weekly Iteration meetings and the daily Scrum emails. They are supposed to impose the bare minimum of distraction and overhead time in exchange for the maximum ability to address issues as soon as they come up. It is an open question exactly what format these meetings must take; IRC and/or email likely suffice just fine for both.

The primary output metric of Agile is the "Burndown Rate" or Velocity. This is the number of Points you can complete in an Iteration. It is often represented as a chart with an ideal downward-sloping line overlaid with the actual progress in terms of remaining open total Points. Because we encourage measuring both project and individual iteration progress, both products and people have a "Burndown Rate" during the course of an Iteration.

A secondary metric of Agile is the rate of "Fires", or tickets created during an Iteration that must be dealt with immediately. Both people and projects will have some trending rate of Fires. A project with too many Fires is a sign of something in need of redesign. A person with too many Fires is a sign that they are being spread too thin.

A third metric of Agile is the rate of opened, valid tickets against a project or a person. If the amount of new ticket Points opened against a person is greater than their "Burndown Rate", that person is potentially a bottleneck and needs help. If the amount of new ticket Points opened against a project in an Iteration is greater than the "Burndown Rate" of that project, that project either needs manpower, or needs better triage and control of new ticket requests.
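As a sketch of this bottleneck check (all numbers here are hypothetical, not taken from any real report):

```python
# Hypothetical figures for one developer's Iteration.
velocity = 20        # "Burndown Rate": Points this person completed last Iteration
points_opened = 26   # estimated Points of new, valid tickets opened against them

# If more Point-work arrives than can be burned down, the person is a
# potential bottleneck and needs help.  For a project, the same check
# suggests it needs manpower or better triage of new ticket requests.
if points_opened > velocity:
    deficit = points_opened - velocity
    print("potential bottleneck: deficit of", deficit, "Points")
```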

Reading a Tor Agile Report

For an example report on a person's Iteration, see #2591. For an example report on a product's Iteration, see #2606. As you can see, there's really not much difference between the two, and the same tickets can appear in both, due to our use of tagging with multiple trac Keywords.

Agile Reports are primarily written using Trac's Query support, but there is a limit to what can be done in a trac query, so some manual data entry is required. The key things to extract from reading the report are:

  • The Goals
  • The Fires
  • The Metrics

The Goals are what the person or project set out to do during that Iteration. These were taken either from the backlog of open tickets against that person or project, or from one of the Sponsor Roadmaps.

Trac will mark these with a strike-through when they actually get closed. The estimated Points and Actual Points will be present for completed tasks; these represent the estimated and actual work done.

The Fires list is similar, but it represents issues that came up during the Iteration that had to be dealt with immediately rather than being scheduled for a future Iteration.

The list of Opened tickets comes next. At this point, it is just a trac wart: it shows only the most recently opened tickets, so it is not useful to look at after the report is completed. Trac 0.12 will allow us to specify a date range here, so that this list stays fixed. Until then, the only reason it is present is to help compute some of the fields in the Metrics section.

The Metrics section is the meat of the report.

Its first three lines list the total estimated Goal Points at the beginning of the iteration, how many of these estimated Goal Points were done this iteration, and the total Actual Points for the Goal tickets completed this iteration (the Goal Points Done). These three metrics show you how good you are at estimating your "Burndown" Velocity, which is the primary purpose of Agile. Comparing the Goal Points Done to the Actual Goal Points Done also tells you how good you are at estimating the Points of your tickets, on average.

The next two lines start with Fires Done, the Actual Points of all completed Fire tickets. This is the secondary Fire rate metric mentioned earlier, and can provide clues about ageing code in need of redesign, as well as overcommitment. The Actual Points Done line just below it lists the sum of the Fires Done and the Actual Goal Points Done.

The Points Opened metric is the total estimated Points values assigned to all reasonable, desirable, completable tickets that were opened during the course of this Iteration. The Forward Progress metric is the total Actual Points Done minus the total estimated Points Opened value. If this value is negative or very low for a person, that person may be committed to too many projects. If this value is low or negative for a project, that project may need more manpower, or better ticket triage.

Trac's Keyword functionality also provides a list of all Iteration Reports that follow this format. The Points field for each report lists the total estimated Points completed in that iteration. The Actual Points field in each report lists the total Actual Points completed in that iteration. These two fields are not displayed by default by trac; you need to select them from the columns checkbox region on that query page.

Using Trac for this Agile Process

There are two Trac projects that help facilitate Agile: Agilo and agile-trac. Agile-trac requires a complete overhaul of the Trac system. Agilo is an unknown.

Instead of completely overhauling Trac, we have opted to try to change how we use it in the minimal amount necessary to support Scrum. We've done this because not everybody wants to do Scrum, and we do not want to force those people to change how they use the bug tracker.

As such, we've added a couple of extra trac fields, created a tagging convention, and have created an "Iteration Report" format, all of which are described below.

Extra Ticket Fields

Each trac ticket now has two extra fields to aid in this process: "Points" and "Actual Points". We also make new use of the existing Keywords field for tagging.

"Points" holds your estimate for the number of Points a ticket will take to complete. You can break down tickets into subtasks, but if the Point total rises above 20, that is a sign that the ticket is likely too big for one iteration, and should actually be split into separate child tickets.

Note that tickets do not have progress meters. This is intentional. If you feel you need to break a ticket down into subtasks to be completed across iterations, you must create child tickets and score those.

You also can only close a ticket once you have completed some unit of work that you can actually demonstrate to someone, especially in the case of code or design proposals. The code should have been reviewed and/or tested before the ticket is closed.

After work is complete, the "Actual Points" field can be used to retroactively record the amount of Points a task actually took. This should really only differ from Points if the amount of work needed was substantially different from what was estimated. Don't nitpick over differences of less than 2 Points.

Once an iteration starts, the Points value on a ticket must not change. This is what Actual Points is for.

The Keywords field is used to tag tickets as part of an Iteration. The format for our tags is the name of the Iteration plus the end date for that iteration. Tickets for Fires have the Iteration name followed by "Fires" followed by the end date.

Each ticket should have a tag for both the project iteration and the developer iteration. Keywords must be separated by a space.
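As a hypothetical example, a Goal ticket in an iteration ending 2011-03-05 for a placeholder project "SomeProject" and placeholder developer "SomeDeveloper" would have a Keywords field like:

```
SomeProjectIteration20110305 SomeDeveloperIteration20110305
```

A Fire ticket in the same iteration would instead carry SomeProjectIterationFires20110305 and SomeDeveloperIterationFires20110305.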

Ticket Ownership

Agile imposes some constraints on how we use trac. Traditionally, we reassign tickets to other people for things like code reviews or branch merging. However, when preparing reports, this ownership change can make it difficult to run a quick trac query on your completed tickets.

Hence, instead of ownership change, new tickets must be created. If a task is so big that two people must work on it for a substantial amount of time, that task should be two tickets.

This will prevent Points from getting mis-assigned.

Ticket Lifecycle

Tickets should be short enough to complete in an iteration, and should not be closed until their work is demonstrated. If, at a later date, an issue needs to be revisited after closing a ticket, a new ticket should be created instead of reopening the closed ticket. If a ticket does not get completed in its designated iteration, remove the iteration tag from its Keyword field.

Iteration Planning and Reporting

Each Agile Iteration should be tracked in a ticket under the component Agile with the Keyword IterationReport. There are two types of Iteration Reports: Project Reports and Personal Reports. The report formats are almost identical. #2591 is an example of a personal report, and #2606 is an example of a project report.

We've created a handy Trac report called the Iteration Backlog which should list your open trac tickets for you, sorted by priority. Ideally, your Goals should come off the top of this backlog, but they do not have to. You can also take them from a Sponsor Roadmap or off the top of your head.

For each project you are on, select the tickets you want, and tag them with two Keyword tags: One for yourself, and one for that project.

Then, for each Keyword tag you have created for yourself and your projects, open a ticket in component Agile with its Keyword field set to IterationReport, and use the following format for the description:

= Goals =
[[TicketQuery(keywords=~SomeProjectIteration20110305,format=table,col=component|summary|points|actualpoints,order=component)]]

= Fires =
[[TicketQuery(keywords=~SomeProjectIterationFires20110305,format=table,col=component|summary|points|actualpoints,order=component)]]

= Opened =
[[TicketQuery(max=20,owner=your_username,keywords!~=IterationReport,format=table,col=component|summary|points|actualpoints|created,order=id,desc=true)]]

= Metrics =
Goal Points: P
Goal Points Done: P
Actual Goal Points Done: P

Fires Done: P
Actual Points Done: P

Points Opened: P
Forward Progress: P

Hopefully the above Trac Queries are self-explanatory. Simply replace SomeProject with your name or project, and your_username with your username.

The Opened query is a little hackish because we do not yet have trac 0.12, which will allow us to cleanly specify the exact dates to use. Instead, you must manually tweak the max field there to cut off the display at the first ticket that was opened since the last Iteration period. If this is a personal report, use the owner=your_username format. Otherwise, remove the owner piece and add a query for your project's Component. Unfortunately, because trac 0.11 does not allow us to specify date ranges for this query, it will start to accumulate random tickets after the iteration has closed. This means it is extra important to explicitly write the Point totals for this query at the end of your iteration.

The Metrics section is for recording the total work done in the Iteration. Sadly, trac does not have queries to auto-compute these totals, and they must be added by hand from the auto-generated tables.

The first line, Goal Points, lists the total estimated Points of all tickets at the start of the iteration. The next line, Goal Points Done, lists the total estimated Points of all tickets that were closed in this iteration. The third, Actual Goal Points Done, lists the total Actual Points for all completed Goal tickets.

The Fires Done line is the total Actual Points for all closed Fires tickets. Because Fires are unplanned, their estimated Points value is irrelevant, and their Points should always equal their Actual Points. After the Fires comes the total Actual Points Done for all closed tickets this iteration. It should be the sum of the Fires Done line and the Actual Goal Points Done.

Next, Points Opened is the total Points value of all the tickets from the Opened query.

The Forward Progress is the result of subtracting the total Points value of Opened tickets from your Actual Points Done. It is a good metric to check to ensure you aren't being hopelessly buried by bitrot and doomed to hit Infinite Chaos.
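The arithmetic behind the Metrics section can be sketched as follows; all Point values here are hypothetical, and the variable names simply mirror the report's lines (they are not any real Trac field API):

```python
# Hypothetical Point totals for one finished Iteration.
actual_goal_points_done = 25   # Actual Points of all closed Goal tickets
fires_done = 6                 # Actual Points of all closed Fire tickets
points_opened = 12             # total Points of tickets from the Opened query

# Actual Points Done is the sum of Fires Done and Actual Goal Points Done.
actual_points_done = fires_done + actual_goal_points_done

# Forward Progress is Actual Points Done minus Points Opened.  A low or
# negative value suggests overcommitment (for a person) or a need for
# manpower or better triage (for a project).
forward_progress = actual_points_done - points_opened

print("Actual Points Done:", actual_points_done)   # 31
print("Forward Progress:", forward_progress)       # 19
```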

The ticket fields of the Agile report itself should be filled in as follows:

  • "Points" should be filled in with the Goal Points Done metric.
  • "Actual Points" should be filled in with the Actual Points Done metric.

Notes, Errata, etc

I personally believe that the optimal Iteration length is either two weeks or one month.

We also need someone to help break down Sponsor Deliverables into tickets, and make sure that these end up in people's iterations at the proper rate to ensure completion.

phobos: I secretly track projects via an agile method with an estimated velocity. I look at projects in a 1 week timebox, with 4 week iterations. This helps me track how fast we can get work done, what blocks progress, and who is working on what month to month. I generally want to look at everything we do at a program level. I keep track of what's active, what's on queue, and what's in the parking lot.