The Spirit to Improve

This week in our design committee, one of the developers demonstrated how to use the Saxon injectors to create mock XQuery functions for unit testing.

It has been one of the biggest problems with our use of XQuery, and the developer solved it in his spare time. It was a much-needed reminder that the spirit to always improve the software and the development practices is at the core of a good and effective developer and team.
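
I will not try to reproduce his solution here, but as a rough illustration of the idea, the minimal sketch below stubs out a function using Saxon’s s9api integrated extension functions (which may differ from the injector mechanism he demonstrated); the function name, namespace and return value are made up for the example.

    import net.sf.saxon.s9api.*;

    public class StubbedFunctionExample {
        public static void main(String[] args) throws SaxonApiException {
            Processor processor = new Processor(false);

            // Register a stub for a hypothetical lookup function so the query
            // under test never calls the real back end.
            processor.registerExtensionFunction(new ExtensionFunction() {
                public QName getName() {
                    return new QName("http://example.com/test", "lookup-customer");
                }
                public SequenceType getResultType() {
                    return SequenceType.makeSequenceType(ItemType.STRING, OccurrenceIndicator.ONE);
                }
                public SequenceType[] getArgumentTypes() {
                    return new SequenceType[] {
                        SequenceType.makeSequenceType(ItemType.STRING, OccurrenceIndicator.ONE)
                    };
                }
                public XdmValue call(XdmValue[] arguments) {
                    return new XdmAtomicValue("stubbed-customer"); // canned test value
                }
            });

            // The query under test calls the stub instead of the real function.
            XQueryExecutable exec = processor.newXQueryCompiler().compile(
                "declare namespace t = 'http://example.com/test'; t:lookup-customer('42')");
            System.out.println(exec.load().evaluateSingle().getStringValue()); // stubbed-customer
        }
    }

In a unit test the same registration would happen in the test setup, so the query logic can be exercised without its real dependencies.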

These kinds of efforts raise the bar that much higher in terms of the effectiveness and quality of what we produce, and they are the lifeblood of the development effort. They are also a key indicator of a healthy project.

This kind of work also serves to motivate us in the face of project delivery pressures, and it becomes even more important when we feel demotivated by repetitive work and the rush to meet the next deadline.

Encouraging Collaboration via Design Committee

The agile process promotes the view that system design emerges and evolves throughout a system’s life, as a result of continuous discussions and decisions by the teams.

On large projects with multiple teams, the ideal of involving everyone in design discussions becomes very cumbersome. Even if you can organise such sessions, the smallest discussion takes ages to conclude.

The idea of a design committee was proposed by some team members, in particular [Sandro Mancuso]. The committee is made up of representatives from each team who convene on a weekly basis. The format of the meeting is similar to an Open Space, except that the ideas/discussions are posted and voted on a day prior to the meeting. This allows the proposer(s) to come to the meeting prepared. Each member is allowed a single vote before every meeting, and they may vote on the same topic over several weeks.

The team representatives rotate over time and are chosen by their team. It is the responsibility of these representatives to act as spokesmen for their team and to feed back the discussions and decisions held in the meetings.

The committee may also arrange ad hoc sessions, which can be called by any member of the committee.

At the start of the meeting a facilitator is elected who is also responsible for recording the outcomes for that meeting.

Encouraging Collaboration via Review Branches and Pull Requests

I have always advocated the use of a small, well-balanced team for a project rather than many less experienced teams. However, this is not always possible.

Sometimes projects require a very fast ramp-up with a much larger set of teams, where achieving a good balance of talent and skills is not possible and experience is a commodity. Several large projects I have been involved with in the past have had to make this compromise.

Collaboration is the key, especially if the system under development is new and going through constant evolution, although one can argue that healthy systems must always be in a constant state of evolution.

In these cases a good level of collaboration within a team is difficult but achievable. Collaboration across teams is near enough impossible unless we set up explicit processes that encourage such behaviour.

The use of Pull Requests and Review Branches is one such process. Review Branches are essentially very short-lived branches, often lasting only a few hours and never longer than a day. The main purpose of the branch is to develop in the smallest increment that is reasonable for the purposes of a review. The development of a single feature would typically result in a number of these branches. The mechanism for inviting others to review a branch and merge it into master is known as a Pull Request.

The principle behind the Review Branch is that the change is small enough to allow for an effective review in 15 minutes at most. In the first instance, all members of teams other than your own are invited to do the review. Once one or two people have agreed to do the review, they sit with the developer to go through the changes and, after a successful review, merge them to master. In the case of geographically separated teams, desktop-sharing and conferencing facilities should be used. If no one volunteers within 5 or 10 minutes, then you can invite a member of your own team to do the review.

The primary reason is to share knowledge across teams via the review process; quality assurance is a side effect.

We have found that it often helps if you ask for a “review buddy” who will pick up all your pull/review requests for a particular feature. This speeds up the process and stops you from going over the same ground with different reviewers.

Team members should also report in their daily stand-up the number of pull requests they reviewed the previous day, and in the scrum of scrums each team should report the number of pull requests it has reviewed. They should also report the number of pull requests that were left unanswered within the agreed time limit. This allows for rebalancing if certain individuals or teams are taking on most of the reviews while others are reluctant to volunteer. Needless to say, the reviews should be planned in as part of your capacity planning.

A case against splitting User Stories

Often we encounter a story that is too large to fit into a sprint, so we go ahead and try to split it. And since most of us are not User Story ninjas, the divide is often “artificial”: each part is demonstrable with certain caveats, but the real value is delivered only when all the stories in the split are complete.

These artificial splits often result in a backlog that is littered with stories that do not provide coherent functionality on their own. Even worse, we end up with “cut and paste” blocks of story narrative, adding no new information to the backlog at the expense of clutter and complexity. My question is: if the original decision to split them was to have them fit into a sprint, then why don’t we “join” them back together in the following sprints?

How about not splitting the story, but instead documenting the current and future scope for the story? The Product Owner and team agree on what the current scope of the story is for this sprint, and in later sprints more and more of the future scope is brought into the current scope until the story is truly done. Both the current and the future scope can be defined as acceptance criteria. For example, a search story might have exact-match search as its current scope for this sprint, with filtering and pagination recorded as future-scope acceptance criteria to be pulled in later. The idea is that we achieve a demonstrable scope for the current sprint without having to split the story.

The best situation is where we have well-defined stories that fit into a sprint, and if we can maintain this when splitting stories then that’s the place to be. However, if you feel that you are unable to create two distinct stories by splitting one, then this is a possible alternative.

Project Estimates

In my previous post on High Level Project Estimates, I talked about 3-point estimates to help provide an up-front estimate of effort for a project. Aside from the effort estimate, you also need to consider other aspects that the team will spend time on. The following are some aspects I consider, with the kind of percentages (on top of the effort estimate) I have experienced. Note: these percentages are by no means standard or the norm; they are essentially a rough guess based on the projects I have worked on.

  • Planning Meetings @ 10%
  • Demos @ 2%
  • Code Reviews @ 5%
  • Retrospectives @ 5%
  • Dark Matter @ 15%
  • Bugs @ 15%
  • New Technical Stories and Spikes @ 25%
  • Change Requests (shouldn’t average more than 15%)
  • Holidays and Sick Leave @ 5%

You will also need to consider Management Overheads as well as Business Analysts (if included in the team), QA/Functional Testing, Support and Maintenance (if required).
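
As a rough worked example (the base figure here is hypothetical), on a 3-point effort estimate of 100 developer-days the items listed above add:

    10% + 2% + 5% + 5% + 15% + 15% + 25% + 15% + 5% = 97%
    100 days × 1.97 = 197 days

In other words, these activities can come close to doubling the raw effort estimate, even before the management, analysis, QA and support overheads just mentioned are accounted for.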

Defining and Prioritising a Backlog

What is the best way to review a backlog? How do you ensure that it is “complete”? How do you ensure that the prioritisation reflects the business vision and goals?

When first faced with a backlog, you are often overwhelmed by the long list of user stories. The most important step is to set a context for these stories. Are they organised in a hierarchy of “epics”? Such a hierarchy will help set a context, but first we need to understand what these epics mean at the highest level. Do they represent a user’s high-level goals, or are they merely there as a container for some loosely related stories?

When reviewing a backlog for completeness, it is vitally important that the stories are defined in a context. The context can take different forms depending on the nature of the application. For example, if an application has a clear high-level flow that the user journeys along, then the epics may be defined as the activities in this flow, and the user stories can be grouped under each epic, representing the functionality required for that activity. This article by Jeff Patton presents such an approach. However, your application may exhibit a more random usage scenario; in this case, epics representing high-level user goals may provide the best context for the stories. You can also provide references to other artefacts, such as user journeys and wireframes, to further enrich the context. This article by Scott Sehlhorst is an interesting discussion of setting a context for user stories.

This grouping of user stories by context also helps to manage their prioritisation. You can prioritise the stories individually within each epic and then also prioritise the epics. Note that just because one epic has a higher priority does not mean that all of its child stories have a higher priority. You may discover that the first few stories provide enough functionality that further work on that epic is a lower priority than working on another epic.

Approach to Performance Tuning

Performance issues in an application manifest as bottlenecks in one or more of the following four layers:

  • Application: the application is not designed, developed or configured properly.
  • Platform: the platform that the application runs on (e.g. app servers, databases, etc.) is not set up and configured properly.
  • System: the hardware the platform runs on is not sufficient.
  • Network: the network capacity is not sufficient (this is particularly relevant when various parts of the platform are involved in high-bandwidth communication with each other).

The first step in performance tuning is to have an environment set up that is a realistic copy of production. Then we look at exactly what the performance requirements are; these will be expressed in terms of round-trip delay (response time) and throughput (the number of concurrent requests to be handled).
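
When translating between these two kinds of requirement, Little’s Law is a useful sanity check: for a system in steady state, the average number of requests in flight equals the arrival rate multiplied by the average response time. The figures below are purely illustrative.

    concurrent requests ≈ arrival rate × average response time
    e.g. 50 requests/second × 0.2 seconds ≈ 10 requests in flight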

We can then devise some performance tests that exercise the application through typical usage scenarios and execute these tests to collate data that will provide the baseline for subsequent tests.  The data can be collated in the following areas:

  1. Response times and throughput for the over all application
  2. Response times and throughput for various parts of the platform
  3. Response times and throughput for various layers of the application
  4. Resource utilisation both at the platform level (e.g. thread profile, memory profile, socket connections  etc.) and system level (CPU utilisation, Disk IO, Network IO etc.)
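
In practice a dedicated load-testing tool such as JMeter or Gatling would normally drive these scenarios, but the minimal sketch below (with a hypothetical endpoint and load profile) illustrates the first kind of data, overall response times and throughput, being collected for a single scenario.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class SimpleLoadTest {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint and load profile; adjust to your own scenario.
            URI target = URI.create("http://perf-env.example.com/app/search");
            int concurrentUsers = 20;
            int requestsPerUser = 50;

            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(target).GET().build();
            ExecutorService pool = Executors.newFixedThreadPool(concurrentUsers);
            List<Long> latenciesMs = Collections.synchronizedList(new ArrayList<>());

            long testStart = System.nanoTime();
            List<Future<?>> users = new ArrayList<>();
            for (int u = 0; u < concurrentUsers; u++) {
                users.add(pool.submit(() -> {
                    for (int i = 0; i < requestsPerUser; i++) {
                        long start = System.nanoTime();
                        try {
                            client.send(request, HttpResponse.BodyHandlers.discarding());
                        } catch (Exception e) {
                            // A real test should record failures separately.
                        }
                        latenciesMs.add((System.nanoTime() - start) / 1_000_000);
                    }
                }));
            }
            for (Future<?> user : users) user.get();
            pool.shutdown();

            double elapsedSeconds = (System.nanoTime() - testStart) / 1_000_000_000.0;
            List<Long> sorted = new ArrayList<>(latenciesMs);
            Collections.sort(sorted);
            long p95 = sorted.get(Math.max(0, (int) (sorted.size() * 0.95) - 1));
            System.out.printf("Throughput: %.1f requests/second%n", sorted.size() / elapsedSeconds);
            System.out.printf("95th percentile response time: %d ms%n", p95);
        }
    }

Measurements for the platform and application layers, and for resource utilisation, would typically come from server logs, profilers or monitoring agents rather than from the test driver itself.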

Once the data has been collated and represented in graphical form, you can start looking for indications of where bottlenecks may be occurring. Please note that these are simply indicators of performance bottlenecks, meant to highlight areas that will most likely require further investigation and tests (in isolation) to pinpoint the exact problem. Performance tuning is an iterative process with the following steps:

  1. Perform Load Test
  2. Collate and Analyse Test Result Data
  3. Compare with Previous Test Result Data (if not the first iteration)
  4. Analyse impact of changes (if not first iteration)
  5. Check if improvements satisfy the performance requirements
  6. Analyse to identify bottlenecks
  7. Investigate and isolate the problem
  8. Perform remedial action to eliminate bottleneck

Note: you should start with a baseline of results against which you can compare the impact of your changes. You should only tackle one bottleneck at a time, making the minimum necessary changes before performing another test. Once you have made enough improvements to satisfy the performance requirements, you should stop the tuning process.

Selecting the level of detail for your baseline results is also important, since more granular data requires more up-front work. One approach is to start at a higher level and go through the process to see if problems can be identified at that level; if not, create another, more granular baseline and repeat the process. It is obviously very important that your environment and test scenarios remain the same as the baseline throughout the process.