A case against splitting User Stories

Often we encounter a story that is too large to fit into a sprint, so we go ahead and try to split it. And since most of us are not user-story ninjas, the divide is often "artificial" – i.e. each part is demonstrable only with certain caveats, and the real value is delivered when all stories in the split are complete.

These artificial splits often result in a backlog littered with stories that do not provide coherent functionality on their own. Even worse, we end up with "cut and paste" blocks of story narrative – adding no new information to the backlog at the expense of clutter and complexity. My question is: if the original decision to split them was to have them fit in a sprint, then why don't we "join" them in the following sprints?

How about not splitting the story at all, but instead documenting the current and future scope for the story? The Product Owner and team agree on what the current scope of the story is for this sprint, and in later sprints more and more of the future scope is brought into the current scope until the story is truly done. This current and future scope can be defined as acceptance criteria. The idea is that we achieve a demonstrable scope for the current sprint without having to split the story.

The best situation is where we have well-defined stories that fit into a sprint, and if we can maintain this when splitting stories then that's the place to be. However, if you feel that you are unable to create two distinct stories by splitting one, then this is a possible alternative.

Saxon XQuery With Multiple Documents

Saxon is a wonderful API for XML processing. It provides complete support for XPath, XQuery and XSLT, although I'm always baffled by its lack of adoption compared to Xalan and Xerces. Having said that, the online documentation could do with some improvement. The following is a quick example of how you can execute an XQuery that takes multiple XML documents as input.

import java.io.StringReader;

import javax.xml.transform.stream.StreamSource;

import net.sf.saxon.s9api.DocumentBuilder;
import net.sf.saxon.s9api.Processor;
import net.sf.saxon.s9api.QName;
import net.sf.saxon.s9api.SaxonApiException;
import net.sf.saxon.s9api.XQueryCompiler;
import net.sf.saxon.s9api.XQueryEvaluator;
import net.sf.saxon.s9api.XQueryExecutable;
import net.sf.saxon.s9api.XdmNode;
import org.junit.Test;

@Test
public void runXQueryWithMultipleInputDocuments() throws SaxonApiException {
    Processor processor = new Processor(false);

    DocumentBuilder documentBuilder = processor.newDocumentBuilder();
    XdmNode document = documentBuilder.build(
            new StreamSource(new StringReader("<my><document>content</document></my>")));
    XdmNode document2 = documentBuilder.build(
            new StreamSource(new StringReader("<my><document>content2</document></my>")));

    XQueryCompiler xQueryCompiler = processor.newXQueryCompiler();
    XQueryExecutable xQueryExecutable = xQueryCompiler.compile(
            "declare variable $mydoc external; " +
            "declare variable $mydoc2 external; " +
            "<newdoc><doc1>{$mydoc/my/document/text()}</doc1>" +
            "<doc2>{$mydoc2/my/document/text()}</doc2></newdoc>");

    XQueryEvaluator xQueryEvaluator = xQueryExecutable.load();

    QName name = new QName("mydoc");
    xQueryEvaluator.setExternalVariable(name, document);
    QName name2 = new QName("mydoc2");
    xQueryEvaluator.setExternalVariable(name2, document2);

    System.out.println(xQueryEvaluator.evaluate());
}

This results in the following output:

<newdoc>
    <doc1>content</doc1>
    <doc2>content2</doc2>
</newdoc>

The difference between UI Designer and UI Developer

In a previous post I talked about developers distinguishing themselves as specialists in a particular part of the application, e.g. server side, GUI, database etc. This kind of specialisation is counterproductive to creating good software. However, there are roles where specialisation is important.

One such specialisation is the role of the GUI Designer – or, to be more precise, the roles of an Art Director and a User Experience Consultant. The User Experience Consultant works with the customer/business analyst to understand the UI requirements and creates user journeys / screen mocks to define how a user will interact with the system. An Art Director takes these mocks and creates graphical representations of them, adhering to the customer's branding. It's the job of the Art Director to present design choices, helping the customer create a suitable and effective look and feel for the UI. The UI Developer then takes the Design Guidelines from the Art Director and creates a faithful representation in the UI technology of choice.

These roles – especially that of the Art Director – are highly specialised and are disciplines in their own right. A developer should not be expected to fulfil them. Yet most people in the software engineering industry have not even heard of them. Consequently the most important part of a UI application – the user interface itself – is put together as an afterthought.

The legacy of misplaced Testing

Recently I’ve been exposed to a number of projects that have been going on for a few years. As you’d expect, they are at a stage where the cost of change is phenomenal. The codebase is large, convoluted and very difficult to understand. However, there are a lot of tests, both at the system and unit level. A sigh of relief – if the code has a lot of tests then at least that should make refactoring easier. The reality is very different – the tests are even more complicated than the code and significantly add to the cost of change.

Not only do we have convoluted unit tests, we also have a very large set of system tests that exercise lots of variations on the main flows. These are very expensive tests because they require a lot of test data to be set up and go through the full flow in order to verify a very small part of that flow. You can see that as the codebase became more convoluted and dependencies became difficult to mock, the developers started relying more and more on the system tests, which proved comparatively easier to set up by copying the setup data from previous tests. If you have hundreds of these and no one knows exactly how many of the main flows/scenarios are exercised, then what good are these system tests?

How do you turn this tide? One approach is to start moving these variations into unit tests, reducing the system test suite to a small set of targeted tests that cover the main business scenarios. Granted, you’ll have to spend effort untangling the dependencies. You can adopt an automated testing framework (e.g. JBehave, Cucumber etc.) so the system and unit tests are defined in the language of the business. The units under test become more coarse-grained to satisfy these scenarios. This may allow swathes of complicated unit tests to be replaced – don’t be afraid to remove them. You must be horrified – remove tests! – but if they are not improving our confidence in refactoring then one can argue that they are no longer fit for purpose.
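As a rough sketch of what such a targeted, scenario-named test might look like – the DiscountCalculator class and its business rule are entirely hypothetical, standing in for a coarse-grained unit extracted from a convoluted flow:

```java
import java.util.List;

// Hypothetical business rule under test: orders over 100 receive a 10% discount.
// In a real codebase this would be the coarse-grained unit carved out of the
// full system flow, so the scenario can be verified without end-to-end setup.
class DiscountCalculator {
    static double totalWithDiscount(List<Double> items) {
        double total = items.stream().mapToDouble(Double::doubleValue).sum();
        return total > 100 ? total * 0.9 : total;
    }
}

public class BulkOrderDiscountScenario {
    public static void main(String[] args) {
        // Scenario: "Bulk orders receive a discount" – named in business
        // language and exercised directly against the unit.
        double discounted = DiscountCalculator.totalWithDiscount(List.of(60.0, 60.0));
        if (Math.abs(discounted - 108.0) > 1e-9)
            throw new AssertionError("expected 10% discount on bulk order");

        // Scenario: "Small orders pay full price"
        double small = DiscountCalculator.totalWithDiscount(List.of(40.0));
        if (Math.abs(small - 40.0) > 1e-9)
            throw new AssertionError("expected no discount below threshold");

        System.out.println("scenarios passed");
    }
}
```

The point is that each test names one business scenario and verifies it directly, instead of driving hundreds of near-identical variations through the whole system.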

I am in no way saying that we stop doing TDD/BDD. Lack of TDD/BDD is most likely the reason we get ourselves into this situation in the first place. This is more an approach to lessen the pain of misplaced and substandard tests – to allow us to gradually turn the tide on the cost of change.

Design aligned to the Problem Vs Design aligned to the Technology

Let’s develop a web app from scratch: well, I know Java pretty well, so it’s going to be Java, and then the de facto design is Spring + Hibernate with Spring MVC. I know, I’ll push the boat out and get some AJAX in the mix. Wait, I’m no “GUI developer”, so let’s go for GWT – it’ll generate my JavaScript.

Let’s develop an enterprise integration app: well, an upstream system needs to send me something – bring out my favourite messaging provider. A bit of Message Driven Bean or Spring messaging support, some Hibernate for the persistence, and Bob’s your uncle.

How often do we see this kind of design? You can have a very similar conversation for C#/.NET. This kind of thinking is further encouraged by so-called architects imposing their favourite solutions regardless of the problem. It feels to me that the enterprise application development industry has armed itself with a few hammers and then proceeded to whack everything on the head, pretending it’s a nail. I’m certainly guilty of this thinking myself.

There are many arguments for this approach – uniformity of software assets, availability of skills, developer mobility to name but a few. However, it can also be said that this kind of thinking process stifles innovation. In an industry that is built on innovation, that is a bit of a disadvantage.

A lot of us have effectively constrained ourselves in silos to the point where we are only GUI developers, server-side developers, “don’t touch the database” developers, “allergic to CSS and JavaScript” developers, and so on.

Our application designs are constrained by the prescribed technologies and frameworks, often resulting in complicated and unmanageable solutions. To some extent this cannot be avoided, given the available skills and technologies, but we need to broaden the choices available to us so that our designs result in more suitable solutions. There is always a compromise between uniformity and flexibility, and currently we lean heavily towards uniformity, sacrificing the flexibility of choice during the design effort.

A more balanced approach requires a breed of developer who is at home learning different technologies. That means strong fundamentals that underpin the current technology landscape, allowing them to switch between technologies and learn new ones with relative ease. It is essential for a senior developer to be a polyglot, so that during the design effort they are not dominated by one particular technology or framework. The emphasis then shifts towards understanding paradigms and approaches rather than specific implementations. Having said that, one cannot implement a solution without seeking expert knowledge.

In conclusion, a software engineering effort requires breadth and depth of skills during application design and development, and it is our responsibility as professional software developers to ensure that we always strive for the best solution to every problem.

Project Estimates

In my previous post on High Level Project Estimates, I talked about three-point estimates to help provide an up-front estimate of effort for a project. Aside from the effort estimate you also need to consider other aspects that the team will spend time on. The following are some aspects I consider, with the kind of percentages (on top of the effort estimate) I have experienced. Note: these percentages are by no means standard or the norm; they are essentially a rough guess based on the projects I have worked on.

  • Planning Meetings @ 10%
  • Demos @ 2%
  • Code Reviews @ 5%
  • Retrospectives @ 5%
  • Dark Matter @ 15%
  • Bugs @ 15%
  • New Technical Stories and Spikes @ 25%
  • Change Requests (shouldn’t average more than 15%)
  • Holidays and Sick Leave @ 5%

You will also need to consider Management Overheads as well as Business Analysts (if included in the team), QA/Functional Testing, Support and Maintenance (if required).
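To illustrate how these overheads compound, here is a quick sketch that applies the percentages above to a base effort estimate (the 100-day base figure is purely illustrative, and Change Requests is taken at its 15% upper bound):

```java
public class EstimateOverheads {
    // Overhead percentages from the list above, each applied on top of the
    // base effort estimate.
    static double withOverheads(double baseEffortDays) {
        double[] overheads = {
            0.10,  // Planning Meetings
            0.02,  // Demos
            0.05,  // Code Reviews
            0.05,  // Retrospectives
            0.15,  // Dark Matter
            0.15,  // Bugs
            0.25,  // New Technical Stories and Spikes
            0.15,  // Change Requests (upper bound)
            0.05   // Holidays and Sick Leave
        };
        double total = baseEffortDays;
        for (double o : overheads) {
            total += baseEffortDays * o;
        }
        return total;
    }

    public static void main(String[] args) {
        // A 100-day base effort estimate nearly doubles once the overheads
        // are accounted for: 100 + 97 = 197 days.
        System.out.println(Math.round(withOverheads(100.0)) + " days"); // prints "197 days"
    }
}
```

The takeaway is that the effort estimate and the project estimate are very different numbers – these overheads add up to 97%, before management, BA, QA and support costs are even considered.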

Defining and Prioritising a Backlog

What is the best way to review a backlog? How do you ensure that it is “complete”? How do you ensure that the prioritisation reflects the business vision and goals?

When first faced with a backlog, you are often overwhelmed by the long list of user stories. The most important step is to set a context for these user stories. Are they organised in a hierarchy of “epics”? This hierarchy will help set a context, but first we need to understand what these epics mean at the highest level. Do they represent a user’s high-level goals, or are they merely containers for some loosely related stories?

When reviewing a backlog for completeness it is vitally important that the stories are defined in a context. The context can take different forms depending on the nature of the application. For example, if an application has a clear high-level flow that the user journeys along, then the epics may be defined as activities in this flow, and the user stories can be grouped under each epic, representing the functionality required for that activity. This article by Jeff Patton presents such an approach. However, your application may exhibit a more random usage scenario. In this case epics representing high-level user goals may provide the best context for the stories. You can also provide references to other artefacts such as user journeys/wireframes to further enrich the context. This article by Scott Sehlhorst is an interesting discussion of setting a context for user stories.

This grouping of user stories by context also helps to manage their prioritisation. You can individually prioritise stories within each epic and then also prioritise the epics. Note that just because one epic has a higher priority does not mean that all its child user stories have a higher priority. You may discover that the first few user stories provide enough functionality that further work on that epic is of a lower priority than working on another epic.