
How a flat-design approach and shadows go together

written by Nils Morich and Kris Lohmann


In the past weeks, the UX team at CoreMedia has been working on a new design approach for the editorial interface of our CMS. This interface is a rich web client called CoreMedia Studio. The original design from 2009 was based on gradients and rounded corners – elements that feel a little outdated five years later, in the era of Google's Material Design, Apple finally going flat, and Windows 8.

In this article, we discuss thoughts on flat design and how to apply it without falling into common traps (for such traps, see this post by the Nielsen Norman Group).

Reduce to the Max

In our new design, gradients have disappeared and transparency is history. Inspired by the references mentioned above, which became very popular with the release of Windows 8, we decided to pick up that trend and iterate on our design approach. The new approach manifests, for example, in the design of the icons used in the Studio: no borders, no gradients, no more than one color. The basic principles are driven by a flat-design approach. The icons are vector-based, which makes them future-proof for high-resolution displays, which are becoming more and more common. The following image exemplifies the old and the new look of the icons.


CoreMedia Icons

Old Icons (Sample)


New CoreMedia Icons

New Icons (Sample)


Visual Cues

So what is this shadow thing all about? What are shadows used for, and why is it important for us to know? There are two occasions when shadows become important:

  1. When the source and the direction of light are important
  2. When the position of elements on the z-axis is important

For a usual software project, the source of light is of no concern. However, the position of objects on the z-axis is an important cue for the user: it makes the hierarchy or special highlighting of elements visible. Especially when elements such as windows have focus, shadows can be very useful for that purpose. Have a look at the next pictures. The first window comes with a shadow, the second without. The first window gives the user the feeling that the window is only loosely connected to the ground. Users have the impression that they can physically interact with the element – which is called an affordance in UX design.


CoreMedia Library with Shadow

With Shadow


CoreMedia Library Without Shadow

Without Shadow

A flat design, in its most consistent application, gets rid of shadows and other elements that at first glance are solely decorative. Looking more carefully, however, some of these graphical elements carry a function. In the physical world, elements interact with each other in three-dimensional space. The understanding of these interactions is a basis for our understanding of the world. Nobody is surprised by the fact that things tend to fall to earth (OK, some are): The laws of physics are deeply integrated into our cognition and are, as such, processed with cognitive ease.

An example makes this evident. The following pictures show a dropdown menu used in CoreMedia Studio. In the first picture there is a blurry shadow around the persona chooser. In the second picture, the same element is shown without any shadow. As you can see, with the shadow it is much easier to distinguish the activated element from the rest of the Studio. Without a shadow, things get more complicated, and they get worse when several elements overlap each other.

UI Element with Shadow

UI Element with Shadow

UI Element without Shadow

UI Element without Shadow

Shadows Help in Understanding a User Interface

But is this really necessary? Why is it so important to visualize the position of an element in three-dimensional space? After all, we still mostly run software on devices with flat screens (monitors, cell phones, tablets, …). So don't shadows just add visual clutter to a user interface?
Furthermore, there are popular examples that do not make use of shadows. Consider animated cartoons like South Park. There are neither body shadows (shadows that appear on objects, caused by light) nor cast shadows (shadows that objects throw on their environment). So why does this work? In this case it is a movie, so we have the additional dimension of movement and animation that helps the viewer process the visual content.
In a user interface for software, which, in contrast to a movie, is interactive, the user is much more reliant on the affordances of objects. Hence, using shadows to give objects affordances by imitating physical interactions is all the more necessary.

Seeking Inspiration

What do others do? Let's have a look at Google's Material Design: the philosophy is to stay close to the physical world and transfer the behavior of physics (light, mass, material, …) to a flat design. Google manages the shadow topic in a very strict way:


There is a range of five depth levels (depth 1 is very close to the ground, depth 5 is far away from it). As you can see, the shadow gets bigger and blurrier the greater the distance between object and ground. Depending on its depth level, an object overlaps or is overlapped by other elements (i.e., an object at depth level 3 covers all objects at depth levels 1 and 2, but is covered by all objects at depth levels 4 and 5). Elements that are supposed to sit at the same height as the ground are considered depth 0.

Our Solution

We created several variants of shadow behavior and came to the conclusion that three depth levels would be enough for our software products.
The shadows are constructed as follows:

CoreMedia Shadow Construction

CoreMedia Shadow Construction

The pictures below show two examples of how the new shadows look, applied to the CoreMedia Studio. You see a component called dashboard with widgets on it in the first picture. In the second picture, an opened free-floating window is shown. The widgets have depth level 1; the window has depth level 3.

Example for Shadows with Depth Level 1

Example for Shadows with Depth Level 1

Example for a Shadow with Depth Level 3

Example for a Shadow with Depth Level 3


Minimalistic design such as flat design eliminates unnecessary visual clutter from a user interface. Carefully applied, it results in a clean look and feel. Still, the elimination of some visual elements is risky. As exemplified by shadows, some visual elements provide cues that allow the user to understand more easily what is going on. In particular, they allow the designer to inform the user about relations between objects and about potential interactions with these objects.
Visual elements such as shadows carry information that is processed by the user of the interface. Carefully applied, such elements can and should augment a minimalistic design approach.

JUnit-@RunWith: SpringJUnit4ClassRunner vs. Parameterized

If you like Spring but you also like data-driven testing (DDT), you will soon run into a problem: both approaches require you to define a runner with @RunWith – but JUnit allows only one runner per test class.

A blog post by Konrad ‘ktoso’ Malawski actually points to a very interesting approach which is to copy the test-initialization behavior of the SpringJUnit4ClassRunner: @RunWith JUnit4 with BOTH SpringJUnit4ClassRunner and Parameterized.

The solution provided by Konrad uses @Before to set up the TestContextManager, which is also used by SpringJUnit4ClassRunner. It misses, however, the other life-cycle phases of a test:

  • before test class
  • before test method
  • after test method
  • after test class

To solve this, you can extend the calls to the TestContextManager to @BeforeClass, @AfterClass, and @After. To have a reusable pattern, we placed the solution in a JUnit rule. Because it needs to run in both test-class and test-method mode, there is a slight quirk: you have to use the rule as @ClassRule as well as @Rule.

Usage in test:

@ClassRule
public static final SpringAware SPRING_AWARE = …; // initialize the rule once per test class

@Rule
public TestRule springAwareMethod = …; // per-method part, derived from SPRING_AWARE

Having this, you can configure your test almost as if you had a SpringJUnit4ClassRunner (with ContextConfiguration and such), with some exceptions – not all annotations are interpreted.

There might be more differences – just as always with any copy & paste approach.

You can find the rule and a small example test in a GitHub repository.

Autowiring of @Required Spring bean properties

Spring 2.0 introduced @Required, Spring 2.5 added @Autowired, and Spring 3.0 added even more annotation-driven configuration capabilities by introducing @Configuration and adding support for JSR 330 annotations. So, given a legacy bean of type A whose properties are annotated with @Required, we might want to derive a more modern-style Spring bean using @Named and @Inject. The problem is that we can't simply extend class A like this:

@Named
public class B extends A {
}

By default, Spring's RequiredAnnotationBeanPostProcessor would raise an exception when processing the bean named "fail". So we should consider using delegation instead of inheritance – that is, using two separately defined beans, as outlined below:

@Named
public class B {
    @Inject private A a;
}

If we can’t resort to delegation for some reason, the following configuration class can solve our problem:

@Configuration
public class TestConfiguration {

  @Inject
  private ConfigurableListableBeanFactory beanFactory;

  @Bean(autowire = Autowire.BY_TYPE)
  public BeanWithAutowiredProperties beanWithAutowiredProperties() {
    // A local post processor injects the annotated members of the manually
    // created instance without affecting the rest of the context.
    AutowiredAnnotationBeanPostProcessor autowiredAnnotationBeanPostProcessor = new AutowiredAnnotationBeanPostProcessor();
    autowiredAnnotationBeanPostProcessor.setBeanFactory(beanFactory);
    BeanWithAutowiredProperties beanWithAutowiredProperties = new BeanWithAutowiredProperties();
    autowiredAnnotationBeanPostProcessor.processInjection(beanWithAutowiredProperties);
    return beanWithAutowiredProperties;
  }

  @Bean
  public BeanWithAutowiredProperties fail() {
    // RequiredAnnotationBeanPostProcessor raises an exception for this bean
    // because its @Required property is never set.
    return new BeanWithAutowiredProperties();
  }

  @Named
  public static class A {
  }

  @Named
  public static class B {
  }

  public static class LegacyBean {

    private A a;

    public A getA() {
      return a;
    }

    @Required
    public void setA(A a) {
      this.a = a;
    }
  }

  public static class BeanWithAutowiredProperties extends LegacyBean {

    private B b;

    public B getB() {
      return b;
    }

    @Inject // injected via the local post processor above
    public void setB(B b) {
      this.b = b;
    }

    @PostConstruct
    void initialize() {
      if (null == getA()) {
        throw new IllegalStateException("required property must not be null");
      }
      if (null == b) {
        throw new IllegalStateException("autowired property must not be null");
      }
    }
  }
}

Note that we’re using a local bean post processor so that the Spring lifecycle of other beans is not affected.

Each Voice Enriches Us

Each voice enriches us. – G’Kar, Babylon 5

On today’s Diversity Day (German) I want to share a story with you.

One of our software engineers is hearing-impaired and needs to read people’s lips in order to fully understand what they are saying. The Daily Scrum in front of the taskboard can sometimes be challenging: people point at tasks while talking about what has changed, then they wave in the general direction of “done” and mention something that got done, and then ask whether that thing with the build pipeline has occurred again, while pointing towards the seven tasks in “to do” from five feet away. It can be quite hard at times to match the conversation to an actual card on the board.


So our colleague asked whether we could introduce a pointer device to the Daily Scrum and pass it around. Now, each speaker in the Daily Scrum takes the laser pointer, highlights the task they are talking about, and then passes the laser pointer on to the next person.

Then the person who initially felt the need for a laser pointer due to impaired hearing went on vacation — and the team kept using the pointer. They admitted that highlighting the tasks helped them all to structure the discussion and to not get lost.

There are a few more very similar examples. Behavior modifications were triggered on behalf of only one person with a disability and then turned out to improve communication for everybody.

The point to drive home is this: What seems like an extra effort and cost factor, serving only people with uncommon needs, can actually be beneficial to everybody. The position of a minority can be the crucial difference that makes a difference.

(PS: The title of this article is a quote from the Declaration of Principles.)


My internship at CoreMedia

In the course of 8th grade, one is supposed to do an internship. I am very interested in programming and had also coded on my own before, so I wanted to find out how this works in large companies and whether it fits my interests and the image I had.
Eventually, I was taken in by CoreMedia’s Product Center.
As good as this sounds, I originally searched for a spot at one of Hamburg’s numerous game companies. However, none of them were eager to take inexperienced trainees, so I decided to publish a blog post addressing anyone working in the IT business in search of an internship. Martti was the one who answered and took me in along with another applicant.
Over the course of two weeks, we were introduced to the teams, and not only did we learn about their work, we also got our own task, closely related to it, which gave us a very detailed insight into programming with HTML and JavaScript.
The employees received us in a very friendly and open manner and were eager to help when needed. We were treated as coworkers, and Martti spent a lot of time helping and coaching us in the process of coding our application.
It was great to be at CoreMedia AG; as a trainee I learned a lot about the languages used and the overall work ethic. While being very informative and fun, it was still exhausting at times, especially when we were stuck on a task. To conclude, as this was supposed to help us determine whether working in this industry fits our interests, we definitely think that it helped to be here at CoreMedia AG. I can imagine myself working in IT in the future.

Forget Commitment, Make Reliable Forecasts

There is widespread confusion about the meaning and the relevance of “commitment” for teams that develop software according to Scrum. “Commitment” is one of the five Scrum values, as introduced in the book “Agile Software Development with Scrum”. However, since 2011 the word no longer appears in the Scrum Guide, which has not helped to clear up any ambiguity. Plus, there is no really good translation of the word into German, complicating things even more for me in discussions with co-workers in Hamburg.
So, what is commitment and how can it, or an improved version, be a useful concept? And finally, how can this abstract value be brought to life by actual Scrum teams? I believe it is best to replace “commitment” with a more precise term that addresses more issues, and put that to work.

The Origins

In ye olden times, before the 2011 Scrum Guide, development teams used to commit to a sprint goal or even to a list of user stories. The future is unpredictable, estimates are way off, impediments show up, developers are easily distracted, and managers from outside the team interrupt them – nevertheless, the dev team promise really hard to deliver the sprint results the product owner ordered. The scrum master is in place to remove at least some of the issues.
The Agile Manifesto, in contrast, demands that teams be formed by motivated individuals who just give it a go. These individuals already have a basic inclination toward the project goal and sprint goal, and they do not need an additional appeal to their integrity in the fashion of “but you promised!” Also, they are an empowered team and therefore are capable of removing impediments of any kind for the higher purpose of customer value.
A commitment resembles a contract: The dev team promise to deliver a certain service (sprint goal) by a certain date (sprint review meeting). However, there are no provisions in place to reward a successful delivery, or to punish a failed delivery, other than the dev team feeling embarrassed in public when they have nothing to show at the sprint review. Also, only one party is asked to sign it, not both. In fact, Scrum’s commitment is almost, but not entirely different from a contract.

Building Trust

Now that we have had a look at the use of commitment, how is that useful to anyone? Some people seem to believe that, as it is really hard to keep a commitment due to the various slings and arrows of outrageous fortune, it is a wise move to only commit to learning, to adhering to the Scrum process, or to the team, but never to a specific outcome. Most times your commitment will be broken anyway (never mind whether on your account or by adverse circumstances), so pressure will increase, quality will degrade, control will increase, and your team will be in a place they really wanted to avoid.
I disagree and argue that it is rather important to make commitments to outcomes in the general direction of your customers, i.e., to your product owner. Trust is the cornerstone of customer relationships, and not only in the Agile Manifesto. It is also the foundation of the teamwork pyramid in Lencioni’s book “The Five Dysfunctions of a Team”, and we need to see the dev team plus product owner as one team in order to avoid the biggest risk, namely that of building the wrong product. In order to improve teamwork, you need to build trust, writes C. Avery, and to do that, he advises repeatedly making small promises and keeping them, this way building a track record of reliability and thereby expanding your freedom. Foremost, Avery recommends making only promises you can definitely meet, because one single failed commitment can instantly ruin the positive effect of a hundred kept promises and destroy your client’s trust in you.
To me, a forecast you can count on most of the time is good enough, and close enough to a commitment that one can replace the other. When the dev team trust their own ability to execute and the product owner trusts the developers, they should work together smoothly even when the going gets tough, and resolve issues instead of shifting the blame about. Replacing “commitment” with “reliable forecast” works for me.
The obvious opportunity to repeatedly make and keep promises and build trust are sprints and sprint deliverables. This brings us to the question of how a dev team can actually keep their promises regarding sprint outcomes to the client.

Ready, Willing and Able

Meeting a sprint commitment (or better, “reliable forecast”) depends on three aspects that are easily confused:

  1. The organization must be ready.
  2. The dev team must be willing to do all they can for a sprint success.
  3. They must be able to perform all tasks that might turn out to be necessary to reach the sprint goal.

Many authors, including Schwaber in his 2002 book and this more recent article, focus on the organization as the biggest obstacle to sprint success. A team can only commit to a sprint goal when they are empowered to meet it and to blast through any impediments that might occur, even if they hurt feelings and disturb processes in other parts of the organization. The underlying assumption is that the biggest challenge for Scrum teams is a lack of organizational support when they ruthlessly turn towards customer value. If the organization is not ready to accommodate this change in mindset, the team is doomed from the sprint planning meeting on and is better off not entering any kind of commitment.

Now, on to the willingness of the dev team to work as hard as they can (see, e.g., this article). Asking the team for a commitment to the sprint goal may or may not be helpful. The hope is that, in order to meet their promise, the dev team are now a bit more motivated to focus, to remove obstacles, and to build the important things first. This way, the chances of achieving the sprint result are supposed to be higher than without a commitment. The Agile Manifesto starts out from the assumption that developers are motivated individuals, as discussed above, so there is no need for an additional act of commitment. However, there is the real danger that a lackluster interest in success is a self-fulfilling prophecy, whereas a “play to win” attitude can help bring success about. The (fictional) Yoda has (fictional) success with his famous “do, not try” approach. But the best way of having the dev team’s full attention on the sprint goal is to appeal to their intrinsic motivation. In Appelo’s Management 3.0 (Chapter 5, “How to energize people”), there is a lot of good advice on how to help people give their best at work. Only one of these ways is appealing to people’s sense of honor and integrity by having them make a promise that they are reluctant to break. Asking for commitment is just one of many ways to increase the willingness of the dev team to deliver.

On to the third aspect of keeping commitments – or forecasting reliably: the ability to do so because the dev team have the necessary time, know-how, tools, and supply from upstream stations in the value stream. This is the classic area of team-level impediments and their removal. It is not enough to have a supportive company and a motivated team; the team also needs to be able to remove all impediments that slow them down. They need to notice them, attack them, and remove them for good. Noticing impediments is a science of its own, but good books have already been written on retrospectives, so this is not the place to dive in further.

Commitment and Sprint Planning

Let me wrap up this inspection of the notion of commitment by suggesting how to use “commitment” or “reliable forecast” in a sprint planning meeting.
The job of a Scrum Master is to address impediments from within and without the team, and to detect looming obstacles on the team’s likely path.

  1. When the organization is not ready to tackle the work in the sprint, the dev team plus Scrum Master must call this out and obtain the required infrastructure, helping hands, or authority.
  2. When the team is not willing to run for the sprint goal, it is necessary to notice that not everyone supports the sprint backlog, and to put possible side agendas or doubts about the general direction on the table before they blow up in your face during the sprint.
  3. When the team is not able to deliver, it is important to raise any concerns right away, whether they concern limited availability of team members, know-how, technical unclarities, or any other risk threatening the sprint success.

The idea here is to educate people to act proactively (first habit of Covey) and to assume responsibility for the sprint goal and sprint backlog instead of evading a clear position (Avery).
So, in my role as Scrum Master, I usually ask two questions at the end of the sprint planning meeting: 1. Is this a good plan? 2. Can you do it?

  1. Look at the taskboard. Given this sprint goal and the stories planned here (and other things like the product vision on the wall over there), do you think this is a sensible plan?
  2. Look at the table of who is available in this sprint for how long, at the tasks on the taskboard, at our list of impediments next to the taskboard, the recent sprint velocities in this chart here, etc. In light of that, can you do all the things on the board and thereby achieve the sprint goal, or is it wishful thinking?

The first question serves to draw out any discrepancies between individual goals and the team goal, and bring any doubts about the goal to the surface. Through the second question, I want the team to discuss all risks on the way to that goal, including the question of whether they are simply too optimistic.
These two questions work for me to replace the request for this mysterious commitment, which is, as mentioned above, especially awkward to ask for in German. Also, there is no waterfallish smell of a mini-contract, while addressing the questions of whether the organization is ready, and the dev team is willing and able to achieve the sprint goal.
The sprint may still fail, but I believe that this is the best way to get a forecast that is mostly reliable.


PhantomJS crashes in CI

Lately, we were facing many failed builds because PhantomJS crashed while executing our joounit tests. Grepping through the build logs did not give us much information. All we found was:

[WARNING] PhantomJS has crashed. [...]

PhantomJS did not do us the favor of crashing when we executed the same tests again, not even when testing the same modules. In most cases, PhantomJS did not even crash when running the next build. Fortunately, many others out there have faced similar problems (see, for example, the PhantomJS issue tracker). Using the reproducer given in such an issue, we derived a wrapper script that automatically retries the tests a few times. We can now replace the phantomjs executable with a wrapper script that evaluates the exit code of phantomjs like this:

#!/bin/sh
# BIN points to the real phantomjs binary; MAX limits the number of retries.
BIN=/path/to/phantomjs.bin
MAX=3
RUN=0
RET=1

until [ ${RET} -eq 0 ]; do
  ${BIN} "$@"
  RET=$?
  RUN=$((RUN + 1))

  # exit immediately after max crashes
  if [ ${RUN} -eq ${MAX} ]; then
    echo "got ${RUN} unexpected results from phantomjs, giving up ..."
    exit ${RET}
  fi

  # allowed values are 0-5; anything else indicates a crash
  if [ ${RET} -le 5 ]; then
    if [ ${RET} -eq 1 ]; then
      echo "phantomjs misconfigured or crashed, retrying ..."
    else
      # a regular test outcome (e.g., timeout): pass it on unchanged
      exit ${RET}
    fi
  else
    echo "got unexpected return value from phantomjs: ${RET}. Retrying ..."
  fi
done
exit ${RET}

Fortunately, we have set up the joounit PhantomJS runner to use exit code 1 only in the rare case that it is completely misconfigured, so any other valid test outcome, e.g., a timeout, is still captured correctly.

Automated Documentation Check with LanguageTool


Here at CoreMedia we write our documentation in DocBook, using IntelliJ IDEA as an editor for the XML sources. From this XML we generate PDF and WebHelp manuals.

The documentation is part of our source code repository and is also integrated into CoreMedia’s continuous integration process with Jenkins, Sonar, and the like. Naturally, the demand for a Sonar-like quality measurement for documentation arose.


The first task is to determine the metrics that we want to monitor. Unfortunately, there is, at least for now, no way to automatically test for accuracy and completeness of the information, so we have to stick to more obvious features, such as:

  • Size of the manual measured through the number of chapters, tables, figures…
  • Spelling errors
  • Grammar errors
  • CoreMedia style guide errors

The first point is easy; simply count the corresponding DocBook tags in the manual using XPath. The others require a checker that can be integrated into the build process and that delivers a usable format for further processing.
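The tag counting can be sketched with the JDK's built-in XPath support. The class and element names below are ours, and a real DocBook 5 manual would additionally require namespace handling:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class DocBookMetrics {

    /** Counts all occurrences of the given element anywhere in the document. */
    static int count(Document doc, String element) throws Exception {
        XPath xpath = XPathFactory.newInstance().newXPath();
        Number result = (Number) xpath.evaluate(
                "count(//" + element + ")", doc, XPathConstants.NUMBER);
        return result.intValue();
    }

    public static void main(String[] args) throws Exception {
        // A tiny stand-in for a DocBook manual
        String manual = "<book><chapter><table/></chapter>"
                + "<chapter><figure/><figure/></chapter></book>";
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(manual.getBytes(StandardCharsets.UTF_8)));
        System.out.println("chapters: " + count(doc, "chapter")); // chapters: 2
        System.out.println("figures: " + count(doc, "figure"));   // figures: 2
    }
}
```

The same `count` call works for tables, sections, and any other element you want to track over time.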

After searching the web, we stumbled upon LanguageTool. LanguageTool is an open source tool that offers a stand-alone client, a web front-end, and a Java library for all the checks we want to do.

Integrating the Java library into our adapted version of the docbkx-maven-plugin was easy: add the Maven dependency to the project and create a new Maven goal that instantiates the LanguageTool object:

langTool = new JLanguageTool(new AmericanEnglish());
langTool.activateDefaultPatternRules(); // load the XML pattern rules

The second line shows the big power of LanguageTool: the rules. Spell checking is done with hunspell, but all of the grammar and style checks are defined in rules, either written in Java code or in XML. A simple XML rule that checks for the correct usage of “email” would look like this:

<rule id="mode" name="Style: Do not write e-mail">
  <pattern>
    <token>e-mail</token>
  </pattern>
  <message>CoreMedia Style: It's <suggestion>email</suggestion>, not e-mail</message>
  <example type="correct">Send an <marker>email</marker></example>
  <example type="incorrect">Send an <marker><match no="1"/></marker></example>
</rule>

More complicated rules are possible using regular expressions and POS (part-of-speech) tags. LanguageTool comes with a huge set of predefined rules for common grammar errors and can be extended with your own rules. So, we implemented our style guide with XML rules.
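As an illustration, a POS-based rule could flag a plural noun following the article “a”. This is a hypothetical rule of our own, and the exact XML schema may vary between LanguageTool versions:

```xml
<rule id="a-plural-noun" name="Grammar: 'a' before a plural noun">
  <pattern>
    <token>a</token>
    <!-- NNS is the Penn Treebank tag for a plural noun -->
    <token postag="NNS"/>
  </pattern>
  <message>The article 'a' should not precede a plural noun.</message>
  <example type="incorrect">He bought <marker>a cars</marker>.</example>
  <example type="correct">He bought a car.</example>
</rule>
```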

When we run the check, we get the results as a list of RuleMatch objects:

List<RuleMatch> matches = langTool.check(textString);

From a RuleMatch object we can get all the interesting information, such as the error message, the position, a suggested correction, and more. In our HTML result pages we show, for instance, the following information from a predefined rule:


In the build process we generate an overview site for all manuals:


False Positives

At the beginning we got a lot of errors that were not real errors but shortcomings of the checker. There were mostly three reasons for this:

  • Words not known by the spellchecker (all of these acronyms used in IT writing, for example)
  • Grammar rules not applicable to the format of our text
  • Words like file names or class names that can’t be known by the spellchecker

We applied three measures to overcome the false positives:

  • Creating a list of ignored words for the spellchecker. The list is managed in the repository so everyone can add new words.
  • Deactivating rules in LanguageTool with langTool.disableRule(deactivatedRule). The list of deactivated rules is also managed in the repository.
  • Tagging all specific words with the appropriate DocBook element and filtering the DocBook sources.

With this approach we were able to remove nearly all false positives.


Having an overview page for the documentation enhances its visibility and leads to better quality of the documentation. LanguageTool is a great product for this: it is easy to integrate and use, and very powerful. Questions in the forum or on the mailing lists have been answered quickly. So, give it a try when you want to monitor the quality of your documentation.

API Design Kata


For our fortnightly coding dojo, I recently suggested focusing on API design instead of implementation – at least for one session. The idea was that APIs live much longer than their implementations, and consequently flaws in the API design hurt much more than flaws in the actual algorithms. And because developers code much more often than they design APIs, the need for practice can be expected to be all the more urgent.

The Task

Our goal was to design a generic caching API. Some use cases were given:

  • Look up a value from the cache.
  • Compute a value that is not currently cached.
  • Let a value computation register dependencies. A dependency is a representation of a mutable entity.
  • Invalidate a dependency. All values whose computation registered that dependency must be removed from the cache.
  • Configure a maximum cache size.
  • Let one cache be responsible for fundamentally different classes of values at the same time.

The approach was to write the API only, and to provide test cases simply to evaluate how a client would use the API. No implementation of the actual API was allowed, just implementations of callback interfaces that are normally provided by clients of the cache.

Under this assumption the tests would not run, but the test code was using the API and had to look natural and understandable. Of course, the crucial aspect was to make the API convenient for clients. API documentation snippets were written only as far as absolutely necessary.
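For comparison, here is one possible shape such an API might take. This is a minimal sketch of our own (it is not the API the dojo produced), and it leaves out the maximum-size and eviction use case:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

/** A mutable entity that cached values may depend on. */
interface Dependency {
}

/** Collects the dependencies registered during a value computation. */
interface DependencyCollector {
    void dependsOn(Dependency dependency);
}

/** A key that knows how to compute its value; keys for different value classes may share one cache. */
interface CacheKey<T> {
    T evaluate(DependencyCollector collector);
}

class Cache {

    private final Map<CacheKey<?>, Object> values = new HashMap<>();
    private final Map<Dependency, Set<CacheKey<?>>> dependents = new HashMap<>();

    /** Returns the cached value, computing (and caching) it on a miss. */
    @SuppressWarnings("unchecked")
    public synchronized <T> T get(CacheKey<T> key) {
        if (values.containsKey(key)) {
            return (T) values.get(key);
        }
        List<Dependency> registered = new ArrayList<>();
        T value = key.evaluate(registered::add);
        values.put(key, value);
        for (Dependency dependency : registered) {
            dependents.computeIfAbsent(dependency, d -> new HashSet<>()).add(key);
        }
        return value;
    }

    /** Removes all values whose computation registered the given dependency. */
    public synchronized void invalidate(Dependency dependency) {
        Set<CacheKey<?>> keys = dependents.remove(dependency);
        if (keys != null) {
            for (CacheKey<?> key : keys) {
                values.remove(key);
            }
        }
    }
}
```

A client would implement CacheKey, call get, and invalidate dependencies when the underlying entities change; the cache takes care of removing all dependent values.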

Our Experience

We decided to work on a single laptop connected to a beamer, allowing all participants to comment and to implement improvements in turns. It turned out that it is surprisingly difficult to build an API without building the implementation. There is the temptation to let the intended internal data structure shine through in the API (“But how are dependencies stored after all?”) when the client of the API couldn’t care less.

There is also the tendency to skip the ‘test-driven’ design and write down the cache interface immediately, when in fact the tests give you a good feeling for which information has to be provided to the cache somehow.

It was observed that some upfront drawing would have helped a lot. It wouldn’t have to be proper UML, but an overview of the entities involved and their relationships would have given us a quicker start. Caching is more complex than the above use cases might suggest.

Java generics were a recurring topic. While we are all used to instantiating generic classes, actually defining the right type parameters for an interface is a different matter.
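
A typical question of this kind – purely illustrative, with hypothetical names – is whether the type parameter belongs on the interface or on the method. The choice decides whether one cache instance can hold fundamentally different classes of values, which was one of our use cases:

```java
// Type parameter on the interface: each cache instance is tied
// to a single value type.
interface TypedCache<T> {
    T get(String key);
}

// Client-provided computation, keyed by its result type.
interface Key<T> {
    T compute();
}

// Type parameter on the method instead: one cache instance can
// serve values of arbitrary types at the same time.
interface MixedCache {
    <T> T get(Key<T> key);
}
```

In our session, arriving at the second variant took noticeably longer than using an equivalent, already-designed interface ever would.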

We talked a bit about code style. The @Nonnull annotation sparked the most intense discussion.

Your Turn

If you decide to repeat the kata, also think (after the API is done) about possible performance implications of the design choices. Look for further missing features. On the other hand, look for redundant features that only make the API harder to understand.

Is the Backlog an Unnecessary Proxy?

High Priority Mail

Last week a letter reached me with the announcement IMPORTANT DOCUMENTS on the envelope. When I eagerly tore it open, expecting some life-changing documents inside, it turned out they were anything but. It is a common pattern: we all receive e-mails stating URGENT in the subject when they are not, and how many e-mails are flagged “!” important but aren’t? When I judge the importance of a message, its outward appearance is not the only thing I take into consideration; the sender counts just as much. If I know the author to be trustworthy, based on my previous experience with her, I treat her messages as much more relevant than messages from an insurance company (that usually wants to sell me more insurance) or from a business I have never dealt with before.

An Intermediate Artifact

This thought crossed my mind when, at the #lkce13, “the” David stated that a prioritized backlog introduces an unnecessary proxy variable. Agreed: stakeholders and the dev team are better off talking to each other instead of capturing conversations in an artifact, with only the product owner talking to each side. Also, a backlog may imply a commitment to one path, when in fact it might just show alternative future directions in which the product may evolve.

Backlog for Focus

On the other hand, if you have n stakeholders and m developers, and every developer is to talk directly to every stakeholder, you will have n × m conversations taking place – with 5 stakeholders and 8 developers, that is 40. When the product owner acts as an information hub, only n + m conversations take place (13 in the example), which might be the only feasible way.

Also, the product owner is supposed to act as the business value expert, condensing the multiple voices of the stakeholders, and even opening up new options. It makes sense to me to have a domain expert drive this and not spread accountability for priorities all over the team.

The third and final point brings me back to the story about the “important” letter: it is about trust. When the dev team trusts the product owner to make good decisions on priority, there is less urgency to discuss priorities directly with stakeholders. Maybe the PO has shown good judgment in the past. Or the decision process is transparent enough to show that all relevant stakeholders are involved and all sides have been taken into consideration. If you do not trust your product owner to make good priority decisions, you might need to ask why and address that issue.

Trust the Product Owner

The backlog is not to be used as a paper contract that inhibits face-to-face conversation and feedback loops. There is a balance to strike: the team needs to alternate between an opening, questioning stance that involves all stakeholders, and a focusing, deciding stance in which the product owner is responsible for narrowing down all possibilities to a manageable number. The product owner is supposed to lead by reducing uncertainty about the future, providing clarity for the team. She can achieve this better when all parties trust her decisions, based on her openness, her focus, and her commitment to the job. A prioritized backlog is a tool to achieve this, but it is abused when it merely carries written “THIS IS IMPORTANT” statements.

So, watch out for backlogs that are used to hide information like value, options, or risk, inhibiting collaboration. Make your backlog a dual-use tool both to start conversations and to provide focus.
