
Down in the Jungle Room


For last week’s coding dojo at CoreMedia, I prepared a small exercise on concurrent programming. As a rule, it is a bad idea to write concurrency algorithms like a lock or a thread pool yourself. Get a proven library and use it! So the exercise had to deal with a more realistic problem: apply some changes to legacy code that is supposed to be concurrency-proof.
The given source code simulated – in the most elementary form – a company that organizes safaris. The original source code that was presented to the participants can be found here: https://github.com/okummer/ConcurrencyJungle

The task was to fix scalability issues in the code. The exact extent of the scalability issues was not mentioned, which initially did not seem to disconcert anyone. It was allowed to change the code in any way. It was pointed out that the developer of the existing code was no longer with the company.

The participants decided to try mob programming for this task. This led to very interesting discussions and provided me with a natural way to drop the occasional comment on possible alternate solutions. See http://marcabraham.wordpress.com/2014/02/05/what-on-earth-is-mob-programming for details on mob programming.

If you want to do this exercise in your dojo, it is a good idea to have someone with experience in concurrent programming attend the dojo, so that bad solutions can be spotted and hints can be given.

Spoiler alert: The remainder of this text contains hints that might lessen the experience of solving the problem oneself.

The code was sprinkled with various style issues and small inconsistencies that were supposed to serve as warm-up exercises and as distractions during the actual programming.

However, the participants ignored the small issues, hardly looking at the code. Instead, they headed straight to the test class, which actually seemed to test relevant aspects of the program. Of course it didn’t. It took some time until all of the blunders in the (green) tests were discovered and the tests were rewritten. It became quite obvious how existing code, even code that one should be suspicious of, can be accepted at face value.

After much test writing, some raised eyebrows, and finally a debugging session, it became clear why the code did not scale: nearly every method was synchronized. After removing the central synchronization point, which allowed exactly one safari at any given time, the program immediately ran into a deadlock. It was designed that way, of course, but the surprising takeaway was that you can get deadlocks by removing locks.

Only now did the participants look more closely at the entire set of classes. The code was a jungle of interrelated classes with a highly interwoven call structure. Of course, this is bad news for concurrency requirements. Furthermore, the code suggested that this was a carefully chosen architecture that closely matched the application domain. Carefully chosen, yes, but …

At this point, some approaches were considered, and it was decided to reduce the potential for deadlock by removing those locks that were provably unnecessary, because they protected immutable data or data that was accessed from a single thread only. The participants were very careful in this phase, avoiding a trap that would have led to an inconsistent object graph had they synchronized too little. I was quite proud of them.

When all was done, we talked a little about the result of the refactorings and about general approaches to deadlock avoidance.
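One of the standard techniques that comes up in such a discussion is a consistent global lock ordering. The following sketch is not the dojo code; it is a minimal, hypothetical Java example (all names invented) showing how acquiring monitors in a fixed order prevents the classic deadlock of two threads locking the same pair of resources in opposite order:

```java
public class SafariBooking {

    // Each shared resource (say, a jeep or a guide) serves as its own lock.
    static class Resource {
        final int id; // global ordering key
        int bookings = 0;
        Resource(int id) { this.id = id; }
    }

    // Always acquire the monitor with the smaller id first. If we instead
    // locked the arguments in the order given, the two threads in main()
    // could deadlock, each holding one lock and waiting for the other.
    static void bookPair(Resource a, Resource b) {
        Resource first = a.id < b.id ? a : b;
        Resource second = a.id < b.id ? b : a;
        synchronized (first) {
            synchronized (second) {
                a.bookings++;
                b.bookings++;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Resource jeep = new Resource(1);
        Resource guide = new Resource(2);
        // Two threads book the same pair in opposite argument order:
        Thread t1 = new Thread(() -> { for (int i = 0; i < 10000; i++) bookPair(jeep, guide); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 10000; i++) bookPair(guide, jeep); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(jeep.bookings + " " + guide.bookings); // prints "20000 20000"
    }
}
```

The same idea scales to any number of locks: impose a total order on the lock objects and acquire them in that order everywhere.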

It was a lot of fun to do this exercise. It turned out to contain the right amount of trickery, so that a robust solution was found before time was up. If there had been more time, code cleanup and further testing would surely have been possible.

CoreMedia Photowalk

Some photo enthusiasts at CoreMedia formed a photowalk group called „Boring Photos Questionmark“. The question mark stands for “no boring photos at all”! Some of our results are published on our group page on Flickr.
During our photowalks we do not snap everything. We pick special topics that we try to express in images. For me it is very interesting to see how my colleagues choose their perspectives, their methods, their techniques, and their final results. Together we learn camera movement techniques such as the panning shot, where you rotate the camera on its horizontal, vertical, or diagonal axis to keep a moving object in view while the rest of the scene blurs. For night photography we use long time exposures, leaving the shutter open for a relatively long time: moving elements such as the lights of cars turn into drawn lines of light while the rest of the scene stays sharp.

Even without a camera we spend time together in knowledge-sharing sessions, for example on Lightroom functions or portrait photography. For the next walk we are planning to visit the very interesting photography exhibition “100 Years of Leica”.

Besides that, our photowalks are a special social event with lots of fun, where we share our experiences, skill improvements, and knowledge.


Pandora Cart: Creating a Sprint Timeline

In a recent retrospective I wanted to dissect the sprint day by day. The books recommend drawing a timeline, adding events to it, and then maybe drawing a satisfaction graph for every team member across the whole thing. This always felt time-consuming and tedious to me.

Enter Gamification

When in doubt, make a game of it, I thought. So here is Pandora Cart. The name is a mix of the team name and the console game that inspired me.

1. Preparation

The team coach sets up a course of n × 3.5 numbered playing fields (just use index cards), where n = sprint length in days. Include field 0 as “start”.

Also, the coach creates a large number of event cards, about 5 cm × 5 cm. Half of them show the picture of a banana, the others show a mushroom. There is plenty of space on the cards to make notes. A sensible number is m × 6 cards, where m = team size.

Finally, a game progress marker is required, preferably a toy car approximating the shape of a cart, plus a D6 (a regular six-sided die).

2. Game Setup

The team coach lays out the course on a table in numerical order.

Team members are asked to remember the sprint and all events that helped or hindered the team or the individual colleague. For each helping event, the team member takes a mushroom card and jots down a short summary of the event. For each hindering event, the team member writes a note on a banana card. Include the day of the sprint on which the event occurred.

When setup is finished, we have

  • the course, i.e., the numbered fields, laid out on the table
  • the cart on field zero (“start”)
  • a stack of banana and mushroom event cards in front of each team member
  • the D6 on the table

3. The Rounds

There are n rounds in the game, representing the days in the sprint. For each round:

  1. One of the team members rolls the die and advances the cart by the corresponding number of fields.
  2. All team members play their event cards for the round, i.e., they describe the event and place the card on the table:
    • Whenever a mushroom is played, the cart advances by one field.
    • Whenever a banana is played, the cart moves back to the previous field.
  3. Take care to lay out the cards for each round in a separate row.

4. Game End

The game ends after the last round no matter what. No player should have any event cards in their hands any more. All event cards are on the table in n rows, the rows ordered by rounds (days). This display of event cards is your timeline of the sprint!

This image from a running game shows day four in bad shape, while day six collected many mushrooms. The cart track is partly visible in a circle around the timeline; the cart, by the way, is on field 18 at the lower right.


Now that you have gathered the data, you can move on to the “generate insights” stage of your retrospective.

Hints

The final field of the cart could be a point of discussion, but chance heavily impacts the result. Besides, it does not matter at all where the cart ends up: throwing the die is just there for the fun of it.

It is probably best to play this game for sprint lengths up to ten working days. For longer sprints and more rounds the game might become boring.

About the length of the course: the cart should, on average, advance by 3.5 fields per round. If, for every sprint day, you expect one more banana than mushroom, you will end up at 2.5 fields per round on average, and you can shorten the track accordingly. Maybe you even need negative fields if the first day of the sprint was a real mess …
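To sanity-check a planned track length, the expected advance per round can be computed directly. The following small Java sketch is purely illustrative (class and method names are mine, not part of the game):

```java
public class CartTrackEstimate {

    // Expected advance per round: the mean D6 roll (3.5) plus one field
    // per mushroom card and minus one field per banana card played that day.
    static double expectedAdvance(double mushroomsPerDay, double bananasPerDay) {
        return 3.5 + mushroomsPerDay - bananasPerDay;
    }

    public static void main(String[] args) {
        int sprintDays = 10;
        // One more banana than mushroom per day -> 2.5 fields per round:
        double perRound = expectedAdvance(2, 3);
        System.out.println(perRound * sprintDays + " fields"); // prints "25.0 fields"
    }
}
```

With no event cards at all, this reproduces the n × 3.5 course length from the preparation step.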

 

How a flat-design approach and shadows go together

written by Nils Morich and Kris Lohmann

Introduction

In the past weeks, the UX team at CoreMedia has been working on a new design approach for the editorial interface of our CMS. This interface is a rich web client called CoreMedia Studio. The original design from 2009 was based on gradients and rounded corners – five years later, in the age of Google’s Material Design, Apple finally going flat, and Windows 8, these elements feel a little outdated.

In this article, we discuss thoughts on flat design and how to apply it without falling into common traps (for such traps, see this post by the Nielsen Norman Group).

Reduce to the Max

Consequently, in our new design, gradients disappeared and transparency is history. Inspired by the references mentioned above – a trend that became very popular with the release of Windows 8 – we decided to pick it up and iterate on our design approach. For example, the new approach manifests itself in the design of the icons used in the Studio: no borders, no gradients, no more than a single color. The basic principles are driven by a flat-design approach. The icons are vector-based, which makes them future-proof for the increasingly common high-resolution displays. The following images show the old and the new look of the icons.

 

CoreMedia Icons

Old Icons (Sample)

 

New CoreMedia Icons

New Icons (Sample)

 

Visual Cues

So what is this shadow thing all about? What are shadows used for, and why is it important for us to know? There are two occasions when shadows become important:

  1. When the source and the direction of light are important
  2. When the position of elements on the z-axis is important

For a typical software project, the source of light is of no concern. However, the position of objects on the z-axis is an important cue for the user: it makes the hierarchy or special highlighting of elements visible. Especially when elements such as windows have the focus, shadows can be very useful for that purpose. Have a look at the next pictures: the first window comes with a shadow, the second without. The first window gives the user the feeling that it is only loosely connected to the ground. Users have the impression that they can physically interact with the element – which is called an affordance in UX design.

 

CoreMedia Library with Shadow

With Shadow

 

CoreMedia Library Without Shadow

Without Shadow

A flat design in its most consistent application gets rid of shadows and other elements that, at first glance, are solely decoration. Looking more carefully, however, some of these graphical elements have a function. In the physical world, elements interact with each other in three-dimensional space, and understanding these interactions is a basis for our understanding of the world. Nobody is surprised by the fact that things tend to fall to earth (OK, some are): the laws of physics are deeply integrated into our cognition and are, as such, processed with cognitive ease.

An example makes this evident. The following pictures show a dropdown menu used in CoreMedia Studio. In the first picture there is a blurry shadow around the persona chooser; in the second picture, the same element is shown without any shadow. As you can see, the shadow makes it much easier to distinguish the activated element from the rest of the Studio. Without a shadow, things get more complicated, and they get worse when several elements overlap each other.

UI Element with Shadow

UI Element with Shadow

UI Element without Shadow

UI Element without Shadow

Shadows Help Understanding a User Interface

But is this really necessary? Why is it so important to visualize the position of an element within three-dimensional space? After all, we still mostly run software on devices with flat screens (monitors, cell phones, tablets, …). So do shadows not just add visual clutter to a user interface?
Furthermore, there are popular examples that do not make use of shadows. Consider animated cartoons like South Park: there are neither body shadows (shadows that appear on objects, caused by light) nor object shadows (shadows that objects cast on their environment). So why does this work? In this case it is a movie, so we have the additional dimension of movement and animation that helps the viewer process the visual content.
In a user interface for software, which, in contrast to a movie, is interactive, the user is much more reliant on the affordances of objects. Hence, using shadows to induce affordances by imitating physical interactions is all the more useful.

Seeking Inspiration

What do others do? Let’s have a look at Google’s Material Design: the philosophy is to stay close to the physical world and translate the behavior of physics (light, mass, material, …) into a flat design. Google manages the shadow topic in a very strict way:

Material Design Depth Levels

There is a range of five depth levels (depth 1 is very close to the ground, depth 5 is far away from it). As you can see, the shadow gets bigger and blurrier the greater the distance between object and ground. Depending on its depth level, an object overlaps or is overlapped by other elements (e.g., an object with depth level 3 covers all objects of depth levels 1 and 2, but is covered by all objects of depth levels 4 and 5). Elements that are supposed to sit at the same height as the ground are considered “depth 0”.

Our Solution

We created several variants of adapting the behavior of shadows and came to the conclusion that three depth levels would be enough for our software products.
The shadows are constructed as follows:

CoreMedia Shadow Construction

CoreMedia Shadow Construction

The pictures below show two examples of how the new shadows look, applied to the CoreMedia Studio. You see a component called dashboard with widgets on it in the first picture. In the second picture, an opened free-floating window is shown. The widgets have depth level 1; the window has depth level 3.

Example for Shadows with Depth Level 1

Example for Shadows with Depth Level 1

Example for a Shadow with Depth Level 3

Example for a Shadow with Depth Level 3

Conclusion

Minimalistic design such as flat design eliminates unnecessary visual clutter from a user interface. Carefully applied, it results in a clean look and feel. Still, the elimination of some visual elements is risky. As exemplified by shadows, some visual elements provide cues that help the user understand more easily what is going on. In particular, they allow the designer to inform the user about relations between objects and about potential interactions with these objects.
Visual elements such as shadows carry information that is processed by the user of the interface. Carefully applied, such elements can and should augment a minimalistic design approach.

JUnit-@RunWith: SpringJUnit4ClassRunner vs. Parameterized

If you like Spring but you also like data-driven testing (DDT), you will soon run into a problem: both approaches require you to define a runner with @RunWith – but JUnit allows only one runner per test class.

A blog post by Konrad ‘ktoso’ Malawski actually points to a very interesting approach which is to copy the test-initialization behavior of the SpringJUnit4ClassRunner: @RunWith JUnit4 with BOTH SpringJUnit4ClassRunner and Parameterized.

The solution provided by Konrad uses @Before to set up the TestContextManager that is also used by SpringJUnit4ClassRunner. However, it fails to respect the other life-cycle phases of a test:

  • before test class
  • before test method
  • after test method
  • after test class

To solve these issues you can extend the calls to the TestContextManager to @BeforeClass, @AfterClass, and @After. But to have a reusable pattern, we placed the solution in a JUnit rule. Because it needs to run in both test-class and test-method mode, there is a slight quirk: you have to use the rule as a @ClassRule as well as a @Rule.

Usage in test:

@ClassRule
public static final SpringAware SPRING_AWARE = 
    SpringAware.forClass(SpringAwareTest.class);
@Rule
public TestRule springAwareMethod =
    SPRING_AWARE.forInstance(this);

Having this, you can configure your test almost as if you had a SpringJUnit4ClassRunner (with @ContextConfiguration and such), apart from some exceptions. The following annotations are not interpreted:

There might be more differences – just as always with any copy & paste approach.

You can find the rule and a small example test in a GitHub repository.

Autowiring of @Required spring bean properties

Spring 2.0 introduced @Required, Spring 2.5 added @Autowired, and Spring 3.0 added even more annotation-driven configuration capabilities by introducing @Configuration and support for JSR-330 annotations. So, given a legacy bean of type A whose properties are annotated with @Required, we might want to derive a more modern-style Spring bean using @Named and @Inject. The problem is that we can’t simply extend class A like this:

@Named("fail")
public class B extends A {
    ...
}

By default, Spring’s RequiredAnnotationBeanPostProcessor would raise an exception when processing the bean named "fail". So we should first consider using delegation instead of inheritance – that is, using two separately defined beans as outlined below:

@Named("delegate")
public class B {
    @Inject private A a;
    ...
}

If we can’t resort to delegation for some reason, the following configuration class can solve our problem:

@Configuration public class TestConfiguration {

  @Inject private ConfigurableListableBeanFactory beanFactory;

  @Bean(autowire = Autowire.BY_TYPE)
  public BeanWithAutowiredProperties beanWithAutowiredProperties() {
    // A local post processor that treats @Required properties like
    // @Autowired ones; only this single bean is processed by it.
    AutowiredAnnotationBeanPostProcessor autowiredAnnotationBeanPostProcessor = new AutowiredAnnotationBeanPostProcessor();
    autowiredAnnotationBeanPostProcessor.setAutowiredAnnotationType(Required.class);
    autowiredAnnotationBeanPostProcessor.setBeanFactory(beanFactory);
    BeanWithAutowiredProperties beanWithAutowiredProperties = new BeanWithAutowiredProperties();
    autowiredAnnotationBeanPostProcessor.processInjection(beanWithAutowiredProperties);
    return beanWithAutowiredProperties;
  }

  // This bean definition fails: the @Required property is never injected.
  @Bean public BeanWithAutowiredProperties fail(){
    return new BeanWithAutowiredProperties();
  }

  @Named public static class A {  }

  @Named public static class B {  }

  public static class LegacyBean {

    private A a;

    public A getA() {
      return a;
    }

    @Required public void setA(A a) {
      this.a = a;
    }
  }

  public static class BeanWithAutowiredProperties extends LegacyBean {

    private B b;

    public B getB() {
      return b;
    }

    public void setB(B b) {
      this.b = b;
    }

    @PostConstruct void initialize(){
      if(null == getA()) {
        throw new IllegalStateException("required property must not be null");
      }
      if(null == b) {
        throw new IllegalStateException("autowired must not be null");
      }
    }
  }

}

Note that we’re using a local bean post processor so that the spring lifecycle for other beans is not affected.

Each Voice Enriches Us

Each voice enriches us. – G’Kar, Babylon 5

On today’s Diversity Day (German) I want to share a story with you.

One of our software engineers is hearing-impaired and needs to read people’s lips in order to fully understand what they are saying. The Daily Scrum in front of the taskboard can sometimes be challenging. People point at tasks and at the same time talk about what has changed, then they wave about in the general direction of “done” and mention something that got done, and then ask whether that thing with the build pipeline has occurred again, while pointing towards the seven tasks in “to do” from five feet away. It can be quite hard at times to match the conversation to an actual card on the board.

 

So our colleague asked whether we could introduce a pointer device to the Daily Scrum and hand it around. Now, each speaker in the Daily Scrum takes the laser pointer, highlights the task they are talking about, and then passes the laser pointer on to the next person.

Then the person who initially felt the need for a laser pointer due to impaired hearing went on vacation — and the team kept using the pointer. They admitted that highlighting the tasks helped them all to structure the discussion and to not get lost.

There are a few more very similar examples. Behavior modifications were triggered on behalf of only one person with a disability and then turned out to improve communication for everybody.

The point to drive home is this: What seems like an extra effort and cost factor, serving only people with uncommon needs, can actually be beneficial to everybody. The position of a minority can be the crucial difference that makes a difference.

(PS: The title of this article is a quote from the Declaration of Principles.)

 

My internship at CoreMedia

In the course of 8th grade, one is supposed to do an internship. I am very interested in programming and had also coded on my own before, so I wanted to find out how this works in large companies and whether it fits my interests and the image I had.
Eventually, I was taken in by CoreMedia’s Product Center.
As good as this sounds, I originally looked for a spot at one of Hamburg’s numerous game companies. However, none of them were eager to take inexperienced trainees, so I decided to publish a blog post addressing anyone working in the IT business in search of an internship. Martti was the one who answered and took me in along with another applicant.
Over the course of two weeks, we were introduced to the teams, and not only did we learn about their work, we also got our own task, closely related to it, which gave us a very detailed insight into programming with HTML and JavaScript.
The employees received us in a very friendly and open manner and were eager to help if needed. We were treated as coworkers, and Martti spent a lot of time helping and coaching us in the process of coding our application.
It was great to be at CoreMedia AG; as a trainee I learned a lot about the programming languages used and the overall work ethic. While very informative and fun, it was still exhausting at times, especially when we were stuck on a task. To conclude, as this was supposed to help us determine whether working in this industry fits our interests, we definitely think that it helped to be here at CoreMedia AG. I can imagine myself working in IT in the future.

Forget Commitment, Make Reliable Forecasts

There is a widespread confusion about the meaning and the relevance of “commitment” for teams that develop software according to Scrum. “Commitment” is one of the five Scrum values, as introduced in the book “Agile Software Development with Scrum”. However, since 2011 the word does not appear in the Scrum Guide any more, which has not helped to clear up any ambiguity. Plus, there is no really good translation of the word in German, complicating things even more for me in discussions with co-workers in Hamburg.
So, what is commitment and how can it, or an improved version, be a useful concept? And finally, how can this abstract value be brought to life by actual Scrum teams? I believe it is best to replace “commitment” with a more precise term that addresses more issues, and put that to work.

The Origins

In ye olden times, before the 2011 Scrum Guide, development teams used to commit to a sprint goal or even to a list of user stories. The future is unpredictable, estimates are way off, impediments show up, developers are easily distracted, and managers from outside the team often add to the distraction – nevertheless, the dev team promise really hard to deliver the sprint results the product owner ordered. The scrum master is in place to remove at least some of these issues.
The Agile Manifesto, in contrast, demands that teams be formed by motivated individuals who just give it a go. These individuals already have a basic inclination toward the project goal and sprint goal, and they do not need an additional appeal to their integrity in the fashion of “but you promised!” Also, they are an empowered team and therefore are capable of removing impediments of any kind for the higher purpose of customer value.
A commitment resembles a contract: The dev team promise to deliver a certain service (sprint goal) by a certain date (sprint review meeting). However, there are no provisions in place to reward a successful delivery, or to punish a failed delivery, other than the dev team feeling embarrassed in public when they have nothing to show at the sprint review. Also, only one party is asked to sign it, not both. In fact, Scrum’s commitment is almost, but not entirely different from a contract.

Building Trust

Now that we have had a look at the use of commitment, how is that useful to anyone? Some people seem to believe that, as it is really hard to keep a commitment due to the various slings and arrows of outrageous fortune, it is a wise move to only commit to learning, to adhering to the Scrum process, or to the team, but never to a specific outcome. Most times your commitment will be broken anyway (never mind whether on your account or by adverse circumstances), so pressure will increase, quality will degrade, control will increase, and your team will be in a place they really wanted to avoid.
I disagree and argue that it is rather important to make commitments to outcomes in the general direction of your customers, i.e., to your product owner. Trust is the cornerstone of customer relationships, and not only in the Agile Manifesto. It is also the foundation of the teamwork pyramid in Lencioni’s book “The Five Dysfunctions of a Team”, and we need to see the dev team plus product owner as one team in order to avoid the biggest risk, namely building the wrong product. In order to improve teamwork, you need to build trust, writes C. Avery, and to do that, he advises repeatedly making small promises and keeping them, building a track record of reliability and thereby expanding your freedom. Above all, Avery recommends making only promises you can definitely keep, because one single failed commitment can instantly ruin the positive effect of a hundred kept promises and destroy your client’s trust in you.
To me, a forecast you can count on most of the time is good enough, and close enough to a commitment that one can replace the other. When the dev team trust their own ability to execute and the product owner trusts the developers, they should work together smoothly even when the going gets tough, and resolve issues instead of shifting the blame about. Replacing “commitment” with “reliable forecast” works for me.
The obvious opportunity to repeatedly make and keep promises and build trust are sprints and sprint deliverables. This brings us to the question of how a dev team can actually keep their promises regarding sprint outcomes to the client.

Ready, Willing and Able

Meeting a sprint commitment (or better, “reliable forecast”) depends on three aspects that are easily confused:

  1. The organization must be ready.
  2. The dev team must be willing to do all they can for a sprint success.
  3. They must be able to perform all tasks that might turn out to be necessary to reach the sprint goal.

Many authors, including Schwaber in his 2002 book and this more recent article, focus on the organization as the biggest obstacle to sprint success. A team can only commit to a sprint goal when they are empowered to meet it and to blast through any impediments that might occur, even if they hurt feelings and disturb processes in other parts of the organization. The underlying assumption is that the biggest challenge for Scrum teams is a lack of organizational support when they ruthlessly turn towards customer value. If the organization is not ready to accommodate this change in mindset, the team is doomed from the sprint planning meeting on and is better off not entering any kind of commitment.

Now, on to the willingness of the dev team to work as hard as they can (see, e.g., this article). Asking the team for a commitment to the sprint goal may or may not be helpful. The hope is that, in order to keep their promise, the dev team are now a bit more motivated to focus, to remove obstacles, and to build the important things first. This way, the chances of achieving the sprint result are supposed to be higher than without a commitment. The Agile Manifesto starts out with the assumption that developers are motivated individuals, as discussed above, so there is no need for an additional act of commitment. However, there is the real danger that a lackluster interest in success becomes a self-fulfilling prophecy, whereas a “play to win” attitude can help bring success about. The (fictional) Yoda has (fictional) success with his famous “do, not try” approach. But the best way of getting the dev team’s full attention on the sprint goal is to appeal to their intrinsic motivation. In Appelo’s Management 3.0 (Chapter 5, “How to Energize People”), there is a lot of good advice on how to help people give their best at work. Only one of the suggestions is appealing to people’s sense of honor and integrity by having them make a promise that they are reluctant to break. Asking for commitment is just one of many ways to increase the willingness of the dev team to deliver.

On to the third aspect of keeping commitments (or forecasting reliably): the ability to do so, because the dev team have the necessary time, know-how, tools, and supply from upstream stations in the value stream. This is the classic area of team-level impediments and their removal. It is not enough to have a supportive company and a motivated team; the team also needs to be able to remove all impediments that slow them down. They need to notice them, attack them, and remove them for good. Noticing impediments is a science of its own, but good books have already been written on retrospectives, so this is not the place to dive in further.

Commitment and Sprint Planning

Let me wrap up this inspection of the notion of commitment by suggesting how to use “commitment” or “reliable forecast” in a sprint planning meeting.
The point of a Scrum Master is to address impediments from within and without the team, and to detect looming obstacles on the team’s likely path.

  1. When the organization is not ready to tackle the work in the sprint, the dev team plus Scrum Master must speak up and organize the required infrastructure, helping hands, or authority.
  2. When the team is not willing to run for the sprint goal, it is necessary to notice that not everyone supports the sprint backlog, and to bring possible side agendas or doubts about the general direction to the table before they blow up in your face during the sprint.
  3. When the team is not able to deliver, it is important to raise any concerns right away, whether they concern limited availability of team members, know-how, technical unclarity, or any other risk threatening the sprint’s success.

The idea here is to educate people to act proactively (first habit of Covey) and to assume responsibility for the sprint goal and sprint backlog instead of evading a clear position (Avery).
So, in my role as Scrum Master, I usually ask two questions at the end of the sprint planning meeting: 1. Is this a good plan? 2. Can you do it?

  1. Look at the taskboard. Given this sprint goal and the stories planned here (and other stuff like the product vision on the wall over there). Do you think this is a sensible plan?
  2. Look at the table of who is available in this sprint for how long, at the tasks on the taskboard, at our list of impediments next to the taskboard, the recent sprint velocities in this chart here, etc. In light of that, can you do all the things on the board and thereby achieve the sprint goal, or is it wishful thinking?

The first question serves to draw out any discrepancies between individual goals and the team goal, and bring any doubts about the goal to the surface. Through the second question, I want the team to discuss all risks on the way to that goal, including the question of whether they are simply too optimistic.
These two questions work for me as a replacement for the request for this mysterious commitment, which is, as mentioned above, especially awkward to ask for in German. Also, there is no waterfallish smell of a mini-contract, while the questions of whether the organization is ready and the dev team willing and able to achieve the sprint goal are still addressed.
The sprint may still fail, but I believe that this is the best way to get a forecast that is mostly reliable.


Phantomjs crashes in CI

Lately, we were facing many failed builds because of phantomjs crashing when executing our joounit tests. Grepping through the build logs did not give us much information. All we found was:

[WARNING] PhantomJS has crashed. [...]

Phantomjs did not do us the favor of crashing when we executed the same tests again, not even when testing the same modules. In most cases, phantomjs did not even crash when running the next build. Fortunately, there are many others out there facing similar problems; see https://github.com/ariya/phantomjs/issues/12002, for example. Using the reproducer given in that issue, we derived a wrapper script that automatically retries the tests a few times. We can now replace the phantomjs executable with a wrapper script that evaluates the exit code of phantomjs like this:

#!/bin/sh

BIN=/usr/local/phantomjs-1.9.7/bin/phantomjs
RET=1
MAX=5
RUN=0
until [ ${RET} -eq 0 ]; do
  # forward all arguments unchanged to the real phantomjs
  ${BIN} "$@"
  RET=$?
  RUN=$((RUN + 1))

  # exit immediately after max crashes
  if [ ${RUN} -eq ${MAX} ]; then
    echo "got ${RUN} unexpected results from phantomjs, giving up ..."
    exit ${RET}
  fi

  # allowed values are 0-5
  # see https://github.com/CoreMedia/jangaroo-tools/blob/master/jangaroo-maven/jangaroo-maven-plugin/src/main/resources/net/jangaroo/jooc/mvnplugin/phantomjs-joounit-page-runner.js
  if [ ${RET} -le 5 ]; then
    if [ ${RET} -eq 1 ]; then
      echo "phantomjs misconfigured or crashed, retrying ..."
    else
      # a regular test outcome (possibly a failure): report it as-is
      exit ${RET}
    fi
  else
    echo "got unexpected return value from phantomjs: ${RET}. Retrying ..."
  fi
done

Fortunately, we have set up the joounit phantomjs runner to use exit code 1 only in the rare case that it is completely misconfigured, so any other valid test outcome, e.g., timeout, is still captured correctly.
