
To Manage Agile You Need To Know What It Means

A Conference Report

Last week, the Manage Agile conference was held in a conference hotel near Berlin. It offered a mix of workshops and presentations held by both practitioners and consultants of agile methods. This way, the program balanced concrete success stories with general recommendations, and individual reality checks with reports of recurring patterns.

The talks covered topics like developing an agile organization with an agile culture, scaling agile, the role of managers in agile organizations and their personal development, and the role of HR.

However, Craig Larman struck the right note in his keynote when he set out to define what the word agile actually means (used eight times in this text already since you started reading). His phrase was “to be able to turn on a dime for a dime”, i.e., to be able to quickly and cheaply change direction when circumstances demand it: a changing market, a business opportunity not to be missed, profit to be made.

So, if you want to become an agile company, you have to ask yourself what you are actually aiming for. “Becoming agile” is not a helpful claim if the goal is not clear. For example: among three Agile Coaches I happen to know, there were three different opinions on what they were optimizing their departments for.

What is the goal of your agile transition? A “True North” can be stated even if the path is not clear yet, and it will certainly twist and turn in unexpected ways.

Among the possible goals of an agile transition I heard at the conference were:

  • fast, decentralized decisions
  • independent teams
  • small concept-to-cash time / cycle time / time-to-market
  • delivering results
  • fast delivery, continuous delivery (as part of small cycle time)
  • low cost of changing priorities, little work-in-progress
  • low variation across projects, predictability
  • return on investment
  • customer value
  • low cost of implementing new features, architectural flexibility
  • self-organization, responsibility culture
  • innovation
  • software quality, low technical debt
  • handling complexity
  • promoting and living agile values and culture
  • personal growth and aligning personal and company values

What do you optimize your organization for?

#socrates15: Embracing the Walking Skeleton

Two of the sessions I joined at SoCraTes 2015 were entitled:

  1. The Walking Skeleton by Franziska Sauerwein and
  2. Embrace Failure by Xavier Detant.

Both sessions reminded me of a recent incident in our development team. They were an eye-opener, shedding new light on what I had previously perceived as a miserable failure.

The Incident

The team I am working in was tasked with developing a feature set with a huge architectural impact on the product. Over some weeks the feature set grew, and the feature became alive and kicking. But after a code review we detected some architectural problems which, from the outside, could be perceived as stupidity. As a consequence, we spent one sprint fixing and rebuilding the architecture.

I felt as if we had failed miserably and could not answer the question: “Why did we fail?”

The Walking Skeleton

From Alistair Cockburn (2008):

A Walking Skeleton is a tiny implementation of the system that performs a small end-to-end function. It need not use the final architecture, but it should link together the main architectural components. The architecture and the functionality can then evolve in parallel.

Compared to a spike, which according to Alistair answers the question “Are we headed in the wrong direction?”, a walking skeleton very quickly provides answers to the question “Which impediments do we have to face?” This is where embracing failure becomes important.

Embrace Failure

Embracing failure means accepting the fact that all of us will fail at some point, and taking failure as a chance to learn instead of struggling with it.

To me, the advantage of embracing failure is that decisions can be made fast, because failing is not a problem but a chance to learn.

Embrace the Walking Skeleton

Thus, to me, both mindsets are about speed. While you get your skeleton walking, it will fail. And it is good that it fails, because you will learn from it, help the skeleton up, and continue. I would even argue that you should push your skeleton in directions where it will fail even faster, just to learn faster.

So give some free hugs to the skeleton :-)


After these two sessions it was totally clear to me that it was not important why we failed. It was simply good that we failed and that we learned from it. Without having named it that way, we had developed a walking skeleton and added flesh to the bones bit by bit. We mastered several challenges, such as architectural changes to the existing code, and we learned a lot in a short amount of time. The failure I perceived was simply a result of being fast.

In the end, my summary is that we could not have done better. Without the walking skeleton we would have needed weeks of planning, sketching, and perhaps spikes. With the skeleton, the final architecture evolved quite naturally, as all the knowledge we had been missing was now at hand.

And now? We are just about to send another skeleton walking around…

Shape Your Blueprint – Only Use What You Need

by Jens Dallmann and Daniel Straßenburg

Extendable Blueprint

The CoreMedia system is delivered to customers with the Blueprint workspace as the entry point for customizations. Technically, it is a Maven project which aggregates core features into functional feature sets. To represent a functional feature set, an extension mechanism has been implemented. Each extension is self-contained and can be used out of the box.

In order to manage the set of extensions shipped with the Blueprint, the CoreMedia Blueprint Maven Plugin was developed. This Maven plugin can be used to modify the extension set in the Blueprint workspace. To modify the extensions contained in the CoreMedia components, the plugin performs POM modifications, i.e., the feature set is determined at build time.

While developing the Blueprint at CoreMedia, it is beneficial to be able to select a certain feature set at build time in order to build and deploy a system. Such a system can then be tested, verifying that the given extensions can coexist and run without undesired dependencies on other extensions.

From a customer’s perspective, managing the shipped extensions is helpful if certain extensions are not required in a project and should therefore not be enabled. In this case, the extension’s feature should not be enabled in production, and its code base should be neither compiled nor packaged at build time. In other words, removing an extension is a relevant use case if projects are based on a feature-rich initial workspace.


The modularization brought out two important facts:
1. Features building upon an extension must be removable as well. Otherwise, the removal of the underlying extension might break the build due to unresolvable dependencies. Such features are therefore designed as extensions themselves, in order to make them removable.
2. The Blueprint contains example data in the form of example CMS content. Some example data belongs to a specific extension only. Importing this data while the extension is inactive can lead to errors during import. To avoid this, the example data is partitioned, and the assembly process is designed so that only content from enabled extensions is considered.

Example of Use: Removing a Certain Extension

An extension is identified by its extension descriptor. The extension descriptor is a Maven POM which is part of the extension and contains the dependency management of the extension’s modules.

The extension descriptor is used to enable or disable the extension in the Blueprint. An extension is enabled if the extension descriptor POM is imported into the dependency management of the Blueprint project.
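As an illustrative sketch, such an import is a plain Maven dependencyManagement entry of type pom with scope import. The coordinates are the example ones used further below; the exact POM layout in the Blueprint may differ:

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.coremedia.blueprint</groupId>
      <artifactId>my-extension</artifactId>
      <version>1.0</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>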


The CoreMedia Blueprint Maven Plugin can be used to remove this extension. To do so, the remove-extensions goal has to be executed. The extension to remove is referenced by the parameter “coremedia.project.extensions”, which takes the Maven coordinates of the extension descriptor.

In the above example, the Maven call to remove the extension looks like this:

mvn com.coremedia.maven:coremedia-blueprint-maven-plugin:remove-extensions -Dcoremedia.project.extensions=my-extension

The notation my-extension is a short form of com.coremedia.blueprint:my-extension:1.0. Maven allows omitting the groupId and version if they are identical to those of the current project.
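Spelled out with the full coordinates, the same call would read:

mvn com.coremedia.maven:coremedia-blueprint-maven-plugin:remove-extensions -Dcoremedia.project.extensions=com.coremedia.blueprint:my-extension:1.0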

The result of this call is a Blueprint workspace with modified POMs in the extension-config module as well as in the root pom.xml file. The extension has been removed.


The extensible design of the Blueprint workspace enables developers to create loosely coupled extensions. Every extension is self-contained and brings the required logic and data with it.

The Blueprint itself is released with numerous extensions. Users are able to select only those extensions they want to deploy; the others can be removed using the CoreMedia Blueprint Maven Plugin. This avoids maintaining dead code, reduces build time, and allows you to shape your Blueprint to your needs.


CoreMedia Blueprint Maven Plugin

#socrates15: Ext JS 5 Tests with Selenide

Just as always (well, it was only the second time for me), SoCraTes 2015 was just great, and the workshop I joined on Sunday emphasized it: getting in touch with Selenide, hosted by Alexei Vinogradov.

Selenide – jQuery for Java

Having known Selenium for a long time, getting started with Selenide was a piece of cake, and it was great to feel how much easier Selenide is to use. I can really recommend it, especially if you know jQuery and want a similar feel for accessing elements. But Selenide provides far more (I suspect even far more than I learned in that session).

Accessing Ext JS through Selenide

Not knowing Ext JS by heart, but at least having experience in accessing Ext JS (the old version 3) from automated UI tests, I was curious how I would be able to access the components using Selenide.

Previous Experiences

As presented at SoCraTes 2014, we use Java wrappers (aka proxies) to represent the Ext JS components. The advantages we see in this approach:

  • Access: Ext JS knows much better than any DOM-path-navigation how to locate components.
  • Hierarchy: Mirroring the component class hierarchy in Java helps us spread the knowledge of how to access component state through the hierarchy, and to override it where a specialized component requires a different approach, for example to determine whether it is visible.
  • Update: With the wrappers, updating Ext JS versions is not exactly a piece of cake (at least when going from Ext JS 3 to Ext JS 5+), but it is feasible: you just have to update some central wrappers, and all UI tests work again without even touching the tests themselves.
  • Fixing: The same holds for fixing issues in the UI tests: the knowledge of how to do a robust drag and drop, for example, is hidden deep within the wrappers, and it is enriched with workarounds each time we learn how to make it even more robust. Again, it is just one small change, and with a finger snap all drag-and-drop tests behave much better.

Adapt for Ext JS 5 and Selenide

During the one-day workshop, I took up the challenge of adapting this concept to Ext JS 5 using Selenide. What I learned from this proof of concept:

  • Locating Components: Ext JS now has a ComponentQuery which eases accessing components a lot. It feels similar to the jQuery syntax.
  • Change SUT and Tests: As always, it would be good to have control over the software under test (SUT) as well as over the tests. Otherwise, as you can see in the PoC, you will miss clear IDs (or item IDs) for accessing elements. Since I tested against the examples hosted by Sencha, I had no control over the SUT.
  • Selenide vs. StaleElementReferenceException: If you write UI tests with Selenium, you know it for sure: the StaleElementReferenceException. In AJAX applications it is quite normal for the DOM to change, and when it does, elements you accessed through Selenium WebDriver before are most likely no longer available. The Selenium documentation recommends simply refreshing the reference. This is where some of Selenide’s magic happens: it does the trick for you, automatically refreshing the reference transparently for the test (see the sketch after this list).
    By the way: while doing the PoC, not yet knowing this behavior, I ran into a deadlock when I used an implementation that goes through Selenide → Selenium → Selenide (I used Selenide inside a By instance). I did not analyze this in depth, but the observation was that the UI test was simply stuck locating the element.
  • Local Browser Start: Starting a browser locally is a pain in Selenium WebDriver, especially if you want to support several browsers. Selenide provides this out of the box, including connecting to a Selenium WebDriver Grid. See the FAQ for details.
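To give an impression of the resulting style, here is a minimal sketch of how the two pieces fit together. The example page URL and the component query are illustrative assumptions, not the actual PoC code:

import org.openqa.selenium.By;
import static com.codeborne.selenide.Condition.visible;
import static com.codeborne.selenide.Selenide.*;

// Open an Ext JS example page (URL assumed for illustration).
open("http://examples.sencha.com/extjs/5.0.0/examples/grid/array-grid.html");
// Let Ext JS itself locate the component via ComponentQuery ...
String gridId = executeJavaScript("return Ext.ComponentQuery.query('grid')[0].getId();");
// ... and hand the resulting DOM id to Selenide. Selenide re-resolves the element
// on every access, so stale references are refreshed transparently.
$(By.id(gridId)).shouldBe(visible);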

To give it a try, just check out the PoC from GitHub and either start the build via Gradle (gradlew test) or run it right from within your IDE.


The proof of concept was up and running within about two hours. It took less than a day (with helpful hints from Alexei, thanks for that!) to get the whole PoC to a state that gives an idea of how we might continue from here.

I definitely recommend giving Selenide a try to anyone testing “normal” web pages, as the jQuery-like syntax makes it a piece of cake to access elements from within your UI tests. But I also recommend giving Selenide a try for rich web applications, for many reasons: not least because the assertion approach, using (possibly custom) conditions in should() statements (not otherwise covered in this blog post), promises additional flexibility for checking, for example, required component states in Ext JS.

SoCraTes’ Magic

There is so much more to say about SoCraTes, not least that I will participate again next year and that I have to thank the organizers a lot! But I think this video shows best that there is some magic going on in Hotel-Park-Soltau (the great location where SoCraTes took place for the second time):


Four Perspectives for Retrospectives

As a Scrum Master or Agile Coach, your job is to help the team work, and to that end you surface and remove impediments in daily stand-ups and retrospectives. Is that all there is?

Well no, or this article would be over right now. Depending on the situation of the team, it makes sense to point their attention to a specific aspect of their way of working. With these changes of perspective, it is possible to continue running interesting retrospectives and to keep a seasoned team on the improvement path.

For planning the next retrospective, I offer the following four perspectives on work. Choose one that was neglected recently, or one that people have trouble with, thereby steering the discussion towards the best chance of improving the team.

1. Efficiency: Impediments and Delivery

Smooth and efficient work towards a sprint goal is an obvious indicator of success for a team. Every moment of the day, every team member feels whether they are stumbling or making smooth progress. Without the feeling of progress, everything else (great team spirit, individual happiness, corporate benefits) is hollow.

Discussing the feeling of accomplishment may soon morph into a review of the team’s deliverable. Code quality, non-functional requirements, etc. may come up.

Example: A hidden vote (5-star or traffic lights) on the four dimensions of sprint deliverable functionality, sprint deliverable quality, work efficiency, and stress level. Or check the Retr-O-Mat for ideas.

2. Collaboration: Team Routines and Rules

The next perspective on work is the way work works, as opposed to “how fast and smooth” in the section above. Many methods stress the visualization of intangible knowledge work through tools like task boards, avatars, or build monitors. These visualizations (or lack thereof) provide opportunities for reflection. Discussion-worthy are all kinds of process agreements like the way of applying the visualization tools, the team rules (such as a Definition of Done), how and when to hold meetings, and many more.

I like to kick off a new team by establishing the first ground rules and then updating them in the first retrospectives. And even for seasoned teams, my experience is that an explicit review of the team rules is useful every now and then. Sometimes dissatisfaction hides under what looks like team consensus.

Examples: Set aside a retrospective to place thumbs-up/thumbs-down stickers on the task board to indicate what works and what needs to be discussed. Or have the team rank all team meetings by usefulness, then discuss how to abolish or reshape the worst meetings and learn from the best. One could also ask people to mark all days in the past sprint they spent mainly pairing vs. alone at their desks.

3. Why: The Purpose

A team needs, by definition, a common purpose in order to exist. (Otherwise it is just a group of people.) Usually the team coach defers the purpose thing to the product manager / product owner. However, it is well within the coach’s authority to help and challenge the product owner on setting clear goals and criteria for success. Even the customer might not be obvious, especially in teams providing services to other teams.

Example: Gathering data in a matrix along the lines of “which valuable things have we created / for which customer?” helps the team discuss their customers and priorities.

4. Who: The Individuals

Many thinkers on teamwork and motivation highlight the importance of the interests of the individuals. It might be useful to discuss team members’ motives every now and then, e.g., when a new goal is given out, team membership changes, or performance appraisals approach. I recommend running this kind of retrospective infrequently, as motives tend to change slowly and will not yield new insights if examined too often.

Examples: This is the field of moving motivators, Belbin’s team roles, personality poker, and other games that trigger discussions of what people prefer to do, and how they like to work.

In addition to personal motives, the individual skills of team members are relevant. Especially when knowledge bottlenecks occur, i.e., when tasks are blocked due to a missing expert, a retrospective discussing required and available skills is in order. Beyond closing the immediate knowledge gap, the discussion might turn into a general review of your company’s learning initiatives. Are people satisfied with the development of their skills and with the support they receive in doing so? What can they do, and what could the employer do, to improve the situation?

A discussion of learning helps to highlight the growth/expert culture in your company in order to develop the employees and ensure long-term success of the team.

Other Models of Teamwork

Christopher Avery asks you to hold five conversations in order to build a team: about the purpose, individual motives, team rules, goals and performance, and the resources of team members. These conversations are facilitated through retrospectives according to the above four perspectives.

Lencioni’s five-layered pyramid of teamwork is also covered in the above four perspectives. His topics are building trust, surfacing conflict, deciding effectively, keeping agreements, and delivering results.

Flexible Stencils with Tables in OmniGraffle


Once you get the hang of them, tables will become the hammer to almost every nail you encounter in OmniGraffle. Recently, I have begun using complex tables to create very flexible stencils that adapt to almost every situation I use them in. If you are new to the topic, check out my post about using tables in OmniGraffle for prototyping.

This post will teach you how to use tables to create

  1. flexible toolbars, and
  2. resizable text areas with scroll bars.

Here is an OmniGraffle file that contains every step below.


Toolbars or button groups can be easy and flexible if you know how to bend tables to your will. Here is what I do:

A rectangle

Start out with a rectangle.

A table

Make a table out of the rectangle by pressing ⌘+Shift+T.

A table with five cells

Create some cells, one for each button.

A table, one cell is an arrow

Here is a neat trick to create toolbars or button groups with rounded corners. I learned it from the awesome bootstrap stencils for OmniGraffle. Select the leftmost cell and turn its shape into an Adjustable Arrow.

A table, one of the cells is an arrow with rounded corners

Set a border radius for this cell. In this case, five.

A detailed look at how to create rounded corners for a table using an arrow

Move the little handle all the way to the edge of the arrow. And there you have it! The corners are rounded.

A table, the lefthand side has rounded corners

The half-finished product.

A table with rounded corners on the left and an arrow on the right

Do the same for the rightmost cell.

Except you have to flip it. Don’t use rotation but the little “Flip Left/Right” button right next to the rotation control.

A table with rounded corners

Set the border radius and move the little handle to the edge. The table now has rounded corners.

A table with rounded corners and a background color

I set a background color for the whole table and turned off the strokes.

An icon next to the table

Let’s fill the toolbar with icons. This icon came from my stencil library. Select it and then cut it.

The table filled with icons

Double-click on a single table cell and paste the icon. I repeated this step for some other icons. Pasted icons behave similarly to text.

The table now looks like a toolbar

The divider line is just that: a grey line.

The toolbar has some more buttons now

You want to add some more functions in the middle of the toolbar after the fact? This is where using a table pays off. Select a single cell and use the hotkey ⌘+⌥+↵ (Command + Alt + Return). There is also a menu entry that does the same: Edit → Tables → Insert Columns.

The toolbar is almost finished now

I added some more icons and another divider line.

The finished toolbar

Sometimes, individual buttons have to be a little wider. In this case, one of the entries opens a menu so I added a little arrow and made the individual cell wider. I created a new star-icon with less opacity and replaced the old star-icon with it.

Text Areas with Scroll Bars

For high fidelity prototypes, some realism is needed. Scroll bars in text areas can make the difference. Now, you could just paint scroll bars onto a rectangle and be done with it. That has some disadvantages though, mainly that you can’t easily resize the text area now without also having to move the scroll bar. Tables offer a solution. Here is how I made a text area stencil for our stencil library at CoreMedia:


Start with a simple rectangle. It doesn’t have to have a border but I left it because otherwise it would be invisible.


Make a table out of the rectangle by pressing ⌘+Shift+T.


Create nine cells, eight for the borders and the middle one as the text field proper.


Fill the border cells with your border color and remove all strokes.


Select the individual border cells and resize them to your desired border width using the object inspector.


Create the background for the scroll bar by adding a new table cell next to the inner cell. To do this, select the inner cell and use the hotkey ⌘+⌥+↵ (Command + Alt + Return). There is also a menu entry that does the same: Edit → Tables → Insert Columns. The new cell will already be selected so you can go ahead and give it a background color and a width that suits your needs.


The scroll handler you see here is just a rectangle with rounded corners. Create one and copy it using ⌘ + C.


Double-click the scroll bar cell to enter its edit mode. Paste the scroll bar using ⌘ + V. You will probably see nothing; let’s fix that: select the scroll bar cell again (i.e., leave the edit mode) and decrease its padding to 0 in the type inspector. Justify the text and top-align it.


Rich text areas in CoreMedia are resizable in height. To indicate this, they have a handler at the bottom. You might not need this, but for my stencils, I added the handler by appending four more rows, giving each a height of one and making two of them white.


Add some text!


I placed the rich text area in a group and added a toolbar. As you can see, the text area is way too small! Let’s resize it by selecting and resizing the inner text area cell (not the whole table!). The scrollbar and bottom handler will adapt automatically.


Tables might be a little more work, but the effort pays off in stencils that you use every day.

I hope these suggestions are useful for your stencil library. I have created an OmniGraffle file that contains every step. If you have more tips and tricks for OmniGraffle stencils, be sure to comment below.

Mobile-Friendly, Please! Mobilegeddon, Single-Page Websites, and PageRank

Since April 21, 2015, Google’s search algorithm has ranked websites that are not optimized for smartphones lower in mobile search queries. Although Google has been pushing optimization for mobile devices for years and this development was foreseeable, the feared “Mobilegeddon” has now hit providers in the DACH region such as (-45% visibility in mobile search), XING (-33%), and the drugstore chain DM (-23%). A complete overview of the winners and losers is available here.

The most important criteria Google applies to a mobile-friendly website are: avoiding software that is not suitable for mobile devices (such as Flash), rendering text large enough to read without zooming, adapting the size of the content to the respective screen (responsive design) so that users do not have to scroll horizontally or zoom, and leaving enough space between links so that users can tap them easily. Google offers a free test for this.

Single-Page Instead of Multi-Page?

Although Google does not explicitly demand it, one thing is certain: the 3-click rule no longer applies. Mobile users in particular want to reach the information they are looking for as quickly as possible, ideally without any clicks. This is why single-page websites, often called “longscrollers”, are so fashionable among web designers. And they are extremely mobile-friendly: users no longer have to hunt for menu links but can reach any piece of information simply by scrolling vertically, while the presentation adapts responsively to the mobile device.

But there are also drawbacks. Enter SEO. If all the content of a website sits on one or a few pages, all defined keywords have to appear on those pages. If, for example, a company offers several products and has defined distinct keywords for each of them, Google’s bots will have a hard time understanding what the focus of the website is, and will rank it poorly as a result. Furthermore, the more content a page contains, the slower it will load, especially with the increasing use of rich media.

Single-Page and PageRank

Google ranks pages using the PageRank algorithm, named after its inventor Larry Page. The more links point to a page, the higher its weight; and the higher the weight of the linking pages, the greater the effect. What is evaluated, in other words, is the relevance of content and websites to the searcher.
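In its classic, simplified form (as published by Brin and Page; today’s ranking has long since evolved beyond it), the idea can be written as:

PR(A) = (1 - d) + d * ( PR(T1)/C(T1) + … + PR(Tn)/C(Tn) )

where T1 … Tn are the pages linking to A, C(Ti) is the number of outbound links on page Ti, and d is a damping factor, typically set to 0.85.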

A single page can increase the relevance of your primary keywords, but it diminishes the weight of subtopics, which would be easier to find on dedicated pages. Google’s Hummingbird update is interesting in this context: it aims to connect the meaning of a search with relevant documents instead of matching search terms against words on a page. If you use only one page to describe your company, all your products, and all your topics, how relevant can it still be for each individual section?

My Recommendation: A Compromise

If you already run a multi-page website, you should not turn it into a single-page website (Tom Schmitz advises the same). Check the mobile-friendliness of your site with Google’s test and make mobile-optimized adjustments where necessary. For the home page and important topic and category pages, I also recommend longscroller pages as a complement to the mobile-friendliness of your website. The trend is moving away from relaunches towards continuous measuring and improving; completely mobilizing your entire web offering is not strictly necessary.

If this topic interests you and you would like further insights, have a look at the recording of our webinar “Googles Mobile-Friendly Update – was müssen Sie als Marketer beachten?” (“Google’s Mobile-Friendly Update: what do marketers need to know?”), which we held together with our partner T-Systems Multimedia Solutions.

XML, Java, Unicode, and the See-No-Evil Monkey


The CoreMedia CMS stores quite a lot of data in XML: rich text, configuration options, page attributes. XML is quite mature, and it comes in handy that it supports the full range of Unicode characters for managing sites throughout the world. Since the backend is developed in Java, we rely heavily on the XML processing facilities built into Java.

Enter the see-no-evil monkey, or rather its Unicode incarnation. It is joined by its fellow Unicode characters that did not fit into the base plane of 65536 characters, like various Chinese symbols: the so-called supplementary characters. The problem is that the Xerces XML parser built into Java has a bug in handling supplementary characters.

Identified as JDK-8058175, the bug causes random characters to be inserted when a supplementary character is encountered in an attribute value. This is not just annoying (for example, padding a comment with junk characters just because the user chose to include an emoji). It can actually be a security problem, because the inserted characters stem from an uncleared buffer, which might contain secret information or data for a cross-site scripting (XSS) attack.

The bug will be fixed in JDK 9, but that is not available yet, and it will take a long time before we can discontinue support for older JDKs on all platforms. The bug has long been fixed in current Xerces versions, but replacing the Xerces built into the JDK with a newer version is notoriously tricky, especially when running in application servers, which tend to have their own opinion about class loading order. You may want to have a look at this nice Stack Overflow question for the problem and a general idea of why we do not want to tweak the Xerces version for every installation.

So we had to develop a workaround. Because the bug is hidden deep inside Xerces, we can only preprocess the XML file to avoid the erroneous behavior. At its core, the workaround is deceptively simple: replace each supplementary character with an equivalent character reference, which Xerces happens to process without problems. In the completed fragment below, output is assumed to be a StringBuilder collecting the preprocessed character stream:

if (escape && Character.isSupplementaryCodePoint(currentCodePoint)) {
    output.append("&#x").append(Integer.toHexString(currentCodePoint)).append(';'); // emit a numeric character reference
} else {
    output.appendCodePoint(currentCodePoint); // copy the character unchanged
}

The difficulty is, of course, to determine whether supplementary characters need escaping at a given position in an input stream. Escaping would be unnecessary in a comment and incorrect in a tag name. That means we have to parse the XML file at least to the level that makes it possible to determine whether the character currently being processed belongs to an attribute value. The XML specification is restrictive enough to allow just that distinction by keeping track of the current type of grammatical construct (comment, CDATA, tag, …) and looking for a small number of delimiting character sequences. A hand-written parser with finite lookahead will do.

Now the changed XML file has to be presented to Xerces in a convenient way. This is done by a modified SAX InputSource, which hides the original stream and always returns a corrected character stream to the XML parser. The XmlStreamReader from the Apache Commons IO package came in handy for inferring the encoding of byte streams; this is normally also done by Xerces, but it has to be moved into the InputSource to be able to detect supplementary characters in arbitrary encodings.

The final result is the FullUnicodeInputSource, which is a drop-in replacement for the original SAX InputSource. It is available in source form on GitHub for your convenience. Though it is provided as a Maven project, we do not provide a pre-built release at this early point.
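A usage sketch, assuming the workaround wraps the raw byte stream (the exact constructor is an assumption for illustration; being a drop-in replacement, everything else follows the standard SAX pattern):

import java.io.FileInputStream;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.InputSource;
import org.xml.sax.XMLReader;

// Sketch only: the constructor taking an InputStream is an assumption;
// as a drop-in replacement, the rest is the usual SAX parsing setup.
XMLReader reader = SAXParserFactory.newInstance().newSAXParser().getXMLReader();
InputSource source = new FullUnicodeInputSource(new FileInputStream("content.xml"));
reader.parse(source);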

On a more general level, it is worth remembering that a char in Java is not a character. It used to be when Java was invented, but today it just isn’t: it is an item of a UTF-16 representation of a character string. Still, Java has had a lot of support for handling all modern Unicode versions ever since JSR-204 took care of the problem. It is worth having a closer look.
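A quick illustration with the see-no-evil monkey itself:

String monkey = "🙈"; // U+1F648 SEE-NO-EVIL MONKEY, a supplementary character
monkey.length();                               // 2: two UTF-16 chars (a surrogate pair)
monkey.codePointCount(0, monkey.length());     // 1: but only one Unicode code point
Character.isSupplementaryCodePoint(monkey.codePointAt(0)); // true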

So all is well? The much nicer solution would be to get the fix for the original bug included in the maintenance releases of previous Java versions. That fix would be quite literally one thousandth of the size of the workaround. But until then, we cannot play see-no-evil monkey and pretend the problem is not there. Or refuse to listen and hush things up, like the hear-no-evil monkey and the speak-no-evil monkey that might suddenly pop up in XML attributes when their sibling is being processed.

Re: Redesigning Hamcrest

Jacob Zimmerman just wrote an interesting post, “Redesigning Hamcrest”. There is very little to add, but we have gathered some experiences with Hamcrest as well.

Just in case you do not know it: Hamcrest is a library for comparing expected and actual values that provides nice descriptions if the comparison fails. While it was originally written in Java, offshoots exist in PHP, Objective-C, Python, and Ruby.

JUnit meanwhile relies heavily on Hamcrest, not only in simple assertions (assertThat) but also in JUnit Rules like the ErrorCollector, which collects validation results and makes the test fail if any of them failed.

Our framework for UI tests (for our rich web application) also relies heavily on Hamcrest. We have combined it with our Wait-Pattern to wait for the UI to reach a certain state, as described, for example, in my blog post Death to sleeps! Raise of Conditions!

Having it integrated so deeply we also realized some of the shortcomings Jacob mentioned:

  • We also rely only on the TypeSafeMatcher, so for us the Matcher interface itself is obsolete.
  • While we are not yet using Java 8 for development, we have also experienced the clash of Predicates and Matchers. For now we use the Predicates provided by Guava. And to combine both worlds we don’t have an extra LambdaAssert class like Jacob, but a PredicateMatcher which wraps a predicate and makes it a matcher (a sketch follows this list).
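A minimal sketch of such a wrapper; the name PredicateMatcher is from our code base, but the body shown here is illustrative and our actual implementation differs in detail:

import com.google.common.base.Predicate;
import org.hamcrest.Description;
import org.hamcrest.TypeSafeMatcher;

// Wraps a Guava Predicate so it can be used wherever a Hamcrest Matcher is expected.
public class PredicateMatcher<T> extends TypeSafeMatcher<T> {
    private final Predicate<? super T> predicate;
    private final String expectation;

    public PredicateMatcher(Predicate<? super T> predicate, String expectation) {
        this.predicate = predicate;
        this.expectation = expectation;
    }

    @Override
    protected boolean matchesSafely(T item) {
        return predicate.apply(item);
    }

    @Override
    public void describeTo(Description description) {
        description.appendText(expectation);
    }
}

It is then used like any other matcher, e.g. assertThat(value, new PredicateMatcher<>(myPredicate, "a valid value")).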

And one additional flaw we found: Matchers do not remember their state. While I would not recommend changing Matchers to remember state, users of Matchers (like JUnit) make a wrong assumption when building the failure description: they assume that the object under test did not change between validation and error report. But as soon as it comes to integration tests (and especially UI tests), it is very likely that your objects change between the comparison and the generation of the error report (for example, the UI element that was missing when the check was performed suddenly appears while the error description is being built). Therefore we keep the state in our Conditions (part of the Wait-Pattern, not to be confused with Hamcrest’s Conditions).

All in all, we love Hamcrest, but a redesign makes sense, especially with the rise of Java 8’s lambdas.

Web Design Trends 2015: Hands Off?

The story of this article began with a comment on a post on Christian Reichel’s “Service Thinking” blog; the comment then grew too long for the comment field… In his interesting post “Gefährlich, gefährlicher, Webdesign-Trends” (“Dangerous, more dangerous, web design trends”), Christian argues for a cautious approach to web design trends such as responsive design and material design, which a t3n article predicts for 2015. Of course, a cautious approach to design fashions and trends is usually a good idea, not least for the sake of usability. However, there is one more important aspect, concerning in particular the fast pace of design trends and, with it, the sustainability of websites.

Of course, not everything that is new is automatically good! But things can become “good” in the sense of intuitively usable when they are used a lot. And, as this post argues, web design fashions and trends are not independent of a lasting, valid standard; they strongly influence what the current standard is.

How Design Trends Become Design Standards

While the basic principles of human perception do not change, standard solutions do evolve and thus come to be understood by a growing share of the target audience. Take, for example, the combination of off-canvas navigation and the “hamburger” icon as a navigation solution. Five years ago, this interaction pattern would have been flagged as problematic in (presumably) every usability test; many users simply would not have understood what the icon meant. Today, at least for regular smartphone users, it is no longer a problem. The interaction pattern and the hamburger icon as its symbol have been conventionalized through widespread use.

Christian Stetter, professor emeritus of linguistics and communication science at RWTH Aachen, calls such processes, which also drive the evolution of languages, “beaten path” processes. A nice image, I think: at some point, somebody takes a shortcut across a green meadow, and through the people who follow, a path eventually forms that offers itself as a shortcut to passers-by. The same happens with the systems of signs and concepts we use to interact with technical systems.

An off-canvas navigation

Everything Has Pros and Cons

Of course, there are still disadvantages compared to other kinds of navigation. For example, it takes users more effort to navigate to areas that are not placed directly on the app’s start screen or the website’s home page, so the risk is high that users overlook functions that might be useful to them. On a large display, the control, when placed at the top as usual, is also hard to reach when the device is operated with one hand. Such disadvantages are indeed rooted in human psychology and ergonomics and will not change. But they are offset by advantages, for example simply more room for relevant content, even on small screens. Weighing (and testing) these disadvantages against the advantages can only be done case by case, with information about the context.

Trends Can Change the Standard

While certain principles do not change, the share of users in a given target group who know particular symbols and interaction patterns does change. These users understand such patterns and symbols immediately and do not have to learn them first. Underlying this is the psychological principle that Daniel Kahneman, loosely quoted, calls “familiarity breeds understanding”: we humans understand things better, and even grow to like them, when we are repeatedly exposed to them. The music industry exploits the same principle: the heavy rotation of the latest top 20 hits on the radio makes listeners, averaged across the population, like those hits better.

So while certain basic principles of human perception (for example, Gestalt laws or the capacity of short-term memory) do not change, the “state of the art” does feed back into the usability of websites and applications, because the standard patterns change over time. And in the digital age, they change very quickly. One reason is that technical capabilities change: animations, such as those anchored in Google’s Material Design, only became possible with powerful hardware.

Understanding Trends and Assessing Their Staying Power

Since what is considered standard changes very quickly, designers and developers should take a very close look at new trends and fashions and understand them. In particular, it is important to analyze whether a trend has perhaps already become a new standard. Only this makes it possible, in the context of a project, to weigh pros and cons, to design with the future in mind, and to be innovative, without chasing every trend.

