
Please Be Mobile-Friendly! Mobilegeddon, Single-Page Websites, and PageRank

Since April 21, 2015, Google has changed its search algorithm so that, in mobile searches, websites that are not optimized for smartphones are ranked lower. Although Google has been pushing mobile optimization for years and this development was foreseeable, the feared "Mobilegeddon" has now hit providers in the DACH region such as wissen.de (-45% visibility in mobile search), XING (-33%), and the drugstore chain DM (-23%). A complete overview of winners and losers is available here.

The main criteria Google applies to a mobile-friendly website are: avoiding software that does not work on mobile devices (such as Flash), large text rendering (readable without zooming), sizing the content to the respective screen (responsive design) so that users do not have to scroll horizontally or zoom, and sufficient spacing between links so that users can tap them easily. Google offers a free test for this.

Single-Page Instead of Multi-Page?

Although Google does not explicitly require it, this much is certain: the three-click rule no longer applies. Mobile users in particular want to reach the information they are looking for as quickly as possible, ideally without any clicks. That is why single-page websites, often called "long scrollers", are currently so fashionable among web designers. And they are decidedly mobile-friendly: users no longer have to hunt for menu links but can reach any piece of information simply by scrolling vertically, while the layout adapts responsively to the mobile device.

But there are downsides as well. Enter: SEO. If all the content of a website lives on one page or just a few pages, all defined keywords have to appear on those pages. If a company offers several products, for example, and has defined different keywords for each of them, Google's bots will have trouble understanding what the focus of the website is, and will rank it accordingly poorly. In addition, the more content the page contains, the slower it will load, especially with the growing use of embedded rich media.

Single-Page and PageRank

Google rates pages using the PageRank algorithm, named after its inventor Larry Page. The more links point to a page, the higher its weight; the higher the weight of the linking pages, the greater the effect. What is being rated, then, is the relevance of content and websites to the searcher.

A single page can boost the relevance of your primary keywords, but it dilutes the importance of subtopics, which would have an easier time being found on dedicated pages. Google's Hummingbird update is interesting in this context: it aims to connect the meaning of a search with relevant documents instead of matching search terms against words on a page. If you use only one page that describes your company, all your products, and all your topics, how relevant can it still be for each individual section?

My Recommendation: A Compromise

If you already run a multi-page website, you should not turn it into a single-page website (as Tom Schmitz also argues). Use Google's test to check how mobile-friendly your site is and make mobile-optimized adjustments where necessary. For the home page and important topic and category pages, I also recommend using long-scroller pages to complement your website's mobile-friendliness. The trend is moving away from relaunches towards continuous measurement and improvement; a complete mobile overhaul of your entire web presence is not strictly necessary.

If this topic interests you and you would like further insights, have a look at the recording of our webinar „Googles Mobile-Friendly Update – was müssen Sie als Marketer beachten?“, which we held together with our partner T-Systems Multimedia Solutions.

XML, Java, Unicode, and the See-No-Evil Monkey


The CoreMedia CMS stores quite a lot of data in XML: rich text, configuration options, page attributes. XML is quite mature, and it comes in handy that XML supports the full range of Unicode characters for managing sites throughout the world. Since the backend is developed in Java, we rely heavily on the XML processing facilities built into Java.

Enter the see-no-evil monkey, or rather its Unicode incarnation. It is joined by fellow Unicode characters that did not fit into the basic plane of 65,536 characters, like various Chinese ideographs: the so-called supplementary characters. The problem is that the Xerces XML parser built into Java has a bug in its handling of supplementary characters.

Identified as JDK-8058175, the bug causes random characters to be inserted when a supplementary character is encountered in an attribute value. This is not just annoying, as when a comment is padded with junk characters just because the user chose to include an emoji. It can actually be a security problem, because the inserted characters stem from an uncleared buffer, which might contain secret information or data for a cross-site scripting (XSS) attack.

The bug will be fixed in JDK 9, but that is not available yet and it will take a long time before we can discontinue support for older JDKs on all platforms. The bug is long fixed in current Xerces versions, but replacing the Xerces built into the JDK with a newer version is notoriously tricky, especially when running in application servers which tend to have their own opinion about the class loading order. You may want to have a look at this nice Stack Overflow question for the problem and a general idea of why we do not want to tweak the Xerces version for every installation.

So we had to develop a workaround. Because the bug is hidden deep inside Xerces, we can only preprocess the XML file to avoid the erroneous behavior. At its core, the workaround is deceptively simple: replace supplementary characters with equivalent numeric character references, which Xerces happens to process without problems.

if (escape && Character.isSupplementaryCodePoint(currentCodePoint)) {
  output.append("&#").append(currentCodePoint).append(";");
} else {
  output.appendCodePoint(currentCodePoint);
}

The difficulty is, of course, to determine whether supplementary characters need escaping at a given position in an input stream. Escaping would be unnecessary in a comment and incorrect in a tag name. That means that we have to parse an XML file at least to the level that it is possible to determine whether the character currently being processed belongs to an attribute value. The XML specification is restrictive enough to make just that distinction by keeping track of the current type of grammatical object (comment, CDATA, tag, …) and looking for a small number of limiting character sequences. A hand-written parser with a finite lookahead will do.
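To make the idea concrete, here is a much-simplified, hypothetical sketch of such an escaping pass. It only distinguishes attribute values from everything else and ignores comments, CDATA sections, and streaming input, all of which the real implementation has to handle:

```java
// Hypothetical, much-simplified sketch: escape supplementary code points as
// numeric character references, but only inside attribute values.
public final class SupplementaryEscaper {

  public static String escapeAttributeValues(String xml) {
    StringBuilder output = new StringBuilder(xml.length());
    boolean inTag = false;   // are we between '<' and '>'?
    int attributeQuote = 0;  // '"' or '\'' while inside an attribute value

    int i = 0;
    while (i < xml.length()) {
      int cp = xml.codePointAt(i);
      if (attributeQuote != 0) {
        if (cp == attributeQuote) {
          attributeQuote = 0;  // attribute value ends
          output.appendCodePoint(cp);
        } else if (Character.isSupplementaryCodePoint(cp)) {
          output.append("&#").append(cp).append(';');
        } else {
          output.appendCodePoint(cp);
        }
      } else {
        if (inTag && (cp == '"' || cp == '\'')) {
          attributeQuote = cp;  // attribute value starts
        } else if (cp == '<') {
          inTag = true;
        } else if (cp == '>') {
          inTag = false;
        }
        output.appendCodePoint(cp);
      }
      i += Character.charCount(cp);
    }
    return output.toString();
  }

  public static void main(String[] args) {
    // U+1F648, the see-no-evil monkey, written as a surrogate pair
    String xml = "<a title=\"\uD83D\uDE48\">\uD83D\uDE48</a>";
    // Only the attribute value is escaped; element content stays untouched.
    System.out.println(escapeAttributeValues(xml));
  }
}
```

Note that the sketch iterates over code points, not chars, so a surrogate pair is seen as one unit.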

Now the changed XML file has to be presented to Xerces in a convenient way. This is done by a modified SAX InputSource, which hides the original stream and always returns a corrected character stream to the XML parser. The XmlStreamReader from the Apache Commons IO package came in handy to infer the encoding of byte streams, which is normally also done by Xerces, but which has to be moved into the InputSource to be able to detect supplementary characters in arbitrary encodings.

The final result is the FullUnicodeInputSource, which is a drop-in replacement for the original SAX InputSource. It is available in source form on GitHub at https://github.com/okummer/FullUnicodeInputSource for your convenience. Though it is provided as a Maven project, we do not offer a pre-built release at this early point.

On a more general level, it is worth remembering that a char in Java is not a character. It used to be when Java was invented, but today it just isn't. It is an item of a UTF-16 representation of a character string. Still, Java has a lot of support for handling all modern Unicode versions, ever since JSR-204 took care of the problem. It's worth a closer look.
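A small demonstration of the difference:

```java
// A char is a UTF-16 code unit; the see-no-evil monkey U+1F648 needs two of
// them (a surrogate pair), but it is still a single code point.
public final class CharIsNotACharacter {
  public static void main(String[] args) {
    String monkey = "\uD83D\uDE48";  // U+1F648
    System.out.println(monkey.length());                            // 2
    System.out.println(monkey.codePointCount(0, monkey.length()));  // 1
    System.out.println(Character.isSupplementaryCodePoint(monkey.codePointAt(0)));  // true
  }
}
```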

So all is well? The much nicer solution would be to get the fix for the original bug included in the maintenance releases of previous Java versions. That fix would be quite literally one thousandth of the size of the workaround. But until that time, we cannot play see-no-evil monkey and pretend the problem is not there. Or not listen and hush things up. Like the hear-no-evil monkey and the speak-no-evil monkey that might suddenly pop up in XML attributes when their sibling is being processed.

Re: Redesigning Hamcrest

Jacob Zimmerman just wrote an interesting post, “Redesigning Hamcrest”. There is very little to add, but we have also gathered quite some experience with Hamcrest.

Just in case you do not know it: Hamcrest is a library for comparing expected and actual values that provides nice descriptions if the comparison fails. While it was originally written in Java, ports exist for PHP, Objective-C, Python, and Ruby.

Meanwhile, JUnit relies heavily on Hamcrest, not only in simple assertions (assertThat) but also in JUnit rules like the ErrorCollector, which collects validation results and makes the test fail if one of them failed.
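For readers who have not used the ErrorCollector rule, a minimal JUnit 4 example (standard JUnit and Hamcrest API, not our test code) looks like this:

```java
import static org.hamcrest.CoreMatchers.containsString;
import static org.hamcrest.CoreMatchers.is;
import static org.hamcrest.CoreMatchers.startsWith;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ErrorCollector;

public class ErrorCollectorExample {

  @Rule
  public final ErrorCollector collector = new ErrorCollector();

  @Test
  public void reportsAllFailuresAtOnce() {
    // Unlike assertThat, checkThat records a failure and carries on,
    // so one test run reports every broken expectation at once.
    collector.checkThat("title", startsWith("t"));
    collector.checkThat(6 * 7, is(42));
    collector.checkThat("subtitle", containsString("sub"));
  }
}
```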

Our framework for UI tests (for our rich web application) also relies heavily on Hamcrest. We have combined it with our wait pattern to wait for the UI to reach a certain state, as described, for example, in my blog post Death to sleeps! Raise of Conditions!

Having integrated it so deeply, we have also run into some of the shortcomings Jacob mentions:

  • We, too, only ever relied on the TypeSafeMatcher, so the Matcher interface itself is obsolete.
  • While we are not yet using Java 8 for development, we have also experienced the clash between predicates and matchers. For now we use the predicates provided by Guava, and to combine both worlds we don't have an extra LambdaAssert class like Jacob, but a PredicateMatcher which wraps a predicate and turns it into a matcher.
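Such a PredicateMatcher can be sketched in a few lines. This is an illustrative implementation built on Hamcrest's TypeSafeMatcher and Guava's Predicate, not necessarily our actual code:

```java
import com.google.common.base.Predicate;
import org.hamcrest.Description;
import org.hamcrest.TypeSafeMatcher;

// Wraps a Guava Predicate so it can be used wherever a Hamcrest Matcher
// is expected (assertThat, ErrorCollector, wait conditions, ...).
public class PredicateMatcher<T> extends TypeSafeMatcher<T> {

  private final Predicate<? super T> predicate;
  private final String description;

  public PredicateMatcher(Predicate<? super T> predicate, String description) {
    this.predicate = predicate;
    this.description = description;
  }

  @Override
  protected boolean matchesSafely(T item) {
    // Delegate the actual check to the wrapped predicate.
    return predicate.apply(item);
  }

  @Override
  public void describeTo(Description desc) {
    desc.appendText(description);
  }
}
```

It can then be used like any other matcher, for example assertThat(user, new PredicateMatcher<User>(isActive, "an active user")).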

And one additional flaw we found: matchers do not remember their state. While I would not recommend changing matchers to remember state, users of matchers (like JUnit) make a wrong assumption when building the failure description: they assume that the object under test did not change between validation and error report. But as soon as it comes to integration tests (and especially UI tests), it is very likely that your objects change between the comparison and the error report (for example, the UI element that was missing when the check was performed suddenly appears while the error description is being built). Therefore we keep the state in our Conditions (part of the wait pattern, not to be confused with Hamcrest's Conditions).

All in all we love Hamcrest – but a redesign makes sense, especially with the rise of Java 8's lambdas.

Web Design Trends 2015: Hands Off?

The story of this article began with a comment on a post on Christian Reichel's “Service Thinking” blog. The comment then grew too long for the comment field… In his interesting post titled “Gefährlich, gefährlicher, Webdesign-Trends”, Christian argues for a cautious approach to web design trends such as responsive design and material design, which a t3n article predicts for 2015. Of course, a cautious approach to design fads and trends is usually a good idea, not least for the sake of usability. However, there is another important aspect, concerning in particular the fast pace of design trends and the related sustainability of websites.

Of course, not everything new is automatically good! But things can become “good” in the sense of intuitively usable when they are used a lot. And, this post being a plea for exactly that point, web design fads and trends are not independent of a sustainable, valid standard; rather, they strongly influence the current standard.

How Design Trends Become Design Standards

While the basic principles of human perception do not change, standard solutions evolve and are thus understood by a growing share of the target group. Take, for example, the combination of off-canvas navigation and the “hamburger” icon as a navigation solution. Five years ago, this interaction pattern would have been flagged as problematic in (presumably) every usability test: many users simply would not have understood what the icon meant. Today it is no longer a problem, at least for regular smartphone users. The interaction pattern and the hamburger icon as a symbol have become conventionalized through widespread use.

Christian Stetter, professor emeritus of linguistics and communication science at RWTH Aachen, calls such processes, which also drive the evolution of languages, “Trampelpfad” processes (trodden-path processes). A nice image, I think: at some point, someone takes a shortcut across a green meadow, and through the people who follow, a path eventually forms that offers itself to passers-by as a shortcut. The same happens with the systems of signs and concepts we use to interact with technical systems.

An off-canvas navigation

Everything Has Pros and Cons

Of course, there are still disadvantages compared to other types of navigation. For example, it takes more effort for users to navigate to areas that are not placed directly on the app's start screen or the website's home page. There is therefore a real risk that users overlook features that might be useful to them. On a large display, the control, when placed at the top as usual, is also hard to reach when the device is operated with one hand. Such disadvantages are indeed rooted in human psychology and ergonomics and will not change. But they are offset by advantages, for example simply more space for relevant content, even on small screens. Weighing (and testing) these disadvantages against the advantages can only be done case by case, with information about the context.

Trends Can Change the Standard

While certain principles do not change, the share of users in different target groups who know particular symbols and interaction patterns does. These users understand such patterns and symbols immediately and do not have to learn them first. Underlying this is the psychological principle that Daniel Kahneman, quoted loosely, calls “familiarity breeds understanding”: we humans understand things better, and even like them more, when we are repeatedly exposed to them. The music industry exploits the same principle: keeping the latest top 20 hits in heavy rotation on the radio makes listeners, averaged across the population, like those hits more.

So while certain basic principles of human perception (for example, the Gestalt laws or the capacity of short-term memory) do not change, the “state of the art” does feed back into the usability of websites and applications, because the standard patterns change over time. And in the digital age they change very quickly. One reason is that the technical possibilities change: animations, such as those anchored in Google's Material Design, only became feasible with powerful hardware.

Understanding Trends and Assessing Their Staying Power

Since what counts as the standard changes very quickly, designers and developers should examine new trends and fads very closely and understand them. In particular, it is important to analyze whether a trend has perhaps already become a new standard. Only that makes it possible to weigh pros and cons in the context of a project, to design with the future in mind, and to be innovative without chasing every trend.

Midnight in Moscow

(joint work with Tobias Stadelmaier)

This is the story of a bug involving an arch enemy of all software developers: the time zone.

But let’s start at the beginning. Our editorial front-end application deadlocked, but only under Windows, only with Internet Explorer or very old Firefoxes, and only in one very specific time zone: UTC+03:00 Moscow (RTZ 2).

The culprit was ultimately found to be the DateTimePropertyField of Ext JS. When initializing such a field, Ext JS tries to prefill the combo box that lets users pick the time in 15-minute steps, advancing a JavaScript Date object using the method Date.add. The date is advanced until it reaches a maximum value (23:59). Now, to do that, you must choose a day for the Date object, which in the case of Ext JS is hardcoded to January 1st, 2008. And everything works like a charm.

Unless Date.add does something wrong. Looking into the Ext JS implementation of Date.add, we see that it simply sets a new minute value and trusts the Date object to normalize itself. For the very special time zone UTC+03:00 Moscow (RTZ 2) mentioned above, it does not. You can check it easily if you put your Windows 7 into the right time zone, open the JavaScript console of an IE 9 and type:

new Date(2008,0,1,23,60,0)

You get:

Date {Tue Jan 01 2008 23:00:00 GMT+0300 (Russia TZ 2 Standard Time)}

Ok, so I lied. The DateTimePropertyField is not the bad guy. The problem should really be blamed on the calendar arithmetic that comes with Windows and/or the browsers.

The fix is simple: get Ext JS to use a different date for the time computations, for example by having this line:

Ext.form.TimeField.prototype.initDate = '2/2/2008'

executed as early as possible in the startup sequence. This fixes Ext JS 3.4. Feel free to add hacks for other Ext JS versions in comments.

But what is so bad about January the 1st, 2008? Other dates seem to work nicely, other years too. But there are other bad days: 1/1/2013, for example. Or January the 1st of 1901, 1907, 1918, 1924, 1929, 1935, 1946, 1952, 1957, 1963, 1974, 1980, 1985, 1991, or 2002. The common pattern? All of these days are the first days of the year and … Tuesdays.
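The claimed pattern is easy to check, for example with the java.time package from Java 8:

```java
import java.time.DayOfWeek;
import java.time.LocalDate;

public final class BadTuesdays {
  public static void main(String[] args) {
    // Print every year between 1900 and 2015 whose January 1st is a Tuesday.
    for (int year = 1900; year <= 2015; year++) {
      if (LocalDate.of(year, 1, 1).getDayOfWeek() == DayOfWeek.TUESDAY) {
        System.out.print(year + " ");
      }
    }
    System.out.println();
  }
}
```

The output includes 2008 and 2013, the two “bad days” observed above.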

So don’t worry about Mondays. It’s the Tuesdays that will come after you in the dark, cold, and stormy nights of the deepest winter.

Speed Up Your UX Work: Tables in OmniGraffle 6

OmniGraffle 6 comes with a new feature called Tables. If you use OmniGraffle for wireframing or creating mockups, this feature might come in handy whenever you want to graffle any sort of tabular data. A tabular view on data is a common requirement for enterprise software. Another use case is the creation of repeating patterns. In Axure, these use cases are realized using a Repeater widget, but it is hard to find a similar function in OmniGraffle. Here's how it works.

Creating a Shape

You can’t create an empty table in OmniGraffle. Instead, you start out with a shape and make a table out of it.

Making a Table

Use cmd+shift+t or Arrange→Make table. You can create a table out of more than one shape at a time, but they will huddle together and change their sizes. Now you've got your table, but the only thing that has changed is the little table handles on the edges of your table.

Creating Columns

To create more cells, drag the handles away from the table. Here, I’ve dragged the right handle to the right to append a new column.

Resizing Columns

You can resize individual cells. If there were more rows below, they would be resized as well.

Adding Another Row

Before we add content, let’s add another row.

Adding Text

To add text, simply double click on a table cell. Adding and formatting text works the same as in any other shape. In editing mode, you can step through the cells with tab and shift+tab.

Adding Shapes

To add images, stencils, or grouped shapes, paste them into a table cell as you would paste text. This handy but little-known feature works on all shapes, not only tables.

Copying Rows

Drag the bottom handle of the table to copy the bottom row. This will help you quickly create large table views.

Resizing Table

Our table didn't quite fit the window. Let's resize it! This is where tables really shine. Also, the borders aren't necessary, so I set the stroke property to “No Stroke” for the whole table at once.

Column Header

I have moved the column header to the middle. Uncheck the “Wrap to Shape” checkbox in the type inspector to make the text flow into the adjacent cell.

All done!

And there we have it, a complete mockup. As you might have noticed, I’ve added some bells and whistles along the way, like different background colors for some rows. Tables make these things very easy.

Note that there are new menu entries under Edit→Tables.

Also note that tables are actually fleshed-out groups. In fact, you can ungroup a table at any time using cmd+shift+u. This is probably the reason why so many features that we know from other table-supporting software are missing in OmniGraffle. For instance, there is no way to reorder rows or columns.

Tips and Tricks

The tables-feature is not flawless. I’ve come across some issues and shortcomings. Here are my top 3 grievances and workarounds:

  1. Creating a table “post hoc”
    You can make a table out of multiple shapes, but not out of grouped items. This can stump you when you have drawn something that includes stencils and think “Great, now I need that 10 times in a row!” Experienced grafflers will reach for the stamp tool and miss out on the handy table features. Table enthusiasts, however, will have to create a new table from scratch.
  2. Icon support
    Icons or other stencils are usually grouped shapes. Because groups can't be made into a table, it is somewhat awkward to include icons in your table. A novice may resort to arranging icons over the table, but as soon as the table needs to be resized or moved, all the icons have to be moved as well. This is no way of living and robs us of one of the key advantages of tables. OmniGraffle has a little-known feature, though, that makes tables support icons after all: text fields can contain shapes. Simply paste the icons (or any shape, really) into the cells in editing mode and you're golden. I have shown this in the steps above. Be aware that the shapes can't be changed anymore once they're pasted. Positioning them correctly can become tedious. One solution is to give them their own column. That way, you can at least position them using the type inspector.
    This workaround is not without problems, though. Oftentimes, the cell margins will change unpredictably. When that happens, try to edit a neighboring cell (i.e., add a random character, leave the cell, remove the random character). This can make the offending cell snap back to normal. Also, sometimes shapes will appear to be cut off at the edges. I have not yet found a satisfying way to repair that. Try emptying the cell and pasting the shape again.
  3. Cell management
    The OmniGroup people have been careful to avoid feature creep. They have left out a way to create single cells. This wouldn't be so bad had they not included a way to delete single cells. My advice would therefore be: never ever delete single cells unless you really mean it. The only way to get them back is through cmd+z or by ungrouping, repairing, and remaking the table. Instead, empty the cells in editing mode.

Down in the Jungle Room

by

For last week’s coding dojo at CoreMedia, I prepared a small exercise on concurrent programming. As a rule, it is a bad idea to write concurrency algorithms like a lock or a thread pool yourself. Get a proven library and use it! So the exercise had to deal with a more realistic problem: apply some changes to legacy code that is supposed to be concurrency-proof.
The given source code simulated – in the most elementary form – a company that organizes safaris. The original source code that was presented to the participants can be found here: https://github.com/okummer/ConcurrencyJungle

The task was to fix scalability issues in the code. The exact extent of the scalability issues was not mentioned, which initially did not seem to disconcert anyone. It was allowed to change the code in any way. It was pointed out that the developer of the existing code was no longer with the company.

The participants decided to try mob programming for this task. This led to very interesting discussions and provided me with a natural way to drop the occasional comment on possible alternate solutions. See http://marcabraham.wordpress.com/2014/02/05/what-on-earth-is-mob-programming for details on mob programming.

If you want to do this exercise in your dojo, it is a good idea to have someone with experience in concurrent programming attend the dojo, so that bad solutions can be spotted and hints can be given.

Spoiler alert: The remainder of this text contains hints that might lessen the experience of solving the problem oneself.

The code was sprinkled with various style issues and small inconsistencies that were supposed to serve as warm-up exercises and as distractions during the actual programming.

However, the participants ignored the small issues, hardly looking at the code. Instead, they headed straight to the test class, which actually seemed to test relevant aspects of the program. Of course it didn’t. It took some time until all of the blunders in the (green) tests were discovered and the tests were rewritten. It became quite obvious how existing code, even code that one should be suspicious of, can be accepted at face value.

After much test writing, some raised eyebrows, and finally a debugging session, it became clear why the code did not scale: nearly every method was synchronized. After removing the central synchronization point, which allowed exactly one safari at any given time, the program immediately ran into a deadlock. It was designed that way, of course, but the surprising takeaway was that you can get deadlocks by removing locks.
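The mechanics behind such a deadlock are worth spelling out with a toy example (not the dojo code): two threads that each need two locks will deadlock if they acquire them in opposite orders, each holding one lock while waiting for the other. The standard cure, shown here, is a single global acquisition order:

```java
public final class LockOrdering {

  private final Object guide = new Object();
  private final Object jeep = new Object();

  // Deadlock-prone variant: one thread takes guide then jeep while another
  // takes jeep then guide; each can end up holding one lock and waiting
  // forever for the other.
  //
  // Fixed variant below: every code path acquires the locks in the same
  // global order (guide before jeep), so a circular wait is impossible.

  void bookSafari() {
    synchronized (guide) {
      synchronized (jeep) { /* reserve guide and jeep together */ }
    }
  }

  void refuelJeep() {
    synchronized (guide) {  // same order as bookSafari: no deadlock
      synchronized (jeep) { /* refuel */ }
    }
  }

  public static void main(String[] args) throws InterruptedException {
    LockOrdering company = new LockOrdering();
    Thread a = new Thread(company::bookSafari);
    Thread b = new Thread(company::refuelJeep);
    a.start();
    b.start();
    a.join();
    b.join();
    System.out.println("both threads finished: no deadlock");
  }
}
```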

Only now did the participants look more closely at the entire set of classes. The code was a jungle of interrelated classes with a highly interwoven call structure. Of course, this is bad news for concurrency requirements. Furthermore, the code suggested that this was really a thoroughly chosen architecture that closely matched the application domain. Thoroughly chosen, yes, but …

At this point, some approaches were considered and it was decided to reduce the potential for deadlock by removing those locks that were provably not necessary, because they protected immutable data or data that was accessed in one thread, only. The participants were very careful in this phase, avoiding a trap that would have led to an inconsistent object graph when synchronizing too little. I was quite proud of them.

When all was done, we talked a little about the result of the refactorings and about general approaches to deadlock avoidance.

It was a lot of fun to do this exercise. It turned out to contain the right amount of trickery, so that a robust solution was found before time was up. If there had been more time, code cleanup and further testing would surely have been possible.

CoreMedia Photowalk

Some photo enthusiasts at CoreMedia formed a photowalk group called “Boring Photos Questionmark”. The question mark stands for “no boring photos at all”! Some of our results are published on our group page on Flickr.
During our photowalks we do not just snap everything; we pick special topics that we try to express in images. For me it is very interesting to see how my colleagues approach perspectives, their methods, their techniques, and their final results. Together we learn different camera movement techniques, such as the panning shot, where you rotate the camera on its horizontal/vertical/diagonal axis to keep a moving object in view while the rest of the scene is blurred. For night photography we use long exposures, leaving the shutter open for a relatively long time: moving elements, such as the lights of cars, turn into drawn lines while the rest of the scene stays sharp.

Even without a camera we spend time together in knowledge-sharing sessions, for example on Lightroom functions or portrait photography. For the next walk we are planning to visit the very interesting photography exhibition “100 years of Leica”.

Besides that, our photowalks are a special social event with lots of fun, where we share our experiences, skill improvements, and knowledge.


Pandora Cart: Creating a Sprint Timeline

In a recent retrospective I wanted to dissect the sprint day by day. The books recommend drawing a timeline, adding events to it, and then maybe drawing a satisfaction graph for every team member across the whole thing. This always felt time-consuming and tedious to me.

Enter Gamification

When in doubt, make a game of it, I thought. So here is Pandora Cart. The name is a mix of the team name and the console game that inspired me.

1. Preparation

The team coach sets up a course of n × 3.5 numbered playing fields (just use index cards), where n = sprint length in days. Include field 0 as “start”.

Also, the coach creates a large number of event cards, about 5 cm × 5 cm. Half of them show the picture of a banana, the others show a mushroom. There is plenty of space on the cards for notes. A sensible number is m × 6 cards, where m = team size.

Finally, a game progress marker is required, preferably a toy car approximating the shape of a cart, plus a D6 (a regular six-sided die).

2. Game Setup

The team coach lays out the course on a table in numerical order.

Team members are asked to remember the sprint and all events that helped or hindered the team or an individual colleague. For each helping event, the team member takes a mushroom card and jots down a short summary of the event. For each hindering event, the team member writes a note on a banana card. Include the day of the sprint on which the event occurred.

When setup is finished, we have

  • the course, i.e., the numbered fields, on the table
  • the cart sits on field zero/start
  • each team member has a stack of banana and mushroom event cards in front of them
  • the D6 lies on the table

3. The Rounds

There are n rounds in the game, representing the days in the sprint. For each round:

  1. One of the team members rolls the die and advances the cart by the according number of fields.
  2. All team members play event cards for the round, i.e., they describe the event and place the card on the table:
    • Whenever a mushroom is played, the cart advances by one field.
    • Whenever a banana is played, the cart moves back to the previous field.
  3. Take care to lay out the cards for each round in one separate row.

4. Game End

The game ends after the last round no matter what. No player should have any event cards in their hands any more. All event cards are on the table in n rows, the rows ordered by rounds (days). This display of event cards is your timeline of the sprint!

This image from a running game shows day four in bad shape, while day six collected many mushrooms. The cart track is partly visible in a circle around the timeline; the cart, by the way, is on field 18 at the lower right.


Now that you have gathered the data, you can move on to the “generate insights” stage of your retrospective.

Hints

The final field of the cart could be a point of discussion, but chance heavily impacts the result. Besides, it does not matter at all where the cart ends up; throwing the die is just there for the fun of it.

It is probably best to play this game for sprint lengths up to ten working days. For longer sprints and more rounds the game might become boring.

About the length of the course: the cart should, on average, advance by 3.5 fields per round. If, for every sprint day, you expect one more banana than mushroom, you’ll end up at 2.5 fields per round on average and can shorten the track accordingly. Maybe you even need negative fields if the first day of the sprint was a real mess …
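The arithmetic behind this rule of thumb is quickly checked; the ten-day sprint length below is just an example value:

```python
# Expected advance per round: the mean of a fair D6 plus the
# expected net card effect (mushrooms minus bananas).
die_mean = sum(range(1, 7)) / 6             # (1+2+...+6)/6 = 3.5
net_cards_per_day = -1                      # one more banana than mushrooms
avg_advance = die_mean + net_cards_per_day  # 2.5 fields per round
sprint_days = 10                            # example sprint length
track_length = int(sprint_days * avg_advance)  # 25 fields
```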

 

How a flat-design approach and shadows go together

written by Nils Morich and Kris Lohmann

Introduction

In the past weeks, the UX team at CoreMedia has been working on a new design approach for the editorial interface of our CMS. This interface is a rich web client called the CoreMedia Studio. The original design from 2009 was based on gradients and rounded corners. Five years later, in the era of Google’s Material design, Apple finally going flat, and Windows 8, these elements feel a little outdated.

In this article, we discuss thoughts on flat design and how to apply it without falling into common traps (for such traps, see this post by the Nielsen Norman Group).

Reduce to the Max

Consequently, in our new design, gradients disappeared and transparency is history. Inspired by the references mentioned above, we decided to pick up the flat-design trend, which became very popular with the release of Windows 8, and iterate our design approach. For example, the new approach manifests in the design of the icons used in the Studio: no borders, no gradients, no more than a single color. The basic principles are driven by a flat-design approach. The icons are vector-based, which makes them future-proof for increasingly common high-resolution displays. The following image shows the old and the new look of the icons.

 

CoreMedia Icons

Old Icons (Sample)

 

New CoreMedia Icons

New Icons (Sample)

 

Visual Cues

So what is this shadow thing all about? What are shadows used for, and why is it important for us to know? There are two occasions when shadows become important:

  1. When the source and direction of light are important
  2. When the position of elements on the z-axis is important

For a typical software project, the source of light is of no concern. However, the position of objects on the z-axis is an important cue for the user: it makes the hierarchy or special highlighting of elements visible. Especially for focused elements such as windows, shadows can be very useful for that purpose. Have a look at the next pictures: the first window comes with a shadow, the second without. The first window gives the user the feeling that it is only loosely connected to the ground. Users have the impression that they can physically interact with the element – this is called an affordance in UX design.

 

CoreMedia Library with Shadow

With Shadow

 

CoreMedia Library Without Shadow

Without Shadow

A flat design, rigorously applied, gets rid of shadows and other elements that at first glance are solely decorative. Looking carefully, however, some of these graphical elements carry functionality. In the physical world, elements interact with each other in three-dimensional space, and understanding these interactions is a basis for our understanding of the world. Nobody is surprised by the fact that things tend to fall to earth (OK, some are): the laws of physics are deeply integrated into our cognition and are, as such, processed with cognitive ease.

An example makes this evident. The following pictures show a dropdown menu used in the CoreMedia Studio. In the first picture, there is a blurry shadow around the persona chooser; in the second, the same element is shown without any shadow. As you can see, with the shadow it is much easier to distinguish the activated element from the rest of the Studio. Without a shadow, things get more complicated, and they get worse when several elements overlap each other.

UI Element with Shadow

UI Element with Shadow

UI Element without Shadow

UI Element without Shadow

Shadows Help in Understanding a User Interface

But is this really necessary? Why is it so important to visualize the position of an element within three-dimensional space? After all, we still mostly run software on devices with flat screens (monitors, cell phones, tablets, …). So don’t shadows just add visual clutter to a user interface?
Furthermore, there are popular examples that do not make use of shadows. Consider cartoon series like South Park: there are neither body shadows (shadows that light casts on the objects themselves) nor object shadows (shadows that objects cast on their environment). Why does this work? In this case it is a movie, so the additional dimension of movement and animation helps the viewer process the visual content.
A software user interface, in contrast to a movie, is interactive, so the user relies much more on the affordances of objects. Hence, using shadows to give objects affordances by imitating physical interactions is all the more valuable.

Seeking Inspiration

What do others do? Let’s have a look at Google’s Material design: the philosophy is to stay close to the physical world and translate the behavior of physics (light, mass, material, …) into a flat design. Google manages the shadow topic in a very strict way:


There is a range of five different states of depth (depth 1 is very close to the ground, depth 5 is far away from it). As you can see, the shadow gets bigger and blurrier the greater the distance between object and ground. Depending on its depth level, an object overlaps or is overlapped by other elements (e.g., an object with depth level 3 covers all objects of depth levels 1 and 2, but is covered by all objects of depth levels 4 and 5). Elements that sit at the same height as the ground are considered “depth 0”.
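The idea of deriving shadow size from a depth level can be sketched as a small helper that emits CSS `box-shadow` values. The pixel values below are invented for illustration; they are not taken from the Material design specification or from CoreMedia’s actual styles:

```python
def box_shadow(depth):
    """Return an illustrative CSS box-shadow string for depth 0-5.

    Offset and blur grow with depth, mimicking an object lifting
    further off the ground; depth 0 sits flush with the ground and
    casts no shadow. All pixel values are made up for illustration.
    """
    if depth == 0:
        return "none"
    offset = 2 * depth  # vertical offset (px) grows with depth
    blur = 4 * depth    # blur radius (px) grows with depth
    return f"0 {offset}px {blur}px rgba(0, 0, 0, 0.3)"

# Higher depth levels cast larger, blurrier shadows:
for level in range(6):
    print(level, box_shadow(level))
```

The same depth level would also drive stacking order, e.g. by mapping it to a z-index, so that a level-3 element covers level-1 and level-2 elements.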

Our Solution

We created several variants of shadow behavior and came to the conclusion that three depth levels would be enough for our software products.
The shadows are constructed as follows:

CoreMedia Shadow Construction

CoreMedia Shadow Construction

The pictures below show two examples of the new shadows applied to the CoreMedia Studio. The first picture shows a component called the dashboard with widgets on it; the second shows an opened free-floating window. The widgets have depth level 1; the window has depth level 3.

Example for Shadows with Depth Level 1

Example for Shadows with Depth Level 1

Example for a Shadow with Depth Level 3

Example for a Shadow with Depth Level 3

Conclusion

Minimalistic design approaches such as flat design eliminate unnecessary visual clutter from a user interface. Carefully applied, they result in a clean look and feel. Still, the elimination of some visual elements is risky. As exemplified by shadows, some visual elements provide cues that allow the user to understand more easily what is going on. In particular, they allow the designer to inform the user about relations between objects and about potential interactions with these objects.
Visual elements such as shadows carry information that is processed by the user of the interface. Carefully applied, such elements can and should augment a minimalistic design approach.
