
Configuring the MDM login screen with xrandr for mirrored multiple monitors

While setting up my new laptop with Linux Mint 18 Cinnamon, I faced the challenge that the MDM login screen did not show up well on a multi-monitor setup: while it looks great on the laptop monitor, the external monitor gets the wrong resolution and shows only part of the login box.

Searching for a solution, I stumbled across a post at Segfault (news from the Linux Mint development team).

That post was really helpful in eventually evolving my own solution. I needed a somewhat different approach than the fixed call to xrandr, because I sometimes switch workplaces, so it has to be more generic. The result is this xrandr setup, placed directly in front of the exit 0 call in /etc/mdm/Init/Default:

PRIMARY=`xrandr -q|awk '/connected primary/ { print $1 }'`
if [ "x$PRIMARY" = "x" ] ; then
  # take the first one
  PRIMARY=`xrandr -q|awk '/ connected/ { print $1 }'|head -n 1`
fi

if [ "x$PRIMARY" != "x" ] ; then
  SECONDARY=`xrandr -q|awk -v primary="$PRIMARY" '/ connected/ && $1 != primary { printf "--same-as %s ", $1 }'`
  if [ "x$SECONDARY" != "x" ] ; then
    xrandr --output $PRIMARY --mode 1920x1080 $SECONDARY
  fi
fi

What the script does:

  1. check whether xrandr already reports a primary screen and use it as the reference (see the example output below)
  2. if there is no primary screen, take the first connected screen as the primary
  3. if a primary screen could be determined, treat every other connected screen as a secondary screen
  4. add a --same-as option for each secondary screen
  5. only if both a primary and at least one secondary screen are present, call xrandr to set up all screens as mirrors; use the resolution 1920×1080, which fits best across all my workplaces
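For illustration, this is roughly what xrandr -q reports on a docked laptop (the output names and sizes are hypothetical and will differ per machine). The /connected primary/ pattern matches the first line, / connected/ matches the first two, and the leading space in the pattern keeps "disconnected" outputs from matching:

eDP-1 connected primary 1920x1080+0+0 (normal left inverted right x axis y axis) 344mm x 194mm
HDMI-1 connected 1920x1080+1920+0 (normal left inverted right x axis y axis) 509mm x 286mm
DP-1 disconnected (normal left inverted right x axis y axis)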

5 x 5km = 2:12h

The muscle is strengthened by heavy use; the nerve, however, is weakened by it. So exercise your muscles with every appropriate exertion, but guard your nerves against any.

Arthur Schopenhauer

Some CoreMedia employees regularly strengthen their muscles in shared, nerve-calming running sessions during the lunch break or after work. Taking part in this year's Mopo team relay was therefore a welcome change and challenge. Five CoreMedians flexed their muscles on the 5 km loop through Hamburg's Stadtpark. The CoreRunners were supported, and their nerves kept in check, by Rhea, who took care of the paperwork, provided refreshments, and cheered tirelessly from the sidelines. Pleasing individual performances and excellent team spirit resulted in a time of 2:12:23 and thus place 321 out of 986 starting teams.

Mopo Staffellauf 2016

The CoreRunners: Patrick, Kerstin, Eva, Rhea, Karo and Peter (from left)

A Change of Color

Our last blog post from July showed photos of our chill-out room, painted in bright green.

The "Chill-Out" serves all kinds of purposes at CoreMedia. It is our kitchen, lounge and foosball room, coffee spot, fruit stand, muesli bar, and the place to meet for a cold drink after work on Friday evenings. To make it even cozier and more inviting, our trainees and dual-study students recently gave the Chill-Out a fresh coat of paint.



The bright green was replaced by a relaxed gray-blue, and the walls got new decoration as well. Because of the high temperatures, which sometimes reach even Hamburg, the work was quite sweat-inducing. But the effort paid off: the result speaks for itself!


Many thanks for putting in so much effort; a fresh breeze now blows through the Chill-Out!

Happy New Year!


You have to celebrate the feasts as they fall, and at CoreMedia, New Year falls on July 1st, because that is the day our new fiscal year begins. Reason enough, then, to toast what we achieved and give the new year a proper welcome this past Friday.


Already in the afternoon, the obligatory New Year's Berliner doughnuts, thankfully without mustard filling, provided a first sugar rush.


In the evening, the stomach got its contrasting program of potato salad, sausages, and sparkling wine. Culinarily, this felt just as much like the end of the calendar year as did the rainy weather the heavens served up outside the window.


Well then, happy new year and a little more sunshine.

Sketch vs. OmniGraffle in UX Design


OmniGraffle has been a staple of UX design for more than a decade. In the past five years, though, that position has been challenged. What does Sketch do differently that has convinced so many UX designers to switch?

The go-to product for many UX designers was, and maybe still is, OmniGraffle. Since it has been on the market for over a decade, many professionals have developed a high level of efficiency and expertise using it.

If you follow the trends on Twitter, Product Hunt, or Dribbble, you may have noticed that a great deal of UX-related posts have something to do with Sketch. Many people seem to be convinced: Sketch is the new workhorse in UX design. How was Sketch able to become such a capable contender to OmniGraffle? Let’s analyze the differences to find out.

Features

At times, OmniGraffle’s background as a diagramming tool shines through. This makes OmniGraffle look less focused because it offers options that are usually not needed for interface design, for example connected shapes or diagram layouts. OmniGraffle is also less restricted to a specific medium: you can select from a wide array of measuring units and paper formats. Sketch, however, is clearly made for the screen. Everything is measured in pixels, with support for arbitrary pixel densities.

Both Sketch and OmniGraffle come with an almost identical structure that has a list of pages/canvases and layers/artboards, respectively, on the left and a bunch of properties on the right. Users who know one tool will not be completely lost in the other. However, there are some differences that affect each tool’s efficiency.

Groups and Layers

Grouping and layering shapes have to be used in any sufficiently complex design. OmniGraffle comes with a hierarchy of canvases, layers, groups, and shapes. I believe that most people will create a canvas for every screen, a layer for every large component, and a group for every set of shapes that belong together logically. Groups can be nested to create logical hierarchies. Layers always fill the whole canvas; they can only be moved on the z-axis.

A similar thing is happening in Sketch, albeit with different names: On the top are pages, which contain Artboards, which contain groups, which contain shapes (called layers in Sketch). In Sketch, grouping is more powerful than in OmniGraffle. You will find yourself grouping large components as well as sets of small shapes for organizing your work. Artboards, on the other hand, are often used for different screens. Pages can be used to organize different parts of an app, or to temporarily deposit unused parts.

Grouping, as mentioned, is done better in Sketch. This is due to the fact that you can see the contents of nested groups in the Layer List, and that you can easily select every element, no matter how deeply it is nested, with ⌘+Click. (Something similar works with Alt+Double Click in OmniGraffle, but you end up in the text edit mode of the selected shape.) In OmniGraffle, you can only ever see the elements on the first hierarchy level.

Another key difference that cannot be overstated is that Sketch encourages you to name your elements. Renaming anything is only a ⌘R away. This results in well-structured documents that are searchable and easy to work with. You can rename shapes in OmniGraffle as well (in the properties tab of the inspector), but that doesn’t help much for nested groups. Sketch also offers better keyboard navigation options (use Tab, Shift+Tab, Return, and Esc) for all deeply nested elements. Not that OmniGraffle doesn’t have keyboard navigation (use Tab, Shift+Tab, and ⌘+arrow keys), but it doesn’t work well in complex groups.

All in all, organizing your work is much easier in Sketch than it is in OmniGraffle because it has superior grouping and naming capabilities.

Component Reuse

I believe I can speak for all of us when I say that we don’t want to do repetitive work all the time. We want to design a button once and then reuse it whenever we need it again. Who does it better?

OmniGraffle

OmniGraffle has a vast array of features that help you speed up your UX work.

Stencils

Stencils are shapes or groups of shapes that you can drag onto your canvas. When you have a well-prepared stencil library, this makes prototyping very fast. You can extend your stencil library while you work. Once on the canvas, the stencils can be edited just like any other shape. This very flexible approach is also a problem: changes to one instance of a stencil don’t propagate to the other instances. So if you create a stencil ‘on the go’ and decide you want to change something about it, you’re out of luck.

At CoreMedia, we have a great stencil library for all of our widgets and icons. Especially for icons, though, this has been a huge amount of work for us. Depending on how you manage and store your icons, you might have an easier time.

Shared Layers

You can create layers that are visible on all canvases. In UI Design, I have found this troublesome for everything but website headers. As soon as the position of the element might change, shared layers are not helpful. They can be used to display titles and page numbers in presentations, though.

PDF LinkBack

PDF LinkBacks (I don’t know if that’s what the feature is really called) are the closest thing OmniGraffle has to Sketch’s symbols. You create them once, duplicate them wherever you need them, and when you change one of them, all the other ones change with them. You can even use them in your stencil library (sort of, at least.) They have many shortcomings though.

Copy/Paste Styles

Both programs can copy and paste styles of elements. OmniGraffle does it better though: you can grab individual style attributes or all of them and drop them onto the target. Neat!


OmniGraffle’s excellent style picker.

 

Tables

Tables are my favorite feature in OmniGraffle. They allow you to depict tabular data flexibly, but are also useful for creating extremely flexible stencils. If you use OmniGraffle, make sure to master tables. They can really make your life easier.

Sketch

Symbols

Symbols are arguably Sketch’s most famous feature. They work like this: you design your buttons, list entries, menus, etc. once, then reuse them everywhere. Text and images can be overridden for each instance. And since version 3.7, nested symbols are possible as well.

Changes to a symbol are immediately visible in all instances. What is a little strange is that, when you store all your symbols on the special symbols-page, you can’t edit them in situ anymore. This makes it hard to see the changes in relation to the surroundings.

Shared Styles and Text Styles

You can think of shared styles as CSS classes for your shapes. If you want to make sure all your, say, toolbars have the same color, give all of them the same shared style. Want to change the color of all toolbars? Just change it once, then update all other toolbars with the click of a button. You can even extract the styles as CSS code.

Shared Text Styles work the same but for text. This helps a lot with typography, especially for website designs.

Copy/Paste Styles

Like in OmniGraffle, you can copy and paste styles. Sketch lacks methods to selectively copy certain style properties from one shape to another.

To conclude, Sketch wins again simply because its features are geared towards UI design. Both programs could benefit from better search options for stencils and symbols though.

Vector Paths

If vector paths (Sketch calls them vectors, OmniGraffle calls them curved lines) are for some reason important in your design, I would certainly recommend Sketch over OmniGraffle. It is way ahead in all aspects.

Interoperability

Both tools have excellent exporting capabilities. OmniGraffle supports a great number of formats, while Sketch has a handy slices feature. Importing from other tools is limited in both. Sketch seems to be slightly more lenient with Adobe file formats (which it flattens to a bitmap), but you can always copy and paste from Adobe programs into OmniGraffle as well. Opening PDFs is not well supported in either program, with Sketch sometimes even failing to correctly render PDFs that it exported itself.

Ad-hoc sharing of your work is easier in Sketch as you can just give your co-workers a link to an HTML-version that is hosted on your local machine. You can also view your designs on a connected iPhone using the Sketch Mirror app. In OmniGraffle, however, you can tie simple actions to elements or groups which allows you to create click dummies. These are even retained in exported PDFs which makes it practical for sharing with your colleagues.

You’ll find that support for the Sketch file format in prototyping tools is staggering. The only tool that can read the OmniGraffle file format is, to my knowledge, LucidChart.

If you were to decide between Sketch and OmniGraffle, a soft factor could be the community. To make it short, Sketch has the advantage here. It dwarfs OmniGraffle in submissions to Product Hunt by about 100:1, many of which are about community-created content around Sketch. It has a larger ecosystem of plugins (which can be written in JavaScript, compared to OmniGraffle’s AppleScript), resources, and exchange of knowledge. There are also community events for Sketch in the US and Europe, but none for OmniGraffle as far as I am aware.

So, Sketch is clearly ahead in this area as well. The team behind Sketch manages this with a staff of 15, compared to OmniGroup’s 70 (who work on three other products besides OmniGraffle.)

Summary

I believe that OmniGraffle has an advantage in one area of UX work: low fidelity prototypes. These can be created blazingly fast and, if need be, beautified later with a little more effort.

For anything else, Sketch is the better choice. It has a clear focus on screen design and helps you create highly organized documents. It also has the superior ecosystem of plugins and the larger community.

 

Related: Find out how Sketch fares against Photoshop and Fireworks.

Nearly Stable Teams

Long-lived teams with stable membership over a long duration are generally seen as desirable. But there are downsides, and therefore I argue that development teams should exchange some members every now and then, especially in a product company that does not think in projects.

Let’s ask some authorities on the subject.
* The Scrum Guide does not mention how long a team should exist. No help here.
* Reading about product ownership or user stories, we find the hidden requirement to keep the team largely constant: empirically measuring past velocity and forecasting future velocity only makes sense when the team’s velocity is not constantly influenced by a changing team size, or by people leaving while others are being onboarded.
* Reading about feature teams, you find that work should come to the teams; the teams should not be formed around the work. The team is the fixed part, and work (and knowledge) come to it.
* The Agile Manifesto asks that people voluntarily form a team around a vision. There is project thinking behind this, and the notion that the team stays stable until the project is done and the vision fulfilled. The principle is difficult to map to a company with several products in different stages of their lifecycles. Additionally, taken literally, people may leave the team when motivation wanes, and others who feel the urge to help the project along may ask to join.
* Tuckman’s team model tells us that re-forming leads to storming, that is, to the awful, touchy-feely overhead of conflict and wasted time, as opposed to the performing stage.

I can see that there is merit in the principle of stable teams. When you start out in an organization where people are pulled off projects and reassigned almost arbitrarily by managers, juggling several competing assignments at once, it is probably very beneficial to demand that a person work on one project for an extended duration of time, such as several days in a row. The longer one works toward the same vision, the higher the potential for identification with it. A long-term perspective for the team is the foundation for long-term team goals like learning and sharing knowledge with each other, improving quality of the deliverable, raising craftsmanship, and investing in relations among the team members.

However.

My observations convinced me that this ideal is not to be pursued for several reasons.

For one, there is diversity. Diversity is generally seen as capability-enhancing for a team. There is diversity in work style, team role, experience, culture, gender and so on, but also in duration of team membership. Newcomers to a team are able to question an established consensus in a way the others cannot.

Then there is the team silo and local optimization. The longer a team works on its own, the greater the risk of losing the big picture and focusing on the local situation.  When people switch teams every few months, informal bonds remain in place and people find it easier to connect across team boundaries. This is a powerful countermeasure against local optimization.

A stable team might over-optimize and establish routines and experts for certain topics. The possibility and the reality of a slow team churn enforces a culture of knowledge sharing, because in that situation it is dangerous to have a single point of knowledge for anything.

Finally, there is real life. People take a year off to travel the world or to raise kids, people leave and people are hired, so there will always be a change of faces in all teams. The more a team is experienced in handling these situations, the quicker they will pass through the storming phase and hardly see the storming as any bother at all.

To sum up, from my observations the recommendation is to make sure that every team exchanges at least one member every few months. If this does not happen anyway through new hires and the like, there are gentle, non-coercive ways to ask people to take on other roles and to reshuffle the teams to a small degree.

Just avoid the extremes: Do not reassign people by the hour, and do not keep teams stable for years. A healthy dose of change makes a team more resilient.

 

To Manage Agile You Need To Know What It Means

A Conference Report

Last week, the Manage Agile conference was held in a conference hotel near Berlin. It offered a mix of workshops and presentations held by both practitioners and consultants of agile methods. This way, the program balanced concrete success stories with general recommendations, and individual reality checks with reports of recurring patterns.

The talks covered topics like developing an agile organization with an agile culture, scaling agile, the role of managers in agile organizations and their personal development, and the role of HR.

However, Craig Larman set the right key note in his keynote when he set out to define what the word agile actually means (used eight times in this text already since you started reading). His phrase was “to be able to turn on a dime for a dime”, i.e., to be able to quickly and cheaply change direction when circumstances demand it. Such as a changing market, a business opportunity not to miss, profit to be made.

So, if you want to become an agile company, you have to ask yourself what you are actually shooting for. “Becoming agile” is not a helpful claim if the goal is not clear. For example: among three Agile Coaches I happen to know, there were three different opinions on what they were optimizing their departments for.

What is the goal of your agile transition? A “True North” can be stated even if the path is not yet clear and will certainly twist and turn in unexpected ways.

Among the possible goals of an agile transition I heard at the conference were

  • fast, decentralized decisions
  • independent teams
  • small concept-to-cash time / cycle time / time-to-market
  • delivering results
  • fast delivery, continuous delivery (as part of small cycle time)
  • low cost of changing priorities, little work-in-progress
  • low variation across projects, predictability
  • return on investment
  • customer value
  • low cost of implementing new features, architectural flexibility
  • self-organization, responsibility culture
  • innovation
  • software quality, low technical debt
  • handling complexity
  • promoting and living agile values and culture
  • personal growth and aligning personal and company values

What do you optimize your organization for?

#socrates15: Embracing the Walking Skeleton

Two of the sessions I joined at SoCraTes 2015 were entitled:

  1. The Walking Skeleton by Franziska Sauerwein and
  2. Embrace Failure by Xavier Detant.

Both sessions reminded me of a recent incident in our development team. And they were an eye-opener, shedding new light on what I had previously perceived as a miserable failure.

The Incident

The team I work in had the task of developing a feature set with a huge architectural impact on the product. Over some weeks the feature set grew and the feature became alive and kicking. But after a code review we detected some architectural problems which, from the outside, could be perceived as stupidity. As a consequence, we spent one sprint fixing and rebuilding the architecture.

I felt as if we failed miserably and could not answer the question: “Why did we fail?”

The Walking Skeleton

From Alistair Cockburn (2008):

A Walking Skeleton is a tiny implementation of the system that performs a small end-to-end function. It need not use the final architecture, but it should link together the main architectural components. The architecture and the functionality can then evolve in parallel.

Compared to a spike, which according to Alistair answers the question “Are we headed in the wrong direction?”, a walking skeleton very quickly provides answers to the question “Which impediments do we have to tackle?” This is where it becomes important to embrace failure.

Embrace Failure

Embracing failure means accepting the fact that we are all going to fail at some point, and, instead of struggling with it, taking it as a chance to learn.

To me, the advantage of embracing failure is that decisions can be made quickly, because failing is not a problem but rather a chance to learn.

Embrace the Walking Skeleton

Thus, to me, both mindsets are about speed. While you get your skeleton going, it will fail. And it is good that it fails, because you will learn from it, help the skeleton back up, and continue. I would even argue that you should push your skeleton in directions where it fails even faster, just to learn faster.

So give some free hugs to the skeleton🙂

Findings

After these two sessions it was totally clear to me that it was not important why we failed. It was simply good that we failed and that we learned from it. Without having named it that way, we had developed a walking skeleton and, bit by bit, added some flesh to the bones. We mastered several challenges, like architectural changes to the existing code, and we learned a lot in a short amount of time. The failure I perceived was just a result of being fast.

In the end, my summary is that we could not have done better than that. Without the walking skeleton we would have needed weeks of planning, sketching, and perhaps doing spikes. But with the skeleton, the final architecture evolved quite naturally, as all the knowledge we had been missing before was at hand.

And now? We are just about to send another skeleton walking around…

Shape Your Blueprint – Only Use What You Need

 by Jens Dallmann and Daniel Straßenburg

Extendable Blueprint

The CoreMedia system is delivered to customers with the Blueprint workspace as the entry point for customizations. Technically, it is a Maven project which aggregates core features into functional feature sets. To represent a functional feature set, an extension mechanism has been implemented. Each extension is self-contained and can be used out of the box.

In order to manage the set of extensions shipped with the Blueprint, the CoreMedia Blueprint Maven Plugin was developed. This Maven plugin can be used to modify the extension set in the Blueprint workspace. To modify the extensions contained in the CoreMedia components, the plugin performs POM modifications, i.e. the feature set is determined at build time.

While developing the Blueprint at CoreMedia, it is beneficial to be able to select a certain feature set at build time in order to build and deploy a system. Such a system can then be tested, and it is possible to verify that the given extensions coexist and run without undesired dependencies on other extensions.

From a customer’s perspective, managing the shipped extensions is helpful when certain extensions should not be enabled because they are not required in a project. In this case, the extension’s feature should neither be enabled in production nor should its code base be compiled or packaged at build time. In other words, removing an extension is a relevant use case when projects are based on a feature-rich initial workspace.

Implications

The modularization brought out two important facts:
1. Features building upon an extension must be removable as well. Otherwise the removal of the underlying extension might break the build due to unresolvable dependencies. Those features are therefore designed as extensions themselves in order to make them removable.
2. The Blueprint contains example data in the form of example CMS content. Some of this example data belongs only to a specific extension. Importing such example data while its extension is inactive can lead to errors during import. To avoid this, the example data is partitioned, and the assembly process is designed so that only content from enabled extensions is considered.

Example of Use: Removing a Certain Extension

An extension is identified by its extension descriptor. The extension descriptor is a Maven POM which is part of the extension and it contains the dependency management of the extension’s modules.

The extension descriptor is used to enable or disable the extension in the Blueprint. An extension is enabled if the extension descriptor POM is imported into the dependency management of the Blueprint project.

<dependencyManagement>
  ...
  <dependencies>
    <dependency>
      <groupId>${project.groupId}</groupId>
      <artifactId>my-extension-bom</artifactId>
      <version>${project.version}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies> 
  ...
</dependencyManagement>

The CoreMedia Blueprint Maven Plugin can be used to remove this extension. In order to do so, the remove-extensions goal has to be executed. The extension to remove is referenced by the parameter “coremedia.project.extensions”. This parameter describes the Maven coordinates of the extension descriptor.

In the above example, the Maven call to remove an extension would look like

mvn com.coremedia.maven:coremedia-blueprint-maven-plugin:remove-extensions -Dcoremedia.project.extensions=my-extension

The notation my-extension is a short version of com.coremedia.blueprint:my-extension:1.0. Maven allows you to omit the groupId and version if they are identical to those of the current project.
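Spelled out with full coordinates, the same call would presumably look like this (assuming the short-form expansion described above):

mvn com.coremedia.maven:coremedia-blueprint-maven-plugin:remove-extensions -Dcoremedia.project.extensions=com.coremedia.blueprint:my-extension:1.0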

The result of this call is a Blueprint workspace with modified POMs in the extension-config module as well as in the root pom.xml file. The extension has been removed.

Summary

The extensible design of the Blueprint workspace enables developers to create extensions which are loosely coupled. Every extension is self-contained and brings required logic and data.

The Blueprint itself is released with numerous extensions. Users are able to select only those extensions they want to deploy. Other extensions can be removed by using the CoreMedia Blueprint Maven Plugin. This avoids maintaining dead code, reduces build time, and allows you to shape your Blueprint to your needs.

References

CoreMedia Blueprint Maven Plugin

#socrates15: Ext JS 5 Tests with Selenide

Just as always – well, it was only the second time for me – SoCraTes 2015 was just great, and the workshop I joined on Sunday only emphasized it: getting in touch with Selenide, hosted by Alexei Vinogradov.

Selenide – jQuery for Java

Having known Selenium for a long time, getting started with Selenide was a piece of cake – and coming from Selenium, it was great to feel how easy Selenide is to use. I can really recommend it, especially if you know jQuery and want a similar feel when accessing elements. But Selenide provides far more (and, I suspect, even far more than I learned in that session).
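To give an impression of that jQuery-like feel, here is a minimal sketch of a Selenide test; the URL, selectors, and expected text are made up for illustration:

import org.junit.Test;

import static com.codeborne.selenide.Condition.text;
import static com.codeborne.selenide.Condition.visible;
import static com.codeborne.selenide.Selenide.$;
import static com.codeborne.selenide.Selenide.open;

public class LoginTest {

  @Test
  public void logsInAndShowsGreeting() {
    // open the page under test (hypothetical URL)
    open("http://localhost:8080/login");

    // jQuery-like element access via CSS selectors
    $("#username").setValue("admin");
    $("#password").setValue("secret");
    $("#login-button").click();

    // should* assertions wait for the condition instead of failing immediately
    $(".greeting").shouldBe(visible);
    $(".greeting").shouldHave(text("Welcome, admin"));
  }
}

The built-in waiting in those should* assertions removes most explicit waits from such tests.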

Accessing Ext JS through Selenide

Not knowing Ext JS by heart – but at least having experience in accessing Ext JS (the old version 3) from within automated UI tests – I was curious how I would be able to access the components using Selenide.

Previous Experiences

As presented at SoCraTes 2014, we use Java wrappers (aka proxies) to represent the Ext JS components; a small sketch of the idea follows the list below. The advantages we see in this approach:

  • Access: Ext JS itself knows much better than any DOM-path navigation how to locate components.
  • Hierarchy: Mirroring the component class hierarchy in Java lets us spread the knowledge of how to access component state through the hierarchy – and override it where a specialized component requires a different approach, for example to determine whether it is visible.
  • Update: Even with the wrappers, updating Ext JS versions is no piece of cake (at least when you go from Ext JS 3 to Ext JS 5+) – but it is feasible. You only have to update some central wrappers, and all UI tests are fine again without even touching the tests themselves.
  • Fixing: The same goes for fixing issues in the UI tests: for example, the knowledge of how to do a robust drag and drop is hidden deep within the wrappers – and it is enriched with workarounds each time we learn how to make it even more robust. Again, it is just one small change, and with a snap of the fingers all drag-and-drop tests behave much better.
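A minimal sketch of what such a wrapper could look like on top of Selenide, assuming Ext JS 5’s ComponentQuery is available in the page under test; the class names and the id lookup via executeJavaScript are illustrative assumptions, not our actual implementation:

import com.codeborne.selenide.SelenideElement;

import static com.codeborne.selenide.Selenide.$;
import static com.codeborne.selenide.Selenide.executeJavaScript;

// Base wrapper: locates an Ext JS component via a ComponentQuery expression
// and exposes its DOM element for Selenide interactions.
class ExtComponent {
  protected final String componentQuery;

  ExtComponent(String componentQuery) {
    this.componentQuery = componentQuery;
  }

  // Ask Ext JS for the component's DOM id, then hand it to Selenide.
  SelenideElement el() {
    String domId = executeJavaScript(
        "return Ext.ComponentQuery.query(arguments[0])[0].getEl().dom.id;",
        componentQuery);
    return $("#" + domId);
  }

  // Default visibility check; specialized wrappers may override it.
  boolean isVisible() {
    return el().isDisplayed();
  }
}

// Specialized wrapper for buttons, adding button-specific behavior.
class ExtButton extends ExtComponent {
  ExtButton(String componentQuery) {
    super(componentQuery);
  }

  void click() {
    el().click();
  }
}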

Adapt for Ext JS 5 and Selenide

During the one-day workshop I took up the challenge of adapting this concept to Ext JS 5 using Selenide. What I learned from this proof of concept:

  • Locating Components: Ext JS now has a ComponentQuery which makes accessing components a lot easier. It feels similar to the jQuery syntax (see the sketch after this list).
  • Change SUT and Tests: As always, it is good to have control over the software under test (SUT) as well as over the tests. Otherwise – as you can see in the PoC – you will, for example, miss clear IDs (or item IDs) to access elements. As I tested against the examples hosted by Sencha, I had no control over the SUT.
  • Selenide vs. StaleElementReferenceException: If you write UI tests with Selenium, you will surely know it: the StaleElementReferenceException. For AJAX applications it is quite normal that the DOM changes, and when it does, elements you accessed before through Selenium WebDriver are most likely no longer available. The Selenium documentation recommends simply refreshing the reference. And here some of Selenide’s magic happens: it does the trick for you and automatically refreshes the reference, transparently for the test.
    By the way: while doing the PoC, not yet knowing this behavior, I ran into a deadlock when I used an implementation that goes through Selenide → Selenium → Selenide (Selenide used inside a By instance). I did not analyze this in depth, but the observation was that the UI test simply got stuck locating the element.
  • Local Browser Start: Starting a browser locally is a pain in plain Selenium WebDriver, especially if you want to support several browsers. Selenide provides this out of the box – including connecting to a Selenium Grid. See the FAQ for details.
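To make the ComponentQuery point concrete, here is a usage sketch building on the hypothetical ExtButton wrapper from the previous section; the URL, the selectors, and the disabled CSS class are assumptions, not verified against the Sencha examples:

import static com.codeborne.selenide.Condition.cssClass;
import static com.codeborne.selenide.Selenide.open;

public class ExtComponentQueryDemo {
  public static void main(String[] args) {
    // hypothetical Ext JS application under test
    open("http://localhost:8080/extjs-app");

    // ComponentQuery selectors feel close to CSS/jQuery:
    // a button with a clear itemId inside a toolbar ...
    ExtButton save = new ExtButton("toolbar button[itemId=save]");
    save.click();

    // ... without clear itemIds in the SUT you are stuck with fragile text matches
    ExtButton cancel = new ExtButton("button[text=Cancel]");

    // Component state can be expressed as a Selenide condition;
    // "x-item-disabled" is assumed to be the Ext JS disabled CSS class.
    cancel.el().shouldHave(cssClass("x-item-disabled"));
  }
}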

To give it a try, just check out the PoC from GitHub and either start the build via Gradle (gradlew test) or run it right from within your IDE.

Conclusion

The proof-of-concept was up and running within about 2 hours. It took less than a day (with helpful hints provided by Alexei, thanks for that!) to get the whole PoC to a state where you could get an idea how we might continue from here.

I definitely recommend giving Selenide a try to anyone testing “normal” web pages, as the jQuery-like syntax makes it a piece of cake to access the same elements from within your UI tests. But I also recommend giving Selenide a try for rich web applications, for many reasons – not least because the assertion approach using (possibly custom) conditions in the should() statements, which is otherwise not covered in this blog post, promises additional flexibility for checking, for example, required component states in Ext JS.

SoCraTes’ Magic

There is so much more to say about SoCraTes – not only that I will participate again next year and that I have to thank the organizers a lot! But I think this video shows best that there is some magic going on in Hotel-Park-Soltau (the great location where SoCraTes took place for the second time):

Links