Getting Bots to work well was a real challenge. They are still not as reliable as I would like, but the tests have come a long way. The biggest hurdle was getting all 190 end-to-end tests to run sequentially: running them individually was a success early on, but running them back to back rarely succeeded.
It was often very hard to determine what went wrong. The following screenshot shows an integration that gave no results at all:
Some integrations showed that all tests passed, yet the integration itself failed:
Here is a Stack Overflow post with more details:
All Xcode Bots Tests Pass, but why does it fail integration with "No tests found"?
The largest factor was timing and race conditions. Early on we suspected bad uses of singletons in our production code, with the tests struggling to manage that "global state." In the end, it turned out to be NSTimers and race conditions between UI element animations.
Battling through those challenges made the reward very sweet. We now have an automated system that runs on every check-in to master in our git repository.
It runs 191 end-to-end feature "pinning tests" in 5 minutes, as seen in the following screenshot:
That is great and all, but oftentimes developers forget about, or pay little attention to, the test results.
Automated tests need a system to inform the developer and provide value back to the development process.
That is where the Bots "bigscreen", Philips Hue lights, and my handy Test Owl lamp come in.
This is the view every day during stand-up. If there is a problem with the tests, the team will notice. When a build starts, the owl flashes yellow.
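To give an idea of how the flash works: a first-generation Philips Hue bridge exposes a simple REST API, and a state change like the one below makes a bulb flash. This is only a sketch of that idea, not our exact script; `BRIDGE_IP`, `HUE_USER`, and the light number are placeholder values you would substitute for your own setup.

```shell
# Placeholders -- substitute your bridge's address and an authorized API user.
BRIDGE_IP="192.168.1.2"
HUE_USER="newdeveloper"

# Build the JSON body the Hue v1 REST API expects: light on, a yellow hue
# (~12750 on Hue's 0-65535 hue scale), full saturation, and "lselect",
# which makes the bulb breathe/flash for roughly 15 seconds.
build_payload() {
  printf '{"on":true,"hue":%d,"sat":254,"alert":"lselect"}' "$1"
}

# PUT the state change to light 1 on the bridge when a build starts.
flash_yellow() {
  curl -s -X PUT "http://$BRIDGE_IP/api/$HUE_USER/lights/1/state" \
       --data "$(build_payload 12750)"
}
```

The same call with a different hue value (green on success, red on failure) covers the rest of the owl's moods.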
Interested in doing this yourself? Check out my Stack Overflow post that describes how to make a bash script parse the test results and signal the Test Owl:
Where do Xcode Bots put their results, so I can parse them?
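The core of such a script is just scanning the bot's build log for xcodebuild's test-suite summary lines. The sketch below is a minimal illustration, not the script from the post; the result paths changed between Xcode Server releases, so treat the default log location here as an assumption.

```shell
# Assumed Xcode 5-era bot output location -- adjust for your server version,
# or pass the log path as the first argument.
BOT_LOG="${1:-/Library/Server/Xcode/Data/BotRuns/Latest/output/build.log}"

# Scan xcodebuild's "Test Suite ... passed/failed" summary lines and
# reduce the whole run to a single word.
parse_result() {
  if grep -q "Test Suite .* failed" "$1" 2>/dev/null; then
    echo "failed"
  elif grep -q "Test Suite .* passed" "$1" 2>/dev/null; then
    echo "passed"
  else
    echo "unknown"
  fi
}

# Only act when the log actually exists, so sourcing this file is harmless.
if [ -f "$BOT_LOG" ]; then
  echo "Bot run result: $(parse_result "$BOT_LOG")"
fi
```

Feed the word this prints into a Hue API call (green, red, or yellow) and the Test Owl does the rest.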