We are extremely happy to announce the launch of an online community blog, born out of the interest we have received from our first three Automated Testing Meetup groups. The blog is called AutoTestCentral.com: "Where people who write and test software come to talk about automation." We are very excited to grow and support this community!
Here we will post on all things related to the automation of software testing. We decided to create this blog after trying to share content among our Automated Testing Meetup groups. We currently have groups in San Francisco, New York City, and Boston. We have had some great talks in each meetup, and sharing presentation materials only on each city's own meetup page was not going to cut it! We had people in SF wanting to know about NYC, and people in Boston trying to learn what last month's SF talk was about! With hopes of launching in more cities in the new year, we knew we needed to change something! So we created this blog, so that we can share all the content from the meetup groups in one place… here!
We are also going to be asking the community to contribute posts. We already have some great ones posted from leaders in the space. If you or someone you know would like to author a post, please reach out to Sarah at firstname.lastname@example.org, and she will guide you through the process.
If you are in one of the cities we cover, please join! If you would like an Automated Testing Meetup group to come to your city, please say so in the comments section.
We hope to see this group grow organically into a place where all testing professionals can learn, share knowledge, post content, and talk with one another.
Thank you! Let's get started!
- The Solano Labs Team
Solano CI uses the exit status from commands to determine whether a test passes or fails. This behavior follows a venerable Unix tradition whereby an exit status of zero indicates success and a non-zero exit status indicates failure.
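As a quick illustration (standard shell behavior, not anything specific to Solano CI), here is how a test command's exit status maps to pass or fail:

```bash
# Run an RSpec file (the path is a placeholder) and capture its exit status.
rspec spec/models/user_spec.rb
status=$?

# Zero means every example passed; anything else is treated as a failure.
if [ "$status" -eq 0 ]; then
  echo "PASS (exit status $status)"
else
  echo "FAIL (exit status $status)"
fi
```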
On occasion we've seen bugs in test frameworks that can cause false positives or, worse, false negatives. Users with Ruby test suites should check that they are not impacted by a recent defect when using SimpleCov 0.8.x, RSpec 2.14, Rails 4.0.x, and Ruby 2.1.0. Details may be found in the GitHub issue: https://github.com/colszowka/simplecov/issues/281.
We’re happy to announce that the changes we’ve been planning to our GitHub authentication integration are live in our production environment!
As we described in an earlier post, we've changed our OAuth model to allow users to select the privilege level they give Tddium to communicate with GitHub. Now, when you link a GitHub account, you'll see a menu of privilege levels that you can authorize. You can always change the level you've authorized by visiting your User Settings page, where you'll see the same menu. For more information on Tddium's use of GitHub permissions, see our documentation section.
At Solano Labs, we believe that a seamless integration between our service and our customers’ tools provides the best user experience. Many of our customers today use GitHub and have connected a GitHub account with their Tddium account using OAuth.
We take the security of our customers’ code very seriously, and we’re making some important changes to our GitHub OAuth integration that should give you much finer-grained control over the privileges you give Tddium to operate on your GitHub account.
What we do now
Our current GitHub OAuth functionality requests nearly complete permissions to your GitHub account (“user,repo” scope in GitHub’s API terminology). Tddium requests these privileges so that it can fully automate the setup of the CI workflow (commit hooks, deploy keys, and keys to install private dependencies). Our updated GitHub integration allows for multiple privilege levels so that you can make a tradeoff between permissions and automated setup.
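For context, the permissions an OAuth app receives are controlled by the `scope` parameter in GitHub's authorization URL. A minimal sketch (the client ID is a placeholder, not Tddium's actual OAuth application):

```bash
# Illustrative only: the "scope" query parameter is what grants permissions
# such as "user,repo" to an OAuth application. CLIENT_ID is a placeholder.
CLIENT_ID="your-oauth-app-client-id"
echo "https://github.com/login/oauth/authorize?client_id=${CLIENT_ID}&scope=user,repo"
```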
In the next week or so
we’ll roll out changes that will:
- Allow basic single sign-on with no other GitHub API access.
- Let you choose among three privilege levels that allow Tddium to:
- post commit status to update pull requests (for public and private repos)
- automate CI webhooks and deploy keys for public repos.
- automate CI webhooks and deploy keys for public and private repos (see the sketch after this list for what that setup involves).
- Give instructions on creating bot GitHub users to allow your builds to pull dependencies installed from private GitHub repos.
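To make the webhook and deploy-key levels above concrete, here is a rough sketch of the kind of GitHub API calls such setup involves (purely illustrative, not Tddium's actual implementation; the owner, repo, hook URL, and key are placeholders):

```bash
# Illustrative GitHub v3 API calls; OWNER, REPO, the hook URL, and the key
# below are placeholders.

# Create a push webhook on a repository.
curl -u "$GITHUB_USER" -X POST "https://api.github.com/repos/OWNER/REPO/hooks" \
  -d '{"name": "web", "active": true, "events": ["push"], "config": {"url": "https://example.com/ci-hook", "content_type": "json"}}'

# Add a deploy key so the CI worker can clone the repository over SSH.
curl -u "$GITHUB_USER" -X POST "https://api.github.com/repos/OWNER/REPO/keys" \
  -d '{"title": "ci-deploy-key", "key": "ssh-rsa AAAA... ci@example.com"}'
```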
If you have already linked your GitHub account, it will continue to be linked, and will give Tddium the current high level of permissions. After the rollout, you’ll be able to easily edit Tddium’s permissions on your GitHub account on your User Settings page.
We look forward to your feedback at email@example.com.
The Solano Labs Team
by Carl Furrow of Lumos Labs
Making sure your test suite runs quickly ensures that it will be run often. We at Lumos Labs (lumosity.com) have been working on an in-house Jenkins CI setup to run our ~2,500 tests across ~360 files in under 10 minutes. Our Jenkins setup consists of about 24 executor VMs. For each build we allocate 12 executors, and each executor gets a subset of the total files to run. For example, with 360 test files, each executor VM is responsible for running 30 of them.
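A rough sketch of that kind of static split (just the idea, not our actual Jenkins job), where each executor takes every twelfth spec file:

```bash
# Illustrative: give each of 12 executors a deterministic slice of the spec files.
# NODE_INDEX (0-11) and NODE_TOTAL would come from the executor's environment.
NODE_INDEX=${NODE_INDEX:-0}
NODE_TOTAL=${NODE_TOTAL:-12}

FILES=$(find spec -name '*_spec.rb' | sort | awk -v i="$NODE_INDEX" -v n="$NODE_TOTAL" 'NR % n == i')
bundle exec rspec $FILES
```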
Under this configuration a build would complete in anywhere from 12 to 20 minutes, which is fine when production releases are coming slowly, but it's an eternity when three or more people are queueing up changes that need to go into production. Running the suite locally can take 45 minutes for just the RSpec tests, so parallelization is a necessity when testing the entire suite.
As our company grew and more developers were creating feature branches, more CI builds were being queued in Jenkins. With the limited number of executors we had, builds were stacking up. If you were behind two or three people in the build queue, you'd be waiting 30 to 40 minutes for your build to even start running! It was becoming a headache for all of us, so we looked at increasing the number of executor VMs, as well as beefing up the processing power of each one.
Adding more VMs to the cluster brought on additional headaches. With the increase in speed, we noticed more segfaults occurring during builds, each one marking the build as a failure, even though re-running the build would usually get it to pass. We spent many hours debugging the different environments, gems, etc., trying to determine where the segfaults were happening, and eventually coded up scripting solutions that could detect a segfault and re-run the subset of tests where it occurred. Not a permanent solution, but it got our builds passing more often despite these 'flickering' segfaults. Coupled with this was a constant hunt to determine whether a failed Cucumber scenario was legitimate or perhaps something related to Capybara Webkit. More developer time was spent re-working our selectors and specs that hit Capybara, which was time well spent, but it took a long time to re-code and deal with version changes to the Capybara API. Obviously you cannot rid yourself of all responsibility, but running tests and managing our own servers was becoming tedious (wait for it).
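To give a flavor of the kind of wrapper we ended up writing (a minimal sketch, not our production script), something like this detects a segfault exit and retries the same subset of tests once:

```bash
# Minimal sketch of a segfault-aware retry wrapper (not our production script).
# A process killed by SIGSEGV typically exits with status 139 (128 + 11).
run_specs() {
  bundle exec rspec "$@"
}

run_specs "$@"
status=$?

if [ "$status" -eq 139 ]; then
  echo "Segfault detected; retrying the same subset of tests once..."
  run_specs "$@"
  status=$?
fi

exit $status
```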
Knowing that we wanted to stop managing our own CI environment, we went looking at CI service providers (hosted and self-hosted) to see how they would perform. Unfortunately, we ended up investing days in configuration and setup, and still the test suite times were worse than what we were seeing in our own setup. It seemed we already had the best CI around for us, and we'd have to give up on finding a hosted CI service that was easy to set up and, more importantly, faster than what we currently had. So we started building a beefier set of servers and VMs to run our Jenkins setup, and that was promising, but it was expensive.
Flash-forward to a testing-related meetup this past August, hosted by Solano Labs in SF. They showed off their hosted CI product, tddium, along with a general discussion on testing strategies and horror stories. I had a chance to talk with co-founders Jay and William about our current CI setup, and they felt strongly they could improve the running time, if nothing else.
After setting up the trial account, creating a tddium.yml configuration file, and working with Solano's support staff to set up an environment that more closely resembled our current Jenkins setup, I had a green build!
Today most of our builds run in about 5 minutes on 1.9.3-p327.
We even had our Ruby 2.0.0-p247 branch under 4 minutes!
Now that our tests run via tddium, we've phased out our Jenkins setup, and the testing queue has been all but eliminated. We ended up with a setup that allows three builds at once, and that seems like the sweet spot for us, with builds taking about five minutes apiece.
| System | Average Build Time | Executors/Workers | Speed Improvement % |
|--------|--------------------|-------------------|---------------------|
| Self-Hosted Jenkins | 17 minutes | 12 | - |
| tddium (ruby 1.9.3-p327) | 5 minutes | 24 | 340% improvement! |
| tddium (ruby 2.0.0-p247) | 4 minutes | 24 | 425% improvement! |
At approximately 2:14pm PT on Oct 24, 2013, Tddium's DB master server experienced a CPU usage spike that cascaded into a server stoppage. No data was lost.
Examining data (thanks, New Relic!) and logs, our conclusion is that, though average usage hovers around 20-30%, our DB master has bursts of CPU usage close to 100%. Once Postgres crosses into "queue backup" territory, it never comes back.
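For anyone curious what "queue backup" looks like in practice, one generic way to spot it (not our actual monitoring, and assuming a reasonably recent Postgres) is to count backends by state in pg_stat_activity:

```bash
# Illustrative: a growing pile of active or waiting backends, rather than idle
# ones, is a sign queries are backing up behind a saturated server.
psql -c "SELECT state, count(*) AS backends FROM pg_stat_activity GROUP BY state ORDER BY backends DESC;"
```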
Tonight, we will upgrade our DB cluster to use faster servers. This upgrade should only take a few minutes, but it will require the app to be down.
We appreciate your patience as we address these infrastructure issues.
- The Solano Labs Team
```
tddium rerun <session_id>      # rerun failed tests from session
tddium describe <session_id>   # show session details
```
You can also roll your own rerun by combining `tddium describe` with a small shell-script wrapper:
```
rspec `tddium describe $session_id --names --type=rspec`
```
Note: `tddium rerun` is pretty simple right now — it doesn’t do much in the way of local sanity checking, so if you ask it to rerun the tests for the wrong repo, it’ll happily try.
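If you want a little insurance in the meantime, a wrapper along these lines (a sketch, assuming an RSpec suite and a git checkout) refuses to rerun unless it is invoked from inside a repository:

```bash
#!/bin/sh
# Sketch of a guarded rerun: only fetch and run the session's failed specs
# if we are actually inside a git work tree. The session id is the first argument.
session_id=$1

if ! git rev-parse --is-inside-work-tree >/dev/null 2>&1; then
  echo "Not inside a git repository; refusing to rerun session $session_id" >&2
  exit 1
fi

rspec `tddium describe $session_id --names --type=rspec`
```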