GitHub API Authentication Updates

At Solano Labs, we believe that a seamless integration between our service and our customers’ tools provides the best user experience. Many of our customers today use GitHub and have connected a GitHub account with their Tddium account using OAuth.

We take the security of our customers’ code very seriously, and we’re making some important changes to our GitHub OAuth integration that should give you much finer-grained control over the privileges you give Tddium to operate on your GitHub account.

What we do now
Our current GitHub OAuth functionality requests nearly complete permissions to your GitHub account (“user,repo” scope in GitHub’s API terminology). Tddium requests these privileges so that it can fully automate the setup of the CI workflow (commit hooks, deploy keys, and keys to install private dependencies). Our updated GitHub integration allows for multiple privilege levels so that you can make a tradeoff between permissions and automated setup.

In the next week or so
we’ll roll out changes that will:

  • Allow basic single sign-on with no other GitHub API access.
  • Let you choose between 3 privilege levels that allow Tddium to:
    1. post commit status to update pull requests (for public and private repos)
      (“repo:status” scope)
    2. automate CI webhooks and deploy keys for public repos.
      (“repo:status,public_repo” scope)
    3. automate CI webhooks and deploy keys for public and private repos.
      (“repo” scope)
  • Give instructions on creating bot GitHub users to allow your builds to pull dependencies from private GitHub repos.
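
For reference, GitHub OAuth scopes are requested as a comma-separated scope parameter on the authorization URL, so an app asking for the second privilege level above would send you to something roughly like this (the client_id is a placeholder):

https://github.com/login/oauth/authorize?client_id=YOUR_CLIENT_ID&scope=repo:status,public_repo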

If you have already linked your GitHub account, it will continue to be linked, and will give Tddium the current high level of permissions. After the rollout, you’ll be able to easily edit Tddium’s permissions on your GitHub account on your User Settings page.

We look forward to your feedback at support@solanolabs.com.

Cheers,

The Solano Labs Team



Speeding Up our Test Suite: From 2.5 hours to 20 mins with Solano Labs

by Drew Blas, Software Engineer, Chargify.com

At Chargify we rely heavily on automated testing to ensure that we always maintain a working app. With so many customers and a heavily utilized API, it’s critical that we maintain complete backwards compatibility and ensure we don’t impact existing customer operations. That’s why our test suite consists of thousands of tests for gateway interactions, workflows, and response formats. Unfortunately, it also made for painfully slow development. By integrating Solano Labs’ TDDium, we were able to reduce our complete test suite run from 2.5 hours to 20 minutes. This incredible improvement helped to promote a radical change in our testing attitudes while keeping our deployment cycle as fast and agile as possible.

What we were doing before Solano Labs’ Tddium

Our previous, homegrown Continuous Integration environment relied on a single server to do each build. Unfortunately, because of the intense load from our test suite, we had numerous limitations. We could only run a single build at a time, we often couldn’t see results until it was complete, and debugging test failures was a major pain (which involved SSHing into the server to try to track down environment-specific issues). Worst of all, waiting for several hours between builds meant we weren’t able to get quick feedback about broken tests, which prevented us from truly making good use of our tests during development.

These factors meant we not only spent a lot of time ‘fixing’ the build, but that we often skipped steps or didn’t test properly because of time constraints. Thanks to TDDium, that process has been greatly streamlined so that we can do TDD the way it was meant to be done.

Faster Testing with Solano Labs

Of course, with a codebase as big as ours, switching to TDDium was not instantaneous. We found a lot of areas in our tests that had to be improved or refactored. Some of these changes were due to the different runtime environments, but most had to do with brittle tests that did not respond well to the randomized distributed testing model. Many of our tests were order dependent or conflicted with other tests and couldn’t be reliably run in parallel. However, TDDium did a great job of giving us the tools needed to re-create and fix these issues. There’s extensive logging available and even an option to pinpoint the exact tests and ordering used in a particular run. In the cases where we needed help, the support from Solano Labs was top-notch. They worked side-by-side with us on issues where we needed assistance and saved us even more time. All this helped us to decouple our tests and prepare them for highly distributed execution. The process was definitely worth it: we wound up with a much more robust test suite that runs in a fraction of the time!
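
As a simple illustration of re-running a specific ordering (Tddium aside), plain RSpec lets you pin the randomized order to a reported seed; the seed and spec file names below are made up:

$ rspec --seed 1234 spec/models/subscription_spec.rb spec/models/invoice_spec.rb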

Having a test suite that runs quickly means we can get much faster feedback about changes that we make.  Instead of waiting until the next day to see if a simple change ‘breaks the build’, we can instead update, test, package, and deploy at a much more rapid pace.  Reducing the turnaround time on getting code to production is truly a lynchpin in any agile operation!

Easy database support!

Surprisingly, one of the easiest integration points with TDDium is the database support. The connections are mostly automatic and it’s easy to specify multiple databases in specific versions. Our tests rely on Redis, Mongo, & MySQL, and all worked basically out of the box! External service integration (like Campfire & Github) has also helped to make our lives easier and improve our team’s workflow.

Help with Rails Upgrades

Finally, we’re especially thankful to have set up TDDium before we started on our major upgrade from Rails 2.3 to 3.2. It was a huge undertaking, and the ever-watchful eye of our tests is a big reason for our success. Constantly running builds and getting rapid feedback about major architectural changes allowed us to keep making progress while not affecting our day-to-day development. TDDium handled the constantly changing configuration of our app with aplomb!

Ultimately, TDDium has given us a wonderful collaborative environment for running our tests AND provided an order of magnitude improvement in build time to keep our development team happily coding away. Thanks!


CoachUp Now Coaching on CI Best Practices

by Arian Radmand CTO @ www.coachup.com

The CoachUp engineering department is constantly refining its development process for the sake of efficiency. I wanted to spend some time talking about one change we’ve recently made that I really feel has maximized our development speed: setting up Tddium’s continuous integration environment (solanolabs.com).

I should begin by talking a bit about our development process at CoachUp. First, we attempt to get new features into production as quickly as possible. We push new releases of our code into production every single day. To ensure that we do not break existing functionality, we put a heavy emphasis on testing. More specifically, we put a heavy emphasis on automated testing. We’re a Ruby on Rails shop, so we leverage the great testing tools available in the Rails ecosystem (primarily RSpec, but a variety of other utilities as well). Although none of our engineers subscribe to TDD fully, we are meticulous in ensuring that every piece of functionality that enters our codebase is accompanied by a corresponding set of tests. We have operated in this way since the company was started.

As a result of our practices, the test suite has grown larger each day, which has been both a blessing and a curse. The engineering team has a policy of not pushing any release to production if our test suite is not completely green. It was great that we were maintaining adequate test coverage across our application, but with a large test suite the problems we ran into were twofold:

1. The amount of time it took to run our test suite constantly increased
2. As it took longer and longer to run our test suite, the frequency of test suite runs began to decrease (after all, we were running everything manually)

Our solution: Tddium continuous integration environment.

For those of you unfamiliar with Tddium and continuous integration, I’ll explain a bit about how we’ve integrated Tddium into our dev process to make us faster and more efficient. At CoachUp, we used Tddium to address the two problems mentioned above. We signed up for a Tddium account, which involved hooking up our GitHub account and selecting a plan in accordance with the size of our test suite. After we were set up, the rest was really effortless!

From our perspective, we basically just develop as normal: create a new feature branch from our GitHub repository, develop, push to GitHub, issue a pull request, and merge when ready. In the background, Tddium works on our behalf to do several things. It monitors our GitHub repository, and when a new feature branch is pushed up, Tddium springs into action by grabbing the new branch and cranking through our test suite. We then conveniently get an email report detailing the results of the test run. If the new feature branch introduces a regression bug, we know about it immediately and can fix it well before it even has a chance to become a problem. Further, Tddium makes it super easy to switch plans and add/remove workers based on your scaling needs.

For us, the move to Tddium greatly cut down on development time by letting us really step on the gas and develop at a fast pace, knowing all the while that we would be notified immediately if we introduced any regressions into the codebase.

We’re constantly trying new tools and processes here at CoachUp to make us more efficient, and Tddium has been one of our biggest successes.

Bottom line: if you’re looking for a quick, easy, non-intrusive way to speed up your test suite and make your dev team more efficient, definitely check Tddium out!

This is what has worked for us. What tools have others used to boost the efficiency/productivity of their team?

See the post live on their site: http://engineering.coachup.com/continuous-integration/


Dr. Testlove or: How I Learned to Stop Worrying and Love Automated Testing* by Brent McNish

(A great post from one of our joint Sauce Labs / Solano Labs customers at www.deliberator.com. Thanks Brent for sharing! Congrats guys on your Beta Launch!)

________

* with apologies to Stanley Kubrick

I’m the co-founder and CTO of Deliberator, a new social network for ideas. Deliberator brings people together to create, debate and propagate solutions to the complex problems of the day. Join the debate at www.deliberator.com.

Pre-testoric

The image of the head-down coder hacking away Tasmanian Devil-like, paying – at best – lip service to writing tests is a thankfully less and less accurate cliché these days. But that wasn’t always the case. These guys (and girls) used to be everywhere. They didn’t get into development to do testing! Look, the code works! Job done. Next feature, bring it on!

Yes, once upon a time the “who needs automated tests” dinosaurs ruled the earth. And I was one of them.

I began my web development career in a large IT consultancy. We had testers, and test managers, and elaborate test plans. They would catch the bugs. It wasn’t a developer’s job to test.

Then I left to co-found my first (bootstrapped) startup, as the sole developer, with a single co-founder. And in Bootstrapped-Startup-Land you don’t have testers, and test managers, and test plans. There is just you. And your code.

And every line of code you write, someone, somewhere in another startup is also writing a line of code, and they might have the same idea as you. And they’re going to launch their feature before yours. And they’re going to beat you.

So each line of code is precious. And you don’t want to “waste” it on a test.

So I still wasn’t interested in testing.

When each feature was completed I manually tested it, then my co-founder tested it. Then I fixed any bugs. Then we both tested it again. Then, when it worked, I moved on to the next feature.

Then things that had worked would break. So I’d go back and make them work, then move on. Then, later, they broke again. A mantra began in my head, quietly at first, then with increasing volume: “Write some tests, write some tests…” But where to find the time with all this bug fixing to do….

Yes, that way madness lies.

Unit-ed we stand

So I began writing tests. This was a baptism of fire as, of course, there was now a large backlog of untested functionality to tackle. But I gritted my teeth, girded my loins (whatever that involves…), dove in and began writing unit tests. And, you know, a funny thing happened…

Slowly, very slowly, assertion by assertion, I learned to love testing.

I began to take actual pleasure (pleasure! imagine that!) in crafting a test then seeing the little green icon in my IDE ping to life when it passed. Knowing that my new feature was fit to go live. And more importantly, that I hadn’t broken something else in the process.

So, this was job done right? Sure, this didn’t test the front-end. But that’s what humans are for right?

Wrong.

The extra confidence, and speed, I gained from the unit tests simply seemed to manifest itself in more front-end bugs. Dammit!

Seleniummmm…

 Then I discovered Selenium.

This was Selenium 1 (aka Selenium RC) so not the most stable and robust framework ever. We would often see a test fail then pass immediately after without any change to the code or data in between. Hmmm……

Even so, automating browser tests seemed like magic. It was mesmerising to watch the tests running. A never-tiring invisible hand filling in form fields and clicking buttons. Ok, maybe I’m just easily mesmerised.

It was fortunate that I found watching the tests so entertaining, because boy were they s-l-o-w. A full run would take around 90 minutes. It also sent the CPU fan on my laptop crazy and made using it for any other purpose at the same time a painful ordeal.

The upshot of this was that I didn’t run the selenium tests very often. Which in turn meant that they grew more and more out of date. Which in turn meant that I was even less likely to run them….

So we ended up falling back on manual browser testing again. Doh!

Hot Sauce

But, wait, what’s that sound? Enter stage left, our hero on a white horse….. it’s Sauce Labs!

Yes, I remember distinctly the day I came across the Sauce Labs website. I instantly Skyped my co-founder. “Praise the Lord!” I exclaimed, or words to that effect. “Selenium is re-born!”

And for us it really was.

After a very small amount of painless integration, there they were, our browser tests running in the cloud. Sweet!
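
For anyone wondering what that “very small amount of painless integration” actually looks like, here is a minimal sketch using the selenium-webdriver Ruby gem, assuming SAUCE_USERNAME and SAUCE_ACCESS_KEY are set in the environment; the capability values and test name are made up:

require 'selenium-webdriver'

# Describe the browser/OS combination we want Sauce to provide.
caps = Selenium::WebDriver::Remote::Capabilities.firefox
caps.version  = '10'
caps.platform = 'Windows 2008'            # hypothetical; any Sauce-supported platform works
caps[:name]   = 'Deliberator smoke test'  # label shown in the Sauce dashboard

# Point the remote driver at Sauce's hub instead of a local browser.
driver = Selenium::WebDriver.for(
  :remote,
  :url => "http://#{ENV['SAUCE_USERNAME']}:#{ENV['SAUCE_ACCESS_KEY']}@ondemand.saucelabs.com:80/wd/hub",
  :desired_capabilities => caps
)

driver.get 'http://www.deliberator.com'
puts driver.title   # quick sanity check that the page loaded
driver.quit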

The test sweet…ahem suite still took a long time to run but it was now fire and forget. Just kick off the tests and get on with my normal business.

Although I did kinda miss being able to fry eggs on my Macbook when the tests were running locally. Those were damn good eggs!

Almost as big a deal as being able to run our tests in the cloud was that we now had a complete record of every Selenium test run, including a video of the test running!

Add to this the ability to do ad-hoc cross browser/OS testing with Sauce Launcher, and test against our local dev build with Sauce Connect and it’s fair to say we were pretty ecstatic!

This was by far the best situation we’d been in test-wise and the quality of the code showed it.

Carpe Tddium

But, you know, I’m hard to please.

My two biggest remaining niggles were the speed of the Selenium test suite execution, and the fact that our unit and Selenium test results weren’t integrated.

I’d come to accept these limitations, until…

What’s this, the rumble of horses’ hooves again? Here comes the second hero of our piece, Tddium, charging into the fray!

Tddium’s claim to completely lift the testing burden – unit and Selenium – into the cloud, and to run both in parallel, blew me away. So much so that I was dubious as to how well it would work.

The answer…. very well indeed!

Now add in Tddium’s Github integration and Continuous Integration support and…. wow… just….wow.

I am currently running 8 Tddium workers in parallel and the runtime for my complete suite of unit and Selenium tests is down from around 2 hours to 15 minutes.

This has been a game-changer in my development routine. I’m now much more ready to take risks and try stuff, knowing I can get such quick and comprehensive test feedback.

Continuous Inspiration

So yes, my conversion is now complete. From throwing code over the wall to ‘those tester people’, to being forced by necessity to slowly embrace automated testing, to now realising the full potential of automation with Sauce and Tddium, it’s been quite a ride. And it’s not finished yet…

I still know that someone, somewhere in another startup, is still writing that line of code, and they might have the same idea as me. But now I’m not worried that they’re going to launch their feature before us. And they’re not going to beat us.

Unless….they’re using Sauce and Tddium too….

Dammit!


Tddium for JRuby

Here at Solano we’ve run over 13 million tests since Tddium launched.  We hear on a regular basis that a fast, automatically managed continuous integration platform changes the way our users develop software.  We’ve also heard from folks who want the power of on-demand testing and CI with Tddium but use JRuby.  We’re therefore pleased to announce a private beta for JRuby on Tddium.  You can now run your tests on JRuby in either Ruby 1.8 or Ruby 1.9 mode, together with all of the frameworks and backend services we support for MRI and REE.

Our JRuby support is in private beta today, and will be rolled out to all of our users in the near future. If you’re interested in running your tests on JRuby, enter your information here or shoot us an email at info@tddium.com and we’ll get you set up as soon as we can.


Usability Enhancements to the Tddium CLI

We’re happy to announce some changes to the “tddium” command — the main CLI interface to Tddium.

To pick up the changes, run “gem update tddium” to get version 1.4.1 or later.

Watch the video tour:

1. “tddium run” – Automatic Suite Setup and Testing

TL;DR: “tddium run” automatically creates a suite (set up for CI) for the current branch.  No need to run “tddium suite” manually.

Tddium is built around the concept of test “suites” — the test files associated with a repo and a branch, along with other configuration data.  When we set out to build a CLI, we made the “tddium suite” command the first step a new user ran to create and configure a test suite, followed by “tddium spec” to start tests (back when we only supported RSpec).  The “tddium suite” command is the one-stop suite setup and configuration utility — it creates new suites, and lets you edit suite settings.

We soon discovered that many users who follow the common topic-branch (or feature-branch, or git-flow) methodology had to go through the suite setup procedure often, sometimes many times a day.  They would simply accept the defaults “tddium suite” automatically determined – for the test pattern used to select tests, the Ruby version, and the CI origin URL – and then wait for the suite update to persist in their Tddium git repo (“your git repo is being prepped”) before starting tests.

We also found that new users were confused by the output and prompts from “tddium suite”.  Much of the copy produced by “tddium suite” was written before www.tddium.com had much content, so it had to serve as both utility and HOWTO.

So, we renamed “tddium spec” to “tddium run”, and made it a whole lot smarter:

  1. Automatically creates a new suite (and configures it for CI!) or chooses an existing one with sensible defaults.  To view or configure the suite, use the “tddium suite” command as before.
  2. Waits for your Tddium repo to be set up and automatically starts tests when it’s ready.
  3. Has better formatted warnings and status messages.
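
Put together, a topic-branch workflow with the new CLI looks roughly like this (the branch name is just a placeholder):

$ gem update tddium                   # version 1.4.1 or later
$ git checkout -b my-topic-branch
$ tddium run                          # creates or picks the suite, waits for repo prep, then starts tests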

2. “tddium web” – Open the latest session in your browser

Instead of cutting and pasting a URL for a manual run from the CLI, you can use “tddium web” to automatically open your latest build in a browser.

3. Shared login across repos

Tddium used to require you to log in on the CLI once per repo – no more!

Now, your login is valid across all of your git repos.

Enjoy!

We’re busy working on more usability enhancements based on feedback from all of our great customers.

Don’t hesitate to send us questions, comments, or suggestions.

- The Tddium Team


Heroku Continuous Deployment

A few weeks ago, we rolled out preliminary support for automatic code coverage collection and custom post-build tasks.

Over the coming weeks, we’re rolling out better UIs in front of these features, but if you’re impatient, and you’re up for using our sample rake task, read on for end-to-end continuous deployment.

I’ll describe how we use post-build tasks and environment variables to implement continuous deployment of one of our own apps into Heroku, including running migrations.

Note: If you currently use Tddium’s push-on-pass functionality, this approach replaces it.

Step 1: Setup Environment Variables

The first step is to set ephemeral environment variables in Tddium containing sensitive parameters, like your Heroku app name and credentials.

$ tddium config:add account HEROKU_EMAIL    my_heroku_login_email@example.com
$ tddium config:add account HEROKU_API_KEY  my_heroku_api_key
$ tddium config:add account HEROKU_APP_NAME my_heroku_app_name

Tddium’s environment variables allow you to pass this sensitive information to your tests and the post-build hook that we’ll create – without having to check these in to your repository.

You can find your Heroku API key by logging in to your Heroku Account page.

Step 2: Install the Post Build Task

We’ve written up a sample post-build task that will push to Heroku automatically (gist). You can customize this task as you need. Over the next few weeks, we’ll be rolling out a more streamlined UI to make post-build configuration much simpler.

def cmd(c)
  system c
end

namespace :tddium do
  desc "post_build_hook"
  task :post_build_hook do
    # This build hook should only run after CI builds.
    #
    # There are other cases where we'd want to run something after every build,
    # or only after manual builds.
    # Use next rather than return here: this is a Rake task block, where
    # return would raise LocalJumpError.
    next unless ENV["TDDIUM_MODE"] == "ci"
    next unless ENV["TDDIUM_BUILD_STATUS"] == "passed"

    dir = File.expand_path("~/.heroku/")
    heroku_email = ENV["HEROKU_EMAIL"]
    heroku_api_key = ENV["HEROKU_API_KEY"]
    current_branch = `git symbolic-ref HEAD 2>/dev/null | cut -d"/" -f 3-`.strip
    app_name = ENV["HEROKU_APP_NAME"]
    push_target = "git@heroku.com:#{app_name}.git"

    abort "invalid current branch" unless current_branch

    FileUtils.mkdir_p(dir) or abort "Could not create #{dir}"

    puts "Writing Heroku Credentials"
    File.open(File.join(dir, "credentials"), "w") do |f|
      f.write([heroku_email, heroku_api_key].join("\n"))
      f.write("\n")
    end

    File.open(File.expand_path("~/.netrc"), "a+") do |f|
      ['api', 'code'].each do |host|
        f.puts "machine #{host}.heroku.com"
        f.puts "  login #{heroku_email}"
        f.puts "  password #{heroku_api_key}"
      end
    end

    puts "Pushing to Heroku: #{push_target}..."
    cmd "git push #{push_target} HEAD:master --force" or abort "could not push to #{push_target}"

    Bundler.with_clean_env do
      puts "Running Heroku Migrations..."
      cmd "heroku run rake db:migrate --app #{app_name}" or abort "aborted migrations"

      puts "Restarting Heroku..."
      cmd "heroku restart --app #{app_name}" or abort "aborted heroku restart"
    end
  end
end

Step 3: Authorize Tddium’s Worker Key

Run tddium account to get the Tddium worker key you need to authorize with Heroku, save the key in a file (e.g. tddium-worker-key.pub), and then run heroku keys:add tddium-worker-key.pub.
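
In terms of commands, that boils down to something like:

$ tddium account
# copy the printed worker public key into a file named tddium-worker-key.pub, then:
$ heroku keys:add tddium-worker-key.pub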

Step 4: Trigger A Build

That’s it! Push to your git repo to trigger Tddium CI, or trigger a build manually on your Tddium Dashboard.

When the push and migration complete, you’ll see a post_build_hook.log.

If you haven’t configured Tddium CI, read our getting started guide for more information.

If you don’t yet have a Tddium account, sign up now for a free trial!

Don’t hesitate to contact us at support@tddium.com for more information.

Update (10/25/2012): 

Our awesome customers  have pointed out a few gotchas and solutions:

  1. Make sure you have the ‘heroku’ gem in your Gemfile, or the above Heroku commands won’t work.  We’ll soon be automatically including the heroku toolbelt in our workers, but until then…
  2. If you’re using Rails 3.1+ and the asset pipeline, make sure you enable Heroku’s user-env-compile labs feature (see the example below).
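
For the first gotcha, that just means adding gem "heroku" to your Gemfile and running bundle install. For the second, the labs feature can be enabled from the Heroku CLI, roughly as follows (using the app name you set in Step 1):

$ heroku labs:enable user-env-compile --app my_heroku_app_name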

Update (6/21/2013):

The Heroku toolbelt package has been installed in test VMs for some time now, so it is safe to use it instead of the gem.

Update (7/15/2013):

If you are using Ruby 2.0, you will need to use Bundler.with_clean_env to run the Heroku toolbelt command.


Tests are Part of your Product

Check out the slides from my Railsconf 2012 Lightning Talk on Speakerdeck:

http://speakerdeck.com/u/tddium/p/tests-are-part-of-your-product-railsconf-2012

I’ll be expanding on these concepts and sharing my thoughts on how developer-written tests fit into a strong engineering culture over a series of blog posts in the coming weeks.  Stay tuned!


2 Million Tests!

I’m happy to announce that Tddium has just run its 2,000,000th test!

That represents well over 10,000 hours of test execution for rspec, cucumber, test::unit, spinach, turnip, and jasmine tests.

We’re also pleased to announce some great new integrations:

Stay tuned, we’ve got some exciting news coming over the next few weeks and months!


RabbitMQ, CouchDB, Build Controls & CCMenu

Update: CCMenu can now be configured from your organization’s chat notifications configuration dialog.

Happy Holidays, everyone!

Tddium’s been open to the public for a month now, and we’ve seen a great response and growth we can be proud of!  Tddium has run over 750k tests, with usage accelerating every day.

The Tddium elves have been hard at work on holiday presents for our loyal users.  We’re happy to announce these great new features:

New Integrations:

  • RabbitMQ integration:  now your tests have access to sandboxed live RabbitMQ instances.
  • CouchDB support (preliminary):  We asked if anyone used CouchDB, and you answered “Yes!”.  Enjoy!  Tddium’s CouchDB is compatible with apps that use Cloudant’s popular hosted DB service.
  • CCMenu/CCTray integration:  If you’re used to tracking build status with CCMenu, Tddium is here to fit your workflow.  Check out the CCMenu configuration link in your dashboard.  The CCMenu link delivers an XML document with the status of all of your builds, so you can also use it to drive your own status displays (see the sample feed after this list).
  • MySQL 5.5 support, and timezone support in all versions of MySQL
  • RepositoryHosting.com post-commit notifications:  Use Tddium with RepositoryHosting for full-featured Git repository and CI hosting.
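
For those building their own status displays: the feed follows the de-facto CCTray XML format, so a single suite entry looks roughly like the following (the name, label, timestamp, and URL are made-up placeholders):

<Projects>
  <Project name="my-app (master)"
           activity="Sleeping"
           lastBuildStatus="Success"
           lastBuildLabel="142"
           lastBuildTime="2011-12-20T18:04:00"
           webUrl="https://example.com/your-build-page"/>
</Projects>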

User Interface Enhancements:

  • Streamlined dashboard:  the most recently used suites are listed first, with space for inline controls and configuration
  • Build Controls:  start a CI build, stop a running build, bookmark your latest build
  • Support for multiple SSH keys:  the tddium gem now lets you authorize multiple SSH keys for pushing to Tddium.  You can either use a key you already have, or generate a new keypair.
  • Support for git submodules
  • HTML page capture for Spinach test failures
  • Better support for multi-byte characters in source and Rake files: many new pre-installed locales, with the default locale set to en_US.UTF-8.
  • Email notification controls:  suppress CI notification emails, for example if you use Campfire

We’re here to make Tddium more useful to you, so stay tuned for what’s coming down the pipe.

Thanks!

The Tddium Team

