
Testing a RubyMotion App With a Rails Backend


Or, “those scars look really neat, where did you get them?”

Learning how to test RubyMotion apps can be daunting if you’re coming from a server-side programming background, especially one like Rails with a mature ecosystem of testing tools and community wisdom. Full-stack integration testing—firing up the entire system and putting it through its paces as a user would—can be particularly challenging.

We recently wrapped up an engagement with one of our favorite clients, MeYouHealth, during which we needed to build a way to sanely test a RubyMotion app backed by a JSON-serving Rails backend. I’ll cover both the strategies we tried and discarded and the approach that ultimately worked best for us.

Frank and Bacon

A brief RubyMotion testing aside: there are two libraries we use to write tests for our RubyMotion apps.

  • MacBacon is an rspec-esque library that ships with RubyMotion, which we use to write model and controller tests for individual components of the app.
  • Frank is a testing framework that fires up your entire app and walks through Cucumber-esque scenarios written in Gherkin, using Apple’s UIAutomation framework to drive the app from the outside. We use this to write black-box integration tests. A quick sketch of each style of test follows this list.
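To make the distinction concrete, here is a minimal sketch of each style. The model, expectations, and step text are invented for illustration rather than taken from a real app:

    # A Bacon model spec: small, fast, exercises one object in isolation.
    describe Task do
      it "is not completed by default" do
        Task.new(title: "Write tests").completed.should == false
      end
    end

    # A Frank feature: drives the whole compiled app through the simulator.
    Feature: Completing a task
      Scenario: Marking a task as done
        When I touch "Write tests"
        Then I should see "Completed"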

For both of these types of tests, we needed a way to mock out the JSON responses from our backend.

What Didn’t Work: Stubbing and VCR

Stubbing HTTP requests

Our first approach to making sure our tests didn’t depend on the backend was to simply stub out the HTTP responses in our Bacon specs using webstub. This worked, but it quickly became a pain to keep the stubbed responses in sync with the backend under active development. When they’d fall out of sync, debugging what was wrong in the tests wasn’t straightforward.
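For the curious, those specs looked roughly like this. The URL and payload are invented, and the stub_request/to_return calls reflect webstub’s API as we used it, so treat the details as approximate:

    describe TasksController do
      extend WebStub::SpecHelpers   # mixes stub_request into the spec context

      it "shows tasks returned by the API" do
        stub_request(:get, "https://api.example.com/tasks").
          to_return(json: [{ id: 1, title: "Write tests" }])

        # ...instantiate the controller, trigger the request, assert on the result
      end
    end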

Worse, while it’s technically possible to use webstub for Frank tests, doing so would have involved some significant contortions. Because Frank itself runs outside the app, the only way to stub is to add methods to the AppDelegate class and invoke them via Frank’s remote execution, once for every request we wanted to stub. Yikes.
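To give a sense of what those contortions look like: the app delegate grows a test-only stubbing hook, and every feature has to call it through Frank’s app_exec remote-execution helper before making a request. The stub_next_response method below is hypothetical, and the selector form app_exec expects can vary—this is a sketch, not working code:

    # In the app (test builds only): a hypothetical hook for stubbing the HTTP layer.
    class AppDelegate
      def stub_next_response(json)
        # wire the canned body into whatever HTTP client the app uses...
      end
    end

    # In a Cucumber step definition, repeated for every request a feature makes:
    Given(/^the task list is empty$/) do
      app_exec("stub_next_response:", "[]")
    end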

VCR-esque HTTP Recording

The next tack we tried was recording responses from the server once and playing them back, inspired by VCR. VCR itself doesn’t work in RubyMotion (as with most regular Ruby gems), so we used a custom recording proxy server written in Go with a thin Ruby wrapper for driving cassette use inside tests. This would record HTTP interactions with a local test server once, then replay them each subsequent time.
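The wrapper’s API below is entirely made up (the real one was project-specific), but it shows the shape of the approach: each scenario names a cassette, the proxy records against the live test server on the first run, and replays on every run after that:

    # Hypothetical Cucumber hook around the recording proxy; names are invented.
    Around("@cassette") do |scenario, block|
      RecordingProxy.use_cassette(scenario.name) do
        # First run: requests pass through to the local test server and are recorded.
        # Later runs: the proxy serves the recorded responses, no backend needed.
        block.call
      end
    end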

This worked, and let us run our Frank specs without a live backend. However, it suffered from the same skew between backend development and recorded cassettes. Changing the data model on the backend meant either some tricky hand-editing of the cassette files or setting the scenario up again and re-recording it from scratch. Time-sensitive data became tricky as well, since the timestamps were frozen in the cassettes once recorded. Not great.

What Worked: Remote Fixture Loading

First, we stopped trying to exercise real HTTP in our Bacon controller and model specs, instead favoring dependency injection to test components in isolation. Any testing of HTTP concerns was deferred to the integration tests.
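Concretely, that means a controller takes its API client as a collaborator it can be handed, so a Bacon spec can substitute a fake and never touch the network. Everything in this sketch—the fake, the accessor, the class names—is illustrative:

    # A fake that quacks like the real API client, returning canned data.
    class FakeTaskService
      def fetch_tasks(&callback)
        callback.call([Task.new(title: "Write tests")])
      end
    end

    describe TasksViewController do
      it "shows the tasks the service returns" do
        controller = TasksViewController.alloc.init
        controller.task_service = FakeTaskService.new   # injected; no HTTP involved
        controller.viewDidLoad
        controller.tasks.size.should == 1
      end
    end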

Next, to get our Frank features in shape, we built out a system to let us talk to a local instance of the backend and selectively load small scenarios for each individual feature to run against.

We’ve put together a pair of example applications to showcase this approach, along with instructions on how to get up and running. Note that while our example iOS app is written in RubyMotion, Frank works independently of how the app is written, so there’s no reason this can’t be adapted for Objective-C or Swift iOS apps.

How It Works

  • On the Rails side, we first run a script that starts Rails in test mode and then listens for commands on a named pipe (each piece of this flow is sketched after this list).
  • We then run the Frank tests on the client side. Each Frank feature that has a tag of the form @api_fixture_NAME will push a command over the named pipe for the Rails instance to load and run a scenario file with the specified NAME.
  • The scenario files are just plain Ruby files where we create the ActiveRecord objects that the feature requires.
  • After the scenario file is loaded, the feature runs normally, with the app talking over HTTP to the backend with no stubbing involved. Each feature is now running against a known set of data and working with the actual APIs that the server is exposing.
  • The test database is wiped at the start of each test, so as to provide a clean slate and prevent state from bleeding over from one feature to the next.
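Here is a compressed sketch of the moving parts. File names, paths, and helper calls are illustrative rather than lifted from the example applications mentioned above, but the flow matches the steps listed. On the Rails side, the listener looks something like this (DatabaseCleaner stands in for however you wipe the test database, and the test server itself is assumed to be running separately):

    # script/api_fixture_listener.rb (illustrative name): boots the Rails test
    # environment and loads a scenario file whenever a name arrives on the pipe.
    ENV["RAILS_ENV"] = "test"
    require File.expand_path("../../config/environment", __FILE__)

    PIPE_PATH = "/tmp/api_fixtures.pipe"
    system("mkfifo #{PIPE_PATH}") unless File.exist?(PIPE_PATH)

    loop do
      File.open(PIPE_PATH, "r") do |pipe|
        pipe.each_line do |line|
          DatabaseCleaner.clean_with(:truncation)   # clean slate for each feature
          load Rails.root.join("spec/api_fixtures/#{line.strip}.rb").to_s
        end
      end
    end

On the client side, a feature declares which scenario it needs via its tag:

    @api_fixture_user_with_tasks
    Feature: Viewing tasks
      Scenario: The task list shows existing tasks
        Then I should see "Write tests"

and a Cucumber Before hook pushes that name down the pipe (the exact tag accessor varies across Cucumber versions):

    # features/support/api_fixtures.rb
    Before do |scenario|
      tag = scenario.source_tag_names.find { |t| t.start_with?("@api_fixture_") }
      next if tag.nil?
      File.open("/tmp/api_fixtures.pipe", "w") do |pipe|
        pipe.puts tag.sub("@api_fixture_", "")
      end
    end

A scenario file, finally, is nothing more than plain ActiveRecord calls:

    # spec/api_fixtures/user_with_tasks.rb
    user = User.create!(email: "frank@example.com", password: "password")
    user.tasks.create!(title: "Write tests")
    user.tasks.create!(title: "Ship it", completed: true)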

Disadvantages

This isn’t all roses.

For one, it’s slow. This is a problem with any form of iOS testing, as the app needs to be compiled and launched in the simulator before the tests can run, and it’s exacerbated here by the time it takes for the server’s state to be set up and torn down in between features.

It also adds complexity to the testing process by requiring you to have the Rails environment up and running for the Frank tests to run at all, which in turn means some extra setup work on a CI environment like Travis.

So It’s The Worst Approach, Except For All The Rest

We’re overall pretty happy with this approach: it lets us do full-stack integration testing without tearing our hair out trying to manage a bunch of mocked state. It also does a great job of letting us know when the client’s expectations of the API have gotten out of sync with reality on the server side. Despite the downsides, it’s still a huge win in our minds.
