A Post-Internship Look at RTR

This post was written by our awesome colleague, Maude "It's pronounced 'Mode'" Lemaire, whom we can't wait to welcome back after she finishes school.

I left New York City a few hours ago with every intention of returning. Yesterday marked my final day as a tech intern at Rent the Runway, but I still feel as though I'll be back on Tuesday, grabbing an iced coffee from the kitchen and tackling some new bugs. Needless to say, it'll be strange heading to class bright and early and hitting the books once more.

I spent thirteen weeks working alongside some of the most insightful engineers at Rent the Runway's SoHo offices. In just three months, I learned more than I would in a full semester of university. Pat (Newell) & John (Holdun) taught me about writing efficient JavaScript, CSS best practices, and using Backbone to solve just about every problem. After a few weeks, I had developed a decent understanding of Ruby where I'd previously had none whatsoever.

From my experience this summer, I learned most from the code review process. At Rent the Runway, when you're working on a new feature or fixing a bug, you start on a local branch. When you think it's all good, you make a pull request to merge your changes into the master branch. At that point, your peers review your code. They'll comment on syntax, on a block of code you can (sometimes) reduce to a single line, and on the bigger picture of your solution; sometimes it turns into a big discussion about how your code will scale and evolve with future features on the horizon. Although it might seem harsh at first, you have to go into the code review process with an open mind and hope to come out of every pull request a better programmer than you were before.

Everyone's constantly talking about building a scalable, maintainable system. There are discussions about the best practices everywhere you turn in the office. Don't know how a system works? Open up your chat and ask someone you think might know. Don't know the specifics of Ruby syntax? Just turn around and ask someone! You'll find experts in a bunch of niches and it's an environment that makes it incredibly easy to learn a ton of new things. As an intern, it's a perfect opportunity to turn to your neighbor and ask them a million questions about what they know! I was able to learn about product, business strategy, marketing and buying in addition to tech just by having coffee chats with coworkers. In terms of work experience diversity, you truly can't beat the Rent the Runway team.

The number of women in tech at Rent the Runway is surprising. I wasn't prepared to see so many, coming from a university program where barely 9% of us are women and having worked an internship the previous summer where I was the only woman on my team. It was great to see that no matter what background a programmer came from, everyone was open to their ideas. No need to prove yourself (which I've had to do in certain cases) – you're instantly an important part of this dynamic group of hardworking engineers. Even though I was "just an intern," I found that I was the only one ever saying anything of the sort. To my team, I wasn't "just an intern;" by the end of the summer, I was given just as much work as my coworkers and was writing as much production code. There were certainly times when I seriously screwed up a pull request with a million rebasing-related commits and caused a fair share of JavaScript errors, but I'm happy to say I fixed more problems than I caused.

About a month before my original end date of August 22nd, I was sitting down with Jade, the team lead on our current project, when he asked when I'd be heading back to Montreal. At the time, I'd heard about Hack Week during the last week of August – a full five days of working on anything you wanted (so long as it made Rent the Runway better) and I desperately wanted to stay the extra week. With his support, my internship was extended by a week and I was able to stay and participate in the festivities. To top it all off, a few weeks later I was given a full-time offer! Beyond the perks of free rentals, unlimited vacation, and living in NYC, it's an opportunity I simply cannot pass up. Between the people in tech at RTR and the opportunity for fast-paced growth, Rent the Runway is a really (really) great place to work and you can count on me coming back after graduation.

Experimenting with Alchemy


A/B testing is a tool we rely on heavily to tell us whether new user-facing features, or improvements to existing ones, add business value.  It could be something as simple as the placement of a calendar on a page or as complex as an entirely new Checkout workflow.  In both cases, it's worth knowing whether these changes will affect the business positively or negatively.

In order to configure these experiments and to randomize which user receives which treatment (e.g. original Checkout page vs new Checkout page), we've written our own in-house tool.  In fact, it's gone through three incarnations.


First Generation

The first approach was fairly straightforward:

  1. A user logs into our website and a list of experiments is retrieved
  2. For each experiment, if the user has not yet been randomly assigned a treatment, they are assigned one, and the mapping is stored in a database
  3. Any time they visit the site in the future, this mapping is retrieved and determines which treatment they should receive
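The flow above can be sketched roughly as follows. This is a hypothetical illustration, not the actual code: the class and method names are invented, and an in-memory map stands in for the database table of per-user assignments.

```java
import java.util.Map;
import java.util.Random;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the first-generation approach: one stored treatment
// assignment per (user, experiment), created lazily on first visit.
public class FirstGenAssigner {
    // Stand-in for the database table mapping "userId:experiment" -> treatment
    private final Map<String, String> assignments = new ConcurrentHashMap<>();
    private final Random random = new Random();

    // Returns the user's stored treatment, assigning one at random
    // (a 50/50 split between A and B here) if none exists yet.
    public String getTreatment(long userId, String experiment) {
        String key = userId + ":" + experiment;
        return assignments.computeIfAbsent(key,
                k -> random.nextBoolean() ? "A" : "B");
    }
}
```

The per-user row is what makes the approach both flexible (assignments can be pinned or preserved when ratios change) and expensive (every user adds a row per experiment).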

This first attempt has both pros and cons:

  • Pros
    • Able to control treatment assignments to users on a fine-grained level
    • If the ratio of users that should receive treatment A versus B changes in the future, it does not affect existing users
  • Cons
    • Does not scale well once you have millions of users and hundreds of experiments
    • Each time a unique user visits the website, their assignments have to be loaded from the database and then cached
    • Doesn't handle experiments that should also apply to anonymous users who haven't logged in
      • Storing a row of treatment assignments for every session id isn't feasible

One could argue that fine-grained control of treatment allocation isn't needed for A/B testing; after all, it's supposed to be randomized.  With this argument in mind, the second incarnation was born.

Second Generation

With some new ideas of how the problem of A/B testing could be approached, the second version of our testing framework was created.  There were also additional requirements that the old system did not meet, such as supporting anonymous users.

The new approach was as follows:

  1. Experiments are configured by specifying what ratio of users should receive a given set of treatments
  2. These treatments are assigned to a series of 'bins'
  3. When a user or guest accesses the site, their information is hashed to a number, which is then assigned to a 'bin'
  4. Depending on what treatment was assigned to that 'bin', the user receives that treatment
    1. As an example, you could have the first 50 bins assigned to A and the second 50 bins assigned to B; if a user lands in bin 75, they receive B
    2. A given user will always hash to the same 'bin' number, which is also randomized by a seed value configured on each experiment
    3. To ensure that the same user always receives the same treatment, the user's userId is hashed.  In the case of an anonymous user, the sessionId is hashed
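A minimal sketch of the second-generation scheme, assuming 100 bins and a simple 50/50 allocation. The hash function and names here are illustrative stand-ins, not the actual implementation; the key property is that the same identity (userId or sessionId) plus the same per-experiment seed always lands in the same bin.

```java
// Hash a user's identity together with a per-experiment seed into one
// of 100 bins, then map bin ranges to treatments.
public class BinAssigner {
    private static final int NUM_BINS = 100;

    // Deterministically maps an identity to a bin in [0, NUM_BINS).
    // The seed randomizes bin placement differently per experiment.
    public static int bin(String identity, long seed) {
        int h = (identity + ":" + seed).hashCode();
        return Math.floorMod(h, NUM_BINS);
    }

    // Example allocation: bins 0-49 receive A, bins 50-99 receive B.
    public static String treatment(String identity, long seed) {
        return bin(identity, seed) < 50 ? "A" : "B";
    }
}
```

Because assignment is a pure function of identity and seed, nothing needs to be stored per user, which removes the scaling problem of the first generation and handles anonymous sessions for free.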

For the most part, the experiment service met the needs for configuring and running experiments, but it still left some things to be desired:

  • Being able to override what treatment a user is assigned, mainly for QA testing, which the old system allowed easily
  • Ease of configuration
    • There were a lot of quirks with how the experiments were configured and where they were stored
      • Experiments were stored in Redis, but in a separate instance on each host running the experiment service
      • Each time an experiment had to be configured, it needed to be configured on N machines
      • When configuring treatment allocations, each bin had to be explicitly assigned a treatment, rather than blocks of treatments being assigned at once
      • Whenever a new experiment needed to be added, an entry had to be hard-coded into a Java file
  • We wanted to open source the experiments service for the community to use and improve

As a result of these items, Alchemy was written: an open-source A/B testing framework that makes configuring experiments simple and runs on a time-proven RESTful framework, Dropwizard.

Third Generation

So, how did we do with our list of things to be desired?

1. Being able to override what treatment a user is assigned

  • This is now supported in Alchemy through 'treatment overrides'
  • Uses simple filter expressions to specify predicates like "user_id=7389332" for matching which users should receive a given treatment override
  • Should be used sparingly, namely, for QA testing purposes, since each 'treatment override' expression has to be evaluated any time a user retrieves their treatments
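As an illustration of the idea only (not Alchemy's actual expression parser), an override expression like "user_id=7389332" can be modeled as an attribute/value predicate evaluated against the attributes a user sends when retrieving treatments:

```java
import java.util.Map;

// Hypothetical evaluator for a single "attribute=value" override
// expression. A real parser would support richer predicates; this
// only shows why each override costs an evaluation per request.
public class OverrideFilter {
    private final String attribute;
    private final String expected;

    public OverrideFilter(String expression) {
        String[] parts = expression.split("=", 2);
        this.attribute = parts[0].trim();
        this.expected = parts[1].trim();
    }

    // True if the user's attributes satisfy the predicate.
    public boolean matches(Map<String, String> userAttributes) {
        return expected.equals(userAttributes.get(attribute));
    }
}
```

Since every configured override must be checked on every treatment lookup, keeping the override list short (QA accounts only) keeps the hot path cheap.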

2. Ease of configuration

  • Alchemy uses a simple allocation model
    • You no longer deal with bins; you deal with allocations of treatments
    • Treatments are allocated, deallocated, or reallocated in given amounts
      • For example, to assign all 100 bins, one could allocate 50 to A and 50 to B, and that's it
      • Reallocation allows you to reassign a portion of users receiving one treatment to another treatment without caring which 'bins' they were actually assigned to
  • Easy-to-use REST interface
    • By leveraging Dropwizard, which uses Jersey and Jetty, it's easy to spin up a REST service for configuring experiments
  • Experiments are stored in a single place
    • The current implementation supports MongoDB but can be easily extended to support other databases
    • Only one place where experiments are stored means only one endpoint to configure experiments on for an entire cluster of hosts
    • The database is read as little as possible -- all experiment configurations are cached
    • Caching is highly configurable
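The allocation model can be sketched as follows. This is a hypothetical illustration of the concept, not Alchemy's actual API: treatments claim blocks of bins, and reallocation moves bins from one treatment to another without the caller ever tracking bin indices.

```java
import java.util.Arrays;

// Sketch of block-based treatment allocation over 100 bins.
// A null entry means the bin is unallocated.
public class Allocations {
    private final String[] bins = new String[100];

    // Assigns `size` currently-unallocated bins to `treatment`.
    public void allocate(String treatment, int size) {
        for (int i = 0; i < bins.length && size > 0; i++) {
            if (bins[i] == null) { bins[i] = treatment; size--; }
        }
    }

    // Moves `size` bins currently assigned to `from` over to `to`,
    // without the caller knowing which bin indices are involved.
    public void reallocate(String from, String to, int size) {
        for (int i = 0; i < bins.length && size > 0; i++) {
            if (from.equals(bins[i])) { bins[i] = to; size--; }
        }
    }

    public long count(String treatment) {
        return Arrays.stream(bins).filter(treatment::equals).count();
    }
}
```

For example, allocating 50 bins to A and 50 to B fills all 100 bins, and a later `reallocate("A", "B", 10)` shifts 10% of traffic from A to B in one call.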

3. Open source