The goal of this interview series is to inspire and help people transition their careers into a new or next experimentation-related role. In this edition, Ryan Lucht shares his journey. You can follow Ryan on LinkedIn, his website and Eppo.
I’m Ryan Lucht, and I just joined the experimentation platform Eppo as their Community Evangelist. Prior to joining Eppo, I spent 5+ years as a consultant at Cro Metrics working with brands (as you might guess by the name) more in the “conversion rate optimization” subset of the Experimentation space.
What is your current experimentation role and what do you do?
I’ve built my entire career on the hypothesis that I can help push the experimentation space forward if I: a) help drive influence from the top down by speaking to business leaders about the need for experiment-driven decision making, and b) help experimentation leaders overcome the cultural hurdles they face when trying to scale programs.
Both are incredibly hard missions that I’ll be working on for a long, long time, I’m sure.
Eppo is giving me a home to do that work, and we share a vision that we’ll make a lot of progress by fostering community and connections between experimentation leaders. When you come up against cultural challenges to scaling an experimentation program (vs. technical ones), it’s really hard for a product or a blog post to help you… but a great conversation with another leader who has “been there, done that” can be enormously helpful.
You recently changed roles (or are in the midst of changing). What made you look for something else? How did you approach your job hunt?
Conversion Rate Optimization was never totally the right umbrella for me – I’ve always been more interested in the meta-level of running experiments than purporting to be a valuable source of specific experiment ideas. While a lot of my most successful peers in the CRO space have bookshelves focused on UI/UX design or copywriting or persuasion, mine is a lot more about statistics, philosophy of science, and leadership/communication.
I was certainly looking for the opportunity to work with teams who are running hundreds to thousands of experiments a year. Many marketing teams run programs that are more modest in scale – dozens a year, perhaps – since you still need the ability to write code for experimental treatments and marketing teams typically don’t have dedicated engineering resources. (That might be a great fit for you if the primary work you’re interested in is the ideation of individual hypotheses!)
I can’t say I conducted any sort of formal job hunt… my move to Eppo was largely the result of keeping in touch with CEO Chetan Sharma over the last year and a half or so. I’ve always been excited about Eppo because I like the product, but even more importantly, Eppo has demonstrated a real commitment to the space and to doing experimentation right. The team is overwhelmingly made up of folks who have run successful experimentation programs before and it shows in everything from the product roadmap to the marketing. It said a lot that they were interested in creating a Community Evangelist role.
How did you enter the experimentation space? What was your first experimentation related role?
I was the first marketing hire at a small e-Learning subscription startup bootstrapped by the co-founders. The co-founders were experts in the space and put a lot of effort into producing content, so when I joined, there was a massive amount of organic traffic coming in the front door, but very few conversions to the paid product. This was right around the time that the Optimizely founders were publishing their book, and back then you could grab an Optimizely license for $50/mo or something equally ridiculous… I pretty quickly latched onto A/B testing as a key growth lever since we had plenty of data available, but needed to figure out how to guide these users on more of a journey beyond reading a blog post.
How did you start to learn experimentation?
I think a more interesting moment was when I started to un-learn a lot of what I had been led to believe in conversion rate optimization, and that was definitely when I read Nassim Taleb’s Incerto (especially the first two books, “Fooled by Randomness” and “The Black Swan”). I started to suspect that even carefully researched hypotheses were unpredictable enough to be virtual shots in the dark as far as the impact they would actually have. I also recognized online experimentation as being “convex”, as Taleb would call it – when you lose, you lose very little, but the upside is nearly unlimited… the cost to run an experiment is low, so the optimal approach is to run as many experiments as possible.
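To make that convexity argument concrete, here is a minimal toy simulation in Python (the payoff values and frequencies are purely illustrative assumptions, not data from the interview): because you only ship the winners, the downside of each experiment is capped at roughly the cost of running it, while the occasional big winner keeps paying, so the expected value of the portfolio grows with the number of experiments you run.

```python
# A rough sketch with made-up numbers: capped downside + long right tail
# rewards running more experiments, because losers are simply not shipped.
import numpy as np

rng = np.random.default_rng(42)

def portfolio_value(n_experiments, cost_per_experiment=1.0):
    """Value of running n experiments and shipping only the winners.

    Hypothetical outcome model: most ideas do roughly nothing, some are
    mildly negative, and a rare handful are large wins.
    """
    lifts = rng.choice(
        [-2.0, 0.0, 3.0, 50.0],        # assumed payoff if each idea were shipped
        size=n_experiments,
        p=[0.30, 0.55, 0.13, 0.02],    # assumed frequency of each outcome
    )
    shipped_value = np.where(lifts > 0, lifts, 0.0).sum()  # losers never ship
    return shipped_value - cost_per_experiment * n_experiments

for n in [10, 100, 1000]:
    runs = [portfolio_value(n) for _ in range(2000)]
    print(f"n={n:5d}  mean portfolio value ≈ {np.mean(runs):8.1f}")
```

Under these toy assumptions the average payoff per experiment is positive even though most individual ideas lose or do nothing, which is the whole argument for volume.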
I started pulling all sorts of data from Cro Metrics’ database (I had tens of thousands of experiments to look at!) and, sure enough, found a shocking lack of correlation between basically anything and win rate. No strategist on the team turned out to be particularly smart or prescient, and no company had better ideas on average. My thinking got pretty extreme for a while – what was the value of building and utilizing a fancy prioritization model if the famous $100M Bing ads tweak never would’ve gotten prioritized using it?
Since then, a number of folks have turned a critical eye to just how bad we actually are at guessing. I don’t remember exactly who compiled the data, but I know somebody came up with the estimate that experts guess A/B test outcomes correctly ~60% of the time. Marginally better than a coin flip, I guess.
How do you apply experimentation in your personal life?
I don’t purport to be an expert in n=1 experiment designs, but I do love tinkering with running little switchback experiments or something similar – usually health-related. I recently bought an Eight Sleep, the $2000 cooling mattress pad that’s a favorite of all sorts of social media influencers, and returned it after it failed to exceed my Minimum Effect of Interest as measured by my Oura ring sleep score 🙂
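For readers curious what that kind of n=1 check can look like, here is a minimal sketch (the nightly scores, on/off schedule, and threshold are all hypothetical, not Ryan’s actual Oura data): alternate the gadget on and off across nights, compare average sleep scores, and only keep it if the difference clears a pre-declared minimum effect of interest.

```python
# Toy n=1 switchback analysis with invented numbers.
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical nightly sleep scores under each condition (alternating nights)
scores_on  = np.array([78, 81, 74, 80, 77, 79, 82, 76])   # gadget on
scores_off = np.array([77, 80, 75, 79, 78, 76, 81, 77])   # gadget off

min_effect_of_interest = 3.0                # assumed threshold, in score points
observed_diff = scores_on.mean() - scores_off.mean()

# Simple permutation test: shuffle the on/off labels to see how often a
# difference at least this large shows up by chance alone.
pooled = np.concatenate([scores_on, scores_off])
n_on = len(scores_on)
perm_diffs = []
for _ in range(10_000):
    shuffled = rng.permutation(pooled)
    perm_diffs.append(shuffled[:n_on].mean() - shuffled[n_on:].mean())
p_value = np.mean(np.abs(perm_diffs) >= abs(observed_diff))

print(f"observed difference: {observed_diff:+.2f} points (p ≈ {p_value:.3f})")
keep = observed_diff >= min_effect_of_interest and p_value < 0.05
print("keep it" if keep else "return it")
```

With a pre-committed threshold like this, a small-but-real improvement that doesn’t justify the price still gets returned, which is the spirit of the decision described above.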
What are you currently doing to keep up with the ever-changing industry?
There’s always a world of conversation going on on LinkedIn, which (as somebody prone to social media addiction) is usually my favorite place to start a chat. But nothing beats actually grabbing time with other experimentation leaders in-person or on Zoom! I’ve met so many mentors in our space who genuinely enjoy talking about experimentation and have been so generous in opening their calendars to me. I especially enjoy keeping in touch with academia, since there are a number of researchers local to me in Boston to chat with.
What recommendations would you give to someone who is looking to join the experimentation industry and get their first full-time position?
I have three go-to book recommendations that I always share with folks who are interested in starting experimentation:
- Experimentation Works by Stefan Thomke – the business case for running experiments
- Trustworthy Online Controlled Experiments by Kohavi, Tang, and Xu – the “operator’s manual” from the frontlines of running tens of thousands of experiments
- Statistical Methods in Online A/B Testing by Georgi Georgiev – an invaluable desk reference on all things frequentist statistics
My other piece of advice, though, would be this: in almost every role and organization you could work in, your soft skills (communication, managing up, collaboration) are going to end up mattering more than your technical skills (stats knowledge, programming languages, etc.).
One of my favorite books on the soft skills front, even though it is written for an audience of consultants, is “The Trusted Advisor” by David Maister et al. I had a great conversation with Carlos Hernandez, who leads Customer Success at Eppo, and he said something that stuck with me – “many data teams are basically a service desk [to the rest of the business], but they want to be thought partners”. The Trusted Advisor is a book all about how to become more than a “vendor” (even if an internal one) and build yourself into, well, a Trusted Advisor.
How do you think experimentation will develop (in the next 10 years)? How will AI change how experimenters work?
I guess the obvious prediction is that generative AI will give us a lot of experimental treatment ideas, maybe even help us code them too… I think the bigger question is how will experimentation change how AI works?! There’s a very good case to be made that experimentation infrastructure is necessary to create good AI products. Offline evaluation of models has historically been insufficient – what looks like improvement gained in simulations often disappears when deployed to the real world. This is something Eppo has been starting to focus on: https://www.geteppo.com/blog/ab-experiment-infra-is-ai-infra
Otherwise, I think the most important change is continuing the push for adoption – we’re still very early on the adoption curve for experimentation, maybe in the Early Majority at best. We’ll see a continued move away from client-side implementations of experimentation as they become more costly, a trend I wrote about in a Medium post a few months ago. I also think fewer marketing teams will “own” experimentation vs. being end-users of experimentation since the advantages of Data teams owning a centralized experimentation platform are large.
Is there anything people reading this can help you with? Or any parting words?
Yes! I know a lot of marketing leaders who run experiments, but if you consider yourself more of a product or data professional, I’d love to chat and pick your brain about your biggest challenges and frustrations in leading your experimentation program 🙂 Shoot me a DM on LinkedIn or an email at ryan@geteppo.com. If you’d be willing to spend 15 minutes on the phone, your perspective would be invaluable to me.
Which other experimenters would you love to read an interview by?
Lukas Vermeer (Booking.com, Vista), Sven Schmit (Stitch Fix, Eppo), Stefan Thomke (Harvard Business School), Julian Runge (Meta, Northeastern University)
Thank you, Ryan, for sharing your journey with the community.