Our Process

How E-commerce Brands Increase Their Revenue Per User by 10%+ in 6 Months

In this guide, we walk through the exact system we've used across 250+ brands to generate over €500M in additional revenue — without spending more on ads.

The CRO Agency Behind 250+ of the World's Leading E-Commerce Brands

From high-growth startups to global leaders, we consistently drive measurable revenue increases.
Strauss
Koro
Sunday Natural
The Body Shop
Grover
Hello Fresh
Natural Elements
AG1
Bluebrixx
Woom
Hornbach
Tourlane
Congstar
Holy
Junglück
PV
Wunschgutschein
Motel a Miio
Ryzon
Kickz
The Female Company
Livefresh
Schiesser
Horizn Studios
Seeberger
Luca Faloni
Zahnheld
Snocks
Bruna
NatureHeart
Priwatt
Jumbo
NKM
Oceansapart
Omhu
Blackroll
1KOMMA5°
Purelei
Giesswein
T1tan
Buah
Ironmaxx
Waterdrop
Send a Friend
Fitjeans
Mofakult
Plantura
BGA
  • 4,000+ A/B Tests Run
  • 95% Client Loyalty
  • 52.6% Test Win Rate
  • €500M+ Revenue Generated
  • Proof
  • Who It's For
  • Core Concept
  • Our Story
  • Growth Protocol
  • Research
  • A/B Testing
  • Prioritization
  • Summary
  • Options
  • Case Studies
  • Benefits
  • How We Execute
  • Get Started
  • FAQ

SNOCKS

€100M+ annual revenue · 2019 → ongoing
Before
Started at €3M annually with low AOV and no testing.
After
Over 5 years, helped generate €8.2M in additional revenue. Now doing €80M+/year.

KoRo

€250M+ annual revenue · 2023 → ongoing
Before
No A/B testing, rising CACs.
After
Launched a testing program and generated €2.5M in just 6 months.

Kickz

€30M+ annual revenue · 2020 → ongoing
Before
Conversion rate at 0.59%, struggling to turn a profit.
After
Improved to 2.7% (3.6x growth) in 3 years. Got acquired.

Giesswein

€100M+ annual revenue · 2020 → ongoing
Before
Post-COVID revenue drop with no clear answers.
After
Generated €12.2M in additional revenue over 3 years.

OceansApart

Acquired by SNOCKS out of insolvency · 6 months
Before
Not profitable.
After
34 experiments in 6 months → 17 wins → €323,923 in extra monthly revenue.

Who This Is For

  • If you're running an e-commerce brand doing €5M+ a year and you've hit a ceiling you can't seem to break past — despite trying every growth idea under the sun — this is for you.
  • If you're tired of guessing what to change on your site, tired of copying competitors and hoping it works, tired of agencies who run a few tests a quarter and call it optimization — this is for you.
  • If your conversion rate or AOV feels low compared to what it should be, and you know there's revenue being left on the table, but you can't pinpoint exactly where — this is for you.
  • If you suspect that most "best practices" in CRO are just recycled advice that doesn't account for your specific customers — you're right.
  • If you've hired a marketer or agency before and saw no real return from it — that's not uncommon. Most testing programs underperform because they're built on the wrong foundations.
  • If you care about compounding, long-term profitability — not hacks, not quick wins, but a repeatable system that gets stronger over time — then stick around.

Who This Is Not For

  • If you're chasing overnight results instead of building a system that compounds — this isn't for you.
  • If you'd rather copy a competitor's homepage than run a real experiment — this isn't for you.
  • If you see CRO as a cost line instead of a profit lever — this isn't for you.
  • If your store is doing under €500K/month, you likely have bigger levers to pull before a structured testing program becomes your best investment.

Here's the Truth

You can achieve predictable, compounding revenue growth if you systematically optimize how strangers experience your brand for the first time.

However, most e-commerce brands don't have the research depth, testing velocity, or prioritization rigor to do this well. They run a handful of tests, hope something sticks, and call it optimization.

We've spent the last five years building a system that solves this — specifically for e-commerce brands doing €5M+ a year.

How We Got Here

2019
We're Samuel Hess and Fabian Gmeindl. We founded DRIP Agency, and what started as an obsession with e-commerce optimization turned into one of the leading CRO agencies in the world.
SNOCKS — The Beginning
It started in 2019 with a comment on a LinkedIn post. SNOCKS was doing €150K/month at the time. They gave us something most agencies never get: full access. Their dev team, their analytics, their design files — everything.
Full Access
We weren't sending decks. We were testing every part of the business.
450+ Experiments
Over 5 years, we ran 450+ experiments for SNOCKS. We saw how compounding CRO actually works at scale — which ideas hold up under real traffic, how small changes affect big revenue, and how to build a system that runs dozens of tests without breaking things.
€3M → €80M+
SNOCKS grew from €3M to €80M+ annually. They didn't just keep working with us — they became one of our earliest investors.
Expanding
Every major e-commerce brand in Germany started asking what we did and how we did it. So we brought the same system to brands worldwide.
4,000+ Tests
Along the way, we documented over 4,000 A/B tests, published scientific research, and appeared on the top e-commerce podcasts in the space.

The DRIP Growth Protocol

The money you make from customers comes down to two metrics: Conversion Rate (CR) and Average Order Value (AOV). Everything else is a derivative.

We've broken the growth of these two metrics into a formula:

Revenue Growth = Q × R × S

where Q = Quality of Tests, R = Rate of Testing, and S = Success Rate.
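To make the multiplication concrete, here's a minimal sketch — our own simplified reading of the formula, with invented numbers, not client data: the revenue you add each month is roughly the number of tests you ship, times the share that win, times the average uplift a winning test delivers.

```python
# Minimal sketch of the Q x R x S intuition. Illustrative numbers only --
# not client data and not DRIP's internal model.

def expected_monthly_uplift(tests_per_month, win_rate, avg_uplift_per_win):
    """Approximate relative revenue uplift added per month."""
    return tests_per_month * win_rate * avg_uplift_per_win

# A slow sequential program: 1 test/month, 30% win rate, 2% average uplift per win.
slow = expected_monthly_uplift(1, 0.30, 0.02)   # 0.6% per month

# A parallel, research-driven program: 8 tests/month, 55% win rate, same 2% per win.
fast = expected_monthly_uplift(8, 0.55, 0.02)   # 8.8% per month

print(f"sequential: {slow:.1%}/month vs. parallel: {fast:.1%}/month")
```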

Each variable maps to a dedicated method. Together, they form the DRIP Growth Protocol.

Quality of Tests
Predictive Consumer Research
How well your funnel aligns with what customers actually want — consciously and subconsciously.

Rate of Testing
Rapid A/B Testing
How quickly you can test, learn, and ship improvements.

Success Rate
Iterative Prioritization
How consistently you pick the right tests — the ones that actually win.

Now let's break down each method — the old way most brands do it, the new way, and why the difference matters.

Quality of Tests

Method 1: Predictive Consumer Research

The Old Way

Most brands optimize based on "best practices," competitor copying, or gut instinct. Someone sees a feature on a competitor's site, says "we need that," and it gets built without any deeper analysis.

Or they hire an agency that runs generic heuristic audits — the same checklist applied to every brand regardless of audience.

Result
Low test win rates. Generic changes that don't move the needle. Wasted design and development resources. The feeling of "we're testing, but nothing's really happening."
The New Way

Start with your actual customers. Before touching a single test, build a deep psychological profile of who's buying from you, what drives their decisions, and where your funnel is losing them.

Use AI to analyze thousands of data points — reviews, surveys, social comments, competitor sites, forums — anywhere customers share what they love, hate, or wish was different. Then map those insights against every step of your funnel.

Result
Every test is grounded in real customer behavior. You're not guessing what might work — you're testing hypotheses backed by data about what your specific customers actually care about. Win rates go up. Test impact goes up. Compounding starts faster.

Psychological Drivers unlock high-impact tests that generic audits miss

We built our own research software — the DRIP Research Hub — that turns raw customer data into structured insights: buying motivations, psychological drivers, category entry points, brand perception mapping, emotional journey mapping, and feature extraction.

Proof — Kickz

Our Research Hub identified that status and belonging were the top two psychological drivers for Kickz's customers. We also spotted a significant drop-off on product collection pages. Based on that, we hypothesized that labeling popular products with "Hot" badges would increase revenue per visitor — nudging users toward products that signal social status and belonging.

+8% conversion rate, +6.57% AOV, +€187,610/month.

Category Entry Points (CEPs) reveal the triggers that bring strangers to your brand

CEPs are the specific situations, feelings, and needs that cause someone to seek out a product like yours. The more of these your funnel addresses, the more strangers you convert.

We identify CEPs by answering six questions: Who are they buying for? Where are they when they decide? Why are they buying? When does the need arise? What else are they buying alongside it? How are they feeling in that moment?

Proof — Giesswein

Giesswein was making over €30M/year selling wool shoes. When we analyzed their reviews, we found that "Initial Quality Perception" was a top CEP — people bought and loved the shoes because they could feel the quality immediately. We made multiple changes to the product page that doubled down on showcasing material quality.

Two tests alone generated +€232,500/month and +€52,470/month respectively.

Revenue Leak Detection finds the money you're losing right now

Once you understand the audience, the next step is mapping the entire customer journey to find revenue leaks. This means in-depth funnel analysis, heatmap analysis on every key page, 40+ hours of session recording review, filter behavior analysis, payment method conversion analysis, and cross-sell pattern identification.

Most brands skip this because it's time-consuming. That's exactly why it works — your competitors aren't doing it either.

“We already have analytics set up — isn't that enough?”

Analytics tell you what's happening. Research tells you why. Knowing that 68% of visitors drop off on your PDP doesn't tell you what to change. Understanding that your customers' #1 driver is quality perception — and your PDP doesn't communicate quality — tells you exactly what to test.

“How long does the research phase take?”

We dedicate a full month to research before running the first test. By the end of that month, the first tests are already live. The research isn't a delay — it's what makes everything after it 3-5x more effective.

Rate of Testing

Method 2: Rapid A/B Testing

The Old Way

Most testing programs run sequentially — one test at a time, wait weeks for results, then move to the next. 1-2 tests per month. Maybe 12 tests a year.

The prevailing belief is that you can't run multiple tests on the same page without corrupting your results.

Result
Painfully slow learning. Winners sit idle while you wait for the next test to finish. Compounding gains are delayed by months. And worst of all — when you eventually ship multiple winners together, you're releasing untested combinations anyway. Sequential testing doesn't even protect you from the thing it claims to prevent.
The New Way

Run 6-10 tests simultaneously using parallel testing — the same methodology Microsoft, Google, and Meta use internally.

Randomization ensures each experiment remains independent. Microsoft found that meaningful test interactions occur in only ~0.002% of cases. That's 1 out of 50,000 tests.

Result
3-5x more experiments than typical agencies. Faster learning. Faster compounding. What takes most programs a year, we accomplish in a few months.
Sequential vs. Parallel
  • Time for 3 tests: 3 months vs. 1 month
  • Tests per year: ~12 vs. ~36+
  • Compounding: delayed vs. immediate
  • Interaction errors: high (untested combos ship anyway) vs. very low (monitored)

Parallel testing is not only possible — it's the scientifically correct approach

The biggest myth in A/B testing is that you can't run multiple tests on the same page. This myth keeps 99% of testing programs stuck.

Here's how it actually works: if you run 3 tests on the same page, each splitting traffic 50/50, every visitor is randomly assigned into one of 8 possible combinations (2×2×2). For any given test, its control and treatment groups are evenly balanced across the conditions of the other tests. Whatever influence those other tests have, it's equally distributed and cancels out — giving you a clean, unbiased uplift estimate.

This is just factorial design — the same methodology used in controlled experiments across scientific domains for decades.
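To see why randomization keeps parallel tests independent, here's a minimal simulation — our own sketch, not DRIP's tooling: it randomizes visitors into three simultaneous 50/50 tests and checks that test A's control and treatment arms see the same mix of B and C conditions.

```python
# Minimal sketch: three parallel 50/50 tests on the same page.
# Checks that exposure to tests B and C is balanced across test A's arms,
# which is what keeps A's uplift estimate unbiased.
import random

random.seed(42)
N = 100_000

# Each visitor is independently randomized into every test.
visitors = [
    {test: random.choice(("control", "treatment")) for test in ("A", "B", "C")}
    for _ in range(N)
]

for arm in ("control", "treatment"):
    group = [v for v in visitors if v["A"] == arm]
    share_b = sum(v["B"] == "treatment" for v in group) / len(group)
    share_c = sum(v["C"] == "treatment" for v in group) / len(group)
    print(f"A={arm:9s} n={len(group):6d}  B-treatment={share_b:.1%}  C-treatment={share_c:.1%}")

# Both arms of A see ~50% of B's treatment and ~50% of C's treatment,
# so whatever B and C do to conversion cancels out when A's arms are compared.
```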

Proof — Kickz

Kickz was doing €30M/year with a 0.59% conversion rate. After a rough Black Friday 2022, they went from 2 tests/month to 6-10 running at a time.

Within 4 months: +€510,000/month in additional revenue. Year 1: 32 tests, conversion rate from 0.59% → 1.9%. Year 2: 45 tests, 1.9% → 2.7%. Their improved profitability contributed to their acquisition by 11 Teamsports.

Execution quality determines whether your tests produce real insights or noise

A great test idea means nothing if the execution is messy. Every test we run gets: a full design brief (designers never guess), 3-5 design variations, mobile and desktop from the start, interactive clickable prototypes for client approval, and full QA across real devices using BrowserStack.

We have 10 people dedicated to quality assurance full-time. Tests don't just need to win — they need to ship clean.

Statistical rigor is your risk management system

Without a solid statistical framework, you're making decisions based on noise. We use a Frequentist approach with 80% confidence and power levels — balancing speed with reliability.

Every test has pre-planned duration and sample size. No peeking at results mid-test. Simple A/B splits (50/50). Minimum Detectable Effect planning for every experiment.
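As an illustration of what that pre-planning looks like, here's a generic two-proportion sample-size calculation at the 80% confidence and power levels mentioned above — a standard textbook formula, not DRIP's internal tooling; the baseline rate and MDE are invented examples.

```python
# Generic two-proportion sample-size planning at 80% confidence (alpha = 0.20,
# two-sided) and 80% power. Baseline rate and MDE below are invented examples.
from scipy.stats import norm

def sample_size_per_arm(p_base, mde_rel, alpha=0.20, power=0.80):
    """Visitors needed per arm to detect a relative lift of `mde_rel` over `p_base`."""
    p_var = p_base * (1 + mde_rel)          # variant conversion rate if the test wins
    p_bar = (p_base + p_var) / 2
    z_alpha = norm.ppf(1 - alpha / 2)       # critical value for the confidence level
    z_power = norm.ppf(power)               # critical value for the power level
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_power * (p_base * (1 - p_base) + p_var * (1 - p_var)) ** 0.5) ** 2
    return int(numerator / (p_var - p_base) ** 2) + 1

# e.g. a 2.0% baseline conversion rate with a 10% relative Minimum Detectable Effect:
print(sample_size_per_arm(0.02, 0.10))  # ~46,000 visitors per arm
```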

“Don't parallel tests interfere with each other?”

In theory, they can — in practice, it almost never matters. Microsoft's research across thousands of experiments found strong interactions in 0.002% of cases. We monitor for interactions and use guardrails to prevent conflicts (e.g., never testing the same element in two tests simultaneously).

“Do we need more traffic for parallel testing?”

No. Each test still needs its own sufficient sample size, but parallel testing doesn't multiply traffic requirements. You're running more tests in the same time window, not splitting traffic thinner.

Success Rate

Method 3: Iterative Prioritization

The Old Way

Most companies pick tests based on whoever argues loudest, whatever seems easiest, or whatever a competitor just launched.

Ideas get thrown into a massive backlog with no scoring system, no review cadence, and no mechanism to surface the highest-impact opportunities first.

Result
Money wasted on low-impact tests. Early wins missed. Design and development resources burned on experiments that never had a real chance. And no system to get smarter over time.
The New Way

Use a prioritization engine built on a database of 4,000+ documented experiments. Every test idea is evaluated against five factors: revenue exposure (where the test runs), scroll depth impact (how many visitors see the element), research indicators (how strongly the hypothesis is supported), implementation cost, and historical performance of similar tests across our database.

Result
You start with the tests that have the highest potential uplift, are most likely to succeed, and are least costly to build. And the system gets smarter with every test you run.
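To make the scoring idea tangible, here's a simplified sketch of a weighted-scoring model. This is entirely our illustration: the factor names follow the five factors above, but the weights and scores are invented, and the real engine also recalibrates from test outcomes.

```python
# Hypothetical weighted-scoring sketch of a test-prioritization engine.
# Factor names follow the five factors described above; weights and scores
# are invented for illustration.
from dataclasses import dataclass

WEIGHTS = {
    "revenue_exposure": 0.30,        # where the test runs
    "scroll_depth": 0.15,            # how many visitors see the element
    "research_support": 0.25,        # how strongly the hypothesis is backed
    "implementation_cost": 0.10,     # cheaper builds score higher
    "historical_performance": 0.20,  # how similar tests performed in the database
}

@dataclass
class TestIdea:
    name: str
    scores: dict  # each factor scored 0-100

    @property
    def weighted_score(self) -> float:
        return sum(WEIGHTS[f] * self.scores[f] for f in WEIGHTS)

backlog = [
    TestIdea("PDP trust badges", {"revenue_exposure": 95, "scroll_depth": 90,
             "research_support": 85, "implementation_cost": 90, "historical_performance": 80}),
    TestIdea("Footer redesign", {"revenue_exposure": 20, "scroll_depth": 10,
             "research_support": 30, "implementation_cost": 60, "historical_performance": 25}),
]

# Rank the backlog by expected impact, highest first.
for idea in sorted(backlog, key=lambda i: i.weighted_score, reverse=True):
    print(f"{idea.weighted_score:5.1f}  {idea.name}")
```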

The prioritization engine is self-learning

As tests succeed or fail for your specific brand, the engine updates its understanding of what works for your funnel, your audience, and your industry.

Typical agencies maintain a 30-40% win rate. After 6 months with a calibrated system, win rates typically reach 55-65%.

A roadmap creates visibility, accountability, and alignment

Every prioritized test goes into a live roadmap. Product and marketing teams can plan around findings. Progress and ROI are tied to business goals. Everyone sees what's being tested, why it matters, and what the expected impact is.

Proof — OceansApart

OceansApart was acquired by SNOCKS out of insolvency. They were not profitable. With the right prioritization system: 34 experiments in 6 months, 17 wins.

€323,923 in extra monthly revenue. Their product page alone saw +€158,345/month in improvements across 7 winning tests.

“How is this different from a simple ICE scoring model?”

ICE (Impact, Confidence, Ease) is a starting point, but it relies on subjective scoring. Our engine uses actual performance data from 4,000+ experiments weighted by industry, page type, and element type — plus your own accumulating test data. It's quantitative, not opinion-based.

“What if we already have a backlog of test ideas?”

Great — we'll run them through the engine. You'll likely find that the ideas you thought were highest priority aren't the ones with the highest expected return. That realization alone saves months of wasted effort.

Q (Research) × R (Testing) × S (Prioritization)

The System in Three Lines

So what it comes down to is this:

  1. Understand your customers deeply — using research and psychological profiling, not guesswork — so every test you run is aimed at a real lever (Predictive Consumer Research).
  2. Test at 3-5x the velocity of your competitors — using parallel testing and rigorous execution — so you compound learnings and revenue faster than anyone else (Rapid A/B Testing).
  3. Pick the right tests first — using a self-learning prioritization engine built on 4,000+ experiments — so your win rate climbs over time instead of staying flat (Iterative Prioritization).

That's how brands generate 10%+ more revenue in 6 months. Not from one lucky test — from a system that compounds.

Three Ways to Get This Done

DIY

Do It Yourself

Everything in this guide is real. You could build this system internally.

The upside: low cost, full control.

The downside: it takes 12-18 months to build the research depth, testing infrastructure, statistical frameworks, and prioritization systems that produce consistent results. And if your CRO lead leaves, you're starting over.

TEAM

Build an In-House Team

You'd need at minimum: a fullstack developer, a UI/UX designer, a QA engineer, a data analyst, and a CRO manager.

That's €20K-€33K/month in salaries alone — before benefits, tools, and the 12-18 months it takes for the team to become fully operational.

It works, but it's slow and expensive to build.

DRIP

Work With a Team That Already Has the System

This is what we do. You get the research infrastructure, the testing velocity, the prioritization engine, and the 4,000+ experiment database from day one.

No ramp-up period. No hiring risk. And we guarantee a 10% uplift in 6 months — or we keep working for free until we hit it.

The Results Across 250+ Brands

SNOCKS

From €3M to €80M+/year

Started with low AOV and no testing infrastructure. Over 5 years and 450+ experiments, we generated €8.2M in additional revenue.

SNOCKS reinvested into ads and influencers, becoming Germany's #1 sock & underwear brand. They didn't just keep working with us — they became investors.


Kickz

From 0.59% to 2.7% Conversion Rate

A basketball brand doing €30M/year but struggling to turn a profit.

In 3 years: 77 tests, 3.6x conversion rate improvement, contributed to acquisition by 11 Teamsports.


KoRo

€2.5M in 6 Months

No A/B testing program, rising acquisition costs.

We launched their first testing program and generated €2.5M in additional revenue within 6 months.


OceansApart

From Insolvency to Profitability

Acquired by SNOCKS out of insolvency.

34 experiments in 6 months, 17 wins, €323,923/month in additional revenue.


What Changes When This System Is Running

Here's what happens when you have a properly built testing program in place:

  • Your conversion rate climbs predictably — not from guesswork, but from a compounding system that gets smarter every month.
  • Your average order value increases because you're testing specifically for the psychological drivers that influence how much people buy — not just whether they buy.
  • You can outbid competitors on ads because your unit economics are fundamentally better. Same traffic, more revenue per visitor.
  • Your team stops debating opinions and starts making decisions backed by data. Test results settle internal arguments faster than any meeting.
  • Every insight feeds back into the system. A winning test on your PDP informs your ad creative, your email copy, your product positioning. The learning compounds beyond just the website.

How We Actually Execute This

Research that actually changes what you test

Before, you'd base test ideas on best practices or competitor copying.

With the DRIP Research Hub, every hypothesis is built on analyzed customer data — psychological drivers, category entry points, brand perception, emotional journey mapping. The research produces a 20+ page report that becomes the foundation for your entire testing roadmap.

[Screenshot: DRIP Research Hub (research.dripagency.dev)]

Testing at velocity without sacrificing quality

Before, you'd run 1-2 tests a month and wait.

With our parallel testing protocol, 6-10 experiments run simultaneously with full design briefs, 3-5 variations per test, mobile/desktop from day one, interactive prototypes, and QA across real devices. 100 hours/month of dedicated design and development.

[Screenshot: Active Testing Dashboard (testing.dripagency.dev): 8 active tests, 62% win rate, +€47K monthly uplift, 34 experiments]

Prioritization that gets smarter over time

Before, you'd pick tests based on who argued loudest.

With our prioritization engine, every idea is scored against revenue exposure, research support, implementation cost, and historical performance data from 4,000+ experiments. The system recalibrates as your results come in.

[Screenshot: Prioritization Engine (priority.dripagency.dev), test ideas ranked by expected impact with Impact/Confidence/Ease scores]

Bi-weekly strategy calls and unlimited support

You're not left wondering what's happening. Bi-weekly calls walk through results, upcoming tests, and strategic direction. Analysis, strategy, and management support are unlimited.

Want to Build This for Your Brand?

If you're doing €500K+/month and want a compounding system for conversion rate and AOV growth — backed by a team that's done this across 250+ brands — let's talk.

Book a Discovery Call

The Newsletter Read by Employees from Brands Like

Lego
Nike
Tesla
Lululemon
Peloton
Samsung
Bose
Ikea
Lacoste
Gymshark
L'Oréal
Allbirds
Join 12,000+ Ecom founders turning CRO insights into revenue

Common Questions

“How do you measure the 10% revenue-per-user uplift guarantee?”

DRIP Agency measures the 10% revenue per user (RPU) uplift guarantee by summing the relative uplift of all positive A/B tests over the engagement period. Each test is a controlled experiment using Frequentist statistical methodology at 80% confidence and power levels. Control and variant groups experience identical conditions through randomized traffic splitting, isolating the actual impact of each change from seasonal effects, marketing campaigns, or external factors. This is the same measurement approach validated across our 4,000+ documented experiments.
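For illustration, the summing described above works like this (invented numbers, not client data):

```python
# Illustrative only: the guarantee is measured by summing the relative RPU
# uplifts of the winning tests over the engagement. Numbers are invented.
winning_test_uplifts = [0.031, 0.024, 0.018, 0.015, 0.022]

total_uplift = sum(winning_test_uplifts)
print(f"Measured RPU uplift: {total_uplift:.1%}")  # 11.0%, above the 10% guarantee
```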

“How many A/B tests do you run, and how quickly?”

DRIP Agency runs 6–10 A/B tests simultaneously using our Rapid Testing Protocol — parallel testing rather than sequential. Over a 6-month engagement, most clients complete 30–50+ experiments. This is 3–5x the velocity of traditional programs that run 1–2 tests per month (roughly 12 per year). The parallel approach follows factorial experimental design, the same methodology used by Microsoft, Google, and Meta. For example, Kickz completed 77 experiments over 3 years and saw conversion rates improve from 0.59% to 2.7%.

“Do you work with brands outside Germany?”

Yes. DRIP Agency, headquartered in Traunstein, Bavaria (Germany), works with e-commerce brands worldwide. Founded in 2019 by Fabian Gmeindl and Samuel Hess, our team of 50+ specialists operates in English and German. Our client portfolio of 250+ brands spans Europe, North America, and beyond, across verticals including fashion, food & beverage, health & wellness, sports, and home goods.

“Which A/B testing tools do you use?”

DRIP Agency primarily uses ABlyft and Kameleoon for A/B testing implementation. We never use visual editors — they introduce page speed degradation, implementation inconsistencies, and unreliable results across devices. Our tests integrate at the code level for clean, performant execution. QA is performed by a dedicated 10-person team using BrowserStack across real devices. Each test receives a full design brief, 3–5 design variations, interactive clickable prototypes for approval, and mobile/desktop coverage from day one.

“Can you work alongside our in-house CRO team?”

DRIP Agency works alongside in-house CRO teams regularly. Our Research Hub, 4,000+ experiment database, and Weighted Impact Scoring prioritization engine augment what internal teams are already doing — we don't replace them. The research infrastructure provides customer psychology profiling (7 Psychological Drivers, Category Entry Points) that most in-house teams lack the tooling to produce, while the prioritization engine provides data-driven test selection based on cross-brand performance benchmarks.

“How quickly will we see results?”

Month 1 of DRIP Agency's engagement is the Research & Strategy Intensive, during which the first A/B tests are designed based on customer psychology profiling and funnel analysis. First tests go live by the end of Month 1. Most clients see their first winning tests within 2–3 months. OceansApart generated +€323,923/month within 6 months. KoRo achieved €2.5M in additional revenue in 6 months. Results compound over time as the prioritization engine calibrates to your specific audience.

“What happens when a test loses?”

Losing tests are as valuable as winning tests in DRIP Agency's system — they reveal what doesn't work for your specific audience and feed directly into the Weighted Impact Scoring prioritization engine, improving future test selection. A well-run testing program expects roughly 40–50% of tests to be inconclusive or negative. DRIP maintains a 52.6% overall win rate across 4,000+ experiments — significantly above the 20–30% industry average. What matters is the net impact across all tests, which is why we guarantee a minimum 10% RPU uplift in 6 months.

“Which industries do you work with?”

DRIP Agency works with e-commerce brands across all verticals — fashion (SNOCKS, OceansApart), food & beverage (KoRo, Livefresh), health & wellness (Blackroll), sports (Kickz), footwear (Giesswein), and more. Our methodology is audience-driven, not industry-driven. The 7 Psychological Drivers framework — Progress, Curiosity, Security, Status, Autonomy, Comfort, and Belonging — and Category Entry Point identification system adapt to whoever your customers are, producing relevant insights regardless of product category.

“Can we see a sample research report?”

Yes — request a sample during your discovery call with DRIP Agency and we'll share an anonymized research report. The report demonstrates the full depth of our Research Hub analysis: customer psychology profiling using the 7 Psychological Drivers, Category Entry Point identification, quantitative funnel analysis, heatmap and session recording insights, and the prioritized opportunity roadmap with Weighted Impact Scores. The research report typically runs 20+ pages and serves as the foundation for the entire testing roadmap.
