How to detect affiliate fraud: a 5-step framework
Detecting affiliate fraud well is less about catching individual bad actors and more about building a system that flags patterns reliably with low false positives. This is the 5-step framework that catches what platform tools miss, plus a practical guide to what each step looks like in execution.
The 30-second answer
Five steps. Baseline normal partner behavior over 30 days. Monitor four signal families in parallel (redemption rate, traffic source, code distribution, conversion velocity). Require multi-signal confirmation before action. Verify with the partner before rejecting. Update detection with what you learned. Single-signal alerts produce 30 to 50 percent false positives; multi-signal pattern detection runs 5 to 15 percent and is actually actionable.
Why platform fraud tools are not enough
Every major affiliate platform (Impact, Everflow, Tune, Refersion, Tapfiliate, PartnerStack) has built-in fraud detection. They catch the obvious patterns: known bot IPs, blacklisted device fingerprints, blatant click farms. They miss most of the subtle patterns where the dollar exposure actually sits.
Three structural reasons:
- Rule-based, not pattern-based. Platform rules fire on fixed thresholds ("flag if IP is in range X"). Fraud patterns shift over time. A partner whose redemption rate creeps up slowly never triggers a static rule.
- Single-signal, not multi-signal. Platform rules typically fire on one signal at a time. Real fraud detection requires combining signals; each one in isolation is too weak to act on.
- Generic, not program-specific. Platform rules are universal across all customers. Your specific program has its own baseline; a platform tool does not know it.
The five-step framework below addresses each of these structural gaps.
Step 1: Baseline normal partner behavior
Before you can detect anomalies, you need a definition of normal. For each partner active in the last 30 days, calculate: their typical conversion rate, their typical EPC, their typical traffic source breakdown (organic, paid, social, direct, coupon-aggregator), their typical conversion-time-of-day pattern, and their typical fraud-rule firing rate.
The baselining itself catches fraud. Partners whose baselines are wildly out of category (5x conversion rate, 10x EPC, 80 percent traffic from one referrer) often have something to hide. Investigate them before moving on.
What this looks like in practice: a weekly job that pulls the last 30 days of conversion data per partner, computes the five baseline metrics, stores them as the partner's "normal" profile, and flags any partner whose profile is outside the 95th percentile of category baseline.
Step 2: Monitor four signal families in parallel
Pattern fraud is rarely visible in one dimension. Four signal families together catch the patterns single-signal rules miss:
Redemption rate. For each partner-specific code, calculate percentage of clicks that result in code redemption. Flag partners whose rate is 2x category average over a 14-day window.
Traffic source. For each conversion, capture the referring URL. Flag partners where 30 percent or more of conversions come from organic search containing brand-coupon keywords, or 25 percent or more from coupon-aggregator domains the partner does not own.
Code distribution. Monthly spot-check of the top 5 coupon aggregator sites (RetailMeNot, Honey, Capital One Shopping, CouponCabin, Slickdeals). Flag partners whose codes appear on sites they did not authorize.
Conversion velocity. Plot each partner's conversions hour-by-hour. Flag sudden spikes during off-peak hours (2 to 6 am Eastern), weekends with no content driver, or holiday windows where partner activity does not match their content cadence.
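The four checks above can be sketched as a single per-partner function. The thresholds follow the rules stated in this section; the data structure, the 3 percent category-average redemption rate, and the hour-fraction cutoff for the velocity signal are illustrative assumptions.

```python
CATEGORY_AVG_REDEMPTION = 0.03  # assumed 3 percent category average

def signal_flags(partner):
    """Return the set of signal families that fire for one partner."""
    flags = set()
    # Signal 1: redemption rate at 2x+ the category average.
    if partner["redemptions"] / partner["clicks"] >= 2 * CATEGORY_AVG_REDEMPTION:
        flags.add("redemption_rate")
    # Signal 2: 25 percent or more of conversions referred by
    # coupon-aggregator domains the partner does not own.
    coupon = sum(1 for r in partner["referrers"]
                 if r in partner["unauthorized_aggregators"])
    if coupon / len(partner["referrers"]) >= 0.25:
        flags.add("traffic_source")
    # Signal 3: code spotted on an aggregator site the partner did not authorize.
    if partner["code_sightings"] & partner["unauthorized_aggregators"]:
        flags.add("code_distribution")
    # Signal 4: disproportionate conversions in off-peak hours (2 to 6 am).
    off_peak = sum(1 for h in partner["conversion_hours"] if 2 <= h < 6)
    if off_peak / len(partner["conversion_hours"]) >= 0.3:  # assumed cutoff
        flags.add("conversion_velocity")
    return flags

# Illustrative partner where all four families fire.
partner = {
    "clicks": 1000, "redemptions": 80,
    "referrers": ["honey.com"] * 40 + ["blog.example"] * 60,
    "unauthorized_aggregators": {"honey.com", "retailmenot.com"},
    "code_sightings": {"honey.com"},
    "conversion_hours": [3, 4, 5, 3, 14, 20, 4, 2, 9, 3],
}
```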
Step 3: Require multi-signal confirmation before action
The discipline that separates effective fraud detection from operator burnout: never act on a single signal. Require at least two confirming signals before taking action. Three is better.
Example: Partner A has an 8 percent redemption rate (Signal 1 fires). Investigate further before acting. If Signal 2 also fires (traffic source shows 40 percent from Honey), confidence rises. If Signal 4 fires as well (conversion spike during off-peak hours), this is a clear coupon-abuse pattern with three confirming signals. Take action.
Single-signal alerts produce 30 to 50 percent false positives, which is why most operators ignore platform fraud alerts. Multi-signal pattern detection runs 5 to 15 percent false positives, which is actionable. The multi-signal discipline is the single biggest improvement you can make over default platform tooling.
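The confirmation rule reduces to a small triage function. A minimal sketch; the routing labels are illustrative names, not platform states.

```python
def triage(flags, min_signals=2):
    """Route a partner based on how many of the four signal families fired."""
    if len(flags) >= min_signals:
        return "investigate_and_hold"  # multi-signal: worth operator time
    if len(flags) == 1:
        return "watchlist"             # single signal: monitor, do not act
    return "clear"

# Partner A from the example above: three confirming signals.
decision = triage({"redemption_rate", "traffic_source", "conversion_velocity"})
```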
Step 4: Verify with the partner before rejecting
Even with multi-signal confirmation, do not auto-reject. False positive rates do not go to zero, and auto-reject damages partner relationships you may need later.
The five-step response workflow when fraud is confirmed:
- Verify the pattern. Re-check all confirming signals. Cross-reference with the partner's history.
- Pause the suspect commissions. Most platforms support a "hold" state. Use it. The clock stops; nothing is rejected yet.
- Contact the partner with specific evidence. Send the numbers (redemption rate vs baseline, referrer breakdown, code distribution screenshots, velocity chart). Ask for explanation.
- Decide based on response. If the partner explains plausibly and commits to fixing, keep the partnership and update controls. If they deny despite clear evidence, terminate cleanly and document.
- Process the held commissions. Approve commissions if the explanation holds; reject if not.
The human stays in the loop. AI surfaces signals; operator verifies; action follows verification.
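The "hold first, decide later" discipline can be encoded as a small state machine so that auto-reject is structurally impossible. A sketch with generic state names; it does not model any specific platform's API.

```python
from enum import Enum

class CommissionState(Enum):
    PENDING = "pending"
    HELD = "held"
    APPROVED = "approved"
    REJECTED = "rejected"

# Legal transitions mirror the workflow: pause first, decide after the
# partner responds. There is deliberately no PENDING -> REJECTED path.
TRANSITIONS = {
    CommissionState.PENDING: {CommissionState.HELD, CommissionState.APPROVED},
    CommissionState.HELD: {CommissionState.APPROVED, CommissionState.REJECTED},
}

def move(state, target):
    """Apply a transition, refusing anything outside the workflow."""
    if target not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state.value} -> {target.value}")
    return target
```

With this shape, rejecting a commission forces the hold (and therefore the partner conversation) to happen first.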
Step 5: Update detection with what you learned
Every fraud case teaches you something specific about your program. New tactics, new partner profiles, new aggregator sites, new attack patterns. Capture the learning and feed it back into detection.
Three update patterns:
- Refine baselines. If you discovered that your category's typical redemption rate is higher than you assumed, recalibrate. False-positive rate drops.
- Add new signals. If a partner used a tactic you had not monitored (specific aggregator site, specific time-of-day pattern, specific device fingerprint), add that signal to your detection.
- Run the same analysis across other partners. Whatever pattern you caught is probably present in other partners too. Apply the detection retrospectively.
Detection that does not update degrades over time as fraud patterns shift. Detection that updates compounds.
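The retrospective scan in particular is cheap to run: re-apply the updated detector to every partner and keep the multi-signal hits. A minimal sketch; the detector, field names, and the `newaggregator.example` domain are illustrative stand-ins for whatever the fraud case taught you.

```python
def retrospective_scan(partners, detect, min_signals=2):
    """Re-run a detection function across all partners; keep multi-signal hits."""
    hits = {}
    for name, data in partners.items():
        flags = detect(data)
        if len(flags) >= min_signals:
            hits[name] = flags
    return hits

def detect(p):
    """Illustrative detector: one existing signal plus one newly learned one."""
    flags = set()
    if p["redemption_rate"] >= 0.06:                 # existing signal
        flags.add("redemption_rate")
    if "newaggregator.example" in p["code_sites"]:   # signal added after the case
        flags.add("new_aggregator")
    return flags

partners = {
    "a": {"redemption_rate": 0.08, "code_sites": {"newaggregator.example"}},
    "b": {"redemption_rate": 0.02, "code_sites": set()},
}
suspects = retrospective_scan(partners, detect)  # partner "a" resurfaces
```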
The four fraud pattern families, in detail
| Pattern | Description | Key signal |
|---|---|---|
| Coupon abuse | Partner-exclusive codes distributed publicly, search-arbitrage on brand-coupon keywords, partner self-purchase via own code. | Redemption rate 2x+ baseline, code on aggregator sites, traffic from brand-keyword organic search. |
| Attribution stuffing | Cookie-bombing site visitors with a partner affiliate cookie regardless of actual click. Customer who would have bought anyway attributes to the partner. | Conversion rate 5x+ baseline, low EPC despite high conversions, traffic from non-content sources. |
| Click farms | Bot or low-quality human traffic generating clicks for CPC-based programs without genuine intent. | Click volume spikes, conversion rate below 0.2 percent, IP-range clustering, device-fingerprint repetition. |
| Partner collusion | Two or more partners gaming overlap attribution. Same IP range across partners, identical device fingerprints, sequential cookie sets on the same converters. | Cross-partner IP overlap, identical user agents, conversion patterns synced across partners. |
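The table reads naturally as a lookup from fired signals to the most likely family. A heuristic sketch for illustration only; the signal names are shorthand for the key signals in the table, and real classification should stay with the operator.

```python
def likely_pattern(flags):
    """Heuristic mapping of fired signals to the most likely pattern family."""
    if {"redemption_rate", "code_on_aggregator"} <= flags:
        return "coupon_abuse"
    if {"high_conv_rate", "low_epc"} <= flags:
        return "attribution_stuffing"
    if {"click_spike", "near_zero_conv"} <= flags:
        return "click_farm"
    if "cross_partner_ip_overlap" in flags:
        return "partner_collusion"
    return "unclassified"
```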
What "good detection" looks like in numbers
For a 1M dollar annual affiliate program, healthy fraud detection metrics:
- Detection rate: 75 to 90 percent of actual fraud caught within 30 days of occurrence.
- False positive rate: 5 to 15 percent of flagged partners turn out to be legitimate on review.
- Time to detection: median 7 to 14 days from fraud start to flag.
- Recovered dollars: 8 to 15 percent of program spend reclaimed in the first quarter of running detection.
Programs that have not implemented detection typically lose 5 to 15 percent of program spend to fraud they have not measured. The reclaim opportunity is real and compounds with program size.
The automation question
The framework above can be executed manually with significant analyst time (10 to 15 hours per week of dedicated fraud analysis for a mid-market program). Most operators do not have that capacity, so detection runs partially or not at all.
AI manager layers automate the baseline computation, multi-signal monitoring, and flag surfacing. The operator's time goes to verification and partner outreach, where judgment matters. Total fraud-detection time drops from 10 to 15 hours per week to 1 to 2 hours.
The framework does not change. The execution speed does.
Multi-signal fraud detection in Slack. Free during beta.
Try Ezra free