Marketing Attribution: The Complete Guide for Agencies in 2026
If your agency still reports last-click attribution to clients in 2026, you're losing accounts you don't even know about. Apple's iOS 14.5 ATT prompt — released in April 2021 — broke the deterministic ad-tracking model that powered every "ROAS dashboard" your team built between 2014 and 2020. Five years later, the cumulative damage is staggering: Meta's reported conversions are now ~30-40% lower than actual conversions on average, GA4 fills the gaps with modeled data nobody at the agency understands, and the conversion paths between TikTok, Instagram DMs, retargeting, branded search, and a 47-day buyer journey look like a Jackson Pollock to anyone trying to allocate a $200K monthly ad budget.
Cookie deprecation in Chrome — delayed multiple times but increasingly enforced through privacy sandboxes and third-party-cookie quarantine — finished what iOS 14.5 started. Walled gardens (Meta, Google, TikTok, LinkedIn) each report their own self-attributed conversions, double-count overlap, and refuse to share user-level data. The result: in 2026, marketing attribution is a multi-source modeled discipline, and the agencies that still report Google Analytics' last-non-direct-click as the source of truth are flying blind.
This guide covers the seven attribution models you actually need to know (with the math, not just buzzwords), the eight major attribution platforms that matter — Hyros, TripleWhale, Northbeam, Wicked Reports, RedTrack, AnyTrack, Branch, Rockerbox — and a decision framework for which combination fits your client base. We'll cover why GA4's modeled attribution is misleading at small data volumes, why server-side conversion APIs (Meta CAPI, Google Enhanced Conversions, TikTok Events API) are now mandatory, and how to attribute the channels that no platform tracks well — including Instagram DM conversions, podcast ads, and word-of-mouth referrals. By the end, you'll have a 5-step implementation plan and the FAQ answers you need to defend your model when a client asks why your numbers don't match Meta's.
What is marketing attribution?
Marketing attribution is the process of crediting specific marketing touchpoints — ad clicks, organic visits, email opens, DM replies, referral mentions — with the conversions and revenue they helped produce. Without attribution, you cannot answer the questions that determine whether a campaign continues or gets killed: Which ads work? Which channels deserve more budget? What's the actual ROAS of the YouTube spend versus the Instagram spend versus the email list?
A typical 2026 customer journey for a coaching offer at $2,000 looks like this: prospect sees an Instagram Reel ad on Tuesday, doesn't click. Sees a retargeting ad on Friday, clicks to a landing page, leaves. Searches the brand name on Google the next week, clicks the organic result, joins a free email opt-in. Reads three emails over two weeks. Sees a follow-up ad. Clicks. Books a sales call from a Calendly link in a DM thread that started after they replied to a Story. Buys after the call. That's six trackable touchpoints across four channels over 19 days, and at least two more touchpoints (the Story view, the original ad impression) that were never recorded as events anywhere.
Attribution is the framework that decides which of those touchpoints "deserves" credit for the $2,000 — and how much. Optimization is impossible without it: if you incorrectly assign 100% credit to the last click (in this case, the booked-call DM), you'll under-invest in the Reels ads that actually started the journey, the email nurture that built trust, and the brand-search visibility that closed the loop. Attribution is the difference between scaling what works and scaling what you happened to track.
It's hard in 2026 for four reasons: privacy regulation (GDPR, iOS ATT, state-level US privacy laws) limits cross-site identifiers; walled gardens hoard their own conversion data; multi-device journeys (mobile-to-desktop, in-app-to-browser) break cookie-based stitching; and modern buying cycles — especially in info-products, B2B, and high-ticket coaching — span weeks or months across channels nobody can fully instrument.
The 7 attribution models
There is no universally "correct" attribution model. Each one is a hypothesis about how marketing influences buyers — and each one is wrong in different ways. The seven models below cover the spectrum from naively simple (first-click) to scientifically rigorous (incremental lift). Pick the one whose blind spots you can live with.
1. First-touch attribution (first-click)
First-touch credits 100% of the conversion value to whichever marketing touchpoint a customer encountered first. If a buyer's journey was Reels Ad → Email → Branded Search → Conversion, the Reels Ad gets 100% of the $2,000.
Formula: credit_to_first_touch = 1.0, every other touchpoint gets 0.
When to use it: When you're explicitly trying to optimize top-of-funnel demand generation. Awareness-focused agencies, brand-launch campaigns, and anyone running cold-traffic experiments use first-touch to identify which creative or audience actually starts the buyer journey. It rewards the channels doing the hardest work — introducing your brand to strangers.
When NOT to use it: Anywhere closing matters. First-touch will systematically under-credit retargeting, email nurture, branded search, and any channel that operates in the middle and bottom of the funnel. It will also over-credit the first impression even when that channel did nothing else (a customer who saw an ad once, ignored it for 90 days, and then converted from a referral would still credit the ad).
Example math: A $200K monthly spend distributed across Meta cold ($80K), retargeting ($30K), Google Search ($50K), email ($10K), and YouTube ($30K). Last-click attribution might show Google Search driving 60% of revenue. First-click attribution often shows Meta cold and YouTube driving 50%+ — because that's where buyers actually first encountered the brand. Both are true; neither is the whole picture.
2. Last-touch attribution (last-click)
Last-touch is the inverse of first-touch: 100% of the conversion value goes to the final touchpoint before purchase. This is still the default in Google Ads, the default in most ad platforms' conversion tracking, and the lazy default in most agency reporting decks.
Formula: credit_to_last_touch = 1.0, every other touchpoint gets 0.
When to use it: Short, single-session sales (impulse e-commerce, sub-$50 products, urgency-driven offers). When the buyer's full journey fits in a single click-to-checkout window, last-click is approximately correct because there are no earlier touchpoints worth crediting.
When NOT to use it: Anywhere a buyer needs more than one session to convert. Coaching ($2K+), B2B SaaS, agency services, ed-tech, anything with consideration cycles. Last-click will systematically over-credit branded search and direct traffic — both of which are usually the result of upper-funnel work, not its cause. If a buyer sees ten ads then types your brand name into Google, last-click gives 100% credit to "Google / organic" and zero to the ten ads that built brand recognition.
Why it persists: It's simple, deterministic, and ad platforms default to it. It's also the most-criticized model in marketing analytics literature for a reason — and the model most likely to get an agency's media-buying decisions wrong.
3. Linear attribution
Linear distributes credit equally across every recorded touchpoint. If a buyer touched five marketing channels before converting on a $2,000 offer, each channel gets $400.
Formula: credit_per_touchpoint = conversion_value / total_touchpoints.
When to use it: As a sanity-check baseline against last-click. Linear is what you show a client to demonstrate that the "last-click winner" was actually one of five or six contributing channels. It's also a reasonable default when you have no opinion about which touchpoints matter more — it doesn't pretend to know.
When NOT to use it: When you do have opinions. Linear ignores intent (a 3-second ad impression counts the same as a 20-minute landing-page session) and recency (the touchpoint 60 days ago counts the same as the click that converted). For any business with a meaningful consideration cycle, linear under-credits the touchpoints that did the heavy lifting and over-credits touchpoints that may have been incidental.
Example math: Reels Ad → Retargeting Ad → Email Click → Branded Search → Conversion. Linear gives each touchpoint 25% of the $2,000 = $500 each. Compare to time-decay (next), which would give the branded search closer to 40% and the Reels ad 5%.
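The three rules-based models covered so far can be expressed in a few lines. A minimal sketch (channel names mirror the example journey; a real implementation would key credit by ad ID, not channel name, and handle repeat touches):

```python
def first_touch(touchpoints, value):
    """100% of the conversion value to the first recorded touchpoint."""
    return {touchpoints[0]: value}

def last_touch(touchpoints, value):
    """100% of the conversion value to the final touchpoint."""
    return {touchpoints[-1]: value}

def linear(touchpoints, value):
    """Equal credit to every recorded touchpoint (assumes unique names)."""
    share = value / len(touchpoints)
    return {tp: share for tp in touchpoints}

journey = ["Reels Ad", "Retargeting Ad", "Email Click", "Branded Search"]
credits = linear(journey, 2000)  # $500 per touchpoint
```

Running all three on the same journey is the fastest way to show a client how much the "winner" depends on the model, not the data.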
4. Time-decay attribution
Time-decay assigns more credit to touchpoints closer in time to the conversion, less to earlier ones, using an exponential decay function (typically a half-life of 7 days, configurable).
Formula: credit_i = 2^(-Δt_i / half_life), then normalize so all credits sum to 1.0.
When to use it: Lead-gen and consideration funnels where the closing touchpoint matters more than the awareness touchpoint, but the awareness still deserves some credit. A B2B agency with 30-90 day sales cycles often uses time-decay because the salesperson's email seven days before the contract signature deserves more credit than the LinkedIn ad that started the journey 60 days earlier.
When NOT to use it: When buyers have very long consideration cycles where the first impression is genuinely the most important touchpoint (e.g., a buyer who sees a YouTube ad, watches it fully, and converts 6 months later because of that single ad — time-decay will misattribute almost all credit to the conversion-day touchpoint, which may have been a trivial branded search).
Example math: With a 7-day half-life, a touchpoint 7 days before conversion gets weight 0.5. Fourteen days = 0.25. One day = 0.91. So a journey that was Reels Ad (day -30), Email (day -10), Search (day -1) gets weights 2^(-30/7) ≈ 0.05, 2^(-10/7) ≈ 0.37, 2^(-1/7) ≈ 0.91. Normalized: 4%, 28%, 68%. The branded search at day -1 gets the lion's share, but the Reels ad still gets non-zero credit for starting the journey.
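The worked example above can be reproduced directly; a short sketch of the decay-and-normalize step (the 7-day half-life is the configurable default mentioned above):

```python
def time_decay_weights(days_before_conversion, half_life=7.0):
    """Exponential decay: weight = 2^(-delta_t / half_life), then
    normalized so all weights sum to 1.0."""
    raw = [2 ** (-d / half_life) for d in days_before_conversion]
    total = sum(raw)
    return [w / total for w in raw]

# Reels Ad (day -30), Email (day -10), Branded Search (day -1)
weights = time_decay_weights([30, 10, 1])
# ≈ [0.04, 0.28, 0.68] after rounding, matching the example above
```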
5. Position-based attribution (U-shaped, 40/20/40)
Position-based — also called U-shaped — gives 40% of the credit to the first touchpoint, 40% to the last, and distributes the remaining 20% equally across every touchpoint in between. The intuition: the first touchpoint started the relationship and the last touchpoint closed it, both deserve outsized credit, and the middle touchpoints kept the relationship alive.
Formula: first_touch = 0.4, last_touch = 0.4, each_middle_touch = 0.2 / number_of_middle_touchpoints.
When to use it: Mid-funnel-heavy businesses where both demand generation and closing matter, but the middle touchpoints are nurture rather than primary drivers. Lead-gen agencies, coaching offers with email nurture sequences, and most B2B journeys fit this shape well. It avoids the extremes of first-click (under-credit closing) and last-click (under-credit awareness) without the false equality of linear.
When NOT to use it: Single-touch journeys (under 3 touchpoints) where U-shaped reduces to 50/50 between first and last and ignores any middle. Also, if your business genuinely has a "hero" middle touchpoint — say, a webinar that 80% of converters attended — U-shaped will bury its impact in the 20% middle bucket.
Example math: Reels Ad → Retargeting Ad → Email → Branded Search → Conversion. First (Reels) = 40%. Last (Search) = 40%. Middle (Retargeting + Email) split 20% = 10% each. On a $2,000 conversion: Reels $800, Search $800, Retargeting $200, Email $200.
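The 40/20/40 rule, including the degenerate short-journey cases noted above, can be sketched as:

```python
def u_shaped(touchpoints, value):
    """Position-based (U-shaped) credit: 40% first, 40% last, 20% split
    evenly across the middle. Journeys under 3 touchpoints degenerate
    to 50/50 (or 100% for a single touch). Assumes unique names."""
    n = len(touchpoints)
    if n == 1:
        return {touchpoints[0]: value}
    if n == 2:
        return {touchpoints[0]: value / 2, touchpoints[-1]: value / 2}
    credits = {touchpoints[0]: 0.4 * value, touchpoints[-1]: 0.4 * value}
    middle_share = 0.2 * value / (n - 2)
    for tp in touchpoints[1:-1]:
        credits[tp] = middle_share
    return credits

result = u_shaped(["Reels Ad", "Retargeting Ad", "Email", "Branded Search"], 2000)
# Reels Ad and Branded Search get $800 each; Retargeting Ad and Email get $200 each
```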
6. Algorithmic / data-driven attribution (DDA, Markov chain)
Data-driven attribution uses a machine-learning model to assign credit based on the observed marginal contribution of each touchpoint across thousands of conversion paths. Google Ads' DDA, Google Analytics 4's DDA, and Markov-chain-based attribution (used by tools like RedTrack and many in-house data teams) all fall into this category.
How Markov chains work: The model treats each touchpoint as a state in a Markov chain. By computing the "removal effect" of each state — i.e., how much the conversion rate drops if you remove that touchpoint from the graph — you derive a credit weight for each channel. A channel whose removal causes a 30% drop in conversions gets 30% of the credit.
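A full Markov implementation solves absorption probabilities over the transition graph; the sketch below is the common simplified path-level approximation of the removal effect, run on hypothetical data (channel names and paths are illustrative, not from any real account):

```python
def removal_effects(paths):
    """paths: list of (touchpoint_list, converted_bool) tuples.
    Simplified removal effect: a converting path is treated as broken
    if it contains the removed channel; the effect is the share of
    total conversions lost."""
    total = sum(1 for _, converted in paths if converted)
    channels = {ch for touchpoints, _ in paths for ch in touchpoints}
    effects = {}
    for ch in channels:
        surviving = sum(
            1 for touchpoints, converted in paths
            if converted and ch not in touchpoints
        )
        effects[ch] = (total - surviving) / total
    return effects

paths = [
    (["meta_cold", "retargeting", "email"], True),
    (["retargeting", "email"], True),
    (["meta_cold"], False),
    (["email"], True),
    (["meta_cold", "retargeting"], False),
]
effects = removal_effects(paths)
# email touched every converting path, so its removal effect is 1.0
```

Credit weights are the removal effects normalized to sum to 1. Note how a channel present on every converting path dominates completely, which is one reason this method needs thousands of paths before its output is stable.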
When to use it: When you have enough data — at minimum several hundred conversions per month per conversion action, ideally thousands — and a tool that genuinely runs a model rather than rebranding rules-based attribution as "data-driven." Mid-to-large e-commerce, mature SaaS, and high-volume lead-gen all benefit.
When NOT to use it: Low-volume businesses (under ~500 conversions/month). Without enough data, the ML model overfits and produces credit allocations that swing wildly month-to-month for no real reason. Google's DDA explicitly requires 300+ conversions in 30 days per conversion action before it activates — and even at the threshold, the model is shaky. Smaller businesses are better served by rules-based models (time-decay, position-based) where the assumptions are at least transparent.
Real example: A DTC e-commerce brand running Markov-chain attribution finds that removing email from the conversion graph drops conversions by 22%. Removing Meta retargeting drops them by 31%. Removing Meta cold traffic drops them by 12%. Allocate budget accordingly — the data-driven model is telling you retargeting is the highest-leverage channel, even though last-click would have credited Google branded search.
7. Incremental / lift-based attribution (geo-experiments, holdout tests)
Incremental attribution is the only attribution model based on causal inference rather than correlation. Instead of trying to assign credit to touchpoints based on observed paths, you run controlled experiments — geo-holdouts, conversion-lift studies, ghost bid tests — to measure the actual causal lift a channel produces compared to a counterfactual where it didn't run.
How it works: Take 20 designated market areas (DMAs) similar in baseline conversion volume. In 10 of them, run your campaign. In the other 10, suppress it (the holdout). After 4-8 weeks, compare conversion volume between the two groups. The difference — adjusted for baseline drift and seasonality — is the incremental lift attributable to the campaign.
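The comparison step itself is arithmetic; a sketch of the lift calculation on hypothetical DMA-level counts (a real analysis adds baseline matching, seasonality adjustment, and a significance test before acting on the number):

```python
def incremental_lift(test_dma_conversions, control_dma_conversions):
    """Compare total conversions in campaign DMAs vs. holdout DMAs.
    Returns (absolute_lift, relative_lift). No statistical test here;
    a real geo-experiment would require one."""
    test_total = sum(test_dma_conversions)
    control_total = sum(control_dma_conversions)
    absolute = test_total - control_total
    return absolute, absolute / control_total

campaign_dmas = [120, 95, 140, 110, 105]  # conversions where the campaign ran
holdout_dmas = [100, 90, 115, 95, 100]    # conversions in the suppressed DMAs
lift, lift_pct = incremental_lift(campaign_dmas, holdout_dmas)
# 70 incremental conversions, +14% relative lift
```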
When to use it: Quarterly or for any meaningful spend ($50K+/month per channel). Lift-based attribution is the gold standard because it's the only method that actually answers the question every CFO eventually asks: "If we hadn't spent that money, what would have happened anyway?" Last-click, first-click, time-decay, and even DDA all measure correlation. Lift measures causation.
When NOT to use it: Daily or weekly optimization decisions — the experiments take weeks and require statistical power. Also, channels with national-only targeting (you can't geo-holdout a Super Bowl ad) or very low spend (under $5K/channel/month, the lift signal is noise).
Real example: Meta Conversion Lift studies (free if you run >$10K/week through Meta) typically reveal that platform-reported conversions overstate true incremental conversions by 30-60%. A campaign reporting 1,000 conversions in Meta Ads Manager may have driven only 600 incremental conversions — the other 400 would have happened anyway via direct, organic, or other channels. This is why agencies running serious media budgets validate Meta-reported conversions against periodic lift studies.
Comparison table
| Model | Data Needed | Computational Complexity | Accuracy | Best For Business Size |
|---|---|---|---|---|
| First-touch | UTM tracking | Trivial | Low (one-sided) | Any size, demand-gen focus |
| Last-touch | UTM tracking | Trivial | Low (one-sided) | Single-session impulse buys |
| Linear | UTM tracking | Trivial | Medium (no recency weight) | Any size as a baseline |
| Time-decay | Path-level tracking | Low | Medium-high | Lead-gen, B2B with cycles |
| Position-based | Path-level tracking | Low | Medium-high | Mid-funnel-heavy businesses |
| Data-driven (Markov) | 500+ conv/mo, full path | High (ML model) | High if data sufficient | Mid-to-large, high-volume |
| Incremental lift | Test budget, geo split | Highest (experiment design) | Highest (causal) | $50K+/mo per channel |
Why GA4 attribution is broken for most agencies
Google Analytics 4 became the only Google-supported analytics platform on July 1, 2023, when Universal Analytics stopped processing data. For agencies that built reporting workflows on UA's straightforward last-non-direct-click attribution, GA4's modeled, machine-learning-driven attribution was a step backward in transparency and — for most clients — in accuracy.
The cookieless modeling problem. GA4 fills gaps caused by missing cookies (iOS Safari, Firefox ETP, Chrome incognito) with modeled conversions — synthetic conversion events generated by an ML model that estimates what would have been observed if tracking had worked. Google does not disclose the training data, the model architecture, or the confidence intervals. Agencies running GA4 reports often see modeled conversions making up 15-40% of total reported conversions, and have no way to validate them. When client revenue numbers don't match Stripe or Shopify, the modeled bucket is usually where the discrepancy lives.
The attribution-window caps. GA4 caps lookback windows at 30 days for acquisition reports and 90 days for conversions, but in practice the cookies and identifiers driving those reports often expire much earlier (Safari ITP caps script-set cookies at 7 days, and as little as 24 hours for link-decorated cross-site navigation). For coaching offers, B2B services, or any business with consideration cycles longer than a month, GA4's reported attribution is structurally incomplete. Conversions from 60-day-old first-touch ads simply do not appear in your acquisition reports.
Cross-device gaps. GA4 stitches cross-device journeys only when users are signed into a Google account and you have Google Signals enabled (which has its own privacy implications). Without that, a buyer who sees an ad on mobile, switches to desktop to research, and converts on desktop appears as two separate users — and the mobile-side ad gets no credit. For B2B and high-ticket consumer (which are often researched on mobile, purchased on desktop), this systematically undercounts mobile-driven attribution.
Sampling on free-tier accounts. GA4 free-tier accounts apply sampling to reports above 10 million events per query. Agencies running aggregate cross-client reports often hit this threshold and don't realize the numbers in front of them are extrapolated from a sample.
Why first-party data wins: A first-party dataset — your CRM, your e-commerce backend, your billing system — has none of these problems. The conversion definitely happened, you know exactly when, and you know exactly which UTM the visitor first arrived with. The challenge is connecting first-party conversion data back to ad-platform spend and impressions, which is exactly what dedicated attribution platforms (Hyros, TripleWhale, Northbeam, Wicked Reports) are built to do.
Server-side tracking and the conversion API era
The single biggest infrastructure shift in attribution since 2021 is the move from browser-side pixel tracking to server-side conversion APIs. If you're not running server-side tracking on every client by 2026, you're not just leaving 20-30% of conversion signal on the table — you're feeding incomplete data to your ad platforms' bidding algorithms, which means they can't optimize delivery toward actual converters.
Meta Conversion API (CAPI) is Meta's server-side endpoint that lets you send conversion events directly from your backend to Meta's systems, bypassing the browser entirely. Browser pixel events are subject to ad-blockers, ATT opt-outs, ITP cookie restrictions, and JavaScript failures; CAPI events are immune to all of those because they're triggered server-side after the actual conversion (a Stripe webhook, a Shopify order webhook, a CRM lead-created event). Meta deduplicates browser-pixel events with CAPI events using event_id, so you don't double-count. Properly implemented, CAPI typically restores 25-50% of conversion volume that was previously lost to browser-tracking limitations.
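Deduplication hinges on the browser pixel and the server sending the same event ID. A sketch of building a server-side Purchase payload (field names follow Meta's CAPI event schema; reusing the order ID as the event_id is a common convention, and the actual send, an HTTP POST to the pixel's /events endpoint, is omitted):

```python
import hashlib
import time

def normalize_and_hash(email: str) -> str:
    """Meta (and Google Enhanced Conversions) expect PII lowercased,
    trimmed, then SHA-256-hashed before it leaves your server."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

def build_capi_event(email: str, order_id: str, value: float, currency: str = "USD"):
    """Server-side Purchase event. Sending the same event_id from the
    browser pixel lets Meta deduplicate the two copies of the event."""
    return {
        "event_name": "Purchase",
        "event_time": int(time.time()),
        "event_id": order_id,          # must match the browser pixel's eventID
        "action_source": "website",
        "user_data": {"em": [normalize_and_hash(email)]},
        "custom_data": {"value": value, "currency": currency},
    }

event = build_capi_event("  Jane@Example.COM ", "order_10423", 1997.0)
```

The normalization step matters: an unhashed or un-lowercased email silently fails to match, and your event arrives but attributes to no one.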
Google Enhanced Conversions is Google's analog to CAPI. Instead of relying solely on browser-side cookies, you send hashed PII (email, phone) along with conversion events. Google matches the hashed data against signed-in Google users and restores cross-device, cross-browser attribution that cookies alone can't see. Google reports clients implementing Enhanced Conversions see 3-5% lift in observed conversions and meaningfully better Smart Bidding performance.
TikTok Events API mirrors the same architecture for TikTok's pixel. Critical for any agency running TikTok Ads at scale — without it, TikTok's reported conversions are even less reliable than Meta's, because TikTok's audience skews young, mobile, and aggressively privacy-protected.
Tooling options. Most agencies don't build CAPI integrations from scratch — they use one of these:
- Stape (~$30-300/month per data source). Fully managed server-side Google Tag Manager hosting. Handles CAPI, Enhanced Conversions, TikTok Events, plus dozens of other integrations. Best fit for agencies running 10+ client sites who want a turnkey solution.
- Google Tag Manager Server (Google Cloud cost, ~$120/month minimum for a small site). The DIY option — you set up server-side GTM yourself on Google Cloud Platform. More flexible, more work, no per-event fees.
- Hyros' built-in tracker ships with CAPI, Enhanced Conversions, and similar APIs included in the $497-$2K+/month price.
- RedTrack's tracker does the same, plus its own server-side click-tracking on top.
If you implement nothing else from this guide in the next 30 days, implement CAPI. It is the foundational layer underneath every modern attribution model.
Multi-touch attribution platforms
The eight platforms below cover the modern attribution-tooling landscape from $50/month bootstrap tools to $5K/month enterprise platforms. We've used or evaluated all eight; pricing is current as of early 2026 and reflects publicly listed tiers, not custom enterprise quotes. Before you pick a tool, read the channel-specific section after this one — there are conversion paths none of these platforms track well.
1. Hyros — best for info-product / coaching / high-ticket
Pricing: $497/month entry tier (small accounts), scaling to $2,000+/month for high-volume tracking. Custom annual contracts above $50K ARR are common.
Best for: Coaches, info-product sellers, course creators, agencies serving those niches, and anyone running long-cycle high-ticket sales where the buyer journey involves multiple ad clicks, email, calls, and DMs over 30-90 days.
Methodology: Hyros pioneered first-party server-side attribution for the info-product space. They drop their own first-party tracker on your site (rather than relying on third-party cookies), capture every click and pageview at the user level, and stitch identities across devices using email match. Conversion data is tied back to the originating UTM and ad ID via Hyros' own attribution model — typically a hybrid of first-touch and time-decay.
Integrations: Native integrations with Stripe, ClickFunnels, Kartra, GoHighLevel, Calendly, Kajabi, and most call-booking and payment tools used by info-product brands. Direct ad-platform integrations push data back to Meta, Google, TikTok, YouTube via CAPI/Events API.
Real weakness: Hyros is opinionated about your tech stack — if you don't run a funnel-builder ecosystem (CF/Kartra/GHL/Kajabi), integrations get clunky. The interface is dense and not designed for non-technical users; agencies typically need a dedicated implementation specialist for the first 30 days. Pricing is steep for businesses under $1M revenue.
2. TripleWhale — best for e-commerce DTC
Pricing: $129/month "Pixel" tier (small Shopify stores under $1M GMV), $399/month "Brands" tier (mid-market), $799/month "Plus" tier with Northbeam-style mid-market features. Custom enterprise tiers above.
Best for: Shopify-native e-commerce brands doing $500K-$20M annual revenue, especially DTC consumer products that run heavy Meta + TikTok + Google ad budgets.
Methodology: TripleWhale's "Total Impact" attribution model is a hybrid of last-click, first-click, and a proprietary blended view that weights based on customer journey signals. They also offer a "Triple Pixel" — their own first-party server-side pixel — that supplements Meta/Google's pixels and captures conversions even when ad-platform pixels miss them.
Integrations: Deep Shopify integration is the killer feature; TripleWhale knows your COGS, inventory, customer LTV, and order data, which makes profit-attribution (not just revenue-attribution) meaningfully better than competitors. Direct integrations with Meta, Google, TikTok, Klaviyo, Postscript, Recharge.
Real weakness: Shopify-only means non-Shopify e-commerce (BigCommerce, custom carts, WooCommerce at scale) is a forced fit. The "Total Impact" model is opaque — you can't audit how it weights touchpoints, and the credit allocation moves around with software updates. Pricing creeps fast as ad-spend grows.
3. Northbeam — best for venture-funded / data-mature e-commerce
Pricing: Starts around $1,000/month for sub-$5M brands, with mid-market tiers in the $2K-$3K/month range. Enterprise tiers above $5K/month are common for $20M+ brands.
Best for: E-commerce brands with internal analytics teams or technical CMOs, particularly venture-backed DTC brands where the marketing team has appetite for sophisticated MTA + MMM blending.
Methodology: Northbeam runs a proprietary multi-touch attribution model combined with media-mix-modeling (MMM) for top-down validation. Unlike TripleWhale's Total Impact, Northbeam's methodology is more transparent — they publish white papers explaining the algorithm — and they offer custom attribution modeling on enterprise tiers.
Integrations: Shopify, BigCommerce, custom carts via API, all major ad platforms, Klaviyo, Iterable, custom data warehouses (Snowflake, BigQuery). Best-in-class warehouse integrations let you stream attribution data into your own BI stack.
Real weakness: Pricing puts it out of reach for sub-$5M brands. The interface assumes analytical sophistication — agencies and clients without a dedicated analyst find Northbeam overwhelming. Implementation takes 4-8 weeks.
4. Wicked Reports — best for older e-commerce / lead-gen / email-heavy
Pricing: $497/month entry tier, scaling to $1,500/month+ for high-volume tracking. Annual contracts get meaningful discounts.
Best for: Email-heavy e-commerce brands (especially $1M-$10M brands with mature email programs), info-product businesses, lead-gen agencies, and any business where customer journey extends well beyond a single session.
Methodology: Wicked Reports specializes in long-cycle attribution — they'll tie a $5K conversion today back to an ad click 180 days ago, which most platforms can't do. They're particularly strong on email/SMS attribution (deep Klaviyo, ActiveCampaign, Drip, Postscript integrations). Their "Wicked Score" is a proprietary attribution model that weights first-click, last-click, and lifetime customer value into a single score.
Integrations: Strong on email/SMS (Klaviyo, ActiveCampaign, Drip, Mailchimp, Postscript, Attentive), good on e-commerce platforms (Shopify, WooCommerce, BigCommerce), and standard ad-platform integrations (Meta, Google).
Real weakness: UI is dated and the dashboards feel like a 2015 SaaS product. Setup is more manual than newer platforms. Best for clients who want long-cycle attribution and don't care about a polished experience.
5. RedTrack — best for affiliate / agency tracking
Pricing: $124/month "Solo" tier (250K events), $224/month "Team" (1M events), $524/month "Agency" tier (5M events, multi-workspace). Volume-priced custom tiers above.
Best for: Performance-marketing agencies, affiliate marketers, media buyers running multiple client accounts, and agencies that need workspace isolation between clients. Also strong for in-house teams running aggressive campaigns across non-mainstream traffic sources (native ads, push, popunder).
Methodology: RedTrack offers multiple attribution models out of the box — first-click, last-click, linear, time-decay, position-based, and a configurable rules-based model. They also run a Markov-chain-based DDA model on Agency-tier and above. Server-side conversion tracking and CAPI are built in.
Integrations: Deep affiliate-network integrations (CJ, Awin, Impact, ClickBank), all major ad platforms, custom postbacks for any tracking partner, plus native Shopify/WooCommerce/Stripe.
Real weakness: RedTrack's UX leans technical — designed for media buyers fluent in CPA-affiliate jargon, not for agency client reporting. Client-facing reports require setup work to make presentable.
6. AnyTrack — best for budget-conscious agencies
Pricing: $50/month "Lite" (15K events), $150/month "Plus" (50K events), $300/month "Pro" (250K events). Annual plans get 20% off.
Best for: Smaller agencies, freelancers, in-house marketers at sub-$1M brands who want server-side conversion-API tracking without paying $500+/month for a full attribution platform.
Methodology: AnyTrack's primary value is dead-simple server-side tracking + CAPI/Events API integration. Attribution is rules-based (first-click, last-click, linear, time-decay configurable per conversion goal); there's no ML model. Think of it as a managed Google Tag Manager Server with native CAPI/Events API integrations.
Integrations: All major ad platforms, Shopify, WooCommerce, Stripe, ClickFunnels, Kartra, custom webhooks. The integration surface is narrower than Hyros or TripleWhale but covers the 80% case.
Real weakness: Limited path-level attribution — you can see conversion paths but can't run sophisticated cross-channel analysis. Best as a pragmatic CAPI/Events API tool, not as a strategic attribution platform.
7. Branch — best for app-first attribution
Pricing: Custom enterprise pricing; effectively a paid platform starting at ~$500/month for small apps and scaling into the thousands for large apps.
Best for: Any business where the conversion happens in a mobile app (gaming, fintech, dating, food delivery, fitness apps). Branch is the dominant deep-linking platform and, alongside AppsFlyer and Adjust, one of the major mobile-attribution players. If your client's primary conversion is an in-app event, a platform like Branch is non-negotiable.
Methodology: Branch's attribution is mobile-native — they handle deep links, deferred deep links (where a user installs the app via an ad, then opens it for the first time and gets dropped into the right content), cross-platform identity stitching (web-to-app), and the messy mobile-attribution standards (SKAdNetwork on iOS, Google Play Install Referrer on Android).
Integrations: All major mobile-ad networks (Meta, Google App Campaigns, TikTok, Apple Search Ads, Snap, Reddit), MMP integration partners, deep-linking SDKs for iOS and Android.
Real weakness: Web-only businesses get nothing from Branch — it's a mobile-first platform. The interface and concepts (SKAN postbacks, deferred deep links) require mobile-marketing expertise. Pricing scales steeply with monthly active users.
8. Rockerbox — best for mid-market multi-channel
Pricing: Enterprise pricing only, typically $3K-$10K/month based on data volume and feature tier.
Best for: Mid-to-large brands ($10M-$200M revenue) running diversified channel mixes — Meta, Google, TikTok, podcast ads, OOH, TV, direct mail, affiliate, and offline channels. Especially strong for brands that need to attribute offline (TV, radio, OOH) alongside digital.
Methodology: Rockerbox blends multi-touch attribution (path-level digital tracking) with media-mix-modeling (top-down regression analysis of all channels including offline). The MMM layer is what makes them stand out — they'll attribute the impact of a TV campaign or a podcast sponsorship using statistical modeling, not click tracking.
Integrations: All major digital ad platforms, e-commerce platforms, custom data warehouses, plus offline-channel data ingestion (TV airing logs, podcast download data, OOH impression estimates).
Real weakness: Pricing puts it firmly in the enterprise tier — smaller brands won't get value at a $3K+/month floor. Implementation takes 6-12 weeks. The MMM methodology requires statistical literacy to interpret correctly.
Channel-specific attribution: what no platform tracks well
Even with the best platform implementation, some channels are structurally invisible to standard attribution tooling. Agencies that ignore these channels are systematically under-investing in real revenue drivers; agencies that account for them gain a competitive edge in budget allocation.
DM and inbound social. Instagram DMs, TikTok DMs, LinkedIn DMs, and WhatsApp messages are not tracked by Meta's pixel, Google's tag, or any of the attribution platforms listed above, because DMs never fire as standard pixel events. When a buyer DMs your client after seeing a Story, replies to a Reel, or sends a WhatsApp message after seeing a Facebook ad, the conversion path effectively disappears — the inbound DM becomes "direct" or "unknown" in every dashboard. For coaching, info-product, SMMA, and high-ticket service businesses running DM-driven funnels, this is often the dominant conversion path. Inflowave's unified inbox and lead pipeline track the DM-to-close journey natively; you can then pipe events to your attribution platform via Zapier or webhook so the DM appears as a tracked touchpoint in your existing model.
Word-of-mouth and community referrals. Slack communities, private Discord servers, in-person events, and personal referrals are responsible for an estimated 20-50% of high-ticket B2B and coaching revenue, but no platform tracks them. The pragmatic workaround is post-purchase survey ("How did you hear about us?") with structured response options. Surveys are imperfect — they undercount upper-funnel touchpoints buyers don't remember — but they're the only signal available.
Podcast ad attribution. Podcast ads remain measurement's hardest problem. The only practical attribution methods are unique promo codes, dedicated landing-page URLs (vanity URLs like brand.com/podcast), and post-purchase surveys. Newer tools (Podscribe, Magellan AI, Spotify Ad Analytics) attempt impression-based attribution but the signal is noisy. Agencies running podcast spend should expect 30-50% of attribution to live in promo codes and URLs, not in their attribution platform.
Branded search. When a buyer types the brand name into Google after seeing an Instagram ad, last-click attribution credits "Google / paid" or "Google / organic" — when the actual driver was the Instagram ad. Disentangling branded search from the upper-funnel activity that drives it is one of the hardest problems in attribution. The only reliable answer is incremental lift testing on branded search itself (turn off branded-search ads in 50% of geos for 4 weeks; measure the delta in branded-search organic clicks against the delta in conversions).
Long sales cycles >90 days. B2B SaaS, enterprise services, agency engagements, and high-ticket consulting often have 90-180 day buyer journeys. Cookie-based tracking dies inside 7 days on Safari, 30-90 days everywhere else. The only viable approach for long cycles is first-party identity (email-based, CRM-linked) rather than cookie-based — which is exactly what Hyros, Wicked Reports, and Northbeam invested in.
How to actually pick an attribution model
The right attribution model is the one your business stage and budget can support. The decision framework below cuts through the marketing-platform sales pitches.
E-commerce under $500K/year revenue: Stick with last-click attribution + clean UTMs + Meta Conversion API. Save the $500/month an attribution platform would cost. At your data volume, no model will produce statistically meaningful path-level attribution — you don't have enough conversions for ML, and rules-based attribution beyond UTM-tagged last-click is ceremony without insight.
E-commerce $500K-$5M/year: TripleWhale at $399-$799/month. The Shopify-native integration, Triple Pixel server-side tracking, and profit-attribution layer (using your COGS data) make TripleWhale the obvious fit for this stage. Northbeam is technically more sophisticated but priced for $5M+ brands.
E-commerce $5M-$50M/year: Northbeam ($1K-$3K/month). At this stage you have enough data for genuine MTA + MMM blending, and Northbeam's transparent methodology + warehouse integrations let your team build trust in the numbers. TripleWhale Plus is also a credible choice if you prefer Shopify-native UX over warehouse integration. For an in-depth comparison of the major attribution platforms — including up-to-date pricing benchmarks and feature differences — see our best ad tracking and attribution software roundup, which covers the same eight platforms in greater feature-by-feature detail.
E-commerce $50M+: Rockerbox or a bespoke MMM build. At this scale you need offline-channel attribution (TV, podcast, OOH) and customized statistical modeling. Hire a marketing scientist or contract a firm.
Coaching, info-product, course creators: Hyros ($497-$2K/month). The first-party tracking, ClickFunnels/Kartra/Kajabi/GHL integrations, and long-cycle attribution are explicitly built for this category. There is no close substitute.
Agency client work (running multiple client accounts): RedTrack Agency tier ($524/month) or Wicked Reports. Workspace isolation, multi-account management, and reseller-friendly pricing matter when you're the one running the platform across clients.
Lead-gen with email-heavy funnels: Wicked Reports ($497/month). The Klaviyo/ActiveCampaign integrations and long-cycle attribution are the strongest in the market for this use case.
Mobile-first business: Branch or AppsFlyer or Adjust. Web attribution platforms simply can't track mobile properly; you need an MMP.
Multi-brand or enterprise: Rockerbox + custom MMM build. Above $50M annual revenue, the right answer is usually a hybrid (path-level MTA + top-down MMM) configured to your specific channel mix.
The most important constraint is that the attribution platform must integrate with the actual systems your business runs on. Hyros + a Shopify store is friction; TripleWhale + a coaching offer with manual sales is friction. Pick the platform whose integration model matches your business model.
A practical 5-step attribution implementation
A clean attribution implementation takes 30-60 days. Here's the order to do it in.
Step 1: Audit current data
Before installing anything, document what you have. For each client (or your own business), answer:
- Are UTM parameters consistently applied to every ad, email, and external link? Pull a 30-day sample of inbound traffic from Google Analytics or your existing analytics; if more than 5% of paid-channel traffic has missing or malformed UTMs, fix that first.
- Is the Meta pixel installed and firing? Is the Google tag installed? Is the TikTok pixel installed? Are conversion events configured correctly? Use Meta Pixel Helper, Google Tag Assistant, and TikTok Pixel Helper to verify.
- Is server-side tracking running on any channel? If yes, what's the deduplication setup with browser pixels? If no, that's Step 4.
- What's the source of truth for conversion data? Stripe? Shopify? Salesforce? HubSpot? CRM system? You need to know which system has the actual conversion record before you can attribute back to ads.
Step 2: Pick a model + tool aligned with business stage
Use the decision framework above. Pick the simplest option that solves the actual problem. A common mistake is over-buying — installing a $2K/month platform on a $200K/year business creates more confusion than insight.
Step 3: Standardize UTM conventions
This is the cheapest, highest-leverage step in the entire process. Every link your team or your client's team produces should follow a consistent UTM template. Here's a template that works for 90% of agencies:
utm_source = the platform (facebook, google, tiktok, youtube, email, podcast)
utm_medium = the ad type (cpc, cpm, video, organic, email, social)
utm_campaign = the campaign name (summer-launch-2026, evergreen-coldtraffic)
utm_content = the ad creative (variant-a-hook-1, variant-b-hook-2)
utm_term = the audience or keyword (lookalike-1pct, broad-25-45)
Document this convention in a Notion page, share it with everyone running ads, and audit weekly for the first 30 days. Build a UTM-builder tool or Google Sheet template so nobody hand-types UTM parameters (and consistently mistypes them).
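The builder-tool idea above can be sketched in a few lines — a minimal example that enforces the convention (lowercase, hyphens, the five standard parameters); the function name and example URL are illustrative, not from any platform SDK:

```python
from urllib.parse import urlencode

def build_utm_url(base_url: str, source: str, medium: str,
                  campaign: str, content: str = "", term: str = "") -> str:
    """Append UTM parameters to a destination URL, enforcing the convention:
    lowercase everything, hyphens instead of spaces or underscores."""
    def norm(value: str) -> str:
        return value.strip().lower().replace(" ", "-").replace("_", "-")

    params = {
        "utm_source": norm(source),
        "utm_medium": norm(medium),
        "utm_campaign": norm(campaign),
    }
    if content:
        params["utm_content"] = norm(content)
    if term:
        params["utm_term"] = norm(term)
    # Respect URLs that already carry query parameters.
    sep = "&" if "?" in base_url else "?"
    return base_url + sep + urlencode(params)

url = build_utm_url("https://example.com/offer",
                    "Facebook", "cpc", "Summer Launch 2026", "variant-a-hook-1")
# → https://example.com/offer?utm_source=facebook&utm_medium=cpc&utm_campaign=summer-launch-2026&utm_content=variant-a-hook-1
```

Because normalization happens inside the builder, "Facebook" and "facebook" can never fragment into two sources in your reports.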
Step 4: Implement server-side conversion API
Pick one of: Stape (managed), Google Tag Manager Server (DIY), or your attribution platform's native server-side tracker (Hyros, TripleWhale, and RedTrack all include their own). Install Meta CAPI, Google Enhanced Conversions, and TikTok Events API in that order — Meta gives the biggest immediate lift, Google feeds better conversion data to Smart Bidding, and TikTok's browser-side signal is the most privacy-restricted, so its Events API closes the largest relative gap once the first two are running.
Validate event match quality (EMQ) in Meta Events Manager — aim for 7+/10. If EMQ is below 6, you're missing customer parameters (email, phone, name, address) that should be sent server-side.
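To make this step concrete, here's a minimal sketch of building one Meta CAPI event server-side. The field names and SHA-256 hashing requirement follow Meta's Conversions API, but the helper names are ours, PIXEL_ID and ACCESS_TOKEN are placeholders, and you should verify the current Graph API version against Meta's docs before shipping:

```python
import hashlib
import time

def sha256_norm(value: str) -> str:
    """Meta requires identifiers normalized (trimmed, lowercased) then SHA-256 hashed."""
    return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()

def build_capi_event(email: str, event_name: str, event_id: str,
                     value: float, currency: str = "USD") -> dict:
    """Build one Conversions API event. The event_id must match the browser
    pixel's eventID so Meta can deduplicate the server and browser copies."""
    return {
        "event_name": event_name,
        "event_time": int(time.time()),
        "event_id": event_id,              # deduplication key shared with the pixel
        "action_source": "website",
        "user_data": {"em": [sha256_norm(email)]},
        "custom_data": {"value": value, "currency": currency},
    }

payload = {"data": [build_capi_event("Buyer@Example.com", "Purchase", "order-1042", 97.0)]}
# POST payload as JSON to:
#   https://graph.facebook.com/v19.0/{PIXEL_ID}/events?access_token={ACCESS_TOKEN}
# (PIXEL_ID and ACCESS_TOKEN come from Meta Events Manager.)
```

Sending more hashed customer parameters (phone, name, address) alongside `em` is what raises EMQ — which is exactly what the 7+/10 target below measures.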
Step 5: Run a holdout test quarterly
Once your attribution platform is reporting numbers, validate them with a real holdout test at least once per quarter. The simplest version: turn off Meta retargeting in 25% of your DMA list for 4 weeks, while keeping it running everywhere else. Compare conversion volume in holdout DMAs vs control DMAs. The delta is your true incremental lift from Meta retargeting — and it's almost always meaningfully different from what your attribution platform reports.
If the platform reports retargeting drives 30% of revenue but the holdout shows it only drives 12% incremental revenue, you know the platform is over-crediting retargeting. Adjust budget accordingly. This kind of validation is the difference between an agency that runs ads and an agency that runs ads with confidence.
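The holdout arithmetic above can be sketched as a difference-in-differences calculation — function name and numbers are illustrative, not real campaign data:

```python
def incremental_lift(holdout_conversions: float, holdout_baseline: float,
                     control_conversions: float, control_baseline: float) -> float:
    """Difference-in-differences estimate of incremental lift.
    Baselines are pre-test conversion volumes, used to adjust for
    pre-existing differences between holdout and control geos."""
    control_growth = control_conversions / control_baseline
    # Counterfactual: what the holdout geos would have done had ads kept running.
    expected_holdout = holdout_baseline * control_growth
    return (expected_holdout - holdout_conversions) / expected_holdout

# Example: control geos grew 10% during the test window;
# holdout geos (retargeting off) only reached 880 conversions.
lift = incremental_lift(holdout_conversions=880, holdout_baseline=1000,
                        control_conversions=3300, control_baseline=3000)
# (1100 - 880) / 1100 = 0.20 → retargeting drove ~20% of holdout-geo volume
```

Comparing that 20% against the platform-reported share is the adjustment described above.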
Common attribution mistakes
After auditing dozens of agency attribution setups, these mistakes appear over and over:
Using last-click as the default report. Already covered in detail above, but worth repeating: last-click is wrong for any business with a multi-touch buyer journey, which is essentially every business above $500K revenue. If your client reporting deck still leads with last-click, you're showing the client the report most likely to lead them to wrong budget decisions.
Trusting GA4's modeled conversions without sanity check. GA4 will happily fill in missing data with synthetic conversions. Always cross-check GA4-reported conversions against the actual source-of-truth system (Stripe, Shopify, CRM). If GA4 reports 1,200 conversions but Stripe shows 980, the difference is modeled or duplicated — figure out which before reporting either number to a client.
Not running geo-holdout tests. Without periodic lift testing, you have no way to validate the platform's reported attribution. Most agencies skip this because it requires turning off some ads, which feels uncomfortable. The cost of not testing is bigger: you're flying on a dashboard that may be wrong by 30-60% in either direction.
Forgetting branded search cannibalization. Paying for branded search ads when the buyer was already going to find you organically is one of the most common silent budget leaks. Run a branded-search holdout test once per year — turn off your branded search ads for 2-4 weeks and see what happens to total branded-search clicks (paid + organic combined). If organic absorbs 80%+ of the lost paid clicks, you're paying Google for traffic that was already yours.
Comparing platforms-of-record without reconciling counting differences. Meta, Google, GA4, and your attribution platform will all report different conversion counts for the same campaign. They use different attribution windows, different deduplication logic, and different definitions of a conversion (Meta counts view-through within 1 day; Google counts click-through within 30; GA4 attributes via DDA). Before comparing two reports, document each platform's attribution-window settings and deduplication logic. The "Meta says 1,000, GA4 says 600" gap is usually 80% explained by configuration differences, not measurement bugs.
FAQ
Q: What's the difference between attribution and tracking?
Tracking and attribution are often confused but they're distinct steps in the data pipeline. Tracking is the process of capturing marketing events — pixel fires, UTM parameters, server-side conversion API calls, click logs. Attribution is the process of analyzing those tracked events to assign credit for conversions across multiple touchpoints. You can have great tracking and bad attribution (you collected all the data but apply a naive last-click model that produces wrong conclusions), or you can have bad tracking and good attribution methodology (your model is sophisticated but your input data is missing 30% of conversions due to ad-blockers and ITP). Most agency attribution problems in 2026 are tracking problems — missing server-side data, broken UTMs, ad-blocked pixels — not attribution-model problems. Fix tracking first, then refine your attribution model.
Q: What's the best attribution model for a small agency or business?
For agencies serving sub-$500K-revenue clients, the best practical attribution model in 2026 is last-click + position-based as a comparison view, both fed by clean UTMs and a Meta CAPI implementation. Don't pay for an attribution platform; the data volumes don't justify it. Instead, invest in a documented UTM convention, a server-side tag manager (Stape's $30/month tier is plenty), and quarterly holdout testing on the largest channel. As clients grow above $500K, the right answer shifts to TripleWhale (e-commerce) or Hyros (info-product). The biggest mistake small agencies make is buying a $1,500/month attribution platform for a client whose total ad spend is $5,000/month — the math doesn't work and the platform produces statistically noisy attribution at low data volumes.
Q: How does iOS 14.5 (App Tracking Transparency) affect attribution in 2026?
iOS 14.5 — released in April 2021 and still in force in 2026 — requires apps (including Meta's Facebook and Instagram apps) to ask users for explicit permission before tracking them across other apps and websites. The opt-in rate has stabilized around 25-30% globally. The remaining 70-75% of iOS users opt out, which means Meta cannot match those users to your pixel events on your website. Five years later, the cumulative effects are: Meta's reported conversions are systematically lower than actual conversions (often 30-40% lower in iOS-heavy audiences); Meta's audience-targeting precision degraded for opted-out users; Meta's attribution windows shrank from 28-day click + 1-day view to 7-day click + 1-day view by default; and Aggregated Event Measurement (AEM) caps you at 8 conversion events per domain. Server-side conversion APIs (Meta CAPI) recover most of this loss because they don't require browser-level tracking — they fire from your backend, which sees the actual conversion regardless of ATT status.
Q: Can I do attribution without paying for an attribution platform?
Yes — and for businesses under $500K revenue, you should. The DIY attribution stack: clean UTMs on every link, server-side Google Tag Manager hosted on Stape ($30/month) or self-hosted on Google Cloud (~$120/month), Meta CAPI + Google Enhanced Conversions + TikTok Events API integrations through GTM Server, GA4 for free analytics, and Looker Studio (free) for dashboards. Add a quarterly geo-holdout test to validate platform-reported numbers. This stack costs $30-200/month total and produces good-enough attribution for most small to mid-sized businesses. The point at which you outgrow DIY is usually when you're doing $1M+/year revenue with 5+ active channels and need profit-aware attribution (LTV, COGS) to make budget decisions — that's when TripleWhale, Hyros, or Northbeam earn their cost.
Q: What's incremental lift and why does it matter?
Incremental lift measures the causal impact of a marketing channel by comparing actual conversion outcomes to a counterfactual where the channel didn't run. A typical lift test: in 50% of designated market areas (DMAs), run the campaign normally; in the other 50%, suppress it. After 4-8 weeks, the difference in conversion volume — adjusted for baseline drift — is the campaign's true incremental contribution. Why it matters: every other attribution model (last-click, first-click, time-decay, DDA) measures correlation between touchpoints and conversions. Lift measures causation. Empirically, platform-reported conversions overstate true incremental conversions by 30-60% in most studies — meaning a Meta campaign showing 1,000 conversions in Ads Manager probably drove 400-700 actual incremental conversions, with the rest happening anyway via other channels. Without lift testing, you're optimizing toward platform-reported conversions, which over-credit the platform and lead to over-investment.
Q: How do I attribute conversions to Instagram DMs?
Standard attribution platforms — Meta Ads Manager, GA4, Hyros, TripleWhale, Northbeam — do not track Instagram DMs as conversion events because DMs aren't fired as pixel events. When a buyer DMs your client after seeing an ad, the conversion path goes dark. The practical solution is a CRM-style tool that natively connects DM conversations to ad-source UTMs and conversion outcomes. Inflowave is purpose-built for this — it captures every Instagram DM, links it to the ad UTM that drove the click-to-DM moment (using Meta's Click-to-Message ad data), and tracks the DM-to-booked-call-to-paid-customer path inside a unified pipeline. The conversion event can then be piped to your attribution platform via Zapier or webhook so the DM appears as a real touchpoint in your existing model. Without a tool like this, agencies running DM-driven funnels — common in coaching, info-product, and SMMA — systematically under-attribute Meta and Instagram budget because the closing channel is invisible to standard tracking.
Q: Is data-driven attribution (DDA) accurate?
DDA is accurate when you have enough data; below that threshold it's worse than rules-based attribution because it overfits noise. Google's DDA requires 300+ conversions in 30 days per conversion action before activating, and even at 300 conversions the model is statistically shaky — the standard error on credit allocation is large enough that month-over-month changes in DDA-reported attribution are often noise, not real shifts. At 1,000+ conversions/month per action, DDA stabilizes and meaningfully outperforms last-click. At 10,000+ conversions/month, DDA approaches the upper bound of what observational attribution can do. The key insight: DDA is correlation-based, not causal. It models how channels predict conversions in your data; it does not measure how channels cause conversions. For causal accuracy, layer a quarterly incremental-lift test on top of DDA. For most agencies, the right framing is "DDA is the best attribution model for routine optimization decisions, lift testing is the periodic validation."
Q: How do I attribute conversions across multiple devices?
Cross-device attribution is one of the biggest 2026 attribution challenges. The deterministic answer is first-party identity matching: when a buyer signs up, signs in, or completes a transaction, capture their email and use it as the cross-device identifier. Hyros, Wicked Reports, and most enterprise attribution platforms stitch identities this way — once a single email shows up on mobile and desktop, those sessions get merged into a unified customer journey. The probabilistic answer (used by GA4, Meta, Google Analytics) is signal-based stitching: matching IP, device fingerprint, signed-in-Google-account, and behavioral signals. Probabilistic stitching catches some cross-device journeys but misses others, especially when buyers don't sign in to Google or Meta consistently. The tactical answer for agencies: invest in opt-in moments (newsletter signup, lead magnet download, account creation) on every client site to maximize email-based identity capture, then route all conversion data through tools that prioritize first-party identity over cookies.
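The deterministic (email-based) half of this answer can be sketched as a simple merge — session shapes and field names here are illustrative, not any platform's schema:

```python
from collections import defaultdict

def stitch_journeys(sessions: list[dict]) -> dict[str, list[dict]]:
    """Merge sessions into per-customer journeys using email as the
    cross-device identifier; sessions with no email stay unmatched."""
    journeys = defaultdict(list)
    for s in sessions:
        key = s.get("email") or f"anon:{s['device_id']}"
        journeys[key].append(s)
    for path in journeys.values():
        path.sort(key=lambda s: s["ts"])   # chronological touchpoint order
    return dict(journeys)

sessions = [
    {"device_id": "phone-1",  "ts": 1, "source": "instagram", "email": None},
    {"device_id": "laptop-9", "ts": 2, "source": "google",    "email": "a@x.com"},
    {"device_id": "phone-1",  "ts": 3, "source": "email",     "email": "a@x.com"},
]
journeys = stitch_journeys(sessions)
# "a@x.com" now has a merged two-touch journey across devices; the anonymous
# Instagram session stays unstitched until that device later authenticates.
```

The unstitched Instagram session is exactly why the opt-in moments described above matter: every identified session retroactively improves journey coverage.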
Q: What's the difference between MTA (multi-touch attribution) and MMM (media mix modeling)?
MTA — multi-touch attribution — tracks individual customer journeys across touchpoints and assigns credit at the path level. Tools: Hyros, TripleWhale, Northbeam, Wicked Reports. Strengths: granular, near-real-time, good for daily/weekly optimization. Weaknesses: depends on tracking pixels and cookies (which break in privacy-restricted environments), can't see channels without trackable click events (TV, OOH, podcast). MMM — media mix modeling — uses statistical regression to model the relationship between aggregate marketing spend (across all channels including offline) and aggregate revenue over time. Strengths: works for any channel including offline, doesn't depend on individual-level tracking, captures long-term effects. Weaknesses: aggregate, not real-time (typically weekly or monthly outputs), requires statistical expertise to implement and interpret. Modern best practice is to blend both: MTA for tactical optimization, MMM for strategic budget allocation across channels. Rockerbox and Northbeam offer blended MTA + MMM products for enterprise customers; smaller businesses can build an in-house MMM with open-source tools (Meta's Robyn, Google's Meridian, Uber's Orbit) on their own data warehouse.
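The core of an in-house MMM is just a regression of aggregate revenue on aggregate channel spend. Here's a deliberately minimal sketch with invented weekly numbers — a real MMM (Robyn, Meridian, Orbit) adds adstock carryover, saturation curves, seasonality, and regularization on top of this:

```python
import numpy as np

# Weekly spend per channel (columns: meta, google, podcast) and weekly revenue,
# all in $K. Illustrative numbers, not real data.
spend = np.array([
    [10.0, 5.0, 2.0],
    [12.0, 6.0, 2.0],
    [ 8.0, 7.0, 3.0],
    [15.0, 4.0, 1.0],
    [11.0, 6.0, 2.5],
    [ 9.0, 8.0, 3.0],
])
revenue = np.array([55.0, 64.0, 52.0, 68.0, 60.0, 57.0])

# Add an intercept column: baseline revenue that arrives with zero spend.
X = np.column_stack([np.ones(len(spend)), spend])
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)
baseline, meta_coef, google_coef, podcast_coef = coef
# Each channel coefficient estimates incremental revenue per unit of spend —
# including offline channels (podcast, TV, OOH) that no pixel can track.
```

This is why MMM works where pixels fail: it never needs a user-level click, only spend and revenue time series per channel.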
Q: How accurate is Hyros / TripleWhale / Northbeam compared to ad-platform reporting?
In our experience auditing client implementations, Hyros, TripleWhale, and Northbeam typically report 20-50% more conversions than Meta Ads Manager reports for the same campaign — and those extra conversions are real (validated against Stripe and Shopify). The reason: each platform supplements ad-platform pixels with their own first-party server-side tracker, which captures conversions that browser-side pixels miss due to ad-blockers, ITP, ATT opt-outs, and JavaScript errors. Where they differ from each other: Hyros tends to emphasize first-touch-weighted attribution (which is more generous to upper-funnel channels), TripleWhale's "Total Impact" model is closer to a hybrid first-and-last weighting, and Northbeam's approach is more transparent and customizable. None of them are perfectly accurate — they're all observational models, not causal — but they're meaningfully better than ad-platform-self-reporting alone. Validate periodically against Stripe/Shopify (revenue should match within 5%) and against quarterly geo-holdout tests (lift should track directionally).
Q: What's a UTM and how should agencies standardize them?
A UTM (Urchin Tracking Module) is a set of URL parameters appended to a destination link that captures the source, medium, campaign, content, and keyword of the inbound traffic. The five standard parameters are utm_source (the platform), utm_medium (the ad type), utm_campaign (the campaign name), utm_content (the creative variant), and utm_term (the keyword or audience). Standardization is essential because attribution platforms group reports by the utm fields exactly as captured — "facebook" and "Facebook" become two different sources in your reports, ad-spend allocation gets fragmented, and your dashboards become unreadable. Best practice: lowercase everything, use hyphens not spaces or underscores, document a naming convention in a shared doc, build a UTM builder tool or Google Sheet template so nobody hand-types parameters, audit weekly during the first 30 days of a new client, and quarterly afterward. A clean UTM dataset is the cheapest, highest-impact attribution improvement most agencies can make.
Q: How does GA4's attribution differ from Universal Analytics?
GA4 replaced Universal Analytics' last-non-direct-click default with data-driven attribution (DDA) as the new default. Practically, this means GA4 reports significantly different attribution numbers than UA did for the same conversion paths — often 10-25% different on top channels. GA4 also uses event-based data modeling (every interaction is an event) instead of UA's pageview-based model, which changes how funnels and conversions are defined. Other significant differences: GA4 caps attribution windows at 90 days for conversions (UA was unlimited via custom configurations); GA4 uses cookieless modeling to fill data gaps from privacy-restricted users (UA didn't); GA4's free tier has a 10-million-events-per-query sampling threshold; and GA4 requires explicit conversion event configuration (UA had goals built in). For agencies migrating from UA to GA4, the biggest practical issue is that GA4 numbers don't reconcile with historical UA numbers — clients see "different" metrics and assume something broke. The honest answer is GA4 is measuring differently, not better, and the right move is to establish GA4 baselines fresh rather than trying to reconcile to UA history.
Conclusion
Marketing attribution in 2026 is a discipline of trade-offs. There is no "correct" attribution model — only models that fit your business stage, data volume, and decision cadence. The agencies that win are the ones who understand the tradeoffs explicitly: which model they're using, where it's wrong, and what validation tests catch the errors before they become bad budget decisions.
The implementation order that consistently works: clean UTMs first, server-side conversion APIs second, an attribution platform that fits your business stage third, and quarterly incremental-lift tests fourth. Layer in the channel-specific attribution gaps (DMs, podcast, branded search, word-of-mouth) where your client's actual revenue path lives — these are the places where most attribution platforms produce zeros and where competitive advantage hides.
If your agency or client base runs Instagram-DM funnels — coaching, info-product, SMMA, high-ticket service businesses — attribution starts in the inbox. Inflowave tracks every DM conversion alongside ad-source UTMs, then feeds events to your existing attribution platform via webhook or Zapier so the DM-to-close path appears in your reports. See Inflowave's pricing for plan details. For deeper reading, see our comparison of the best ad tracking and attribution software for 2026, our guide to setting up Facebook Conversion API, and our breakdown of the best CRM platforms for marketing agencies. Pick the model that fits your stage, implement clean infrastructure, and run lift tests quarterly. Everything else is execution.