Your bidding algorithms are training on broken signal. Your CAC is showing it.

Most paid-media stacks lose 15 to 40 percent of their conversion signal between the user's browser and the ad platforms optimizing your spend. Performance Max and Advantage+ keep bidding against what's left. The result is a slow, invisible drift in CAC, ROAS, and pLTV that nobody on the team can attribute to a specific cause. Signal Quality finds the leak, quantifies it in dollars, and closes it.

Book a Signal Fracture Audit
When the signal feeding your bidding is clean, the numbers move.

Meta and Google have published their own case studies showing what high-fidelity, server-side conversion signal does to paid-media performance. The pattern is consistent across categories.

These are not Signal Quality numbers. They are the platforms' own published figures for what their algorithms do when they receive accurate, deduplicated, server-side conversion signal. The inverse is also true: when the signal is degraded, the same algorithms underperform, silently, against the same media plan. Most stacks are running closer to the degraded state than the clean one and have no instrumentation to tell them which.

Signal quality is not one thing. It's six.

Every conversion that reaches Meta, Google, or your warehouse passes through these six axes. Each one is independently measurable. Each one is independently breakable. The worst-scoring axis sets the ceiling for the entire system.

01
Consent state integrity
Whether a user's consent choice is captured at the edge and applied consistently across every downstream tag, server, and vendor. Drives the share of conversions that arrive observed vs. modeled.
02
Event integrity
Whether one conversion is counted as one conversion with a stable identity across browser and server. Encompasses event match quality, deduplication, event ID parity, and server-client reconciliation. Failures here are the single largest source of inflated conversion counts and corrupted bid optimization.
03
Identity resolution
Whether email, phone, and identifiers are normalized, hashed, and persisted across sessions and platforms. Governs Enhanced Conversions, CAPI match rates, and audience-sync fidelity.
04
Attribution parameter capture
Whether click IDs, UTMs, and session context survive the journey from ad click to conversion. Failure shows up as inflated "Direct" traffic and missing source attribution in MMM and reporting.
05
Payload completeness
Whether each event carries the fields the platform actually uses to optimize: currency, value, content IDs, transaction granularity, behavioral context. Skinny payloads train bidding on a fraction of the signal you paid to generate.
06
Value fidelity
Whether the value parameter passed to the platform represents what the business actually wants to optimize for: gross revenue, margin-adjusted revenue, or predicted lifetime value. Passing the wrong value teaches Performance Max and Advantage+ to acquire the wrong customers, permanently.
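The axes above translate directly into the shape of the payload a server sends. A minimal sketch, assuming a Meta Conversions API-style purchase event (`event_id`, hashed `em`/`ph` in `user_data`, and `custom_data` follow Meta's documented conventions; the order dict, helper names, and margin rate are illustrative, not a production implementation):

```python
import hashlib
import time

def normalize_and_hash(value: str) -> str:
    """Axis 03: normalize before hashing, or identical users won't match.
    Meta and Google expect SHA-256 of the trimmed, lowercased value;
    exact normalization rules differ per field (e.g. digits-only phones),
    simplified here."""
    return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()

def build_purchase_event(order: dict, event_id: str) -> dict:
    """Assemble a server-side purchase event that scores well on axes 02-06."""
    return {
        "event_name": "Purchase",
        "event_time": int(time.time()),
        # Axis 02: the browser pixel must send this SAME event_id so the
        # platform can deduplicate the two copies into one conversion.
        "event_id": event_id,
        "user_data": {
            "em": normalize_and_hash(order["email"]),  # axis 03
            "ph": normalize_and_hash(order["phone"]),  # axis 03
        },
        "custom_data": {
            # Axis 05: currency, value, and content IDs are the fields
            # bidding actually uses; omitting them trains on a skinny payload.
            "currency": order["currency"],
            "content_ids": order["skus"],
            # Axis 06: pass margin-adjusted value rather than raw revenue
            # if margin is what the business optimizes for (rate illustrative).
            "value": round(order["revenue"] * order["margin_rate"], 2),
        },
    }

event = build_purchase_event(
    {"email": " Jane@Example.com ", "phone": "+15551234567",
     "currency": "USD", "skus": ["SKU-1"], "revenue": 100.0,
     "margin_rate": 0.40},
    event_id="order-8841",
)
```

The point of the sketch: every axis is a concrete field or behavior in this payload, which is why each one is independently measurable.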
What this looks like inside a marketing org.

These are the surface patterns we see most often. Each one is a symptom of one or more of the six axes failing upstream.

CAC is drifting up. Your agency says campaigns look fine.
From their dashboard, they're not wrong. The dashboard inherits whatever signal made it to the platform. Performance Max and Advantage+ are optimizing against the conversion shape your stack permitted, not the one you designed.
Modeled conversions are climbing while observed conversions stay flat.
Consent Mode v2 is doing its job: translating denied-consent traffic into a statistical approximation. If the underlying signal is corrupted, the model is dutifully transducing distorted input into modeled output that your bidding layer treats as ground truth.
Every platform claims credit for the same conversions, and none of them match your CRM.
Each platform is grading its own homework against a different fragment of the same underlying signal. Identity fragmentation, deduplication failure, and parameter decay all compound into MMM and attribution numbers that nobody trusts.
Your "Direct" traffic is suspiciously high.
When 30 to 40 percent of traffic shows up with no source, it doesn't mean people are typing your URL. It means attribution context is being stripped between click and conversion, and pLTV models built on that data inherit the same blindness.
A 40-percent purchase variance appeared between platforms with no campaign change.
This is the pattern from our recent DSC engagement. Root cause: Consent Mode v2 silently shifted reporting from observed to modeled conversions after a CMP update. The bidding layer trained on the new, lower-fidelity signal for weeks before the variance was diagnosed. Read the case study.
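The "Direct" inflation pattern above usually traces to click context dying between landing and conversion. A minimal sketch of axis 04, assuming the standard gclid/fbclid/UTM parameter set (the `storage` dict stands in for a first-party cookie or localStorage entry; function names are illustrative):

```python
from urllib.parse import urlparse, parse_qs

# Click IDs and campaign parameters that must survive to the conversion event.
ATTRIBUTION_PARAMS = ["gclid", "fbclid", "utm_source", "utm_medium", "utm_campaign"]

def capture_attribution(landing_url: str, storage: dict) -> dict:
    """On landing: persist click IDs and UTMs to first-party storage."""
    query = parse_qs(urlparse(landing_url).query)
    for param in ATTRIBUTION_PARAMS:
        if param in query:
            storage[param] = query[param][0]
    return storage

def attach_attribution(event: dict, storage: dict) -> dict:
    """On conversion: attach whatever survived, so the event isn't 'Direct'."""
    event["attribution"] = {
        k: v for k, v in storage.items() if k in ATTRIBUTION_PARAMS
    }
    return event

storage: dict = {}
capture_attribution(
    "https://shop.example/landing?gclid=Cj0abc123&utm_source=google&utm_medium=cpc",
    storage,
)
conversion = attach_attribution({"event_name": "Purchase"}, storage)
```

When this capture step is missing or the stored context expires before purchase, the conversion arrives with no source, and it is booked as Direct.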
Audit, build, monitor.
01 · Audit
A Signal Fracture Audit measures all six axes against your live stack and translates the findings into projected CAC and ROAS impact at your current spend. Public browser data plus a single console capture is enough to score most of them. No platform credentials required. You get a single self-contained deliverable with a prioritized list of fractures, dollar estimates, and the exact next move.
02 · Build
Targeted remediation of the worst-scoring axes: server-side container fixes, CAPI configuration, deduplication keys, hashing, consent propagation, parameter capture. No platform migrations. No six-month overhauls. The signal feeding your bidding starts matching what the business already thinks it's sending.
03 · Monitor
Stacks drift. Platforms change APIs. Tags get added without review. A configuration that scored clean last quarter degrades silently. Ongoing monitoring catches axis-level regressions before they show up in performance reports, instead of after a quarter of distorted optimization.
The frame
Your paid-media platforms are training their bidding algorithms on whatever conversion signal makes it through your stack, not the signal you intended to send. When the signal is degraded, the algorithms optimize against fiction: CAC drifts up, ROAS reporting fragments, pLTV models inherit the distortion, and the team has no instrumentation to tell them which axis is leaking. Signal Quality is the calibration layer beneath all of it: audit-grade measurement that turns paid-media leakage into recovered signal, and signal decay into dollar-denominated impact.
The person behind the practice.
Erich Eisenhart

I'm Erich Eisenhart. I diagnose the layer between a user's browser and the ML systems your spend trains. The six-axis model, the audit format, and the dollar-translation framework are mine.

The same patterns keep showing up. A DTC brand running Performance Max discovers their CMP is decorative: every consent category fires the same way regardless of what the user clicks. A sophisticated B2B operator finds GPC captured at the edge but never propagated to vendors. An enterprise advertiser realizes Consent Mode v2 has been transducing a corrupted primary signal into modeled conversions for months, and the bidding layer has been training on it.

These are not edge cases. Broken signal is the default state. Marketing stacks are built tool by tool, integration by integration, over years; nobody steps back to verify the pipe is delivering what the business thinks it's sending. That is the practice. I score the six axes, document what downstream systems inherit from each fracture, and rebuild the pipe so your downstream measurements actually correspond to reality.

Find out what your signal is actually worth.
A Signal Fracture Audit takes a few days, requires no platform access, and produces one self-contained deliverable: an axis-by-axis score of your current signal quality, the projected CAC and ROAS impact in dollars at your current spend, and a prioritized remediation plan. No retainer. No commitment. A clean read on what your stack is doing.
Book a Signal Fracture Audit
Or connect on LinkedIn