Attribution 11 min read Feb 5, 2026

The GTM Audit: What's Probably Hiding in Your Tag Manager

After years managing GTM across multiple markets and compliance environments, the same categories of problems appear every time. Here's what we look for first, and why it matters more than most marketers realise.

Google Tag Manager was designed to make it easy for marketers to deploy tracking without needing a developer for every change. That's exactly what happened — and it's exactly why most containers we audit are in poor shape.

The ease of adding tags is not matched by any equivalent discipline for documenting, reviewing, or removing them. Tags accumulate across campaigns, agencies, and platform trials. Triggers are copied and modified without testing. Data layers are partially implemented and left unfinished. Consent integrations are wired incorrectly, and nobody notices because the platform still reports conversions.

The result, in most containers we look at, is a system that is actively producing incorrect data — sometimes in ways that flatter the numbers (duplicate conversions inflating ROAS) and sometimes in ways that create legal exposure (tracking pixels firing without valid consent).

A GTM audit is a systematic review of every tag, trigger, and variable in a container against what the business actually needs it to do. It is, in Spec-Driven Development (SDD) terms, a gap analysis between the spec (what tracking was supposed to accomplish) and the current state (what the container is actually doing). You cannot write a credible Blueprint for improving a marketing infrastructure until you've done this honestly.

The five failure categories we find every time

These aren't edge cases. These are the structural problems present in the majority of containers we've reviewed. The distribution varies — some containers have one category in severe form, others have all five at low severity — but the categories themselves are consistent.

Tags firing on the wrong pages

Conversion tags attached to sitewide "All Pages" triggers — or triggers scoped to URLs that were restructured two years ago. The tag fires. The conversion records. The data is wrong. This is the most common failure mode and the hardest to catch without a full container audit because it looks fine in the interface.

SIGNAL: Conversion rate looks suspiciously high. Cost-per-acquisition is implausibly low.
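One quick check for stale scoping is to test the trigger's URL condition against the paths the site actually uses today. A minimal sketch, where the pattern and paths are illustrative placeholders, not from any real container:

```javascript
// A conversion trigger scoped to a pre-restructure URL pattern
// silently over-matches or stops matching after the site changes.
const triggerPattern = /^\/checkout\/thank-you$/; // as configured years ago

const currentPaths = [
  '/order/confirmation',   // the confirmation path after the restructure
  '/checkout/thank-you',   // legacy path, may only exist as a redirect
  '/',
];

// Which live paths does the trigger actually match?
const matches = currentPaths.filter((p) => triggerPattern.test(p));
// If the real confirmation path is absent from `matches`, the tag
// is firing on the wrong page, or not firing at all.
```

The same check works in reverse: run the pattern against a crawl of live URLs and flag any conversion trigger that matches more than the one page it was scoped for.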

Duplicate conversion events

A Google Ads conversion tag and a GA4 goal measuring the same action, both imported into the same campaign. Or a pixel firing twice — once from GTM, once hardcoded in the page template — because the developer who added it didn't know the tag manager already covered it. Duplicate events inflate reported conversions and corrupt every attribution model downstream.

SIGNAL: Platform-reported conversions significantly exceed CRM or checkout-confirmed conversions.
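A common guard against double-firing is deduplicating on a transaction ID before the event ever reaches the data layer. A sketch of the idea; the `pushConversion` helper is hypothetical, not a GTM API:

```javascript
// Deduplicate conversion pushes by transaction ID so a re-fired
// trigger or a hardcoded pixel cannot record the same sale twice.
const dataLayer = [];
const seenTransactions = new Set();

function pushConversion(transactionId, value, currency) {
  if (seenTransactions.has(transactionId)) {
    return false; // already recorded: duplicate suppressed
  }
  seenTransactions.add(transactionId);
  dataLayer.push({
    event: 'purchase',
    transaction_id: transactionId,
    value: value,
    currency: currency,
  });
  return true;
}

pushConversion('T1001', 119.99, 'AUD'); // first push: recorded
pushConversion('T1001', 119.99, 'AUD'); // duplicate: ignored
```

GA4 applies similar logic server-side when a `transaction_id` is supplied, which is one more reason an incomplete data layer (below) compounds the duplicate problem.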

Broken or missing data layer

The data layer is the contract between the website and the tag manager. When it's incomplete — missing transaction IDs, product data, user signals — every downstream tag that depends on it is working from guesswork or defaults. Most GTM containers we audit have a data layer that was partially implemented and never finished. The tags that depend on it fail silently.

SIGNAL: Enhanced ecommerce data is absent or incomplete. Dynamic remarketing audiences are thin.
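For reference, a complete purchase push looks something like this. Field names follow the GA4 ecommerce convention; the values are placeholders, and the completeness check at the end is the kind of assertion an audit can run against captured pushes:

```javascript
// A purchase push carrying the fields downstream tags depend on.
// Field names follow the GA4 ecommerce schema; values are placeholders.
const dataLayer = [];
dataLayer.push({
  event: 'purchase',
  ecommerce: {
    transaction_id: 'T12345', // required for deduplication
    value: 149.50,            // total revenue, read by conversion tags
    currency: 'AUD',
    items: [
      { item_id: 'SKU-001', item_name: 'Example product', price: 149.50, quantity: 1 },
    ],
  },
});

// A completeness check an audit might run against captured pushes:
const purchase = dataLayer.find((e) => e.event === 'purchase');
const missing = ['transaction_id', 'value', 'currency', 'items']
  .filter((key) => purchase.ecommerce[key] === undefined);
console.log(missing); // prints [] when the contract is satisfied
```

A partially implemented data layer is usually visible here immediately: the event fires, but `items` is empty or `transaction_id` is undefined, and every tag reading those fields degrades silently.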

Abandoned tags from past campaigns and agencies

Every agency, every campaign platform, every retargeting vendor leaves a tag behind. The campaign ended. The agency relationship ended. The tag is still there, firing on every page view, sending data to a platform nobody monitors, potentially to a vendor whose data processing agreements have since expired. A mature container accumulates these over years. We've audited containers with 80+ tags where 30% were firing to platforms the current team had never heard of.

SIGNAL: GTM container has tags added by previous agencies. Nobody can explain what half of them do.

Consent not gating tags correctly

The consent banner fires. The user declines. The marketing pixels fire anyway — because the consent mode integration was never properly implemented in GTM, or because the tag's consent check was bypassed "temporarily" during a campaign launch and never fixed. This is a compliance exposure, not just a data quality issue. Under GDPR and equivalents, firing tracking pixels without consent is a regulatory event.

SIGNAL: Tags fire immediately on page load before consent interaction. Consent mode signals not visible in GA4 debug.
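The shape of a correct Consent Mode integration: defaults are set to denied before any tag can fire, and only an explicit user choice updates them. A sketch using the standard `gtag('consent', ...)` calls; the banner callback name is hypothetical:

```javascript
// Consent Mode sketch: deny by default, update only on user choice.
// The dataLayer/gtag bootstrap mirrors Google's standard snippet.
const dataLayer = [];
function gtag() { dataLayer.push(arguments); }

// Must run before any tag fires, ideally in the page <head>.
gtag('consent', 'default', {
  ad_storage: 'denied',
  ad_user_data: 'denied',
  ad_personalization: 'denied',
  analytics_storage: 'denied',
});

// Hypothetical banner callback: only an explicit accept flips the signals.
function onConsentBannerChoice(accepted) {
  gtag('consent', 'update', {
    ad_storage: accepted ? 'granted' : 'denied',
    ad_user_data: accepted ? 'granted' : 'denied',
    ad_personalization: accepted ? 'granted' : 'denied',
    analytics_storage: accepted ? 'granted' : 'denied',
  });
}

onConsentBannerChoice(false); // user declined: signals stay denied
```

Consent Mode only gates Google tags natively. Every non-Google tag in the container still needs its own consent check (GTM's built-in consent settings or a blocking trigger), which is exactly the gap behind the "configured for Google tags but not the 12 other vendors" failure described below.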

Why the consent problem is different from the others

The first four failure categories are data quality problems. Bad data leads to bad decisions — wasteful spend, misallocated budget, misleading reports. They're serious, but they're bounded. You can fix them, clean up the data, and move forward.

Consent failures are different in kind. Under GDPR, the UK GDPR, and equivalents in Australia (Privacy Act), Canada (PIPEDA/CPPA), and elsewhere, firing tracking pixels on users who have not consented — or whose consent mechanism was non-functional — is a regulatory event. It doesn't matter that the failure was accidental. The tag fired. The data was collected. The legal exposure exists.

The specific failure mode we see most often is this: a consent management platform is installed, the consent banner displays, and the business believes it is compliant. But the GTM consent mode integration was never properly configured — or was configured for Google tags but not for the 12 other vendor tags in the container. The banner is theatre. The tracking is happening regardless of user choice.

This is why the consent layer is the first thing we look at in an Infrastructure Audit. The compliance exposure needs to be understood before any optimisation work begins.

A level of audit this deep requires access — read-only access to your GTM container, your analytics, and your consent infrastructure. That's a different engagement from the automated diagnostic, which tests from the outside. The diagnostic surfaces the symptoms. The Infrastructure Audit opens the container and confirms the causes.

What a proper GTM audit actually covers

An audit is not a skim of the container looking for obvious problems. It's a structured review against a defined scope. At minimum, it needs to cover:

All tags — what they are, what triggers them, where they fire
Trigger logic — correct scoping, collision risks, timing issues
Data layer implementation — completeness, accuracy, consistency
Consent mode configuration — is blocking actually blocking?
Conversion mapping — tags vs platform-reported conversions vs CRM actuals
Ownership — who added each tag, when, and whether the relationship is still active

On ownership: Every tag in a container was added by someone, for a reason, at a point in time. Knowing that history is part of the audit. Tags from agencies whose contracts have ended, from platforms the business no longer uses, or from campaigns that wrapped years ago need to be identified, understood, and — if they have no current purpose — removed. An audit is also a container archaeology exercise.
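The inventory step can be bootstrapped from the container's JSON export (Admin > Export Container in GTM). A sketch assuming the standard export structure (`containerVersion.tag` and `.trigger`), with an inline sample standing in for a real export file:

```javascript
// Build a tag inventory from a GTM container export: each tag,
// its type, and the named triggers it fires on.
const containerExport = {
  containerVersion: {
    tag: [
      { name: 'GA4 - Purchase', type: 'gaawe', firingTriggerId: ['7'] },
      { name: 'OldVendor Pixel', type: 'html', firingTriggerId: ['2'] },
    ],
    trigger: [
      { triggerId: '2', name: 'All Pages', type: 'pageview' },
      { triggerId: '7', name: 'Purchase Confirmation', type: 'customEvent' },
    ],
  },
};

function inventory(exported) {
  const triggers = new Map(
    (exported.containerVersion.trigger || []).map((t) => [t.triggerId, t.name])
  );
  return (exported.containerVersion.tag || []).map((tag) => ({
    name: tag.name,
    type: tag.type,
    // Built-in triggers are not listed in the export, hence the fallback.
    firesOn: (tag.firingTriggerId || []).map((id) => triggers.get(id) || `unknown (${id})`),
  }));
}

const report = inventory(containerExport);
```

The script produces the raw list; the audit work is in the last column the script cannot fill: who added each tag, and whether anyone still owns it.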

What comes after the audit

The audit produces a findings document. The findings document is not a to-do list — it's a diagnostic baseline. It tells you what the container is doing, where it diverges from what it should be doing, and what the consequence of each divergence is.

From that baseline, remediation work can be scoped. This is where Spec-Driven Development becomes directly relevant: before any tag is fixed, reconfigured, or removed, there needs to be an agreed description of what it's supposed to do and how we'll verify it's doing that correctly. "Fix the conversion tag" is not a spec. "Ensure the purchase conversion tag fires exactly once per confirmed transaction, passes order_id and revenue values from the data layer, is gated by ad_storage consent, and maps to Google Ads conversion action #XXXXX" is a spec.
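A spec written that way can be encoded directly as checks against captured events, so verification is mechanical rather than a matter of eyeballing the debug panel. A sketch; the captured-event shape is an assumption, not a GTM format:

```javascript
// Verify captured purchase events against the spec: exactly one event
// per transaction, with order_id, numeric revenue, and consent present.
function verifyPurchaseSpec(events) {
  const failures = [];
  const counts = new Map();
  for (const e of events) {
    if (!e.order_id) failures.push('missing order_id');
    if (typeof e.revenue !== 'number') failures.push(`bad revenue for ${e.order_id}`);
    if (!e.consent || e.consent.ad_storage !== 'granted') {
      failures.push(`fired without ad_storage consent: ${e.order_id}`);
    }
    counts.set(e.order_id, (counts.get(e.order_id) || 0) + 1);
  }
  for (const [id, n] of counts) {
    if (n > 1) failures.push(`duplicate firing for ${id} (${n}x)`);
  }
  return failures;
}

const captured = [
  { order_id: 'A1', revenue: 89.0, consent: { ad_storage: 'granted' } },
  { order_id: 'A1', revenue: 89.0, consent: { ad_storage: 'granted' } }, // duplicate
];
const failures = verifyPurchaseSpec(captured);
```

Run against events captured in GTM preview mode or a tag monitor, this turns "is the fix working?" into a pass/fail answer.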

The discipline of writing that down before touching the container is what separates a remediation that sticks from a remediation that introduces new problems — which, in our experience, is what happens when GTM is treated as a place where you just "make quick changes."

How to get a GTM audit

1. Run the free diagnostic

The automated diagnostic tests your tracking infrastructure from the outside — no access required. It surfaces the symptoms: attribution inflation, consent failures, tracking gaps. Five minutes.

2. Book an Infrastructure Audit

If the diagnostic surfaces tracking issues worth investigating, the Infrastructure Audit goes inside. You grant read-only access to your GTM container, analytics, and consent infrastructure. We scope the engagement based on what the diagnostic found — not a fixed checklist. From $750 (5 hours minimum).

3. Remediate — or hand off to your team

The audit produces a findings baseline. From there, remediation can be scoped as an Architect engagement — or you take the findings and execute with your own team. The data is yours either way.

Start with what the diagnostic can see

The free diagnostic checks your tracking infrastructure, consent posture, and attribution setup from the outside. If it surfaces problems worth opening the container for, we'll scope that conversation together.