<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="/feeds/rss-style.xsl"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <title>Archie</title>
        <link>https://archie6.com</link>
        <description>Archie's Blog</description>
        <lastBuildDate>Wed, 15 Apr 2026 07:27:02 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>Astro Chiri Feed Generator</generator>
        <language>en-US</language>
        <copyright>Copyright © 2026 Archie</copyright>
        <atom:link href="https://archie6.com/rss.xml" rel="self" type="application/rss+xml"/>
        <item>
            <title><![CDATA[When Browser Tracking Stops Being Reliable: Building Server-side Tracking with sGTM]]></title>
            <link>https://archie6.com/sgtm-architecture</link>
            <guid isPermaLink="false">https://archie6.com/sgtm-architecture</guid>
            <pubDate>Wed, 25 Mar 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[After the Shopify Headless migration, browser-side tracking broke at scale. We put together a server-side tracking setup with sGTM (Server-side Google Tag Manager) to patch it, covering GA4, Google Ad...]]></description>
            <content:encoded><![CDATA[<p>After the Shopify Headless migration, browser-side tracking broke at scale. We put together a server-side tracking setup with sGTM (Server-side Google Tag Manager) to patch it, covering GA4, Google Ads, Meta, Reddit, and X. This is the full postmortem—not just what the final solution looks like, but more importantly the walls we hit along the way. It kept feeling like the next step should be simple, and then we would run into another undocumented wall. If you’re also evaluating sGTM, or already wrestling with server-side tracking, some of these detours might save you time.</p>
<p>Let’s start with the full architecture. There are only three layers to watch: the entry point (Worker), the routing hub (sGTM), and the fallback layer (Webhook + Firestore). We’ll unpack them one by one below.</p>
<p>If you’re short on time, jump straight to:</p>
<ul>
<li><a href="#why-browser-tracking-stopped-being-enough">Why Browser Tracking Stopped Being Enough</a> — the background and our three client-side attempts</li>
<li><a href="#architecture-overview-and-the-key-decision">Architecture Overview and the Key Decision</a> — why we picked sGTM</li>
<li><a href="#worker-outsmarting-ad-blockers">Worker</a> — how we got around ad blockers</li>
<li><a href="#sgtm-one-event-in-five-platforms-out">sGTM</a> — how one event gets split five ways, and how ecommerce event data gets standardized</li>
<li><a href="#browser-signal-bridging-cookies-and-click-ids">Browser Signal Bridging</a> — how Cookies and Click IDs make it to the server</li>
<li><a href="#firestore-deduplication">Firestore Deduplication</a> — how Pixel + Webhook avoid double reporting</li>
<li><a href="#sgtm-sandbox-the-two-limits-that-hurt-dev-experience-the-most">sGTM Sandbox Pitfalls</a> — this section may be why you came here</li>
<li><a href="#if-i-had-to-do-it-again">If I Had to Do It Again</a> — what I would pick now</li>
</ul>
<hr />
<h2>Why Browser Tracking Stopped Being Enough</h2>
<p>If you do not work on media buying day to day, you can reduce the problem to one sentence: after a user clicks in from an ad, their on-site actions—browsing, adding to cart, checking out, completing payment—need to be sent back to ad and analytics platforms. Those platforms need the signals for attribution, optimization, and dynamic remarketing. The most obvious example: if someone clicks a Google Ads ad and buys something, but that conversion never gets sent back, Ads has no idea the ad worked. All downstream optimization goes blind.</p>
<p>There are two reporting paths. Client-side tracking runs JavaScript in the user’s browser (basically each platform’s Pixel), and the browser sends events directly to the ad platform. Server-side tracking has your own server call each platform’s Conversions API (CAPI) instead. The client-side path has one big advantage: it sees the full browser context—Cookies, Click IDs, User-Agent—but it is easy for ad blockers to kill, and it also gets weakened by browser privacy policies such as Safari ITP and Chrome’s third-party Cookie restrictions. The server-side path is harder to block, but it is missing those attribution signals by default, so you have to bridge them back manually. Mature setups usually run both paths at the same time so they can cover for each other.</p>
<p>Once that premise is clear, the three stages we went through make sense. The first three attempts all stayed on the client side, and every one of them ran into the same wall.</p>
<p><strong>Stage 1: relying on Shopify’s built-in tracking Apps.</strong> Back when the Shopify Storefront and the Gatsby Headless storefront were running side by side, tracking depended entirely on the official Apps provided by each ad platform inside Shopify. Gatsby had no instrumentation at all—traffic came in, but only the conversions that passed through the Storefront side could be reported.</p>
<p><strong>Stage 2: putting client-side Pixels into Gatsby.</strong> Once it became obvious Gatsby was a tracking blind spot, we started instrumenting it: based on the event IDs configured in the Shopify App backend, the Gatsby frontend injected the corresponding reporting code. The problem was that the whole setup was loose. Each platform had its own implementation, and ecommerce event payloads like <code>view_item</code> and <code>add_to_cart</code> were not standardized. Maintenance cost shot up fast.</p>
<p><strong>Stage 3: Web GTM + Google Tag Gateway.</strong> We wanted GTM to manage all tracking code in one place, while enabling Google Tag Gateway (a server-side proxy) to get around ad blockers. But Gateway only proxies Google’s own requests—Meta, Reddit, and X Pixels still go to third-party domains and still get blocked. In practice that only solved two-fifths of the problem.</p>
<p>Those three attempts separately improved coverage, unification, and block avoidance, but they never fixed the root issue: <strong>browser signals are inherently unstable.</strong> Ad blockers kill requests outright (with uBlock Origin’s default rules, pure client-side data loss is typically 15-30%), Safari ITP compresses Cookie lifetime, and Shopify Checkout locks Custom Pixels into a sandboxed iframe where attribution Cookies are no longer readable. That is the ceiling of browser tracking—and that is where the move to server-side starts.</p>
<hr />
<h2>Architecture Overview and the Key Decision</h2>
<p>Once we decided to move server-side, the first question was: how exactly?</p>
<p>We were running ads on four platforms while using GA4 for analytics, and each platform’s CAPI format and deduplication logic is different. Shopify’s official ad Apps already come with their own server-side tracking (conversions reported through Webhooks). We had used them in the first stage, so we already knew the server-side path worked—but every App ran its own little kingdom, and the data format plus ecommerce payloads never lined up. If we had built a separate server-side setup for every platform, we would just have recreated that fragmentation ourselves: four CAPI formats, four deduplication schemes, four pipelines to maintain.</p>
<p>So what we needed to patch was not a single platform’s Pixel. We needed one unified server-side pipeline: first keep the request alive on the way in, then normalize ecommerce data and attribution signals in the middle, then fan the event out downstream in each platform’s required format.</p>
<h3>Why sGTM as the Routing Hub</h3>
<p>We were already using Web GTM, so sGTM felt like the most natural extension: client-side events can be sent directly into a Server Container, without rebuilding the entire event model from scratch. GTM’s UI also makes it easier for non-technical teammates to maintain Tag configuration, instead of asking engineering every time tracking needs to change.</p>
<p>At the data-flow level, Pixel is the primary path—it has the full browser context, so attribution quality is best there. Shopify’s <code>orders/paid</code> Webhook has no browser signals by nature, so it only exists as a fallback: if Pixel does not send successfully, Webhook fills the gap. How those two paths avoid double reporting is covered in the <a href="#firestore-deduplication">Firestore Deduplication</a> section.</p>
<p>Each layer does its own job—if Worker fails, it does not stop sGTM from handling Webhooks; if Firestore has issues, it does not block the main Pixel path—so failures do not cascade across the whole pipeline. We will break down each layer below, starting with Worker: the request has to reach sGTM alive first, otherwise nothing downstream matters.</p>
<hr />
<h2>Worker: Outsmarting Ad Blockers</h2>
<p>Worker is basically a reverse proxy that sits between the browser and Cloud Run (sGTM). It has one job: make tracking requests look like something other than tracking requests.</p>
<p>Ad blockers work by pattern matching. EasyList, which uBlock Origin loads by default, mostly blocks ad-related paths such as <code>/pagead/</code> and <code>googlesyndication.com</code>. EasyPrivacy focuses on tracking, so it blocks things like <code>/g/collect</code>, <code>/gtag/js</code>, <code>google-analytics.com</code>, and parameter combinations such as <code>cx=c&amp;gtm</code>. When we were investigating this, we kept both rule sets open side by side, working out which paths get matched and which query-parameter combinations trigger blocking, then designing aliases one by one to get around them. For example, EasyPrivacy has a rule <code>||googletagmanager.com/gtag/js</code>, which directly blocks the gtag loader request. Another rule matches <code>&amp;cx=c&amp;gtm=</code>, which shows up in GA4 collection requests. If you do not handle both, GA4 collection and Google Ads conversion measurement both break.</p>
<p>Worker does four concrete things:</p>
<p><strong>Path rewriting.</strong> It replaces Google’s fixed paths like <code>/g/collect</code> and <code>/gtag/js</code> with meaningless abbreviations. In production there are 20+ path aliases, covering GA4 collection, Google Ads conversion measurement, Consent Mode, gtag destination, and more. Miss even one, and the corresponding feature gets blocked.</p>
<p><strong>Parameter rewriting.</strong> EasyPrivacy also matches query parameters like <code>cx=c&amp;gtm</code>. Worker rewrites them into aliases before forwarding, and sGTM restores them on its side.</p>
<p><strong>Runtime JS replacement.</strong> GTM’s JavaScript hardcodes <code>www.googletagmanager.com</code> and a bunch of Google paths. Before returning that JS, Worker replaces the domain, path names, and parameter names with our own aliases.</p>
<pre><code class="language-javascript">// Path + parameter rewriting (condensed)
// PATH_MAP maps a public alias (what the browser requests) to the
// real upstream Google path that Worker forwards it to.
const PATH_MAP = {
  '/main.js': '/gtm.js?id=GTM-XXXXXXX',
  '/d/c': '/g/collect',
  '/a/s': '/gtag/js',
  '/x/pa': '/pagead/viewthroughconversion'
}

// rewriteJS goes the other direction: before returning Google's JS to
// the browser, swap the real paths, parameter names, and hardcoded
// domain for our aliases so later requests dodge the filter lists.
function rewriteJS(body) {
  return body
    .replace(/\/g\/collect/g, '/d/c')
    .replace(/\/gtag\/js/g, '/a/s')
    .replace(/cx=c&amp;gtm/g, '_cx=c&amp;_g')
    .replace(/www\.googletagmanager\.com/g, 'tracking.example.com')
}
</code></pre>
<p><strong>Header passthrough.</strong> Worker also passes browser context through to sGTM. Without it, the server sees incomplete data:</p>
<ul>
<li><code>CF-Connecting-IP</code> -&gt; <code>X-Forwarded-For</code> (the real user IP)</li>
<li><code>Sec-CH-UA-*</code> Client Hints (device and browser info)</li>
<li>bidirectional <code>Cookie</code> passthrough (<code>_ga</code>, <code>_fbp</code>, and other first-party Cookies)</li>
<li><code>X-Country-Code</code> (the user’s country, used by the Enricher to match Merchant Center feed language)</li>
</ul>
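<p>A minimal sketch of that passthrough, assuming a Cloudflare Worker; the helper name and the <code>country</code> parameter are ours (in a Worker, country would come from <code>request.cf.country</code>), not part of any API:</p>
<pre><code class="language-javascript">// Sketch: copy browser context onto the request headed for sGTM.
// Header names follow the list above; everything else is illustrative.
function buildUpstreamHeaders(incoming, country) {
  const out = new Headers(incoming);
  // Cloudflare puts the real client IP in CF-Connecting-IP;
  // sGTM reads it from X-Forwarded-For.
  const ip = incoming.get('CF-Connecting-IP');
  if (ip) out.set('X-Forwarded-For', ip);
  if (country) out.set('X-Country-Code', country);
  // Sec-CH-UA-* Client Hints and first-party Cookies ride along
  // unchanged because we started from a copy of the incoming headers.
  return out;
}
</code></pre>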
<p>One detail matters here: <strong>not every path can be proxied into sGTM.</strong> Some Google Ads side paths—conversion measurement, CCM (Consent Mode-related Cookie management), remarketing pixels, and so on—are not inbound endpoints of the sGTM container. If you blindly route them into sGTM, they will just 400. Worker has to recognize those paths and send them back to Google’s original upstream hosts (<code>googleadservices.com</code>, <code>googlesyndication.com</code>, and so on). Worker also performs CORS origin validation; requests from non-whitelisted domains get a straight 403.</p>
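<p>The routing decision can be sketched like this; the hostnames are the Google endpoints named above, but the path prefixes, the whitelist, and the sGTM hostname are illustrative stand-ins (the real alias table is much larger):</p>
<pre><code class="language-javascript">// Sketch: decide which upstream host a proxied path belongs to.
const SGTM_UPSTREAM = 'https://sgtm.example.com';
const GOOGLE_UPSTREAMS = {
  '/pagead/': 'https://googleadservices.com',
  '/pcs/': 'https://googlesyndication.com'
};

function pickUpstream(path) {
  // Google Ads side paths are not sGTM inbound endpoints; sending them
  // into the container just returns 400, so route them back to Google.
  for (const prefix of Object.keys(GOOGLE_UPSTREAMS)) {
    if (path.startsWith(prefix)) return GOOGLE_UPSTREAMS[prefix];
  }
  return SGTM_UPSTREAM; // everything else goes into the container
}

const ALLOWED_ORIGINS = ['https://archie6.com'];

function corsAllowed(origin) {
  return ALLOWED_ORIGINS.includes(origin); // otherwise respond 403
}
</code></pre>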
<p>In live testing with uBlock Origin’s default rule set (EasyList + EasyPrivacy), every rewritten tracking request passed normally. Looking at GA4, Safari and Chrome funnel conversion rates are basically in line with each other (Safari 0.72% vs Chrome 0.56%), which suggests Safari ITP’s attribution damage was also successfully bridged through the server-side path.</p>
<hr />
<h2>sGTM: One Event In, Five Platforms Out</h2>
<p>sGTM runs on GCP Cloud Run. Once the GA4 Client receives an event, it first passes through the Items JSON Enricher for normalization, and then each platform Tag consumes the same event payload independently—GA4 Tag sends analytics events back, Google Ads Tag reports conversion plus cart details, and Meta / Reddit / X Tags each call their own CAPI. Each Tag fires on its own and does not depend on the others.</p>
<p>But for the event to fan out correctly, the ecommerce data itself has to line up first.</p>
<h3>Standardizing Ecommerce Event Data</h3>
<p>How messy was it? The <code>item_id</code> emitted by Shopify’s official tracking Apps was an internal ID string, and it did not match the Merchant Center feed offer ID at all—so Google Ads dynamic remarketing could not pull the correct product image and price. GA4 had a different problem: the same product would show up under multiple localized names, so Ecommerce reports split one product into multiple rows and the analysis data became fragmented.</p>
<p>Meta CAPI also needs <code>content_id</code> to match Catalog data if you want DPA to work. So ecommerce event data had to be standardized on two layers:</p>
<p><strong>First layer: the front-end Pixel.</strong> Inside the Custom Pixel, there is a <code>PRODUCT_NAMES</code> dictionary that maps Shopify SKU to a canonical English product name. That keeps <code>item_name</code> consistent at the source, instead of mixing localized names.</p>
<p><strong>Second layer: the Items JSON Enricher in sGTM.</strong> It parses the <code>items_json</code> string back into an <code>items</code> array, validates that <code>item_id</code> = Shopify SKU = Merchant Center offer ID, and fills <code>aw_feed_country</code> plus <code>aw_feed_language</code> for Google Ads dynamic remarketing.</p>
<p>That way, no matter where the event goes next, every platform consumes the same cleaned-up ecommerce payload.</p>
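<p>The Enricher’s logic, sketched in plain JavaScript for readability. The real sGTM template has to go through sandbox APIs instead (<code>require('JSON')</code>, <code>require('makeNumber')</code>); the field names follow the text above, while the country-to-language mapping here is a made-up placeholder:</p>
<pre><code class="language-javascript">// Sketch: parse items_json back into an items array and validate it.
function enrichItems(itemsJson, countryCode) {
  const items = JSON.parse(itemsJson);
  const cleaned = items.map((item) => ({
    item_id: String(item.item_id),   // must equal Shopify SKU = MC offer ID
    item_name: item.item_name,       // canonical English name from PRODUCT_NAMES
    price: Number(item.price),       // sandbox equivalent: require('makeNumber')
    quantity: Number(item.quantity || 1)
  }));
  return {
    items: cleaned,
    aw_feed_country: countryCode,    // for Ads dynamic remarketing
    aw_feed_language: countryCode === 'JP' ? 'ja' : 'en' // illustrative mapping
  };
}
</code></pre>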
<h3>Integrating Platform CAPIs</h3>
<table>
<thead>
<tr>
<th>Platform</th>
<th>Event scope</th>
<th>Deduplication</th>
</tr>
</thead>
<tbody>
<tr>
<td>GA4</td>
<td>Funnel events (Purchase stays browser-primary)</td>
<td>event_id</td>
</tr>
<tr>
<td>Google Ads</td>
<td>Purchase + cart line items</td>
<td>transaction_id</td>
</tr>
<tr>
<td>Meta</td>
<td>Purchase + funnel events</td>
<td>48h event_id</td>
</tr>
<tr>
<td>Reddit</td>
<td>Full funnel + download_click</td>
<td>event_id</td>
</tr>
<tr>
<td>X</td>
<td>Full funnel + download_click</td>
<td>event_id</td>
</tr>
</tbody>
</table>
<p>The clearest proof is in Google Ads. The observed conversion rate of the sGTM server-side Purchase Tag stays stable at 95-100%, while Purchase conversions imported through GA4 in the same period were almost entirely being filled in by Google’s modeling (observed rate 0-8%). In plain terms: the server-side path was sending conversion data directly into Google Ads, instead of making the platform guess.</p>
<p>Some walls we hit:</p>
<ul>
<li>Reddit CAPI requires <code>eventType</code> to be capitalized as <code>"Purchase"</code>. Lowercase <code>purchase</code> gets silently ignored—no error, no logs.</li>
<li>X CAPI is now signed directly inside the sGTM template with OAuth 1.0a HMAC-SHA256 instead of going through an external proxy. If credentials or signing parameters are off, the request just fails.</li>
<li>For Google Ads purchase conversions, if you want to report cart line items, you must enable <code>enableProductReporting</code> on the Tag and attach Items JSON Enricher as the setupTag.</li>
</ul>
<h3>The Sandbox Limits We Hit While Writing Tags</h3>
<p>Items JSON Enricher needs to parse JSON, walk arrays, and do type conversion. That sounds basic, but in our sGTM template setup, nearly every step ran into something:</p>
<ul>
<li><strong>The standard JS global surface is incomplete.</strong> If <code>parseFloat</code> is not available, you have to switch to <code>require('makeNumber')</code>. We also hit a real <code>String.prototype.charCodeAt()</code> compatibility issue in production, and ended up rewriting that logic with <code>trim()</code> or <code>charAt()</code> + <code>indexOf()</code>.</li>
<li><strong><code>addEventData</code> is now part of the main path.</strong> The current live Items JSON Enricher uses <code>addEventData</code> inside the Tag template to write back <code>items</code>, <code>ecommerce_items</code>, <code>aw_feed_country</code>, and <code>aw_feed_language</code>. So the real pitfall here is not “you cannot use it,” but that setupTag execution order and field sources have to line up.</li>
</ul>
<p>Those little constraints piled up into a two-day detour for an Enricher that should have taken half a day. The sandbox section later gets even more ridiculous.</p>
<hr />
<h2>Browser Signal Bridging: Cookies and Click IDs</h2>
<p>sGTM solves the “who should receive the event” problem, but platforms still need browser-side Cookies and Click IDs for attribution. The annoying part is that the purchase funnel is not one continuous chain. It gets cut into two segments at the Gatsby -&gt; Shopify Checkout handoff: the first half lives on our own domain, the second half lives on Shopify’s checkout domain.</p>
<p>The GTM JS running on Gatsby handles the first half: <code>pageview</code>, <code>view_item</code>, <code>add_to_cart</code>, and other on-site actions go straight from the browser to Worker and then into sGTM. Once the user reaches Shopify Checkout, that chain breaks, and the second half switches over to Custom Pixel. Cookie Bridge and Click ID Bridge are both there to reconnect those two halves.</p>
<p>The Custom Pixel takes over five key checkout milestones (from <code>checkout_started</code> to <code>checkout_completed</code>). Every event carries the full ecommerce payload plus <code>user_data</code> (email, phone, address), which is the foundation for Google Enhanced Conversions and Meta Advanced Matching.</p>
<p><code>sendBeacon</code> + <code>keepalive: true</code> is now used mainly for the Cookie Bridge <code>/store-cookies</code> POST, not for the main Purchase event itself. The primary Purchase path is still dataLayer -&gt; GTM -&gt; Worker -&gt; sGTM. <code>sendBeacon</code> is there to push <code>_fbp</code>, <code>_fbc</code>, <code>twclid</code>, <code>rdt_cid</code>, and similar context to the server as reliably as possible before the page unloads.</p>
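<p>A sketch of that unload-safe POST, assuming the <code>/store-cookies</code> endpoint described above; the function names and payload shape are ours:</p>
<pre><code class="language-javascript">// Sketch: pack attribution cookies for the Cookie Bridge POST.
function buildCookiePayload(cookies, cartToken) {
  return JSON.stringify({
    cart_token: cartToken,
    fbp: cookies._fbp || null,
    fbc: cookies._fbc || null,
    twclid: cookies.twclid || null,
    rdt_cid: cookies.rdt_cid || null
  });
}

// In the browser: sendBeacon survives page unload; a keepalive fetch
// is the fallback when sendBeacon is unavailable.
function shipCookies(payload) {
  if (typeof navigator !== 'undefined') {
    if (navigator.sendBeacon) {
      navigator.sendBeacon('/store-cookies', payload);
      return;
    }
  }
  fetch('/store-cookies', { method: 'POST', body: payload, keepalive: true });
}
</code></pre>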
<h3>Cookie Bridge: Salvaging Data from the Sandbox</h3>
<p>Shopify Custom Pixel runs inside a sandbox, isolated from the main page, so <strong>it cannot directly read attribution Cookies from each platform</strong>—Meta’s <code>_fbp</code> / <code>_fbc</code>, X’s <code>twclid</code>, Reddit’s <code>rdt_cid</code>, all of them are inaccessible. Without those, CAPI events cannot be tied back to the browser-side click, and attribution breaks.</p>
<p>Fortunately, Shopify exposes an async <code>browser.cookie.get()</code> API. Pixel uses it to pull those Cookies one by one and sends them out through two channels:</p>
<p><strong>Channel A</strong>: the Cookie values get packed into a <code>meta_cookies</code> field and travel with the Purchase event through GTM -&gt; Worker -&gt; sGTM. This is the normal path.</p>
<p><strong>Channel B</strong>: a separate copy gets written into Firestore, so if Pixel fails to send, Webhook can recover the Cookies from there.</p>
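<p>Channel A’s collection step, sketched with the cookie getter as a parameter so the logic stands alone; in the real Custom Pixel, <code>getCookie</code> would be Shopify’s async <code>browser.cookie.get</code>:</p>
<pre><code class="language-javascript">// Sketch: pull platform attribution cookies one by one from inside
// the Pixel sandbox and collect whatever exists.
async function collectMetaCookies(getCookie) {
  const names = ['_fbp', '_fbc', 'twclid', 'rdt_cid'];
  const meta_cookies = {};
  for (const name of names) {
    const value = await getCookie(name);
    if (value) meta_cookies[name] = value;
  }
  return meta_cookies; // attached to the Purchase event payload
}
</code></pre>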
<h3>Click ID Bridge: From URL to Cart</h3>
<p>Cookie Bridge solves Cookie handoff from Pixel to Webhook. Click IDs have another layer of trouble: when a user lands from an ad, URL parameters like <code>gclid</code>, <code>rdt_cid</code>, and <code>twclid</code> have to be captured on the main site first, then carried across the checkout boundary into sGTM.</p>
<p>The current approach is: CF Worker serves rewritten GTM JS, Web GTM reads the URL parameters in the browser, stores them in first-party Cookies, and forwards them to sGTM as GA4 event params. The whole chain does not depend on any Shopify-side API.</p>
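<p>The capture step can be sketched as a small parser; the parameter names follow the text, and what happens to the result (first-party cookie, GA4 event param) is handled by GTM configuration rather than this function:</p>
<pre><code class="language-javascript">// Sketch: pick Click IDs off the landing URL.
const CLICK_ID_PARAMS = ['gclid', 'twclid', 'rdt_cid'];

function extractClickIds(url) {
  const params = new URL(url).searchParams;
  const found = {};
  for (const name of CLICK_ID_PARAMS) {
    const value = params.get(name);
    if (value) found[name] = value;
  }
  return found; // e.g. stored in first-party cookies by a GTM tag
}
</code></pre>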
<p>We tried another route early on: writing Click IDs into <code>note_attributes</code> through Shopify’s <code>/cart/update.js</code>, so Webhook could carry them into the sGTM Webhook Client. But <code>onekey.so</code> is a Headless frontend and the <code>/cart/update.js</code> environment was not stable there. That route only survives now as a fallback inside the Webhook Client.</p>
<p>The bridging logic is not identical across platforms either: <code>twclid</code> has three fallback levels (URL / Cookie / localStorage), <code>rdt_cid</code> leans on dataLayer / first-party Cookie / URL fallback, and Google Ads relies more heavily on <code>_gcl_*</code> Cookies plus server-side recovery logic.</p>
<hr />
<h2>Firestore: Deduplication</h2>
<p>Why are there two paths in the first place? Shopify’s <code>orders/paid</code> Webhook is emitted from the server, so it naturally has no browser context—no <code>_fbp</code> / <code>_fbc</code> (needed by Meta attribution), no <code>gclid</code> (needed by Google Ads attribution), and not even the real user User-Agent or IP. Cookie Bridge can recover part of that, but it is still second-hand data. The Pixel path is where attribution quality is highest. So architecturally, Pixel is the primary path and Webhook only exists as the fallback—if Pixel does not make it, Webhook fills the gap.</p>
<p>But Shopify sends the Webhook regardless of whether Pixel succeeded. Custom Pixel reports almost immediately, while the Webhook arrives roughly 40 seconds later. If both paths run and nothing deduplicates them, every order gets reported twice. That time gap is what shaped the dedup design: Pixel has enough time to write a marker into Firestore first, and when Webhook arrives later it can read that marker to decide whether it still needs to report.</p>
<p>Each platform already has its own <code>event_id</code>-based deduplication (Meta uses a 48-hour window, others have similar logic), but I did not want to rely on that. The real problem with duplicate reporting is not quota—it is that it pollutes the platform’s attribution calculation and event quality scoring.</p>
<p>So dedup happens upstream in sGTM through Firestore: once the Pixel path reports successfully, it writes <code>{ reported: true }</code> into Firestore. When Webhook arrives, it checks that record first. If it exists, skip. If it does not, send through the currently enabled Fallback Tags.</p>
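<p>The Webhook-side decision, sketched with the store as a parameter; the real template goes through <code>require('Firestore')</code>, and the document path matches the <code>sgtm_purchases</code> collection described in this post:</p>
<pre><code class="language-javascript">// Sketch: should the Webhook fallback still report this order?
async function shouldWebhookReport(store, transactionId) {
  const doc = await store.read('sgtm_purchases/' + transactionId);
  if (doc) {
    if (doc.reported) return false; // Pixel already reported: skip
  }
  // Not reported yet: mark it and let the fallback Tags fire.
  await store.write('sgtm_purchases/' + transactionId, {
    reported: true,
    source: 'webhook',
    timestamp_ms: Date.now()
  });
  return true;
}
</code></pre>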
<p>Looking at GA4, the Pixel primary path has accounted for more than 99% of purchase events since launch. Pixel success rate is high enough that the Webhook fallback rarely has to fire in practice.</p>
<blockquote>
<p>In the current live configuration, this dedup layer mainly protects Google Ads, Meta, Reddit, and X Purchase fallback flows. <code>GA4 Native Purchase - Shopify Webhook</code> is currently <code>paused</code>, so GA4 purchase events still rely on the browser primary path rather than Webhook backfill.</p>
</blockquote>
<h3>The Full Journey of a Single Order</h3>
<p>Above, Worker, sGTM, signal bridging, and Firestore were explained layer by layer. Here is the full chain stitched back together. Once payment completes, two paths trigger independently: Pixel enters sGTM immediately through Worker, gets standardized by the Enricher, fans out to five platforms, and writes a dedup marker into Firestore at the same time; around 40 seconds later, the Shopify Webhook reaches sGTM and checks Firestore—if the marker exists, it skips; if not, it goes through the Google Ads, Meta, Reddit, and X fallback Tags (GA4 Purchase always stays on the Pixel primary path and does not use Webhook fallback).</p>
<h3>What Gets Stored in Firestore</h3>
<p>Two Collections:</p>
<p><strong><code>sgtm_cookies/{cart_token}</code></strong> — in the current live Cookie Store Client, retention is 30 days</p>
<table>
<thead>
<tr>
<th>Field</th>
<th>Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>fbp</code></td>
<td>string</td>
<td>Meta <code>_fbp</code></td>
</tr>
<tr>
<td><code>fbc</code></td>
<td>string</td>
<td>Meta <code>_fbc</code></td>
</tr>
<tr>
<td><code>twclid</code></td>
<td>string</td>
<td>X click ID</td>
</tr>
<tr>
<td><code>rdt_cid</code></td>
<td>string</td>
<td>Reddit click ID</td>
</tr>
<tr>
<td><code>expires_at</code></td>
<td>number</td>
<td>the current template writes a millisecond timestamp</td>
</tr>
</tbody>
</table>
<p><strong><code>sgtm_purchases/{transaction_id}</code></strong> — in the current template, retention is 90 days</p>
<table>
<thead>
<tr>
<th>Field</th>
<th>Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>reported</code></td>
<td>boolean</td>
<td>true = already reported</td>
</tr>
<tr>
<td><code>source</code></td>
<td>string</td>
<td>pixel | webhook</td>
</tr>
<tr>
<td><code>timestamp_ms</code></td>
<td>number</td>
<td>write timestamp</td>
</tr>
<tr>
<td><code>expires_at</code></td>
<td>number</td>
<td>the current template writes a millisecond timestamp</td>
</tr>
</tbody>
</table>
<p>There is also <code>sgtm_purchase_context/{transaction_id}</code>, which is used to recover browser-side session context on the Webhook fallback path.</p>
<hr />
<h2>sGTM Sandbox: The Two Limits That Hurt Dev Experience the Most</h2>
<p>The sGTM sandbox is a crippled JavaScript runtime: it looks like JS, but standard APIs are missing, permissions are enforced silently at runtime, and failures give you no feedback. We already hit some of this in the Enricher section above (<code>parseFloat</code> missing, <code>charCodeAt()</code> unavailable). Here are the two worse ones. What makes them bad is not just missing features—it is that you cannot even tell what went wrong.</p>
<p><strong>try/catch turns debugging into a black box.</strong> Instinct says try/catch is there to make things safer. In the sGTM sandbox, if code inside the try block triggers a sandbox-level abort (for example, by calling an API without the declared permission), catch does not catch it. The whole Tag just stops executing—no error, no catch, no logs. Adding try/catch actually makes the problem harder to locate, because you lose even the basic clue of “which line did execution stop on?” What we do now is the dumbest possible thing: do not use try/catch at all, insert <code>logToConsole</code> line by line, publish a version, inspect logs, narrow the range.</p>
<p><strong>Missing permissions do not raise errors. The Tag just aborts silently.</strong> This one came from a real production incident after Template #33 went live: we added <code>getCookieValues</code> calls for <code>_twclid</code>, <code>_rdt_cid</code>, and <code>rdt_cid</code>, but forgot to declare the corresponding Cookie names under <code>get_cookies</code> in the template <code>permissions</code>. The result was not an error, not even a console warning—the entire Tag silently aborted at runtime, and every live event flowing through that Tag was dropped until someone noticed the data had flatlined and started digging.</p>
<p>The lesson is blunt: <strong>code changes and permission changes have to ship together.</strong> Otherwise the worst-case outcome is not “it errors,” but “it quietly does nothing.”</p>
<p>Those two issues together define what debugging sGTM feels like. The problem is not that features are limited—you can work around limited features. The problem is that <strong>you never know which line execution stopped on, and you never know why it stopped.</strong></p>
<hr />
<h2>If I Had to Do It Again</h2>
<p>The data is stable now, but if I had to choose the stack again, I probably would not use sGTM.</p>
<p>At the time, it looked like the rational choice. The team was already on Web GTM, so the event model did not have to be rebuilt. The sGTM ecosystem also has a lot of ready-made server-side Tag templates—official ones for GA4 and Google Ads, community ones for Meta, Reddit, and X—so it looked like we could just snap the pieces together and run. In practice, whether a template was official or community-built, almost all of them had some issue: wrong parameter types, incomplete permission declarations, edge cases not handled. Every template had to be opened up, read at the source-code level, and patched before it was usable. The “works out of the box” expectation collapsed completely.</p>
<p>The debugging experience was more like whack-a-mole: fix one permission issue and another silent failure appears; patch one missing API and then hit a type incompatibility. Every iteration meant publishing a version, checking logs, and guessing which line stopped executing. It burned time and attention. If I had to do it again, my selection criteria would change to this: <strong>debug feedback &gt; ecosystem compatibility &gt; UI convenience</strong>.</p>
<p>More concretely: one Cloudflare Worker, one D1 database, and direct CAPI integrations written against each platform’s docs. One event payload comes in, we write our own mapping logic, and fan it out downstream ourselves—that is fundamentally the same job sGTM is doing, but deployment takes seconds and <code>console.log</code> is enough to debug it. Compared with the sGTM + GCP stack, the developer experience is not even in the same league.</p>
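<p>The fan-out half of that alternative might look like this; the payload shapes are deliberately made-up placeholders, not the real platform APIs, since each mapper would be written against that platform’s own CAPI docs:</p>
<pre><code class="language-javascript">// Sketch: one normalized event in, one payload per platform out.
const MAPPERS = {
  ga4: (e) => ({ client_id: e.clientId, events: [{ name: e.name, params: e.params }] }),
  meta: (e) => ({ event_name: e.name, event_id: e.id, event_time: e.ts }),
  reddit: (e) => ({ events: [{ tracking_type: 'Purchase', click_id: e.rdt_cid }] })
};

function fanOut(event) {
  // Each entry would then be POSTed to its platform's CAPI endpoint.
  return Object.keys(MAPPERS).map((platform) => ({
    platform,
    payload: MAPPERS[platform](event)
  }));
}
</code></pre>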
<p>The reason I did not seriously consider that route back then was simple: it felt like “writing the CAPIs ourselves” would be too much work. Looking back, the development time supposedly saved by sGTM templates mostly got paid back in debugging and sandbox workarounds. Net-net, it may not have saved anything.</p>
<h3>Who Is sGTM Actually For?</h3>
<p><strong>If the team is already deeply invested in Web GTM, with a lot of existing Tags and trigger configuration, then sGTM as the natural extension makes sense.</strong></p>
<p>But if you do not have GTM baggage, or if you need to integrate several non-Google platforms like we did while layering in custom dedup logic, <strong>writing the pipeline directly in code is much faster than wrestling the sandbox permission system</strong>.</p>
<p>Platforms like Google Tag Manager were originally built on a simple premise: “lower the barrier with a GUI so non-technical people can configure server-side tracking too.” But once the problem becomes highly customized and heavily dependent on debug feedback—integrating multiple platform CAPIs, repairing browser context, building custom deduplication—the abstraction layers and sandbox restrictions that were introduced to avoid writing code start becoming the obstacle instead. Especially now that Coding Agents can read docs directly, generate integration code, run tests, and fix bugs, it is worth asking how much of the original GUI advantage is really left.</p>
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Don't Fall into TrustPilot's Trap]]></title>
            <link>https://archie6.com/trustpilot</link>
            <guid isPermaLink="false">https://archie6.com/trustpilot</guid>
            <pubDate>Mon, 10 Nov 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[How It Started The company received feedback from B2B sales that the brand’s TrustPilot score was only 3/5, with many negative reviews, which was starting to affect brand image and closing rates. How ...]]></description>
            <content:encoded><![CDATA[<h2>How It Started</h2>
<p>The company received feedback from B2B sales that the brand’s TrustPilot score was only 3/5, with many negative reviews, which was starting to affect brand image and closing rates.</p>
<p>How to solve it? The most direct approach was to go the official route: use TrustPilot’s Shopify App to invite past customers to leave reviews, and use positive reviews to boost the score. Sounds reasonable, right?</p>
<p>So that’s what we did. And then we fell into a trap.</p>
<hr />
<h2>TrustPilot’s Pricing Trap</h2>
<p>We use Shopify, and TrustPilot has a plugin that can automatically invite users to review after orders are completed. The company’s thinking at the time was simple:</p>
<p><strong>Staged Strategy:</strong></p>
<ol>
<li>First subscribe to the Advanced plan ($1,099/month), with a large quota, to invite a large number of users at once to boost ratings.</li>
<li>After the rating goes up, downgrade to the Plus plan ($299/month) to maintain it.</li>
<li>After stabilization, switch to the free plan. Perfect.</li>
</ol>
<p>The plan looked great on paper, but reality slapped us in the face. First, let’s look at this company’s pricing:</p>
<table>
<thead>
<tr>
<th>Plan</th>
<th>Price</th>
<th>Core Features</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Plus</strong></td>
<td>From <strong>$299</strong>/month</td>
<td>Review tools · Remove third-party ads · Marketing widgets · Performance insights</td>
</tr>
<tr>
<td><strong>Premium</strong></td>
<td>From <strong>$629</strong>/month</td>
<td>Advanced analytics · Predict TrustScore · Social assets · Competitive benchmarking · API*</td>
</tr>
<tr>
<td><strong>Advanced</strong></td>
<td>From <strong>$1,099</strong>/month</td>
<td>Custom analytics · Precision widgets · Brand design · Market assessment · Salesforce/API</td>
</tr>
<tr>
<td><strong>Enterprise</strong></td>
<td>Custom pricing</td>
<td>AI tools · Full API · Scale management · Visitor insights · Data reports</td>
</tr>
</tbody>
</table>
<p><small>*Note: the API access marked in the Premium plan is not actually included. It requires an additional payment, and sales will quote a price based on your website’s SEMRush traffic. See how greedy they are!</small></p>
<h2>Rogue Terms: The One-Year Contract Trap</h2>
<p>Here’s where it gets awkward.</p>
<p>Subscribing was smooth as butter, card linked in seconds. But when we wanted to downgrade, the company’s true colors showed:</p>
<blockquote>
<p>“The contract is signed annually, just billed monthly. We don’t accept cancellation or downgrade because you agreed to our terms when you made the payment.”</p>
</blockquote>
<p>It was truly frustrating to hear this, because what SaaS today doesn’t bill monthly and let you cancel monthly? Even worse, if you stop paying, they threaten to hand the bill to a third-party collection agency.</p>
<h3>Their Business Model Is Essentially Extortion</h3>
<p>The more you think about it, the uglier this playbook looks:</p>
<ol>
<li><strong>Passive negative review mechanism</strong>: Only dissatisfied users actively seek out channels to complain and leave negative reviews; satisfied customers won’t even think of visiting TrustPilot.</li>
<li><strong>Hostage-style charging</strong>: Want to improve your rating? Pay annually, no negotiation.</li>
<li><strong>Exit penalty</strong>: Want to exit mid-term? Sorry, contract period not completed, don’t pay and we’ll send it to collections.</li>
</ol>
<p>For large companies, this amount of money might just be pocket change for the marketing department. But for small and medium-sized businesses? This is blatant robbery.</p>
<p>Reddit is full of small business owners complaining about this. <a href="https://www.reddit.com/r/ecommerce/comments/1bu28sk/cancelling_trustpilot_contract">[1]</a> <a href="https://www.reddit.com/r/smallbusiness/comments/12pgddi/getting_out_of_trustpilot_contract/">[2]</a></p>
<p><img src="https://archie6.com/_astro/Reddit-trsutpilot.DNymDWOa_1zUsDp.webp" alt="Reddit users complaining about contracts" /></p>
<hr />
<h3>The Truth About Sales Pitches</h3>
<p>Many people are fooled by their sales reps, who constantly emphasize TrustPilot’s SEO benefits. In practice, those benefits barely matter.</p>
<p>You can try searching for brand names you’re interested in. As long as your content isn’t particularly scarce, TrustPilot pages won’t even make it to the first page of search results. As a consumer, would you really go to TrustPilot specifically to check a brand? I certainly wouldn’t - I’d reference the ratings and reviews on the website itself.</p>
<p>So TrustPilot really isn’t useful for promoting brand image and growth.</p>
<h2>If You’ve Already Fallen Into the Trap</h2>
<p>If you’ve unfortunately already subscribed and want to cut losses before the contract expires, you can try the following methods to escape.</p>
<p>Here are the steps we’ve personally practiced:</p>
<ol>
<li><strong>Cut off the funding source:</strong> Contact your bank to freeze the credit card currently linked to your TrustPilot account.</li>
<li><strong>Formally request termination:</strong> Formally contact TrustPilot customer service via email, <strong>inform them in writing</strong> that you will no longer pay subsequent fees, and request immediate cancellation of service. Keep all communication records as evidence.</li>
<li><strong>Patiently wait and ignore collections:</strong> After stopping payment, TrustPilot will continue to generate subscription bills and keep collecting payments, while threatening to transfer the bill to a collection agency. After about 3 months of unpaid bills, they will transfer your “debt” to a third-party Debt Collection Agency. After that, you’ll receive collection emails from the third-party agency - just stay calm.</li>
<li><strong>Negotiate directly with them:</strong> When you receive emails from the <strong>collection agency</strong>, <strong>do not communicate with them</strong>. At this point, control is back in your hands. Contact TrustPilot’s <strong>billing department</strong> directly and propose a settlement: you’re willing to pay the current outstanding balance (for example, 3 months of fees), <strong>on the condition that they immediately terminate the entire annual contract</strong>.</li>
</ol>
<p>Through this method, we ultimately only paid a small portion of the fees and successfully escaped the subscription trap of nearly $10,000 for the remaining nine months.</p>
<hr />
<h2>How to Improve Your Rating Without Spending a Dime</h2>
<p>According to TrustPilot’s own so-called “transparency” terms, once your brand homepage is created, you can forget about taking it down. It will hang there forever, whether you pay or not.</p>
<p>So, is there a way to improve your rating without paying them “protection money”?</p>
<p><strong>Of course there is, and the method is surprisingly simple.</strong></p>
<p>The core principle is simple: TrustPilot’s paid service is essentially just an <strong>expensive email automation tool</strong>. We can completely bypass it and do the invitation ourselves.</p>
<h4>Step 1: Get Your Exclusive Invitation Link</h4>
<p>This is the most critical part of the entire solution. You need a link that can guide users directly to your brand review page. Format:</p>
<pre><code>https://www.trustpilot.com/evaluate/trustpilot.com
// Replace the trailing trustpilot.com with your own brand&#39;s domain
</code></pre>
<p>This link has exactly the same function as the one in their official paid emails. After clicking, users will directly enter this review interface:</p>
<p><img src="https://archie6.com/_astro/TP-invite.B5mKdKI6_ZojXb9.webp" alt="TrustPilot user enters this page after clicking the button" /></p>
<h4>Step 2: Send Invitation Emails Yourself</h4>
<p>After getting the link, you can send invitations to customers by writing a review invitation email through your existing tools:</p>
<ul>
<li><strong>Shopify merchants:</strong> Can directly use the Shopify Email app, filter historical customers, and send invitation emails in batches. Furthermore, you can use Shopify Flow to set up automated processes to send them on a schedule after order completion.</li>
<li><strong>Merchants on other platforms:</strong> Same principle. Export your customer email list and use Mailchimp, Listmonk, or any email marketing tool you’re familiar with to send in batches.</li>
</ul>
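<p>If you would rather script the batch send than click through an email tool, the steps above reduce to very little code. A minimal sketch in Python, assuming a plain CSV customer export with <code>name</code> and <code>email</code> columns and using <code>yourbrand.com</code> as a placeholder domain (actually delivering the drafts via SMTP or your ESP’s API is left out):</p>

```python
import csv
import io

# Placeholder: swap in the domain of your own TrustPilot profile page.
INVITE_URL = "https://www.trustpilot.com/evaluate/yourbrand.com"

def compose_invites(csv_text: str) -> list:
    """Turn a customer export (name,email columns) into review-invite drafts."""
    drafts = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        drafts.append({
            "to": row["email"],
            "subject": "How did we do? Leave us a review",
            "body": (
                f"Hi {row['name']},\n\n"
                f"Thanks for your order! It would mean a lot if you could "
                f"share your experience here: {INVITE_URL}\n"
            ),
        })
    return drafts

# Tiny inline example standing in for a real Shopify customer export.
sample = "name,email\nJane,jane@example.com\nLi,li@example.com\n"
drafts = compose_invites(sample)
```

<p>Every draft carries the same evaluate link from Step 1, which is all the paid product’s invitation emails do for you anyway.</p>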
<h4>What’s the Difference from the Official Solution?</h4>
<p>The only difference is that reviews submitted through our own link will lack a “✅ Verified” badge.</p>
<p>But the key point is: <strong>this badge doesn’t affect your final score at all.</strong> Whether it has it or not, a five-star positive review is a five-star positive review.</p>
<p><img src="https://archie6.com/_astro/TP-score.Cp1L98WR_Z2slSXU.webp" alt="TrustPilot Score" /></p>
<p>It’s that simple. <strong>Perfectly save $13,188 per year in subscription fees, with zero limits on review invitation sends.</strong></p>
<hr />
<h2>For Those Still Watching: Don’t Use TrustPilot</h2>
<p>If you’re selling physical products, TrustPilot really isn’t useful for you.</p>
<p>Their value mainly shows up in the external marketing of large service companies, as an “authoritative certification” badge. For small and medium-sized businesses or e-commerce? The cost-effectiveness is very low.</p>
<h3>Better Alternatives</h3>
<p><strong>If you use Shopify:</strong></p>
<ul>
<li>Use third-party plugins like <a href="http://Judge.me">Judge.me</a> or Loox to build your own review system.</li>
<li>Connect review data to Google Merchant Center.</li>
<li>Let your product ratings display in Google Shopping.</li>
</ul>
<p><strong>Why is this better?</strong></p>
<ul>
<li>Google Shopping ratings have much more impact on SEO, SEM, and conversion rates than TrustPilot.</li>
<li>Consumers actually see and reference ratings on Google.</li>
<li>TrustPilot? Few people care.</li>
</ul>
<p>See the image below, this is truly useful rating display:</p>
<p><img src="https://archie6.com/_astro/Google-product-ratings.DBWb0cjd_29QNE1.webp" alt="Google Product Ratings" /></p>
<hr />
<h2>Final Words - TrustPilot’s Business</h2>
<p>TrustPilot’s business model is essentially collecting “digital protection money”, and they do it extremely ungracefully.</p>
<p>Their routine is like this: First, without your knowledge, they set up a target for you online, then let the most unhappy customers go up and fire at will. Because they know very well that satisfied customers are too lazy to speak up, only those who are pissed off will look everywhere for places to complain.</p>
<p>When you get covered in shit and your rating turns into a pile of crap, their sales come along, pretending to hand over the only “antidote” - a ridiculously expensive annual package, and they lock you in with a rogue contract.</p>
<h3>Why Do I Hate This Company So Much?</h3>
<p>Because it’s not doing any legitimate business at all. It’s holding your brand reputation hostage and forcing you to pay ransom annually.</p>
<p>This business model is fundamentally bad. It doesn’t make money by creating value, but by creating trouble and selling anxiety to extort and blackmail. This practice of putting a knife to small businesses’ throats is truly disgusting.</p>
]]></content:encoded>
        </item>
    </channel>
</rss>