Blog · MCP & AI · Feb 24, 2026 · 8 min read

Building an MCP server for Tagstack


I have been quite focused on generative AI, and Claude in particular, over the last few months. Claude shines by making it easy to get started with automation and connect different data sources together in an atomic manner, without needing workflow or pipeline tools. Connecting those data sources requires what's called an MCP server — a standardized way for LLMs to retrieve data from external sources. The common example: Claude doesn't know today's weather (it only has access to data it was trained on), so without some way to communicate with the outside world, it can't answer when asked whether to take an umbrella to your terrace party in Bastille. A weather MCP lets Claude call a tool to retrieve the weather in Paris. As you might guess, it's quite useful and an important step in making LLMs more usable for everyone.
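
To make that weather example concrete, here's a minimal sketch of what an MCP "tool" boils down to: a named function with a typed input that the server advertises to the client and the model can call. Everything here is illustrative (the hard-coded weather data stands in for a real API call); a real server would wrap this in an MCP SDK.

```typescript
// The shape an MCP tool call result takes: a list of content blocks.
type ToolResult = { content: { type: "text"; text: string }[] };

// Stand-in for a real weather lookup — a production tool would hit a
// weather API here.
function fetchWeather(city: string): string {
  const fake: Record<string, string> = { Paris: "14°C, light rain" };
  return fake[city] ?? "no data";
}

// The tool handler itself: takes typed arguments, returns content the
// model can read and reason about.
function getWeatherTool(args: { city: string }): ToolResult {
  return {
    content: [
      { type: "text", text: `${args.city}: ${fetchWeather(args.city)}` },
    ],
  };
}
```

The MCP server's job is essentially to expose a catalog of functions like this one, plus their input schemas and descriptions, so the model knows when and how to call them.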

Tagstack lets you audit any Google Tag Manager container in under a minute and list the marketing technologies used by brands — think which CRM, like HubSpot, or which analytics tool, like Google Analytics or Amplitude. Most agencies that work with us leverage the API to automate such lookups for use cases ranging from inbound lead qualification and sales development to competitive benchmarks and quality monitoring. The catch, discovered through several customer conversations, is that building automation and workflows still feels scary in 2026. There's a clear gap between agile agencies willingly embracing advanced automation — building agentic workflows that leverage Tagstack to provide rich context to their personalized outbound — and more conservative players who are still weighing the ROI of such complex pipelines.

LLMs might be what brings heavy automation lovers and conservative users together, by making it easy to start playing with automation. Models have reached a level of usefulness ("intelligence") that makes them versatile and really efficient at abstracting away the code and technical know-how required to build such pipelines (though, more often than not, humans must stay in the loop to correct course and validate production readiness — so don't go firing your developers and IT team). And MCP is a new way for your solution to reach users who might otherwise never have considered it. A new acquisition channel, but not just that. I like to think of marketing channels as transportation: you'll reach your destination whatever the means — what changes are the demographics, economics, values and physical constraints associated with each. MCP helps you decide on the destination, providing the necessary context to make your decision while you are planning your trip (I love Brittany, but what's the weather now, please?)

So it became clear that building this MCP server — beyond the sheer excitement of touching this hot new technology — would be a good addition to Tagstack's current offering. It came with a few challenges and thoughts that I wanted to share here, for anyone who might find this useful, and for my future self.

Challenge #1

No viable alternative to supporting OAuth for UX

The first thing that deterred me from building this MCP server was that it requires authentication via OAuth — fine when Google is the provider, but way more annoying when you need to build an OAuth server for your own service. Everyone's familiar with OAuth: logging in with Google or Microsoft on a third-party site is an example of the flow — the third-party service, registered with those authentication providers, asks for permission to authenticate and look up the information the provider stores about the current user. Simple, secure, great. But for an MCP server, you need to be Google in that scenario, as Claude or ChatGPT will ask your service to authenticate the user. It looks easy, as it's a widely adopted solution, but it turns out it wasn't so simple. I spent some time researching how to bypass OAuth and rely on API keys instead: my users would paste their API key when configuring the MCP connector and... done. You can go that route — it's neither encouraged nor officially recommended — by asking your users to fill in their API key as the OAuth token and updating your authorization endpoints accordingly. The catch is that even if users setting up MCPs are surely more skilled than your great-grandmother, they might not be familiar with the theory behind the OAuth flow. A simple connection flow where the user sees a screen on your site, reviews the permissions and accepts feels like the best approach UX-wise.

So after playing around, I jumped in. Tagstack is fully built on Cloudflare, head to toe, and after considering a third-party service like Stytch (whose team wrote a great article on how to set up an authentication flow for MCP servers using their product) — which would have implied rewriting part of the existing auth — I decided to go it alone. I use Claude Code every day and naturally had to chat about this. A few conversations in, Claude Code was trying to convince me to roll my own OAuth library, arguing it was certainly hard but more cost-effective than a third-party service. I love building, but let's be honest: building a reliable, production-ready OAuth library, as exciting as it may be, is just way outside my reach. And as I was reflecting on the best way to tackle this, I remembered that Kenton Varda, the Cloudflare Workers tech lead, had released an OAuth library built specifically for Cloudflare Workers, used in Cloudflare's own examples for hosting MCP servers. From that moment it was clear that was the way to go, and the experience was really smooth.
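
For reference, the wiring with Cloudflare's `workers-oauth-provider` library looks roughly like this. Treat it as a sketch: option names follow the library's README at the time of writing, and the two imported modules (`TagstackMCP`, `authApp`) are hypothetical placeholders for your own MCP handler and consent-screen UI — check the current docs before copying.

```typescript
// Sketch: the library wraps your Worker so /authorize, /token and
// dynamic client registration are handled for you, while API traffic
// is routed to your MCP handler once the bearer token checks out.
import OAuthProvider from "@cloudflare/workers-oauth-provider";
import { TagstackMCP } from "./mcp"; // hypothetical: your MCP agent class
import { authApp } from "./auth-ui"; // hypothetical: login/consent pages

export default new OAuthProvider({
  apiRoute: "/mcp",                      // the protected MCP endpoint
  apiHandler: TagstackMCP.serve("/mcp"), // runs only after a valid token
  defaultHandler: authApp,               // everything else: auth UI
  authorizeEndpoint: "/authorize",
  tokenEndpoint: "/token",
  clientRegistrationEndpoint: "/register",
});
```

The nice part of this split is that your MCP code never touches token issuance: the provider sits in front and your handler only ever sees authenticated requests.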

Challenge #2

Care about your users' wallet

MCP economics are not the same as API economics, and simply wrapping your API is rarely the best way to build your MCP. When you call an API, you usually do not care (within reason) about the size of the response payload, because there's normally no meaningful cost tied to it — APIs are usually priced by request count within a time period. With MCP, things are a little different: with LLMs, the cost unit is the token. In, out — the longer the conversation, the more tokens you are burning. LLM subscriptions come with a non-public token allowance tied to your plan, and burning through it means you're locked out of the LLM until your usage limits reset, which can be several days later. So if your MCP returns 100k tokens per response, it can really hurt your users' limits. And yet, at least in my case, splitting information retrieval across too many tools does not feel right UX-wise either.

So I used the "two-step gate" pattern, which simply means asking the user for confirmation before running a tool that might return an expensive payload. You can enable this by adding a token_count property to your tool definition in your MCP server and mentioning the need for user confirmation in your tool description. Since they get the token count, users can estimate the cost and decide whether it's worth it before burning tokens. In the same vein, documenting how expensive your MCP tools can be and offering cheaper but more limited alternatives also looked like a good idea. In Tagstack's case, some Google Tag Manager containers have several hundred tags, some containing HTML and JavaScript code, and since we return a JSON payload with all of that information, large sites like Tesla or Renault can yield responses weighing over 100k tokens. The solution was to build "cheaper" tools that only return part of the information (say, just some tags) alongside the more expensive ones.
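
Here's one way to sketch the gate itself. The tool names are illustrative (not Tagstack's actual implementation), and the estimate assumes the common rough heuristic of ~4 characters per token for English text and JSON:

```typescript
// Rough token estimate: ~4 characters per token is a common heuristic
// for English/JSON payloads — good enough to warn, not to bill.
function estimateTokens(payload: string): number {
  return Math.ceil(payload.length / 4);
}

type ToolResult = { content: { type: "text"; text: string }[] };

// Step one of the gate: a cheap "preview" tool returns the estimated
// cost of the expensive call, so the model can surface it to the user
// and wait for an explicit go-ahead before fetching the full payload.
function previewContainer(containerJson: string): ToolResult {
  const tokens = estimateTokens(containerJson);
  return {
    content: [
      {
        type: "text",
        text:
          `Full container is ~${tokens} tokens. ` +
          `Ask the user to confirm before calling get_full_container.`,
      },
    ],
  };
}
```

Step two is the expensive tool itself, whose description states that it must only be called after the user confirms — the model enforces the gate conversationally.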

If your API is well constructed, with pagination and filters, you might feel you're good to go. And yet I'd still use the two-step gate approach to make sure users acknowledge what they will pay for.

Challenge #3

Ditching SSE to support Streamable HTTP only

MCP started with the Server-Sent Events ("SSE") transport, which posed a few challenges: you had to take care of session persistence between the MCP client and the MCP server yourself. That's not a big deal on Cloudflare, as Durable Objects are perfect for this, but it still means overhead and potential additional costs if your server takes off. A second transport came later: Streamable HTTP, a stateless transport that eliminates the need to keep the session alive — all that's left is the authorization check in your MCP server and you're good to go. SSE is now officially deprecated, but many resources out there still rely on the old paradigm. There are cases where supporting old clients makes sense, but backwards compatibility wasn't an issue for me, as this was a brand new MCP server intended only for Claude web. Ultimately SSE will fully disappear — MCP is so new (looking at the MCP registry, there are way more servers than six months ago, but it's still considered new tech) that carrying the burden of supporting both transports did not seem useful to me.
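
In practice, going Streamable-HTTP-only can be as simple as never mounting the SSE route. A sketch with Cloudflare's agents SDK — class and method names follow the SDK's MCP examples at the time of writing, so verify against current docs before relying on them:

```typescript
// Sketch: a Streamable-HTTP-only MCP server on Cloudflare Workers.
import { McpAgent } from "agents/mcp";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";

export class TagstackMCP extends McpAgent {
  server = new McpServer({ name: "tagstack", version: "1.0.0" });

  async init() {
    // Register your tools on this.server here.
  }
}

// Mount only the Streamable HTTP endpoint. By not calling the SSE
// mount at all, the deprecated transport simply isn't exposed.
export default TagstackMCP.serve("/mcp");
```

Old SSE-only clients will fail to connect, which is exactly the trade-off described above: acceptable for a brand new server targeting Claude web, less so if you have legacy integrations.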

Wrapping up, it was an instructive dive into how to build a useful MCP server for my user base, and an exciting challenge even if Claude Code and the Cloudflare library hugely simplified the code generation part. We'll see what the future holds for that protocol, but I think MCPs make sense for some apps and aren't so scary anymore.

Here's a little video of how you can ask the Tagstack MCP to find what marketing technologies are used on renault.fr.

Other posts will come on how you can orchestrate different MCP servers to build beyond just the data given by Tagstack. But that's it for now!