Every AI hackathon I hear about runs on the same idea. Someone says the company should do something with AI, nobody can quite explain what that something is, and half an hour later the room has agreed to build a retrieval-based chatbot for the organisation. Give it a fancy name ending in GPT and call it innovation. The same pattern shows up in large, slightly opaque organisations that mostly want to be able to point at a slide and say they are doing GenAI now. When you look closely, it is usually the same recipe over and over again.
If everyone is building some flavour of organisational assistant anyway, the interesting part for me is how small, honest and boring you can make it. So I gave myself a simple challenge: build my own version of that idea, with the organisation reduced to a single person, in roughly one hour. Claude sat in as a very fast, slightly overeager pair programmer. I missed the strict sixty-minute target by a little, but the result feels good enough that I am not embarrassed to ship it on my actual site and let people play with it.
From the outside, it is just a chat page: you can ask what I do, how I work with composable commerce and architecture, or what kind of projects I have been involved in. The chatbot knows my background, has access to my blog posts and can summarise or connect the ideas that keep popping up across them. Just as important, it knows when to back off: if you leave the territory of things I have actually written down, it will tell you that it does not know. The way it gets there is deliberately simple. I wrote down a structured description of myself in a small Contentful knowledge base, in clear language that could just as well be a public bio, and combined that with the existing content from my blog. There is no magic data lake and no mystery scraping of random corners of the internet. It is enough to feel personal and specific without disappearing into complexity.
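The grounding mechanism can be sketched in a few lines. The names here (`KnowledgeEntry`, `buildSystemPrompt`) are invented for illustration, and the real version pulls its entries from Contentful, but the core idea is just this: concatenate the written-down facts into the system prompt and add an explicit instruction to refuse anything outside them.

```typescript
// Hypothetical shape for an entry pulled from the Contentful knowledge base.
interface KnowledgeEntry {
  title: string;
  body: string;
}

// Assemble a system prompt that grounds the model in the knowledge base
// and tells it to admit ignorance for anything not covered there.
function buildSystemPrompt(entries: KnowledgeEntry[]): string {
  const context = entries
    .map((e) => `## ${e.title}\n${e.body}`)
    .join("\n\n");
  return [
    "You are the chat assistant on a personal site.",
    "Answer only from the knowledge base below.",
    "If a question is not covered there, say you do not know.",
    "",
    context,
  ].join("\n");
}
```

That refusal instruction is most of what "knowing when to back off" amounts to in practice; there is no separate guard model, just a narrow prompt and narrow data.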
The other half of the experiment was about how to talk to models without tying the whole thing to a single vendor. This is where the Vercel AI Gateway comes in. The chat feature on my site sends one well-defined request to the gateway and lets it decide which model actually serves the answer. Today that might be a small, cheap model that is fast enough for casual use; tomorrow it could be something larger if I care more about nuance for a while. I can change that preference in configuration rather than rewriting my application. That makes models feel like a replaceable piece in the architecture, not a foundational bet that leaks into every file. Before that I tried to ship the site with a bundled model, a TinyLlama or one of the smaller Grok models, but I kept running into size limits with the edge functions my website uses.
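Concretely, "change it in configuration" can look like the sketch below. The environment variable name and the model slugs are my own illustrative choices, not anything the gateway mandates; the gateway addresses models as "provider/model" strings, and the Vercel AI SDK call shown in the comment is roughly what the route handler does.

```typescript
// Illustrative default; the gateway addresses models as "provider/model" slugs.
const DEFAULT_MODEL = "openai/gpt-4o-mini";

// Resolve the model from configuration so swapping models never touches
// application code, only an env var (CHAT_MODEL is an assumed name).
function resolveModel(env: Record<string, string | undefined>): string {
  return env.CHAT_MODEL ?? DEFAULT_MODEL;
}

// The resolved slug is all the application knows about the model; the
// gateway does the actual routing. With the Vercel AI SDK the handler
// would look roughly like this (not executed here):
//
//   const result = streamText({
//     model: resolveModel(process.env),
//     system: systemPrompt,
//     messages,
//   });
//   return result.toTextStreamResponse();
```

The design point is that the model identifier lives at the edge of the system, in one place, which is what makes it a replaceable piece rather than a foundational bet.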
Somewhere in the middle of all this sits an awkward reality: a lot of companies right now exist almost entirely to sell these organisational GPTs to small and mid-sized businesses. The pitch sounds transformational, but when you look at the product it is mostly a thin chat layer in front of a RAG setup that crawls the same chaotic Confluence spaces nobody wanted to clean up in the first place. For many office workers it becomes a slightly more polite search box that lets them avoid learning how their own tools are structured. That might still be useful, I am not denying that, but it is a very different story from the grand narrative of AI copilots changing how organisations think. At that point you are paying for a nicer way to query content you already own, while the hard problems of knowledge management and ownership stay exactly where they were.
The interesting bit for me, and the reason I am writing this down, is that there is nothing particularly special about my use case. A lot of teams I speak to are sitting on a pile of internal knowledge and circling the idea of an assistant that can make it more usable, without quite knowing where to start. What this little chat page shows is that you do not need an enormous programme or a new product line to try that idea out. It also means that the next time someone pitches a Company-GPT as the great new innovation, it is worth being a bit more cautious.
I’ve been the architect, sat beside the architect and cleaned up after the architect. Everything here was learned the hard way and in production.