By Jeff Richmond

How the AI Debate Arena Was Built

April 2026

The Debate Arena started as a simple idea: let visitors pick an ideological opponent and jump straight into a sharp, opinionated back-and-forth. The finished feature looks playful on the surface, but under the hood it required a full stack of decisions around AI prompting, runtime wiring, layout design, persistence, rate limits, and a surprising amount of UI debugging.

Political debate illustration

The Product Idea: Make Political Debate Feel Immediate

The goal was not to build a generic chatbot. It was to create a very specific interaction: choose a partisan opponent, get an opening shot immediately, and respond inside a tight, entertaining loop. That meant the feature needed to feel more like a live sparring match than a helpful assistant.

Two personas were enough to make the concept clear: a MAGA conservative and a progressive Democrat. Each gets a distinct opening line pool, visual treatment, and system prompt. The result is a demo that quickly communicates both personality and technical range without requiring the user to learn a workflow first.
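The persona setup described above can be sketched as a small config registry. Everything below is illustrative: the ids, labels, colors, opening lines, and prompt text are assumptions, not the site's actual values.

```typescript
// Hypothetical persona registry; ids, colors, and prompt text are
// illustrative stand-ins, not the production values.
type Persona = {
  id: string;
  label: string;
  accentColor: string;    // drives the card and bubble color cues
  openingLines: string[]; // pool the first assistant message is drawn from
  systemPrompt: string;   // tone and stance rules sent to the model
};

const PERSONAS: Record<string, Persona> = {
  maga: {
    id: "maga",
    label: "MAGA Conservative",
    accentColor: "#b91c1c",
    openingLines: [
      "Let's talk about why the border comes first.",
      "The economy was better four years ago, and you know it.",
    ],
    systemPrompt:
      "You are a combative MAGA conservative debater. Stay in character.",
  },
  progressive: {
    id: "progressive",
    label: "Progressive Democrat",
    accentColor: "#1d4ed8",
    openingLines: [
      "Healthcare is a right. Convince me otherwise.",
      "Climate denial isn't a policy, it's a stall tactic.",
    ],
    systemPrompt:
      "You are a sharp progressive Democrat debater. Stay in character.",
  },
};
```

Keeping the opening lines, visual accent, and system prompt in one object per persona means adding a third opponent later is a data change, not a code change.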

The UI Layer: Persona Cards, Debate Thread, and Visual Tone

The homepage needed the arena to feel native to the rest of the site, not like a pasted-in widget. The picker cards were designed as clear, tappable choices with headshots, color cues, hover motion, and a call to action. Once a user picks a side, the view swaps into a split debate layout with the portrait on the left and the live thread on the right.

The styling work mattered more than it might sound. assistant-ui provides a solid functional base, but out of the box it looked too generic for this site. The chat surface, message bubbles, composer, and persona color accents were all restyled so the arena kept the site's darker glass look while still feeling distinct.

The most stubborn visual issue ended up being the persona image layout. Portrait source images did not map cleanly onto landscape containers, and several rounds of layout adjustments were needed before the left-side media pane finally used the intended vertical space correctly on desktop.

The AI Runtime: assistant-ui, Streaming Chat, and Persona Prompts

For the runtime layer, the feature uses assistant-ui on the frontend and the Vercel AI SDK on the backend. The chat thread streams responses from an API route powered by Anthropic, with each persona getting its own system prompt and tone rules. That setup made it possible to keep the UI responsive while preserving a believable debate voice.

One of the key design choices was constraining responses to a short, punchy style. That kept the debate moving, reduced the chance of meandering assistant output, and made the experience feel more like a fast exchange than a wall of exposition.
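One way to wire the per-persona prompt and the brevity constraint together is to assemble the system prompt in a single helper before handing it to the streaming route. The rule text and helper name below are assumptions about the kind of constraint described above, not the actual prompt, and the commented route call is a sketch, not verified against any specific SDK version.

```typescript
// Hypothetical prompt assembly; the rule text is an illustrative guess
// at the short, punchy style constraint described above.
const STYLE_RULES = [
  "Keep every reply to 2-3 punchy sentences.",
  "Always end with a jab or a pointed question.",
  "Never break character or soften into assistant-speak.",
].join("\n");

function buildSystemPrompt(personaPrompt: string): string {
  return `${personaPrompt}\n\nDebate style rules:\n${STYLE_RULES}`;
}

// In the API route, the combined prompt would be passed to the model
// call, roughly like this (sketch only, model id elided):
//
//   const result = streamText({
//     model: anthropic("claude-..."),
//     system: buildSystemPrompt(persona.systemPrompt),
//     messages,
//   });
//   return result.toDataStreamResponse();
```

Putting the style rules in the system prompt, rather than post-processing the output, is what keeps the streamed tokens usable as they arrive.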

The initial assistant message is seeded from a curated opening-line pool so users are not dropped into an empty interface. That tiny decision improves engagement immediately because the arena feels alive from the first render.
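Seeding that first message can be as simple as drawing one line from the persona's pool at render time. The helper name here is hypothetical:

```typescript
// Hypothetical helper: pick one opening line so the thread never
// starts empty. Math.random is fine here; no crypto needed.
function pickOpeningLine(pool: string[]): string {
  if (pool.length === 0) throw new Error("opening-line pool is empty");
  return pool[Math.floor(Math.random() * pool.length)];
}

// The picked line becomes the first assistant message before any
// model call happens, so the arena feels alive on load.
const pool = ["Opening shot A.", "Opening shot B.", "Opening shot C."];
const firstMessage = {
  role: "assistant" as const,
  content: pickOpeningLine(pool),
};
```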

Operational Layer: Daily Limits, Logging, and Debug History

Once the core interaction worked, the feature needed production-style controls. The debate route was extended to support optional daily round limits driven by environment variables, plus transcript logging to a Neon Postgres database. This gave the arena operational boundaries without complicating the user experience.
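The env-driven limit check reduces to a small pure function. The variable name DEBATE_DAILY_LIMIT is an assumption; the convention sketched here treats an unset or non-numeric value as "no limit":

```typescript
// Hypothetical daily-limit parsing. An unset or non-numeric env value
// (e.g. DEBATE_DAILY_LIMIT) is treated as "no limit".
function dailyLimit(envValue: string | undefined): number | null {
  if (!envValue) return null;
  const n = Number.parseInt(envValue, 10);
  return Number.isFinite(n) && n > 0 ? n : null;
}

function isUnderLimit(
  roundsUsedToday: number,
  envValue: string | undefined
): boolean {
  const limit = dailyLimit(envValue);
  return limit === null || roundsUsedToday < limit;
}
```

The route would increment a per-day counter in Postgres and stop serving new rounds once isUnderLimit returns false, all invisible to users who stay under the cap.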

The data model tracks daily usage, debate threads, and individual messages. It also supports debug-only history pages so transcripts can be reviewed in the site's internal tooling. That made the feature much easier to inspect while keeping the public-facing experience simple.
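The three tables map naturally onto three row shapes. The field names below are guesses at the model, not the actual schema:

```typescript
// Illustrative row shapes for the three tables described above.
// Field names are assumptions, not the real schema.
type DailyUsage = { day: string; rounds: number };
type DebateThread = { id: string; personaId: string; startedAt: string };
type DebateMessage = {
  id: string;
  threadId: string;
  role: "user" | "assistant";
  content: string;
};

const sample: DebateMessage = {
  id: "msg_1",
  threadId: "thr_1",
  role: "assistant",
  content: "Opening shot.",
};
```

Keying messages to a thread id is what makes the debug-only history pages cheap: a transcript view is just one query per thread.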

I chose fail-open behavior for database outages, which means the arena keeps working even if storage has a transient problem. That tradeoff favors demo continuity and user experience over strict enforcement when the backing store is unavailable.
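Fail-open can be captured in one wrapper that swallows storage errors and substitutes a safe default. The wrapper and the commented readUsage call are hypothetical names for illustration:

```typescript
// Hypothetical fail-open wrapper: storage errors are logged and
// swallowed so a transient DB outage never blocks the debate.
async function failOpen<T>(op: () => Promise<T>, fallback: T): Promise<T> {
  try {
    return await op();
  } catch (err) {
    console.error("storage unavailable, continuing without it:", err);
    return fallback;
  }
}

// e.g. treat an unreadable usage counter as "no rounds used yet":
// const roundsUsedToday = await failOpen(() => readUsage(db, today), 0);
```

Note the bias this builds in: when the counter can't be read, the limit effectively disappears, which is exactly the demo-continuity-over-strict-enforcement tradeoff described above.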

What Broke While Building It

The build was not a straight line. There were runtime-provider issues, duplicated runtime configuration, composer visibility problems, and layout bugs where the left portrait pane looked correct in theory but did not occupy the intended space in the actual rendered UI.

The image pane bug was a good example of how frontend problems often hide in the relationship between layout models and asset shape. A portrait image inside a landscape box is one thing; a portrait image next to a fixed-height chat surface is another. The final solution was to let the media pane adopt the same desktop height model as the thread rather than forcing the image into a short landscape card.

Small interaction details needed tuning too. The persona picker cards were always clickable, but until they had stronger hover, focus, and CTA signals, they did not look clickable enough. That is a reminder that affordance is part of product quality, not decoration.

What the Build Taught Me

The biggest lesson is that AI features are never just model integrations. A compelling experience depends on prompt design, UX framing, operational controls, persistence strategy, and the visual logic that tells users what this thing is supposed to feel like.

It also reinforced how fast AI-native product work can move now. The arena went from idea to working demo, then from demo to a rate-limited, logged feature, in a relatively short cycle. That speed is real, but it only holds up if you stay disciplined about debugging and refinement.

In other words: AI accelerates the build, but taste and iteration still determine whether the result feels finished.

Conclusion

The AI Debate Arena ended up being a compact example of modern product engineering: conversational UI, streamed model output, persona-driven prompting, database-backed controls, and iterative frontend polish all working together. It looks like a simple homepage demo, but it exercises a lot of the stack.

If you're interested in building AI-native interactive experiences like this, reach out through the Contact page.