Natural language is a wonderful interface, but just because we suddenly can doesn't mean we always should. Often the most ergonomic option is a checklist, a pulldown menu, a slider bar. This is a post about how to provide the best of both worlds: the affordances of structured user interfaces with the flexibility of natural language.

Here's a screenshot of Claude Code's ask-user tool for reference, in case you haven't seen it in action:

This is a post about a tool I made called popup-mcp (MCP, or Model Context Protocol, is the standardized tool-use interface via which LLMs can call external tools). I built this tool about 6 months ago and have been experimenting with it as a core part of my LLM interaction modality ever since. I had originally planned on making a post sharing it earlier, but (a) Anthropic released a simple structured UI tool in Claude Code about 4 months ago and (b) I've let my backlog of planned blog posts grow to the point of absurdity. To the first point, my popup tool provides additional features (like conditional interview trees) that I think are worth talking about. To the second, one of my resolutions for 2026 is to write more and I'm starting that here.

Popup-MCP

You can find popup here, along with instructions on how to use it. It's a local MCP tool that communicates over stdio, which means the process needs to run on the same machine as your LLM client. Popup supports structured GUIs built from elements including multiple-choice checkboxes, dropdowns, sliders, and text boxes.
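To make the element list concrete, here's a hypothetical popup specification in the rough shape such a tool might accept. The field names and structure are my invention for illustration, not popup-mcp's actual schema:

```python
# Hypothetical popup spec: a titled list of GUI elements.
# Field names are illustrative, not popup-mcp's actual schema.
popup = {
    "title": "Daily check-in",
    "elements": [
        {"type": "slider", "id": "energy", "label": "Energy level",
         "min": 0, "max": 10},
        {"type": "dropdown", "id": "focus", "label": "Focus",
         "options": ["Deep", "Scattered", "None"]},
        {"type": "checkbox", "id": "meds", "label": "Took medication"},
        {"type": "textbox", "id": "notes", "label": "Anything else?"},
    ],
}

# The element types exercised by this example.
element_types = {e["type"] for e in popup["elements"]}
print(sorted(element_types))  # → ['checkbox', 'dropdown', 'slider', 'textbox']
```

An LLM emits a structure like this as a tool call; the local process renders it and returns the user's selections as the tool result.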

Here's an example of a structured user interface generated when I asked Claude to generate an example ADHD status checkin popup:

These elements can be combined and nested, such that subtrees only render if some condition is met. A core example of this pattern is the interview tree: instead of asking a question via text in the chat window, an LLM can generate a set of questions and conditional subquestions that are only shown if the user responds in certain predefined ways.
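As a sketch of how conditional rendering resolves, here's a toy interview tree and a walker that yields only the questions whose parent branch matches the user's answers. The schema is hypothetical, invented here for illustration:

```python
# Hypothetical interview tree: each question may carry conditional
# subquestions, keyed by the parent answer that reveals them.
# Schema is illustrative, not popup-mcp's actual format.
tree = {
    "id": "deploy_target",
    "label": "Where should this service deploy?",
    "options": ["Cloud", "On-prem"],
    "children": {
        "Cloud": [{"id": "provider", "label": "Which provider?",
                   "options": ["AWS", "GCP", "Azure"], "children": {}}],
        "On-prem": [{"id": "os", "label": "Which OS?",
                     "options": ["Linux", "Windows"], "children": {}}],
    },
}

def visible_questions(node, answers):
    """Yield the ids of questions that should render, descending only
    into subtrees keyed by the answer the user actually gave."""
    yield node["id"]
    for child in node["children"].get(answers.get(node["id"]), []):
        yield from visible_questions(child, answers)

# Choosing "Cloud" reveals the provider question; the On-prem branch stays hidden.
print(list(visible_questions(tree, {"deploy_target": "Cloud"})))
# → ['deploy_target', 'provider']
```

The whole tree is registered up front in a single tool call, so follow-up questions appear instantly rather than costing another model invocation each.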

Because constructing this tree requires registering nested hypotheticals about how a conversation might progress, it provides a useful window into an LLM's internal cognitive state. It also provides a fast way to explore a conversation's branches interactively. The alternative (giving an LLM one response, following the resulting chain of conversation, then going back and forking to explore other paths) is cumbersome, often taking whole minutes as the LLM is repeatedly invoked.

An LLM will never be able to fully predict a conversation in advance, but it's surprising how often a dropdown with its top 3-5 predictions will contain your next answer, especially when defining technical specs. Since the system supports conditional rendering of text boxes, it's easy to include an 'Other' option at every stage that, if selected, causes a text box to appear via which the user can provide their specific answer.

This 'Other' escape hatch was originally left to LLMs to add and encouraged via prompting, but I recently modified the tool to _always_ auto-include an 'Other' option on all multiselects and conditionals. LLMs would sometimes forget to offer an escape hatch or be over-confident in their predictions, and it was easier to modify the tool than to fiddle with prompting.
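The auto-include behavior can be sketched as a post-processing pass over the spec before rendering. Again, the schema and function name are mine, not the tool's; this just shows the shape of the fix:

```python
# Sketch of the always-add-'Other' behavior: post-process a (hypothetical)
# element spec so every option list gains an 'Other' entry plus a text box
# that only renders when 'Other' is selected.
def add_other_escape_hatch(element):
    if element.get("options") and "Other" not in element["options"]:
        element["options"].append("Other")
        # Conditional textbox, shown only on the 'Other' branch.
        element.setdefault("children", {})["Other"] = [
            {"type": "textbox", "id": element["id"] + "_other",
             "label": "Please specify"}
        ]
    # Recurse into any conditional subtrees.
    for subtree in element.get("children", {}).values():
        for child in subtree:
            add_other_escape_hatch(child)
    return element

q = {"type": "dropdown", "id": "lang", "label": "Language",
     "options": ["Python", "Rust"]}
add_other_escape_hatch(q)
print(q["options"])  # → ['Python', 'Rust', 'Other']
```

Doing this in the tool rather than in the prompt means the escape hatch can't be forgotten, no matter how confident the model is in its predictions.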

Conclusion

In conclusion, a few points:

  • I'll be writing more. Watch this space.

  • If you have an LLM chat app, you should add inline structured GUI elements with conditionals to reduce amortized interaction latency. If you'd like to build on my library or tool definition, or just to talk shop, please reach out. I'd be happy to help.