AskUserQuestion Belongs in Every AI's System Prompt
The single most underrated UX decision in Claude Code is a small tool called AskUserQuestion. When Claude needs to clarify something, it can render an interactive question widget right in the terminal: a real, clickable list of options with descriptions, instead of yet another paragraph of prose asking what you want.
It sounds trivial. It isn't.
The problem with prose questions
Before this tool existed, the back-and-forth with an AI assistant looked like this:
AI: "To make sure I understand: do you want me to (a) refactor the auth module to use the new session API while preserving backwards compatibility, (b) refactor it and drop the legacy path, or (c) leave the existing module alone and add a parallel new one? Also, should I write tests first or just match the existing test style? And finally, would you prefer one big PR or split it into smaller ones?"
Me: reads, scrolls back, scans, types "b, match existing, smaller PRs"
Three questions packed into a wall of text, answered with shorthand that the model now has to interpret. Half the time I forget one of the questions and only answer two. The other half I misread an option. The model fills in the gaps with assumptions, and the conversation drifts off-target before any real work has started.
What AskUserQuestion does instead
The same exchange with the widget:
- Question 1: three labelled options, each with a one-line description of the tradeoff. Click one.
- Question 2: two options. Click.
- Question 3: two options. Click.
No re-reading, no shorthand, no ambiguity. The answers come back as a structured response that the model can act on without interpretation. You go from "translate prose into intent" to "pick a thing."
It supports multi-select for non-exclusive choices, bounded option counts so the model can't drown you in twelve checkboxes, and inline descriptions for each option's implications. Crucially, you can always answer "Other" and type freely, so the widget never traps you when the right answer isn't on the list.
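As a concrete sketch of what that structure buys you, here is what a payload for such a tool call might look like. The exact wire format belongs to Anthropic; the field names below (`questions`, `options`, `multi_select`) are illustrative assumptions, not the documented schema.

```python
# A hypothetical payload for an AskUserQuestion-style tool call.
# Field names are illustrative assumptions, not Anthropic's documented schema.
payload = {
    "questions": [
        {
            "question": "How should the auth module be refactored?",
            "options": [
                {"label": "Keep legacy path",
                 "description": "Move to the new session API, preserve backwards compatibility."},
                {"label": "Drop legacy path",
                 "description": "Move to the new session API, remove the old code."},
                {"label": "Parallel module",
                 "description": "Leave the existing module alone; add a new one alongside."},
            ],
            "multi_select": False,  # exactly one answer expected
        },
        {
            "question": "How should the work be shipped?",
            "options": [
                {"label": "One big PR", "description": "Everything lands at once."},
                {"label": "Smaller PRs", "description": "Split into reviewable slices."},
            ],
            "multi_select": False,
        },
    ]
}

# The clicks come back as structured data the model can act on directly,
# e.g. {"answers": ["Drop legacy path", "Smaller PRs"]} -- no shorthand to parse.
```

The point isn't the specific schema; it's that each answer arrives as a labelled choice rather than free text the model has to re-interpret.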
Why it should be in the default system prompt
Right now, models reach for AskUserQuestion inconsistently. A model will sometimes fire off a three-paragraph prose question when it could have asked three crisp multiple-choice questions. The tool is available, but the judgement of when to use it isn't reinforced.
This is fixable with a single rule in the system prompt: when the question has a bounded set of discrete choices, use the question tool; when it's genuinely open-ended, ask in prose.
That's the whole heuristic. It's not subtle and it's not hard for a model to follow. The reason every AI assistant doesn't do this by default is, I think, mostly inertia: chatbots have been prose-only for years, and the muscle memory is to phrase clarifications as paragraphs.
The moment you start using bounded-choice widgets for the right questions, though, the conversation feels qualitatively different. Faster. Less ambiguous. More like a tool, less like a chat.
Use cases beyond clarification
The widget isn't only for "which option do you want." It earns its keep in a much wider set of situations:
- Calibration before writing tasks: tone, audience, and length in one shot, with previews of each option's flavour.
- Diagnosis: "which of these symptoms matches your bug?" beats "describe your bug" for narrowing things down quickly.
- Gut-checks before irreversible actions: a quick "Proceed / Modify / Cancel" widget is cheaper than a long confirmation paragraph.
- Filtering broad queries: "which of these sub-topics interest you?" lets you multi-select rather than typing a list.
- Prioritisation: ranked or weighted options for "what matters most?"
All of these are bounded questions: the set of reasonable answers is small and knowable in advance. Asking them in prose wastes attention, every single time, on a question a click would answer.
The custom instruction
If your AI doesn't reach for this tool often enough on its own, you can nudge it explicitly. Drop the snippet below into your custom instructions in ChatGPT or Claude.
The first line names both the OpenAI-style identifier (ask_user_input) and Anthropic's (AskUserQuestion) so the prompt works across providers. The rest is a catalogue of the situations where the widget genuinely beats prose, so the model has concrete patterns to match against rather than a vague instruction to "use the tool more."
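A minimal version of such a snippet, reconstructed from the description above (adapt the wording to taste; the two tool identifiers are the ones this post names):

```
When you need to ask me something, prefer the interactive question tool
(ask_user_input on OpenAI-style platforms, AskUserQuestion on Anthropic's)
over prose whenever the question has a small, bounded set of discrete answers.
Use it for: clarifying requirements before starting work; calibrating tone,
audience, and length before writing; narrowing a diagnosis ("which of these
symptoms matches?"); quick Proceed / Modify / Cancel checks before
irreversible actions; filtering broad topics with multi-select; and
prioritisation. Ask in prose only when the question is genuinely open-ended,
and always offer an "Other" option for free-form answers.
```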
The takeaway
The web went through this transition decades ago: structured forms replaced free-text email confirmations because structured input is faster, cleaner, and easier to reason about for both sides. AI conversations are going through the same transition now, one tool call at a time.
OpenAI and Anthropic should put this on the default path. Until they do, the snippet above is the workaround.