Andrew Paul Simmons

PRODUCT ENGINEER

Feb 25, 2026 · 10 min read

Why Chat + Tactile UI Is the Future

A follow-up to "Chat Is Not the Final Interface"

In "Chat Is Not the Final Interface," I argued that chat breaks down when tasks require exploration, comparison, and iterative decision-making. Language is linear. Visual interfaces are parallel. Humans need both.

That argument stands. But it leaves a harder question unanswered: if chat alone falls short, and point-and-click menus are the overhead everyone hates, what does the right interface actually look like?

Two Jobs, One Flow

Watch someone use any complex app and you'll see two distinct phases.

First, they know what they want. "Refill my prescription." "Blur the background." "Create tickets from these meeting notes." The intent is already formed. Clear, specific, one sentence.

Then the hunt begins. Which menu? Which tab? Which sub-screen? They click, back out, scroll, search, and eventually arrive at the one screen where they can finally do the thing they meant to do from the start.

That hunt is the tax. It exists because most software uses the visual UI for two fundamentally different jobs: expressing intent and executing decisions. Menus and navigation exist to translate what you want into the app's information architecture. Forms, sliders, previews, and cards exist so you can see, adjust, and commit.

Chat is the best directional interface ever built. Nothing comes close to natural language for stating an outcome and its constraints in one breath. But chat is terrible at execution, the part where you need to see options, compare tradeoffs, and make judgments that only your eyes and hands can make.

The visual UI is the best execution interface ever built. Scrubbing a timeline. Dragging a slider. Scanning a comparison table. Reviewing a prefilled form before hitting submit. But it's a terrible directional interface. It forces you to translate your goal into clicks through someone else's menu structure.

Use each for the job it's good at, in sequence: say what you want, then use your eyes and hands to refine it.

What Chat+Tactile Actually Means

Most products that claim "chat + visual UI" bolt a chatbot onto the side of an existing app, a help widget that answers FAQs while the real interface sits untouched beside it. The chatbot tells you where to click but never takes you there. Two separate interfaces ignoring each other.

Chat+Tactile merges them. The conversation and the interface are the same flow.

You say what you want. The system skips every menu, every intermediate screen, and drops you directly into the right interface, pre-filled with your intent, ready for you to refine. You're not hunting through menus or starting from a blank form. You're looking at the thing, with your intent already applied, making final adjustments with your eyes and hands.

Chat replaces the navigation to the visual UI. The moment the right screen appears, the chat's job is done. The visual UI takes over, because that's where trust lives: something you can see, inspect, edit, and confirm.

Three beats: express, see, confirm.

And the system doesn't need to one-shot it. That's the design assumption that separates Chat+Tactile from the "AI does everything" fantasy. The model's first attempt is a draft. The interface is designed for the user to adjust, override, and iterate. Chat gets you close. The visual UI lets you get it right.
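The three beats above imply a data contract: the chat layer emits a structured intent, a router maps it to a prefilled screen draft, and the user edits the draft before anything commits. Here's a minimal sketch in TypeScript. The intent kinds, screen IDs, and field names are all hypothetical; in a real system a model would produce the intent and the prefill values.

```typescript
// Hypothetical data contract for express → see → confirm.
// Intent kinds, screen IDs, and field names are illustrative, not a real API.

type Intent =
  | { kind: "refill"; medication: string }
  | { kind: "blur_background"; radius?: number };

// What chat hands to the visual UI: a target screen plus prefilled,
// user-editable fields. Nothing is committed yet — this is a draft.
interface ScreenDraft {
  screen: string;
  fields: Record<string, string | number>;
}

// The router replaces navigation: intent in, prefilled screen out.
function routeIntent(intent: Intent): ScreenDraft {
  switch (intent.kind) {
    case "refill":
      return {
        screen: "refill_confirmation",
        fields: { medication: intent.medication, refills: 1 },
      };
    case "blur_background":
      return {
        screen: "blur_adjustment",
        // The model's first guess is a draft; the slider overrides it.
        fields: { radius: intent.radius ?? 8 },
      };
  }
}

// The user refines the draft with eyes and hands, then commits.
function applyEdits(
  draft: ScreenDraft,
  edits: Record<string, string | number>
): ScreenDraft {
  return { ...draft, fields: { ...draft.fields, ...edits } };
}
```

The point of the shape: the chat layer never executes anything. It only produces a `ScreenDraft`, and every field in it stays editable until the user confirms.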

What This Looks Like

Once you see the pattern, it appears everywhere. Here's what changes, and what stays the same, in four domains.

Healthcare: five screens become one sentence

Before: A patient wants to refill their metformin. The portal makes them navigate: Home → Medications → Active Prescriptions → Select Medication → Request Refill → Confirm. Five screens of translation between "refill my metformin" and the submit button.

After: The patient types "refill my metformin." The portal jumps straight to the refill confirmation screen, with medication, dosage, and refill count already filled in. The patient scans it, confirms it's right, and submits.

The confirmation screen didn't change. Same form, same fields. What disappeared is everything before it: five screens of navigation that existed only because the old interface couldn't understand what the patient wanted.

Project management: chaos into cards

Before: You leave a meeting with a messy page of notes and spend 30 minutes converting it into tickets. Copy a line, open a new ticket form, type a title, choose a priority, assign someone, add a description, link the context, repeat five times.

After: Paste the notes along with a few guidelines: priority rules and criteria for assigning owners. The system returns a set of ticket cards with title, priority, assignee, and description already filled from context. You scan all five at once. Fix the one where it guessed the wrong assignee. Bump the priority on another. Confirm.

Chat did the bulk translation, the tedious work of parsing unstructured notes into structured fields. The cards gave you something chat never could: a fast, parallel visual check across all five tickets simultaneously. You didn't have to read a text summary and hope it was right. You could see it was right.
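The draft-cards pattern can be sketched as two operations: turn notes plus guidelines into ticket drafts, and apply one-click overrides before anything is created. In this sketch a toy line parser stands in for the model, and the `TicketDraft` and `Guidelines` shapes are assumptions, not a real ticketing API.

```typescript
// Draft ticket cards from unstructured notes: a minimal sketch.
// A toy keyword parser stands in for the model call.

interface TicketDraft {
  title: string;
  priority: "low" | "medium" | "high";
  assignee: string | null;
}

// The guidelines pasted alongside the notes, e.g. keyword → priority.
interface Guidelines {
  priorityKeywords: Record<string, "high" | "medium">;
  defaultAssignee: string | null;
}

function draftTickets(notes: string, rules: Guidelines): TicketDraft[] {
  return notes
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.length > 0)
    .map((line) => {
      let priority: TicketDraft["priority"] = "low";
      for (const [keyword, p] of Object.entries(rules.priorityKeywords)) {
        if (line.toLowerCase().includes(keyword)) priority = p;
      }
      return { title: line, priority, assignee: rules.defaultAssignee };
    });
}

// Cards are drafts: one-click overrides before anything is committed.
function override(
  cards: TicketDraft[],
  index: number,
  patch: Partial<TicketDraft>
): TicketDraft[] {
  return cards.map((card, i) => (i === index ? { ...card, ...patch } : card));
}
```

The design choice worth noting: `override` patches a single card without touching the rest, which is what makes the parallel visual check cheap — fix one guess, leave the other four alone.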

Photo editing: skip the learning curve

Before: A user wants to blur the background of a photo. They open the editor and face a wall of tools, panels, sliders, and blend modes. Where is the blur filter? What's a Gaussian blur versus a lens blur? How do they select just the background? They click around, maybe watch a tutorial, maybe give up.

After: They say "blur the background." The system segments the subject, applies the blur, and opens the adjustment panel with the radius slider already set. The user didn't need to know the menu structure or what "inverse selection" means. They didn't need to learn the software to use the software.

But here's the part that matters: the slider is still there. They drag it. More blur, less blur. They see the result in real time. They make a creative judgment that no amount of text could express: that looks right. Chat removed the knowledge barrier. The visual UI preserved the creative control.

E-commerce: needs aren't keywords

Before: Someone types "I'm going backcountry skiing for the first time" into a search bar. The search engine treats this as keywords and returns a random pile of products. But the customer doesn't know what to buy yet. They don't even know the categories. They need to discover the checklist before they can shop it.

After: Conversation turns that intent into structure: a curated list organized by category (skis, bindings, boots, beacon, shovel, probe), each tagged Essential / Recommended / Nice to have, with a short explanation of why. Then the familiar visual UI takes over. Compare products, open detail pages, swap options, add to cart.

Chat understood a need that no search bar could parse. The visual UI gave the customer the browsing and comparison experience that no chat transcript could replace. They went from "I don't know what I need" to "I'm confident in this cart."
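The checklist handoff works because the conversation's output is structured, not prose: a list the familiar browse-and-compare UI can render directly. A sketch of that output shape, using the categories from the example; the tiers, `why` strings, and grouping helper are illustrative stand-ins for what a model would generate.

```typescript
// Structured output for "I'm going backcountry skiing for the first time":
// a checklist the ordinary browse/compare UI renders. Content is illustrative.

type Tier = "Essential" | "Recommended" | "Nice to have";

interface ChecklistItem {
  category: string;
  tier: Tier;
  why: string;
}

const checklist: ChecklistItem[] = [
  { category: "skis", tier: "Essential", why: "Touring skis take climbing skins." },
  { category: "bindings", tier: "Essential", why: "Free-heel mode for the climb up." },
  { category: "boots", tier: "Essential", why: "Walk mode for skinning." },
  { category: "beacon", tier: "Essential", why: "Avalanche rescue basics." },
  { category: "shovel", tier: "Essential", why: "Needed to dig out a burial." },
  { category: "probe", tier: "Essential", why: "Pinpoints a buried partner." },
];

// The visual UI groups by tier so the customer can shop the checklist.
function byTier(items: ChecklistItem[]): Map<Tier, ChecklistItem[]> {
  const grouped = new Map<Tier, ChecklistItem[]>();
  for (const item of items) {
    grouped.set(item.tier, [...(grouped.get(item.tier) ?? []), item]);
  }
  return grouped;
}
```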

Designing for the Handoff

If you build for this pattern, a few principles fall out immediately.

Assume the model won't one-shot it. Show a card, form, or preview the user can scan, edit, and correct. If the user can't override the AI's guess in one click, the handoff failed.

When intent is ambiguous, don't guess. Show the top 2–5 interpretations and let the user tap the right one. A dropdown that takes half a second beats a guess that's wrong 20% of the time.

Once the right interface is in front of the user, the chat's job is done. Don't keep a chat panel competing for attention with the visual UI it just invoked. The handoff should feel like a door opening.

Look for the "where do I click?" moments. Every time a user searches your help docs, asks your support bot how to find a feature, or pastes a screenshot of your app into ChatGPT, that's a signal. Those are exactly the places where Chat+Tactile routing should exist.

Start with the minimal interface needed to confirm direction. Reveal advanced controls when the user starts refining. The slider comes first; the blend mode dropdown comes when they ask for it.
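The ambiguity principle above amounts to a confidence gate: auto-route only when the top interpretation is clearly ahead, otherwise surface the top few for a one-tap pick. A sketch, with the 0.8 threshold, 0.2 margin, and option cap as illustrative numbers rather than tuned values.

```typescript
// Confidence-gated disambiguation: commit to the top interpretation only
// when it is confident AND clearly ahead; otherwise ask the user to pick.
// Threshold, margin, and maxOptions are illustrative, not tuned.

interface Interpretation {
  label: string;      // e.g. "Refill metformin 500mg"
  confidence: number; // model score in [0, 1]
}

type Routing =
  | { mode: "auto"; choice: Interpretation }
  | { mode: "ask"; options: Interpretation[] };

function route(
  candidates: Interpretation[],
  threshold = 0.8,
  maxOptions = 5
): Routing {
  const ranked = [...candidates].sort((a, b) => b.confidence - a.confidence);
  const top = ranked[0];
  // Require a clear lead over the runner-up, not just a high score.
  const clearLead =
    ranked.length < 2 || top.confidence - ranked[1].confidence > 0.2;
  if (top.confidence >= threshold && clearLead) {
    return { mode: "auto", choice: top };
  }
  return { mode: "ask", options: ranked.slice(0, maxOptions) };
}
```

Two close interpretations fall into "ask" even when both score well, which is exactly the half-second dropdown the principle calls for.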

The Point

Chat is for direction. The visual UI is for decisions.

People don't want to narrate their way through options, and they don't want to click through menus just to say what they already know. Chat+Tactile collapses the navigation tax, then hands the user a fast, visual, inspectable interface to refine and commit.

Stop making users learn your navigation to use your capabilities. Let them state the outcome. Then show them something they can verify with their eyes and change with their hands.