Conversational UI Is Already Obsolete. Here's What Replaces It.

Open a chat interface. Type a request. Get a response. Type a follow-up. Get a response. Repeat a dozen times to accomplish something that should have taken one request.
That's not a conversation. That's micromanagement.
Most AI products launched between 2022 and 2025 are conversational UI — structured as a back-and-forth message thread between a human and a model. This format felt revolutionary when it arrived. It's already beginning to look like a transitional pattern, and Jakob Nielsen said as much in his 2026 UX predictions: delegative interfaces are the successor, and they operate on fundamentally different design principles.
The design community, largely, hasn't noticed yet.
What Conversational UI Actually Is
Conversational UI takes the metaphor of dialogue and applies it to software interaction. You talk to the system. It responds. You talk back. The turn-taking structure of human conversation becomes the interaction model.
This worked because it was accessible. Anyone who had sent a text message already understood the interaction pattern. No learning curve, no manual, no affordance to discover. Type what you want, receive what you asked for. The friction was zero.
The ceiling is also low.
Conversational UI puts the user in the role of project manager. They set direction at every turn. The system executes one step at a time. To accomplish a multi-step goal — research a topic, draft a document, revise based on feedback, format for publication — the user must issue a new message at every stage. The "conversation" is really a sequence of discrete commands wearing a social metaphor.
When the task is simple, this is fine. When the task is complex, it's genuinely exhausting. Users who have tried to get ChatGPT or Claude to produce real deliverables through conversation know the frustration: the interface built to reduce effort ends up demanding the most taxing form of attention. Sustained, directive, iterative instruction without rest.
What Delegative UI Requires
Delegative UI changes the relationship. Instead of issuing requests and waiting for responses, the user assigns a goal — "book the cheapest flight from Bangkok to Berlin in the next three weeks that lands before noon" — and an agent plans and executes the steps to achieve it. The user doesn't manage the process. They set the objective, define the constraints, and optionally review the output.
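The handoff described above, an objective plus constraints rather than a message per step, can be sketched as a data shape. This is purely illustrative; every name here is a hypothetical, not any real product's API.

```typescript
// A hypothetical shape for a delegated task: the user states the goal
// and its constraints once, instead of steering the agent turn by turn.
interface Delegation {
  goal: string;                 // what the user wants achieved
  constraints: string[];        // hard limits the agent must respect
  reviewBeforeCommit: boolean;  // whether the user sees output before it ships
}

// The flight example from the text, expressed as a single handoff.
const flightSearch: Delegation = {
  goal: "Book the cheapest flight from Bangkok to Berlin",
  constraints: ["departs within the next three weeks", "lands before noon"],
  reviewBeforeCommit: true,
};
```

The point of the shape is what it omits: there is no message history, because the user's role ends at intent and constraints.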
This is what Nielsen calls the first new UI paradigm in roughly 60 years, dating back to the shift from command-line to graphical interfaces. Each previous shift changed how users input information: CLI to GUI moved typed syntax to visual metaphor, the web moved the GUI onto networked hypertext, mobile moved computing from the desk to touch and location, and conversational UI moved structured input to natural language. Delegative UI changes what users are doing at the interaction level. Not issuing commands. Not composing requests. Expressing intent and releasing control.
That's a different cognitive mode. And it requires a different design vocabulary.
The Primitives Nobody Has Solved
Conversational UI has a mature design language. Message bubbles, input fields, typing indicators, timestamps, read receipts — these are borrowed from messaging apps and have clear user expectations attached. Designers know how to build chatbot interfaces. Figma has a thousand templates.
Delegative UI has almost none of this yet.
The design problems are genuinely novel. Consider what a user needs when an agent is running a multi-step task autonomously:
Task visibility. What is the agent doing right now? What has it done? What's next? A chat log shows previous messages — it doesn't show work in progress. Users delegating to an agent need something closer to a task queue or a process view: current state, history, planned steps, estimated completion. These don't exist as standard components.
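A process view like the one described could render from per-step state rather than a message log. A minimal sketch, with all names assumed for illustration:

```typescript
// Each step carries its own status; the view derives "now" and "next"
// directly from the queue, which a chat transcript cannot express.
type StepStatus = "done" | "running" | "planned";

interface Step {
  label: string;
  status: StepStatus;
}

// What is the agent doing right now?
function currentStep(steps: Step[]): Step | undefined {
  return steps.find((s) => s.status === "running");
}

// What's next?
function nextSteps(steps: Step[]): Step[] {
  return steps.filter((s) => s.status === "planned");
}

const plan: Step[] = [
  { label: "Search flights", status: "done" },
  { label: "Compare fares", status: "running" },
  { label: "Hold seat", status: "planned" },
  { label: "Confirm booking", status: "planned" },
];
```

History, current state, and planned steps all fall out of one structure; estimated completion would be a further field per step.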
Intervention surfaces. Delegation doesn't mean total surrender. A user might want to let the agent run but step in at specific decision points — "pause before booking anything that costs more than $800" or "show me the three shortlisted options before confirming." Designing for partial autonomy, where the system knows when to proceed and when to surface a checkpoint, requires new UI patterns. The concept doesn't exist in conversational UI at all.
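The checkpoint logic above can be made concrete as a small policy function, using the $800 rule from the text. The shape is an assumption for illustration, not a prescription:

```typescript
// The agent proceeds autonomously until an action crosses a
// user-defined threshold, then pauses and surfaces a checkpoint.
interface Action {
  description: string;
  costUsd: number;
  irreversible: boolean; // e.g. a confirmed booking or a sent email
}

type Decision = "proceed" | "checkpoint";

function decide(action: Action, costLimitUsd: number): Decision {
  // Pause when the action is expensive or cannot be undone;
  // otherwise continue without interrupting the user.
  if (action.costUsd > costLimitUsd || action.irreversible) {
    return "checkpoint";
  }
  return "proceed";
}
```

The hard design problem is not this predicate but the surface it triggers: what the user sees at the pause, and how they resume, redirect, or cancel from there.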
Undo for multi-step operations. In a chat interface, you can ignore a bad response and ask again. In a delegative system that has sent emails, made bookings, or executed code, undo is an actual operation with real-world consequences. What does reversibility look like when the "action" spans fifteen tool calls over eight minutes? This is an unsolved design problem with stakes.
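One existing technique that maps onto this problem is the journal of compensating actions (the "saga" pattern from distributed systems): each step records how to reverse itself, and undo replays the reversals in reverse order. A minimal sketch with hypothetical names:

```typescript
// Each completed step logs a compensating action alongside itself.
interface JournalEntry {
  step: string;
  compensate: () => string; // returns a description of the reversal performed
}

// Undo in LIFO order: the most recent action is reversed first.
function undoAll(journal: JournalEntry[]): string[] {
  return [...journal].reverse().map((entry) => entry.compensate());
}

const journal: JournalEntry[] = [
  { step: "held seat", compensate: () => "released seat hold" },
  { step: "charged card", compensate: () => "refunded charge" },
];
```

The sketch only covers the reversible subset. Some actions, like a sent email, have no true compensation, which is exactly why this remains an unsolved design problem rather than a solved engineering one.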
Provenance and trust. If an agent produced a document, retrieved information, or made a recommendation — what did it use? Where did those sources come from? What decisions did it make that you didn't see? The comprehension debt problem in AI-assisted coding applies here too: when an autonomous system does something you didn't watch, you need a different kind of auditability. Not a log. A legible account.
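One way to sketch the difference between a log and a legible account: attach a provenance record to each claim or action, then render those records in plain language. The shape below is an assumption, not an established pattern:

```typescript
// A provenance record ties each assertion to its sources and to the
// decision the agent made, stated plainly rather than dumped as a trace.
interface Provenance {
  claim: string;      // what the agent asserted or did
  sources: string[];  // where the supporting information came from
  decision: string;   // the choice made, in the user's terms
}

// A "legible account" is a readable rendering of these records,
// not a raw event log.
function account(records: Provenance[]): string {
  return records
    .map((r) => `${r.claim} (sources: ${r.sources.join(", ")}; ${r.decision})`)
    .join("\n");
}
```

The rendering here is deliberately trivial; the design work is deciding which decisions rise to the level of the account and which stay in the log.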
The Trust Layer Is the Product
Most of the current generation of "AI agents" are conversational UI with some tool-calling bolted on. The underlying interaction model is unchanged — the user still drives every turn. True delegation means the user is not driving every turn, which means the user is accepting a level of uncertainty about what the system will do.
This uncertainty is where products will win or fail.
Users will delegate when they trust the system enough to let it run. They will stop delegating — and return to micromanagement via chat — the moment the system does something unexpected. The design problem is building the trust layer that earns that delegation. Not through reassuring copy. Through visible mechanism: showing what the agent is doing, why, with what information, against what constraints, and at what points it will check in.
That's a significant design challenge. The question of agentic UX patterns, how to design interfaces where the system has meaningful autonomy, is one the field has been circling without quite naming.
The naming matters. Designers who keep building better chat interfaces are optimizing the transitional pattern. Designers working on the primitives — task queues, intervention surfaces, reversibility, provenance — are building the next one.
The conversation is almost over. The question is what replaces it.
Photo by Matheus Bertelli via Pexels.