Agentic UX · Free Guide

Agent Transparency Patterns

6 design patterns to make AI agents legible, trustworthy, and genuine partners to users across the full agentic lifecycle.

By Valeria Donati
Why this guide

The trust problem isn't in the model.
It's in the design.

AI agents can now plan, act, and complete multi-step tasks with minimal human input. But most of them feel like black boxes: they disappear into their own process and resurface with results, leaving users confused, anxious, or simply unable to trust what happened.

These 6 patterns are drawn from real agentic product work. They map to the three critical phases of every agent interaction: Setup (before the agent acts), Runtime (while it acts), and Handoff (when it returns control). Each pattern addresses a specific moment where transparency breaks down and shows you how to design it back in.

Most AI agents feel like black boxes.

The user types a request, something happens behind the scenes, and results appear. No visibility into what the agent decided, what it assumed, or where it might be wrong.

That's not an AI problem. It's a design problem.

In the next 5 minutes, you'll get 6 patterns that cover the three moments where agent transparency breaks down — before it acts, while it acts, and when it hands control back to you.

Each pattern is a concrete design move you can apply to your next agentic feature: what to show, when to surface it, and why it builds the kind of trust that makes users actually rely on AI instead of working around it.

You don't need to be working on a cutting-edge AI product to use these. You need to be a designer who's serious about what comes next.

Phase 01 Setup
01
Setup
Goal Confirmation
The agent mirrors before it moves

Before the agent takes any action, it reflects the task back to the user in its own words. This isn't a loading screen; it's an alignment checkpoint. The agent is asking: "Is this actually what you mean?" Miscommunication caught here costs nothing. Miscommunication caught after 12 automated actions costs trust.

Design Principle
"Make the agent's understanding of the task visible before any action is taken."
Security Agent
Before I start, let me confirm what I'll do:
I understood your task as
Scan all user accounts inactive for 90+ days in the identity tenant, flag those with active licenses, and generate a remediation report.
Scope: read-only. I won't make any changes. Shall I proceed?
Yes, proceed
Edit scope
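As a sketch, the state behind a card like this could be a small, side-effect-free object. All names here (TaskConfirmation, confirmGoal) are hypothetical, not from any framework:

```typescript
// Sketch only: a possible state shape for a goal-confirmation checkpoint.
interface TaskConfirmation {
  restatement: string;                // the task mirrored back in the agent's words
  scope: "read-only" | "read-write";  // declared up front, before any action
  options: string[];                  // e.g. "proceed" and "edit scope"
}

// Pure function: building the checkpoint has no side effects, because the
// agent is not allowed to act until the user answers.
function confirmGoal(
  restatement: string,
  scope: TaskConfirmation["scope"],
): TaskConfirmation {
  return { restatement, scope, options: ["proceed", "edit-scope"] };
}

const checkpoint = confirmGoal(
  "Scan accounts inactive for 90+ days, flag those with active licenses, and generate a remediation report.",
  "read-only",
);
```

The key design choice: the checkpoint is data, not action. Nothing downstream runs until the user picks an option.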
02
Setup
Scope Declaration
What the agent will and won't do

Users fear agentic systems because they don't know how far they'll go. A clear scope declaration names what the agent will access, what actions it's authorized to take, and, crucially, what it explicitly won't do. The "won't do" column is just as important as the "will do" one. It's the boundary that makes the rest safe.

Design Principle
"A boundary named is a boundary respected. Show the limits before users need to worry about them."
Scope of this run
What I will do
Read security alert data from the last 30 days
Group alerts by severity and affected device
Draft a prioritized remediation plan
Flag items that may require human review
What I won't do
Modify any device settings or policies
Send notifications or emails to any users
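A scope declaration is strongest when the same object that renders the card also gates execution. A minimal sketch, with illustrative names (ScopeDeclaration, isAllowed):

```typescript
// Sketch only: a scope declaration that is enforced, not just displayed.
interface ScopeDeclaration {
  willDo: string[];  // actions the agent is authorized to take
  wontDo: string[];  // explicit deny list, shown to the user up front
}

function isAllowed(scope: ScopeDeclaration, action: string): boolean {
  if (scope.wontDo.includes(action)) return false; // the deny list wins
  return scope.willDo.includes(action);            // unlisted actions are denied too
}

const runScope: ScopeDeclaration = {
  willDo: ["read-alerts", "group-alerts", "draft-plan", "flag-for-review"],
  wontDo: ["modify-settings", "send-notifications"],
};
```

Note the default: an action that appears in neither list is refused. The boundary the user saw is the boundary the agent gets.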
Phase 02 Runtime
03
Runtime
Action Narration
Show what's happening, not just that something is happening

A spinner tells the user to wait. Action narration tells the user what their agent is actually doing. Each step should name the action, the tool being used, and the reason, creating a running log of decisions that users can scan, trust, or interrupt. The agent is not a vending machine. It's a collaborator with a process worth showing.

Design Principle
"Every tool call is a decision. Show the decision, not just its output."
Running · 3 of 5 steps
1. Authenticated to tenant · Identity API · 0.3s
2. Pulled inactive accounts · Identity Provider · 847 records found
3. Checking license assignments · Licensing API · running
4. Flag high-cost redundancies · Awaiting step 3
5. Generate report · Awaiting step 4
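One possible data model behind a card like this: each entry names the action, the tool, and a human-readable detail. Names and the "Report builder" tool are illustrative assumptions:

```typescript
// Sketch only: a running log of narrated agent steps.
type StepStatus = "done" | "running" | "pending";

interface NarratedStep {
  action: string;     // what the agent is doing, in plain language
  tool: string;       // which tool or API the step uses
  detail: string;     // duration, record count, or what the step waits on
  status: StepStatus;
}

// "Running · 3 of 5 steps": steps that have started count toward progress.
function narrationHeader(steps: NarratedStep[]): string {
  const started = steps.filter((s) => s.status !== "pending").length;
  return `Running · ${started} of ${steps.length} steps`;
}

const log: NarratedStep[] = [
  { action: "Authenticated to tenant", tool: "Identity API", detail: "0.3s", status: "done" },
  { action: "Pulled inactive accounts", tool: "Identity Provider", detail: "847 records found", status: "done" },
  { action: "Checking license assignments", tool: "Licensing API", detail: "running", status: "running" },
  { action: "Flag high-cost redundancies", tool: "Licensing API", detail: "awaiting step 3", status: "pending" },
  { action: "Generate report", tool: "Report builder", detail: "awaiting step 4", status: "pending" },
];
```

Because every step carries a status, the same log can drive the progress header, the scannable history, and an interrupt control.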
04
Runtime
Assumption Surfacing
When the agent fills a gap, it says so

Agents make inferences constantly about intent, context, defaults. Most of the time this is invisible, and that's fine. But when an assumption meaningfully affects the outcome, it needs to surface. Not as an error, not as a question that blocks progress, but as a transparent moment: "I assumed X. You can let me continue or redirect me." This is where agent design becomes ethics in practice.

Design Principle
"Hidden assumptions are hidden decisions. Make them visible at the moment they're made."
Assumption flagged
I made an assumption

Your request mentioned "recent alerts" but didn't specify a time window. I assumed the last 30 days based on your team's usual reporting cycle. Want me to use a different range?

Keep 30 days
Change range
This won't stop the analysis; I'll continue with 30 days unless you change it.
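The non-blocking behavior is the whole pattern, so it's worth sketching: the agent records the assumption, proceeds with its default, and applies an override only if the user supplies one. Names (Assumption, surfaceAssumption, resolve) are hypothetical:

```typescript
// Sketch only: a non-blocking assumption flag.
interface Assumption {
  gap: string;     // what the request left unspecified
  assumed: string; // the default the agent filled in
  basis: string;   // why that default was chosen
}

function surfaceAssumption(gap: string, assumed: string, basis: string): Assumption {
  return { gap, assumed, basis };
}

// The user can redirect at any point; until then, the default applies.
function resolve(a: Assumption, override?: string): string {
  return override ?? a.assumed;
}

const timeWindow = surfaceAssumption(
  'time window for "recent alerts"',
  "last 30 days",
  "the team's usual reporting cycle",
);
```

The `basis` field matters: showing *why* the default was chosen is what turns a hidden decision into a transparent one.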
Phase 03 Handoff
05
Handoff
Confidence Signaling
The agent rates its own output

An agent presenting results with uniform confidence is a design failure. Real agentic output has uneven reliability: some findings are rock solid, others are extrapolations from incomplete data. Confidence signaling gives users the metadata they need to calibrate their own judgment. It's not a disclaimer. It's the agent being honest about the difference between what it found and what it inferred.

Design Principle
"Tell users not just what the agent found, but how sure it is and why."
Result confidence
Finding confidence 91%

The 847 flagged accounts were verified against live license data. High confidence: all records sourced from authoritative APIs with no missing data.

The estimated annual cost savings ($42k) is an extrapolation from your current license tier. Verify it against your actual billing data before presenting to leadership.
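A sketch of per-finding confidence with provenance, so verified facts and extrapolations never carry the same weight. The 0.6 score on the savings estimate is an invented illustration, as are the type names:

```typescript
// Sketch only: findings tagged with confidence and provenance.
type Provenance = "verified" | "extrapolated";

interface Finding {
  claim: string;
  confidence: number;     // 0..1
  provenance: Provenance;
  why: string;            // the reason behind the score, shown to the user
}

// Extrapolated or low-confidence findings get a verify-before-use flag
// rather than being hidden: the goal is calibration, not suppression.
function needsVerification(f: Finding, threshold = 0.8): boolean {
  return f.provenance === "extrapolated" || f.confidence < threshold;
}

const accounts: Finding = {
  claim: "847 inactive accounts, 214 with active licenses",
  confidence: 0.91,
  provenance: "verified",
  why: "sourced from authoritative APIs with no missing data",
};

const savings: Finding = {
  claim: "$42k/yr estimated savings",
  confidence: 0.6, // illustrative value
  provenance: "extrapolated",
  why: "based on current license tier, not actual billing data",
};
```

Provenance is checked before the numeric score: a high number on extrapolated data is exactly the kind of false certainty the pattern exists to prevent.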
06
Handoff
Re-entry Point
A clear door back to human control

When an agent finishes, users need more than a results screen. They need a legible moment of handoff: a summary of what was done, what's now pending, and what decisions require a human. The re-entry point is the design of that moment. It closes the loop, returns agency to the user, and makes the agent feel like a collaborator that knows when to step back.

Design Principle
"The agent's job is to hand back a better situation than it found and make that handoff legible."
Handoff · Your turn
Analysis complete

I reviewed 847 inactive accounts, flagged 214 with active licenses ($42k/yr estimated), and drafted a remediation plan. The report is ready. 3 items need your decision.

📋 Review and approve the remediation plan
👤 3 accounts flagged as edge cases: your call
📤 Share report with your team
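The three buckets above map cleanly onto a handoff object: done, pending, and decisions that only a human should make. A minimal sketch with hypothetical names (Handoff, handoffSummary):

```typescript
// Sketch only: the handoff separates completed work, pending work,
// and decisions reserved for the human.
interface Handoff {
  done: string[];
  pending: string[];
  decisions: string[];
}

function handoffSummary(h: Handoff): string {
  return `${h.done.length} completed · ${h.pending.length} pending · ${h.decisions.length} need your decision`;
}

const run: Handoff = {
  done: ["Reviewed 847 inactive accounts", "Drafted remediation plan"],
  pending: ["Share report with the team"],
  decisions: ["Approve remediation plan", "Rule on 3 edge-case accounts"],
};
```

Keeping `decisions` as its own field, rather than burying human calls inside a task list, is what makes the return of agency explicit rather than implied.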