AI UX Playbook
Defining how Cisco designs AI
I created the practice, principles, and tools that give Cisco's design teams a shared language and method for building agentic AI experiences consistently and at scale, across every product.
Lead Designer
Scope
Cross-org · 11+ product teams · Design, PM, Engineering
Status
Launched internally (2025) with MCP Server and Agentic Review Tool in active use (2026)
About
Cisco's teams were building AI products fast, but without a shared definition of what good even meant.
Different teams were making independent calls about how agents should behave, how much autonomy was appropriate, and how trust should be communicated to users. The result wasn't just inconsistency. It was a fragmentation risk: dozens of products, each creating its own AI conventions that users would eventually have to reconcile.
The problem wasn't capability. It was shared judgment.
Strategy
Most design systems solve for visual consistency. This problem needed something different: a framework for behavioral consistency. How should an AI agent communicate uncertainty? When should it act, and when should it ask? What does it mean for a user to genuinely trust a system that operates on their behalf?
These weren't UI questions. They were harder than that. And the answers needed to work not just for one product, but across Cisco's entire security and networking portfolio, for teams with different users, different stakes, and different levels of AI maturity.
My approach: design the practice, not just the product. Give every team a shared vocabulary, a clear way to evaluate their decisions, and tools that made the right approach easier than the wrong one. If I could establish that foundational layer (the principles, the moments framework, the team rituals), consistency across products would follow.
Process
Step 1 - Establishing the framework
I started with the question most teams were skipping: when does agentic AI actually make sense? I wrote the decision framework for when to use GenAI vs. agents vs. neither, grounded in the task, the stakes, and the user's need for control. From there, I defined three Agentic Design Principles (Agency, Trust, and Quality) as testable criteria, not aspirational values. Each comes with specific test questions designers can apply to their own work.
Step 2 - The 8 Agentic Moments
Working with designers across Cisco, I developed a framework of eight key moments in any agent's lifecycle where design decisions matter most: Understand, Trigger, Plan, Verify, Execute, Resolve, Feedback, Monitor. Each is documented with best-in-class qualities, real examples, and accessibility requirements woven into every moment, not treated as a separate checklist.
Step 3 - The Evolved Design Process
I reimagined the design process for AI UX to reflect how AI products actually work. The key shift: traditional UX asks, "Is this usable at launch?" AI UX asks, "Is this system behaving well over time?" That reframes the designer's role from screen-maker to someone who shapes how the system behaves, and what team rituals support that over time.
Step 4 - Tooling for a self-sustaining practice
Documentation gets read once and forgotten. I needed the practice embedded in how teams work, not consulted after the fact. So I built two tools:
The AI UX Playbook MCP Server gives teams seven specialized tools to query Playbook guidance directly from their design and development environments: targeted, phase-specific, and principle-grounded, without leaving their workflow.
The Agentic Design Review Tool connects the Figma MCP and the AI UX Playbook MCP in Cursor. It reads Figma frames, generates a structured review, and adds annotations directly on the canvas: which principles are applied well, which moments are covered, what accessibility gaps exist, what's missing, and what the designer should do next.
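To make the MCP Server integration concrete: MCP clients (such as Cursor) invoke a server's tools over standard JSON-RPC 2.0 using the `tools/call` method. The sketch below shows that request shape; the tool name `get_moment_guidance` and its arguments are hypothetical, since the Playbook server's actual seven tools and their schemas aren't documented here.

```python
import json

def make_playbook_query(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 request in the shape an MCP client
    sends to invoke a named tool on an MCP server."""
    request = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",  # standard MCP method for calling a server tool
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(request)

# Hypothetical tool name and arguments, for illustration only.
payload = make_playbook_query(
    "get_moment_guidance",
    {"moment": "Verify", "phase": "design", "principle": "Trust"},
)
```

In practice the client's MCP runtime handles this framing automatically; the point is that phase- and moment-specific queries can be parameterized, which is what lets guidance surface inside the designer's existing workflow.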
Improvements
1
The first version of the Playbook read like a reference manual: comprehensive, accurate, and easy to ignore. The real need was a decision aid, something that could answer "What should I do right now, in this phase, for this specific moment?" That insight reshaped both the content structure and the tools built to surface it.
2
The second pivot was accessibility. It started as a Responsible AI checklist near the end. After mapping it to each Agentic Moment, it became clear that accessibility requirements aren't additive; they're structural. Streaming content breaks screen readers. Timed auto-proceed excludes users with motor impairments. A verification step unreachable via keyboard isn't just an accessibility failure; it's a trust failure. Accessibility moved from a section to a dimension woven through every moment.
Outcomes
The Playbook launched in 2025 and is now the foundational reference for AI experience design across Cisco's security and networking portfolio.
250+ designers and researchers across 11 product teams joined the practice, standardizing how Cisco builds AI and reuses components across products. Six internal design programs adopted it as their primary guide for AI education and innovation.
130+ members actively present and contribute at bi-weekly AI design sessions. The AI Hackathon and internal design sprints use the Playbook as their core resource, a sign it moved from reference material to living practice.
110+ contributors are keeping the Playbook current, adding content that reflects the needs of teams across Cisco. Contribution at this scale means the practice is self-sustaining. It no longer depends on a central author.
The team received Cisco's AI in Design Innovation Award, and the Playbook was recognized at dzone, Cisco's internal design conference. The MCP Server and Design Review Tool are in active use as of 2026.
Beyond the Playbook itself, I mentored six junior designers across product teams, coaching them on stakeholder negotiation, getting designs implemented, and building their own fluency in AI design, from using AI tools in their workflow to designing AI-native experiences. Two of those designers were promoted to senior roles during the period I mentored them.
Reflections
The Playbook was built in parallel with the products it was meant to guide, which meant some teams were already mid-build when the framework arrived. It helped with iteration but had less influence on early decisions than it could have. If I were doing this again, I'd create a one-page pre-brief for any team starting an AI feature, before the full framework was ready. Getting something lightweight into teams early would have been more valuable than waiting for the comprehensive version.
“The AI UX Playbook is a program, not just a resource — it’s a democratic organizational enabler empowering design teams to use AI in design and design for AI.”
— Design Program Manager, Design Ops at Cisco