Charlotte AI Agent Builder

A conversational platform for building, testing, and managing security automation agents. It enabled security analysts to create and deploy automation agents through conversation and configuration, with no code required.

CrowdStrike

AI
Security
Automation
Platform

Executive Summary

Overview

  • Product: Charlotte AI Agent Builder
  • Company: CrowdStrike
  • Role: Lead Product Designer
  • Timeline: 5 months
  • Scope: AI agent platform for security workflow automation

Key Contributions

  • Designed the platform architecture for an AI agent ecosystem within the security console
  • Created a conversational builder enabling natural language agent creation
  • Designed a dual-mode configuration system balancing guided setup with manual control

Outcomes

  • Enabled 100+ early adopter customers to create and deploy AI agents in the CrowdStrike platform during the first few weeks of launch
  • Customers created 150+ custom agents, with out-of-the-box compliance and threat detection agents seeing the highest repeated use
  • Established the foundational agent pattern for the platform — taking CrowdStrike from zero agentic capability to a scalable, extensible AI automation system

Context & Problem

Security teams increasingly rely on AI tools for investigation, triage, and workflow automation. However, creating useful agents often requires engineering support, scripting knowledge, or fragmented tools. The opportunity was to design an AI agent platform that would allow security analysts to create, configure, and deploy their own AI agents directly inside the cybersecurity platform. The challenge was balancing flexibility with usability. Power users needed deep configuration and control, while less technical users needed a guided experience that made agent creation intuitive and approachable.

Objectives & Metrics

CrowdStrike customers could automate workflows through Fusion SOAR and Charlotte Workbooks, but had no way to create or use AI agents within the platform. The goal was to design a first-class agent creation and management system — enabling customers to build custom agents for their own workflows, use out-of-the-box CrowdStrike-provided agents, and deploy agentic capabilities across cybersecurity research, threat detection, investigation, and auto-remediation.

Results:

  • 100+ early adopter customers onboarded within the first few weeks
  • 150+ custom agents created by customers at launch
  • 4 out-of-the-box template agents shipped, directly informed by user research
  • Compliance and threat detection agents saw the highest repeated usage among early adopters
  • Strong early demand pointed to continued adoption growth

My Role

This was a greenfield 0→1 project — there were no prior examples of agent creation or management anywhere in the CrowdStrike platform. I was selected to lead it based on my prior work as one of the lead designers on the CrowdStrike App Platform and my experience delivering features across the Fusion SOAR workflow automation team and the SIEM data ingestion team.

I led a team of four designers. I owned the end-to-end UX architecture, all production-ready designs shown in this case study, user recruitment, and execution of all 8 user interviews. A senior designer on the team co-developed the research protocol and helped compile findings into a report.

Two other designers contributed early wireframes for the agent list page, stakeholder management support during the design sprint, and competitive intelligence for our alignment workshop.

Approach & Key Decisions

Competitive Research

The project began with researching emerging AI agent platforms to understand how other tools approached agent creation, orchestration, and configuration.

I created a competitive landscape board in Miro and evaluated platforms such as Microsoft's Copilot agent builder and other AI automation tools.

This analysis revealed common patterns around agent configuration, knowledge management, and workflow orchestration that informed the platform direction.

Competitive landscape and research gathered on emerging AI agent platforms.

Information Architecture & User Flows

Before designing the interface, I mapped the core information architecture and user workflows required for an AI agent platform.

This included flows for:

  • Creating new agents
  • Discovering existing agents
  • Managing and monitoring agents
  • Connecting agents to tools and knowledge sources

Mapping these flows helped align the team on the underlying system structure before moving into interface design.

Early workflow diagrams mapping how users create, manage, and deploy agents.

Cross-Organizational Collaboration & Alignment

Designing an AI platform required alignment across product, engineering, and AI research teams.

To facilitate this, I ran a design sprint workshop with stakeholders across the organization.

During the workshop we mapped the end-to-end user journey, identified opportunities, and generated early concepts that later informed the high-fidelity designs.

Workshop artifacts used to align stakeholders on the AI agent platform direction.

AI Agent Platform

The first step in enabling an agent ecosystem was designing a central hub where users could explore, create, and manage their agents.

The AI Agents landing page provides a unified control center where users can:

  • Browse template agents
  • Create custom agents
  • Manage and monitor agents
  • Search and filter agents

Central platform view for managing and creating AI agents.

Conversational Agent Builder

To make agent creation accessible to a wide range of users, I designed a conversational builder powered by an AI assistant.

Users can describe the agent they want to create in natural language, and the system generates the configuration, goals, and structure of the agent.

Conversational builder allowing analysts to create agents using natural language.

Advanced Configuration Interface

While the conversational builder simplified creation, advanced users still needed direct control over agent behavior.

I designed an advanced configuration interface exposing the agent's components and settings in a structured view.

Users can define goals, select models, connect tools, and configure knowledge sources.
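To make that configuration surface concrete, the settings it exposes could be modeled roughly as follows. This is a hypothetical sketch for illustration only; the field names and values are assumptions, not CrowdStrike's actual schema:

```typescript
// Hypothetical shape of an agent configuration, for illustration only.
// Field names and values are assumptions, not CrowdStrike's actual schema.
interface AgentConfig {
  name: string;
  goals: string[];            // natural-language objectives the agent pursues
  model: string;              // underlying LLM the agent runs on
  tools: string[];            // platform capabilities the agent may invoke
  knowledgeSources: string[]; // data sources the agent can reference
}

// Example: a compliance-monitoring agent expressed in this shape.
const complianceAgent: AgentConfig = {
  name: "Compliance Monitor",
  goals: ["Flag assets drifting out of policy baselines"],
  model: "charlotte-default",
  tools: ["asset-inventory-query", "ticket-creation"],
  knowledgeSources: ["compliance-policy-docs"],
};

console.log(complianceAgent.name);
```

Structuring the configuration this way is what lets the conversational builder and the manual interface stay interchangeable: both modes read and write the same underlying object.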

Manual configuration interface for advanced users.

Design Challenges & Tradeoffs

One of the core design challenges was balancing simplicity with flexibility.

A fully configurable system could overwhelm analysts new to AI automation, while overly simplifying the system would limit advanced use cases.

The solution was a dual-mode design:

  • A conversational builder for fast creation
  • A structured configuration interface for full control

This allowed both new and experienced users to work efficiently.

User Research

To validate the concept before launch, I conducted interviews with 8 security professionals and walked them through the proposed workflows using prototypes.

Key findings that directly shaped the final design:

  • 3 of 8 participants recommended the agent types that became the 4 out-of-the-box template agents
  • Users were evenly split on creation mode — 4 preferred manual configuration, 4 preferred the conversational builder — directly validating the dual-mode approach
  • 2 participants independently suggested agent cloning, which was added as a feature
  • 3 participants said they would struggle to come up with agent ideas on their own, which informed the Generate Agent examples surface at the top of the creation flow — giving users a starting point rather than a blank slate

Outcomes & Impact

Prior to this work, CrowdStrike customers had no way to create or use AI agents within the platform. Automation was possible through Fusion SOAR and Charlotte Workbooks, but these required manual configuration and couldn't support the kind of autonomous, multi-step agentic workflows customers were beginning to expect from an AI-native security platform.

The Charlotte AI Agent Builder changed that — taking the platform from zero agentic capability to a full agent creation, management, and deployment system in a single release.

Results from early access launch:

  • 100+ customers onboarded in the first few weeks
  • 150+ custom agents created, spanning threat detection, investigation, auto-remediation, compliance, and custom workflows
  • 4 out-of-the-box template agents shipped and immediately adopted, with compliance and threat detection agents seeing the highest repeated use
  • Dual-mode builder (conversational + manual) validated by research and adopted by both technical and less technical users
  • Early demand signaled continued adoption growth, with more customers actively requesting access at launch

What I'd Do Next

  • Agent run history showing when each agent fired, what actions it took, and what it returned
  • Sandbox and simulation mode for testing agents against historical data before production deployment
  • Approval and permissions layer for controlling who can create and promote agents in an enterprise environment
  • Agent health signals covering silent failures, LLM degradation, and unusual output patterns
  • Adaptive onboarding that evolves the builder experience as a user's familiarity grows
Sean Crisman — Product Design Leader