SIEM Data Parser

Scriptable tools to parse, normalize, and route security log and event data into SIEM and data lakes.

Parser editor with script surface and live test logs.

Executive Summary

Overview

  • Product: SIEM Data Parser
  • Company: CrowdStrike
  • Role: Lead Product Designer
  • Timeline: 5 months
  • Scope: Tooling to parse, normalize, and route log and event data into SIEM and data lakes

Key Contributions

  • Designed the end-to-end parser creation and editing workflow for faster log onboarding
  • Simplified complex parsing concepts into a usable, auditable interface for security teams
  • Partnered with engineering to support scalable onboarding across many log types

Outcomes

  • Reduced friction for onboarding new log sources
  • Enabled repeatable, standardized parser authoring and maintenance
  • Improved clarity and confidence for security teams managing log pipelines

Context & Problem

CrowdStrike’s customers stream large volumes of third-party security logs into the platform, but each source formats fields differently. Raw events were hard for humans to read, inconsistent for detection content, and expensive to normalize in backend code. Parser logic lived in engineer-owned scripts, so onboarding a new integration or fixing a broken parser meant opening tickets, waiting on deploys, and guessing at production behavior with little visibility into parser health.
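To make the normalization problem concrete, here is a minimal sketch (hypothetical vendors and field names, not the product's actual parser language): two sources describe the same event with different field names, and a per-source mapping renames each into a shared schema.

```python
import json

# Hypothetical raw events from two vendors describing the same activity;
# each names the source-IP and action fields differently.
raw_events = [
    '{"src_ip": "10.0.0.5", "act": "blocked"}',          # vendor A
    '{"sourceAddress": "10.0.0.5", "action": "deny"}',   # vendor B
]

# Per-source field mappings into a shared schema (illustrative names).
FIELD_MAPS = {
    "vendor_a": {"src_ip": "source.ip", "act": "event.action"},
    "vendor_b": {"sourceAddress": "source.ip", "action": "event.action"},
}

def normalize(raw: str, source: str) -> dict:
    """Parse a raw JSON event and rename its fields to the shared schema."""
    event = json.loads(raw)
    mapping = FIELD_MAPS[source]
    return {mapping.get(key, key): value for key, value in event.items()}

print(normalize(raw_events[0], "vendor_a"))  # both yield a "source.ip" field
print(normalize(raw_events[1], "vendor_b"))
```

Maintaining this kind of mapping per integration, in code, is exactly the work that previously required an engineer and a deploy.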

Objectives & Metrics

We set out clear goals across ownership, speed, visibility, and intelligence:

Ownership

  • Give detection engineers and analysts a first-class UI for creating and maintaining parsers without needing a code deploy.
  • Shift day-to-day parser edits from backend engineering into the detection engineering team.

Speed & Efficiency

  • Cut the time to onboard a new log source from days to hours.
  • Enable live testing against real log samples so users can iterate quickly and confidently.

Visibility

  • Centralize visibility into parser status, usage, and data volume in a single library.
  • Help teams quickly spot broken or stale parsers and prioritize fixes based on impact.

Intelligence

  • Introduce AI-assisted parser generation so users can bootstrap new parsers without writing every script from scratch.

My Role

Role

Lead Product Designer & User Researcher

Tools

Figma, FigJam

Timeline

5 months

I owned:

  • User research with SOC analysts and detection engineers
  • End-to-end parser workflow mapping
  • Parser editor UX (script surface, tests, AI assist)
  • Parser library and parser details information architecture
  • High-fidelity prototypes and interaction design
  • Design specifications, QA, and implementation support

I led the project from discovery through launch, partnering closely with PM and staff engineers.

Approach & Key Decisions

Script-Based Editor with Live Testing

Rather than abstract parsing into a drag-and-drop flow, we leaned into our users’ familiarity with scripting and query languages and designed a script-based editor with live testing.

The editor surfaces the parser script side-by-side with test log data, pass/fail counts, and a run-tests control so users can validate changes against real log samples as they iterate.

Parser editor designed for detection engineers, with script surface, live test logs, and AI-assisted generation.

Parser Library as a Source of Truth

On top of the editor, we introduced a parser library that lists every parser across a tenant – including type (default, imported, custom), health status, 7-day data volume, and last-updated metadata.

Search and filters make it easy to find the parser behind a given integration and quickly understand coverage and impact across the estate.

Parser library showing health, type, 7-day data volume, and recency for every parser in a tenant.

Parser Details and AI Assist

From the library, a parser details view exposes richer context: parser metadata, the script and test logs, plus all data connectors currently using that parser so users can see the blast radius of any change before they edit.

They can then review, tweak, and re-run tests against real sample data, keeping the script as the source of truth while dramatically reducing the time and effort to author new parsers.

Parser details view with metadata, script, test logs, and connected data sources for blast-radius awareness.

Outcomes & Impact

  • Shifted parser ownership from backend engineers to detection engineers and analysts, with most day-to-day edits happening directly in the UI.
  • Cut the typical time to bring a new log source online from multiple days of back-and-forth to a same-day workflow using live tests on production-like logs.
  • Reduced manual script authoring effort while improving confidence in changes through side-by-side testing.
  • Gave teams a single place to monitor parser status, spot non-functional or stale parsers, and prioritize fixes based on data volume and impact.

What I'd Do Next

Next, I’d extend the system’s intelligence and observability:

  • Ship a ‘Generate parser script’ flow as a fast follow, letting users describe the parser they need in natural language and generate a starting script in one click.
  • Extend AI assistance beyond generation to include inline explanations of script snippets and suggested field mappings when new fields appear in logs.
  • Automatically detect parsing anomalies or drops so teams are alerted when parsers silently fail or degrade.
  • Add richer analytics around parser performance over time – error rates, dropped events, and latency – to guide optimization work.
  • Explore reusing this editor pattern for other data-transformation tooling in the platform so users have a consistent way to author and test data logic.
Sean Crisman — Product Design Leader