Traces
What it is
Traces provides an observability layer for AI executions across a program. It displays a timeline chart of execution volume, aggregate statistics, a filterable list of all executions, and a detail drawer for inspecting individual traces.
Why it matters
Understanding how AI agents perform over time is essential for tuning skills, controlling costs, and debugging failures. Traces gives you the data to answer “what ran, when, how long, which model, and what did it cost?”
Key concepts
- Trace filters: TraceFilterBar with controls for date range (default 30 days), skill, trigger type, review status, model, and free-text search
- Stats row: TraceStatsRow shows aggregate metrics — total executions, success rate, average duration, total cost
- Timeline chart: TraceTimelineChart visualizes execution volume over time as a sparkline
- Trace list: TraceList renders filterable, sortable rows of executions
- Detail drawer: TraceDetailDrawer slides in with full trace details — input, output, timing, tokens, model, and cost
- Date range filtering: client-side filtering within the selected date window
- Model filter: filter by AI model (Opus, Sonnet 4.5 v2, Sonnet 4.5)
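The filter state behind these controls can be sketched in TypeScript. This is an illustrative assumption, not the component's actual props: the TraceFilters type and the defaultFilters helper are hypothetical names.

```typescript
// Hypothetical shape of the state TraceFilterBar manages; field names are
// illustrative assumptions, not the component's actual API.
type TraceFilters = {
  dateRange: { start: Date; end: Date }; // defaults to the last 30 days
  skill?: string;
  triggerType?: string;
  reviewStatus?: string;
  model?: string;
  search?: string; // free-text search
};

// Build the default filter window: the last 30 days, ending now.
function defaultFilters(now: Date = new Date()): TraceFilters {
  const THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000;
  return {
    dateRange: { start: new Date(now.getTime() - THIRTY_DAYS_MS), end: now },
  };
}
```

All fields except the date range are optional, which matches the behavior above: with no filters set, every execution inside the window is shown.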
How to use it
- Navigate to [Program] > Traces.
- View the Timeline Chart for execution volume trends.
- Review the Stats Row for aggregate metrics.
- Use the Filter Bar to narrow by date range, skill, trigger, model, or search term.
- Click a row in the Trace List to open the Detail Drawer with full trace information.
- Adjust the date range to analyze different time periods.
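Since filtering is client-side within the selected window, narrowing the list amounts to a predicate over the loaded rows. A minimal sketch, assuming a simplified ExecutionRow shape and a hypothetical matchesFilters helper (neither is the actual implementation):

```typescript
// Simplified execution row; the real agentExecutions record has more fields.
type ExecutionRow = {
  id: string;
  skill: string;
  model: string;
  startedAt: Date;
};

// Client-side narrowing: keep rows inside the date window whose skill or
// model matches the free-text search (case-insensitive).
function matchesFilters(
  rows: ExecutionRow[],
  start: Date,
  end: Date,
  search = ""
): ExecutionRow[] {
  const q = search.toLowerCase();
  return rows.filter(
    r =>
      r.startedAt >= start &&
      r.startedAt <= end &&
      (q === "" ||
        r.skill.toLowerCase().includes(q) ||
        r.model.toLowerCase().includes(q))
  );
}
```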
Data model
This feature uses the following tables:
- agentExecutions — Execution records with skill, model, status, timing, tokens, and cost
- traceAnalytics — Precomputed trace statistics and timeline data
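As a sketch, an agentExecutions record and the aggregates derived from it might look like this. The field names are inferred from the description above, not the actual schema, and summarize is a hypothetical helper showing the kind of values traceAnalytics would precompute:

```typescript
// Inferred field layout for an agentExecutions record; the real schema
// may differ.
interface AgentExecution {
  id: string;
  skill: string;
  model: string;
  status: "success" | "error";
  startedAt: string; // ISO timestamp
  durationMs: number;
  inputTokens: number;
  outputTokens: number;
  costUsd: number;
}

// The aggregates the stats row displays; traceAnalytics would hold
// precomputed values like these rather than recomputing per request.
function summarize(rows: AgentExecution[]) {
  const total = rows.length;
  const successes = rows.filter(r => r.status === "success").length;
  return {
    totalExecutions: total,
    successRate: total ? successes / total : 0,
    avgDurationMs: total ? rows.reduce((s, r) => s + r.durationMs, 0) / total : 0,
    totalCostUsd: rows.reduce((s, r) => s + r.costUsd, 0),
  };
}
```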