Intermediate ~15 min read

Reading Performance Reports

The HTML report is the primary output of every PerfGuard run. Learn how to read every section, spot real regressions, and share findings with your team.

1. Report Overview and Structure

PerfGuard generates a single self-contained HTML file with no external dependencies: Chart.js is embedded inline, the report data is inlined as JSON, and all styles live in the file itself. You can open it in any browser, email it, or attach it to a PR.
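Because the data is inlined as JSON, reports can also feed automation. Below is a minimal sketch of pulling that payload back out of the file; the exact embedding (a `<script type="application/json">` block) is an assumption here, so check your report's actual markup before relying on it:

```python
import json
import re

def extract_report_data(html: str) -> dict:
    """Pull the inlined JSON payload out of a report file.

    Assumes the data sits in a <script type="application/json">
    block -- an assumption, not a documented PerfGuard format.
    """
    match = re.search(
        r'<script[^>]*type="application/json"[^>]*>(.*?)</script>',
        html,
        re.DOTALL,
    )
    if match is None:
        raise ValueError("no inlined JSON block found")
    return json.loads(match.group(1))

# Minimal stand-in for a real report file:
sample = '<html><script type="application/json">{"scenarios": 5}</script></html>'
data = extract_report_data(sample)
```

Once extracted, the same data that drives the charts can be diffed, archived, or piped into a dashboard.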

The report is organized top-to-bottom, from high-level summary to detailed analysis:

  1. Banner — immediate pass/fail signal
  2. Executive Summary — plain-English health overview
  3. Per-Scenario Tables — stat-by-stat comparison data
  4. Trend Charts — historical performance over time
  5. Histograms — frame time distribution
  6. Hitch Analysis — frame spikes and their causes
  7. Diagnostics — memory, capture quality, thermals
[Screenshot: Full report overview (scrolled). The full PerfGuard HTML report opened in a browser, showing the complete layout from banner through diagnostics.]
2. The Banner: Pass/Fail at a Glance

The banner at the top of the report gives you the answer immediately. It shows:

  • Green / PASS — All scenarios within threshold. No action needed.
  • Red / FAIL — One or more regressions detected. Scroll down for details.
  • Amber / WARNING — Marginal results or data quality issues. Review recommended.

The banner also shows the build label, timestamp, platform, and the number of scenarios tested.

[Screenshot: Report banner (fail state). Red background, "Performance Regression Detected" text, build label, timestamp, 3/5 scenarios passing.]
3. Executive Summary

The executive summary translates raw numbers into plain English. It's designed for leads and producers who need to understand impact without reading stat tables. It covers:

  • Overall health assessment
  • Which scenarios regressed and by how much
  • Whether frame budgets are being met
  • Key recommendations (e.g., "investigate GPU time increase in Arena level")
💡 Tip: Copy the executive summary text directly into PR descriptions or Slack messages when reporting regressions to the team. It's written to be understandable without the full report context.
4. Per-Scenario Tables

Each scenario gets a detailed comparison table showing every tracked stat with:

  • Baseline value — The reference measurement
  • Current value — The new measurement
  • Delta — Absolute difference (ms, count, etc.)
  • Delta % — Percentage change, color-coded red/green
  • Threshold — The configured tolerance
  • Status — Pass/Fail badge per stat

Tables support filtering by stat group (timing, memory, rendering) and can be exported to CSV directly from the report.
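The arithmetic behind those columns can be sketched in a few lines. This is a hypothetical reimplementation for illustration, not PerfGuard's code; the real tool may apply direction-aware thresholds or per-stat rules:

```python
def compare_stat(baseline: float, current: float, threshold_pct: float):
    """Compute the table columns: delta, delta %, and pass/fail status.

    A sketch of the arithmetic the table shows; PerfGuard's actual
    rules (e.g. treating decreases as improvements) may differ.
    """
    delta = current - baseline
    delta_pct = (delta / baseline) * 100 if baseline else 0.0
    status = "PASS" if delta_pct <= threshold_pct else "FAIL"
    return delta, delta_pct, status

# FrameTime moving from 14.0 ms to 14.9 ms against a 5% tolerance:
delta, delta_pct, status = compare_stat(14.0, 14.9, threshold_pct=5.0)
```

Here the roughly 6.4% increase exceeds the 5% tolerance, so this stat would carry a FAIL badge in the table.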

[Screenshot: Stat comparison table. Per-scenario table showing FrameTime, GPUTime, and GameThreadTime with baseline/current/delta columns and red highlighting on regressed rows.]
5. Trend Charts

If you have run history enabled, trend charts show how each stat has changed over time. This is invaluable for spotting gradual regressions that stay under threshold but accumulate.

Each chart plots the stat value (Y axis) against build labels or dates (X axis). The baseline value appears as a horizontal reference line, and the threshold band is shown as a shaded region.
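One way to make that accumulating under-threshold drift concrete is to compare the newest build against the original baseline rather than only against its predecessor. A rough sketch, not a PerfGuard feature:

```python
def cumulative_drift(history, baseline, threshold_pct):
    """Flag slow drift that per-build comparisons miss.

    Each build may move less than the threshold versus its
    predecessor, yet the latest value can still sit well above
    the original baseline. Illustrative helper, not a real API.
    """
    latest = history[-1]
    drift_pct = (latest - baseline) / baseline * 100
    return drift_pct, drift_pct > threshold_pct

# Five builds each creep well under a 5% per-build threshold,
# but the latest sits ~6% above the original 14.0 ms baseline:
history = [14.1, 14.3, 14.5, 14.6, 14.85]
drift_pct, flagged = cumulative_drift(history, baseline=14.0, threshold_pct=5.0)
```

This is exactly the pattern the trend chart makes visible at a glance: a line creeping toward the shaded threshold band over many builds.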

[Screenshot: Trend chart with baseline reference. Chart.js line chart of the FrameTime trend over 20 builds, with a baseline reference line at 14 ms and the threshold band shaded up to 14.7 ms.]
💡 Tip: Click a data point in the trend chart to see the exact value and build label. Charts auto-expand when you click a scenario card header.
6. Frame Time Histograms

Histograms show the distribution of frame times, which tells a much richer story than the mean alone. A healthy distribution is a tight cluster well under budget. A problematic one has a long tail or bimodal shape.

Look for:

  • Tight cluster — Good. Consistent frame times with low variance.
  • Long right tail — Occasional hitches pulling the mean up.
  • Bimodal peaks — Two distinct performance modes (e.g., streaming vs stable).
  • Wide spread — High variance. Check for thermal throttling or background interference.
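A couple of summary numbers can back up the visual read. The sketch below uses the p99-to-median ratio as a rough long-tail indicator; the ~1.5 cutoff is an illustrative assumption, not a PerfGuard rule:

```python
import statistics

def frame_time_shape(frame_times_ms):
    """Summarize distribution shape: median, p99, and tail ratio.

    Rough heuristic (not PerfGuard's own): a p99/median ratio
    well above ~1.5 suggests the long right tail described above.
    """
    frame_times = sorted(frame_times_ms)
    median = statistics.median(frame_times)
    # quantiles(n=100) yields the 1st..99th percentile cut points
    p99 = statistics.quantiles(frame_times, n=100)[98]
    return median, p99, p99 / median

# A tight 13-14 ms cluster with a couple of 22 ms hitch frames:
times = [13.5] * 95 + [14.0] * 3 + [22.0] * 2
median, p99, tail_ratio = frame_time_shape(times)
```

In this example the median stays at 13.5 ms while p99 sits at 22 ms: a healthy-looking mean hiding exactly the kind of tail the histogram exposes.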
[Screenshot: Frame time histogram. A tight peak at 13-14 ms with a small tail extending to 22 ms; the budget line is marked at 16.67 ms.]
7. Hitch Analysis

The hitch analysis section identifies individual frames that exceeded the budget threshold and groups them by severity:

  • Minor — 1–2x budget (noticeable stutter)
  • Major — 2–4x budget (visible freeze)
  • Severe — 4x+ budget (hard stall)

For each hitch, the report shows the frame number, duration, and bottleneck attribution (GPU-bound, game thread-bound, render thread-bound) based on which thread had the highest time that frame.

Hitch clusters (multiple hitches in close succession) are highlighted separately, as they indicate systemic issues rather than one-off spikes.
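The severity tiers and cluster grouping above can be sketched as follows. The 3-frame cluster gap is an assumption for illustration; PerfGuard's actual clustering rules may differ:

```python
def classify_hitches(frame_times_ms, budget_ms=16.67, cluster_gap=3):
    """Bucket over-budget frames by severity and group nearby hitches.

    Severity tiers mirror the report: minor (1-2x budget),
    major (2-4x), severe (4x+). Cluster detection is a simple
    sketch: hitches within `cluster_gap` frames are grouped.
    """
    hitches = []
    for frame, ms in enumerate(frame_times_ms):
        if ms <= budget_ms:
            continue
        ratio = ms / budget_ms
        severity = "minor" if ratio < 2 else "major" if ratio < 4 else "severe"
        hitches.append((frame, ms, severity))

    clusters = []
    for frame, _, _ in hitches:
        if clusters and frame - clusters[-1][-1] <= cluster_gap:
            clusters[-1].append(frame)
        else:
            clusters.append([frame])
    # Only multi-hitch groups count as clusters
    return hitches, [c for c in clusters if len(c) > 1]

# Two hitches close together (frames 10 and 12) plus an isolated stall:
times = [14.0] * 10 + [25.0, 15.0, 40.0] + [14.0] * 5 + [70.0]
hitches, clusters = classify_hitches(times)
```

Frames 10 and 12 land in one cluster while the lone severe stall at frame 18 does not, matching the report's distinction between systemic issues and one-off spikes.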

[Screenshot: Hitch analysis detail. Three minor hitches (GPU-bound), one major hitch (game thread-bound) at frame 342, and a cluster detected at frames 340-345.]
8. Performance Diagnostics

The diagnostics card provides meta-analysis beyond raw stat comparison:

  • Memory leak detection — Flags monotonically increasing memory usage across the capture
  • Capture quality validation — Checks for sufficient frame count, warmup adequacy, and data completeness
  • Thermal throttle detection — Identifies frame time degradation over time that suggests thermal throttling
  • Coefficient of Variation (CoV) — Flags stats with high variance that may produce unreliable comparisons
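The leak and CoV checks are simple enough to sketch. These helpers are illustrative, not PerfGuard APIs, and the real heuristics are presumably more tolerant of measurement noise:

```python
import statistics

def cov_pct(samples):
    """Coefficient of variation as a percentage: stdev / mean * 100."""
    return statistics.stdev(samples) / statistics.mean(samples) * 100

def looks_like_leak(memory_mb):
    """Crude monotonic-growth check, in the spirit of the report's
    leak flag (the real heuristic likely allows small dips)."""
    return all(b >= a for a, b in zip(memory_mb, memory_mb[1:]))

# Noisy draw-call counts (~11% CoV) and steadily climbing memory:
draw_calls = [980, 1010, 1250, 990, 1005]
memory = [512.0, 514.5, 517.1, 520.0, 523.8]
```

A high CoV like the draw-call series here is why the report warns before trusting a comparison: a stat that varies by 11% run-to-run cannot reliably detect a 5% regression.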
[Screenshot: Performance Diagnostics card. Memory Leak: none detected; Capture Quality: good (842 frames); Thermal: no throttling; CoV Warnings: DrawCalls (12.3% variance).]
Warning: If the diagnostics card shows thermal throttling, your comparison results may not be reliable. The regression you're seeing might be heat-induced slowdown rather than a code change. Re-run after letting the machine cool down.
9. Dark Mode and PDF Export

The report ships in dark mode by default (matching the PerfGuard brand). A toggle in the top-right corner switches to light mode for printing or embedding in documents.

To export as PDF, use the browser's built-in print function (Ctrl+P / Cmd+P). The report includes print-optimized CSS that adjusts layouts, hides interactive elements, and ensures charts render cleanly on paper.

💡 Tip: When exporting to PDF, switch to light mode first for better readability on white paper. The print stylesheet handles the rest automatically.
10. Sharing Reports with Your Team

Since the report is a single HTML file with no dependencies, sharing is straightforward:

  • PR attachment — Upload as a build artifact and link from the PR description
  • Slack/Teams — Drag and drop the HTML file into a channel
  • Static hosting — Publish to an internal web server or S3 bucket
  • Jenkins — Use HTML Publisher to serve directly from the build page
  • Email — Attach the file or export as PDF
💡 What's next? Now that you can read reports, learn about advanced threshold tuning and multi-run analysis to make your regression detection more accurate and reduce false positives.