
Quick Start: Your First Performance Test

Install the plugin, create your first performance scenario, record a baseline, and compare against it. Everything you need to catch regressions in under 15 minutes.

Step 1: Install the PerfGuard Plugin

PerfGuard can be installed from the FAB marketplace or by copying the plugin directly into your project. For manual installation, clone or extract the plugin into your project's Plugins/ directory.

Terminal
# Option A: Copy plugin folder
cp -r PerfGuard/ YourProject/Plugins/PerfGuard/

# Option B: Symlink (useful during development)
ln -s /path/to/PerfGuard YourProject/Plugins/PerfGuard
Command Prompt (Run as Administrator)
:: Option A: Copy plugin folder
xcopy /E /I PerfGuard\ YourProject\Plugins\PerfGuard\

:: Option B: Symlink (useful during development)
mklink /D "YourProject\Plugins\PerfGuard" "C:\path\to\PerfGuard"

After copying, regenerate your project files so UnrealBuildTool picks up the new plugin module (on Windows, right-click the .uproject file and choose "Generate Visual Studio project files"; on a source engine build you can run GenerateProjectFiles instead).

[Screenshot: PerfGuard CLI initializing project directories]
Step 2: Enable the Plugin in UE Editor

Open your project in Unreal Editor and navigate to Edit → Plugins. Search for "PerfGuard" and enable it. The editor will prompt you to restart — go ahead and restart.

[Screenshot: UE5 Plugins window with PerfGuard enabled under the Installed category]
💡 Tip: If you installed via FAB, the plugin is enabled automatically. You can verify by checking that PerfGuard appears under the "Testing" category in the Plugins window.
📁 Data Directories: PerfGuard stores all data under your project's Saved/ directory: baselines in Saved/PerfGuard/Baselines/, results and HTML reports in Saved/PerfGuard/Results/, and run history in Saved/PerfGuard/History/. These paths can be overridden per-suite in suite.json (using baselines_dir, results_dir, history_dir, report_output) or via CLI flags like --baselines-dir and --output.
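For reference, a per-suite override in suite.json might look like the sketch below. Only the key names come from the list above; the values are placeholder paths, and any other keys your suite needs are not shown:

```json
{
  "baselines_dir": "Saved/PerfGuard/Baselines",
  "results_dir": "Saved/PerfGuard/Results",
  "history_dir": "Saved/PerfGuard/History",
  "report_output": "Saved/PerfGuard/Results/report.html"
}
```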
🎲 Bundled Demo Content: PerfGuard ships with a demo scene at Plugins/PerfGuard/Content/Demo/, including DemoScene.umap, an LS_TestFlythrough Level Sequence, and sample baselines. You can use these to try PerfGuard immediately without setting up your own content.
Quick Capture: Want to try PerfGuard instantly? Right-click any map in the Content Browser and select PerfGuard: Quick Capture. It runs a 10-second capture of Frame Time, GPU Time, Draw Calls, and Memory with sensible defaults — no scenario or Level Sequence required. Results appear in Window → PerfGuard Results.
Step 3: Create a Level Sequence Camera Flythrough

Performance scenarios use Level Sequences to drive a deterministic camera path through your scene. This ensures you're testing the same visual workload every time.

In the editor, go to Cinematics → Add Level Sequence, then add a CineCamera Actor and keyframe a path that covers the heaviest areas of your map.

[Screenshot: UE5 Sequencer showing CineCameraActor with Transform keyframes on LS_TestFlythrough]
💡 Tip: Aim for 10–30 seconds of camera movement. Shorter runs have higher variance; longer runs waste CI time. Focus on the heaviest views in your level.
Step 4: Create a PerfScenario Asset

Right-click in the Content Browser and select PerfGuard → Performance Scenario. This creates a new UPerfScenario data asset that tells PerfGuard what map to load, which sequence to play, and which stats to capture.

[Screenshot: Content Browser context menu showing PerfGuard > Performance Scenario option]
Step 5: Configure the Scenario

Open your new PerfScenario asset and fill in the core properties:

  • Map — the level to load for this test
  • Level Sequence — the camera flythrough you created
  • Tracked Stats — which CSV profiler columns to monitor (FrameTime, GPUTime, DrawCalls, etc.)
  • Warmup Frames — frames to skip at the start (shader compilation, asset streaming)
[Screenshot: PerfScenario asset Details panel showing Map, Level Sequence, Tracked Stats, and Warmup Frames]
Step 6: Install the Python CLI

PerfGuard's analysis tooling runs as a Python CLI. You need Python 3.11 or later.

Terminal
cd Plugins/PerfGuard/Tools

# Verify the CLI is working
python3 perfguard_cli.py --help
Command Prompt
cd Plugins\PerfGuard\Tools

:: Verify the CLI is working
python perfguard_cli.py --help
Warning: PerfGuard requires Python 3.11+. Earlier versions will fail with syntax errors. Run python --version to verify.

You can also use the shell launchers instead of calling Python directly:

  • Tools/Launchers/perfguard.sh <command> (Linux/Mac)
  • Tools\Launchers\perfguard.bat <command> (Windows)

To scaffold starter configuration files (suite config, baseline directory, CI workflow template), run:

perfguard init
Step 7: Run Gauntlet Capture

Launch the engine in headless game mode with the Gauntlet controller. This loads your scenario, plays the sequence, captures CSV profiler data, and exits automatically.

Terminal
UnrealEditor YourProject.uproject \
    -game \
    -gauntlet=PerfGuardGauntletController \
    -scenario=MyScenario \
    -csvprofile \
    -trace=cpu,gpu,frame,counters,rhicommands,rendercommands \
    -RenderOffScreen \
    -unattended \
    -log
Command Prompt
UnrealEditor-Cmd.exe YourProject.uproject ^
    -game ^
    -gauntlet=PerfGuardGauntletController ^
    -scenario=MyScenario ^
    -csvprofile ^
    -trace=cpu,gpu,frame,counters,rhicommands,rendercommands ^
    -unattended ^
    -log

The CSV output lands in Saved/Profiling/CSV/. The controller logs progress under the LogPerfGuard category.

💡 Tip: The -game flag is required. Gauntlet controllers only initialize in game mode, not editor mode. Without it, the controller silently won't start.
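The CSV profiler output is a plain comma-separated table, with one row per captured frame and one column per stat. A quick way to peek at which columns a capture contains, using only the standard library (the sample file written here is synthetic, for illustration):

```python
import csv
from pathlib import Path

def list_stat_columns(csv_path):
    """Return the stat column headers from a CSV profiler capture."""
    with open(csv_path, newline="") as f:
        header = next(csv.reader(f))
    return header

# Synthetic two-frame capture, mimicking the one-row-per-frame layout
sample = Path("sample_capture.csv")
sample.write_text("FrameTime,GPUTime,DrawCalls\n16.6,14.2,1200\n16.9,14.5,1210\n")
print(list_stat_columns(sample))
```

The stat names shown match the Tracked Stats listed in Step 5; a real capture will contain many more columns.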
Step 8: Record a Baseline

Once you have CSV data from a known-good build, record it as your baseline. This is the reference point all future runs will be compared against.

Terminal
python3 perfguard_cli.py baseline record MyScenario \
    --csv Saved/Profiling/CSV/Profile_2026.01.15-10.30.00.csv \
    --platform Win64 \
    --warmup 60
Command Prompt
python perfguard_cli.py baseline record MyScenario ^
    --csv Saved\Profiling\CSV\Profile_2026.01.15-10.30.00.csv ^
    --platform Win64 ^
    --warmup 60

This parses the CSV, trims warmup frames, removes outliers, and saves a .json baseline file to the baselines directory.
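The record step boils down to trimming warmup frames, dropping outliers, and summarizing what's left. The sketch below illustrates that idea only; it is not PerfGuard's actual code, and the 2-sigma outlier cut and p95 statistic are assumptions:

```python
import statistics

def summarize(frame_times, warmup=60, sigma=2.0):
    """Trim warmup frames, drop >sigma outliers, return summary stats."""
    samples = frame_times[warmup:]
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    # Keep only samples within sigma standard deviations of the mean
    kept = [s for s in samples if abs(s - mean) <= sigma * stdev]
    return {
        "mean": statistics.fmean(kept),
        "p95": sorted(kept)[int(0.95 * (len(kept) - 1))],
        "samples": len(kept),
    }

# 60 expensive warmup frames, then steady ~16.7 ms frames with one hitch
frames = [50.0] * 60 + [16.7, 16.6, 16.8, 16.7, 40.0, 16.7, 16.6, 16.8]
print(summarize(frames))
```

The warmup count plays the same role as the --warmup 60 flag above: shader compilation and streaming dominate the first frames and would skew the baseline.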

Step 9: Compare Against Baseline

After making code or content changes, run the scenario again and compare the new CSV against your recorded baseline.

Terminal
python3 perfguard_cli.py baseline compare MyScenario \
    --csv Saved/Profiling/CSV/Profile_2026.01.16-14.22.00.csv \
    --threshold-percent 5.0 \
    --budget 60fps
Command Prompt
python perfguard_cli.py baseline compare MyScenario ^
    --csv Saved\Profiling\CSV\Profile_2026.01.16-14.22.00.csv ^
    --threshold-percent 5.0 ^
    --budget 60fps

The CLI exits with code 0 if all stats are within threshold, or 1 if any regression is detected. This exit code is what CI pipelines use for pass/fail gating.
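To make the threshold and budget semantics concrete, here is a rough sketch of the comparison logic (not the CLI's implementation). A 60fps budget translates to a frame-time ceiling of 1000/60 ≈ 16.67 ms:

```python
def check_regression(baseline_ms, current_ms, threshold_percent=5.0, budget_fps=60):
    """Return (passed, reasons) for one frame-time stat vs. baseline and budget."""
    reasons = []
    # Relative regression check against the recorded baseline
    delta_pct = (current_ms - baseline_ms) / baseline_ms * 100.0
    if delta_pct > threshold_percent:
        reasons.append(f"+{delta_pct:.1f}% over baseline (limit {threshold_percent}%)")
    # Absolute budget check: 60 fps -> 16.67 ms per frame
    budget_ms = 1000.0 / budget_fps
    if current_ms > budget_ms:
        reasons.append(f"{current_ms:.2f} ms exceeds {budget_ms:.2f} ms budget")
    return (not reasons, reasons)

print(check_regression(16.0, 16.5))  # within 5% and under budget
print(check_regression(16.0, 17.5))  # over threshold and over budget
```

In a CI script, mapping the boolean onto sys.exit(0) or sys.exit(1) reproduces the pass/fail gating described above.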

Warning: Make sure you're comparing data captured on the same hardware and platform as the baseline. GPU thermals, background processes, and driver versions can all cause false positives.
Step 10: Generate an HTML Report

Generate a self-contained HTML report with charts, tables, and diagnostics that you can share with your team or attach to a pull request.

Terminal
python3 perfguard_cli.py report results/ \
    --output report.html
Command Prompt
python perfguard_cli.py report results\ ^
    --output report.html

Open report.html in any browser. The report is fully self-contained with no external dependencies — Chart.js is embedded inline.

[Screenshot: PerfGuard HTML report showing executive summary, trend charts, and scenario details]
Step 11: View Reports in the Dashboard

For an interactive experience with working action buttons, use the built-in dashboard server instead of opening the HTML file directly:

Terminal
python3 perfguard_cli.py dashboard \
    --project-dir . \
    --ue-root /path/to/UnrealEngine \
    --results-dir Saved/PerfGuard/Results

This starts a local server (default port 8076) and opens your browser. The dashboard provides buttons to:

  • New Run — run a full capture and comparison
  • Update Baseline — re-record the baseline
  • Clear All — wipe all data and start fresh
Warning: You can also open the HTML report files directly (e.g. double-clicking report.html), but the action buttons will be disabled — they require the dashboard server to function.
Step 12: Next Steps

You've completed the basics. Here's where to go from here:

  • Authoring Performance Scenarios — set up comprehensive tests with Level Sequences, Replays, or Duration Only captures
  • CI/CD Pipeline Setup — automate regression testing in your build pipeline
  • Advanced Threshold Tuning — after 5+ runs, use perfguard baseline auto-tune to get statistically grounded threshold recommendations
  • Scenario Browser — open Window → PerfGuard Scenarios in the editor for a dockable panel listing all scenarios with Open, Validate, and Run buttons
  • Validate Scenarios — run UnrealEditor-Cmd YourProject.uproject -run=PerfGuard -scenario=MyScenario for fast pre-flight validation (checks scenario asset well-formedness without GPU rendering)
  • Clean Up — use perfguard clean to remove PerfGuard data (baselines, results, or history) when needed
💡 Multi-Run Analysis: Use perfguard analyze with multiple CSV captures to compute confidence intervals and identify volatile metrics before trusting your thresholds.
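As an illustration of what multi-run analysis can compute (the analyze command's actual method may differ), here is a normal-approximation 95% confidence interval over per-run mean frame times, using only the standard library:

```python
import math
import statistics

def mean_ci(run_means, z=1.96):
    """Mean of per-run averages with an approximate 95% confidence interval."""
    m = statistics.fmean(run_means)
    # Standard error of the mean; z=1.96 is the normal-approximation factor
    sem = statistics.stdev(run_means) / math.sqrt(len(run_means))
    return m, (m - z * sem, m + z * sem)

# Mean frame time (ms) from five captures of the same scenario
runs = [16.7, 16.9, 16.6, 17.1, 16.8]
mean, (lo, hi) = mean_ci(runs)
print(f"{mean:.2f} ms, 95% CI [{lo:.2f}, {hi:.2f}]")
```

A wide interval relative to your threshold percentage is a sign the metric is too volatile to gate on yet.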