Intermediate ~20 min read

CI/CD Pipeline Setup

Automate performance regression testing in your build pipeline. PerfGuard includes ready-made configs for GitHub Actions, Jenkins, and GitLab CI.

1. Prerequisites

Performance testing in CI requires a self-hosted runner with GPU access. Standard cloud-hosted runners don't have GPUs and can't produce meaningful frame time data. You'll need:

  • Unreal Engine 5.4+ installed on the runner
  • Python 3.11+ with PerfGuard CLI dependencies installed
  • A built project — Gauntlet runs against a cooked or editor build
  • Dedicated GPU — consistent GPU hardware for reliable comparisons
  • Baselines recorded on the same hardware as the runner
Warning: Shared CI machines where other jobs compete for GPU/CPU resources will produce noisy results. Dedicate a machine (or at least an exclusive time window) to performance testing.
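
Before the first run, a short preflight script on the runner can confirm these requirements are met. A minimal sketch (the project and suite file names are illustrative; adjust them to your setup):

```shell
#!/usr/bin/env bash
# Preflight check for a PerfGuard CI runner (illustrative sketch).
# Prints OK/FAIL per requirement instead of aborting on the first miss.
check() {  # check <description> <command...>
  if "${@:2}" >/dev/null 2>&1; then
    echo "OK   $1"
  else
    echo "FAIL $1"
  fi
}

check "Python 3.11+ available"  python3 -c 'import sys; sys.exit(sys.version_info < (3, 11))'
check "Project file present"    test -f YourProject.uproject
check "suite.json present"      test -f suite.json
```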

2. Creating suite.json Configuration

A suite file defines which scenarios to run, their thresholds, and the overall test configuration. Create a suite.json in your project root:

suite.json
{
  "project": "YourProject.uproject",
  "scenarios": [
    "MainMenu_Flythrough",
    "GameplayLevel_Overview",
    "OpenWorld_Drive"
  ],
  "threshold_percent": 5.0,
  "budget": "60fps",
  "warmup_frames": 60,
  "platform": "Win64"
}

The perfguard run command reads this file and orchestrates the full capture-compare-report cycle for all listed scenarios.
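
Before committing the file, a quick local check catches malformed JSON or missing keys. A sketch; the required-key list mirrors the example above and is not an exhaustive schema:

```shell
# Validate suite.json from the project root (sketch; extend REQUIRED
# with any additional keys your suite uses).
python3 - <<'EOF'
import json, pathlib

REQUIRED = ["project", "scenarios", "threshold_percent", "platform"]
path = pathlib.Path("suite.json")
if not path.exists():
    print("suite.json not found -- run this from the project root")
else:
    suite = json.loads(path.read_text())
    missing = [k for k in REQUIRED if k not in suite]
    if missing:
        print("suite.json missing keys:", missing)
    else:
        print(f"suite.json OK: {len(suite['scenarios'])} scenario(s)")
EOF
```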

3. GitHub Actions Setup

Copy the provided workflow file into your repository:

Terminal
cp Plugins/PerfGuard/Tools/ci/github-actions/perfguard.yml \
   .github/workflows/perfguard.yml
Command Prompt
copy Plugins\PerfGuard\Tools\ci\github-actions\perfguard.yml ^
     .github\workflows\perfguard.yml

The workflow runs on self-hosted runners tagged with gpu. Key sections:

perfguard.yml (excerpt)
name: PerfGuard Regression Test
on:
  pull_request:
    branches: [main]
  schedule:
    - cron: '0 3 * * *'  # Nightly at 3 AM

jobs:
  perf-test:
    runs-on: [self-hosted, gpu]
    steps:
      - uses: actions/checkout@v4
      - name: Run PerfGuard Suite
        run: |
          python Plugins/PerfGuard/Tools/perfguard_cli.py \
            run suite.json --mode compare
      - name: Upload Report
        uses: actions/upload-artifact@v4
        with:
          name: perfguard-report
          path: results/report.html
[Screenshot: GitHub Actions workflow run showing the PerfGuard capture, compare, and report steps all passing]

4. Jenkins Setup

Copy the Jenkinsfile and configure your Jenkins instance:

Terminal
cp Plugins/PerfGuard/Tools/ci/jenkins/Jenkinsfile ./Jenkinsfile
Command Prompt
copy Plugins\PerfGuard\Tools\ci\jenkins\Jenkinsfile .\Jenkinsfile

The Jenkinsfile uses the HTML Publisher plugin to serve the generated report directly from the Jenkins build page. Install the plugin via Manage Jenkins → Plugins → HTML Publisher.

Jenkinsfile (excerpt)
pipeline {
    agent { label 'gpu' }
    stages {
        stage('Perf Test') {
            steps {
                sh 'python Tools/perfguard_cli.py run suite.json --mode compare'
            }
        }
    }
    post {
        always {
            publishHTML(target: [
                reportDir: 'results',
                reportFiles: 'report.html',
                reportName: 'PerfGuard Report'
            ])
        }
    }
}

5. GitLab CI Setup

Copy the GitLab CI configuration:

Terminal
cp Plugins/PerfGuard/Tools/ci/gitlab/.gitlab-ci.yml ./.gitlab-ci.yml
Command Prompt
copy Plugins\PerfGuard\Tools\ci\gitlab\.gitlab-ci.yml .\.gitlab-ci.yml
.gitlab-ci.yml (excerpt)
perf-test:
  stage: test
  tags: [gpu]
  script:
    - python Tools/perfguard_cli.py run suite.json --mode compare
  artifacts:
    paths:
      - results/report.html
    when: always
    expire_in: "30 days"

The report is uploaded as a pipeline artifact and available for download directly from the merge request page.


6. Using perfguard-ci.sh

If you don't want to maintain CI-specific configs, the perfguard-ci.sh wrapper auto-detects your CI environment and runs the appropriate commands:

Terminal
# Works in GitHub Actions, Jenkins, GitLab CI, or locally
bash Plugins/PerfGuard/Tools/ci/perfguard-ci.sh suite.json
Command Prompt
:: No .sh wrapper on Windows — use the CLI directly
python Plugins\PerfGuard\Tools\perfguard_cli.py run suite.json --mode compare

The script detects the CI system from environment variables (GITHUB_ACTIONS, JENKINS_URL, GITLAB_CI), sets up paths, runs the suite, and handles exit codes correctly for each platform.
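
The detection step works roughly like this (a simplified sketch; the real script also sets up paths and exit-code handling):

```shell
# Detect the CI environment from well-known environment variables.
# Falls back to "local" when none of them is set.
detect_ci() {
  if [ -n "${GITHUB_ACTIONS:-}" ]; then echo "github-actions"
  elif [ -n "${JENKINS_URL:-}" ];  then echo "jenkins"
  elif [ -n "${GITLAB_CI:-}" ];    then echo "gitlab-ci"
  else                                  echo "local"
  fi
}

echo "Detected CI environment: $(detect_ci)"
```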

💡 Tip: Use perfguard-ci.sh when you want a quick start that works everywhere. Switch to the platform-specific configs when you need more control over triggers, caching, and artifact handling.

7. Webhook Notifications (Slack/Teams)

Get notified instantly when a regression is detected. PerfGuard supports Slack, Microsoft Teams, and generic JSON webhooks.

Terminal
# Slack notification
python3 perfguard_cli.py run suite.json --mode compare \
    --webhook-url https://hooks.slack.com/services/T.../B.../xxx \
    --webhook-format slack

# Microsoft Teams
python3 perfguard_cli.py run suite.json --mode compare \
    --webhook-url https://outlook.office.com/webhook/... \
    --webhook-format teams
Command Prompt
:: Slack notification
python perfguard_cli.py run suite.json --mode compare ^
    --webhook-url https://hooks.slack.com/services/T.../B.../xxx ^
    --webhook-format slack

:: Microsoft Teams
python perfguard_cli.py run suite.json --mode compare ^
    --webhook-url https://outlook.office.com/webhook/... ^
    --webhook-format teams
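
To verify a webhook URL independently of PerfGuard, you can post a minimal payload by hand. Slack incoming webhooks accept a JSON body with a text field; the URL below is the placeholder from the commands above, so substitute your real one before sending:

```shell
# Hand-rolled webhook smoke test (replace the placeholder URL first).
WEBHOOK_URL="https://hooks.slack.com/services/T.../B.../xxx"
PAYLOAD='{"text": "PerfGuard webhook test"}'

echo "$PAYLOAD"
# Uncomment once WEBHOOK_URL is real:
# curl -sS -X POST -H 'Content-Type: application/json' -d "$PAYLOAD" "$WEBHOOK_URL"
```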

[Screenshot: Slack channel showing a PerfGuard notification with regression summary: scenario name, stat deltas, and a link to the full report]

8. Understanding Exit Codes

PerfGuard uses standard exit codes that CI systems interpret for pass/fail:

  • Exit 0 — All scenarios passed. No regressions detected, all budgets met.
  • Exit 1 — Regression detected. One or more stats exceeded thresholds or budget.
  • Exit 2 — Error. Something went wrong (missing baseline, CSV parse failure, engine crash).

In CI, exit code 1 fails the build/PR check, which is the core gating mechanism. Exit code 2 also fails but indicates infrastructure problems rather than performance regressions.
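
In a custom wrapper script, the three codes can be handled explicitly so infrastructure errors are reported differently from real regressions. A sketch (the CLI invocation is the one used throughout this guide):

```shell
# Map PerfGuard exit codes to CI log messages.
report_status() {  # report_status <exit-code>
  case "$1" in
    0) echo "All scenarios passed." ;;
    1) echo "Performance regression detected -- failing the build." ;;
    2) echo "Infrastructure error (missing baseline, parse failure, crash)." ;;
    *) echo "Unexpected exit code: $1" ;;
  esac
}

# Typical use in a pipeline step:
# python Plugins/PerfGuard/Tools/perfguard_cli.py run suite.json --mode compare
# status=$?
# report_status "$status"
# exit "$status"
report_status 0
```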

💡 Tip: In GitHub Actions, you can use continue-on-error: true on the compare step and check ${{ steps.compare.outcome }} to handle regressions without blocking the entire workflow.
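
That pattern looks roughly like this (a sketch; the step id and the follow-up step are illustrative, not part of the shipped workflow):

```yaml
- name: Run PerfGuard Suite
  id: compare
  continue-on-error: true
  run: |
    python Plugins/PerfGuard/Tools/perfguard_cli.py \
      run suite.json --mode compare

- name: Handle Regression
  if: steps.compare.outcome == 'failure'
  run: echo "Regression detected; see the uploaded report."
```
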

9. PR Comment Bot Setup

The PR comment bot automatically posts a performance summary as a comment on every pull request. Copy the dedicated workflow:

Terminal
cp Plugins/PerfGuard/Tools/ci/github-actions/pr-comment.yml \
   .github/workflows/perfguard-pr-comment.yml
Command Prompt
copy Plugins\PerfGuard\Tools\ci\github-actions\pr-comment.yml ^
     .github\workflows\perfguard-pr-comment.yml

The bot posts a markdown table showing each scenario, its pass/fail status, and key stat deltas. It updates the same comment on subsequent pushes rather than creating new ones.
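
The update-in-place behavior can be sketched with the GitHub CLI (a simplified, hypothetical re-implementation for illustration; the shipped workflow does not necessarily work this way, and PR_NUMBER and the table body are made up):

```shell
# Post or update a PR comment with a results table (sketch).
PR_NUMBER=123  # illustrative
BODY='### PerfGuard Results
| Scenario | Status |
| --- | --- |
| MainMenu_Flythrough | pass |'

echo "$BODY"
# Edit the most recent comment by this user if one exists,
# otherwise fall back to posting a new comment:
# gh pr comment "$PR_NUMBER" --edit-last --body "$BODY" \
#   || gh pr comment "$PR_NUMBER" --body "$BODY"
```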

[Screenshot: GitHub PR showing a PerfGuard bot comment with a markdown table: scenario names, pass/fail badges, FrameTime delta, GPUTime delta]

Warning: The PR comment workflow requires a GitHub token with pull-requests: write permission. The default GITHUB_TOKEN has this in most repository configurations, but verify if you're using a restricted token.

10. Artifact Archival and Report Publishing

Always archive the HTML report and raw results as build artifacts. This creates a historical record you can reference when investigating regressions weeks later.

Key artifacts to archive:

  • results/report.html — The self-contained HTML report
  • results/*.json — Raw comparison results (machine-readable)
  • Saved/Profiling/CSV/*.csv — Raw CSV profiler data (optional, large)
💡 Tip: Use --json-output results/comparison.json on the compare command to produce machine-readable output alongside the HTML report. This is useful for custom dashboards or trend-tracking systems.
💡 What's next? Your CI pipeline is running. Learn to read and interpret the HTML reports it generates, or dive into advanced threshold tuning to reduce false positives.