
Test Automation - Accelerating Playwright Python Tests with Parallel Execution in GitHub Actions

The Challenge of Long Test Execution Times

In test automation, reducing execution time without compromising results is crucial. As test suites grow in large projects with continuous testing needs, running every test sequentially on a single machine becomes a bottleneck.

To optimize execution time, particularly for regression tests, you can distribute the test load across multiple machines, running tests in parallel. This article explores how to use GitHub Actions and pytest-split to run Playwright Python tests in this distributed manner.
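Under the hood, pytest-split records per-test timings (collected with --store-durations into a .test_durations file) and then carves the ordered test list into contiguous groups of roughly equal total runtime. The following is a simplified Python sketch of that idea, for intuition only; it is not the library's actual implementation:

```python
def split_tests(durations, splits):
    """Partition an ordered list of (test_name, seconds) pairs into
    contiguous groups of roughly equal total runtime.

    Simplified illustration of pytest-split's duration-based chunking;
    not the library's actual code.
    """
    total = sum(seconds for _, seconds in durations)
    target = total / splits
    groups, current, elapsed = [], [], 0.0
    for name, seconds in durations:
        current.append(name)
        elapsed += seconds
        # Close the current group once we cross its cumulative share,
        # leaving the remainder for the final group.
        if elapsed >= target * (len(groups) + 1) and len(groups) < splits - 1:
            groups.append(current)
            current = []
    groups.append(current)
    return groups
```

Running pytest with --splits 2 --group 1 on a machine then corresponds to executing only the first of these groups, while another machine takes the second.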

The solution presented in this article is exemplified in my Playwright Python example project, developed in collaboration with Elias Shourosh.

Implementing the Solution

The complete workflow file is available in the example project linked above; the key jobs are broken down below.

jobs:
  setup-matrix:
    runs-on: ubuntu-latest
    outputs:
      matrix: ${{ steps.set-matrix.outputs.matrix }}
    steps:
      - id: set-matrix
        run: |
          count=${{ github.event.inputs.parallelism || 2 }}
          matrix=$(seq -s ',' 1 $count)
          echo "matrix=$(jq -cn --argjson groups "[${matrix}]" '{group: $groups}')" >> $GITHUB_OUTPUT

The setup-matrix job dynamically creates a matrix based on the specified number of parallel executions, allowing flexible scaling of our test infrastructure.
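For example, with parallelism set to 3 the step writes matrix={"group":[1,2,3]} to its outputs. The same transformation expressed in Python, as a quick sanity check (a hypothetical helper, not part of the workflow):

```python
import json

def build_matrix(parallelism: int) -> str:
    """Mirror the shell pipeline: seq 1 N produces the group numbers,
    which jq wraps as a compact JSON object {"group": [...]}."""
    return json.dumps({"group": list(range(1, parallelism + 1))},
                      separators=(",", ":"))
```

GitHub Actions then expands fromJson on this output into one test job per group.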

nightly-test:
  needs: setup-matrix
  runs-on: ubuntu-latest
  strategy:
    fail-fast: false
    matrix: ${{ fromJson(needs.setup-matrix.outputs.matrix) }}
  steps:
    - uses: actions/checkout@v4
    - name: Set up Python
      id: setup-python
      uses: actions/setup-python@v5
      with:
        python-version: '3.12'
    - name: Install Poetry
      uses: snok/install-poetry@v1
      with:
        virtualenvs-create: true
        virtualenvs-in-project: true
        installer-parallel: true
    - name: Load cached venv
      id: cached-poetry-dependencies
      uses: actions/cache@v4
      with:
        path: .venv
        key: venv-${{ runner.os }}-${{ steps.setup-python.outputs.python-version }}-${{ hashFiles('**/poetry.lock') }}
    - name: Install Dependencies
      run: poetry install --no-interaction --no-root
    - name: Install Playwright Browsers
      run: poetry run playwright install --with-deps
    - name: Run Tests
      run: |
        source .venv/bin/activate
        xvfb-run pytest ${{ github.event.inputs.pytest_command || '-m "not devRun"' }} \
          --base-url ${{ vars.BASE_URL }} \
          --splits ${{ github.event.inputs.parallelism || 2 }} \
          --group ${{ matrix.group }}
    - name: Upload test results and artifacts
      if: always()
      uses: actions/upload-artifact@v4.3.3
      with:
        name: test-results-${{ matrix.group }}
        path: |
          test-results/
          allure-results
        retention-days: 7

The nightly-test job is where the actual test execution occurs; it uses the dynamic matrix to run tests in parallel. The fail-fast: false setting in the matrix strategy prevents the entire job from failing as soon as one matrix configuration fails, so all test shards continue to execute even if one or more fail. The --splits and --group options from pytest-split ensure each machine runs a distinct subset of tests.

After test execution, we upload the test results (traces and videos) and the Allure results. These artifacts set the stage for the merge-reports job, where we’ll consolidate results and determine the overall test suite status.

merge-reports:
  needs: nightly-test
  if: always()
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - name: Download all test results
      uses: actions/download-artifact@v4
      with:
        path: artifacts
    - name: Merge test results
      run: |
        mkdir -p merged-test-results
        for dir in artifacts/test-results-*/test-results; do
          cp -R "$dir"/* merged-test-results/
        done
    - name: Upload Merged Test Results
      uses: actions/upload-artifact@v4.3.4
      id: merged-artifact-upload
      with:
        name: merged-test-results
        path: merged-test-results/
        retention-days: 7
    - name: Merge Allure Results
      run: |
        mkdir -p allure-results
        for dir in artifacts/test-results-*/allure-results; do
          cp -R "$dir"/* allure-results/
        done
    - name: Link Git Information And Browser Version To Allure Report
      working-directory: allure-results
      if: always()
      run: |
        {
          echo BUILD_URL=${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
          echo GIT_BRANCH=${{ github.head_ref || github.ref_name }}
          echo GIT_COMMIT_ID=${{ github.sha }}
          echo GIT_COMMIT_MESSAGE="$(git show -s --format=%s HEAD)"
          echo GIT_COMMIT_AUTHOR_NAME="$(git show -s --format='%an' HEAD)"
          echo GIT_COMMIT_TIME="$(git show -s --format=%ci HEAD)"
          echo CHROME_VERSION=$(google-chrome --product-version)
        } >> environment.properties
    - name: Link Playwright Traces And Videos To Allure Report
      working-directory: allure-results
      if: failure()
      run: |
        echo ARTIFACT_URL=${{ steps.merged-artifact-upload.outputs.artifact-url }} >> environment.properties
    - name: Generate Allure Report
      uses: simple-elf/allure-report-action@master
      if: always()
      id: allure-report
      with:
        allure_results: allure-results
        allure_report: allure-report
        gh_pages: gh-pages
        allure_history: allure-history
    - name: Deploy Report To Github Pages
      if: always()
      uses: peaceiris/actions-gh-pages@v4
      with:
        github_token: ${{ secrets.GITHUB_TOKEN }}
        publish_dir: allure-history

This merge-reports job is designed to consolidate test results from multiple runs and generate a comprehensive Allure report. It runs after the “nightly-test” job and is executed even if previous jobs fail.

The process begins by checking out the repository code and downloading all artifacts from previous jobs. It then merges all test results into a single directory and uploads this consolidated set as a new artifact.
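The shell loop that merges the per-shard directories is equivalent to the following Python sketch (a hypothetical helper, shown only to make the copy semantics explicit):

```python
import shutil
from pathlib import Path

def merge_results(artifacts_dir, subdir, dest):
    """Copy every artifacts/test-results-*/<subdir> tree into a single
    destination directory, like the shell loop in merge-reports."""
    out = Path(dest)
    out.mkdir(parents=True, exist_ok=True)
    copied = []
    for shard in sorted(Path(artifacts_dir).glob(f"test-results-*/{subdir}")):
        for item in shard.rglob("*"):
            if item.is_file():
                target = out / item.relative_to(shard)
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(item, target)
                copied.append(target.name)
    return sorted(copied)
```

Note that files with identical relative paths across shards would overwrite each other; the per-group artifact names (test-results-1, test-results-2, ...) keep the shard outputs distinct until this step.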

Next, it combines all Allure results into a single directory. To provide context, it adds important environment information to the Allure report, including Git details (like branch, commit ID, commit message, author, and time) and the Chrome browser version used for testing.
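Allure reads this metadata from a plain key=value file named environment.properties in the results directory. A minimal Python equivalent of the shell block (a hypothetical helper for illustration):

```python
def write_environment_properties(path, info):
    """Append key=value pairs to Allure's environment.properties file,
    mirroring the >> redirection in the workflow step."""
    with open(path, "a") as f:
        for key, value in info.items():
            f.write(f"{key}={value}\n")
```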

If any tests fail, it also links Playwright traces and videos to the Allure report, which can be crucial for debugging. The job then generates the Allure report from these merged results.

Finally, the generated Allure report is deployed to GitHub Pages. This makes the report easily accessible to team members and stakeholders, allowing them to view the test results without needing to download or generate the report locally.

Conclusion

Incorporating dynamic matrices into your test automation workflow can significantly enhance the efficiency of your CI/CD pipeline. The combination of GitHub Actions, pytest-split, and Allure reporting creates a robust framework for parallel test execution and result analysis.

By continually refining your test automation strategy in this way, you can ensure faster feedback cycles and maintain high-quality software delivery. The result is a more responsive, efficient testing process that can keep pace with modern development needs.

Happy testing!

