
Running Backend API Tests

Quick Start

Option 1: Using the test script

./run_tests.sh

Option 2: Using npm script

npm run test:backend

Option 3: Manual command

export PYTHONPATH=$(pwd)
export SKIP_DEEPFACE_IN_TESTS=1
./venv/bin/python3 -m pytest tests/ -v

Where to See Test Results

Test results are displayed in your terminal/console where you run the command.

Example Output

When tests run successfully, you'll see output like:

tests/test_api_auth.py::TestLogin::test_login_success_with_valid_credentials PASSED
tests/test_api_auth.py::TestLogin::test_login_failure_with_invalid_credentials PASSED
tests/test_api_auth.py::TestTokenRefresh::test_refresh_token_success PASSED
...
========================= 26 passed in 2.34s =========================

Understanding the Output

  • PASSED (green) - Test passed successfully
  • FAILED (red) - Test failed (shows error details)
  • ERROR (red) - Test had an error during setup/teardown
  • SKIPPED (yellow) - Test was skipped

Verbose Output

The -v flag shows:

  • Each test function name
  • Pass/fail status for each test
  • Summary at the end

Detailed Failure Information

If a test fails, pytest shows:

  • The test that failed
  • The assertion that failed
  • The actual vs expected values
  • A traceback showing where the error occurred

Test Coverage

To generate a coverage report:

export PYTHONPATH=$(pwd)
export SKIP_DEEPFACE_IN_TESTS=1
./venv/bin/python3 -m pytest tests/ --cov=backend --cov-report=term-missing

This shows:

  • Which lines of code are covered by tests
  • Which lines are missing coverage
  • Overall coverage percentage

Running Specific Tests

Run a single test file

./venv/bin/python3 -m pytest tests/test_api_auth.py -v

Run a specific test class

./venv/bin/python3 -m pytest tests/test_api_auth.py::TestLogin -v

Run a specific test

./venv/bin/python3 -m pytest tests/test_api_auth.py::TestLogin::test_login_success_with_valid_credentials -v

CI/CD Test Results

In CI (GitHub Actions/Gitea Actions), test results appear in:

  1. CI Logs - Check the "Run backend tests" step in the workflow
  2. Test Artifacts - JUnit XML files are generated for test reporting tools
  3. Coverage Reports - Coverage XML files are generated

Troubleshooting

Tests not showing output?

  • Make sure you're running in a terminal (not an IDE output panel that might hide output)
  • Try adding the -s flag: pytest tests/ -v -s (disables output capturing so print statements appear)

Tests hanging?

  • Check that the database is accessible
  • Verify SKIP_DEEPFACE_IN_TESTS=1 is set (prevents DeepFace from loading)

Import errors?

  • Make sure the virtual environment is activated, or call ./venv/bin/python3 directly
  • Verify all dependencies are installed: ./venv/bin/pip install -r requirements.txt