Running Backend API Tests

Quick Start

Option 1: Using the test script

./run_tests.sh

Option 2: Using npm script

npm run test:backend

Option 3: Manual command

export PYTHONPATH=$(pwd)
export SKIP_DEEPFACE_IN_TESTS=1
./venv/bin/python3 -m pytest tests/ -v
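As a rough sketch of what the SKIP_DEEPFACE_IN_TESTS flag likely does: the test setup reads the environment variable and, when it is set, avoids loading the heavy DeepFace model. The helper below is hypothetical (the project's actual guard may be named and structured differently); it only illustrates the env-var check.

```python
import os

def deepface_disabled() -> bool:
    """Hypothetical guard: treat common truthy spellings as 'skip DeepFace'."""
    # The real conftest/setup may check this differently.
    value = os.environ.get("SKIP_DEEPFACE_IN_TESTS", "").strip().lower()
    return value in {"1", "true", "yes"}
```

With this kind of guard in place, test fixtures can substitute a lightweight stub for the model whenever the variable is set, which keeps the suite fast.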

Where to See Test Results

Test results are displayed in your terminal/console where you run the command.

Example Output

When tests run successfully, you'll see output like:

tests/test_api_auth.py::TestLogin::test_login_success_with_valid_credentials PASSED
tests/test_api_auth.py::TestLogin::test_login_failure_with_invalid_credentials PASSED
tests/test_api_auth.py::TestTokenRefresh::test_refresh_token_success PASSED
...
========================= 26 passed in 2.34s =========================

Understanding the Output

  • PASSED (green) - Test passed successfully
  • FAILED (red) - Test failed (shows error details)
  • ERROR (red) - Test had an error during setup/teardown
  • SKIPPED (yellow) - Test was skipped

Verbose Output

The -v flag shows:

  • Each test function name
  • Pass/fail status for each test
  • Summary at the end

Detailed Failure Information

If a test fails, pytest shows:

  • The test that failed
  • The assertion that failed
  • The actual vs expected values
  • A traceback showing where the error occurred

Test Coverage

To see coverage report:

export PYTHONPATH=$(pwd)
export SKIP_DEEPFACE_IN_TESTS=1
./venv/bin/python3 -m pytest tests/ --cov=backend --cov-report=term-missing

This shows:

  • Which lines of code are covered by tests
  • Which lines are missing coverage
  • Overall coverage percentage

Running Specific Tests

Run a single test file

./venv/bin/python3 -m pytest tests/test_api_auth.py -v

Run a specific test class

./venv/bin/python3 -m pytest tests/test_api_auth.py::TestLogin -v

Run a specific test

./venv/bin/python3 -m pytest tests/test_api_auth.py::TestLogin::test_login_success_with_valid_credentials -v
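The `file::class::test` node IDs above map directly onto the file's structure. A skeleton of what tests/test_api_auth.py plausibly looks like (the body here is a placeholder, not the real test):

```python
# Selecting tests/test_api_auth.py::TestLogin::test_login_success_with_valid_credentials
# runs exactly this one method inside this one class.
class TestLogin:
    def test_login_success_with_valid_credentials(self):
        assert True  # placeholder; the real test exercises the login endpoint
```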

CI/CD Test Results

In CI (GitHub Actions/Gitea Actions), test results appear in:

  1. CI Logs - Check the "Run backend tests" step in the workflow
  2. Test Artifacts - JUnit XML files are generated for test reporting tools
  3. Coverage Reports - Coverage XML files are generated
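A CI step producing those artifacts might look like the sketch below. The step name matches the one mentioned above, but the exact workflow file, paths, and flags in the real pipeline may differ; `--junitxml` and `--cov-report=xml` are standard pytest/pytest-cov options.

```yaml
# Hypothetical workflow step; actual names and artifact paths are assumptions.
- name: Run backend tests
  run: |
    export PYTHONPATH=$(pwd)
    export SKIP_DEEPFACE_IN_TESTS=1
    ./venv/bin/python3 -m pytest tests/ -v \
      --junitxml=reports/junit.xml \
      --cov=backend --cov-report=xml
```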

Troubleshooting

Tests not showing output?

  • Make sure you're running in a terminal (not an IDE output panel that might hide output)
  • Try adding -s flag: pytest tests/ -v -s (shows print statements)

Tests hanging?

  • Check if database is accessible
  • Verify SKIP_DEEPFACE_IN_TESTS=1 is set (prevents DeepFace from loading)

Import errors?

  • Make sure virtual environment is activated or use ./venv/bin/python3
  • Verify all dependencies are installed: ./venv/bin/pip install -r requirements.txt