Skincare-Filter-Web-Application

Testing Guide

Overview

This guide documents the testing strategy, fixture patterns, coverage expectations, and best practices for writing tests in the Skincare Allergy Filter project. All contributors (human and AI agents) must follow these guidelines to maintain code quality and reliability.

Coverage Target: 75% (current minimum) → 80% (at Gate 5 completion)


Test Organization

Directory Structure

allergies/tests/
    __init__.py
    test_models.py           # Allergen, UserAllergy model tests
    test_views_error_handling.py   # View-layer error-handling tests
    test_admin_error_handling.py   # Admin error-handling tests

users/
    tests.py                 # CustomUser model tests, signals, validators

conftest.py                  # Shared fixtures (project root)

Test Categories

Tests are organized using pytest markers defined in pyproject.toml:

Default: Tests without markers are treated as standard unit tests with database access.
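
As an illustrative sketch only (the authoritative list lives in pyproject.toml), marker registration might look like this, using the `slow` and `integration` markers shown later in this guide:

```toml
[tool.pytest.ini_options]
markers = [
    "slow: tests that take significant time to run",
    "integration: tests that span multiple components or apps",
]
```

Registering markers this way keeps `pytest --strict-markers` from rejecting them as typos.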

Test Filtering with Markers

The project uses pytest markers to categorize tests, allowing you to run specific subsets:

Available Markers

Using Markers in Your Tests

Add markers to test functions in files like allergies/tests/test_models.py:

import pytest

@pytest.mark.slow
def test_complex_allergen_matching():
    # Test that takes significant time
    pass

@pytest.mark.integration
def test_user_allergy_workflow():
    # Test that spans multiple components
    pass

Filtering Tests

Run specific test subsets using the -m flag:

# Run only fast tests (exclude slow tests)
uv run pytest -m "not slow"

# Run only integration tests
uv run pytest -m integration

# Run all tests except integration tests
uv run pytest -m "not integration"

Fixtures

Shared Fixtures (Project Root)

Located in conftest.py at the project root for reuse across all apps:

import pytest
from django.contrib.auth import get_user_model

from allergies.models import (
    CATEGORY_CONTACT,
    CATEGORY_FOOD,
    Allergen,
    UserAllergy,
)

User = get_user_model()

@pytest.fixture
def media_root(settings, tmp_path_factory):
    """Redirect MEDIA_ROOT to a temp directory; prevents writes to real media/ during tests."""
    temp_media = tmp_path_factory.mktemp("media")
    settings.MEDIA_ROOT = str(temp_media)
    yield temp_media

@pytest.fixture
def user_email():
    """Standard test email address."""
    return "test@example.com"

@pytest.fixture
def user_password():
    """Standard test password for all users."""
    return "SecurePassword123!"

@pytest.fixture
def test_user(db, user_email, user_password):
    """Create standard test user with predictable credentials."""
    return User.objects.create_user(
        email=user_email,
        username="testuser",
        password=user_password
    )

@pytest.fixture
def authenticated_client(client, test_user, user_password):
    """Django test client with authenticated session."""
    client.login(email=test_user.email, password=user_password)
    return client

@pytest.fixture
def contact_allergen(db):
    """Contact allergen: Sodium Lauryl Sulfate (SLS)."""
    return Allergen.objects.create(
        category=CATEGORY_CONTACT,
        allergen_key="sls",
        is_active=True
    )

@pytest.fixture
def food_allergen(db):
    """Food allergen: Peanut."""
    return Allergen.objects.create(
        category=CATEGORY_FOOD,
        allergen_key="peanut",
        is_active=True
    )

@pytest.fixture
def user_allergy(db, test_user, contact_allergen):
    """Confirmed UserAllergy linking test_user to contact_allergen."""
    return UserAllergy.objects.create(
        user=test_user,
        allergen=contact_allergen,
        severity_level="moderate",
        is_confirmed=True,
    )

@pytest.fixture
def unconfirmed_user_allergy(db, test_user, contact_allergen):
    """Unconfirmed UserAllergy; uses model defaults (severity_level='', is_confirmed=False)."""
    return UserAllergy.objects.create(
        user=test_user,
        allergen=contact_allergen,
    )

App-Specific Fixtures

For fixtures used only within a single app, define them in the app’s test file or a conftest.py within that app’s test directory. The shared fixtures above cover the most common cross-app scenarios — only create app-specific fixtures for truly isolated concerns.
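
As a sketch of the app-local pattern (the file path and payload below are hypothetical, not part of the project), an `allergies/tests/conftest.py` might hold a fixture only that app's form tests need:

```python
# allergies/tests/conftest.py -- fixtures here are visible only to this app's tests
import pytest

# Hypothetical form payload shared by several allergy-form tests in this app.
SLS_FORM_DATA = {
    "allergen_key": "sls",
    "severity_level": "moderate",
    "is_confirmed": True,
}

@pytest.fixture
def sls_form_data():
    """Return a fresh copy so one test's mutations cannot leak into another."""
    return dict(SLS_FORM_DATA)
```

Returning a copy from the fixture is deliberate: tests that mutate the payload do not affect each other.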

Test Patterns

Model Tests

Location: app/tests/test_models.py

Pattern: Test model creation, validation, constraints, methods, and string representations.

Example from allergies/tests/test_models.py:

class TestAllergenModel:
    def test_allergen_str_representation(self, contact_allergen, food_allergen):
        """Verify __str__ returns 'Category: Allergen Name' format."""
        assert (
            str(contact_allergen)
            == "Contact/Topical Allergens: Sodium Lauryl Sulfate (SLS)"
        )
        assert str(food_allergen) == "Food Allergens: Peanut"

    def test_category_to_allergens_map(self):
        """Verify CATEGORY_TO_ALLERGENS_MAP contains all allergen choices."""
        contact_allergens = CATEGORY_TO_ALLERGENS_MAP.get(CATEGORY_CONTACT, [])
        food_allergens = CATEGORY_TO_ALLERGENS_MAP.get(CATEGORY_FOOD, [])

        assert ("sls", "Sodium Lauryl Sulfate (SLS)") in contact_allergens
        assert ("peanut", "Peanut") in food_allergens
        assert len(contact_allergens) > 1
        assert len(food_allergens) > 1

Common Model Test Scenarios:

  1. Field Validation - Test max_length, choices, blank/null constraints
  2. Unique Constraints - Verify unique_together enforcement
  3. Custom Validation - Test clean() method raises ValidationError
  4. String Representation - Test __str__() and __repr__()
  5. Properties & Methods - Test custom model methods and computed properties
  6. Signals - Test post_save, pre_save, pre_delete signal handlers

View Tests

Location: app/tests/test_views.py

Pattern: Test HTTP responses, authentication requirements, template rendering, form validation, and error handling.

Key Areas to Cover:

  1. HTTP Method Support - GET, POST, PUT, DELETE
  2. Authentication - Unauthenticated vs authenticated access
  3. Authorization - User permissions and ownership checks
  4. Status Codes - 200 OK, 201 Created, 400 Bad Request, 403 Forbidden, 404 Not Found
  5. Template Rendering - Correct template used, context variables present
  6. Redirects - POST-redirect-GET pattern for form submissions
  7. Error Handling - Graceful handling of invalid input, database errors

Example Pattern:

@pytest.mark.django_db
class TestAllergyListView:
    def test_unauthenticated_user_redirected(self, client):
        """Verify anonymous users are redirected to login."""
        response = client.get(reverse('allergies_list'))
        assert response.status_code == 302
        assert '/login/' in response.url

    def test_authenticated_user_sees_own_allergies(self, authenticated_client, test_user, contact_allergen):
        """Verify user sees only their own allergies."""
        UserAllergy.objects.create(user=test_user, allergen=contact_allergen)

        response = authenticated_client.get(reverse('allergies_list'))

        assert response.status_code == 200
        assert 'allergies/allergies_list.html' in [t.name for t in response.templates]
        assert contact_allergen.allergen_label in response.content.decode()

    def test_post_creates_allergy(self, authenticated_client, test_user, contact_allergen):
        """Verify POST creates UserAllergy and redirects."""
        data = {
            'allergen': contact_allergen.id,
            'severity_level': 'moderate',
            'is_confirmed': True,
        }

        response = authenticated_client.post(reverse('allergy_create'), data=data)

        assert response.status_code == 302
        assert UserAllergy.objects.filter(user=test_user, allergen=contact_allergen).exists()

Error Handling Tests

Location: app/tests/test_views_error_handling.py, app/tests/test_admin_error_handling.py

Purpose: Verify graceful error handling for validation errors, database errors, and edge cases.

Pattern: Test exception handling, transaction rollbacks, user-friendly error messages, and logging.

class TestErrorHandling:
    def test_duplicate_allergy_raises_validation_error(self, authenticated_client, test_user, contact_allergen):
        """Verify duplicate UserAllergy raises ValidationError, not IntegrityError."""
        UserAllergy.objects.create(user=test_user, allergen=contact_allergen)

        data = {'allergen': contact_allergen.id, 'severity_level': 'mild', 'is_confirmed': False}
        response = authenticated_client.post(reverse('allergy_create'), data=data)

        assert response.status_code == 400
        assert 'already exists' in response.content.decode().lower()

Integration Tests

Marker: @pytest.mark.integration

Purpose: Test interactions between multiple models, apps, or external systems.

Example:

@pytest.mark.integration
def test_user_allergy_cascade_delete(test_user, contact_allergen):
    """Verify UserAllergy is deleted when user is deleted."""
    UserAllergy.objects.create(user=test_user, allergen=contact_allergen)

    user_id = test_user.id
    test_user.delete()

    assert not UserAllergy.objects.filter(user_id=user_id).exists()

Running Tests

Basic Commands

# Run all tests
uv run pytest

# Run tests for specific app
uv run pytest allergies/tests/

# Run specific test file
uv run pytest allergies/tests/test_models.py

# Run specific test class or function
uv run pytest allergies/tests/test_models.py::TestAllergenModel
uv run pytest allergies/tests/test_models.py::TestAllergenModel::test_allergen_str_representation

# Run tests by marker
uv run pytest -m unit          # Only unit tests
uv run pytest -m integration   # Only integration tests
uv run pytest -m "not slow"    # Exclude slow tests

With Coverage

# Run tests with coverage report
uv run pytest --cov --cov-report=term-missing

# Generate HTML coverage report
uv run pytest --cov --cov-report=html

# Open coverage report (Windows)
start htmlcov/index.html

# Fail if coverage is below threshold (75%)
uv run pytest --cov --cov-fail-under=75

Verbose Output

# Show test names and outcomes
uv run pytest -v

# Show print statements and logs
uv run pytest -s

# Show detailed traceback
uv run pytest --tb=long

# Show summary of all test outcomes
uv run pytest -ra

Parallel Execution (Future)

# Install pytest-xdist
uv add --group test pytest-xdist

# Run tests in parallel (4 workers)
uv run pytest -n 4

Coverage Configuration

Coverage settings are defined in pyproject.toml.

Current Settings

Coverage Targets

| Gate | Target | Milestone |
|------|--------|-----------|
| Gate 4 start | 75% | Forms + matching logic added |
| Gate 4 complete | 75% | Views + forms tested |
| Gate 5 complete | 80% | Full feature coverage |

Coverage Rules for New Code:


Configuration Files

The coverage system is configured in pyproject.toml:

[tool.pytest.ini_options]

Controls pytest and coverage integration:
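
A typical shape for this table might be the following sketch (the settings module path is hypothetical; check pyproject.toml for the real values):

```toml
[tool.pytest.ini_options]
DJANGO_SETTINGS_MODULE = "config.settings"  # hypothetical module path
python_files = ["test_*.py", "tests.py"]
addopts = "--cov --cov-report=term-missing"
```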

[tool.coverage.run]

Controls coverage.py behavior:
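
An illustrative sketch of what this table commonly contains (the actual source and omit lists are in pyproject.toml):

```toml
[tool.coverage.run]
source = ["."]
omit = [
    "*/migrations/*",
    "manage.py",
    "conftest.py",
]
```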

[tool.coverage.report]

Controls coverage reporting:
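
A hedged example of typical reporting options (the real threshold and exclusions live in pyproject.toml):

```toml
[tool.coverage.report]
fail_under = 75    # matches the current minimum target
show_missing = true
```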

For advanced customization, see:


AI Testing Guidelines

For AI Agents Implementing Features

Rule 1: All new features require tests BEFORE merging

Rule 2: Use descriptive test names

Format: test_<action>_<expected_result>

✅ Good:

def test_create_user_allergy_with_duplicate_allergen_raises_validation_error(...)
def test_authenticated_user_sees_only_own_allergies(...)
def test_allergen_str_representation_includes_category_and_name(...)

❌ Bad:

def test_allergy(...)
def test_view(...)
def test_model_1(...)

Rule 3: Use fixtures from conftest.py

Reuse shared fixtures instead of duplicating setup code:

# ❌ Don't duplicate fixture logic
def test_something():
    user = User.objects.create_user(email="test@example.com", username="testuser", password="pass")
    allergen = Allergen.objects.create(category=CATEGORY_CONTACT, allergen_key="sls")
    # ...

# ✅ Use shared fixtures
def test_something(test_user, contact_allergen):
    # Fixtures automatically provide user and allergen
    # ...

Rule 4: Test error handling, not just happy paths

Every feature must test:

Rule 5: Follow Development Gates

Do NOT write tests before the earlier gates are complete:

  1. ✅ Gate 1: Dependencies installed
  2. ✅ Gate 2: Logging infrastructure in place
  3. ✅ Gate 3: Error handling implemented
  4. ✅ Gate 4: Forms created (if applicable)
  5. 🎯 Gate 5: NOW write tests (parallel with Gate 4 for forms)

See .github/instructions/copilot-instructions.md for full gate requirements.

Rule 6: Run tests before committing

# Run tests and pre-commit hooks
uv run pytest
pre-commit run --all-files

# Verify coverage meets minimum
uv run pytest --cov --cov-fail-under=75

Test Data Strategies

Realistic vs Minimal Data

Use realistic data for:

Use minimal data for:

Test Database

pytest-django automatically creates a test database once per test session and wraps each test in a transaction that is rolled back afterwards, so tests never see each other's data.


Common Pitfalls

❌ Testing Implementation Details

Don’t test private methods or internal state. Test behavior from the user’s perspective.

# ❌ Bad: Testing internal attribute
def test_allergen_internal_cache():
    allergen = Allergen(category=CATEGORY_CONTACT, allergen_key="sls")
    assert allergen._cached_label is None

# ✅ Good: Testing public behavior
def test_allergen_label_property_returns_correct_label():
    allergen = Allergen(category=CATEGORY_CONTACT, allergen_key="sls")
    assert allergen.allergen_label == "Sodium Lauryl Sulfate (SLS)"

❌ Overly Broad Assertions

Be specific about what you’re testing.

# ❌ Bad: Too vague
def test_create_allergy(test_user, contact_allergen):
    allergy = UserAllergy.objects.create(user=test_user, allergen=contact_allergen)
    assert allergy

# ✅ Good: Specific assertions
def test_create_allergy_sets_correct_attributes(test_user, contact_allergen):
    allergy = UserAllergy.objects.create(
        user=test_user,
        allergen=contact_allergen,
        severity_level="moderate",
        is_confirmed=True
    )
    assert allergy.user == test_user
    assert allergy.allergen == contact_allergen
    assert allergy.severity_level == "moderate"
    assert allergy.is_confirmed is True

❌ Ignoring Pre-commit Hooks

Tests must pass and pre-commit hooks must pass before committing.

# ✅ Always run before committing
uv run pytest && pre-commit run --all-files

Future Enhancements

Planned Testing Infrastructure

Coverage Milestones

| Milestone | Target | Requirements |
|-----------|--------|--------------|
| Gate 4 start | 75% | Forms + matching logic added |
| Gate 4 complete | 75% | Views + forms fully tested |
| Gate 5 complete | 80% | Full feature coverage |
| Production ready | 90%+ | E2E tests, performance tests added |

Resources


Questions?

For questions about testing strategy or patterns, refer to: