Zyntem Fiscalization - Testing Guide

Company: Zyntem
Product: Fiscalization by Zyntem
Version: 1.1.0
Last Updated: 2025-11-13
Coverage Threshold: 50% minimum (enforced per module in CI)
Status: Active


Overview

This guide covers testing practices for Fiscalization by Zyntem, including unit tests, integration tests, test database setup, mocking strategies, and code coverage requirements.

Testing Philosophy: Write tests that provide confidence, not just coverage. Focus on behavior, not implementation.


Table of Contents

  1. Testing Framework Stack
  2. Testing Pyramid Strategy
  3. Epic-Specific Testing Strategy
  4. Go Testing Standards
  5. TypeScript Testing Standards
  6. Integration Testing
  7. Test Database Setup
  8. Mocking Strategies
  9. Code Coverage Requirements
  10. Test Data Best Practices
  11. Test Execution Commands
  12. Troubleshooting
  13. Additional Resources

Testing Framework Stack

  • Go Testing: testify (v1.9.0+) with Go standard library testing package
  • TypeScript Testing: Vitest (v1.4+) with built-in coverage via c8
  • Test Database: PostgreSQL fiscalization_test (isolated from development database)
  • Coverage Reporting: Go coverage tools and Vitest/c8 for HTML reports

Testing Pyramid Strategy

We follow the industry-standard testing pyramid approach:

  • Unit Tests (60%): Fast, isolated tests with mocked dependencies
  • Integration Tests (30%): Real database and HTTP handler tests (requires postgres-test on port 5433)
  • End-to-End Tests (10%): Full user journey testing (Playwright)
  • Conformance Tests: Golden file byte-comparison against government specs and XSD validation (supplementary, outside the percentage split)

Epic-Specific Testing Strategy

Epic 1: Infrastructure & Core Services

Focus: Unit tests with comprehensive mocking

Testing Approach:

  • Unit tests only for Epic 1 stories (infrastructure setup)
  • Integration tests begin in Epic 2 when adapters are implemented
  • Mock all external dependencies (database, HTTP clients)
  • Achieve 80% code coverage baseline

Example Test Scenarios (Epic 1):

  • PostgreSQL schema validation (Story 1.2)
  • API key authentication logic (Story 1.5)
  • Location validation rules (Story 1.6c)
  • Configuration parsing and validation

Epic 2: Spain Fiscalization

Focus: Integration tests with real adapters and mock tax authorities

Testing Approach:

  • Add integration tests for database queries
  • Test Spain adapter with mock AEAT/LROE services (Story 2.2d)
  • Test TicketBAI XML generation and validation
  • Test transaction submission end-to-end flow

Example Test Scenarios (Epic 2):

  • POST /v1/transactions creates transaction in database
  • Spain adapter generates valid TicketBAI XML
  • QR code generation follows TicketBAI spec
  • Idempotency key prevents duplicate transactions
  • Tax ID validation (Spanish NIF format: 12345678Z)
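The Spanish NIF check letter is derived from the 8-digit number modulo 23. A minimal validation sketch follows; `validNIF` is an illustrative helper, not part of the codebase, and covers only the plain 8-digits-plus-letter form (NIE and CIF variants need additional rules):

```go
package main

import (
	"fmt"
	"strconv"
)

// nifLetters maps n % 23 to the official NIF check letter.
const nifLetters = "TRWAGMYFPDXBNJZSQVHLCKE"

// validNIF checks a standard Spanish NIF: 8 digits followed by 1 letter.
func validNIF(nif string) bool {
	if len(nif) != 9 {
		return false
	}
	n, err := strconv.Atoi(nif[:8])
	if err != nil {
		return false
	}
	return nifLetters[n%23] == nif[8]
}

func main() {
	fmt.Println(validNIF("12345678Z")) // true (the sample NIF above)
	fmt.Println(validNIF("12345678A")) // false (wrong check letter)
}
```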

Epic 3+: Developer Experience & Beyond

Focus: End-to-end tests and SDK testing

Testing Approach:

  • Add E2E tests for critical workflows (Playwright)
  • Test SDK code generation and examples
  • Test OpenAPI specification completeness
  • Test documentation accuracy

Go Testing Standards

Test File Naming Convention

  • Convention: *_test.go
  • Location: Same directory as the code being tested
  • Example: user_service.go → user_service_test.go

Why this matters: Go's testing package only recognizes files ending with _test.go as test files. Files without this suffix will not be discovered or executed by go test.

Test Function Naming Convention

  • Convention: TestFunctionName_Scenario(t *testing.T)
  • Pattern: Test + FunctionUnderTest + _ + SpecificScenario
  • Examples:
    • TestCreateAccount_ValidInput(t *testing.T)
    • TestCreateAccount_DuplicateEmail_ReturnsError(t *testing.T)
    • TestGetLocation_NotFound_Returns404(t *testing.T)

Why this format: Go's test runner requires functions to start with Test and accept *testing.T. The scenario suffix makes test intent clear and enables easy filtering with go test -run.

testify Package Usage

testify provides four key packages:

  1. assert: Non-fatal assertions (test continues after failure)
  2. require: Fatal assertions (test stops immediately on failure)
  3. mock: Interface-based mocking for dependencies
  4. suite: Test suite organization (setup/teardown hooks)

When to use assert vs require:

  • Use require for preconditions that prevent further test execution
  • Use assert for multiple independent assertions in a single test

Example: Unit Test with testify

package user

import (
    "testing"

    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/require"
)

func TestValidateEmail_ValidInput(t *testing.T) {
    // Arrange
    email := "test@example.com"

    // Act
    err := ValidateEmail(email)

    // Assert
    assert.NoError(t, err, "Valid email should not return error")
}

func TestValidateEmail_InvalidInput_ReturnsError(t *testing.T) {
    // Arrange
    email := "invalid-email"

    // Act
    err := ValidateEmail(email)

    // Assert
    require.Error(t, err, "Invalid email should return error")
    assert.Contains(t, err.Error(), "invalid format", "Error message should indicate invalid format")
}

Table-Driven Tests Pattern

For testing multiple scenarios of the same function:

func TestValidateEmail_MultipleScenarios(t *testing.T) {
    tests := []struct {
        name      string
        email     string
        wantError bool
    }{
        {"valid email", "test@example.com", false},
        {"missing @", "testexample.com", true},
        {"missing domain", "test@", true},
        {"empty string", "", true},
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            err := ValidateEmail(tt.email)
            if tt.wantError {
                assert.Error(t, err)
            } else {
                assert.NoError(t, err)
            }
        })
    }
}

TypeScript Testing Standards

Test File Naming Convention

  • Convention: *.test.ts or *.test.tsx (for React components)
  • Location: Co-located with source files (required for this project)
  • Examples:
    • userService.ts → userService.test.ts (same directory)
    • Button.tsx → Button.test.tsx (same directory)

Why co-location:

  • Easier refactoring - moving a file moves its tests automatically
  • Clear ownership - developers immediately see test coverage gaps
  • Reduced cognitive load - no context switching between /src and /tests directories
  • Modern TypeScript projects (Vite, Next.js) default to co-location

Why this matters: Vitest discovers test files matching these patterns by default. Custom patterns can be configured in vitest.config.ts. Build tools automatically exclude *.test.ts from production bundles.

Vitest Test Structure

Vitest follows the describe/it/expect pattern popularized by Jest:

  • describe: Groups related tests (optional but recommended)
  • it / test: Individual test cases (both aliases work identically)
  • expect: Assertion library

Example: Unit Test with Vitest

import { describe, it, expect } from 'vitest'
import { validateEmail } from './userService'

describe('validateEmail', () => {
    it('accepts valid email addresses', () => {
        // Arrange
        const email = 'test@example.com'

        // Act
        const result = validateEmail(email)

        // Assert
        expect(result).toBe(true)
    })

    it('rejects invalid email addresses', () => {
        // Arrange
        const email = 'invalid-email'

        // Act
        const result = validateEmail(email)

        // Assert
        expect(result).toBe(false)
    })

    it('rejects empty strings', () => {
        expect(validateEmail('')).toBe(false)
    })
})

Integration Testing

Integration tests verify that multiple components work together correctly. Unlike unit tests, integration tests use real external dependencies (database, file system, HTTP clients).

When to Write Integration Tests

  • Testing database interactions (CRUD operations, transactions, complex queries)
  • Testing HTTP API endpoints with real request/response cycles
  • Testing file system operations
  • Testing third-party service integrations (after mocking in unit tests proves insufficient)

Integration Test Pattern

Integration tests that require a database use testing.Short() to skip when running in fast mode. There are no build tags — all tests compile together.

package repository_test

import (
    "context"
    "testing"

    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/require"
    "github.com/zyntem/fiscalization/internal/repository"
    "github.com/zyntem/fiscalization/internal/testutil"
)

func TestAccountRepository_Create_Integration(t *testing.T) {
    if testing.Short() {
        t.Skip("Skipping integration test in short mode")
    }

    // Arrange: Setup test database connection
    db, cleanup := testutil.SetupTestDB(t)
    defer cleanup()

    repo := repository.NewAccountRepository(db)
    account := &repository.Account{
        Email: "test@example.com",
        Name:  "Test User",
    }

    // Act: Create account in real database
    err := repo.Create(context.Background(), account)

    // Assert
    require.NoError(t, err)
    assert.NotEmpty(t, account.ID)
}

Running tests:

# Unit tests only (fast, no database)
cd apps/core-api && go test ./... -short

# All tests including integration (requires test DB on port 5433)
cd apps/core-api && go test ./... -count=1

# Specific package
cd apps/core-api && go test ./internal/accounts/... -v -count=1

Conformance Testing (Golden Files)

Conformance tests verify that adapter output matches government specifications byte-for-byte. Golden files live in testdata/conformance/ organized by country and system.

Each test case consists of:

  • <name>.input.json — the API request or Go struct as JSON
  • <name>.golden.xml (or .golden.json) — the expected output
  • <name>.meta.json — spec traceability (section, description, version)

Key test utilities (in apps/core-api/internal/testing/conformance/):

  • Golden(t, path, got) — compare output against golden file
  • GoldenWithMask(t, path, got, masks) — mask dynamic fields (timestamps, UUIDs)
  • SandboxRecorder / ReplayTransport — record and replay government API interactions

Running conformance tests:

# Run golden file comparisons + XSD validation
make conformance-check

# Generate spec coverage report (shows golden files per country/system)
make conformance-report

# Regenerate golden files after intentional adapter changes
UPDATE_GOLDEN=1 go test ./...

# Record sandbox interactions with government test APIs
RECORD_SANDBOX=1 go test ./...

Management-plane contract tests also use golden files (in testdata/conformance/management/) to verify the JSON API contract for heartbeat, registration, and webhook delivery.

See testdata/conformance/README.md for directory structure and how to add new test cases.

Integration Test Example (TypeScript + API)

import { describe, it, expect, beforeAll, afterAll } from 'vitest'
import request from 'supertest'
import { app } from '../src/app'
import { setupTestDB, cleanupTestDB } from './helpers/setup'

describe('POST /api/accounts - Integration', () => {
    beforeAll(async () => {
        await setupTestDB()
    })

    afterAll(async () => {
        await cleanupTestDB()
    })

    it('creates a new account and returns 201', async () => {
        // Arrange
        const accountData = {
            email: 'test@example.com',
            name: 'Test User'
        }

        // Act
        const response = await request(app)
            .post('/api/accounts')
            .send(accountData)
            .expect(201)

        // Assert
        expect(response.body).toHaveProperty('id')
        expect(response.body.email).toBe('test@example.com')
        expect(response.body.name).toBe('Test User')
    })
})

Test Database Setup

Overview

The Fiscalization platform uses two separate PostgreSQL databases to ensure test isolation:

  • fiscalization_dev (port 5432): Development database for manual testing and development
  • fiscalization_test (port 5433): Test database for automated tests (isolated, ephemeral)

Test Database Architecture

graph TB
    subgraph "Development Environment"
        DevApp[Development App]
        DevDB[(fiscalization_dev<br/>Port 5432)]
        DevApp -->|Connection| DevDB
    end

    subgraph "Test Environment"
        GoTests[Go Tests]
        TSTests[TypeScript Tests]
        TestDB[(fiscalization_test<br/>Port 5433)]

        GoTests -->|DATABASE_TEST_URL| TestDB
        TSTests -->|DATABASE_TEST_URL| TestDB
    end

    subgraph "Docker Compose"
        DC[docker-compose.yml]
        DC -->|Manages| DevDB
        DC -->|Manages| TestDB
    end

    subgraph "Migrations"
        MigrateTool[golang-migrate]
        MigrateTool -->|Schema Up/Down| DevDB
        MigrateTool -->|Schema Up/Down| TestDB
    end

    style TestDB fill:#e1f5ff
    style DevDB fill:#fff4e1
    style DC fill:#f0f0f0
    style MigrateTool fill:#e8f5e9

Key Architecture Points:

  • Isolation: Development and test databases never share data
  • Parallel Testing: Multiple test suites can run concurrently against test database
  • Schema Parity: Both databases use identical migrations (same schema version)
  • Docker Management: Both databases run in containers for reproducibility

Database Migration Tool

We use golang-migrate/migrate for all database migrations:

  • Tool: golang-migrate/migrate
  • Why: Industry standard, supports PostgreSQL-specific features, CLI + library usage
  • Migrations Location: /migrations directory
  • Version Control: Sequential numbered migrations (e.g., 000001_create_accounts_table.up.sql)

Installation:

# macOS
brew install golang-migrate

# Linux
curl -L https://github.com/golang-migrate/migrate/releases/download/v4.17.0/migrate.linux-amd64.tar.gz | tar xvz
sudo mv migrate /usr/local/bin/

# Windows
scoop install migrate

Starting the Test Database

Use Docker Compose to start both databases:

# Start all services (dev + test databases)
docker-compose up -d

# Start only test database
docker-compose up -d postgres-test

# Check database health
docker-compose ps

Connection Strings

# Development Database
DATABASE_URL=postgres://fiscalization:fiscalization_dev@localhost:5432/fiscalization_dev

# Test Database (used by automated tests)
DATABASE_TEST_URL=postgres://fiscalization_test:fiscalization_test@localhost:5433/fiscalization_test

Important: Copy .env.example to .env if you need to override default values.

Test Data Cleanup Strategy

To ensure test isolation, truncate all tables after each test suite:

Go Example:

func TestMain(m *testing.M) {
    // Setup: Run migrations
    setupTestDatabase()

    // Run tests
    code := m.Run()

    // Cleanup: Truncate all tables
    cleanupTestDatabase()

    os.Exit(code)
}

func cleanupTestDatabase() {
    db := testutil.GetTestDB()
    tables := []string{"accounts", "locations", "transactions"}

    for _, table := range tables {
        _, err := db.Exec(fmt.Sprintf("TRUNCATE TABLE %s CASCADE", table))
        if err != nil {
            log.Fatalf("Failed to truncate table %s: %v", table, err)
        }
    }
}

TypeScript Example:

// tests/helpers/setup.ts
import { afterEach } from 'vitest'
import { db } from '../src/db'

afterEach(async () => {
    // Truncate all tables after each test
    await db.raw('TRUNCATE TABLE accounts, locations, transactions CASCADE')
})

Running Migrations on Test Database

Migrations should run before tests to ensure the test database schema matches production:

# Run migrations on test database (to be implemented in Story 1.2)
make test-db-migrate

# Reset test database (drop + recreate + migrate)
make test-db-reset

Mocking Strategies

Mocking external dependencies allows unit tests to run fast and without side effects. We use different mocking tools for Go and TypeScript.

Go Mocking with mockery

We use mockery for automated mock generation from Go interfaces. This ensures mocks stay synchronized with interface definitions without manual maintenance.

Installation:

# macOS
brew install mockery

# Linux/Windows
go install github.com/vektra/mockery/v2@latest

Configuration:

Create .mockery.yaml in project root:

with-expecter: true
dir: "{{.InterfaceDir}}/mocks"
filename: "{{.InterfaceName}}.go"
mockname: "Mock{{.InterfaceName}}"
outpkg: mocks
packages:
  github.com/zyntem/fiscalization/internal/repository:
    interfaces:
      AccountRepository:

Step 1: Define an interface

// internal/repository/account.go
type AccountRepository interface {
    Create(ctx context.Context, account *Account) error
    FindByEmail(ctx context.Context, email string) (*Account, error)
}

Step 2: Generate mocks automatically

# Generate all mocks
mockery

# Generate mock for specific interface
mockery --name AccountRepository --dir internal/repository --output internal/repository/mocks

Generated mock (automatic - do not edit manually):

// internal/repository/mocks/AccountRepository.go
type MockAccountRepository struct {
    mock.Mock
}

func (m *MockAccountRepository) Create(ctx context.Context, account *Account) error {
    args := m.Called(ctx, account)
    return args.Error(0)
}

func (m *MockAccountRepository) FindByEmail(ctx context.Context, email string) (*Account, error) {
    args := m.Called(ctx, email)
    if args.Get(0) == nil {
        return nil, args.Error(1)
    }
    return args.Get(0).(*Account), args.Error(1)
}

Step 3: Use mock in tests

func TestAccountService_CreateAccount_DuplicateEmail(t *testing.T) {
    // Arrange
    mockRepo := new(mocks.MockAccountRepository)
    service := NewAccountService(mockRepo)

    // Mock returns existing account (duplicate email)
    mockRepo.On("FindByEmail", mock.Anything, "test@example.com").
        Return(&Account{ID: "123", Email: "test@example.com"}, nil)

    // Act
    err := service.CreateAccount(context.Background(), "test@example.com", "Test User")

    // Assert
    require.Error(t, err, "Should return error for duplicate email")
    assert.Contains(t, err.Error(), "already exists")
    mockRepo.AssertExpectations(t) // Verify mock was called as expected
}

TypeScript Mocking with Vitest (vi.mock)

Vitest provides vi.mock for mocking modules and vi.fn for mocking functions.

Mocking a module:

import { describe, it, expect, vi } from 'vitest'
import { createAccount } from './accountService'
import * as db from './database'

// Mock the entire database module
vi.mock('./database', () => ({
    query: vi.fn(),
    transaction: vi.fn(),
}))

describe('createAccount', () => {
    it('calls database with correct parameters', async () => {
        // Arrange
        const mockQuery = vi.mocked(db.query)
        mockQuery.mockResolvedValue({ id: '123', email: 'test@example.com' })

        // Act
        const result = await createAccount('test@example.com', 'Test User')

        // Assert
        expect(mockQuery).toHaveBeenCalledWith(
            'INSERT INTO accounts (email, name) VALUES ($1, $2) RETURNING *',
            ['test@example.com', 'Test User']
        )
        expect(result.id).toBe('123')
    })
})

Mocking a function:

import { expect, vi } from 'vitest'

const mockFn = vi.fn((x: number) => x * 2)

// Call the mock
const result = mockFn(5)

// Assertions
expect(result).toBe(10)
expect(mockFn).toHaveBeenCalledWith(5)
expect(mockFn).toHaveBeenCalledTimes(1)

When to Mock vs Use Real Dependencies

| Scenario | Unit Test (Mock) | Integration Test (Real) |
|---|---|---|
| Database queries | ✅ Mock repository interface | ✅ Use real test database |
| HTTP API calls | ✅ Mock HTTP client | ✅ Use real HTTP server (testcontainers) |
| File system | ✅ Mock file reader interface | ✅ Use temp directory |
| External APIs (Stripe, fiskaly) | ✅ Mock client interface | ✅ Use sandbox/test mode |
| Business logic | ✅ Test directly (no mocking needed) | N/A |

Code Coverage Requirements

Minimum Coverage Threshold

  • Required: 50% code coverage per module (core-api, management-plane, shared, telemetry)
  • Enforcement: CI pipeline fails builds below threshold
  • Measurement: Line coverage (not branch coverage)
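The threshold check can be reproduced locally by parsing the `total:` line of `go tool cover -func=coverage.out` output. A hedged sketch follows; the real CI gate may be implemented differently:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseTotal extracts the percentage from the final line of
// `go tool cover -func` output, e.g. "total: (statements) 85.2%".
func parseTotal(out string) (float64, error) {
	for _, line := range strings.Split(strings.TrimSpace(out), "\n") {
		if strings.HasPrefix(line, "total:") {
			fields := strings.Fields(line)
			pct := strings.TrimSuffix(fields[len(fields)-1], "%")
			return strconv.ParseFloat(pct, 64)
		}
	}
	return 0, fmt.Errorf("no total line found")
}

func main() {
	const threshold = 50.0 // enforced minimum per module
	out := "total:\t(statements)\t85.2%"
	pct, err := parseTotal(out)
	if err != nil {
		panic(err)
	}
	fmt.Printf("coverage %.1f%% (threshold %.1f%%): pass=%v\n", pct, threshold, pct >= threshold)
}
```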

Generating Coverage Reports

Go:

# Generate coverage report
go test -cover -coverprofile=coverage.out ./...

Expected output:

ok      github.com/zyntem/fiscalization/internal/repository    0.234s  coverage: 85.2% of statements
ok      github.com/zyntem/fiscalization/internal/service       0.189s  coverage: 82.7% of statements
ok      github.com/zyntem/fiscalization/pkg/validator          0.156s  coverage: 91.3% of statements

# View HTML report (opens in browser)
go tool cover -html=coverage.out

# View in terminal
go tool cover -func=coverage.out

Terminal coverage output example:

github.com/zyntem/fiscalization/internal/repository/account.go:15:    Create         85.7%
github.com/zyntem/fiscalization/internal/repository/account.go:28:    FindByEmail   100.0%
github.com/zyntem/fiscalization/internal/repository/account.go:42:    Update         80.0%
total:                                                                (statements)   85.2%

TypeScript:

# Generate coverage report with Vitest
npm test -- --coverage

Expected output:

 ✓ src/services/accountService.test.ts (4 tests) 234ms
 ✓ src/validators/emailValidator.test.ts (2 tests) 89ms

 Test Files  2 passed (2)
      Tests  6 passed (6)
   Duration  323ms

 % Coverage report from c8
--------------------|---------|----------|---------|---------|-------------------
File                | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s
--------------------|---------|----------|---------|---------|-------------------
All files           |   84.21 |    78.57 |   91.67 |   84.21 |
 accountService.ts  |   87.50 |    80.00 |  100.00 |   87.50 | 23-25
 emailValidator.ts  |   92.31 |    85.71 |  100.00 |   92.31 | 18
--------------------|---------|----------|---------|---------|-------------------

# Open HTML report
open coverage/index.html

Coverage Best Practices

  1. Focus on meaningful coverage: the 50% CI threshold is a floor, not a target. Aim for high-value coverage of critical paths.
  2. Don't test trivial code: Simple getters/setters may not need explicit tests.
  3. Test business logic thoroughly: Core domain logic should approach 100% coverage.
  4. Integration tests count: Coverage includes all test types (unit + integration).

Test Data Best Practices

1. Use Test Fixtures and Builders

Problem: Hardcoding test data leads to brittle tests and duplicated setup code.

Solution: Use the Builder pattern for flexible test data creation.

Go Example (using testutil fixtures):

import "github.com/zyntem/fiscalization/internal/testutil"

// Simple fixture with defaults
account := testutil.NewAccountBuilder().Build()

// Customized fixture
account = testutil.NewAccountBuilder().
    WithEmail("custom@example.com").
    WithName("Custom User").
    WithCreatedAt(time.Now().Add(-24 * time.Hour)).
    Build()
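Internally, such a builder is just a struct with chainable setters that returns a value on Build, so every test gets independent data. The field set and defaults below are assumptions for illustration, not the real testutil implementation:

```go
package main

import (
	"fmt"
	"time"
)

// Account is a simplified stand-in for the real domain type.
type Account struct {
	Email     string
	Name      string
	CreatedAt time.Time
}

// AccountBuilder accumulates overrides on top of safe defaults.
type AccountBuilder struct {
	account Account
}

func NewAccountBuilder() *AccountBuilder {
	return &AccountBuilder{account: Account{
		Email:     "fixture@example.com", // assumed default; override per scenario
		Name:      "Fixture User",
		CreatedAt: time.Now(),
	}}
}

func (b *AccountBuilder) WithEmail(email string) *AccountBuilder {
	b.account.Email = email
	return b
}

func (b *AccountBuilder) WithName(name string) *AccountBuilder {
	b.account.Name = name
	return b
}

func (b *AccountBuilder) WithCreatedAt(t time.Time) *AccountBuilder {
	b.account.CreatedAt = t
	return b
}

// Build returns the account by value, so each call yields independent data.
func (b *AccountBuilder) Build() Account {
	return b.account
}

func main() {
	a := NewAccountBuilder().WithEmail("custom@example.com").Build()
	fmt.Println(a.Email, a.Name) // prints: custom@example.com Fixture User
}
```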

TypeScript Example (using test factories):

import { createTestAccount } from './helpers/factories'

// Simple fixture with defaults
const account = createTestAccount()

// Customized fixture
const customAccount = createTestAccount({
    email: 'custom@example.com',
    name: 'Custom User',
    createdAt: new Date('2025-01-01')
})

2. Avoid Hardcoded IDs

Problem: Hardcoded IDs create test interdependencies and break when database state changes.

Solution: Use generated IDs or fixtures that create data dynamically.

Bad:

account, _ := repo.FindByID(ctx, "123")  // Assumes ID "123" exists

Good:

// Create test account first, use its ID
account := testutil.CreateTestAccount(t, db)
found, _ := repo.FindByID(ctx, account.ID)

3. Clean Up Test Data After Each Test

Problem: Tests that don't clean up can cause flaky tests and unexpected failures.

Solution: Use defer cleanup functions or teardown hooks.

Go Example:

func TestAccountRepository_Create(t *testing.T) {
    db, cleanup := testutil.SetupTestDB(t)
    defer cleanup() // Always cleanup, even if test fails

    // Test code here...
}

TypeScript Example:

import { afterEach } from 'vitest'

afterEach(async () => {
    await cleanupTestData()
})

4. Use Transactions for Test Isolation (Advanced)

Problem: Database tests can interfere with each other when run in parallel.

Solution: Wrap each test in a transaction and rollback after completion.

Go Example:

func TestAccountRepository_Create_WithTransaction(t *testing.T) {
    db := testutil.GetTestDB()

    // Begin transaction
    tx, err := db.Begin()
    require.NoError(t, err)
    defer tx.Rollback() // Rollback on test completion

    // Run test using transaction
    repo := NewAccountRepository(tx)
    err = repo.Create(context.Background(), &Account{Email: "test@example.com"})
    assert.NoError(t, err)

    // No manual cleanup needed - rollback handles it
}

5. Use Descriptive Test Data

Problem: Generic test data like "test@test.com" makes debugging harder.

Solution: Use descriptive, scenario-specific test data.

Bad:

email := "test@test.com"

Good:

email := "duplicate_account_test@example.com"  // Clearly indicates test scenario

6. Avoid Shared Mutable State

Problem: Tests sharing mutable state can cause race conditions and flaky tests.

Solution: Create fresh fixtures for each test.

Bad:

var sharedAccount *Account // Shared across tests (dangerous!)

func TestA(t *testing.T) {
    sharedAccount.Email = "a@example.com"
    // ...
}

func TestB(t *testing.T) {
    sharedAccount.Email = "b@example.com" // Mutates shared state!
    // ...
}

Good:

func TestA(t *testing.T) {
    account := testutil.NewAccountBuilder().Build() // Fresh for each test
    account.Email = "a@example.com"
    // ...
}

func TestB(t *testing.T) {
    account := testutil.NewAccountBuilder().Build() // Independent
    account.Email = "b@example.com"
    // ...
}

Test Execution Commands

We provide a comprehensive Makefile with convenient commands for running tests, managing databases, and working with Docker services.

Quick Reference

# Show all available commands
make help

# Run all tests (Go + TypeScript)
make test

# Run only unit tests (fast, no database needed)
make test-unit

# Run only integration tests (requires test database)
make test-integration

# Generate coverage reports and open HTML viewer
make test-coverage

# Run tests in watch mode (auto-reload on file changes)
make test-watch

Testing Commands

make test

Runs all tests for both Go and TypeScript.

When to use: Before committing code, during CI/CD pipeline, final validation.

What it runs:

  • All Go tests: go test ./... -v
  • All TypeScript tests: npm test

make test-unit

Runs unit tests only (skips integration tests that require database).

When to use: During development for fast feedback loop, testing business logic changes.

What it runs:

  • Go unit tests: go test ./... -v -short
  • TypeScript unit tests: npm test

Note: Integration tests in Go are skipped when testing.Short() returns true.

make test-integration

Runs integration tests only (requires test database to be running).

When to use: After database schema changes, testing repository layer, verifying API endpoints.

Prerequisites: Test database must be running (make docker-up)

What it runs:

  • Go integration tests: go test ./... -v -run Integration
  • TypeScript integration tests: npm test -- --run

make test-coverage

Generates code coverage reports in HTML format and opens them in your browser.

When to use: Checking coverage metrics, identifying untested code, preparing for code review.

What it does:

  1. Runs Go tests with coverage: go test ./... -cover -coverprofile=coverage.out
  2. Generates Go HTML report: go tool cover -html=coverage.out
  3. Runs TypeScript tests with coverage: npm test -- --coverage
  4. Opens Go coverage report in browser

Output locations:

  • Go coverage: coverage/go-coverage.html
  • TypeScript coverage: coverage/index.html
  • Raw Go coverage: coverage.out

make test-watch

Runs tests in watch mode - automatically reruns tests when files change.

When to use: During active development, TDD (Test-Driven Development), refactoring sessions.

What it runs:

  • TypeScript watch mode: npm run test:watch

Note: Go watch mode can be added later with tools like entr or reflex if needed.

Database Commands

make test-db-migrate

Runs database migrations on the test database.

When to use: After pulling new migrations, setting up test database for first time.

Status: To be implemented in Story 1.2 (PostgreSQL Schema and Migrations).

Future implementation:

migrate -path ./migrations -database "$DATABASE_TEST_URL" up

make test-db-reset

Drops and recreates the test database, giving you a clean slate.

When to use: Test database is in inconsistent state, need fresh start, after major schema changes.

Warning: This will delete all test data. Development database is not affected.

What it does:

  1. Drops fiscalization_test database
  2. Creates new fiscalization_test database
  3. Reminds you to run migrations

After resetting: Run make test-db-migrate to apply schema.

Docker Commands

make docker-up

Starts all Docker services (development and test PostgreSQL databases).

When to use: At start of development session, before running integration tests.

What it starts:

  • Development database (port 5432)
  • Test database (port 5433)

What it does:

  1. Starts Docker Compose services in detached mode
  2. Waits for databases to become healthy
  3. Displays connection strings

Connection strings:

Dev:  postgres://fiscalization:fiscalization_dev@localhost:5432/fiscalization_dev
Test: postgres://fiscalization_test:fiscalization_test@localhost:5433/fiscalization_test

make docker-down

Stops all Docker services.

When to use: End of development session, freeing up system resources.

What it does:

  • Stops all Docker Compose containers
  • Preserves database volumes (data is not lost)

Direct Commands (Without Makefile)

If you prefer running commands directly:

Go tests:

# All tests
go test ./...

# Verbose output
go test ./... -v

# Specific package
go test ./internal/testutil

# Run specific test
go test ./internal/testutil -run TestSetupTestDB

# Coverage
go test ./... -cover -coverprofile=coverage.out
go tool cover -html=coverage.out

TypeScript tests:

# All tests
npm test

# Watch mode
npm run test:watch

# Coverage
npm run test:coverage

# UI mode (visual test runner)
npm run test:ui

Docker:

# Start services
docker-compose up -d

# Stop services
docker-compose down

# View logs
docker-compose logs -f postgres-test

# Check service status
docker-compose ps

Troubleshooting

Go Tests Not Discovered

Symptom: go test reports "no tests to run"

Solution:

  1. Verify file ends with _test.go
  2. Verify functions start with Test and accept *testing.T
  3. Verify package declaration matches the directory

Vitest Tests Not Discovered

Symptom: npm test doesn't find test files

Solution:

  1. Verify file ends with .test.ts or .test.tsx
  2. Check vitest.config.ts for custom include/exclude patterns
  3. Ensure file is not in node_modules/ or other excluded directory

Tests Pass Locally But Fail in CI

Symptom: Tests pass on your machine but fail in GitHub Actions or other CI

Cause: Environment differences (timezone, database version, locale, etc.)

Solution:

# Use Docker to match CI environment
docker-compose up -d postgres-test

# Run tests with same environment variables as CI
export DATABASE_TEST_URL="postgres://postgres:postgres@localhost:5433/fiscalization_test"
make test

Flaky Tests (Intermittent Failures)

Symptom: Tests sometimes pass, sometimes fail

Common Causes:

  • Race conditions between goroutines
  • Timing issues (async operations)
  • External dependencies (network, filesystem)
  • Global state leaking between tests

Solutions:

Go:

// Add explicit waits for async operations
time.Sleep(100 * time.Millisecond) // Not ideal, but sometimes necessary

// Use channels for synchronization
done := make(chan bool)
go func() {
    // async work
    done <- true
}()
<-done // Wait for completion

// Avoid global state
// Bad:  var sharedCounter int
// Good: create new instances for each test

TypeScript:

// React Testing Library: Use waitFor
import { waitFor } from '@testing-library/react'

await waitFor(() => {
    expect(result.current.isSuccess).toBe(true)
})

// Vitest: Use vi.useFakeTimers for time-dependent tests
vi.useFakeTimers()
vi.setSystemTime(new Date('2025-01-01'))
// ... test code ...
vi.useRealTimers()

Slow Tests

Symptom: Test suite takes too long to run (> 1 minute for Epic 1)

Common Causes:

  • Too many integration tests (use more unit tests)
  • Inefficient test database setup
  • Not running tests in parallel

Solutions:

# Skip integration tests locally with -short flag
go test -short ./...

# Run tests in parallel (Go)
go test -parallel 4 ./...

# Only run changed tests (TypeScript)
npm test -- --changed

Optimize test database:

// Reuse database connection across tests
var testDB *sql.DB

func TestMain(m *testing.M) {
    // Setup once
    testDB = setupTestDB()
    code := m.Run()
    // Cleanup once
    testDB.Close()
    os.Exit(code)
}

Coverage Drops Below Threshold

Symptom: CI fails with coverage below threshold

Solution:

  1. Identify uncovered code: go tool cover -html=coverage.out
  2. Add tests for critical paths first
  3. Consider excluding generated code:
// Add to .coverignore or exclude in coverage config
**/generated/*.go
**/*_mock.go

Contract Testing with Schemathesis

Schemathesis validates the API against its OpenAPI spec by generating requests from the schema and checking responses.

Running Locally

# Install
pip install schemathesis

# Run against a local server (start the API first)
schemathesis run http://localhost:8080/docs/openapi.yaml \
    --checks all \
    --max-response-time 10000 \
    --hypothesis-max-examples 50

CI runs Schemathesis in the contract-test job after the build step, across three testing phases: examples, coverage, and fuzzing. Results are uploaded as a JUnit XML artifact.

OpenAPI Linting with Spectral

Spectral lints docs/openapi.yaml against the standard spectral:oas ruleset.

Running Locally

# Run the linter (same command CI uses)
npx --yes @stoplight/spectral-cli@latest lint docs/openapi.yaml

Configuration lives in .spectral.yaml at the repo root. CI runs Spectral in the lint job alongside gofmt.


Non-Functional Testing

This document covers functional testing (unit, integration, E2E). For non-functional testing requirements, see:

  • Performance Testing: See PERFORMANCE-TESTING.md (to be created in Epic 4)

    • Load testing for transaction submission endpoints
    • Database query performance benchmarks
    • API response time SLAs
  • Security Testing: See SECURITY-TESTING.md (to be created in Epic 3)

    • Penetration testing for API authentication
    • Vulnerability scanning (OWASP Top 10)
    • Security audit trail validation
  • Compliance Testing: See Architecture: Compliance

    • GDPR data handling verification
    • Tax authority audit trail requirements
    • Data retention policy validation

Non-functional testing plans will be developed during respective epics when these concerns become critical to implementation.


Additional Resources

Official Documentation

Best Practices

Project-Specific Documentation

Tools & Utilities


Document Changelog

| Version | Date | Changes | Author |
|---|---|---|---|
| 1.1.0 | 2025-11-13 | Added database migration tool (golang-migrate), mockery for Go mocking, integration test tagging pattern, non-functional testing cross-references, test database architecture diagram, command output examples, and document changelog | Engineering Team |
| 1.0.0 | 2025-11-13 | Initial comprehensive testing guide covering testing framework stack, epic-specific strategies, Go/TypeScript standards, integration testing, test database setup, mocking strategies, coverage requirements, and troubleshooting | Engineering Team |

Next Steps

  1. Story 1.0b Complete - Testing framework setup done
  2. Story 1.2 - Write first unit tests for PostgreSQL schema validation
  3. Epic 1 Goal - Achieve 80% code coverage baseline
  4. Epic 2 - Add integration tests for Spain adapter and database operations
  5. Epic 3+ - Add E2E tests for critical user workflows