
Testing Guide

This guide covers the testing philosophy, strategy, and practical commands for contributors working on FinFocus Core. The test suite is built on five principles:

  1. Test-Driven Development (TDD): Write tests before implementation.
  2. High Coverage: CI enforces a 60% minimum; aim for 80% overall and 95% on critical paths.
  3. Isolation: Unit tests must not depend on external systems. Use mocks and table-driven patterns.
  4. Integration: Verify component interactions with dedicated integration tests in test/integration/.
  5. Performance: Benchmarks must catch regressions on critical paths.

Unit tests follow the standard Go convention: each foo_test.go file lives beside the foo.go it tests. There is no separate test/unit/ directory.

Additional test infrastructure lives under test/:

Directory           Contents
test/integration/   Cross-component tests (CLI, Engine, Plugin communication).
test/e2e/           End-to-end tests. Separate Go module. Requires AWS + Pulumi CLI.
test/fixtures/      Shared test data: plans, specs, configs, mock responses.
test/mocks/         Mock plugin server implementations.
test/benchmarks/    Performance benchmarks for regression detection.
Run the full unit test suite:

make test

Run the suite with the race detector enabled:

make test-race

Run the integration tests:

make test-integration

E2E tests require a built binary, AWS credentials, and the Pulumi CLI.

make build
export PATH="$HOME/.pulumi/bin:$PATH"
export PULUMI_CONFIG_PASSPHRASE="e2e-test-passphrase"
make test-e2e
Run tests for a single package:

go test -v ./internal/cli/...
go test -v ./internal/engine/...

Run a single test by name:

go test -run TestSpecificFunction ./...

Generate and view a coverage report:

go test -coverprofile=coverage.out ./...
go tool cover -html=coverage.out

Run the linters:

make lint

All tests use github.com/stretchr/testify/require and github.com/stretchr/testify/assert. Do not write manual if x != y { t.Errorf(...) } checks.

import (
    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/require"
)

Use require.* when a failure makes the rest of the test invalid:

result, err := SomeFunction(input)
require.NoError(t, err)
require.NotNil(t, result)

Use assert.* for value comparisons where seeing all failures is helpful:

assert.Equal(t, "expected", result.Name)
assert.Len(t, result.Items, 3)
assert.Contains(t, result.Message, "success")

Prefer table-driven tests for functions with multiple input variations:

func TestFunction_Errors(t *testing.T) {
    tests := []struct {
        name        string
        input       string
        wantErr     bool
        errContains string
    }{
        {"empty input", "", true, "input required"},
        {"invalid format", "bad", true, "invalid format"},
        {"valid input", "good", false, ""},
    }
    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            err := Function(tt.input)
            if tt.wantErr {
                require.Error(t, err)
                assert.Contains(t, err.Error(), tt.errContains)
            } else {
                require.NoError(t, err)
            }
        })
    }
}

Every error return must have a test that triggers it. Priority paths:

  • File I/O errors (missing files, permission denied)
  • Network errors (connection refused, timeout)
  • Validation errors (invalid input, out-of-range values)
  • Resource exhaustion (goroutine leaks, unclosed handles)

Tests that intentionally create failing plugin scenarios must use t.Logf() rather than t.Errorf(), so CI does not flag expected errors as failures:

client, err := pluginhost.NewClient(ctx, launcher, mockPlugin)
if client != nil {
    client.Close()
}
if err != nil {
    t.Logf("Expected failure (handled): %v", err)
}

Use require.Error when an error is required for the test to be valid:

import "github.com/stretchr/testify/require"
_, err := launcher.Start(ctx, "/nonexistent/binary")
require.Error(t, err, "expected error for invalid command")
Scope             Minimum
CI gate           60%
General target    80%
Critical paths    95%

Critical paths include the Engine cost calculation pipeline, Plugin host lifecycle, and CLI command dispatch.