An API without tests is like a contract without a signature: it exists in theory but is worthless in practice. Tests are not just a quality attribute; they are a fundamental part of API design itself. They document expected behavior, prevent regressions, and build trust with consumers.
## Objectives
After this article you will be able to:
- Define a test strategy for APIs that covers all relevant levels
- Implement contract tests that catch breaking changes before release
- Establish quality gates in CI/CD that block untested code
- Integrate security and performance tests into the development cycle
## Key questions
- Which tests are must-haves before go-live?
- How do we check schema and contract compatibility?
- Which quality gates do we need in the CI/CD pipeline?
## The test pyramid for APIs
The classic test pyramid applies to APIs as well, but with API-specific characteristics:
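As a rough sketch, the levels can be pictured like this, with few slow end-to-end tests at the top and many fast unit tests at the base:

```mermaid
flowchart TB
    e2e["E2E: few, slow, whole system"] --> contract["Contract: provider/consumer compatibility"]
    contract --> integration["Integration: DB, cache, downstream APIs"]
    integration --> unit["Unit: many, fast, isolated logic"]
```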
## Unit tests: the base
Unit tests verify isolated business logic without external dependencies.
### What unit tests should cover
| Area | Example | Priority |
|---|---|---|
| Validation logic | Email format, required fields | Must-have |
| Business rules | Price calculation, status transitions | Must-have |
| Error mapping | Exception → HTTP status | Must-have |
| Serialization | DTO ↔ domain model | Should-have |
| Edge cases | Boundary values, empty lists | Should-have |
### Example: testing validation logic
```typescript
// Assumed import path for the validator under test
import {validateOrder} from './order-validator';

describe('OrderValidator', () => {
  describe('validateOrder', () => {
    it('should reject order with negative quantity', () => {
      const order = {product_id: 'abc', quantity: -1};
      const result = validateOrder(order);
      expect(result.isValid).toBe(false);
      expect(result.errors).toContainEqual({
        field: 'quantity',
        code: 'INVALID_VALUE',
        message: 'Quantity must be positive'
      });
    });

    it('should accept valid order', () => {
      const order = {product_id: 'abc', quantity: 5};
      const result = validateOrder(order);
      expect(result.isValid).toBe(true);
      expect(result.errors).toHaveLength(0);
    });
  });
});
```
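The error-mapping row in the coverage table deserves the same treatment. A minimal sketch of such a mapper and what a test would pin down — `NotFoundError`, `ValidationError`, and the concrete status mapping are illustrative assumptions, not fixed names from this series:

```typescript
// Maps domain exceptions to HTTP status codes (illustrative names).
class NotFoundError extends Error {}
class ValidationError extends Error {}

function toHttpStatus(err: Error): number {
  if (err instanceof NotFoundError) return 404;   // resource missing
  if (err instanceof ValidationError) return 400; // bad input
  return 500;                                     // unknown → server error
}
```

A unit test per mapping ensures that a refactoring which silently turns a 404 into a 500 fails the build instead of reaching consumers.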
## Integration tests: verifying the connections
Integration tests verify the interplay with external systems.
### Testable integrations
### Example: database integration with Testcontainers
```typescript
import {PostgreSqlContainer, StartedPostgreSqlContainer} from '@testcontainers/postgresql';
// Assumed import path for the repository under test
import {UserRepository} from './user-repository';

describe('UserRepository', () => {
  let container: StartedPostgreSqlContainer;
  let repository: UserRepository;

  beforeAll(async () => {
    container = await new PostgreSqlContainer()
      .withDatabase('testdb')
      .start();
    repository = new UserRepository(container.getConnectionUri());
    await repository.migrate();
  });

  afterAll(async () => {
    await container.stop();
  });

  it('should persist and retrieve user', async () => {
    const user = {email: 'test@example.com', name: 'Test User'};
    const created = await repository.create(user);
    const found = await repository.findById(created.id);
    expect(found).toEqual(expect.objectContaining(user));
  });

  it('should enforce unique email constraint', async () => {
    const user = {email: 'unique@example.com', name: 'First'};
    await repository.create(user);
    await expect(repository.create({...user, name: 'Second'}))
      .rejects.toThrow('UNIQUE_VIOLATION');
  });
});
```
### Example: downstream API with a mock
```typescript
// PaymentService and MockServer are assumed to come from the project under test
describe('PaymentService', () => {
  let mockServer: MockServer;
  let service: PaymentService;

  beforeEach(() => {
    mockServer = new MockServer(3001);
    service = new PaymentService('http://localhost:3001');
  });

  afterEach(() => {
    mockServer.stop();
  });

  it('should handle payment provider timeout', async () => {
    mockServer.mock({
      method: 'POST',
      path: '/v1/charges',
      delay: 10000 // provoke a timeout
    });
    await expect(service.charge({amount: 100}))
      .rejects.toThrow('PAYMENT_PROVIDER_TIMEOUT');
  });

  it('should retry on 503', async () => {
    let attempts = 0;
    mockServer.mock({
      method: 'POST',
      path: '/v1/charges',
      handler: () => {
        attempts++;
        if (attempts < 3) {
          return {status: 503};
        }
        return {status: 200, body: {id: 'ch_123'}};
      }
    });
    const result = await service.charge({amount: 100});
    expect(result.id).toBe('ch_123');
    expect(attempts).toBe(3);
  });
});
```
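The retry test above exercises client-side retry logic. For reference, a minimal sketch of what such logic might look like inside a service like `PaymentService` — the function name, attempt count, and linear backoff are assumptions, not the series' canonical implementation:

```typescript
// Retries an async operation on failure, with linear backoff between attempts.
// maxAttempts and backoffMs are illustrative defaults.
async function withRetry<T>(
  op: () => Promise<T>,
  maxAttempts = 3,
  backoffMs = 100
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await op();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts) {
        // wait before the next attempt
        await new Promise((resolve) => setTimeout(resolve, backoffMs * attempt));
      }
    }
  }
  throw lastError;
}
```

Keeping the retry policy in one helper like this is what makes the "should retry on 503" expectation testable in isolation.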
## Contract tests: the consensus contract
Contract tests safeguard compatibility between the API provider and its consumers.
### Schema-based contract tests
These verify that API responses conform to the OpenAPI schema:
```typescript
import request from 'supertest';
import {OpenAPIValidator} from 'express-openapi-validator';
// app: the Express application under test (assumed)

describe('API Schema Compliance', () => {
  const validator = new OpenAPIValidator('./openapi.yaml');

  it('GET /users should match schema', async () => {
    const response = await request(app).get('/users');
    const errors = validator.validate(
      'GET',
      '/users',
      response.status,
      response.body
    );
    expect(errors).toHaveLength(0);
  });

  it('POST /users error should match Problem Details', async () => {
    const response = await request(app)
      .post('/users')
      .send({invalid: 'data'});
    expect(response.status).toBe(400);
    const errors = validator.validate(
      'POST',
      '/users',
      response.status,
      response.body
    );
    expect(errors).toHaveLength(0);
  });
});
```
### Consumer-driven contract tests (CDCT)
Consumers define their expectations; providers verify them:
#### Consumer side (Pact)
```typescript
import {PactV3, MatchersV3} from '@pact-foundation/pact';

const provider = new PactV3({
  consumer: 'Frontend',
  provider: 'UserAPI'
});

describe('User API Contract', () => {
  it('should return user by ID', async () => {
    await provider
      .given('user with ID 123 exists')
      .uponReceiving('a request for user 123')
      .withRequest({
        method: 'GET',
        path: '/users/123',
        headers: {'Accept': 'application/json'}
      })
      .willRespondWith({
        status: 200,
        headers: {'Content-Type': 'application/json'},
        body: MatchersV3.like({
          id: '123',
          email: MatchersV3.email(),
          name: MatchersV3.string('John Doe'),
          created_at: MatchersV3.iso8601DateTime()
        })
      });

    await provider.executeTest(async (mockServer) => {
      const client = new UserClient(mockServer.url);
      const user = await client.getUser('123');
      expect(user.id).toBe('123');
      expect(user.email).toContain('@');
    });
  });
});
```
#### Provider side (verification)
```typescript
import {Verifier} from '@pact-foundation/pact';

describe('Pact Verification', () => {
  it('should fulfill all consumer contracts', async () => {
    const verifier = new Verifier({
      providerBaseUrl: 'http://localhost:3000',
      pactBrokerUrl: process.env.PACT_BROKER_URL,
      provider: 'UserAPI',
      providerVersion: process.env.GIT_SHA,
      publishVerificationResult: true,
      stateHandlers: {
        'user with ID 123 exists': async () => {
          await testDb.users.insert({id: '123', email: 'john@example.com'});
        }
      }
    });
    await verifier.verifyProvider();
  });
});
```
### Breaking change detection
Automated checks for breaking changes in the OpenAPI schema:
```yaml
# .github/workflows/api-check.yml
name: API Breaking Change Check
on:
  pull_request:
    paths:
      - 'openapi/**'
jobs:
  check-breaking-changes:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Check for breaking changes
        run: |
          git show origin/main:openapi/api.yaml > /tmp/openapi-main.yaml
          npx oasdiff breaking /tmp/openapi-main.yaml openapi/api.yaml
```
Breaking changes that are detected:

| Category | Example | Severity |
|---|---|---|
| Endpoint removed | `DELETE /users/{id}` missing | Breaking |
| Required field added | New required field in the request | Breaking |
| Response type changed | `string` → `number` | Breaking |
| Status code removed | 404 no longer documented | Breaking |
| Enum value removed | Status `pending` removed | Breaking |
| Optional → required | Field becomes mandatory | Breaking |
| New optional field | New response field | Non-breaking |
| New status code | 429 added | Non-breaking |
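To make the "optional → required" row concrete, here is a minimal schema diff of the kind `oasdiff` flags as breaking. The `User` schema is a hypothetical example: clients that previously omitted `name` would suddenly start receiving 400 responses.

```yaml
# Before: name is optional (non-breaking for existing clients)
User:
  type: object
  required: [email]
  properties:
    email: {type: string}
    name: {type: string}

# After: name is required (breaking)
User:
  type: object
  required: [email, name]
  properties:
    email: {type: string}
    name: {type: string}
```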
## Security tests
Security is not a phase but a continuous process.
### SAST (Static Application Security Testing)
SAST finds vulnerabilities in the source code:
```yaml
# .github/workflows/security.yml
security-scan:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - name: Run Semgrep
      uses: returntocorp/semgrep-action@v1
      with:
        config: p/owasp-top-ten,p/security-audit,p/secrets
    - name: Initialize CodeQL
      uses: github/codeql-action/init@v3
      with:
        languages: javascript,typescript
    - name: Autobuild
      uses: github/codeql-action/autobuild@v3
    - name: Run CodeQL
      uses: github/codeql-action/analyze@v3
```
### DAST (Dynamic Application Security Testing)
DAST tests the running API for vulnerabilities:
```yaml
dast-scan:
  runs-on: ubuntu-latest
  services:
    api:
      image: $
      ports:
        - 3000:3000
  steps:
    - name: Run OWASP ZAP
      uses: zaproxy/action-api-scan@v0.7.0
      with:
        target: 'http://localhost:3000/openapi.yaml'
        fail_action: true
        rules_file_name: 'zap-rules.conf'
```
### Security test checklist
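One concrete item such a checklist typically contains: every response should carry baseline security headers. A minimal, framework-free sketch of a reusable check — the required header set is an assumption and should be aligned with your own policy:

```typescript
// Returns the security headers that are missing from a response.
// The required set below is an illustrative baseline, not a complete policy.
const REQUIRED_SECURITY_HEADERS = [
  'strict-transport-security',
  'x-content-type-options',
  'cache-control',
];

function missingSecurityHeaders(headers: Record<string, string>): string[] {
  // Header names are case-insensitive, so compare in lowercase
  const present = new Set(Object.keys(headers).map((h) => h.toLowerCase()));
  return REQUIRED_SECURITY_HEADERS.filter((h) => !present.has(h));
}
```

A test can then assert `missingSecurityHeaders(response.headers)` is empty for every endpoint, turning the checklist item into an automated gate.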
## Performance tests
Performance tests validate SLOs under load.
### Types of performance tests
### Example: k6 load test
```javascript
// load-test.js
import http from 'k6/http';
import {check, sleep} from 'k6';
import {Rate, Trend} from 'k6/metrics';

const errorRate = new Rate('errors');
const latency = new Trend('latency_p95');

export const options = {
  stages: [
    {duration: '2m', target: 50},  // Ramp up
    {duration: '5m', target: 100}, // Steady state
    {duration: '2m', target: 0},   // Ramp down
  ],
  thresholds: {
    http_req_duration: ['p(95)<200', 'p(99)<500'], // SLO
    errors: ['rate<0.01'], // <1% error rate
  },
};

export default function () {
  const response = http.get('https://api.example.com/users', {
    headers: {'Authorization': `Bearer ${__ENV.API_TOKEN}`},
  });
  check(response, {
    'status is 200': (r) => r.status === 200,
    'response time < 200ms': (r) => r.timings.duration < 200,
  });
  errorRate.add(response.status !== 200);
  latency.add(response.timings.duration);
  sleep(1);
}
```
### Performance SLOs as a quality gate
```yaml
# CI/CD integration
performance-test:
  runs-on: ubuntu-latest
  steps:
    - name: Run k6 (fails on threshold breach)
      uses: grafana/k6-action@v0.3.1
      with:
        filename: load-test.js
      env:
        K6_CLOUD_TOKEN: $
```
## CI/CD quality gates
Quality gates are automated checkpoints that block untested or insecure code.
### Gate hierarchy
### Example: GitHub Actions pipeline
```yaml
name: API Quality Gates
on:
  pull_request:
    branches: [ main ]
  push:
    branches: [ main ]
jobs:
  # Gate 1: Code quality
  lint-and-typecheck:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm ci
      - run: npm run lint
      - run: npm run typecheck

  # Gate 2: Unit tests + coverage
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm ci
      - run: npm run test:unit -- --coverage
      - name: Check coverage threshold
        run: |
          COVERAGE=$(cat coverage/coverage-summary.json | jq '.total.lines.pct')
          if (( $(echo "$COVERAGE < 80" | bc -l) )); then
            echo "Coverage $COVERAGE% is below 80% threshold"
            exit 1
          fi

  # Gate 3: Integration tests
  integration-tests:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_PASSWORD: test
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm ci
      - run: npm run test:integration

  # Gate 4: Contract tests
  contract-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm ci
      - run: npm run test:contract
      - name: Publish contracts
        if: github.ref == 'refs/heads/main'
        run: npm run pact:publish

  # Gate 5: API schema validation
  schema-validation:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Validate OpenAPI
        uses: char0n/swagger-editor-validate@v1
        with:
          definition-file: openapi/api.yaml
      - name: Check breaking changes
        if: github.event_name == 'pull_request'
        run: |
          git show origin/main:openapi/api.yaml > /tmp/openapi-main.yaml
          npx oasdiff breaking /tmp/openapi-main.yaml openapi/api.yaml

  # Gate 6: Security scan
  security-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Semgrep
        uses: returntocorp/semgrep-action@v1
      - name: Check dependencies
        run: npm audit --audit-level=high

  # Final gate: all checks must pass
  all-checks-pass:
    needs:
      - lint-and-typecheck
      - unit-tests
      - integration-tests
      - contract-tests
      - schema-validation
      - security-scan
    runs-on: ubuntu-latest
    steps:
      - run: echo "All quality gates passed"
```
## Chaos and failure tests
Chaos engineering validates resilience under realistic failure conditions.
### Failure injection patterns
| Failure type | Test scenario | Expected behavior |
|---|---|---|
| Network latency | 500 ms delay to the DB | Timeout, graceful degradation |
| Connection drop | Drop the DB connection | Reconnect, circuit breaker |
| Disk full | Log volume full | Graceful error, no crashes |
| Memory pressure | 90% memory usage | GC, no OOM kills |
| Downstream 503 | Payment API down | Retry, fallback, queue |
### Example: chaos test with Toxiproxy
```typescript
describe('Chaos Tests', () => {
  let toxiproxy: ToxiproxyClient;

  beforeAll(async () => {
    toxiproxy = new ToxiproxyClient('localhost', 8474);
  });

  it('should handle database latency', async () => {
    const proxy = await toxiproxy.get('postgres');
    // Inject 500ms latency
    await proxy.addToxic({
      name: 'latency',
      type: 'latency',
      attributes: {latency: 500}
    });

    const start = Date.now();
    const response = await request(app).get('/users');
    const duration = Date.now() - start;

    // Should time out gracefully, not hang indefinitely
    expect(response.status).toBe(503);
    expect(duration).toBeLessThan(3000); // Timeout < 3s

    await proxy.removeToxic('latency');
  });

  it('should survive downstream failure', async () => {
    const proxy = await toxiproxy.get('payment-api');
    // Simulate a complete outage
    await proxy.disable();

    const response = await request(app)
      .post('/orders')
      .send({items: [{id: '1', quantity: 1}]});

    // Should accept the order and queue the payment
    expect(response.status).toBe(202);
    expect(response.body.payment_status).toBe('PENDING');

    await proxy.enable();
  });
});
```
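The expected behaviors in the failure-injection table (circuit breaker, fallback, queue) imply production code that these chaos tests exercise. As one example, a minimal circuit-breaker sketch — the threshold and the `CIRCUIT_OPEN` error name are illustrative assumptions, and real implementations add a reset timeout and a half-open state:

```typescript
// Minimal circuit breaker: opens after `threshold` consecutive failures
// and then rejects immediately instead of calling the downstream.
class CircuitBreaker {
  private failures = 0;

  constructor(private threshold = 3) {}

  async call<T>(op: () => Promise<T>): Promise<T> {
    if (this.failures >= this.threshold) {
      throw new Error('CIRCUIT_OPEN'); // fail fast, protect the downstream
    }
    try {
      const result = await op();
      this.failures = 0; // a success resets the failure count
      return result;
    } catch (err) {
      this.failures++;
      throw err;
    }
  }
}
```

A chaos test like "should survive downstream failure" then verifies that the breaker opens under sustained failures instead of letting requests pile up.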
## Rules and anti-patterns
### Do
- Test at every level: unit, integration, contract, E2E
- Contract tests before releases: they prevent breaking changes
- Security tests in CI: SAST on every PR, DAST before deploy
- Performance baselines: compare against previous versions
- Deterministic tests: no flakiness from timing or ordering
- Fast feedback: unit tests < 1 min, integration < 5 min
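The deterministic-tests rule often comes down to injecting time instead of reading it. A small sketch — `isExpired` and the injected `now` parameter are illustrative names:

```typescript
// A time-dependent business rule: a token is expired once its deadline passes.
// Injecting `now` makes the rule testable without sleeping or mocking globals.
function isExpired(expiresAt: Date, now: () => Date = () => new Date()): boolean {
  return now().getTime() >= expiresAt.getTime();
}
```

In tests, a fixed clock replaces the default, so the assertion never depends on when the suite happens to run.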
### Don't
- Testing only the happy path: error cases matter more
- E2E as the only tests: too slow, too fragile
- Mocks without contracts: mock drift creates false confidence
- Manual quality gates: humans forget, machines don't
- Tests after the feature: tests are part of the feature
- Ignoring flaky tests: a flaky test is a broken test
## Artifact: test strategy matrix
# Test Strategy Matrix
## Test levels and responsibilities
| Ebene | Scope | Tools | Gate | Threshold |
|---------------|-------------------------|---------------------|------------|-----------------|
| Unit | Business Logic | Jest, pytest | PR | 80% Cov |
| Integration | DB, cache, AuthZ | Testcontainers | PR | Critical paths |
| Contract | API Schema | Pact, oasdiff | PR | 100% Pass |
| Security/SAST | Code Vulnerabilities | Semgrep, CodeQL | PR | 0 High |
| Security/DAST | Runtime Vulnerabilities | OWASP ZAP | Pre-Deploy | 0 High |
| Performance | SLO Compliance | k6, Gatling | Pre-Deploy | p95 < SLO |
| Chaos | Resilience | Toxiproxy, Litmus | Quarterly | Documented |
| E2E | Critical Journeys | Playwright, Cypress | Pre-Deploy | 100% Pass |
## CI/CD Pipeline Stages
```mermaid
flowchart LR
commit[Commit] --> lint[Lint] --> unit[Unit] --> integration[Integration]
integration --> contract[Contract] --> security[Security] --> build[Build]
build --> staging[Deploy Staging]
staging --> performance[Performance] --> dast[DAST] --> smoke[Smoke]
smoke --> prod[Deploy Prod]
```
## Test data strategy
| Environment | Datenquelle | PII-Handling |
|-------------|----------------------|----------------------|
| Unit | In-Memory Fixtures | Synthetic |
| Integration | Testcontainers | Synthetic |
| Staging | Anonymized Prod Copy | Masked/Pseudonymized |
| Load Test | Generated Data | Synthetic |
## Quality gate definitions
### PR gate (blocking)
- [ ] All unit tests pass
- [ ] Code coverage >= 80%
- [ ] No linting errors
- [ ] Integration tests pass
- [ ] Contract tests pass
- [ ] No breaking changes in the schema
- [ ] SAST: 0 high/critical findings
- [ ] Dependency audit: 0 high vulnerabilities
### Deploy gate (blocking)
- [ ] All PR gates satisfied
- [ ] DAST scan passed
- [ ] Performance tests: p95 < SLO
- [ ] Smoke tests passed in staging
### Release gate (manual review)
- [ ] All automated gates satisfied
- [ ] Penetration test (if due)
- [ ] Change advisory board approval (if required)
## Checklist
### Must-haves before go-live
- [ ] Unit tests for all business logic (coverage > 80%)
- [ ] Integration tests for the DB, authorization, and critical downstreams
- [ ] Contract tests (schema-based or consumer-driven)
- [ ] Schema validation automated in CI
- [ ] SAST on every PR
- [ ] DAST before every production deploy
- [ ] CI gates that block on test failures
### Should-have
- [ ] Performance tests against defined SLOs
- [ ] Chaos/failure tests for critical paths
- [ ] A documented test data strategy
- [ ] Flaky test detection and quarantine
### Nice-to-have
- [ ] Mutation testing to measure test quality
- [ ] Visual regression tests for the API documentation
- [ ] Automated canary analysis
## What's next
In the next part we turn to operations and deployment: how to ensure environment parity, plan rollbacks, and prepare runbooks for emergencies.
This is part 19 of the API Design series. You can find all parts in the series overview: API Design.