Testing Best Practices

This document outlines the best practices for testing in the Gradiant platform. Following these guidelines will ensure high-quality, maintainable, and effective tests across the codebase.

General Testing Principles

1. Test Pyramid

Follow the test pyramid approach:
  • Unit Tests: The foundation - fast, isolated tests for individual functions and components
  • Integration Tests: Middle layer - testing interactions between components
  • End-to-End Tests: Top layer - testing complete user flows
Aim for a ratio of approximately:
  • 70% unit tests
  • 20% integration tests
  • 10% end-to-end tests

2. Test Coverage

  • Maintain a minimum of 80% code coverage for all production code
  • Aim for 100% coverage of critical paths and business logic
  • Focus on meaningful coverage rather than arbitrary metrics
  • Use vitest --coverage to generate coverage reports; thresholds can be enforced in the Vitest config, as sketched below
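For example, a minimal sketch of enforcing the 80% floor in vitest.config.ts so CI fails when coverage drops (the exact numbers are illustrative; on older Vitest versions the threshold keys sit directly under coverage):
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    coverage: {
      provider: 'v8',
      // Fail the run when coverage drops below the agreed minimum
      thresholds: {
        lines: 80,
        functions: 80,
        branches: 80,
        statements: 80,
      },
    },
  },
})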

3. Test Independence

  • Each test should be independent and not rely on other tests
  • Tests should be able to run in any order
  • Avoid shared mutable state between tests
  • Use proper setup and teardown functions, as shown in the sketch below
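A minimal sketch of isolating shared state with setup and teardown hooks (the in-memory store here is hypothetical):
import { describe, it, expect, beforeEach, afterEach } from 'vitest'

// Hypothetical module-level store that tests could otherwise leak state through
const store = new Map<string, string>()

describe('session store', () => {
  beforeEach(() => {
    // Start every test from a known, empty state
    store.clear()
  })

  afterEach(() => {
    // Tear down anything a test created so the next test is unaffected
    store.clear()
  })

  it('stores a value', () => {
    store.set('user', 'alice')
    expect(store.get('user')).toBe('alice')
  })

  it('starts empty regardless of test order', () => {
    expect(store.size).toBe(0)
  })
})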

4. Test Readability

  • Use descriptive test names that explain the expected behavior
  • Follow the Arrange-Act-Assert (AAA) pattern
  • Keep tests focused on a single behavior or assertion
  • Use helper functions to reduce duplication (see the sketch after this list)
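For instance, a small test-data factory keeps the Arrange step short while the test name and assertion carry the intent (makeUser and formatDisplayName are hypothetical):
import { it, expect } from 'vitest'

// Hypothetical function under test
function formatDisplayName(user: { name: string }): string {
  return user.name.trim()
}

// Hypothetical factory, typically shared across tests via a helpers module
function makeUser(overrides: Partial<{ id: string; name: string; email: string }> = {}) {
  return { id: '123', name: 'Test User', email: 'test@example.com', ...overrides }
}

it('should format the display name', () => {
  // Arrange - only the field this test cares about is spelled out
  const user = makeUser({ name: 'Ada Lovelace' })

  // Act & Assert
  expect(formatDisplayName(user)).toBe('Ada Lovelace')
})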

Unit Testing Best Practices

1. Test Structure

Follow this structure for unit tests:
import { describe, it, expect } from 'vitest'
import { functionToTest } from './module'

describe('functionToTest', () => {
  it('should do something specific', () => {
    // Arrange
    const input = 'test input'

    // Act
    const result = functionToTest(input)

    // Assert
    expect(result).toBe('expected output')
  })

  it('should handle edge cases', () => {
    // Test edge cases
  })

  it('should throw an error for invalid input', () => {
    // Test error conditions
    expect(() => functionToTest(null)).toThrow()
  })
})

2. Mocking

  • Mock external dependencies, not the system under test
  • Use vi.mock() for module-level mocking
  • Use vi.fn() for function-level mocking
  • Reset mocks between tests with vi.resetAllMocks()
  • Prefer explicit, locally defined mocks over implicit or automatic ("magic") mocking
Example:
import { describe, it, expect, vi, beforeEach } from 'vitest'
import { UserService } from './user-service'
import { Database } from './database'

// Mock the database module
vi.mock('./database', () => ({
  Database: {
    query: vi.fn(),
  },
}))

describe('UserService', () => {
  beforeEach(() => {
    vi.resetAllMocks()
  })

  it('should get user by id', async () => {
    // Arrange
    const mockUser = { id: '123', name: 'Test User' }
    vi.mocked(Database.query).mockResolvedValue([mockUser])
    const userService = new UserService()

    // Act
    const user = await userService.getUserById('123')

    // Assert
    expect(Database.query).toHaveBeenCalledWith(
      'SELECT * FROM users WHERE id = ?',
      ['123'],
    )
    expect(user).toEqual(mockUser)
  })
})

3. Testing React Components

  • Use React Testing Library for component testing
  • Test behavior, not implementation details
  • Focus on user interactions and accessibility
  • Use screen queries that match how users find elements
Example:
import { describe, it, expect, vi } from 'vitest';
import { render, screen, fireEvent } from '@testing-library/react';
import '@testing-library/jest-dom/vitest';
import { Button } from './Button';

describe('Button', () => {
  it('should render with the correct text', () => {
    // Arrange & Act
    render(<Button>Click me</Button>);

    // Assert
    expect(screen.getByRole('button', { name: /click me/i })).toBeInTheDocument();
  });

  it('should call onClick when clicked', () => {
    // Arrange
    const handleClick = vi.fn();
    render(<Button onClick={handleClick}>Click me</Button>);

    // Act
    fireEvent.click(screen.getByRole('button'));

    // Assert
    expect(handleClick).toHaveBeenCalledTimes(1);
  });

  it('should be disabled when disabled prop is true', () => {
    // Arrange & Act
    render(<Button disabled>Click me</Button>);

    // Assert
    expect(screen.getByRole('button')).toBeDisabled();
  });
});

4. Testing Hooks

  • Use renderHook from @testing-library/react (it replaced the deprecated @testing-library/react-hooks package)
  • Test the behavior, not the implementation
  • Test edge cases and error conditions
Example:
import { describe, it, expect } from 'vitest'
import { renderHook, act } from '@testing-library/react'
import { useCounter } from './useCounter'

describe('useCounter', () => {
  it('should initialize with the default value', () => {
    // Arrange & Act
    const { result } = renderHook(() => useCounter())

    // Assert
    expect(result.current.count).toBe(0)
  })

  it('should initialize with the provided value', () => {
    // Arrange & Act
    const { result } = renderHook(() => useCounter(10))

    // Assert
    expect(result.current.count).toBe(10)
  })

  it('should increment the counter', () => {
    // Arrange
    const { result } = renderHook(() => useCounter())

    // Act
    act(() => {
      result.current.increment()
    })

    // Assert
    expect(result.current.count).toBe(1)
  })
})

Integration Testing Best Practices

1. Test Scope

  • Focus on testing interactions between components
  • Test API endpoints, database interactions, and service integrations
  • Use real dependencies when possible; mock external services at the network boundary, as sketched below
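One way to do this, assuming MSW is available in your toolchain, is to stub external services at the HTTP layer so everything inside the process runs for real (the billing API below is hypothetical):
import { beforeAll, afterEach, afterAll } from 'vitest'
import { setupServer } from 'msw/node'
import { http, HttpResponse } from 'msw'

// Intercept outbound requests to the (hypothetical) external billing API
const server = setupServer(
  http.get('https://billing.example.com/plans/:id', ({ params }) => {
    return HttpResponse.json({ id: params.id, name: 'Pro', price: 49 })
  }),
)

beforeAll(() => server.listen())
afterEach(() => server.resetHandlers())
afterAll(() => server.close())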

2. API Testing

  • Test all API endpoints with different inputs
  • Verify response status codes, headers, and body
  • Test error handling and edge cases
  • Use a test database for data persistence tests
Example:
import { describe, it, expect, beforeAll, afterAll } from 'vitest'
import { createServer } from '../server'
import { request } from 'undici'
import { setupTestDatabase } from '../lib/prisma-test'

describe('User API', () => {
  let server
  let baseUrl
  let cleanup

  beforeAll(async () => {
    const { cleanup: cleanupDb } = await setupTestDatabase()
    cleanup = cleanupDb

    server = createServer()
    await server.listen(0)
    const address = server.address()
    baseUrl = `http://localhost:${address.port}`
  })

  afterAll(async () => {
    await server.close()
    await cleanup()
  })

  it('should create a user', async () => {
    // Arrange
    const userData = {
      name: 'Test User',
      email: 'test@example.com',
    }

    // Act
    const response = await request(`${baseUrl}/api/users`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(userData),
    })

    // Assert
    expect(response.statusCode).toBe(201)

    const body = await response.body.json()
    expect(body).toHaveProperty('id')
    expect(body.name).toBe(userData.name)
    expect(body.email).toBe(userData.email)
  })
})

3. Database Testing

  • Use a separate test database or isolated schemas
  • Clean up data between tests
  • Test database migrations and schema changes
  • Verify data integrity and constraints
Example:
import { describe, it, expect, beforeAll, afterAll, beforeEach } from 'vitest'
import { setupTestDatabase } from '../lib/prisma-test'
import { UserRepository } from './user-repository'

describe('UserRepository', () => {
  let prisma
  let userRepository
  let cleanup

  beforeAll(async () => {
    const { prisma: prismaClient, cleanup: cleanupDb } =
      await setupTestDatabase()
    prisma = prismaClient
    cleanup = cleanupDb
    userRepository = new UserRepository(prisma)
  })

  afterAll(async () => {
    await cleanup()
  })

  beforeEach(async () => {
    // Clean up data between tests
    await prisma.user.deleteMany()
  })

  it('should create a user', async () => {
    // Arrange
    const userData = {
      name: 'Test User',
      email: 'test@example.com',
    }

    // Act
    const user = await userRepository.create(userData)

    // Assert
    expect(user).toHaveProperty('id')
    expect(user.name).toBe(userData.name)
    expect(user.email).toBe(userData.email)

    // Verify in database
    const dbUser = await prisma.user.findUnique({
      where: { id: user.id },
    })
    expect(dbUser).toMatchObject(userData)
  })
})

End-to-End Testing Best Practices

1. Test Critical Paths

  • Focus on testing critical user journeys
  • Test the most important business flows
  • Limit the number of E2E tests to reduce maintenance burden

2. Test Stability

  • Make tests resilient to UI changes
  • Use data attributes for test selectors
  • Add proper waiting mechanisms for asynchronous operations
  • Implement retry mechanisms for flaky tests (see the configuration sketch below)
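With Playwright (used for the E2E example below), retries and retry-time tracing can be configured centrally; the values here are illustrative:
// playwright.config.ts
import { defineConfig } from '@playwright/test'

export default defineConfig({
  // Retry failed tests to absorb transient flakiness (retrying only on CI is a common choice)
  retries: process.env.CI ? 2 : 0,
  use: {
    // Capture a trace when a test is retried, to help diagnose the flake
    trace: 'on-first-retry',
  },
})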

3. Test Environment

  • Use a dedicated test environment
  • Reset the environment between test runs
  • Use test data generators for consistent data
  • Implement proper cleanup procedures
Example:
import { test, expect } from '@playwright/test'

test.describe('Authentication', () => {
  test.beforeEach(async ({ page }) => {
    await page.goto('/')
  })

  test('should allow a user to sign in', async ({ page }) => {
    // Navigate to login page
    await page.click('text=Sign In')

    // Fill in credentials
    await page.fill('[data-testid="email-input"]', 'test@example.com')
    await page.fill('[data-testid="password-input"]', 'password123')

    // Submit the form
    await page.click('[data-testid="login-button"]')

    // Verify successful login
    await expect(page.locator('[data-testid="user-menu"]')).toBeVisible()
    await expect(page.locator('[data-testid="user-email"]')).toHaveText(
      'test@example.com',
    )
  })
})

Performance Testing Best Practices

1. Benchmark Critical Operations

  • Identify performance-critical operations
  • Establish baseline performance metrics
  • Set performance budgets for key operations
  • Regularly run performance tests to detect regressions

2. Load Testing

  • Test system behavior under expected and peak loads (a load-test sketch follows this list)
  • Measure response times, throughput, and error rates
  • Identify bottlenecks and optimize accordingly
  • Test scaling capabilities for distributed systems
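As a sketch, a k6 load test (assuming k6 is part of your toolchain; the endpoint, stages, and thresholds are all illustrative):
import http from 'k6/http'
import { check, sleep } from 'k6'

export const options = {
  // Ramp up to 50 virtual users, hold, then ramp down
  stages: [
    { duration: '1m', target: 50 },
    { duration: '3m', target: 50 },
    { duration: '1m', target: 0 },
  ],
  // Fail the run if latency or error-rate budgets are exceeded
  thresholds: {
    http_req_duration: ['p(95)<500'],
    http_req_failed: ['rate<0.01'],
  },
}

export default function () {
  const res = http.get('http://localhost:3000/api/users')
  check(res, { 'status is 200': (r) => r.status === 200 })
  sleep(1)
}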

3. Memory Usage

  • Monitor memory usage during tests
  • Check for memory leaks in long-running operations (see the sketch after the benchmark example)
  • Verify garbage collection behavior
  • Test with limited memory resources
Example:
import { bench, describe } from 'vitest'
import { generateReport } from './report-generator'

describe('Report Generator Performance', () => {
  bench(
    'should generate a small report quickly',
    () => {
      return generateReport({ size: 'small', items: 100 })
    },
    { iterations: 100, time: 1000 },
  )

  bench(
    'should generate a medium report efficiently',
    () => {
      return generateReport({ size: 'medium', items: 1000 })
    },
    { iterations: 50, time: 2000 },
  )

  bench(
    'should handle large reports',
    () => {
      return generateReport({ size: 'large', items: 10000 })
    },
    { iterations: 10, time: 5000 },
  )
})
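For the memory checks above, a rough sketch using process.memoryUsage() (run Node with --expose-gc so gc() is available; processItems and the 10 MB bound are hypothetical):
import { describe, it, expect } from 'vitest'

// Hypothetical long-running operation under test
function processItems(count: number): number[] {
  return Array.from({ length: count }, (_, i) => i * 2)
}

describe('memory behavior', () => {
  it('should not grow the heap unboundedly across repeated runs', () => {
    // gc() is only defined when Node is started with --expose-gc
    globalThis.gc?.()
    const before = process.memoryUsage().heapUsed

    for (let i = 0; i < 100; i++) {
      processItems(10_000)
    }

    globalThis.gc?.()
    const after = process.memoryUsage().heapUsed

    // Allow some noise, but fail on runaway growth (threshold is illustrative)
    expect(after - before).toBeLessThan(10 * 1024 * 1024)
  })
})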

Security Testing Best Practices

1. Input Validation

  • Test all input validation mechanisms
  • Try common attack vectors (SQL injection, XSS, CSRF)
  • Verify proper sanitization of user inputs
  • Test with malformed and malicious inputs

2. Authentication and Authorization

  • Test authentication mechanisms thoroughly
  • Verify proper authorization checks
  • Test role-based access control, as sketched below
  • Check for common security vulnerabilities
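A sketch of a role-based access check at the API level (the admin route and the getTokenFor helper are hypothetical; the base URL assumes the test server from the integration example above):
import { describe, it, expect } from 'vitest'
import { request } from 'undici'
// Hypothetical helper that signs in as a given role and returns a bearer token
import { getTokenFor } from '../test/auth-helpers'

describe('Role-based access control', () => {
  it('should return 403 for a non-admin calling an admin endpoint', async () => {
    const token = await getTokenFor('member')

    const response = await request('http://localhost:3000/api/admin/users', {
      headers: { authorization: `Bearer ${token}` },
    })

    expect(response.statusCode).toBe(403)
  })
})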

3. Data Protection

  • Verify proper encryption of sensitive data
  • Test data masking and anonymization
  • Check for data leakage in responses (see the sketch after the example below)
  • Verify secure storage of credentials and tokens
Example:
import { describe, it, expect, vi } from 'vitest';
import { render, screen, fireEvent } from '@testing-library/react';
import '@testing-library/jest-dom/vitest';
import { LoginForm } from './LoginForm';

describe('LoginForm Security', () => {
  it('should not submit the form with invalid credentials', async () => {
    // Arrange
    const handleSubmit = vi.fn();
    render(<LoginForm onSubmit={handleSubmit} />);

    // Act - Try SQL injection
    await fireEvent.change(screen.getByLabelText(/email/i), {
      target: { value: "' OR 1=1 --" },
    });
    await fireEvent.change(screen.getByLabelText(/password/i), {
      target: { value: "password" },
    });
    await fireEvent.click(screen.getByRole('button', { name: /sign in/i }));

    // Assert
    expect(handleSubmit).not.toHaveBeenCalled();
    expect(screen.getByText(/invalid email format/i)).toBeInTheDocument();
  });

  it('should sanitize user inputs', async () => {
    // Arrange
    const handleSubmit = vi.fn();
    render(<LoginForm onSubmit={handleSubmit} />);

    // Act - Try XSS attack
    await fireEvent.change(screen.getByLabelText(/email/i), {
      target: { value: "test@example.com<script>alert('XSS')</script>" },
    });
    await fireEvent.change(screen.getByLabelText(/password/i), {
      target: { value: "password123" },
    });
    await fireEvent.click(screen.getByRole('button', { name: /sign in/i }));

    // Assert
    expect(handleSubmit).not.toHaveBeenCalled();
    expect(screen.getByText(/invalid email format/i)).toBeInTheDocument();
  });
});
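For data protection, a sketch that asserts sensitive fields never appear in API responses (the route and field names are hypothetical; the base URL assumes the integration test server):
import { describe, it, expect } from 'vitest'
import { request } from 'undici'

describe('User API data protection', () => {
  it('should not expose password hashes or tokens in responses', async () => {
    const response = await request('http://localhost:3000/api/users/123')
    const body = await response.body.json()

    // Sensitive columns must be stripped before serialization
    expect(body).not.toHaveProperty('passwordHash')
    expect(body).not.toHaveProperty('refreshToken')
  })
})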

Accessibility Testing Best Practices

1. Automated Testing

  • Use axe-core for automated accessibility testing
  • Include accessibility checks in CI/CD pipeline
  • Fix all critical accessibility issues before deployment
Example:
import { describe, it, expect } from 'vitest';
import { render } from '@testing-library/react';
import { axe, toHaveNoViolations } from 'jest-axe';
import { Button } from './Button';

expect.extend(toHaveNoViolations);

describe('Button Accessibility', () => {
  it('should not have accessibility violations', async () => {
    // Arrange
    const { container } = render(<Button>Click me</Button>);

    // Act & Assert
    const results = await axe(container);
    expect(results).toHaveNoViolations();
  });

  it('should have proper ARIA attributes when disabled', async () => {
    // Arrange
    const { container } = render(<Button disabled>Click me</Button>);

    // Act & Assert
    const results = await axe(container);
    expect(results).toHaveNoViolations();
  });
});

2. Manual Testing

  • Test with screen readers (NVDA, VoiceOver, JAWS)
  • Verify keyboard navigation works correctly
  • Check color contrast and text readability
  • Test with different zoom levels and font sizes

Test Maintenance Best Practices

1. Continuous Improvement

  • Regularly review and refactor tests
  • Remove flaky or redundant tests
  • Update tests when requirements change
  • Improve test coverage for critical areas

2. Test Documentation

  • Document test setup and requirements
  • Explain complex test scenarios
  • Document known limitations or edge cases
  • Keep test documentation up-to-date

3. Test Monitoring

  • Monitor test execution times
  • Track test failures and flaky tests
  • Analyze test coverage trends
  • Use test metrics to guide improvements

Conclusion

Following these best practices will help ensure that the Gradiant platform has a robust, maintainable, and effective test suite. Remember that testing is an ongoing process, and continuous improvement is key to maintaining high-quality software. For more detailed information on specific testing patterns and setup, refer to the related testing documentation. If you have questions or need assistance with testing, please reach out to the development team or open an issue on GitHub.