Contributing
Thank you for your interest in contributing to MCP Registry Client! This guide will help you get started.
Development Setup
Prerequisites
- Python 3.12+
- uv
- Git
Setting up the Development Environment
- Fork and clone the repository
- Create a virtual environment
```shell
# suggested
uv venv --seed --relocatable --link-mode copy --python-preference only-managed --no-cache --python 3.12 --prompt mcp-registry-client .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
```
- Install development dependencies

```shell
# Regular dev only
uv pip install -e ".[dev]"

# Docs dev only
uv pip install -e ".[docs]"

# Everything
uv pip install -e ".[dev,docs]" -r requirements.txt -r requirements-dev.txt -r requirements-docs.txt
```
- Hack
Development Workflow
We use Nox for standardized development tasks.
Code Quality
Run all quality checks, or run individual checks:

```shell
# Linting and formatting
nox -s lint

# Type checking
nox -s type_check

# Security analysis
nox -s security

# Format code
nox -s format_source
```
Testing
Run tests with `pytest tests/`; add coverage reporting with `pytest tests/ --cov`.
Documentation
- Build documentation
- Serve documentation locally
Code Style
We use the following tools to maintain code quality:
- Ruff: For linting, formatting, and import sorting
- mypy: For type checking
- bandit: For security analysis
- pytest: For testing
Code Formatting
Code is automatically formatted with Ruff. Run the formatter with `nox -s format_source`.
Type Hints
All code must include comprehensive type hints. We use:
- Python 3.12+ type syntax
- Pydantic for data validation
- Strict mypy configuration
Docstrings
Use Google-style docstrings:

```python
def example_function(param: str, optional: int = 0) -> str:
    """Brief description of the function.

    Longer description if needed. Explain the purpose, behavior,
    and any important details.

    Args:
        param: Description of the parameter.
        optional: Description of optional parameter.

    Returns:
        Description of the return value.

    Raises:
        ValueError: When and why this exception is raised.
    """
    return f"Result: {param} + {optional}"
```
Testing Guidelines
Test Structure
Tests are organized in the tests/ directory with three main categories:

```text
tests/
├── test_*.py        # Unit tests
├── integration/     # Integration tests with real APIs
│   ├── test_real_api.py
│   └── test_cli_integration.py
└── performance/     # Performance benchmarks
    ├── test_cache_performance.py
    ├── test_retry_performance.py
    └── test_client_performance.py
```
Testing Strategy
This project employs comprehensive testing patterns, including:
- Edge Case Testing: Boundary conditions, error states, and concurrency scenarios
- Integration Testing: Real API interactions with rate limiting
- Performance Testing: Benchmarks for critical performance paths
For detailed testing patterns and guidelines, see Testing Patterns.
Running Different Test Categories
```shell
# Unit tests only (default for development)
pytest tests/ -m "not integration and not benchmark"

# Integration tests (requires network access)
pytest tests/ -m "integration"

# Performance benchmarks
pytest tests/ -m "benchmark"

# All tests
pytest tests/
```
Writing Tests
- Use descriptive test names that explain what is being tested
- Include comprehensive docstrings for complex test scenarios
- Test both success and error cases, especially edge conditions
- Use appropriate fixtures and mocking strategies
- Follow the patterns documented in Testing Patterns
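The fixture-and-mocking bullet can be sketched with the standard library's `unittest.mock`; the `fetch_server_name` helper and `get_server` method below are hypothetical stand-ins, not actual project APIs:

```python
import asyncio
from unittest.mock import AsyncMock


async def fetch_server_name(client) -> str:
    """Hypothetical helper that extracts a name from a client response."""
    data = await client.get_server("example-server")
    return data["name"]


def test_fetch_server_name_uses_client() -> None:
    # AsyncMock records awaits and returns the configured value,
    # so no network access happens in a unit test
    client = AsyncMock()
    client.get_server.return_value = {"name": "example-server"}

    assert asyncio.run(fetch_server_name(client)) == "example-server"
    client.get_server.assert_awaited_once_with("example-server")
```

Mocking at the client boundary keeps unit tests fast and deterministic, while the `integration` marker covers the same path against the real API.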
Example edge case test:

```python
@pytest.mark.asyncio
async def test_concurrent_cache_access(self) -> None:
    """Test concurrent access to the same cache key."""
    cache = ResponseCache(config)

    async def set_value(key: str, value: str) -> None:
        await cache.set(key, value)

    # Run multiple sets concurrently to test race conditions
    await asyncio.gather(
        set_value('test-key', 'value1'),
        set_value('test-key', 'value2'),
        set_value('test-key', 'value3'),
    )

    assert len(cache._cache) == 1
    result = await cache.get('test-key')
    assert result in ['value1', 'value2', 'value3']
```
Test Coverage Requirements
- Overall Coverage: >90%
- Core Modules: 100% coverage for `client.py`, `cache.py`, `models.py`
- CLI Module: >95% coverage

Check coverage with `pytest tests/ --cov`.
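If the thresholds above are enforced through coverage.py configuration, a sketch might look like the following (the section location and values are assumptions; check the project's actual `pyproject.toml`):

```toml
[tool.coverage.report]
# Fail the coverage check below the overall 90% bar
fail_under = 90
show_missing = true
```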
Performance Testing
Performance tests use pytest-benchmark and are marked with `@pytest.mark.benchmark`. These tests:
- Validate response times for critical operations
- Test behavior under concurrent load
- Monitor memory usage patterns
- Ensure retry mechanisms perform efficiently
Run performance tests separately with `pytest tests/ -m "benchmark"`.
Submitting Changes
Pull Request Process
- Create a feature branch
- Make your changes and commit
- Run quality checks
- Push and create a pull request
Commit Messages
Use the Conventional Commits format:
We use the following set of "types". If you have one which isn't listed, feel free to use it. If we hate it, we'll let you know. OTOH, we might even adopt it :D.
- agents: agent-related changes (AGENTS.md, "agent-notes" dir)
- build: build system changes
- chore: tech-debt
- ci: CI/CD changes
- docs: documentation changes
- feat: new features
- fix: bug fixes
- perf: performance-related changes
- refactor: refactoring changes - note that this is separate from chore
- repo: .gitignore; git options; folder/file reorg; etc.
- revert: reverting previous commits
- test: test changes (net-new, fixes, refactor, ...)
TL;DR examples:

```text
feat(server): add support for server filtering by category
fix(api): handle API timeout errors gracefully
docs(api): add API usage examples
```
Pull Request Requirements
- All tests must pass
- Code coverage should not decrease
- All quality checks must pass
- Include tests for new functionality
- Update documentation if needed
- Include a clear description of changes
Release Process
Releases are handled by maintainers:

- Update version in `pyproject.toml`
- Update `CHANGELOG.md`
- Create a release tag
Getting Help
- Open an issue for bugs or feature requests
- Start a discussion for questions or ideas
- Check existing issues before creating new ones
Code of Conduct
This project follows the Contributor Covenant. Be respectful and inclusive in all interactions.