feat(benchmarks): add comprehensive performance benchmark suite (#30)
Merged
- Add test data generators for various event sizes and types
- Implement 5 Locust scenarios:
  * BaselineUser: Maximum throughput test
  * PayloadSizeUser: Payload size impact
  * RealisticUser: Real CDP traffic patterns
  * BurstTrafficUser: Spike handling
  * ErrorRateUser: Error handling overhead
- Add metrics collection utilities
- Add benchmark runner script and documentation
- Support headless and interactive modes
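The generators themselves are not shown in this thread; as a minimal sketch, a synthetic event builder for varying payload sizes might look like the following (function and field names are assumptions for illustration, not the actual `benchmarks/utils/generators.py` API):

```python
import json
import random
import string


def make_event(payload_bytes: int = 256, event_type: str = "page_view") -> dict:
    """Build a synthetic CDP-style event with an approximately sized payload."""
    filler = "".join(random.choices(string.ascii_letters, k=payload_bytes))
    return {
        "type": event_type,
        "event_id": f"evt-{random.randrange(10**9)}",  # hypothetical ID scheme
        "properties": {"filler": filler},
    }


def event_size(event: dict) -> int:
    """Serialized size in bytes, a rough proxy for on-the-wire payload size."""
    return len(json.dumps(event).encode("utf-8"))
```

Varying `payload_bytes` per scenario is one way a PayloadSizeUser-style test could sweep payload sizes.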
- Change from /api/v1/{type} to /collect/{stream} format
- Use appropriate stream names for each scenario
- Verified with quick test: 795 req/s, 100% success rate, p99 < 3ms
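The endpoint change above can be illustrated with a tiny URL helper (the function name is hypothetical, shown only to clarify the `/collect/{stream}` format that replaced `/api/v1/{type}`):

```python
def collect_url(base: str, stream: str) -> str:
    """Build an ingestion URL in the new /collect/{stream} format."""
    return f"{base.rstrip('/')}/collect/{stream}"
```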
- Update run_benchmarks.sh to accept queue_mode argument (async|pubsub)
- Results now organized by queue mode: results/{queue_mode}/
- Add Pub/Sub + GCS emulator setup instructions
- Document AsyncQueue vs PubSub trade-offs and characteristics
- Enable comparative testing: single-server vs distributed architectures
- Include emulator commands for local PubSub benchmarking
- Remove external notes references from README
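A sketch of how the queue-mode argument and per-mode results layout described above could be handled (the names and validation rules are assumptions, not the actual `run_benchmarks.sh` contract):

```python
from pathlib import Path

VALID_QUEUE_MODES = {"async", "pubsub"}  # assumed set, per the async|pubsub argument


def results_dir(queue_mode: str, root: str = "results") -> Path:
    """Return results/{queue_mode}/, rejecting unknown modes early."""
    if queue_mode not in VALID_QUEUE_MODES:
        raise ValueError(f"queue_mode must be one of {sorted(VALID_QUEUE_MODES)}")
    return Path(root) / queue_mode
```

Organizing output by mode keeps AsyncQueue and Pub/Sub runs side by side for the comparative testing mentioned above.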
Force-pushed from 50a7a76 to 155e538
- Update Task 15 status with PR #30 reference
- Document actual implementation: Locust-based benchmark suite
- Add completion metrics: 795 req/s → 10k+ projected, p99: 3ms
- List all deliverables: 5 scenarios, automated runner, comprehensive docs
- Move completed specs to specs/archive/
  * core-pipeline (v0.1.0 - initial implementation)
  * gcs-bigquery-storage (v0.1.0 - storage backend)
- Create specs/active/ for in-progress features
- Add READMEs explaining:
  * Workflow for new features
  * Archive contents and outcomes
  * Design decisions and learnings

This makes it clear what's done vs. what's being designed, and preserves design history for future reference.
prosdev added a commit that referenced this pull request on Jan 16, 2026:
- Update Task 15 status with PR #30 reference - Document actual implementation: Locust-based benchmark suite - Add completion metrics: 795 req/s → 10k+ projected, p99: 3ms - List all deliverables: 5 scenarios, automated runner, comprehensive docs
Summary
Adds a complete performance benchmarking infrastructure using Locust to validate EventKit's throughput and latency characteristics.
Closes #10
What's Included
📊 Benchmark Suite
run_benchmarks.sh with configurable parameters
🔧 Infrastructure
benchmarks/README.md
✅ Initial Results (15s validation run)
Acceptance Criteria Met
From Issue #10:
Test Scenarios
Usage
Files Changed
- benchmarks/: New directory with all benchmark code
- benchmarks/README.md: Comprehensive usage guide
- benchmarks/locustfile.py: 5 Locust test scenarios
- benchmarks/run_benchmarks.sh: Automated test runner
- benchmarks/utils/generators.py: Synthetic event generators
- benchmarks/utils/metrics.py: Metrics helpers

Next Steps
This infrastructure enables future optimization work:
Checklist
- Tests pass (pytest green)
- Type checking passes (mypy)
- Linting passes (ruff)

Related