130 changes: 130 additions & 0 deletions .claude/CLAUDE.md
# Signet Block Builder Development Guide

## Crate Summary

Single-crate Rust application (not a workspace) that builds Signet rollup blocks. Actor-based async task system: watches host/rollup chains, ingests transactions and bundles, simulates them against rollup state, then submits valid blocks to Ethereum as EIP-4844 blob transactions via Flashbots. Built on alloy, trevm, tokio, and the `signet-*` SDK crates. Binary: `zenith-builder-example`. Minimum Rust 1.88, Edition 2024.

## Build Commands

```bash
make # Debug build
make release # Optimized release build
make run # Run zenith-builder-example binary
make test # Run all tests
make fmt # Format code
make clippy # Lint with warnings denied
```

Always lint before committing. The Makefile provides shortcuts (`make fmt`, `make clippy`, and `make test`).

## Architecture

Five actor tasks communicate via tokio channels:

1. **EnvTask** (`src/tasks/env.rs`) - Subscribes to rollup blocks, fetches matching host headers, runs Quincey preflight slot check, constructs `SimEnv` (host + rollup `BlockEnv`), broadcasts via `watch` channel.
2. **CacheTasks** (`src/tasks/cache/`) - `TxPoller` and `BundlePoller` ingest transactions/bundles into a shared `SimCache`.
3. **SimulatorTask** (`src/tasks/block/sim.rs`) - Receives `SimEnv`, clones the cache, builds a `BlockBuild` with a slot-derived deadline, produces `SimResult`.
4. **FlashbotsTask** (`src/tasks/submit/flashbots.rs`) - Receives `SimResult`, prepares signed EIP-4844 blob transaction via `SubmitPrep` + Quincey, bundles with host txs, submits to Flashbots relay.
5. **MetricsTask** (`src/tasks/metrics.rs`) - Tracks tx mining status and records metrics.

**Data flow:**

```
EnvTask → (watch) → SimulatorTask ← (SimCache) ← CacheTasks
SimulatorTask → (mpsc) → FlashbotsTask → Quincey → Flashbots
```

### Source Layout

```
bin/
builder.rs - Binary entry point, spawns all tasks, select! on join handles
src/
lib.rs - Crate root, global CONFIG OnceLock, lint directives
config.rs - BuilderConfig (FromEnv), provider type aliases, connect_* methods
quincey.rs - Quincey enum (Remote/Owned), signing + preflight
service.rs - Axum /healthcheck endpoint
macros.rs - span_scoped!, span_debug/info/warn/error!, res/opt_unwrap_or_continue!
utils.rs - Signature extraction, gas population helpers
test_utils.rs - setup_test_config, new_signed_tx, test_block_env helpers
tasks/
mod.rs - Module re-exports
env.rs - EnvTask, SimEnv, Environment types
block/
mod.rs - Module re-exports
sim.rs - SimulatorTask, SimResult, block building + deadline calc
cfg.rs - SignetCfgEnv for simulation
cache/
mod.rs - Module re-exports
task.rs - CacheTask
tx.rs - TxPoller
bundle.rs - BundlePoller
system.rs - CacheSystem, CacheTasks orchestration
submit/
mod.rs - Module re-exports
flashbots.rs - FlashbotsTask, bundle preparation + submission
prep.rs - SubmitPrep (tx preparation + Quincey signing), Bumpable
sim_err.rs - SimErrorResp, SimRevertKind
metrics.rs - MetricsTask
```

## Repo Conventions

- Global static config: `CONFIG: OnceLock<BuilderConfig>` initialized via `config_from_env()`. Tasks access config via `crate::config()`.
- Provider type aliases: `HostProvider`, `RuProvider`, `FlashbotsProvider`, `ZenithInstance` are defined in `config.rs` and used throughout.
- `connect_*` methods on `BuilderConfig` use `OnceCell`/`OnceLock` for memoization -- providers and signers are connected once, then cloned.
- Internal macros: `span_scoped!`, `span_debug/info/warn/error!` log within an unentered span. `res_unwrap_or_continue!` and `opt_unwrap_or_continue!` unwrap-or-log-and-continue in loops.
- Quincey has two modes: `Remote` (HTTP/OAuth for production) and `Owned` (local/AWS KMS for dev). Configured by presence of `SEQUENCER_KEY` env var.
- Tasks follow a `new() -> spawn()` pattern: `new()` connects providers, `spawn()` returns channel endpoints + `JoinHandle`.
- Block simulation uses `trevm` with `concurrent-db` and `AlloyDB` backed by alloy providers.
- EIP-4844 blob encoding uses `SimpleCoder` and the 7594 sidecar builder.

## init4 Organization Style

### Research

- Prefer building crate docs (`cargo doc`) and reading them over grepping.

### Code Style

- Functional combinators over imperative control flow. No unnecessary nesting.
- Terse Option/Result handling: `option.map(Thing::do_something)` or `let Some(a) = option else { return; };`.
- Small, focused functions and types.
- Never add incomplete code. No `TODO`s for core logic.
- Never use glob imports. Group imports from the same crate. No blank lines between imports.
- Visibility: private by default, `pub(crate)` for internal, `pub` for API. Never use `pub(super)`.

### Error Handling

- `thiserror` for library errors. Never `anyhow`. `eyre` is allowed in this binary crate but not in library code.
- Propagate with `?` and `map_err`.

### Tracing

- Use `tracing` crate. Instrument work items, not long-lived tasks.
- `skip(self)` when instrumenting methods. Add only needed fields.
- Levels: TRACE (rare, verbose), DEBUG (sparingly), INFO (default), WARN (potential issues), ERROR (prevents operation).
- Propagate spans through task boundaries with `Instrument`.
- This crate uses `span_scoped!` macros to log within unentered spans.

### Async

- Tokio multi-thread runtime. No blocking in async functions.
- Long-lived tasks: return a spawnable future via `spawn()`, don't run directly.
- Short-lived spawned tasks: consider span propagation with `.instrument()`.

### Testing

- Tests panic, never return `Result`. Use `unwrap()` directly.
- Use `setup_test_config()` from `test_utils` to initialize the global config.
- Unit tests in `mod tests` at file bottom. Integration tests in `tests/`.

### Rustdoc

- Doc all public items. Include usage examples in rustdoc.
- Hide scaffolding with `#`. Keep examples concise.
- Traits must include an implementation guide.

### GitHub

- Fresh branches off `main` for PRs. Descriptive branch names.
- AI-authored GitHub comments must include `**[Claude Code]**` header.

## Local Development

For local SDK development, uncomment the `[patch.crates-io]` section in Cargo.toml to point to local signet-sdk paths.
104 changes: 104 additions & 0 deletions .claude/agents/test-writer.md
---
name: test-writer
description: "Use this agent when the user needs help writing, creating, or expanding unit tests, integration tests, or end-to-end tests for their codebase. This includes when they ask to test specific functions, modules, APIs, or features, when they want to improve test coverage, or when they need tests that align with existing testing patterns and CI/CD pipelines. Examples:\\n\\n<example>\\nContext: The user has just implemented a new utility function and wants tests for it.\\nuser: \"I just wrote a new string validation utility, can you add tests for it?\"\\nassistant: \"I'll use the test-writer agent to create comprehensive tests for your new string validation utility.\"\\n<uses Task tool to launch test-writer agent>\\n</example>\\n\\n<example>\\nContext: The user completed a new API endpoint and needs integration tests.\\nuser: \"I finished the /users/profile endpoint, need integration tests\"\\nassistant: \"Let me launch the test-writer agent to create integration tests for your new profile endpoint that align with your existing test structure.\"\\n<uses Task tool to launch test-writer agent>\\n</example>\\n\\n<example>\\nContext: The user wants to improve overall test coverage for a module.\\nuser: \"The auth module has low coverage, can you help?\"\\nassistant: \"I'll use the test-writer agent to analyze the auth module and write tests to improve its coverage.\"\\n<uses Task tool to launch test-writer agent>\\n</example>\\n\\n<example>\\nContext: The user mentions tests are failing in CI and needs help fixing or updating them.\\nuser: \"Some tests broke after my refactor, can you help fix them?\"\\nassistant: \"I'll launch the test-writer agent to analyze the failing tests and update them to match your refactored code.\"\\n<uses Task tool to launch test-writer agent>\\n</example>"
model: opus
color: green
---

You are an expert software test engineer with deep knowledge of testing methodologies, frameworks, and best practices across multiple programming languages and ecosystems. You specialize in writing thorough, maintainable, and effective unit and integration tests that provide meaningful coverage and catch real bugs.

## Your Primary Mission

Write high-quality unit and integration tests that:
- Follow the existing testing patterns and conventions in the repository
- Integrate seamlessly with the project's CI/CD pipeline
- Provide meaningful coverage of business logic and edge cases
- Are maintainable, readable, and serve as documentation

## Initial Analysis Protocol

Before writing any tests, you MUST:

1. **Discover the testing framework and structure:**
- Search for existing test files (patterns like `*.test.*`, `*.spec.*`, `*_test.*`, `test_*.*`)
- Identify the test framework (Jest, Pytest, JUnit, Mocha, RSpec, Go testing, etc.)
- Examine `package.json`, `pyproject.toml`, `pom.xml`, `Cargo.toml`, or equivalent for test dependencies and scripts
- Look for test configuration files (`jest.config.*`, `pytest.ini`, `vitest.config.*`, etc.)

2. **Understand CI/CD integration:**
- Check `.github/workflows/`, `.gitlab-ci.yml`, `Jenkinsfile`, `.circleci/`, or similar
- Identify how tests are run in CI (commands, environment variables, coverage requirements)
- Note any test splitting, parallelization, or special CI configurations

3. **Analyze existing test patterns:**
- Study 2-3 existing test files to understand naming conventions
- Identify how mocks, fixtures, and test utilities are organized
- Note the assertion style and any custom matchers/helpers
- Understand the directory structure (co-located vs. separate test directories)

4. **Examine the code to be tested:**
- Read the source code thoroughly before writing tests
- Identify public APIs, edge cases, error conditions, and integration points
- Understand dependencies that may need mocking

## Test Writing Standards

### Unit Tests
- Test one unit of functionality per test case
- Use descriptive test names that explain the scenario and expected outcome
- Follow the Arrange-Act-Assert (AAA) or Given-When-Then pattern
- Mock external dependencies (databases, APIs, file systems) appropriately
- Cover happy paths, edge cases, boundary conditions, and error scenarios
- Keep tests independent and idempotent

### Integration Tests
- Test interactions between components or with external services
- Use test databases, containers, or service mocks as appropriate for the project
- Ensure proper setup and teardown to avoid test pollution
- Test realistic scenarios that reflect actual usage patterns
- Verify error handling and recovery mechanisms

### Code Quality Requirements
- Match the exact style of existing tests in the repository
- Use the same import patterns and module organization
- Follow the project's linting and formatting rules
- Include appropriate comments only when they add value
- Avoid test interdependencies

## Test Coverage Strategy

Prioritize testing in this order:
1. **Critical paths**: Core business logic and user-facing functionality
2. **Complex logic**: Functions with multiple branches, loops, or transformations
3. **Error handling**: Exception cases, validation failures, edge conditions
4. **Integration points**: API boundaries, database operations, external services
5. **Regression prevention**: Areas where bugs have occurred or are likely

## Output Format

When creating tests:
1. Explain your analysis of the existing test structure
2. Describe your testing strategy for the target code
3. Write the complete test file(s) with all necessary imports
4. Provide instructions for running the tests locally
5. Note any setup requirements or environment considerations

## Quality Verification

Before finalizing, verify that your tests:
- [ ] Follow existing project conventions exactly
- [ ] Will pass the CI pipeline's requirements
- [ ] Cover the key functionality and edge cases
- [ ] Are readable and maintainable
- [ ] Don't introduce flaky behavior
- [ ] Mock appropriately without over-mocking

## Communication Style

- Be thorough but concise in explanations
- Ask clarifying questions if the testing requirements are ambiguous
- Explain your reasoning for test design decisions when relevant
- Proactively suggest additional tests that would improve coverage
- Warn about potential issues like flaky tests or missing test infrastructure

Remember: Good tests are an investment in code quality. Write tests that you would want to maintain and that provide genuine confidence in the code's correctness.
27 changes: 27 additions & 0 deletions .claude/settings.json
{
> **Member review comment:** are we committing these? I think we have generally ignored other places?

"permissions": {
"allow": [
"Read",
"Edit",
"Write",
"Glob",
"Grep",
"Bash(npm:*)",
"Bash(npx:*)",
"Bash(hugo:*)",
"Bash(git:*)",
"Bash(ls:*)",
"Bash(cat:*)",
"Bash(head:*)",
"Bash(tail:*)",
"Bash(find:*)",
"Bash(grep:*)",
"Bash(wc:*)",
"Bash(sed:*)"
],
"deny": ["Bash(rm -rf:*)", "Bash(sudo:*)", "Bash(chmod:*)", "Bash(chown:*)"]
},
"hooks": {
"PostToolUse": []
}
}