feat(cold): add first_log_index to ReceiptContext #23
Closed
Adds the index of a receipt's first log among all logs in its block, enabling callers to compute per-log logIndex for RPC responses without refetching prior receipts. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
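The computation the description refers to can be sketched as follows. The struct and function names here are placeholders based on the PR text, not the crate's actual definitions: each log's block-level `logIndex` is just the receipt's `first_log_index` plus the log's position within the receipt, so no prior receipts need to be fetched.

```rust
// Sketch only: `ReceiptContext` here is a stand-in with just the new field;
// the real type carries additional receipt metadata.
struct ReceiptContext {
    first_log_index: u64,
}

// Stand-in for the consensus log type.
struct Log;

/// Block-level logIndex for each log in this receipt: first_log_index + i.
fn rpc_log_indices(ctx: &ReceiptContext, logs: &[Log]) -> Vec<u64> {
    (0..logs.len() as u64).map(|i| ctx.first_log_index + i).collect()
}

fn main() {
    // A receipt whose first log is the third log in the block (index 2).
    let ctx = ReceiptContext { first_log_index: 2 };
    let logs = [Log, Log, Log];
    println!("{:?}", rpc_log_indices(&ctx, &logs)); // [2, 3, 4]
}
```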
prestwich added a commit that referenced this pull request on Feb 12, 2026:

feat(cold): add eth_getLogs support and first_log_index to ReceiptContext

Add log filtering to cold storage following eth_getLogs semantics, and include block-level log indexing for RPC response construction.

- Add `LogFilter` type with block range, address, and topic filters
- Add `RichLog` type with full block/tx context and block_log_index
- Add `ColdStorage::get_logs` with implementations for in-memory, SQLite, and PostgreSQL backends
- Add `first_log_index` to `ReceiptContext` (cherry-picked from #23)
- Replace `idx_logs_address` with composite `idx_logs_address_block`
- Factor out `row_to_log_row` helper in SQL backend
- Wire through task channel plumbing (request, handle, runner)
- Comprehensive conformance tests covering all filter combinations

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
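The eth_getLogs matching semantics mentioned in the commit above can be illustrated with a simplified sketch. This is not the crate's `LogFilter` (which was later replaced with alloy's `Filter` type); it uses plain strings in place of addresses and topic hashes to show the rule: an empty address set matches any address, and each topic position is either a wildcard (empty set) or an any-of set that the log's topic at that position must belong to.

```rust
// Hypothetical simplified filter; field names are illustrative only.
struct LogFilter {
    addresses: Vec<String>,   // empty = match any address
    topics: Vec<Vec<String>>, // per-position sets; empty set = wildcard
}

struct Log {
    address: String,
    topics: Vec<String>,
}

impl LogFilter {
    fn matches_log(&self, log: &Log) -> bool {
        // Address check: if any addresses are given, the log's must be among them.
        if !self.addresses.is_empty() && !self.addresses.contains(&log.address) {
            return false;
        }
        // Topic check: every filter position must be satisfied by the
        // log's topic at that same index.
        self.topics.iter().enumerate().all(|(i, set)| {
            set.is_empty() || log.topics.get(i).map_or(false, |t| set.contains(t))
        })
    }
}

fn main() {
    let f = LogFilter {
        addresses: vec!["0xabc".into()],
        // Position 0 is a wildcard; position 1 must be t1 or t2.
        topics: vec![vec![], vec!["t1".into(), "t2".into()]],
    };
    let log = Log { address: "0xabc".into(), topics: vec!["t0".into(), "t2".into()] };
    println!("{}", f.matches_log(&log)); // true
}
```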
prestwich (Member, Author) commented:

superseded by #24
prestwich added a commit that referenced this pull request on Feb 12, 2026:

feat(cold): add eth_getLogs support and first_log_index to ReceiptContext (same commit message as the entry above)
prestwich added a commit that referenced this pull request on Feb 13, 2026:

* feat(cold): add first_log_index to ReceiptContext

  Adds the index of a receipt's first log among all logs in its block, enabling callers to compute per-log logIndex for RPC responses without refetching prior receipts.

* feat(cold): add eth_getLogs support and first_log_index to ReceiptContext

  Add log filtering to cold storage following eth_getLogs semantics, and include block-level log indexing for RPC response construction.

  - Add `LogFilter` type with block range, address, and topic filters
  - Add `RichLog` type with full block/tx context and block_log_index
  - Add `ColdStorage::get_logs` with implementations for in-memory, SQLite, and PostgreSQL backends
  - Add `first_log_index` to `ReceiptContext` (cherry-picked from #23)
  - Replace `idx_logs_address` with composite `idx_logs_address_block`
  - Factor out `row_to_log_row` helper in SQL backend
  - Wire through task channel plumbing (request, handle, runner)
  - Comprehensive conformance tests covering all filter combinations

* chore: bump workspace version to 0.3.0

* fix(hot): fix iter_k2 skipping first entry and infinite loop in mem backend

  iter_k2 had two bugs: (1) next_dual_above positioned the cursor at the first entry and returned it, but the iterator discarded that result and called next_k2(), which advanced past it, so the first entry was never yielded. Fix: capture the first entry in the iterator struct and yield it before advancing. (2) The in-memory backend's next_k2 used next_dual_above(current_k1, current_k2), which is "at or above", returning the same entry forever. Fix: use read_next() (strictly above) and verify k1 still matches. Adds iter_k2 regression tests to the conformance suite.

* feat(cold): add get_logs to MdbxColdBackend and extract LogFilter::matches_log

  Move log-matching logic from a private function in mem.rs to a public LogFilter::matches_log method for cross-backend reuse. Implement get_logs on MdbxColdBackend using per-index exact_dual lookups. Remove unused SQL helper methods left over from a prior refactor.

* fix(hot): fix iter/iter_from skipping first entry on all traversal traits

  The default iter() and iter_from() implementations on KvTraverse and DualKeyTraverse discarded the entry returned by the initial positioning call (first()/lower_bound()/next_dual_above()), causing the first entry to be skipped. The typed iter_from() on TableTraverse and DualTableTraverse had a second bug: they positioned the cursor, then called iter(), which reset it via first(). Fix by capturing the first entry as owned data in the iterator structs (RawKvIter, RawDualKeyIter) and yielding it before calling read_next(). Also override iter()/iter_from() on the MDBX cursor to use native libmdbx iterators, which handle first-entry capture natively.

* refactor(cold): use iter_k2 and functional combinators in get_logs

  Replace manual exact_dual index probing with iter_k2 in MdbxColdBackend::get_logs_inner, matching the convention used by all other dual-table iterations in this file. Replace inner for/if/push loops with filter/map/extend in both MDBX and in-memory backends.

* refactor(types): move LogFilter, RichLog, Confirmed to signet-storage-types

  These are pure data types with no cold-storage-specific dependencies. Moving them to the shared types crate makes them available to all storage backends without depending on signet-cold. Re-exported from signet_cold for backward compatibility.

* refactor(cold-mdbx): use traverse/traverse_dual method syntax and zip log iterators

  Replace verbose turbofish trait calls (DualTableTraverse::<T, _>::iter_k2, TableTraverse::<T, _>::exact) with method syntax via tx.traverse() and tx.traverse_dual(). Zip receipt and transaction iterators in get_logs_inner instead of doing per-receipt point lookups for transaction hashes.

* feat(cold): add IndexedReceipt and store SealedHeader in MDBX

  Precompute receipt metadata at write time to eliminate read-path recomputation. IndexedReceipt wraps Receipt with tx_hash and first_log_index, removing the need to join with transactions or iterate prior receipts during queries. MDBX now stores SealedHeader (hash alongside header bytes) to eliminate all hash_slow() calls on reads.

* refactor(cold): replace RichLog with alloy RpcLog and add gas_used to IndexedReceipt

  Replace custom RichLog/LogFilter types with alloy::rpc::types::Log (aliased as RpcLog) and alloy::rpc::types::Filter. This eliminates redundant types in favor of the standard Ethereum RPC log type. Add a gas_used field to IndexedReceipt, precomputed at append time from the cumulative gas sequence. This avoids needing prior receipt lookups at query time, following the same pattern as first_log_index and tx_hash.

* refactor: use method syntax instead of UFCS where unambiguous

  Replace unnecessary `Trait::method(receiver, ...)` calls with `receiver.method(...)` where no competing trait in scope defines the same method name.

* refactor(cold): unify receipt return type and return SealedHeader from queries

  Return SealedHeader from all header queries to preserve the cached block hash and eliminate redundant seal_slow() calls in backends. Replace the three overlapping receipt types (Confirmed<Receipt>, IndexedReceipt, ReceiptContext) with a single ColdReceipt type containing a consensus receipt with RPC-enriched logs and full block/transaction metadata.

* feat(cold): store and return recovered sender with transactions

  Accept `Recovered<TransactionSigned>` in `BlockData` so the sender address is preserved at append time. Transaction queries now return `Confirmed<RecoveredTx>`, and `ColdReceipt` includes a `from` field, eliminating the need for consumers to run ecrecover themselves.

  - Add `sender: Address` to `IndexedReceipt` and `ColdReceipt`
  - Add `ColdTxSenders` MDBX table for sender storage
  - Add SQL migration for `from_address` column
  - Update all backends (mem, MDBX, SQL) and conformance tests
  - Add `RecoveredTx` type alias and `Recovered` re-export

* refactor(cold-sql): collapse from_address into initial migration

  Move the `from_address` column from a separate migration 002 into the initial schema (001_initial.sql / 001_initial_pg.sql) and remove the idempotent ALTER TABLE logic from the backend constructor.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Summary

- Adds a `first_log_index: u64` field to `ReceiptContext`, representing the index of this receipt's first log among all logs in the block (the sum of log counts from all preceding receipts)
- Callers can compute each log's `logIndex` for RPC responses as `first_log_index + i` without refetching prior receipts
- Computed during the existing `get_receipts_in_block` call, deriving both `prior_cumulative_gas` and `first_log_index` in a single pass

Test plan

- Verifies `first_log_index` at each position: 0, 2, 5
- Verifies `prior_cumulative_gas` at each position: 0, 21000, 42000
- `first_log_index`

🤖 Generated with Claude Code
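The single-pass computation the summary describes can be sketched as follows. The `Receipt` fields below are assumptions for illustration: each receipt's `prior_cumulative_gas` is the previous receipt's cumulative gas, and its `first_log_index` is the running total of log counts seen so far, so one forward scan yields both.

```rust
// Illustrative receipt shape; the real type carries more fields.
struct Receipt {
    cumulative_gas_used: u64,
    log_count: u64,
}

/// For each receipt, returns (prior_cumulative_gas, first_log_index),
/// accumulated in a single pass over the block's receipts.
fn index_receipts(receipts: &[Receipt]) -> Vec<(u64, u64)> {
    let mut prior_gas = 0;
    let mut first_log = 0;
    receipts
        .iter()
        .map(|r| {
            let out = (prior_gas, first_log);
            // Carry forward for the next receipt.
            prior_gas = r.cumulative_gas_used;
            first_log += r.log_count;
            out
        })
        .collect()
}

fn main() {
    // Three receipts, 21_000 gas each, with 2, 3, and 1 logs:
    // matches the values in the test plan above.
    let receipts = [
        Receipt { cumulative_gas_used: 21_000, log_count: 2 },
        Receipt { cumulative_gas_used: 42_000, log_count: 3 },
        Receipt { cumulative_gas_used: 63_000, log_count: 1 },
    ];
    for (gas, log) in index_receipts(&receipts) {
        println!("{} {}", gas, log); // 0 0 / 21000 2 / 42000 5
    }
}
```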