Since commit 0e26727 ("btrfs: switch all message helpers to be RCU safe")
the RCU protection is applied to all printk helpers, explicitly in the
wrapping macros. This inlines the code around each message call but this
is in no way a hot path, so the RCU protection can be sunk further to
_btrfs_printk().

This change saves about 10K of btrfs.ko size on x86_64 release config:

   text    data     bss     dec     hex filename
1722927  148328   15560 1886815  1cca5f pre/btrfs.ko
1712221  148760   15560 1876541  1ca23d post/btrfs.ko

DELTA: -10706

Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
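A standalone model of the change (illustrative only, not the kernel source;
all model_* names are stand-ins): before, each message macro expanded an RCU
lock/unlock pair at every call site; after, the locking lives once inside the
printing function, which is why every call site, and thus the module, shrinks.

    #include <stdarg.h>
    #include <stdio.h>

    /* Stand-ins for rcu_read_lock()/rcu_read_unlock(). */
    static void model_rcu_read_lock(void)   { }
    static void model_rcu_read_unlock(void) { }

    /* After: the RCU protection is sunk into the non-inlined helper. */
    static void model_btrfs_printk(const char *fmt, ...)
    {
        va_list args;

        model_rcu_read_lock();
        va_start(args, fmt);
        vprintf(fmt, args);
        va_end(args);
        model_rcu_read_unlock();
    }

    /* Before: the macro inlined the lock/unlock pair at every message call. */
    #define model_btrfs_info_old(fmt, ...) do {        \
        model_rcu_read_lock();                         \
        printf(fmt, ##__VA_ARGS__);                    \
        model_rcu_read_unlock();                       \
    } while (0)

    int main(void)
    {
        model_btrfs_info_old("old: %d\n", 1);  /* lock/unlock inlined here */
        model_btrfs_printk("new: %d\n", 2);    /* lock/unlock inside helper */
        return 0;
    }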
On zoned filesystems metadata space accounting can become overly optimistic
due to delayed refs reservations growing without a hard upper bound.
The delayed_refs_rsv block reservation is allowed to speculatively grow and
is only backed by actual metadata space when refilled. On zoned devices this
can result in delayed_refs_rsv reserving a large portion of metadata space
that is already effectively unusable due to zone write pointer constraints.
As a result, space_info->may_use can grow far beyond the usable metadata
capacity, causing the allocator to believe space is available when it is not.
This leads to premature ENOSPC failures and "cannot satisfy tickets" reports
even though commits would be able to make progress by flushing delayed refs.
Analysis of "-o enospc_debug" dumps using a Python debug script
confirmed that delayed_refs_rsv was responsible for the majority of
metadata overcommit on zoned devices. By correlating space_info counters
(total, used, may_use, zone_unusable) across transactions, the analysis
showed that may_use continued to grow even after usable metadata space
was exhausted, with delayed refs refills accounting for the excess
reservations.
Here's the output of the analysis:
======================================================================
Space Type: METADATA
======================================================================
Raw Values:
Total: 256.00 MB (268435456 bytes)
Used: 128.00 KB (131072 bytes)
Pinned: 16.00 KB (16384 bytes)
Reserved: 144.00 KB (147456 bytes)
May Use: 255.48 MB (267894784 bytes)
Zone Unusable: 192.00 KB (196608 bytes)
Calculated Metrics:
Actually Usable: 255.81 MB (total - zone_unusable)
Committed: 255.77 MB (used + pinned + reserved + may_use)
Consumed: 320.00 KB (used + zone_unusable)
Percentages:
Zone Unusable: 0.07% of total
May Use: 99.80% of total
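The "Calculated Metrics" follow directly from the raw byte counters above; a
minimal standalone check (illustrative only) reproduces them:

    #include <stdio.h>

    int main(void)
    {
        unsigned long long total = 268435456, used = 131072, pinned = 16384;
        unsigned long long reserved = 147456, may_use = 267894784;
        unsigned long long zone_unusable = 196608;
        double mb = 1024.0 * 1024.0;

        printf("Actually Usable: %.2f MB\n", (total - zone_unusable) / mb);
        printf("Committed:       %.2f MB\n",
               (used + pinned + reserved + may_use) / mb);
        printf("Consumed:        %.2f KB\n", (used + zone_unusable) / 1024.0);
        printf("May Use:         %.2f%% of total\n", 100.0 * may_use / total);
        return 0;
    }

This prints 255.81 MB, 255.77 MB, 320.00 KB and 99.80%, matching the dump.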
Fix this by adding a zoned-specific cap in btrfs_delayed_refs_rsv_refill():
Before reserving additional metadata bytes, limit the delayed refs
reservation based on the usable metadata space (total bytes minus
zone_unusable). If the reservation would exceed this cap, return -EAGAIN
to trigger the existing flush/commit logic instead of overcommitting
metadata space.
This preserves the existing reservation and flushing semantics while
preventing metadata overcommit on zoned devices. The change is limited to
metadata space and does not affect non-zoned filesystems.
This patch addresses premature metadata ENOSPC conditions on zoned devices
and ensures delayed refs are throttled before exhausting usable metadata.
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
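A standalone model of the cap (illustrative; the struct, field and function
names below are stand-ins, not the kernel's btrfs_space_info layout or the
real btrfs_delayed_refs_rsv_refill()): before reserving more bytes for
delayed refs, bound the reservation by total minus zone_unusable.

    #include <errno.h>
    #include <stdbool.h>

    struct model_space_info {
        unsigned long long total_bytes;
        unsigned long long bytes_may_use;
        unsigned long long bytes_zone_unusable;
    };

    static int model_delayed_refs_rsv_refill(struct model_space_info *si,
                                             unsigned long long num_bytes,
                                             bool zoned)
    {
        if (zoned) {
            unsigned long long usable =
                si->total_bytes - si->bytes_zone_unusable;

            /* Refusing the refill returns -EAGAIN so the caller falls
             * back to the existing flush/commit logic instead of
             * overcommitting metadata space. */
            if (si->bytes_may_use + num_bytes > usable)
                return -EAGAIN;
        }
        si->bytes_may_use += num_bytes; /* model of taking the reservation */
        return 0;
    }

    int main(void)
    {
        /* Counters from the dump above: a further refill must fail. */
        struct model_space_info si = {
            .total_bytes = 268435456ULL,
            .bytes_may_use = 267894784ULL,
            .bytes_zone_unusable = 196608ULL,
        };

        return model_delayed_refs_rsv_refill(&si, 1 << 20, true) == -EAGAIN ? 0 : 1;
    }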
On zoned block devices, block groups accumulate zone_unusable space (space
between the write pointer and zone end that cannot be allocated until the
zone is reset). When a block group becomes mostly zone_unusable but still
contains some valid data and it gets added to the unused_bgs list, it can
never be deleted because it's not actually empty. The deletion code
(btrfs_delete_unused_bgs) skips these block groups due to the
btrfs_is_block_group_used() check, leaving them on the unused_bgs list
indefinitely.

This causes two problems:

1. The block groups are never reclaimed, permanently wasting space
2. Eventually leads to ENOSPC even though reclaimable space exists

Fix by detecting block groups where zone_unusable exceeds 50% of the block
group size. Move these to the reclaim_bgs list instead of skipping them.
This triggers btrfs_reclaim_bgs_work() which:

1. Marks the block group read-only
2. Relocates the remaining valid data via btrfs_relocate_chunk()
3. Removes the emptied block group
4. Resets the zones, converting zone_unusable back to usable space

The 50% threshold ensures we only reclaim block groups where most space is
unusable, making relocation worthwhile. Block groups with less
zone_unusable are left on unused_bgs to potentially become fully empty
through normal deletion.

Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
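A standalone model of the 50% check described above (struct and names are
illustrative stand-ins, not the kernel's btrfs_block_group): block groups on
the unused list that are mostly zone_unusable are routed to the reclaim list
instead of being skipped forever.

    #include <stdbool.h>

    struct model_block_group {
        unsigned long long length;        /* block group size */
        unsigned long long zone_unusable; /* dead space past write pointer */
        bool used;                        /* still holds valid data */
    };

    enum model_disposition { MODEL_DELETE, MODEL_RECLAIM, MODEL_SKIP };

    static enum model_disposition
    model_unused_bg_disposition(const struct model_block_group *bg)
    {
        if (!bg->used)
            return MODEL_DELETE;   /* truly empty: delete as before */
        /* More than 50% zone_unusable: relocation is worthwhile, so hand
         * the block group to the reclaim machinery instead of skipping. */
        if (bg->zone_unusable * 2 > bg->length)
            return MODEL_RECLAIM;
        return MODEL_SKIP;         /* may still empty out normally */
    }

    int main(void)
    {
        struct model_block_group bg = {
            .length = 100, .zone_unusable = 60, .used = true,
        };

        return model_unused_bg_disposition(&bg) == MODEL_RECLAIM ? 0 : 1;
    }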
On zoned block devices, DATA block groups can accumulate large amounts of
zone_unusable space (space between the write pointer and zone end). When
zone_unusable reaches high levels (e.g., 98% of total space), new
allocations fail with ENOSPC even though space could be reclaimed by
relocating data and resetting zones.

The existing flush states don't handle this scenario effectively - they
either try to free cached space (which doesn't exist for zone_unusable) or
reset empty zones (which doesn't help when zones contain valid data mixed
with zone_unusable space).

Add a new RECLAIM_ZONES flush state that triggers the block group reclaim
machinery. This state:

- Calls btrfs_reclaim_sweep() to identify reclaimable block groups
- Calls btrfs_reclaim_bgs() to queue reclaim work
- Waits for reclaim_bgs_work to complete via flush_work()
- Commits the transaction to finalize changes

The reclaim work (btrfs_reclaim_bgs_work) safely relocates valid data from
fragmented block groups to other locations before resetting zones,
converting zone_unusable space back into usable space.

Insert RECLAIM_ZONES before RESET_ZONES in data_flush_states so that we
attempt to reclaim partially-used block groups before falling back to
resetting completely empty ones.

Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
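A sketch of the ordering only (this enum is a model, not the kernel's
data_flush_states definition; all states other than the two named in the
commit message are placeholders): RECLAIM_ZONES is tried before RESET_ZONES.

    /* Model of the flush-state ordering; only RECLAIM_ZONES and RESET_ZONES
     * correspond to states named above, the rest are placeholders. */
    enum model_data_flush_state {
        MODEL_FLUSH_DELALLOC,   /* placeholder earlier states */
        MODEL_RUN_DELAYED_REFS,
        MODEL_RECLAIM_ZONES,    /* new: relocate valid data, reset zones */
        MODEL_RESET_ZONES,      /* existing: reset already-empty zones */
        MODEL_COMMIT_TRANS,     /* placeholder final state */
    };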
Instead of open coding testing the uptodate bit on the extent buffer's
flags, use the existing helper extent_buffer_uptodate() (which is even
shorter to type). Also change the helper's return value from int to bool,
since we always use it in a boolean context.

Reviewed-by: Boris Burkov <boris@bur.io>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
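A minimal standalone model of the cleanup (the flag constant and struct are
stand-ins for the kernel's): the open-coded bit test becomes a helper call,
and the helper returns bool.

    #include <stdbool.h>

    #define MODEL_EXTENT_BUFFER_UPTODATE (1UL << 0)

    struct model_extent_buffer {
        unsigned long bflags;
    };

    /* Returns bool now, since every caller uses it in a boolean context. */
    bool model_extent_buffer_uptodate(const struct model_extent_buffer *eb)
    {
        return eb->bflags & MODEL_EXTENT_BUFFER_UPTODATE;
    }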
We have several places that call extent_buffer_uptodate() after reading a
tree block with read_tree_block(), but that is redundant since we already
call extent_buffer_uptodate() in the call chain of read_tree_block():
read_tree_block()
btrfs_read_extent_buffer()
read_extent_buffer_pages()
returns -EIO if extent_buffer_uptodate() returns false
So remove those redundant checks.
Reviewed-by: Boris Burkov <boris@bur.io>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Failure to read the fs root results in a mount error, but we log a warning
message. Same goes for checking the uuid tree: an error results in a mount
failure, but we log a warning message. Change the level of the logged
messages from warning to error.

Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
If we fail to start the uuid rescan kthread, btrfs_check_uuid_tree() logs
an error message and returns the error to the single caller, open_ctree().
This however is redundant, since the caller already logs an error message,
which is also more informative since it logs the error code. So remove the
warning message from btrfs_check_uuid_tree() as it doesn't add any value.

Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
There is no need to call btrfs_handle_fs_error() (which we are trying to
deprecate) if we fail to recover log trees:

1) Such a failure results in failing the mount immediately;

2) If the recovery started a transaction before failing, it has already
   aborted the transaction down in the call chain.

So remove the btrfs_handle_fs_error() call, replace it with an error
message and assert that the FS is in error state (so that no partial
updates are committed due to a transaction that was not aborted).

Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
We join a transaction with the goal of catching the current transaction
and then commit it, to get rid of pinned extents and reclaim free space.
But a join can create a new transaction if there isn't any running, and if
right before we did the join the current transaction happened to be
committed by someone else (like the transaction kthread for example), we
end up starting and committing a new transaction, causing rotation of the
super block backup roots besides extra and useless IO.

So instead of doing a transaction join followed by a commit, use the
helper btrfs_commit_current_transaction(), which ensures no transaction is
created if there isn't any running.

Reviewed-by: Boris Burkov <boris@bur.io>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
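A standalone sketch of the behavioral difference (modeled after the commit
message, not the kernel implementation; the global flag and function names
are stand-ins): join+commit can spin up and commit a brand-new, empty
transaction when none is running, while committing only the current
transaction is a no-op in that case.

    #include <stdbool.h>
    #include <stdio.h>

    static bool transaction_running;

    static void model_join_then_commit(void)
    {
        if (!transaction_running)     /* join creates one if absent... */
            transaction_running = true;
        printf("committing (possibly brand-new) transaction\n");
        transaction_running = false;  /* ...so we commit needlessly */
    }

    static void model_commit_current_transaction(void)
    {
        if (!transaction_running) {   /* nothing running: do nothing */
            printf("no running transaction, nothing to commit\n");
            return;
        }
        printf("committing current transaction\n");
        transaction_running = false;
    }

    int main(void)
    {
        model_join_then_commit();            /* commits an empty transaction */
        model_commit_current_transaction();  /* avoids that */
        return 0;
    }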
None of the callers of `cache_save_setup` care about the return value, as
the space cache is purely an optimization. Also, the free space cache is a
deprecated feature that is being phased out. Change the return type of
`cache_save_setup` to void to reflect this.

Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
That list head records all pending checksums for that ordered extent. And
unlike other lists, we just use the name "list", which can be very
confusing for readers. Rename it to "csum_list", which follows the naming
of the remaining lists and shows the purpose of the list.

And since we're here, remove a comment inside btrfs_finish_ordered_zoned()
where we have "ASSERT(!list_empty(&ordered->csum_list))" to make sure the
OE has pending csums. That comment is only there to make sure we do not
call list_first_entry() before checking BTRFS_ORDERED_PREALLOC. But since
we already have that bit checked and even have a dedicated ASSERT(), there
is no need for that comment anymore.

Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
The structure btrfs_ordered_extent has a lot of list heads for different
purposes, so passing a raw list_head pointer is never a good idea: if the
wrong list is passed in, the type casting, along with the fs, will be
screwed up.

Instead pass the btrfs_ordered_extent pointer, and grab the csum_list
inside add_pending_csums() to make it a little safer. Since we're here,
also update the comments to follow the current style.

Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
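A minimal sketch of the idea (struct and function names are illustrative
stand-ins, not the kernel definitions): once the callee takes the containing
structure, it can no longer be handed the wrong list by mistake.

    struct model_list_head {
        struct model_list_head *next, *prev;
    };

    struct model_ordered_extent {
        struct model_list_head csum_list; /* pending checksums */
        struct model_list_head log_list;  /* a different, unrelated list */
    };

    /* Before: any list_head compiles, even &oe->log_list by mistake:
     *   void add_pending_csums_old(struct model_list_head *list);
     * After: the callee grabs the right list itself. */
    void model_add_pending_csums(struct model_ordered_extent *oe)
    {
        struct model_list_head *csums = &oe->csum_list;

        /* ... walk csums and insert each checksum into the csum tree ... */
        (void)csums;
    }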
The function btrfs_csum_file_blocks() is a little confusing: unlike
btrfs_csum_one_bio(), it is not calculating the checksum of some file
blocks. Instead it's just inserting the already calculated checksums into
a given root (which can be a csum root or a log tree). So rename it to
btrfs_insert_data_csums() to reflect its behavior better.

Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
We are logging messages as warnings but they should really have an error
level instead, as, if the respective conditions are met, the mount will
fail. So convert them to error level and also log the error code returned
by read_tree_block().

Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
The 'out' label is pointless as there are no cleanups to perform there, so
we can replace every goto with a direct return.

Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
We have several functions with parameters defined as booleans but then we
have callers passing integers, 0 or 1, instead of false and true. While
this isn't a bug, since 0 and 1 are converted to false and true, it is odd
and less readable. Change the callers to pass true and false literals
instead.

Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
This WARN_ON(ret) is never executed, since the previous if statement makes
us jump to the 'out_put' label when ret is not zero. The existing
transaction abort inside the if statement also gives us a stack trace, so
we don't need to move the WARN_ON(ret) into the if statement either.

Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
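A minimal sketch of the control flow (helper names and the label are
illustrative, not the kernel function): because the error branch jumps
straight to the label, a check placed after it is dead code.

    static int model_do_work(void) { return 0; }
    static void model_abort_transaction(int ret) { (void)ret; }
    static void model_put_resources(void) { }

    int model_example(void)
    {
        int ret = model_do_work();

        if (ret) {
            model_abort_transaction(ret); /* already logs a stack trace */
            goto out_put;
        }
        /* A WARN_ON(ret) here can never fire: ret is 0 on this path. */
    out_put:
        model_put_resources();
        return ret;
    }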
Remove duplicate inclusion of delayed-inode.h in disk-io.c to clean up
redundant code.

Signed-off-by: Chen Ni <nichen@iscas.ac.cn>
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
There's no need to pass the maximum between the block group's start offset
and BTRFS_SUPER_INFO_OFFSET (64K), since we can't have any block groups
allocated in the first megabyte, as that's reserved space. Furthermore,
even if we could, the correct thing to do would be to pass the block
group's start offset anyway - and that's precisely what we do for block
groups that happen to contain a superblock mirror (the range for the super
block is never marked as free and it's marked as dirty in the
fs_info->excluded_extents io tree). So simplify this and get rid of that
maximum expression.

Reviewed-by: Boris Burkov <boris@bur.io>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
btrfs_extent_root() can return a NULL pointer in case the root we are
looking for is not in the rb tree that tracks roots. So add checks to
every caller that is missing such a check, to log a message and return an
error. The same applies to callers of btrfs_block_group_root(), since it
calls btrfs_extent_root().

Reported-by: Chris Mason <clm@meta.com>
Link: https://lore.kernel.org/linux-btrfs/20260208161657.3972997-1-clm@meta.com/
Reviewed-by: Boris Burkov <boris@bur.io>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
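A minimal sketch of the added pattern (the lookup, error message and error
code below are stand-ins, not the kernel code): callers must not assume the
root lookup succeeds.

    #include <errno.h>
    #include <stdio.h>

    struct model_root { int dummy; };

    /* Stand-in for btrfs_extent_root(); may return NULL when the root is
     * not in the tracking tree. */
    static struct model_root *model_extent_root_lookup(unsigned long long bytenr)
    {
        (void)bytenr;
        return NULL;
    }

    int model_caller(unsigned long long bytenr)
    {
        struct model_root *root = model_extent_root_lookup(bytenr);

        if (!root) {
            /* Log and return an error instead of dereferencing NULL. */
            fprintf(stderr, "no extent root for bytenr %llu\n", bytenr);
            return -ENOENT;
        }
        /* ... use root ... */
        return 0;
    }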
btrfs_csum_root() can return a NULL pointer in case the root we are
looking for is not in the rb tree that tracks roots. So add checks to
every caller that is missing such a check, to log a message and return an
error.

Reported-by: Chris Mason <clm@meta.com>
Link: https://lore.kernel.org/linux-btrfs/20260208161657.3972997-1-clm@meta.com/
Reviewed-by: Boris Burkov <boris@bur.io>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
There is a lengthy comment, introduced in commit b3ff8f1 ("btrfs: Don't
submit any btree write bio if the fs has errors") and commit c9583ad
("btrfs: avoid double clean up when submit_one_bio() failed"), explaining
two things:

- Why we don't want to submit metadata writes if the fs has errors
- Why we re-set @ret to 0 if it's positive

However it's no longer up to date, for the following reasons:

- We have better checks nowadays

  Commit 2618849 ("btrfs: ensure no dirty metadata is written back for an
  fs with errors") has introduced better checks: if the fs is in an error
  state, metadata writes will not result in any bio but instead complete
  immediately. That covers all metadata writes better.

- The comment mentioned an incorrect function name

  The commit c9583ad ("btrfs: avoid double clean up when submit_one_bio()
  failed") introduced this ret > 0 handling, but at that time the function
  name submit_extent_page() was already incorrect. It was submit_eb_page()
  that could return >0 at that time, and submit_extent_page() could only
  return 0 or <0 for errors, never >0.

  Later commit b35397d ("btrfs: convert submit_extent_page() to use a
  folio") changed "submit_extent_page()" to "submit_extent_folio()" in the
  comment, but it doesn't make any difference since the function name was
  wrong from day 1.

  Finally commit 5e121ae ("btrfs: use buffer xarray for extent buffer
  writeback operations") completely reworked how metadata writeback works
  and removed submit_eb_page(), leaving only the wrong function name in
  the comment.

  Furthermore the function submit_extent_folio() still exists in the
  latest code base, but is never utilized for metadata writeback, causing
  more confusion.

Just remove the lengthy comment, and replace the "if (ret > 0)" check with
an ASSERT(), since only btrfs_check_meta_write_pointer() can modify @ret
and it returns 0 or <0 for errors.

Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
We already have btrfs_ordered_extent::inode, thus there is no need to pass
a btrfs_inode parameter to btrfs_remove_ordered_extent().

Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
We compared rfer_cmpr against excl_cmpr_sum instead of rfer_cmpr_sum,
which is confusing. I expect that rfer_cmpr == excl_cmpr in squota, but it
is much better to be consistent in case of any surprises or bugs.

Reported-by: Chris Mason <clm@meta.com>
Link: https://lore.kernel.org/linux-btrfs/cover.1764796022.git.boris@bur.io/T/#mccb231643ffd290b44a010d4419474d280be5537
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Boris Burkov <boris@bur.io>
Signed-off-by: David Sterba <dsterba@suse.com>
Both functions btrfs_finish_ordered_extent() and
btrfs_mark_ordered_io_finished() accept an optional folio parameter. That
@folio is passed into can_finish_ordered_extent(), which later will test
and clear the ordered flag for the involved range.

However I do not think there is any other call site that can clear the
ordered flags of a page cache folio and affect
can_finish_ordered_extent(). There are limited *_clear_ordered() callers
outside of the can_finish_ordered_extent() function:

- btrfs_migrate_folio()

  This is completely unrelated, it's just migrating the ordered flag to
  the new folio.

- btrfs_cleanup_ordered_extents()

  We manually clear the ordered flags of all involved folios, then call
  btrfs_mark_ordered_io_finished() without a @folio parameter. So it
  doesn't need and didn't pass a @folio parameter in the first place.

- btrfs_writepage_fixup_worker()

  This function is going to be removed soon, and we should not hit that
  function anymore.

- btrfs_invalidate_folio()

  This is the real call site we need to bother with.

  If we already have a bio running, btrfs_finish_ordered_extent() in
  end_bbio_data_write() will be executed first, as
  btrfs_invalidate_folio() will wait for the writeback to finish. Thus if
  there is a running bio, it will not see the range as having ordered
  flags, and will just skip to the next range.

  If there is no bio running, the ordered extent is created but the folio
  is not yet submitted. In that case btrfs_invalidate_folio() will
  manually clear the folio's ordered range, but then manually finish the
  ordered extent with btrfs_dec_test_ordered_pending() without bothering
  with the folio ordered flags. Meaning the OE range with folio ordered
  flags will be finished manually, without the need to call
  can_finish_ordered_extent().

This means all can_finish_ordered_extent() call sites should get a range
that has the folio ordered flag set, thus the old "return false" branch
should never be triggered.

Now we can:

- Remove the @folio parameter from the involved functions:

  * btrfs_mark_ordered_io_finished()
  * btrfs_finish_ordered_extent()

  For call sites passing a @folio into those functions, let them manually
  clear the ordered flag of the involved folios.

- Move btrfs_finish_ordered_extent() out of the loop in
  end_bbio_data_write()

  We only need to call btrfs_finish_ordered_extent() once per bbio, not
  per folio.

- Add an ASSERT() to make sure all folio ranges have ordered flags

  It's only for end_bbio_data_write(), and we already have enough safety
  nets to catch over-accounting of ordered extents.

Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Keep this open, the build tests are hosted on GitHub CI.