feat: (perf) allow spawning multiple tasks per read #2156

Open

tafia wants to merge 2 commits into apache:main from tafia:spawn-multiple-tasks-per-read

Conversation


@tafia commented Feb 21, 2026

Scanning all files is both CPU- and I/O-intensive. While we can control the I/O parallelism via the concurrency_limit* arguments, all the work is effectively done on the same tokio task, and thus the same CPU.

This situation is one of the main reasons why iceberg-rust is much slower than pyiceberg when reading large files (my test involved a 10 GB file).

This PR proposes splitting scans into chunks that can be spawned independently to allow CPU parallelism.

In my tests (I have yet to find out how to benchmark this in the project directly), reading a 10 GB file:

  • before: 38s
  • after: 16s
  • pyiceberg: 15s
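The chunk-spawning idea above can be sketched in miniature with plain std::thread (the PR itself spawns tokio tasks inside the scan path; the process_chunk function and its sum-of-squares workload here are hypothetical stand-ins for the per-chunk decode work):

```rust
use std::thread;

// Hypothetical CPU-bound workload, standing in for the per-chunk
// decode work a scan performs.
fn process_chunk(chunk: &[u64]) -> u64 {
    chunk.iter().map(|v| v * v).sum()
}

fn main() {
    let data: Vec<u64> = (0..1_000u64).collect();
    let chunk_size = 256;

    // Baseline: everything runs on one thread, analogous to one tokio
    // task doing all the CPU work.
    let sequential: u64 = process_chunk(&data);

    // Chunked: spawn one thread per chunk so the CPU-bound work can run
    // in parallel, then join and combine the partial results.
    let handles: Vec<_> = data
        .chunks(chunk_size)
        .map(|chunk| {
            let chunk = chunk.to_vec(); // move an owned copy into the thread
            thread::spawn(move || process_chunk(&chunk))
        })
        .collect();

    let parallel: u64 = handles.into_iter().map(|h| h.join().unwrap()).sum();

    // Chunking must not change the result, only where the work runs.
    assert_eq!(sequential, parallel);
    println!("sequential = {sequential}, parallel = {parallel}");
}
```

In an async runtime the same shape applies, with tokio::spawn in place of thread::spawn, so each chunk can be scheduled on a different worker thread instead of serializing on one task.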

Which issue does this PR close?

I haven't found a specific issue, but several comments refer to CPU-bound processing.

What changes are included in this PR?

This PR splits scans into chunks that can be spawned independently to allow CPU parallelism.

Are these changes tested?

I have added a test to show that the change doesn't affect the output. I have yet to find a good benchmark to prove my claim about the performance; any tips on how I could do so would be welcome!


tafia commented Feb 21, 2026

The CI error seems either unrelated to this PR (Python) or wrong (we need to take ownership of the iterator, not just its elements).

@mbutrovich mbutrovich self-requested a review February 23, 2026 14:58
@mbutrovich (Collaborator)

I'll take a look. In theory, nothing stops us from generating FileScanTasks that span pieces of files now (this is what Comet does with the existing reader). I've still been running into mismatches in parallelism, though, and have been trying to get more CPU utilization out of the Iceberg scan stages of Comet jobs, even when we've properly dispatched a bunch of I/O requests. I suspect you could be onto something here. Thanks! I'll take a pass this week.


tafia commented Feb 23, 2026

> I've still been running into mismatches in parallelism

Yes, this is far from an optimal solution, but at least it is a simple move in the right direction.

FYI, I've also added some row-group and column parallelism, but those changes are more complex and not ready to be merged.


@blackmwk (Contributor) left a comment


Thanks @tafia for this PR. Would you mind trying the DataFusion integration rather than using the Arrow reader directly? I'm declining to make the to_arrow method more complicated. If you want a high-performance local query engine, using DataFusion is the right direction.
