Imports modules only when they're requested (see the sketch after this list). There may be more imports for which we could do this.
Use a while loop in findClosedElmJson, as recursion in JS is not optimal.
Reduce the number of operations when using --report=json or --report=junit.
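For context, here is a minimal sketch of the first two ideas, not the actual elm-test code (the reporter module and the elm.json lookup are simplified placeholders): the require() call moves inside the function that needs it, and the upward search for elm.json uses a loop instead of recursion.

```js
const fs = require("fs");
const path = require("path");

// Lazy import: the (potentially expensive) reporter module is only loaded
// when a JSON report is actually requested, not at startup.
function makeJsonReport(results) {
  const jsonReporter = require("./json-reporter"); // hypothetical module
  return jsonReporter.render(results);
}

// Simplified sketch of findClosedElmJson: walk up the directory tree with a
// while loop instead of recursing.
function findClosedElmJson(startDir) {
  let dir = path.resolve(startDir);
  while (true) {
    const candidate = path.join(dir, "elm.json");
    if (fs.existsSync(candidate)) {
      return candidate;
    }
    const parent = path.dirname(dir);
    if (parent === dir) {
      return undefined; // reached the filesystem root without finding elm.json
    }
    dir = parent;
  }
}
```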
This makes the startup time faster, as demonstrated when running using --help (using hyperfine):
Benchmark 1: master
Time (mean ± σ): 94.2 ms ± 1.3 ms [User: 105.0 ms, System: 16.5 ms]
Range (min … max): 91.7 ms … 97.2 ms 31 runs
Benchmark 2: branch
Time (mean ± σ): 35.9 ms ± 1.0 ms [User: 31.1 ms, System: 8.2 ms]
Range (min … max): 34.8 ms … 41.9 ms 80 runs
Summary
branch ran 2.63 ± 0.08 times faster than master
However, when running regular tests (plain elm-test, on elmcraft/core-extra), I'm getting worse results:
Benchmark 1: master
Time (mean ± σ): 1.634 s ± 0.068 s [User: 0.633 s, System: 0.218 s]
Range (min … max): 1.510 s … 1.722 s 10 runs
Benchmark 2: branch
Time (mean ± σ): 1.991 s ± 0.413 s [User: 0.628 s, System: 0.232 s]
Range (min … max): 1.644 s … 2.745 s 10 runs
Summary
master ran 1.22 ± 0.26 times faster than branch
I don't know if my benchmarking is wrong or if lazy imports make things worse when they're spread out and most of them end up being called. If I run hyperfine with the benchmarks in the reverse order, I usually get diverging results :/
However, mocha reports the duration of slow tests, and when comparing the mocha tests on both branches, the new version does seem to yield faster tests.
elm-test in example-application/ takes the same time. (I think that’s a good test case, since it has so few tests – the runtime should be dominated by all the setup around running the tests, rather than running the tests themselves.)
Running the mocha tests takes the same time.
I really appreciate you taking the time to explore this! But so far, I don’t think it’s worth it.
Maybe related note: My dream is to switch from Flow to TypeScript and use ESM instead of CommonJS. It feels like moving the require() calls like in this PR would make that slightly harder to do.
My dream is to switch from Flow to TypeScript and use ESM instead of CommonJS.
Looking at the code, I felt like we were writing some odd code (all the void declarations/exports, for instance) just to please Flow. So yeah, switching to TS seems like a fine move. (Though we are like 122 versions behind the latest Flow; I tried updating it briefly and it reported a lot of new errors.)
ESM can't be used while supporting Node 12/14, right?
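For context on why moving require() calls around could make an ESM migration slightly harder (my reading, not something verified against this PR): CommonJS require() is synchronous and can be called from anywhere, while ESM's lazy-loading equivalent, dynamic import(), returns a promise, so lazy call sites (and their callers) would need to become async. A rough sketch with a hypothetical module:

```js
// CommonJS: lazy loading is a plain synchronous call.
function renderJunit(results) {
  const junit = require("./junit-reporter"); // hypothetical module
  return junit.render(results);
}

// ESM: the lazy equivalent is dynamic import(), which is asynchronous,
// so the call site has to become async (assumes a named `render` export).
async function renderJunitEsm(results) {
  const junit = await import("./junit-reporter.js"); // hypothetical module
  return junit.render(results);
}
```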