#201 / IHS-194: Generate SDK API documentation #822
Conversation
- …ng the future documentation generation method to be tested in isolation IHS-201
- …n method to be developed IHS-201
- …archy. This is not yet plugged into tasks.py file IHS-201
- … merged into one IHS-201
- …tation files generated IHS-200
Walkthrough

The pull request restructures documentation generation infrastructure by introducing a modular framework for content generation strategies (…)

🚥 Pre-merge checks | ✅ 2 passed | ❌ 2 failed
❌ Failed checks (2 warnings)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

No actionable comments were generated in the recent review. 🎉

🧹 Recent nitpick comments
Codecov Report

✅ All modified and coverable lines are covered by tests.

```
@@          Coverage Diff           @@
##          develop    #822   +/-  ##
========================================
  Coverage   80.34%   80.34%
========================================
  Files         115      115
  Lines        9873     9873
  Branches     1504     1504
========================================
  Hits         7932     7932
  Misses       1420     1420
  Partials      521      521
```

Flags with carried forward coverage won't be shown.
Force-pushed from 2bb2cbe to 31eee1b (compare)
Deploying infrahub-sdk-python with Cloudflare Pages

| | |
| --- | --- |
| Latest commit: | 6d9e056 |
| Status: | ✅ Deploy successful! |
| Preview URL: | https://62f41275.infrahub-sdk-python.pages.dev |
| Branch Preview URL: | https://pmi-20260210-sdk-api-documen.infrahub-sdk-python.pages.dev |
Force-pushed from 037d683 to fb81d93 (compare)
… requirements IHS-197
Force-pushed from 4347025 to 900096c (compare)
Actionable comments posted: 18
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
.github/workflows/ci.yml (1)
193-230: ⚠️ Potential issue | 🟠 Major

Operator precedence bug in the `if` condition causes the `always()` guard to be bypassed.

Line 198: The trailing `|| (needs.files-changed.outputs.documentation_generated == 'true')` is evaluated with lower precedence than the preceding `&&` chain. This means when `documentation_generated` is `'true'`, the job runs even if upstream jobs failed or were cancelled — bypassing the `always() && !cancelled() && !contains(...)` guards.

This is pre-existing, but the new steps (NodeJS install, npm deps, vitest) now also execute under this flawed condition.

Proposed fix: wrap the entire condition in parentheses

```diff
 if: |
   always() && !cancelled() &&
   !contains(needs.*.result, 'failure') &&
   !contains(needs.*.result, 'cancelled') &&
-  (needs.files-changed.outputs.python == 'true') || (needs.files-changed.outputs.documentation_generated == 'true')
+  ((needs.files-changed.outputs.python == 'true') || (needs.files-changed.outputs.documentation_generated == 'true'))
```
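For intuition, the same precedence pitfall reproduced in plain Python, where `and` also binds tighter than `or` (the variables are illustrative stand-ins for the workflow outputs, not real values):

```python
# guard stands in for always() && !cancelled() && !contains(...);
# the two flags mirror the python / documentation_generated outputs.
guard, python_changed, docs_generated = False, False, True

# Without parentheses, 'and' binds tighter, so the guard is bypassed:
print(guard and python_changed or docs_generated)    # True

# With parentheses, the guard applies to both triggers:
print(guard and (python_changed or docs_generated))  # False
```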
🤖 Fix all issues with AI agents
In @.github/workflows/sync-docs.yml:
- Around line 11-12: Update the sync step to copy the actual source paths and
include the missing dependency: replace references to
source-repo/docs/sidebars-infrahubctl.ts and
source-repo/docs/sidebars-python-sdk.ts with
source-repo/docs/sidebars/sidebars-infrahubctl.ts and
source-repo/docs/sidebars/sidebars-python-sdk.ts respectively, and add
source-repo/docs/sidebars/sidebar-utils.ts to the list of files being copied so
the imports in sidebars-* files resolve; also confirm whether the target expects
these files under target-repo/docs/ or target-repo/docs/sidebars/ and adjust the
destination paths accordingly.
In @.vale/styles/Infrahub/spelling.yml:
- Line 9: The regex '\b\w+_\w+\b' in spelling.yml is too broad and mutes
spell-checking for any underscored word in prose; remove this global filter and
instead scope it to generated reference docs (apply the pattern in a style used
only for generated MDX or add a path-specific rule in .vale.ini) or tighten the
pattern to only match code identifiers (e.g., only inside code spans/fences or
backticks) so hand-written prose like "recieve_data" still gets flagged; update
the spelling.yml entry for the snake_case rule and move or replace it with the
scoped/tighter pattern so it no longer applies globally.
In `@docs/docs_generation/content_gen_methods/command_output_method.py`:
- Around line 40-49: The apply method creates a temp file (tmp_path) but never
removes it if self.context.run(full_cmd) or subsequent operations raise; wrap
the block that runs the command and reads the file in a try/finally so
tmp_path.unlink(missing_ok=True) is always executed: specifically, after
creating tmp_path in apply(), execute the context.cd / context.run and
tmp_path.read_text() inside a try and move the tmp_path.unlink call into the
finally to guarantee cleanup even on exceptions.
In `@docs/docs_generation/content_gen_methods/mdx/mdx_code_doc.py`:
- Around line 91-102: The _execute_mdxify implementation stores MdxFile.path
pointing into a TemporaryDirectory that is removed when the with block exits and
also generate() caches results in self._files without accounting for the
modules_to_document argument; to fix, remove the dangling path by populating
MdxFile only with persistent data (e.g., content and name) or explicitly
document that path is ephemeral and only valid inside _execute_mdxify, and
change generate() to key its cache by the modules_to_document argument (e.g.,
compute a stable key from modules_to_document and store results per-key in
self._files) so repeated calls with different modules re-run mdxify rather than
returning stale results; update references to MdxFile.path usage elsewhere to
use the content/name fields or handle the ephemeral path accordingly.
In `@docs/docs_generation/helpers.py`:
- Around line 28-34: The _resolve_allof function assumes
prop["allOf"][0]["$ref"] exists and will crash for empty or non-$ref allOf
entries; update _resolve_allof to defensively validate that prop.get("allOf") is
a non-empty list and that the first entry is a dict containing a "$ref" string
before accessing it, returning ([], "") if those checks fail, and then continue
to extract ref_name, lookup ref_def from definitions and return
ref_def.get("enum", []), ref_def.get("type", ""). Ensure you reference the
existing function name _resolve_allof and variables prop, definitions, ref_name,
and ref_def when making the change.
- Around line 21-23: The code is calling the private method
EnvSettingsSource._extract_field_info via env_settings._extract_field_info while
iterating settings.model_fields, which risks breaking on future
pydantic-settings releases; update the project dependency spec to pin
pydantic-settings to a safe major range (e.g., pydantic-settings>=2.0,<3.0) and
add an inline comment next to the usage of env_settings._extract_field_info
(and/or above the loop referencing settings.model_fields) documenting that this
relies on a private API and why it's pinned, and include a TODO referencing the
suggested long-term refactor to public APIs (get_field_value,
prepare_field_value or alias-based configuration) so future maintainers know to
replace _extract_field_info.
In `@docs/docs/python-sdk/sdk_ref/infrahub_sdk/client.mdx`:
- Line 855: Update the synchronous method's docstring to rename the parameter
entry from `size` to `prefix_length` so it matches the method signature (use the
same wording as the async variant), ensuring the line that currently reads
"`size`: Length of the prefix to allocate." becomes "`prefix_length`: Length of
the prefix to allocate." and that the docstring for the function that accepts
prefix_length is consistent with its async counterpart.
- Around line 252-254: The docs return-type for the async method
InfrahubClient.filters() is incorrect: change the return annotation from
list[InfrahubNodeSync] to list[InfrahubNode] in the relevant docs block so it
matches the async implementation (keep the sync variant as
list[InfrahubNodeSync]); update the "**Returns:**" section and any nearby
references to use list[InfrahubNode] for the async filters() description.
- Around line 480-482: Add a docstring to the InfrahubClientSync class mirroring
the existing one on InfrahubClient so the sync variant shows up in generated
docs; locate the class declaration for InfrahubClientSync in
infrahub_sdk/client.py, add a short module-level style docstring such as
"GraphQL Client to interact with Infrahub." (matching the InfrahubClient
phrasing) immediately below the class signature so documentation generators pick
it up.
- Around line 384-386: Update the docstrings in infrahub_sdk/client.py for the
methods allocate_next_ip_address and allocate_next_ip_prefix: change the
`timeout` description to "Timeout in seconds for the query", clarify `tracker`
to accurately describe its purpose (e.g., pagination/tracking token rather than
"offset"), and set `raise_for_error` to "Deprecated. Controls only HTTP status
handling"; also in InfrahubClientSync.allocate_next_ip_prefix fix the parameter
name in the docstring from `size` to `prefix_length` to match the function
signature. After updating those docstrings, run the docs-generate task to
regenerate the MDX output so docs/docs/.../client.mdx reflects the corrected
descriptions.
In `@docs/docs/python-sdk/sdk_ref/infrahub_sdk/node/attribute.mdx`:
- Around line 16-26: The file contains duplicate headings for the attribute
accessor `value` (getter and setter) which create conflicting anchor IDs; update
the doc generator mdxify to emit distinct headings for the two accessors (e.g.,
"value (getter)" and "value (setter)" or include property annotations like
"`@property value`" and "`@value.setter`") so the generated sections for the
`value` getter and the `value` setter produce unique anchors and no duplicate
`#### value` headings.
In `@docs/docs/python-sdk/sdk_ref/infrahub_sdk/node/node.mdx`:
- Around line 232-236: The Examples in the docstring for get_flat_value (both
async and sync implementations) render as plain text; update the source
docstrings in infrahub_sdk/node/node.py for get_flat_value to wrap the example
identifiers (e.g., name__value, module.object.value) in a fenced code block with
a language tag (e.g., ```python) so the generated MDX shows them as code; apply
this change to both the async and sync get_flat_value docstrings.
- Around line 162-163: Update the docstrings for both the sync and async
implementations of generate_query_data_node to fix the typo: replace "Indicated
of the attributes and the relationships inherited from generics should be
included as well." with "Indicates whether the attributes and the relationships
inherited from generics should be included as well." Ensure the exact corrected
sentence appears in both docstrings so regenerated MDX (node.mdx) reflects the
fix.
In `@docs/docs/python-sdk/sdk_ref/infrahub_sdk/node/parsers.mdx`:
- Line 16: The docstring for parse_human_friendly_id in
infrahub_sdk/node/parsers.py uses "human friendly" and should be changed to the
hyphenated compound adjective "human-friendly"; update the docstring in the
parse_human_friendly_id function accordingly, save, then regenerate the docs by
running `uv run invoke generate-sdk` so the MDX is rebuilt from the corrected
source.
In `@docs/src/theme/MDXComponents.js`:
- Line 9: The comment in MDXComponents.js incorrectly suggests using the Icon
component as "<icon />" which is misleading because MDX treats lowercase tags as
HTML; update the comment for the Icon mapping (Icon: Icon) to instruct using the
capitalized component name "<Icon />" in MDX so readers know to use the React
component form.
In `@infrahub_sdk/template/jinja2/__init__.py`:
- Around line 189-217: In _identify_faulty_jinja_code, replace the private
Traceback._guess_lexer usage with the public Syntax.guess_lexer API: call
Syntax.guess_lexer(frame.filename, code) to compute lexer_name (keeping the
existing special-case for "<template>" mapping to "text"), so lexer selection
uses the stable public method and its built-in fallback instead of
Traceback._guess_lexer; update the assignment to lexer_name and keep the rest of
the Syntax(...) construction unchanged.
In `@infrahub_sdk/template/jinja2/exceptions.py`:
- Around line 11-41: The exception classes in this file (JinjaTemplateError,
JinjaTemplateNotFoundError, JinjaTemplateSyntaxError,
JinjaTemplateUndefinedError, JinjaTemplateOperationViolationError) set
self.message but never call the base initializer; update each __init__ to call
super().__init__(self.message) after assigning self.message (and keep assigning
any extra attributes like filename, base_template, lineno, errors) so the
underlying Error/Exception gets initialized correctly; ensure JinjaTemplateError
itself calls super().__init__(self.message) in its __init__ as well.
In `@tests/unit/sdk/dummy_template.py`:
- Around line 11-18: The docstring says DummyTemplate should absorb extra
keyword args but __init__ only accepts content, causing TypeError when tests
pass template= or template_directory=; update DummyTemplate.__init__ to accept
**kwargs by changing its signature to def __init__(self, content: str, **kwargs:
Any) -> None and ensure typing.Any is imported (or use typing_extensions if
needed), leaving the docstring as-is; alternatively, if you prefer removing
behavior, update the docstring to drop the **kwargs mention—prefer adding
**kwargs to maintain compatibility with Jinja2Template.
🧹 Nitpick comments (15)
docs/docs/python-sdk/sdk_ref/infrahub_sdk/client.mdx (2)
388-389: Inconsistent `Returns` formatting — plain text instead of MDX bold/bullet style.

Lines 388–389 (and similarly 448–449, 804–805, 864–865) use raw indented text (`Returns:\n    InfrahubNode: ...`) instead of the `**Returns:**\n\n- ...` pattern used elsewhere in this file (e.g., Lines 209–211). This appears to be a rendering issue in the doc generator for these specific docstrings.
44-54: Duplicate `#### request_context` headings may cause anchor collisions.

Both the getter (Line 44) and setter (Line 50) generate the same `#### request_context` heading. In Docusaurus, this can produce duplicate HTML `id` anchors, making direct links unreliable. Consider differentiating them (e.g., appending `(getter)`/`(setter)`) in the doc generator.

docs/docs/python-sdk/sdk_ref/infrahub_sdk/node/constants.mdx (1)
1-8: Consider whether empty module pages add value to the docs.

This page states the module is empty or contains only private/internal implementations. Publishing it may confuse users looking for actual API content. If the auto-generation tool produces these by default, consider filtering out empty modules or at minimum adding a note explaining this is intentional.
docs/package.json (1)
25-25: `markdownlint-cli2` should likely be in `devDependencies`.

This is a linting tool, not a runtime dependency for Docusaurus. Placing it in `dependencies` causes it to be installed in production builds unnecessarily.

Proposed fix

Move `markdownlint-cli2` from `dependencies` to `devDependencies`:

```diff
 "dependencies": {
   "@docusaurus/core": "^3.9.2",
   "@docusaurus/preset-classic": "^3.9.2",
   "@iconify/react": "^6.0.0",
   "@mdx-js/react": "^3.0.0",
   "clsx": "^2.0.0",
-  "markdownlint-cli2": "^0.20.0",
   "prism-react-renderer": "^2.3.0",
   ...
 },
 "devDependencies": {
   ...
+  "markdownlint-cli2": "^0.20.0",
   "vitest": "^4.0.17"
 },
```

docs/AGENTS.md (1)
67-70: Consider documenting the new `generate-sdk-api-docs` task in the "Never" section.

Line 70 mentions not editing `config.mdx` and regenerating with `generate-sdk`, but the PR introduces a new `generate-sdk-api-docs` invoke task for the SDK API reference pages under `sdk_ref/`. A corresponding "Never edit `docs/python-sdk/sdk_ref/*.mdx` directly" entry would be consistent.

Proposed addition

```diff
 - Edit `docs/infrahubctl/*.mdx` directly (regenerate with `uv run invoke generate-infrahubctl`)
 - Edit `docs/python-sdk/reference/config.mdx` directly (regenerate with `uv run invoke generate-sdk`)
+- Edit `docs/python-sdk/sdk_ref/*.mdx` directly (regenerate with `uv run invoke generate-sdk-api-docs`)
```

infrahub_sdk/template/jinja2/filters.py (1)
4-8: Consider making the dataclass frozen for immutability.

Since these definitions are static configuration data, `frozen=True` would prevent accidental mutation and make instances hashable.

Suggested change

```diff
-@dataclass
+@dataclass(frozen=True)
 class FilterDefinition:
     name: str
     trusted: bool
     source: str
```
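As a quick standalone illustration of what `frozen=True` buys (field values here are made up, not from the SDK):

```python
from dataclasses import FrozenInstanceError, dataclass

@dataclass(frozen=True)
class FilterDefinition:
    name: str
    trusted: bool
    source: str

definition = FilterDefinition(name="upper", trusted=True, source="builtin")

try:
    definition.trusted = False  # frozen instances reject attribute assignment
except FrozenInstanceError:
    pass

# Frozen dataclasses are hashable, so they can live in sets and dict keys.
registry = {definition}
```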
docs/docs_generation/content_gen_methods/mdx/mdx_code_doc.py (1)

85-89: Cache ignores `modules_to_document` parameter on subsequent calls.

`generate()` caches on the first invocation, but the cache key doesn't include `modules_to_document`. If a caller invokes `generate()` a second time with different modules, the stale cached result is returned silently.

Currently the single call site in `tasks.py` only calls this once per instance, so this isn't an active bug, but it's a subtle API contract that could bite future consumers.

💡 Option: store modules and validate on subsequent calls

```diff
 def generate(self, context: Context, modules_to_document: list[str]) -> dict[str, MdxFile]:
     """Return mdxify results, running the tool on first call only."""
     if self._files is None:
+        self._modules = modules_to_document
         self._files = self._execute_mdxify(context, modules_to_document)
+    elif self._modules != modules_to_document:
+        raise ValueError("generate() called with different modules than the cached run")
     return self._files
```

tasks.py (3)
155-167: Use `Exit` instead of `ValueError` for user-facing task errors.

Lines 159 and 164 raise `ValueError`, which produces a full Python traceback. The rest of the file (e.g., `require_tool` on line 26) raises `Exit` for user-facing errors, which gives clean output. These validation errors are user-facing (triggered when someone adds a new package without categorizing it), so they should follow the same pattern.

♻️ Proposed fix

```diff
 if uncategorized:
-    raise ValueError(
+    raise Exit(
         f"Uncategorized packages under infrahub_sdk/: {sorted(uncategorized)}. "
-        "Add them to packages_to_document or packages_to_ignore in tasks.py"
+        "Add them to packages_to_document or packages_to_ignore in tasks.py",
+        code=1,
     )
 if unknown:
-    raise ValueError(f"Declared packages that no longer exist: {sorted(unknown)}")
+    raise Exit(f"Declared packages that no longer exist: {sorted(unknown)}", code=1)
```
189-192: The `reduce(operator.truediv, ...)` path construction is dense — consider a clarifying comment.

Line 191 converts a hyphenated filename key (e.g., `"infrahub_sdk-node.mdx"`) into a nested path (e.g., `infrahub_sdk/node.mdx`) by splitting on `"-"` and joining via `Path /`. This is correct but non-obvious at a glance.

💡 Add a brief inline comment

```diff
 for file_key, mdxified_file in generated_files.items():
     page = DocPage(content_gen_method=FilePrintingDocContentGenMethod(file=mdxified_file))
+    # Convert hyphenated filename to nested path: "a-b-c.mdx" → a/b/c.mdx
     target_path = output_dir / reduce(operator.truediv, (Path(part) for part in file_key.split("-")))
     MDXDocPage(page=page, output_path=target_path).to_mdx()
```
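To see the transformation in isolation (a standalone snippet, runnable as-is):

```python
import operator
from functools import reduce
from pathlib import Path

file_key = "infrahub_sdk-node.mdx"

# reduce(operator.truediv, ...) folds the parts together with the `/`
# operator, exactly like Path("infrahub_sdk") / "node.mdx".
target = reduce(operator.truediv, (Path(part) for part in file_key.split("-")))
print(target)  # infrahub_sdk/node.mdx
```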
281-303: `docs_validate` doesn't detect staged deletions.

`git diff --name-only docs` (line 291) only shows unstaged working-tree changes. If `docs_generate` removes a file that was committed, the deletion shows up in `git diff`. However, `git diff --name-only` without `--cached` does not catch changes that have been staged. This is fine for the current flow since `docs_generate` doesn't stage anything — just confirming the assumption is correct.

Also: `context.run(...)` can return `None` if the invoke runner is configured with `warn=True` and the command fails. Here you guard with `if diff_result`/`if untracked_result`, which handles that case, but a failed `git diff` would silently pass validation rather than raising. Consider adding `warn=False` (the default) or explicitly checking the exit code.
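A minimal sketch of the stricter pattern (assumes invoke's `Context.run` inside a task; variable names are illustrative):

```python
# warn=False (invoke's default) raises UnexpectedExit on a non-zero exit
# code, so a failed `git diff` aborts validation instead of silently passing.
diff_result = context.run("git diff --name-only docs", hide=True, warn=False)
changed = diff_result.stdout.splitlines() if diff_result else []
```

docs/docs/python-sdk/sdk_ref/infrahub_sdk/node/relationship.mdx (1)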
76-80: `remove` methods lack descriptions unlike sibling methods `add`/`extend`.

Since this is auto-generated, consider adding a docstring to the `remove` methods in the source (`RelationshipManager.remove` and `RelationshipManagerSync.remove`) so the generated docs are consistent with `add` and `extend`.

Also applies to: 110-113
docs/docs_generation/content_gen_methods/jinja2_method.py (1)
38-39: `asyncio.run()` will fail if called from within an existing event loop.

This is fine for invoke tasks (synchronous entry), but `asyncio.run()` raises `RuntimeError` if an event loop is already running. Worth noting as a constraint if this method is ever called from an async context.
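A self-contained illustration of the constraint (not code from this PR):

```python
import asyncio

async def render() -> str:
    return "rendered content"

# Fine from a synchronous entry point such as an invoke task:
print(asyncio.run(render()))

async def caller() -> None:
    # Inside a running loop, asyncio.run() would raise RuntimeError;
    # await the coroutine (or run it in a separate thread) instead.
    print(await render())

asyncio.run(caller())
```

docs/docs_generation/pages/base.py (2)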
29-30: Minor: docstring says "Decorator" but this is a Wrapper.

A GoF Decorator preserves the same interface as the wrapped object. `MDXDocPage` exposes `to_mdx()` rather than `content()`, so it's more accurately a Wrapper or Adapter around `DocPage`.
46-50: Consider using `logging` instead of `print()`.

`print()` on Line 50 works for invoke tasks but doesn't integrate with log levels or structured logging. A minor nit for a docs-generation tool.
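A minimal swap, for reference (the path and message are illustrative, not the PR's actual output):

```python
import logging
from pathlib import Path

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

output_path = Path("docs/output.mdx")  # hypothetical target
logger.info("Writing %s", output_path)  # leveled, filterable record instead of print()
```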
docs/docs_generation/content_gen_methods/__init__.py (1)

3-10: Mixed absolute and relative imports within the same package.

Lines 3–5 use absolute imports (`docs.docs_generation.content_gen_methods.command...`, `docs.docs_generation.content_gen_methods.mdx...`) while Lines 7–10 use relative imports (`.base`, `.command_output_method`, etc.). Consider using relative imports consistently for intra-package references.

♻️ Suggested fix: use relative imports throughout

```diff
-from docs.docs_generation.content_gen_methods.command.command import ACommand
-from docs.docs_generation.content_gen_methods.command.typer_command import TyperGroupCommand, TyperSingleCommand
-from docs.docs_generation.content_gen_methods.mdx.mdx_code_doc import MdxCodeDocumentation
+from .command.command import ACommand
+from .command.typer_command import TyperGroupCommand, TyperSingleCommand
+from .mdx.mdx_code_doc import MdxCodeDocumentation
```
```yaml
- 'docs/sidebars/sidebars-infrahubctl.ts'
- 'docs/sidebars/sidebars-python-sdk.ts'
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, check the actual state of the sync-docs.yml file around lines 11-12 and 36-37
head -50 .github/workflows/sync-docs.yml | cat -n
```

🏁 Script executed:

```shell
# Check if docs/sidebars directory and files exist
fd . docs/sidebars/ -type f 2>/dev/null || echo "docs/sidebars/ not found or empty"
ls -la docs/ | grep -i sidebar
```

🏁 Script executed:

```shell
# Search for imports of sidebar-utils in the sidebar files
rg "sidebar-utils" docs/sidebars/ 2>/dev/null || echo "No sidebar-utils imports found"
```

🏁 Script executed:

```shell
# Check if sidebar-utils.ts exists at root docs level
fd sidebar-utils docs/ -type f
```

🏁 Script executed:

```shell
# Look at the target repo structure expectation - check if there's documentation or other references
rg "target-repo/docs" .github/workflows/sync-docs.yml -A 2 -B 2
```

🏁 Script executed:

```shell
# Check if sidebar-utils.ts exists in docs/sidebars/
ls -la docs/sidebars/
```

🏁 Script executed:

```shell
# Verify the source paths referenced in the workflow don't actually exist at root
ls -la docs/ | grep "sidebars-"
```
Fix path mismatch in sync commands and add missing sidebar-utils.ts dependency.
Lines 11–12 correctly watch docs/sidebars/sidebars-*.ts, but lines 36–37 copy from non-existent paths: source-repo/docs/sidebars-infrahubctl.ts and source-repo/docs/sidebars-python-sdk.ts. The files are located at source-repo/docs/sidebars/sidebars-infrahubctl.ts and source-repo/docs/sidebars/sidebars-python-sdk.ts. Additionally, both sidebar files import from ./sidebar-utils, but sidebar-utils.ts is not synced, which will break imports in the target repo.
Proposed fix
```diff
   cp -r source-repo/docs/docs/* target-repo/docs/docs-python-sdk/
-  cp source-repo/docs/sidebars-infrahubctl.ts target-repo/docs/
-  cp source-repo/docs/sidebars-python-sdk.ts target-repo/docs/
+  cp source-repo/docs/sidebars/sidebars-infrahubctl.ts target-repo/docs/
+  cp source-repo/docs/sidebars/sidebars-python-sdk.ts target-repo/docs/
+  cp source-repo/docs/sidebars/sidebar-utils.ts target-repo/docs/
```

Also verify whether the target repo expects these files at target-repo/docs/ root or in a sidebars/ subdirectory.
```yaml
- '[pP]y.*\b'
- '\bimport_.*\b' # New filter to ignore variables starting with 'import_'
- '\w+__value' # New filter to skip Infrahub filters in documentation (name__value)
- '\b\w+_\w+\b' # Ignore snake_case identifiers (Python parameter/attribute names)
```
Overly broad filter — will suppress spell-checking on all snake_case text, not just Python identifiers.
The pattern `\b\w+_\w+\b` matches any word containing an underscore, so a misspelled phrase like `recieve_data` or `functon_name` in hand-written prose would be silently ignored. Since this filter applies globally (not just to auto-generated MDX files), it reduces Vale's effectiveness on manually authored docs.

Consider either:

- Scoping this rule to only the generated reference paths (e.g., via a separate `.vale.ini` section or a dedicated style for generated docs).
- Using a slightly more restrictive pattern if possible, though admittedly that's hard with pure regex.
Not a blocker given the volume of Python identifiers in the new generated docs, but worth being aware of the trade-off.
```python
def apply(self) -> str:
    with tempfile.NamedTemporaryFile(mode="w", suffix=".mdx", delete=False, encoding="utf-8") as tmp:
        tmp_path = Path(tmp.name)

    full_cmd = f"{self.command.build()} --output {tmp_path}"
    with self.context.cd(self.working_directory):
        self.context.run(full_cmd)

    content = tmp_path.read_text(encoding="utf-8")
    tmp_path.unlink(missing_ok=True)
```
Temp file leaks if context.run raises.
If the command execution on line 46 fails, the temp file created on line 41 is never cleaned up because unlink on line 49 is skipped.
🛡️ Proposed fix: wrap in try/finally
```diff
 def apply(self) -> str:
     with tempfile.NamedTemporaryFile(mode="w", suffix=".mdx", delete=False, encoding="utf-8") as tmp:
         tmp_path = Path(tmp.name)
-    full_cmd = f"{self.command.build()} --output {tmp_path}"
-    with self.context.cd(self.working_directory):
-        self.context.run(full_cmd)
-
-    content = tmp_path.read_text(encoding="utf-8")
-    tmp_path.unlink(missing_ok=True)
-    return content
+    try:
+        full_cmd = f"{self.command.build()} --output {tmp_path}"
+        with self.context.cd(self.working_directory):
+            self.context.run(full_cmd)
+
+        content = tmp_path.read_text(encoding="utf-8")
+    finally:
+        tmp_path.unlink(missing_ok=True)
+    return content
```
```python
def _execute_mdxify(self, context: Context, modules_to_document: list[str]) -> dict[str, MdxFile]:
    with tempfile.TemporaryDirectory() as tmp_dir:
        exec_cmd = f"mdxify {' '.join(modules_to_document)} --output-dir {tmp_dir}"
        context.run(exec_cmd, pty=True)

        results: dict[str, MdxFile] = {}
        for mdx_file in Path(tmp_dir).glob("*.mdx"):
            if any(f.lower() in mdx_file.name for f in self.file_filters):
                continue
            content = _wrap_doctest_examples(mdx_file.read_text(encoding="utf-8"))
            results[mdx_file.name] = MdxFile(path=mdx_file, content=content)
```
🧩 Analysis chain
🏁 Script executed:

```shell
# Find MdxFile definition and all its usages
fd -n . -type f -name "*.py" | xargs grep -l "class MdxFile\|MdxFile(" | head -20
```

🏁 Script executed:

```shell
# Get file structure and find the actual files mentioned
find . -type f -name "mdx_code_doc.py" -o -name "tasks.py" | grep -E "(docs_generation|infrahub)" | head -10
```

🏁 Script executed:

```shell
# Search for MdxFile class definition
rg "class MdxFile" -A 10
```

🏁 Script executed:

```shell
# Search for FilePrintingDocContentGenMethod
rg "class FilePrintingDocContentGenMethod" -A 20
```

🏁 Script executed:

```shell
# Search for uses of MdxFile.path
rg "\.path\b" -B 3 -A 3 | grep -i "mdx"
```

🏁 Script executed:

```shell
# Search for all MdxFile usages in the entire codebase
rg "MdxFile" --type=py -B 2 -A 2
```

🏁 Script executed:

```shell
# Look at the generate() method and caching behavior in mdx_code_doc.py
cat -n docs/docs_generation/content_gen_methods/mdx/mdx_code_doc.py | head -110 | tail -40
```

🏁 Script executed:

```shell
# Check if there's any code that accesses .path on MdxFile instances
rg "mdx.*\.path|\.path.*mdx" --type=py -i
```
`MdxFile.path` becomes invalid after `_execute_mdxify` returns, and caching ignores the `modules_to_document` argument.

Two issues:

- Dangling path reference: The `TemporaryDirectory` on line 92 is deleted after the `with` block exits, but `MdxFile.path` stores a reference to a file inside that directory (line 101). While this is safe in practice because all consumers use only `MdxFile.content`, the `path` field is misleading — it appears valid but isn't. Remove the `path` field or document that it's only valid during execution.
- Stale cache on different arguments: The `generate()` method (lines 85–89) caches results in `self._files` without accounting for the `modules_to_document` argument. If `generate()` is called twice with different module arguments on the same instance, the second call silently returns the cached results from the first call instead of re-running mdxify. Cache the results by argument or document this behavior explicitly.
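A minimal standalone sketch of argument-keyed caching (generic names; the PR's class would substitute `_execute_mdxify` for the stand-in loader):

```python
class KeyedCache:
    def __init__(self) -> None:
        self._files_by_key: dict[tuple[str, ...], dict[str, str]] = {}

    def generate(self, modules_to_document: list[str]) -> dict[str, str]:
        # Tuples are hashable, so the module list itself can key the cache;
        # a changed list triggers a fresh run instead of a stale hit.
        key = tuple(modules_to_document)
        if key not in self._files_by_key:
            self._files_by_key[key] = self._run(modules_to_document)
        return self._files_by_key[key]

    def _run(self, modules: list[str]) -> dict[str, str]:
        return {m: f"docs for {m}" for m in modules}  # stand-in for mdxify

cache = KeyedCache()
assert cache.generate(["infrahub_sdk.node"]) != cache.generate(["infrahub_sdk.client"])
```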
```python
for field_name, field in settings.model_fields.items():
    for field_key, field_env_name, _ in env_settings._extract_field_info(field, field_name):
        env_vars[field_key].append(field_env_name.upper())
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, let's read the file to see the actual context
cat -n docs/docs_generation/helpers.py | head -50
```

🏁 Script executed:

```shell
# Check for pydantic_settings version constraints in the project
find . -name "*.txt" -o -name "*.toml" -o -name "*.cfg" -o -name "*.yaml" | xargs grep -l "pydantic.settings\|pydantic_settings" 2>/dev/null | head -10
```

🏁 Script executed:

```shell
# Look for requirements files
fd -e "requirements*.txt" -e "pyproject.toml" -e "poetry.lock" -e "setup.py" | head -20
```

🏁 Script executed:

```shell
# Read the pyproject.toml to see pydantic_settings version constraints
cat pyproject.toml
```
🌐 Web query:
pydantic_settings EnvSettingsSource public API alternatives to _extract_field_info
💡 Result:
`EnvSettingsSource._extract_field_info()` is private (underscore) and not part of the supported extension surface. The public/stable ways to achieve the same goals are:

- Don't extract env-name candidates yourself — override `get_field_value()` instead. `EnvSettingsSource`'s supported hook is `get_field_value(field, field_name)`, which is the intended customization point for "how do I get a value for this field from env". If you need different env-key rules, subclass `EnvSettingsSource`/`PydanticBaseEnvSettingsSource` and implement your own lookup logic there. [1]
- Use the public helpers for "complex" + nested env handling instead of re-implementing internals: `prepare_field_value(...)` (in `EnvSettingsSource` and base) to deserialize/merge nested values for complex fields; `field_is_complex(...)` and `decode_complex_value(...)` (on `PydanticBaseSettingsSource`) if you only need the "is JSON / decode JSON" behavior; `explode_env_vars(...)` + `next_field(...)` if your need is specifically nested env var expansion (e.g. `FOO__BAR__BAZ=...`). [1]
- Prefer "declare the env names" via aliases, rather than extracting them from the source. If your real need is "what env var names map to this field?", the supported approach is to define them using Pydantic's alias mechanisms (e.g., validation aliases / alias choices) and let the settings sources consume those, rather than reaching into `_extract_field_info`. [2]
- If you're adding/replacing settings sources, do it via `settings_customise_sources`. This is the public API for inserting a custom source (including custom env logic) into the settings loading pipeline. [2]

Bottom line: there isn't a public "replacement function" that returns the same (field_key, env_name, value_is_complex) tuples as `_extract_field_info`; the supported alternatives are (a) override `get_field_value`, and (b) rely on the public helper methods (`prepare_field_value`, `explode_env_vars`, `next_field`, `field_is_complex`, `decode_complex_value`) plus alias configuration. [1][2]
References:
[1] Pydantic Settings API docs (EnvSettingsSource, PydanticBaseSettingsSource methods) https://docs.pydantic.dev/latest/api/pydantic_settings/
[2] Pydantic “Settings Management” docs (custom sources / customization) https://docs.pydantic.dev/usage/settings/
Reliance on private _extract_field_info API risks future breakage.
EnvSettingsSource._extract_field_info is a private method with no direct public API replacement in pydantic_settings. The current dependency constraint (pydantic-settings>=2.0) is loose and will accept future releases that may remove or change this private method without notice.
Immediate action: Pin pydantic_settings to a specific major version (e.g., pydantic-settings>=2.0,<3.0) and add a comment documenting this dependency on the private API.
Longer-term: Consider refactoring to use public APIs (get_field_value(), prepare_field_value(), or alias-based field configuration), though this would require more significant changes to the documentation generation logic.
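As a rough illustration of the alias-based direction (a sketch with an assumed field name, not the project's real settings class):

```python
from pydantic import Field
from pydantic_settings import BaseSettings

class Settings(BaseSettings):
    # The env var name is declared explicitly, so it can be read back
    # from public FieldInfo metadata instead of _extract_field_info.
    api_token: str = Field(default="", validation_alias="INFRAHUB_API_TOKEN")

for field_name, field in Settings.model_fields.items():
    print(field_name, field.validation_alias)  # api_token INFRAHUB_API_TOKEN
```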
```python
parse_human_friendly_id(hfid: str | list[str]) -> tuple[str | None, list[str]]
```

Parse a human friendly ID into a kind and an identifier.
🧩 Analysis chain
🏁 Script executed:

```shell
# Find the source docstring for parse_human_friendly_id
rg -n "Parse a human friendly" --type=py
```

🏁 Script executed:

```shell
# Also find where parse_human_friendly_id is defined
rg -n "def parse_human_friendly_id" --type=py -A 5
```

🏁 Script executed:

```shell
# Check if the MDX file has auto-generation markers
head -20 docs/docs/python-sdk/sdk_ref/infrahub_sdk/node/parsers.mdx
```

🏁 Script executed:

```shell
# List the source structure to understand the layout
fd "parsers" --type f | head -20
```
Minor: "human friendly" → "human-friendly" (hyphenate compound adjective).
This is auto-generated from the docstring of parse_human_friendly_id in infrahub_sdk/node/parsers.py. Fix the source docstring there rather than editing the MDX file directly, then regenerate with uv run invoke generate-sdk.
🧰 Tools
🪛 LanguageTool
[grammar] ~16-~16: Use a hyphen to join words.
Context: ...tr | None, list[str]] ``` Parse a human friendly ID into a kind and an identifie...
(QB_NEW_EN_HYPHEN)
```js
export default {
  // Re-use the default mapping
  ...MDXComponents,
  Icon: Icon, // Make the iconify Icon component available in MDX as <icon />.
```
Misleading comment: <icon /> (lowercase) won't resolve to this component in MDX.
In MDX, lowercase tags are treated as native HTML elements, not custom components. The component should be used as <Icon /> (capitalized) in MDX files. Consider updating the comment.
```diff
- Icon: Icon, // Make the iconify Icon component available in MDX as <icon />.
+ Icon: Icon, // Make the Iconify Icon component available in MDX as <Icon />.
```
```python
def _identify_faulty_jinja_code(traceback: Traceback, nbr_context_lines: int = 3) -> list[UndefinedJinja2Error]:
    """This function identifies the faulty Jinja2 code and beautify it to provide meaningful information to the user.

    We use the rich's Traceback to parse the complete stack trace and extract Frames for each exception found in the trace.
    """
    response = []

    # Extract only the Jinja related exception
    for frame in [frame for frame in traceback.trace.stacks[0].frames if not frame.filename.endswith(".py")]:
        code = "".join(linecache.getlines(frame.filename))
        if frame.filename == "<template>":
            lexer_name = "text"
        else:
            lexer_name = Traceback._guess_lexer(frame.filename, code)
        syntax = Syntax(
            code,
            lexer_name,
            line_numbers=True,
            line_range=(
                frame.lineno - nbr_context_lines,
                frame.lineno + nbr_context_lines,
            ),
            highlight_lines={frame.lineno},
            code_width=88,
            theme=traceback.theme,
            dedent=False,
        )
        response.append(UndefinedJinja2Error(frame=frame, syntax=syntax))
```
🧩 Analysis chain
🏁 Script executed:

```shell
cd infrahub_sdk && pwd && head -220 template/jinja2/__init__.py | tail -50
```

🌐 Web query:

rich library Syntax guess_lexer public method API

💡 Result:

In the Rich Python library, `Syntax.guess_lexer` is a public `@classmethod` on `rich.syntax.Syntax`.

API

- Signature: `Syntax.guess_lexer(path, code=None) -> str` (rich.readthedocs.io)
- What it does: Returns the Pygments lexer alias/name Rich should use for syntax highlighting, based on a filename/path and (optionally) the file contents; if `code` is provided it may pick a more specific lexer (e.g., `html+django` for an HTML file containing Django template syntax). (rich.readthedocs.io)
- Fallback behavior: If it can't determine a lexer, it returns `"default"`. (rich.readthedocs.io)

Typical use

```python
from rich.syntax import Syntax

lexer = Syntax.guess_lexer("index.html", code=template_text)
syntax = Syntax(template_text, lexer)
```

(You can also use `Syntax.from_path(...)`, which calls `guess_lexer` when no lexer is provided.) (rich.readthedocs.io)

Citations:

- 1: https://rich.readthedocs.io/en/stable/reference/syntax.html
- 2: https://rich.readthedocs.io/en/stable/_modules/rich/syntax.html

🌐 Web query:

rich Traceback _guess_lexer private method alternative

💡 Result:

`Traceback._guess_lexer()` is a private helper (Rich uses it internally when rendering stack frames) and may change without notice. (rich.readthedocs.io)

Public alternatives:

- Use Rich's public lexer guesser: `rich.syntax.Syntax.guess_lexer(path, code=...)` (rich.readthedocs.io)

```python
from rich.syntax import Syntax

lexer_name = Syntax.guess_lexer(filename, code=source_code)
```

- Let Rich do it for you: `Syntax.from_path(...)` will auto-detect the lexer when `lexer=None`. (rich.readthedocs.io)

```python
from rich.syntax import Syntax

syntax = Syntax.from_path(filename)  # auto-detects lexer
```

- Use Pygments directly (what Rich delegates to in traceback rendering): `pygments.lexers.guess_lexer_for_filename(filename, code)` (rich.readthedocs.io)

Citations:

- 1: https://rich.readthedocs.io/en/stable/_modules/rich/traceback.html
- 2: https://rich.readthedocs.io/en/stable/reference/syntax.html
- 3: https://rich.readthedocs.io/en/stable/syntax.html

🏁 Script executed:

```shell
git ls-files | grep -E "requirements|setup.py|pyproject.toml" | head -5
```

🏁 Script executed:

```shell
head -30 infrahub_sdk/template/jinja2/__init__.py | grep -E "^import|^from"
```
Replace the private Traceback._guess_lexer call with the public Syntax.guess_lexer API.
Line 202 uses Traceback._guess_lexer, a private method that may change across rich library versions. Replace it with the public Syntax.guess_lexer classmethod (already imported at line 10), which provides the same functionality with a stable API and built-in fallback to "default" if the lexer cannot be determined.
Fix
```diff
 if frame.filename == "<template>":
     lexer_name = "text"
 else:
-    lexer_name = Traceback._guess_lexer(frame.filename, code)
+    lexer_name = Syntax.guess_lexer(frame.filename, code)
```
+ lexer_name = Syntax.guess_lexer(frame.filename, code)🤖 Prompt for AI Agents
In `@infrahub_sdk/template/jinja2/__init__.py` around lines 189 - 217, In
_identify_faulty_jinja_code, replace the private Traceback._guess_lexer usage
with the public Syntax.guess_lexer API: call Syntax.guess_lexer(frame.filename,
code) to compute lexer_name (keeping the existing special-case for "<template>"
mapping to "text"), so lexer selection uses the stable public method and its
built-in fallback instead of Traceback._guess_lexer; update the assignment to
lexer_name and keep the rest of the Syntax(...) construction unchanged.
```python
class JinjaTemplateError(Error):
    def __init__(self, message: str) -> None:
        self.message = message

    def __str__(self) -> str:
        return str(self.message)


class JinjaTemplateNotFoundError(JinjaTemplateError):
    def __init__(self, message: str | None, filename: str, base_template: str | None = None) -> None:
        self.message = message or "Template Not Found"
        self.filename = filename
        self.base_template = base_template


class JinjaTemplateSyntaxError(JinjaTemplateError):
    def __init__(self, message: str | None, lineno: int, filename: str | None = None) -> None:
        self.message = message or "Syntax Error"
        self.filename = filename
        self.lineno = lineno


class JinjaTemplateUndefinedError(JinjaTemplateError):
    def __init__(self, message: str | None, errors: list[UndefinedJinja2Error]) -> None:
        self.message = message or "Undefined Error"
        self.errors = errors


class JinjaTemplateOperationViolationError(JinjaTemplateError):
    def __init__(self, message: str | None = None) -> None:
        self.message = message or "Forbidden code found in the template"
```
🧩 Analysis chain
🏁 Script executed:

```shell
# Find the Error class definition
rg -n 'class Error' --type=py -C5 | head -100
```

🏁 Script executed:

```shell
# Check the imports in the exceptions.py file to understand what Error is
head -20 infrahub_sdk/template/jinja2/exceptions.py
```

🏁 Script executed:

```shell
# Check if there are other exception classes in the codebase that use super().__init__()
rg -n 'super\(\).__init__' --type=py -C2 | head -100
```

🏁 Script executed:

```shell
# Look for custom Error class definition or import
fd exceptions.py --type f | xargs cat | grep -A 10 -B 10 'class Error'
```
Missing super().__init__() calls in exception hierarchy.
All exception classes in this module (including JinjaTemplateError and its subclasses) must call super().__init__() to properly initialize the Exception base class. The base Error class calls super().__init__(self.message) at instantiation, but these classes skip that call, breaking exception initialization. This is inconsistent with the pattern used throughout the codebase in infrahub_sdk/task/exceptions.py, infrahub_sdk/pytest_plugin/exceptions.py, and other exception modules.
For each class, add super().__init__(self.message) after setting self.message:
♻️ Example fix for all classes
```diff
 class JinjaTemplateError(Error):
     def __init__(self, message: str) -> None:
         self.message = message
+        super().__init__(self.message)

     def __str__(self) -> str:
         return str(self.message)


 class JinjaTemplateNotFoundError(JinjaTemplateError):
     def __init__(self, message: str | None, filename: str, base_template: str | None = None) -> None:
         self.message = message or "Template Not Found"
         self.filename = filename
         self.base_template = base_template
+        super().__init__(self.message)
```

Apply the same pattern to JinjaTemplateSyntaxError, JinjaTemplateUndefinedError, and JinjaTemplateOperationViolationError.
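A quick standalone demonstration of what skipping the base initializer costs (illustrative classes, not the SDK's; note how the fallback message never reaches the underlying `Exception` state):

```python
class WithoutSuper(Exception):
    def __init__(self, message: str | None = None) -> None:
        self.message = message or "fallback"

class WithSuper(Exception):
    def __init__(self, message: str | None = None) -> None:
        self.message = message or "fallback"
        super().__init__(self.message)

# args/str reflect only what reached Exception's own initialization:
print(WithoutSuper().args, str(WithoutSuper()))  # () ""
print(WithoutSuper(None).args)                   # (None,) - raw ctor input
print(WithSuper().args, str(WithSuper()))        # ('fallback',) "fallback"
```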
…, allowing the future documentation generation method to be tested in isolation IHS-201" This reverts commit 03fa402.
Why
Problem Statement
The previous documentation generation code (in `tasks.py`) was a monolithic set of functions mixing concerns (template rendering, CLI execution, file I/O), making it hard to understand and to extend with new generation methods.

Goals
This PR:
… for module `node` and module `client`.

Closes: IHS-195, IHS-196, IHS-197, IHS-198, IHS-199, IHS-200, IHS-201
What changed
- New SDK API reference generation (the `generate-sdk-api-docs` invoke task) that produces MDX reference pages for `infrahub_sdk.client` and `infrahub_sdk.node`, with auto-discovery validation ensuring every new package is explicitly categorized.
- Refactored the `tasks.py` functions into a `docs/docs_generation/` package with a Strategy pattern — `ADocContentGenMethod` (base), `Jinja2DocContentGenMethod`, `CommandOutputDocContentGenMethod`, `FilePrintingDocContentGenMethod` — composed via `DocPage`/`MDXDocPage` page objects (a sketch follows this list).
- Replaced `sidebars-infrahubctl.ts` and `sidebars-python-sdk.ts` with filesystem-reading versions in `docs/sidebars/`, using shared utility functions (`getCommandItems`, `getItemsWithOrder`) with vitest tests.
- New `sdk_ref` autogenerated category under Reference in the Python SDK sidebar.
- Stricter `docs-validate` check IHS-197: now detects modified, deleted, and new untracked files under `docs/` with descriptive error messages.
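A minimal sketch of the Strategy wiring named above (class names come from this PR's description; the bodies are illustrative, not the actual implementation):

```python
from abc import ABC, abstractmethod


class ADocContentGenMethod(ABC):
    """Base strategy: each method knows how to produce one page's content."""

    @abstractmethod
    def apply(self) -> str: ...


class FilePrintingDocContentGenMethod(ADocContentGenMethod):
    def __init__(self, content: str) -> None:
        self.content = content

    def apply(self) -> str:
        return self.content


class DocPage:
    """Composes a page with whichever generation strategy it was given."""

    def __init__(self, content_gen_method: ADocContentGenMethod) -> None:
        self.content_gen_method = content_gen_method

    def content(self) -> str:
        return self.content_gen_method.apply()


page = DocPage(content_gen_method=FilePrintingDocContentGenMethod("# Reference"))
print(page.content())  # "# Reference"
```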
Suggested review order
This group of commits is a mix of:
One can refer to the internal issue key IHS-<insert_key> to only focus on changes that are tied to the same initial need.
Regarding the review order, I would suggest:
1. `docs/docs_generation/` IHS-201: the new generation architecture (Strategy pattern on documentation generation methods + Page objects) -> Design decisions
2. `tasks.py` refactoring IHS-201: how the new classes are wired into `_generate_infrahub_sdk_configuration_documentation` and `_generate_infrahub_sdk_template_documentation` -> Apply the refactoring
3. `_generate_sdk_api_docs` -> New feature
4. `docs/sidebars/` -> Enhance existing documentation
5. `docs/sidebars/` -> New documentation
6. `docs-validate` IHS-197: docs_validate -> Documentation integrity

IHS-195 is about rebasing the old PR.
IHS-199 is about generating the .mdx documentation files.
How to test
Impact & rollout
Summary by CodeRabbit
New Features
Documentation
Chores