
#201 / IHS-194: Generate SDK API documentation #822

Open
polmichel wants to merge 29 commits into develop from
pmi-20260210-sdk-api-documentation

Conversation


polmichel commented Feb 11, 2026

Why

Problem Statement

  • There is no Python SDK API Documentation today.
  • The existing documentation generation system (tasks.py) was a monolithic set of functions mixing concerns (template rendering, CLI execution, file I/O), making it hard to understand and to extend with new generation methods.
  • Docusaurus sidebars were statically hardcoded, requiring manual edits whenever a doc page was added or removed.

Goals

This PR:

  • Introduces automatic Python SDK API documentation generation from code docstrings using mdxify, currently limited to the node package and the client module.
  • Refactors the documentation generation pipeline into an extensible architecture.
  • Makes Docusaurus sidebars dynamic.

Closes: IHS-195, IHS-196, IHS-197, IHS-198, IHS-199, IHS-200, IHS-201

What changed

  • New SDK API docs from docstrings IHS-196: Added mdxify-based generation (generate-sdk-api-docs invoke task) that produces MDX reference pages for infrahub_sdk.client and infrahub_sdk.node with auto-discovery validation ensuring every new package is explicitly categorized.
  • Documentation generation refactoring IHS-201: Extracted the monolithic tasks.py functions into a docs/docs_generation/ package with a Strategy pattern — ADocContentGenMethod (base), Jinja2DocContentGenMethod, CommandOutputDocContentGenMethod, FilePrintingDocContentGenMethod — composed via DocPage/MDXDocPage page objects (see the sketch after this list).
  • Dynamic Docusaurus sidebars IHS-200: Replaced static sidebars-infrahubctl.ts and sidebars-python-sdk.ts with filesystem-reading versions in docs/sidebars/, using shared utility functions (getCommandItems, getItemsWithOrder) with vitest tests
  • New "Python SDK API" sidebar section IHS-198: Added sdk_ref autogenerated category under Reference in the Python SDK sidebar
  • Enhanced CI docs-validate check IHS-197: Now detects modified, deleted, and new untracked files under docs/ with descriptive error messages
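
To make this composition concrete, here is a minimal sketch of how the strategy and page classes fit together. The class names and the apply()/content()/to_mdx() methods come from this PR's description and diff; the exact signatures and the subprocess-based strategy body are illustrative assumptions.

from abc import ABC, abstractmethod
from pathlib import Path
import subprocess


class ADocContentGenMethod(ABC):
    """Strategy interface: each generation method knows how to produce page content."""

    @abstractmethod
    def apply(self) -> str: ...


class CommandOutputDocContentGenMethod(ADocContentGenMethod):
    """Concrete strategy: capture the stdout of a CLI command (simplified)."""

    def __init__(self, command: str) -> None:
        self.command = command

    def apply(self) -> str:
        return subprocess.run(self.command, shell=True, capture_output=True, text=True, check=True).stdout


class DocPage:
    """Page object composed with a generation strategy."""

    def __init__(self, content_gen_method: ADocContentGenMethod) -> None:
        self.content_gen_method = content_gen_method

    def content(self) -> str:
        return self.content_gen_method.apply()


class MDXDocPage:
    """Wraps a DocPage and writes its content out as an .mdx file."""

    def __init__(self, page: DocPage, output_path: Path) -> None:
        self.page = page
        self.output_path = output_path

    def to_mdx(self) -> None:
        self.output_path.parent.mkdir(parents=True, exist_ok=True)
        self.output_path.write_text(self.page.content(), encoding="utf-8")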

What stayed the same

  • All public SDK APIs unchanged
  • Existing doc content (guides, topics, reference/config, reference/templating, infrahubctl commands) regenerated identically.

Suggested review order

This group of commits is a mix of:

  • Legacy work that has been squashed into meaningful sets of changes
  • Enhancement and completion of each part of the work.

You can refer to the internal issue key IHS-<insert_key> to focus only on changes tied to the same initial need.
As for the review order, I would suggest:

  1. docs/docs_generation/ IHS-201: the new generation architecture (Strategy pattern on documentation generation methods + Page objects) -> Design decisions
  2. tasks.py refactoring IHS-201: how the new classes are wired into _generate_infrahub_sdk_configuration_documentation and _generate_infrahub_sdk_template_documentation -> Apply the refactoring
  3. tasks.py API docs generation IHS-196: _generate_sdk_api_docs -> New feature
  4. dynamic sidebar logic and tests IHS-200: docs/sidebars/ -> Enhance existing documentation
  5. new "Python SDK API" sidebar section IHS-198: docs/sidebars/ -> New documentation
  6. new CI check docs-validate IHS-197: docs_validate -> Documentation integrity

IHS-195 is about rebasing the old PR.
IHS-199 is about generating the .mdx documentation files.

How to test

# Unit tests for doc generation
uv run pytest tests/unit/doc_generation/ -v

# Sidebar utility tests (vitest)
cd docs && npm test

# Full documentation generation
uv run invoke docs-generate

# CI validation check (should pass with no diff)
uv run invoke docs-validate

# Lint
uv run invoke format lint

Impact & rollout

  • Backward compatibility: No breaking changes. All existing invoke tasks still work.
  • Performance: No runtime impact -> changes only affect doc generation tooling.
  • Config/env changes: New dev dependency mdxify in pyproject.toml, vitest in docs/package.json, and a Docusaurus version upgrade
  • Deployment notes: N/A

Summary by CodeRabbit

  • New Features

    • Added automatic Python SDK API documentation generation from source code docstrings, eliminating manual reference maintenance.
    • Expanded Python SDK reference documentation with comprehensive API modules including client, node, relationships, and utilities.
  • Documentation

    • Reorganized documentation sidebar structure for improved navigation and maintainability.
    • Enhanced documentation generation tooling and testing infrastructure.
  • Chores

    • Updated dependencies for documentation build processes.
    • Fixed minor typos in code documentation.

coderabbitai bot commented Feb 11, 2026

Walkthrough

The pull request restructures documentation generation infrastructure by introducing a modular framework for content generation strategies (docs/docs_generation/), moving sidebar configurations into a docs/sidebars/ directory with dynamic building utilities, and adding comprehensive Python SDK API documentation pages. GitHub Actions workflows are updated to validate documentation consistency and run tests. Configuration references are adjusted to reflect the new directory structure, and tasks.py is refactored to use the new generation framework. Unit tests are added throughout to validate the new components.

🚥 Pre-merge checks | ✅ 2 | ❌ 2
❌ Failed checks (2 warnings)
  • Docstring Coverage (⚠️ Warning): docstring coverage is 44.17%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Merge Conflict Detection (⚠️ Warning): ❌ merge conflicts detected in 20 files:

⚔️ .github/file-filters.yml (content)
⚔️ .github/workflows/ci.yml (content)
⚔️ .github/workflows/sync-docs.yml (content)
⚔️ .vale/styles/Infrahub/spelling.yml (content)
⚔️ .vale/styles/spelling-exceptions.txt (content)
⚔️ AGENTS.md (content)
⚔️ docs/AGENTS.md (content)
⚔️ docs/_templates/sdk_config.j2 (content)
⚔️ docs/docs/python-sdk/reference/config.mdx (content)
⚔️ docs/docusaurus.config.ts (content)
⚔️ docs/package-lock.json (content)
⚔️ docs/package.json (content)
⚔️ infrahub_sdk/client.py (content)
⚔️ infrahub_sdk/ctl/utils.py (content)
⚔️ infrahub_sdk/jinja2.py (content)
⚔️ infrahub_sdk/node/node.py (content)
⚔️ infrahub_sdk/pytest_plugin/items/base.py (content)
⚔️ pyproject.toml (content)
⚔️ tasks.py (content)
⚔️ uv.lock (content)

These conflicts must be resolved before merging into develop.
Resolve conflicts locally and push changes to this branch.
✅ Passed checks (2 passed)
  • Title check (✅ Passed): the title clearly and concisely summarizes the main change: generating SDK API documentation and the associated refactoring work (identified by issue numbers 201 and IHS-194).
  • Description check (✅ Passed): the description is comprehensive and well-structured, with clear sections covering Why (Problem Statement and Goals), What Changed (organized by feature area with issue mappings), What Stayed the Same, Suggested Review Order, How to Test, and Impact & Rollout. All key template sections are addressed with substantial detail.



No actionable comments were generated in the recent review. 🎉

🧹 Recent nitpick comments
docs/docs_generation/content_gen_methods/jinja2_method.py (1)

38-39: asyncio.run() will fail if an event loop is already running.

If apply() is ever called from an async context (e.g., tests using an async runner, or Jupyter), asyncio.run() raises RuntimeError. Fine for the current CLI invoke-task usage, but worth keeping in mind if this class is reused elsewhere.
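
A minimal guard for that scenario could look like this sketch (the _render() coroutine and its body are assumptions for illustration):

import asyncio


class Jinja2DocContentGenMethod:
    async def _render(self) -> str:  # assumed stand-in for the real template rendering coroutine
        return "rendered content"

    def apply(self) -> str:
        try:
            asyncio.get_running_loop()
        except RuntimeError:
            # No loop is running (the normal synchronous invoke-task case): asyncio.run() is safe.
            return asyncio.run(self._render())
        # A loop is already running (async test runner, Jupyter): asyncio.run() would raise
        # RuntimeError anyway, so fail early with an explicit message.
        raise RuntimeError("apply() cannot be called from a running event loop")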

pyproject.toml (1)

54-54: Remove the redundant python_version marker.

The environment marker python_version>='3.10' is redundant since the project already declares requires-python = ">=3.10,<3.15" (line 10). Other dependencies in this group omit it (e.g., Jinja2, pyarrow, click), with the exception of numpy which uses version-split markers for actual versioning purposes. The mdxify marker adds unnecessary noise.

Proposed fix
-    "mdxify>=0.2.23; python_version>='3.10'",
+    "mdxify>=0.2.23",

The version constraint >=0.2.23 is valid and available on PyPI (current release is 0.2.36).



polmichel changed the base branch from stable to develop February 11, 2026 14:27
github-actions bot added the group/ci (Issue related to the CI pipeline) and type/documentation (Improvements or additions to documentation) labels Feb 11, 2026
codecov bot commented Feb 11, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.

@@           Coverage Diff            @@
##           develop     #822   +/-   ##
========================================
  Coverage    80.34%   80.34%           
========================================
  Files          115      115           
  Lines         9873     9873           
  Branches      1504     1504           
========================================
  Hits          7932     7932           
  Misses        1420     1420           
  Partials       521      521           
Flag Coverage Δ
integration-tests 41.41% <ø> (ø)
python-3.10 51.38% <ø> (ø)
python-3.11 51.38% <ø> (ø)
python-3.12 51.40% <ø> (ø)
python-3.13 51.40% <ø> (ø)
python-3.14 53.03% <ø> (-0.03%) ⬇️
python-filler-3.12 24.06% <ø> (ø)

Flags with carried forward coverage won't be shown.

Files with missing lines Coverage Δ
infrahub_sdk/client.py 72.00% <ø> (ø)
infrahub_sdk/ctl/utils.py 68.53% <ø> (ø)
infrahub_sdk/jinja2.py 0.00% <ø> (ø)
infrahub_sdk/node/node.py 85.02% <ø> (ø)
infrahub_sdk/pytest_plugin/items/base.py 78.37% <ø> (ø)

polmichel force-pushed the pmi-20260210-sdk-api-documentation branch from 2bb2cbe to 31eee1b on February 12, 2026 16:31
cloudflare-workers-and-pages bot commented Feb 12, 2026

Deploying infrahub-sdk-python with Cloudflare Pages

Latest commit: 6d9e056
Status: ✅  Deploy successful!
Preview URL: https://62f41275.infrahub-sdk-python.pages.dev
Branch Preview URL: https://pmi-20260210-sdk-api-documen.infrahub-sdk-python.pages.dev


polmichel force-pushed the pmi-20260210-sdk-api-documentation branch 3 times, most recently from 037d683 to fb81d93 on February 12, 2026 17:27
polmichel force-pushed the pmi-20260210-sdk-api-documentation branch from 4347025 to 900096c on February 13, 2026 11:03
polmichel self-assigned this Feb 13, 2026
polmichel marked this pull request as ready for review February 13, 2026 11:15
polmichel requested a review from a team as a code owner February 13, 2026 11:15
coderabbitai bot left a comment

Actionable comments posted: 18

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
.github/workflows/ci.yml (1)

193-230: ⚠️ Potential issue | 🟠 Major

Operator precedence bug in the if condition causes the always() guard to be bypassed.

Line 198: The trailing || (needs.files-changed.outputs.documentation_generated == 'true') is evaluated with lower precedence than the preceding && chain. This means when documentation_generated is 'true', the job runs even if upstream jobs failed or were cancelled — bypassing the always() && !cancelled() && !contains(...) guards.

This is pre-existing, but the new steps (NodeJS install, npm deps, vitest) now also execute under this flawed condition.

Proposed fix: wrap the entire condition in parentheses
     if: |
       always() && !cancelled() &&
       !contains(needs.*.result, 'failure') &&
       !contains(needs.*.result, 'cancelled') &&
-      (needs.files-changed.outputs.python == 'true') || (needs.files-changed.outputs.documentation_generated == 'true')
+      ((needs.files-changed.outputs.python == 'true') || (needs.files-changed.outputs.documentation_generated == 'true'))
🤖 Fix all issues with AI agents
In @.github/workflows/sync-docs.yml:
- Around line 11-12: Update the sync step to copy the actual source paths and
include the missing dependency: replace references to
source-repo/docs/sidebars-infrahubctl.ts and
source-repo/docs/sidebars-python-sdk.ts with
source-repo/docs/sidebars/sidebars-infrahubctl.ts and
source-repo/docs/sidebars/sidebars-python-sdk.ts respectively, and add
source-repo/docs/sidebars/sidebar-utils.ts to the list of files being copied so
the imports in sidebars-* files resolve; also confirm whether the target expects
these files under target-repo/docs/ or target-repo/docs/sidebars/ and adjust the
destination paths accordingly.

In @.vale/styles/Infrahub/spelling.yml:
- Line 9: The regex '\b\w+_\w+\b' in spelling.yml is too broad and mutes
spell-checking for any underscored word in prose; remove this global filter and
instead scope it to generated reference docs (apply the pattern in a style used
only for generated MDX or add a path-specific rule in .vale.ini) or tighten the
pattern to only match code identifiers (e.g., only inside code spans/fences or
backticks) so hand-written prose like "recieve_data" still gets flagged; update
the spelling.yml entry for the snake_case rule and move or replace it with the
scoped/tighter pattern so it no longer applies globally.

In `@docs/docs_generation/content_gen_methods/command_output_method.py`:
- Around line 40-49: The apply method creates a temp file (tmp_path) but never
removes it if self.context.run(full_cmd) or subsequent operations raise; wrap
the block that runs the command and reads the file in a try/finally so
tmp_path.unlink(missing_ok=True) is always executed: specifically, after
creating tmp_path in apply(), execute the context.cd / context.run and
tmp_path.read_text() inside a try and move the tmp_path.unlink call into the
finally to guarantee cleanup even on exceptions.

In `@docs/docs_generation/content_gen_methods/mdx/mdx_code_doc.py`:
- Around line 91-102: The _execute_mdxify implementation stores MdxFile.path
pointing into a TemporaryDirectory that is removed when the with block exits and
also generate() caches results in self._files without accounting for the
modules_to_document argument; to fix, remove the dangling path by populating
MdxFile only with persistent data (e.g., content and name) or explicitly
document that path is ephemeral and only valid inside _execute_mdxify, and
change generate() to key its cache by the modules_to_document argument (e.g.,
compute a stable key from modules_to_document and store results per-key in
self._files) so repeated calls with different modules re-run mdxify rather than
returning stale results; update references to MdxFile.path usage elsewhere to
use the content/name fields or handle the ephemeral path accordingly.

In `@docs/docs_generation/helpers.py`:
- Around line 28-34: The _resolve_allof function assumes
prop["allOf"][0]["$ref"] exists and will crash for empty or non-$ref allOf
entries; update _resolve_allof to defensively validate that prop.get("allOf") is
a non-empty list and that the first entry is a dict containing a "$ref" string
before accessing it, returning ([], "") if those checks fail, and then continue
to extract ref_name, lookup ref_def from definitions and return
ref_def.get("enum", []), ref_def.get("type", ""). Ensure you reference the
existing function name _resolve_allof and variables prop, definitions, ref_name,
and ref_def when making the change.
- Around line 21-23: The code is calling the private method
EnvSettingsSource._extract_field_info via env_settings._extract_field_info while
iterating settings.model_fields, which risks breaking on future
pydantic-settings releases; update the project dependency spec to pin
pydantic-settings to a safe major range (e.g., pydantic-settings>=2.0,<3.0) and
add an inline comment next to the usage of env_settings._extract_field_info
(and/or above the loop referencing settings.model_fields) documenting that this
relies on a private API and why it's pinned, and include a TODO referencing the
suggested long-term refactor to public APIs (get_field_value,
prepare_field_value or alias-based configuration) so future maintainers know to
replace _extract_field_info.

In `@docs/docs/python-sdk/sdk_ref/infrahub_sdk/client.mdx`:
- Line 855: Update the synchronous method's docstring to rename the parameter
entry from `size` to `prefix_length` so it matches the method signature (use the
same wording as the async variant), ensuring the line that currently reads
"`size`: Length of the prefix to allocate." becomes "`prefix_length`: Length of
the prefix to allocate." and that the docstring for the function that accepts
prefix_length is consistent with its async counterpart.
- Around line 252-254: The docs return-type for the async method
InfrahubClient.filters() is incorrect: change the return annotation from
list[InfrahubNodeSync] to list[InfrahubNode] in the relevant docs block so it
matches the async implementation (keep the sync variant as
list[InfrahubNodeSync]); update the "**Returns:**" section and any nearby
references to use list[InfrahubNode] for the async filters() description.
- Around line 480-482: Add a docstring to the InfrahubClientSync class mirroring
the existing one on InfrahubClient so the sync variant shows up in generated
docs; locate the class declaration for InfrahubClientSync in
infrahub_sdk/client.py, add a short module-level style docstring such as
"GraphQL Client to interact with Infrahub." (matching the InfrahubClient
phrasing) immediately below the class signature so documentation generators pick
it up.
- Around line 384-386: Update the docstrings in infrahub_sdk/client.py for the
methods allocate_next_ip_address and allocate_next_ip_prefix: change the
`timeout` description to "Timeout in seconds for the query", clarify `tracker`
to accurately describe its purpose (e.g., pagination/tracking token rather than
"offset"), and set `raise_for_error` to "Deprecated. Controls only HTTP status
handling"; also in InfrahubClientSync.allocate_next_ip_prefix fix the parameter
name in the docstring from `size` to `prefix_length` to match the function
signature. After updating those docstrings, run the docs-generate task to
regenerate the MDX output so docs/docs/.../client.mdx reflects the corrected
descriptions.

In `@docs/docs/python-sdk/sdk_ref/infrahub_sdk/node/attribute.mdx`:
- Around line 16-26: The file contains duplicate headings for the attribute
accessor `value` (getter and setter) which create conflicting anchor IDs; update
the doc generator mdxify to emit distinct headings for the two accessors (e.g.,
"value (getter)" and "value (setter)" or include property annotations like
"`@property value`" and "`@value.setter`") so the generated sections for the
`value` getter and the `value` setter produce unique anchors and no duplicate
`#### value` headings.

In `@docs/docs/python-sdk/sdk_ref/infrahub_sdk/node/node.mdx`:
- Around line 232-236: The Examples in the docstring for get_flat_value (both
async and sync implementations) render as plain text; update the source
docstrings in infrahub_sdk/node/node.py for get_flat_value to wrap the example
identifiers (e.g., name__value, module.object.value) in a fenced code block with
a language tag (e.g., ```python) so the generated MDX shows them as code; apply
this change to both the async and sync get_flat_value docstrings.
- Around line 162-163: Update the docstrings for both the sync and async
implementations of generate_query_data_node to fix the typo: replace "Indicated
of the attributes and the relationships inherited from generics should be
included as well." with "Indicates whether the attributes and the relationships
inherited from generics should be included as well." Ensure the exact corrected
sentence appears in both docstrings so regenerated MDX (node.mdx) reflects the
fix.

In `@docs/docs/python-sdk/sdk_ref/infrahub_sdk/node/parsers.mdx`:
- Line 16: The docstring for parse_human_friendly_id in
infrahub_sdk/node/parsers.py uses "human friendly" and should be changed to the
hyphenated compound adjective "human-friendly"; update the docstring in the
parse_human_friendly_id function accordingly, save, then regenerate the docs by
running `uv run invoke generate-sdk` so the MDX is rebuilt from the corrected
source.

In `@docs/src/theme/MDXComponents.js`:
- Line 9: The comment in MDXComponents.js incorrectly suggests using the Icon
component as "<icon />" which is misleading because MDX treats lowercase tags as
HTML; update the comment for the Icon mapping (Icon: Icon) to instruct using the
capitalized component name "<Icon />" in MDX so readers know to use the React
component form.

In `@infrahub_sdk/template/jinja2/__init__.py`:
- Around line 189-217: In _identify_faulty_jinja_code, replace the private
Traceback._guess_lexer usage with the public Syntax.guess_lexer API: call
Syntax.guess_lexer(frame.filename, code) to compute lexer_name (keeping the
existing special-case for "<template>" mapping to "text"), so lexer selection
uses the stable public method and its built-in fallback instead of
Traceback._guess_lexer; update the assignment to lexer_name and keep the rest of
the Syntax(...) construction unchanged.

In `@infrahub_sdk/template/jinja2/exceptions.py`:
- Around line 11-41: The exception classes in this file (JinjaTemplateError,
JinjaTemplateNotFoundError, JinjaTemplateSyntaxError,
JinjaTemplateUndefinedError, JinjaTemplateOperationViolationError) set
self.message but never call the base initializer; update each __init__ to call
super().__init__(self.message) after assigning self.message (and keep assigning
any extra attributes like filename, base_template, lineno, errors) so the
underlying Error/Exception gets initialized correctly; ensure JinjaTemplateError
itself calls super().__init__(self.message) in its __init__ as well.

In `@tests/unit/sdk/dummy_template.py`:
- Around line 11-18: The docstring says DummyTemplate should absorb extra
keyword args but __init__ only accepts content, causing TypeError when tests
pass template= or template_directory=; update DummyTemplate.__init__ to accept
**kwargs by changing its signature to def __init__(self, content: str, **kwargs:
Any) -> None and ensure typing.Any is imported (or use typing_extensions if
needed), leaving the docstring as-is; alternatively, if you prefer removing
behavior, update the docstring to drop the **kwargs mention—prefer adding
**kwargs to maintain compatibility with Jinja2Template.
🧹 Nitpick comments (15)
docs/docs/python-sdk/sdk_ref/infrahub_sdk/client.mdx (2)

388-389: Inconsistent Returns formatting — plain text instead of MDX bold/bullet style.

Lines 388–389 (and similarly 448–449, 804–805, 864–865) use raw indented text (Returns:\n InfrahubNode:...) instead of the **Returns:**\n\n- ... pattern used elsewhere in this file (e.g., Lines 209–211). This appears to be a rendering issue in the doc generator for these specific docstrings.


44-54: Duplicate #### request_context headings may cause anchor collisions.

Both the getter (Line 44) and setter (Line 50) generate the same #### request_context heading. In Docusaurus, this can produce duplicate HTML id anchors, making direct links unreliable. Consider differentiating them (e.g., appending (getter) / (setter)) in the doc generator.
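
If the generator grows a post-processing step for this, it might look like the following sketch (the function name and suffix scheme are assumptions, and it assumes the getter section is emitted before the setter):

from collections import Counter


def dedupe_property_headings(mdx: str) -> str:
    """Append (getter)/(setter) to '#### name' headings that occur more than once."""
    lines = mdx.splitlines()
    totals = Counter(line for line in lines if line.startswith("#### "))
    seen: Counter = Counter()
    suffixes = (" (getter)", " (setter)")
    out = []
    for line in lines:
        if line.startswith("#### ") and totals[line] > 1:
            idx = seen[line]
            seen[line] += 1
            line += suffixes[idx] if idx < len(suffixes) else f" ({idx + 1})"
        out.append(line)
    return "\n".join(out)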

docs/docs/python-sdk/sdk_ref/infrahub_sdk/node/constants.mdx (1)

1-8: Consider whether empty module pages add value to the docs.

This page states the module is empty or contains only private/internal implementations. Publishing it may confuse users looking for actual API content. If the auto-generation tool produces these by default, consider filtering out empty modules or at minimum adding a note explaining this is intentional.

docs/package.json (1)

25-25: markdownlint-cli2 should likely be in devDependencies.

This is a linting tool, not a runtime dependency for Docusaurus. Placing it in dependencies causes it to be installed in production builds unnecessarily.

Proposed fix

Move markdownlint-cli2 from dependencies to devDependencies:

   "dependencies": {
     "@docusaurus/core": "^3.9.2",
     "@docusaurus/preset-classic": "^3.9.2",
     "@iconify/react": "^6.0.0",
     "@mdx-js/react": "^3.0.0",
     "clsx": "^2.0.0",
-    "markdownlint-cli2": "^0.20.0",
     "prism-react-renderer": "^2.3.0",
     ...
   },
   "devDependencies": {
     ...
+    "markdownlint-cli2": "^0.20.0",
     "vitest": "^4.0.17"
   },
docs/AGENTS.md (1)

67-70: Consider documenting the new generate-sdk-api-docs task in the "Never" section.

Line 70 mentions not editing config.mdx and regenerating with generate-sdk, but the PR introduces a new generate-sdk-api-docs invoke task for the SDK API reference pages under sdk_ref/. A corresponding "Never edit docs/python-sdk/sdk_ref/*.mdx directly" entry would be consistent.

Proposed addition
 - Edit `docs/infrahubctl/*.mdx` directly (regenerate with `uv run invoke generate-infrahubctl`)
 - Edit `docs/python-sdk/reference/config.mdx` directly (regenerate with `uv run invoke generate-sdk`)
+- Edit `docs/python-sdk/sdk_ref/*.mdx` directly (regenerate with `uv run invoke generate-sdk-api-docs`)
infrahub_sdk/template/jinja2/filters.py (1)

4-8: Consider making the dataclass frozen for immutability.

Since these definitions are static configuration data, frozen=True would prevent accidental mutation and make instances hashable.

Suggested change
-@dataclass
+@dataclass(frozen=True)
 class FilterDefinition:
     name: str
     trusted: bool
     source: str
docs/docs_generation/content_gen_methods/mdx/mdx_code_doc.py (1)

85-89: Cache ignores modules_to_document parameter on subsequent calls.

generate() caches on the first invocation, but the cache key doesn't include modules_to_document. If a caller invokes generate() a second time with different modules, the stale cached result is returned silently.

Currently the single call site in tasks.py only calls this once per instance, so this isn't an active bug, but it's a subtle API contract that could bite future consumers.

💡 Option: store modules and validate on subsequent calls
     def generate(self, context: Context, modules_to_document: list[str]) -> dict[str, MdxFile]:
         """Return mdxify results, running the tool on first call only."""
         if self._files is None:
+            self._modules = modules_to_document
             self._files = self._execute_mdxify(context, modules_to_document)
+        elif self._modules != modules_to_document:
+            raise ValueError("generate() called with different modules than the cached run")
         return self._files
tasks.py (3)

155-167: Use Exit instead of ValueError for user-facing task errors.

Lines 159 and 164 raise ValueError, which produces a full Python traceback. The rest of the file (e.g., require_tool on line 26) raises Exit for user-facing errors, which gives clean output. These validation errors are user-facing (triggered when someone adds a new package without categorizing it), so they should follow the same pattern.

♻️ Proposed fix
     if uncategorized:
-        raise ValueError(
+        raise Exit(
             f"Uncategorized packages under infrahub_sdk/: {sorted(uncategorized)}. "
-            "Add them to packages_to_document or packages_to_ignore in tasks.py"
+            "Add them to packages_to_document or packages_to_ignore in tasks.py",
+            code=1,
         )
 
     if unknown:
-        raise ValueError(f"Declared packages that no longer exist: {sorted(unknown)}")
+        raise Exit(f"Declared packages that no longer exist: {sorted(unknown)}", code=1)

189-192: The reduce(operator.truediv, ...) path construction is dense — consider a clarifying comment.

Line 191 converts a hyphenated filename key (e.g., "infrahub_sdk-node.mdx") into a nested path (e.g., infrahub_sdk/node.mdx) by splitting on "-" and joining via Path /. This is correct but non-obvious at a glance.

💡 Add a brief inline comment
     for file_key, mdxified_file in generated_files.items():
         page = DocPage(content_gen_method=FilePrintingDocContentGenMethod(file=mdxified_file))
+        # Convert hyphenated filename to nested path: "a-b-c.mdx" → a/b/c.mdx
         target_path = output_dir / reduce(operator.truediv, (Path(part) for part in file_key.split("-")))
         MDXDocPage(page=page, output_path=target_path).to_mdx()
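
For illustration, the same conversion in isolation (using the filename from the example above):

import operator
from functools import reduce
from pathlib import Path

file_key = "infrahub_sdk-node.mdx"
# "-" separates path segments; reduce folds them together with "/" (Path.__truediv__)
nested = reduce(operator.truediv, (Path(part) for part in file_key.split("-")))
print(nested)  # infrahub_sdk/node.mdx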

281-303: docs_validate doesn't detect staged deletions.

git diff --name-only docs (line 291) only shows unstaged working-tree changes. If docs_generate removes a file that was committed, the deletion shows up in git diff. However, git diff --name-only without --cached does not catch changes that have been staged. This is fine for the current flow since docs_generate doesn't stage anything — just confirming the assumption is correct.

Also: context.run(...) can return None if the invoke runner is configured with warn=True and the command fails. Here you guard with if diff_result / if untracked_result which handles that case, but a failed git diff would silently pass validation rather than raising. Consider adding warn=False (the default) or explicitly checking the exit code.
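
A sketch of that hardening (assuming the invoke Context is in scope and Exit is the same invoke exception already used elsewhere in tasks.py; the helper name is an assumption):

from invoke import Context
from invoke.exceptions import Exit


def validate_docs_clean(context: Context) -> None:
    # warn=False (the default) makes run() raise UnexpectedExit when git itself fails,
    # instead of returning a falsy result that would silently pass validation.
    diff_result = context.run("git diff --name-only docs", hide=True, warn=False)
    changed = diff_result.stdout.splitlines() if diff_result else []
    if changed:
        raise Exit(f"Generated docs differ from committed docs: {changed}", code=1)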

docs/docs/python-sdk/sdk_ref/infrahub_sdk/node/relationship.mdx (1)

76-80: remove methods lack descriptions unlike sibling methods add/extend.

Since this is auto-generated, consider adding a docstring to the remove methods in the source (RelationshipManager.remove and RelationshipManagerSync.remove) so the generated docs are consistent with add and extend.

Also applies to: 110-113

docs/docs_generation/content_gen_methods/jinja2_method.py (1)

38-39: asyncio.run() will fail if called from within an existing event loop.

This is fine for invoke tasks (synchronous entry), but asyncio.run() raises RuntimeError if an event loop is already running. Worth noting as a constraint if this method is ever called from an async context.

docs/docs_generation/pages/base.py (2)

29-30: Minor: docstring says "Decorator" but this is a Wrapper.

A GoF Decorator preserves the same interface as the wrapped object. MDXDocPage exposes to_mdx() rather than content(), so it's more accurately a Wrapper or Adapter around DocPage.


46-50: Consider using logging instead of print.

print() on Line 50 works for invoke tasks but doesn't integrate with log levels or structured logging. A minor nit for a docs-generation tool.
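
For example (a sketch; the helper name and message are assumptions):

import logging
from pathlib import Path

logger = logging.getLogger(__name__)


def report_written(output_path: Path) -> None:
    # A log record instead of a bare print(): respects levels, handlers, and formatting.
    logger.info("Generated %s", output_path)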

docs/docs_generation/content_gen_methods/__init__.py (1)

3-10: Mixed absolute and relative imports within the same package.

Lines 3–5 use absolute imports (docs.docs_generation.content_gen_methods.command..., docs.docs_generation.content_gen_methods.mdx...) while Lines 7–10 use relative imports (.base, .command_output_method, etc.). Consider using relative imports consistently for intra-package references.

♻️ Suggested fix: use relative imports throughout
-from docs.docs_generation.content_gen_methods.command.command import ACommand
-from docs.docs_generation.content_gen_methods.command.typer_command import TyperGroupCommand, TyperSingleCommand
-from docs.docs_generation.content_gen_methods.mdx.mdx_code_doc import MdxCodeDocumentation
+from .command.command import ACommand
+from .command.typer_command import TyperGroupCommand, TyperSingleCommand
+from .mdx.mdx_code_doc import MdxCodeDocumentation

Comment on lines +11 to +12
- 'docs/sidebars/sidebars-infrahubctl.ts'
- 'docs/sidebars/sidebars-python-sdk.ts'

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Scripts executed:

# First, check the actual state of the sync-docs.yml file around lines 11-12 and 36-37
head -50 .github/workflows/sync-docs.yml | cat -n

# Check if docs/sidebars directory and files exist
fd . docs/sidebars/ -type f 2>/dev/null || echo "docs/sidebars/ not found or empty"
ls -la docs/ | grep -i sidebar

# Search for imports of sidebar-utils in the sidebar files
rg "sidebar-utils" docs/sidebars/ 2>/dev/null || echo "No sidebar-utils imports found"

# Check if sidebar-utils.ts exists at root docs level
fd sidebar-utils docs/ -type f

# Look at the target repo structure expectation
rg "target-repo/docs" .github/workflows/sync-docs.yml -A 2 -B 2

# Check the contents of docs/sidebars/
ls -la docs/sidebars/

# Verify the source paths referenced in the workflow don't exist at the docs/ root
ls -la docs/ | grep "sidebars-"


Fix path mismatch in sync commands and add missing sidebar-utils.ts dependency.

Lines 11–12 correctly watch docs/sidebars/sidebars-*.ts, but lines 36–37 copy from non-existent paths: source-repo/docs/sidebars-infrahubctl.ts and source-repo/docs/sidebars-python-sdk.ts. The files are located at source-repo/docs/sidebars/sidebars-infrahubctl.ts and source-repo/docs/sidebars/sidebars-python-sdk.ts. Additionally, both sidebar files import from ./sidebar-utils, but sidebar-utils.ts is not synced, which will break imports in the target repo.

Proposed fix
          cp -r source-repo/docs/docs/* target-repo/docs/docs-python-sdk/
-          cp source-repo/docs/sidebars-infrahubctl.ts target-repo/docs/
-          cp source-repo/docs/sidebars-python-sdk.ts target-repo/docs/
+          cp source-repo/docs/sidebars/sidebars-infrahubctl.ts target-repo/docs/
+          cp source-repo/docs/sidebars/sidebars-python-sdk.ts target-repo/docs/
+          cp source-repo/docs/sidebars/sidebar-utils.ts target-repo/docs/

Also verify whether the target repo expects these files at target-repo/docs/ root or in a sidebars/ subdirectory.


- '[pP]y.*\b'
- '\bimport_.*\b' # New filter to ignore variables starting with 'import_'
- '\w+__value' # New filter to skip Infrahub filters in documentation (name__value)
- '\b\w+_\w+\b' # Ignore snake_case identifiers (Python parameter/attribute names)

⚠️ Potential issue | 🟡 Minor

Overly broad filter — will suppress spell-checking on all snake_case text, not just Python identifiers.

The pattern \b\w+_\w+\b matches any word containing an underscore, so a misspelled phrase like recieve_data or functon_name in hand-written prose would be silently ignored. Since this filter applies globally (not just to auto-generated MDX files), it reduces Vale's effectiveness on manually authored docs.

Consider either:

  1. Scoping this rule to only the generated reference paths (e.g., via a separate .vale.ini section or a dedicated style for generated docs).
  2. Using a slightly more restrictive pattern if possible, though admittedly that's hard with pure regex.

Not a blocker given the volume of Python identifiers in the new generated docs, but worth being aware of the trade-off.


Comment on lines +40 to +49
def apply(self) -> str:
    with tempfile.NamedTemporaryFile(mode="w", suffix=".mdx", delete=False, encoding="utf-8") as tmp:
        tmp_path = Path(tmp.name)

    full_cmd = f"{self.command.build()} --output {tmp_path}"
    with self.context.cd(self.working_directory):
        self.context.run(full_cmd)

    content = tmp_path.read_text(encoding="utf-8")
    tmp_path.unlink(missing_ok=True)

⚠️ Potential issue | 🟡 Minor

Temp file leaks if context.run raises.

If the command execution on line 46 fails, the temp file created on line 41 is never cleaned up because unlink on line 49 is skipped.

🛡️ Proposed fix: wrap in try/finally
     def apply(self) -> str:
         with tempfile.NamedTemporaryFile(mode="w", suffix=".mdx", delete=False, encoding="utf-8") as tmp:
             tmp_path = Path(tmp.name)
 
-        full_cmd = f"{self.command.build()} --output {tmp_path}"
-        with self.context.cd(self.working_directory):
-            self.context.run(full_cmd)
-
-        content = tmp_path.read_text(encoding="utf-8")
-        tmp_path.unlink(missing_ok=True)
-        return content
+        try:
+            full_cmd = f"{self.command.build()} --output {tmp_path}"
+            with self.context.cd(self.working_directory):
+                self.context.run(full_cmd)
+
+            content = tmp_path.read_text(encoding="utf-8")
+        finally:
+            tmp_path.unlink(missing_ok=True)
+        return content

Comment on lines +91 to +102
def _execute_mdxify(self, context: Context, modules_to_document: list[str]) -> dict[str, MdxFile]:
    with tempfile.TemporaryDirectory() as tmp_dir:
        exec_cmd = f"mdxify {' '.join(modules_to_document)} --output-dir {tmp_dir}"
        context.run(exec_cmd, pty=True)

        results: dict[str, MdxFile] = {}
        for mdx_file in Path(tmp_dir).glob("*.mdx"):
            if any(f.lower() in mdx_file.name for f in self.file_filters):
                continue
            content = _wrap_doctest_examples(mdx_file.read_text(encoding="utf-8"))
            results[mdx_file.name] = MdxFile(path=mdx_file, content=content)


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Scripts executed:

# Find MdxFile definition and all its usages
rg "class MdxFile" -A 10
rg "MdxFile" --type=py -B 2 -A 2

# Search for FilePrintingDocContentGenMethod
rg "class FilePrintingDocContentGenMethod" -A 20

# Look at the generate() method and caching behavior in mdx_code_doc.py
cat -n docs/docs_generation/content_gen_methods/mdx/mdx_code_doc.py | head -110 | tail -40

# Check if there's any code that accesses .path on MdxFile instances
rg "mdx.*\.path|\.path.*mdx" --type=py -i


MdxFile.path becomes invalid after _execute_mdxify returns, and caching ignores the modules_to_document argument.

Two issues:

  1. Dangling path reference: The TemporaryDirectory on line 92 is deleted after the with block exits, but MdxFile.path stores a reference to a file inside that directory (line 101). While this is safe in practice because all consumers use only MdxFile.content, the path field is misleading—it appears valid but isn't. Remove the path field or document that it's only valid during execution.

  2. Stale cache on different arguments: The generate() method (lines 85–89) caches results in self._files without accounting for the modules_to_document argument. If generate() is called twice with different module arguments on the same instance, the second call silently returns the cached results from the first call instead of re-running mdxify. Cache the results by argument or document this behavior explicitly.
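
One way to address the first issue is to keep only data that outlives the temporary directory, along these lines (a sketch; the name field replaces the ephemeral path):

from dataclasses import dataclass


@dataclass(frozen=True)
class MdxFile:
    name: str  # original filename produced by mdxify
    content: str  # post-processed MDX content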


Comment on lines +21 to +23
for field_name, field in settings.model_fields.items():
    for field_key, field_env_name, _ in env_settings._extract_field_info(field, field_name):
        env_vars[field_key].append(field_env_name.upper())

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Scripts executed:

# Read the file to see the actual context
cat -n docs/docs_generation/helpers.py | head -50

# Check for pydantic_settings version constraints in the project
find . -name "*.txt" -o -name "*.toml" -o -name "*.cfg" -o -name "*.yaml" | xargs grep -l "pydantic.settings\|pydantic_settings" 2>/dev/null | head -10
cat pyproject.toml


🌐 Web query:

pydantic_settings EnvSettingsSource public API alternatives to _extract_field_info

💡 Result:

EnvSettingsSource._extract_field_info() is private (underscore) and not part of the supported extension surface. The public/stable ways to achieve the same goals are:

  • Don’t extract env-name candidates yourself—override get_field_value() instead. EnvSettingsSource’s supported hook is get_field_value(field, field_name), which is the intended customization point for “how do I get a value for this field from env”. If you need different env-key rules, subclass EnvSettingsSource/PydanticBaseEnvSettingsSource and implement your own lookup logic there. [1]

  • Use the public helpers for “complex” + nested env handling instead of re-implementing internals:

    • prepare_field_value(...) (in EnvSettingsSource and base) to deserialize/merge nested values for complex fields. [1]
    • field_is_complex(...) and decode_complex_value(...) (on PydanticBaseSettingsSource) if you only need the “is JSON / decode JSON” behavior. [1]
    • explode_env_vars(...) + next_field(...) if your need is specifically nested env var expansion (e.g. FOO__BAR__BAZ=...). [1]
  • Prefer “declare the env names” via aliases, rather than extracting them from the source. If your real need is “what env var names map to this field?”, the supported approach is to define them using Pydantic’s alias mechanisms (e.g., validation aliases / alias choices) and let the settings sources consume those, rather than reaching into _extract_field_info. [2]

  • If you’re adding/replacing settings sources, do it via settings_customise_sources. This is the public API for inserting a custom source (including custom env logic) into the settings loading pipeline. [2]

Bottom line: there isn’t a public “replacement function” that returns the same (field_key, env_name, value_is_complex) tuples as _extract_field_info; the supported alternatives are (a) override get_field_value, and (b) rely on the public helper methods (prepare_field_value, explode_env_vars, next_field, field_is_complex, decode_complex_value) plus alias configuration. [1][2]

References:
[1] Pydantic Settings API docs (EnvSettingsSource, PydanticBaseSettingsSource methods) https://docs.pydantic.dev/latest/api/pydantic_settings/
[2] Pydantic “Settings Management” docs (custom sources / customization) https://docs.pydantic.dev/usage/settings/


Reliance on the private _extract_field_info API risks future breakage.

EnvSettingsSource._extract_field_info is a private method with no direct public API replacement in pydantic_settings. The current dependency constraint (pydantic-settings>=2.0) is loose and will accept future releases that may remove or change this private method without notice.

Immediate action: Pin pydantic_settings to a specific major version (e.g., pydantic-settings>=2.0,<3.0) and add a comment documenting this dependency on the private API.

Longer-term: Consider refactoring to use public APIs (get_field_value(), prepare_field_value(), or alias-based field configuration), though this would require more significant changes to the documentation generation logic.
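
Putting the immediate action into concrete form (a sketch; the exact placement of the pin in pyproject.toml is an assumption):

# pyproject.toml -- pin the major version:
#   "pydantic-settings>=2.0,<3.0",

# docs/docs_generation/helpers.py
# NOTE: _extract_field_info() is a *private* pydantic-settings API with no public
# one-call equivalent; the <3.0 pin above guards against it changing silently.
# TODO: migrate to public hooks (get_field_value / alias-based configuration).
for field_name, field in settings.model_fields.items():
    for field_key, field_env_name, _ in env_settings._extract_field_info(field, field_name):
        env_vars[field_key].append(field_env_name.upper())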


parse_human_friendly_id(hfid: str | list[str]) -> tuple[str | None, list[str]]

Parse a human friendly ID into a kind and an identifier.

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Scripts executed:

# Find the source docstring and definition of parse_human_friendly_id
rg -n "Parse a human friendly" --type=py
rg -n "def parse_human_friendly_id" --type=py -A 5

# Check if the MDX file has auto-generation markers
head -20 docs/docs/python-sdk/sdk_ref/infrahub_sdk/node/parsers.mdx

# List the source structure to understand the layout
fd "parsers" --type f | head -20


Minor: "human friendly" → "human-friendly" (hyphenate compound adjective).

This is auto-generated from the docstring of parse_human_friendly_id in infrahub_sdk/node/parsers.py. Fix the source docstring there rather than editing the MDX file directly, then regenerate with uv run invoke generate-sdk.

🧰 Tools
🪛 LanguageTool

[grammar] ~16-~16: Use a hyphen to join words.
Context: ...tr | None, list[str]] ``` Parse a human friendly ID into a kind and an identifie...

(QB_NEW_EN_HYPHEN)


export default {
  // Re-use the default mapping
  ...MDXComponents,
  Icon: Icon, // Make the iconify Icon component available in MDX as <icon />.

⚠️ Potential issue | 🟡 Minor

Misleading comment: <icon /> (lowercase) won't resolve to this component in MDX.

In MDX, lowercase tags are treated as native HTML elements, not custom components. The component should be used as <Icon /> (capitalized) in MDX files. Consider updating the comment.

-    Icon: Icon, // Make the iconify Icon component available in MDX as <icon />.
+    Icon: Icon, // Make the Iconify Icon component available in MDX as <Icon />.

Comment on lines 189 to 217
def _identify_faulty_jinja_code(traceback: Traceback, nbr_context_lines: int = 3) -> list[UndefinedJinja2Error]:
    """This function identifies the faulty Jinja2 code and beautify it to provide meaningful information to the user.
    We use the rich's Traceback to parse the complete stack trace and extract Frames for each exception found in the trace.
    """
    response = []

    # Extract only the Jinja related exception
    for frame in [frame for frame in traceback.trace.stacks[0].frames if not frame.filename.endswith(".py")]:
        code = "".join(linecache.getlines(frame.filename))
        if frame.filename == "<template>":
            lexer_name = "text"
        else:
            lexer_name = Traceback._guess_lexer(frame.filename, code)
        syntax = Syntax(
            code,
            lexer_name,
            line_numbers=True,
            line_range=(
                frame.lineno - nbr_context_lines,
                frame.lineno + nbr_context_lines,
            ),
            highlight_lines={frame.lineno},
            code_width=88,
            theme=traceback.theme,
            dedent=False,
        )
        response.append(UndefinedJinja2Error(frame=frame, syntax=syntax))


⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Scripts executed:

cd infrahub_sdk && pwd && head -220 template/jinja2/__init__.py | tail -50
git ls-files | grep -E "requirements|setup.py|pyproject.toml" | head -5
head -30 infrahub_sdk/template/jinja2/__init__.py | grep -E "^import|^from"

🌐 Web query: rich library Syntax guess_lexer public method API

💡 Result:

In the Rich Python library, Syntax.guess_lexer is a public @classmethod on rich.syntax.Syntax. (rich.readthedocs.io)

  • Signature: Syntax.guess_lexer(path, code=None) -> str
  • What it does: returns the Pygments lexer alias/name Rich should use for syntax highlighting, based on a filename/path and (optionally) the file contents; if code is provided it may pick a more specific lexer (e.g., html+django for an HTML file containing Django template syntax).
  • Fallback behavior: if it can't determine a lexer, it returns "default".

Typical use:

from rich.syntax import Syntax

lexer = Syntax.guess_lexer("index.html", code=template_text)
syntax = Syntax(template_text, lexer)

(You can also use Syntax.from_path(...), which calls guess_lexer when no lexer is provided.)

🌐 Web query: rich Traceback _guess_lexer private method alternative

💡 Result:

Traceback._guess_lexer() is a private helper (Rich uses it internally when rendering stack frames) and may change without notice. (rich.readthedocs.io)

Public alternatives:

  1. Use Rich's public lexer guesser: rich.syntax.Syntax.guess_lexer(path, code=...)

from rich.syntax import Syntax

lexer_name = Syntax.guess_lexer(filename, code=source_code)

  2. Let Rich do it for you: Syntax.from_path(...) will auto-detect the lexer when lexer=None.

from rich.syntax import Syntax
syntax = Syntax.from_path(filename)  # auto-detects lexer

  3. Use Pygments directly (what Rich delegates to in traceback rendering): pygments.lexers.guess_lexer_for_filename(filename, code)


Replace the private Traceback._guess_lexer call with the public Syntax.guess_lexer API.

Line 202 uses Traceback._guess_lexer, a private method that may change across rich library versions. Replace it with the public Syntax.guess_lexer classmethod (already imported at line 10), which provides the same functionality with a stable API and built-in fallback to "default" if the lexer cannot be determined.

Fix
         if frame.filename == "<template>":
             lexer_name = "text"
         else:
-            lexer_name = Traceback._guess_lexer(frame.filename, code)
+            lexer_name = Syntax.guess_lexer(frame.filename, code)

Comment on lines 11 to 41
class JinjaTemplateError(Error):
    def __init__(self, message: str) -> None:
        self.message = message

    def __str__(self) -> str:
        return str(self.message)


class JinjaTemplateNotFoundError(JinjaTemplateError):
    def __init__(self, message: str | None, filename: str, base_template: str | None = None) -> None:
        self.message = message or "Template Not Found"
        self.filename = filename
        self.base_template = base_template


class JinjaTemplateSyntaxError(JinjaTemplateError):
    def __init__(self, message: str | None, lineno: int, filename: str | None = None) -> None:
        self.message = message or "Syntax Error"
        self.filename = filename
        self.lineno = lineno


class JinjaTemplateUndefinedError(JinjaTemplateError):
    def __init__(self, message: str | None, errors: list[UndefinedJinja2Error]) -> None:
        self.message = message or "Undefined Error"
        self.errors = errors


class JinjaTemplateOperationViolationError(JinjaTemplateError):
    def __init__(self, message: str | None = None) -> None:
        self.message = message or "Forbidden code found in the template"

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Scripts executed:

# Find the Error class definition
rg -n 'class Error' --type=py -C5 | head -100

# Check the imports in exceptions.py to understand what Error is
head -20 infrahub_sdk/template/jinja2/exceptions.py

# Check if other exception classes in the codebase call super().__init__()
rg -n 'super\(\).__init__' --type=py -C2 | head -100

# Look for the custom Error class definition or import
fd exceptions.py --type f | xargs cat | grep -A 10 -B 10 'class Error'


Missing super().__init__() calls in exception hierarchy.

All exception classes in this module (including JinjaTemplateError and its subclasses) must call super().__init__() to properly initialize the Exception base class. The base Error class calls super().__init__(self.message) at instantiation, but these classes skip that call, breaking exception initialization. This is inconsistent with the pattern used throughout the codebase in infrahub_sdk/task/exceptions.py, infrahub_sdk/pytest_plugin/exceptions.py, and other exception modules.

For each class, add super().__init__(self.message) after setting self.message:

♻️ Example fix for all classes
 class JinjaTemplateError(Error):
     def __init__(self, message: str) -> None:
         self.message = message
+        super().__init__(self.message)
 
     def __str__(self) -> str:
         return str(self.message)
 
 
 class JinjaTemplateNotFoundError(JinjaTemplateError):
     def __init__(self, message: str | None, filename: str, base_template: str | None = None) -> None:
         self.message = message or "Template Not Found"
         self.filename = filename
         self.base_template = base_template
+        super().__init__(self.message)

Apply the same pattern to JinjaTemplateSyntaxError, JinjaTemplateUndefinedError, and JinjaTemplateOperationViolationError.

