ci: Add a workflow to visualize stopping-pattern changes in PRs #1397

Merged
TinyKitten merged 4 commits into dev from feature/visualize-actions
Feb 21, 2026
Conversation

@TinyKitten
Member

@TinyKitten TinyKitten commented Feb 21, 2026

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

Summary by CodeRabbit

Release Notes

  • New Features
    • Parses stopping patterns from the PR diff and posts or updates a Markdown visualization with new/deleted/changed badges per group. It includes per-station addition/removal details, path-change tables, an overall summary, and a legend, and falls back to a summarized view when the output is too long.
  • Chores
    • Added an automated workflow that runs the visualization above (the existing comment is deleted when there are no changes).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@github-actions github-actions bot added the ci/cd label Feb 21, 2026
@TinyKitten TinyKitten self-assigned this Feb 21, 2026
@coderabbitai
Contributor

coderabbitai bot commented Feb 21, 2026

📝 Walkthrough

Walkthrough

The pull request detects changes to data/*.csv and adds a new script, .github/scripts/visualize_stopping_patterns.py, which parses the CSV diffs, generates a Markdown comment, and writes it to /tmp/visualization_comment.md. A matching GitHub Actions workflow automates posting, updating, and deleting the PR comment.

Changes

Cohort / File(s) Summary
Visualization Script
.github/scripts/visualize_stopping_patterns.py
New: uses the BASE_REF environment variable to fetch and parse git diffs of data/*.csv. Builds station/train-type/line mappings, groups additions/deletions/changes, and generates Markdown (per-group tables, a summary, and a legend). Writes the result to /tmp/visualization_comment.md and sets has_changes as a GitHub Actions output. Includes branches for overly long output and for a missing BASE_REF (about 439 lines).
Workflow Orchestration
.github/workflows/visualize_stopping_patterns.yml
New: triggers on pull_request events that touch data/*.csv. Checks out the repository with full history, fetches the PR base, sets up Python 3.12, and runs the script. Searches for an existing <!-- station-visualizer --> comment and, depending on has_changes, posts/updates or deletes the comment (about 59 lines).
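As a rough illustration of the "parse diffs, classify groups" step the summary describes, the core idea can be sketched like this (function and key names are hypothetical, not the repository's actual code; only the unified-diff input format is assumed):

```python
def classify_rows(diff_text: str) -> dict:
    """Classify unified-diff lines of a CSV into new / changed / deleted
    rows, keyed by the first column of each row."""
    added, removed = {}, {}
    for line in diff_text.splitlines():
        # Skip the "+++"/"---" file headers; keep real row changes.
        if line.startswith("+") and not line.startswith("+++"):
            row = line[1:]
            added[row.split(",", 1)[0]] = row
        elif line.startswith("-") and not line.startswith("---"):
            row = line[1:]
            removed[row.split(",", 1)[0]] = row
    # A key present on both sides is a modification; one-sided keys are
    # pure additions or deletions.
    changed = {k: (removed[k], added[k]) for k in added.keys() & removed.keys()}
    new = {k: v for k, v in added.items() if k not in removed}
    deleted = {k: v for k, v in removed.items() if k not in added}
    return {"new": new, "changed": changed, "deleted": deleted}
```

The actual script additionally resolves station/type/line names from the loaded CSV mappings before rendering the Markdown tables.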

Sequence Diagram(s)

sequenceDiagram
    participant PR as Pull Request
    participant GHA as GitHub Actions
    participant Py as Visualization Script
    participant Repo as Repository CSVs
    participant API as GitHub API

    PR->>GHA: Trigger on data/*.csv change
    GHA->>GHA: Checkout repo & fetch base (full history)
    GHA->>GHA: Setup Python 3.12
    GHA->>Py: Run script with BASE_REF env
    Py->>Repo: Load stations/types/lines CSVs
    Py->>Repo: Compute git diff for data/*.csv
    Py->>Py: Parse diffs, classify groups (new/changed/deleted), build Markdown
    Py-->>GHA: Emit has_changes + /tmp/visualization_comment.md
    GHA->>API: Find existing comment with marker <!-- station-visualizer -->
    alt has_changes == true
        GHA->>API: Post or Update PR comment with generated Markdown
    else has_changes == false
        GHA->>API: Delete existing comment (if any)
    end
    API-->>PR: Comment posted/updated/deleted

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐇 Hop! Off to the field of diffs I bound,
Sniffing CSV leaves where changes are found,
Weaving Markdown flowers for a comment's crown,
At Actions' signal I fly to report around,
A small festival of stopping patterns, one more today.

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The PR title accurately describes the addition of a workflow to visualize stopping-pattern changes and clearly reflects the main purpose of the change.
Docstring Coverage ✅ Passed Docstring coverage is 86.67% which is sufficient. The required threshold is 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
  • 📝 Generate docstrings (stacked PR)
  • 📝 Generate docstrings (commit on current branch)
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch feature/visualize-actions

Comment @coderabbitai help to get the list of available commands and usage tips.

Copy link
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/scripts/visualize_stopping_patterns.py:
- Line 270: Remove the unnecessary f-string prefix on the lines.append call that
produces "<details>" (e.g., the lines.append(f"<details>") usage) because there
are no placeholders and it triggers Ruff F541; change it to
lines.append("<details>") wherever that exact f-prefixed string occurs in
visualize_stopping_patterns.py.
- Around line 37-43: The git_diff function currently ignores subprocess.run's
return value; update git_diff to check result.returncode after the
subprocess.run call (for the ["git", "diff", f"{base_ref}...HEAD", "--",
file_path] invocation) and treat non-zero as failure: emit the stderr
(result.stderr) to the console or process logger and abort (raise an exception
or call sys.exit(1)) so a failing git diff isn't silently treated as an empty
diff.

In @.github/workflows/visualize_stopping_patterns.yml:
- Around line 9-11: Update the workflow permissions block so actions/checkout@v4
can run and PR comment actions succeed: add contents: read alongside the
existing pull-requests: write in the permissions mapping, and if the workflow
uses gh api to delete or modify issue/PR comments also include issues: write;
locate the permissions block (symbol "permissions") in the workflow file and
ensure it contains at minimum "contents: read" and "pull-requests: write" (and
add "issues: write" when using peter-evans/create-or-update-comment or gh api
comment deletion).
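Putting the comment's requirements together, the permissions block would look something like this (a sketch; whether issues: write is actually needed depends on how the workflow deletes comments):

```yaml
permissions:
  contents: read        # required for actions/checkout@v4
  pull-requests: write  # post/update the PR comment
  issues: write         # only if comments are deleted via `gh api` on the issues endpoint
```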

- Check git diff's return value and abort on error
- Remove unneeded f-string prefixes
- Add contents: read and issues: write to the workflow permissions

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
.github/scripts/visualize_stopping_patterns.py (2)

55-57: Consider adding the strict= parameter to zip()

Lines 55-56 check len(row) < len(header), but when len(row) > len(header) the extra columns are silently ignored. If that is intentional, it is fine as-is, but specifying strict=False explicitly makes the intent clear.

♻️ Suggested fix
         if len(row) < len(header):
             return None
-        return dict(zip(header, row))
+        return dict(zip(header, row, strict=False))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/scripts/visualize_stopping_patterns.py around lines 55 - 57, The
current mapping uses dict(zip(header, row)) after checking only if len(row) <
len(header), which silently ignores extra columns when len(row) > len(header);
update the dict creation to be explicit by calling zip(header, row,
strict=False) (i.e., replace dict(zip(header, row)) with dict(zip(header, row,
strict=False)) or an equivalent explicit handling) so the intent to allow
unequal lengths is clear; keep the existing len(row) < len(header) guard and add
a short comment near header/row to document why strict=False is used.

427-429: Consider using the tempfile module

The hardcoded /tmp/visualization_comment.md path is fine in a CI environment, but using the tempfile module would be safer.

♻️ Suggested fix
+import tempfile
+
 ...
 
-    output_path = "/tmp/visualization_comment.md"
-    with open(output_path, "w", encoding="utf-8") as f:
-        f.write(comment)
+    with tempfile.NamedTemporaryFile(
+        mode="w",
+        suffix="_visualization_comment.md",
+        delete=False,
+        encoding="utf-8",
+    ) as f:
+        f.write(comment)
+        output_path = f.name

Note: if you make this change, the workflow needs to receive the path dynamically (e.g., by passing the output path via set_output).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/scripts/visualize_stopping_patterns.py around lines 427 - 429, The
code hardcodes output_path = "/tmp/visualization_comment.md" and writes comment
to it; replace this with a tempfile-backed path using the tempfile module (e.g.,
tempfile.NamedTemporaryFile or tempfile.TemporaryDirectory) to create a safe,
unique file and write the comment there, ensuring you set delete=False if the
path must persist after closing; update any caller or workflow that expects the
static path to accept the generated temp path (for example by returning or
printing the path or using set_output) so consumers can locate the file instead
of relying on the hardcoded output_path variable.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/scripts/visualize_stopping_patterns.py:
- Around line 421-424: The current truncation trims the raw string (variable
comment) at a character boundary which can break Markdown/HTML (e.g., cutting
inside <details> or table rows); instead, build the comment by appending whole
sections and stop when the next section would exceed MAX_COMMENT_LENGTH, then
append truncation_msg; specifically, replace the direct slice logic that uses
comment[: MAX_COMMENT_LENGTH - len(truncation_msg)] with a loop that iterates
group_sections (or whatever list produces each section), keeps a running
current_length (starting from len("\n".join(parts)) or 0), checks if
current_length + len(section) + len(truncation_msg) > MAX_COMMENT_LENGTH, and if
so appends truncation_msg and breaks, otherwise appends the section to parts and
updates current_length — finally join parts to form comment so no tag/row is cut
mid-section.

---

Nitpick comments:
In @.github/scripts/visualize_stopping_patterns.py:
- Around line 55-57: The current mapping uses dict(zip(header, row)) after
checking only if len(row) < len(header), which silently ignores extra columns
when len(row) > len(header); update the dict creation to be explicit by calling
zip(header, row, strict=False) (i.e., replace dict(zip(header, row)) with
dict(zip(header, row, strict=False)) or an equivalent explicit handling) so the
intent to allow unequal lengths is clear; keep the existing len(row) <
len(header) guard and add a short comment near header/row to document why
strict=False is used.
- Around line 427-429: The code hardcodes output_path =
"/tmp/visualization_comment.md" and writes comment to it; replace this with a
tempfile-backed path using the tempfile module (e.g.,
tempfile.NamedTemporaryFile or tempfile.TemporaryDirectory) to create a safe,
unique file and write the comment there, ensuring you set delete=False if the
path must persist after closing; update any caller or workflow that expects the
static path to accept the generated temp path (for example by returning or
printing the path or using set_output) so consumers can locate the file instead
of relying on the hardcoded output_path variable.
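The section-wise truncation described in the inline comment above can be sketched as follows (MAX_COMMENT_LENGTH, truncation_msg, and group_sections mirror the names in the review; the exact values and header handling are assumptions):

```python
MAX_COMMENT_LENGTH = 65000  # GitHub caps issue comments at 65536 characters

def build_comment(header: str, group_sections: list,
                  truncation_msg: str = "\n_Output truncated._") -> str:
    """Join whole sections, stopping before the limit is exceeded so no
    <details> block or table row is cut mid-markup."""
    parts = [header]
    current_length = len(header)
    for section in group_sections:
        if current_length + len(section) + 1 + len(truncation_msg) > MAX_COMMENT_LENGTH:
            parts.append(truncation_msg)
            break
        parts.append(section)
        current_length += len(section) + 1  # +1 for the joining newline
    return "\n".join(parts)
```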

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
.github/scripts/visualize_stopping_patterns.py (2)

431-433: On direct use of /tmp

Ruff S108 warns about direct use of /tmp. GitHub Actions runners are ephemeral environments, so there is no practical harm, but using the tempfile module would be more robust. That said, if the workflow expects a fixed path, leaving this as-is is also fine.

♻️ Alternative (if needed)
import tempfile
# ...
output_path = os.path.join(tempfile.gettempdir(), "visualization_comment.md")

Alternatively, the workflow could use a temporary directory under GITHUB_WORKSPACE.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/scripts/visualize_stopping_patterns.py around lines 431 - 433, The
code writes to a hardcoded "/tmp" path via the variable output_path in
visualize_stopping_patterns.py; replace this direct /tmp usage with the tempfile
module (e.g., use tempfile.gettempdir() or a NamedTemporaryFile) to compute a
safe temporary file location and then open/write to that path; update the
output_path assignment and any consumers of output_path (the
open(...).write(comment) usage) accordingly so the workflow can still find the
file if required.

49-58: Consider explicitly specifying zip()'s strict= parameter

zip(header, row) on line 57 omits the strict= parameter. Because of the length check on lines 55-56 there is no practical harm, but adding strict=False resolves the Ruff B905 warning and makes the intent explicit.

♻️ Suggested fix
-            return dict(zip(header, row))
+            return dict(zip(header, row, strict=False))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/scripts/visualize_stopping_patterns.py around lines 49 - 58, In
parse_csv_line, make the use of zip explicit by adding the strict parameter:
replace the call to zip(header, row) in the parse_csv_line function with
zip(header, row, strict=False) so Ruff B905 is silenced and the intent (allow
differing lengths despite the preceding length check) is clear; update only the
zip invocation in parse_csv_line.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/scripts/visualize_stopping_patterns.py:
- Around line 418-426: Remove the unused variable assignment for truncated in
the loop; the code in the loop that sets truncated = True (referencing variable
truncated) should be deleted because the flag is never read later. Locate the
for loop iterating over group_sections and remove the truncated variable
declaration and the assignment (the initial truncated = False and the truncated
= True inside the if), leaving the current logic that appends truncation_msg to
parts and breaks out when current_length + section_length + len(truncation_msg)
> MAX_COMMENT_LENGTH; ensure group_sections, parts, current_length,
truncation_msg and MAX_COMMENT_LENGTH usage remains unchanged.

---

Nitpick comments:
In @.github/scripts/visualize_stopping_patterns.py:
- Around line 431-433: The code writes to a hardcoded "/tmp" path via the
variable output_path in visualize_stopping_patterns.py; replace this direct /tmp
usage with the tempfile module (e.g., use tempfile.gettempdir() or a
NamedTemporaryFile) to compute a safe temporary file location and then
open/write to that path; update the output_path assignment and any consumers of
output_path (the open(...).write(comment) usage) accordingly so the workflow can
still find the file if required.
- Around line 49-58: In parse_csv_line, make the use of zip explicit by adding
the strict parameter: replace the call to zip(header, row) in the parse_csv_line
function with zip(header, row, strict=False) so Ruff B905 is silenced and the
intent (allow differing lengths despite the preceding length check) is clear;
update only the zip invocation in parse_csv_line.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/scripts/visualize_stopping_patterns.py:
- Around line 55-57: The current row-to-dict mapping quietly truncates extra CSV
columns by using dict(zip(header, row)); change it to require exact column count
and use zip(..., strict=True) so mismatched lengths raise a ValueError: check
that len(row) == len(header) before zipping (or call dict(zip(header, row,
strict=True)) inside a try/except), and on mismatch return None or log/raise as
appropriate; update the code that references header and row in the mapping block
to use this strict check.
- Around line 428-431: The script currently hardcodes output_path =
"/tmp/visualization_comment.md" which triggers Ruff S108 and allows predictable
symlink attacks; change the script to create a secure temporary file using the
tempfile module (e.g., tempfile.NamedTemporaryFile(delete=False) or
tempfile.mkstemp) and write comment to that dynamic path, exposing the final
path via stdout or an environment/exit file so callers can consume it; update
any callers (notably the workflow's body-path reference) to read the produced
path from the script output or an environment variable instead of using the
fixed /tmp path.
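A minimal sketch of the tempfile-based replacement suggested above; the GITHUB_OUTPUT handoff is one possible way to expose the dynamic path to later workflow steps, and the function name is an assumption:

```python
import os
import tempfile

def write_comment(comment: str) -> str:
    """Write the comment to a unique temp file and return its path."""
    # mkstemp creates the file securely, avoiding the predictable
    # /tmp path that Ruff S108 flags.
    fd, output_path = tempfile.mkstemp(suffix="_visualization_comment.md")
    with os.fdopen(fd, "w", encoding="utf-8") as f:
        f.write(comment)
    # If running under GitHub Actions, expose the path as a step output
    # so the workflow no longer depends on a hardcoded location.
    gh_output = os.environ.get("GITHUB_OUTPUT")
    if gh_output:
        with open(gh_output, "a", encoding="utf-8") as out:
            out.write(f"comment_path={output_path}\n")
    return output_path
```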

@TinyKitten TinyKitten merged commit 374f381 into dev Feb 21, 2026
2 checks passed
@TinyKitten TinyKitten deleted the feature/visualize-actions branch February 21, 2026 23:44
