Harden CI for self-hosted SLURM runners #1135
Open
sbryngelson wants to merge 34 commits into MFlowCode:master from sbryngelson:ci-fixes
+249 −214

Commits (34), all authored by sbryngelson:
e779ed4  Switch Phoenix GPU jobs to H200 nodes for faster scheduling
9cf00d3  Fix bash segfault in monitor_slurm_job.sh from fractional read timeout
7faf2d6  Merge branch 'master' into ci-fixes
a59db02  Restore pull_request_review trigger for benchmark workflow
2efc61e  Auto-retry sporadic test failures in CI
0658bd3  Preserve exit code for catastrophic test failures
c6b6f81  Harden SLURM monitor: robust state checks, orphan cleanup
a82959e  Use parsable sacct flags for robust state parsing
8022969  Guard squeue/sacct pipelines against set -euo pipefail
88d19ce  Retry delete_directory on Lustre ENOTEMPTY race
05d28f3  Remove stale failed_uuids.txt before test run
9eed0c6  Split benchmark concurrency group by event type
02c658d  Merge branch 'master' into ci-fixes
edefc01  Revert Phoenix test jobs to multi-partition GPU scheduling
2e15ab6  Fix doc lint for generated pages and hyphenated page IDs
0267756  Merge branch 'master' into ci-fixes
dfc524c  Add Lustre-safe workspace cleanup for self-hosted runners
ece1951  Revert Phoenix benchmark jobs to L40S GPU scheduling
a553a75  Improve Lustre-safe workspace cleanup with dotglob and nullglob
a149886  Auto-requeue SLURM jobs on preemption
ed8abd5  Merge branch 'master' into ci-fixes
273cced  Remove aggressive workspace cleanup from test jobs
c2e6543  Propagate exit code from test retry command
c0b1cd1  Restore default checkout clean for test jobs; tune PR reviewer
7a764d5  Remove aggressive workspace cleanup from bench jobs
b5dfa1f  Add test sharding for Frontier CI; switch to batch/hackathon partition
475caa3  Validate --shard argument format and bounds
ddd95ac  Use nick-fields/retry for Frontier builds; reduce -j to 4
197813a  Add set -e to Frontier build scripts for fail-fast behavior
73072bf  Add required timeout_minutes to nick-fields/retry steps
4f39d1b  Add on_retry_command to clean build state between retry attempts
eed3af5  Fix bench build: use PIDs instead of job specs for wait
0f580e8  Disable Frontier AMD bench: IBM case hits zero-size GPU transfer bug
4ff4167  Restore Frontier AMD bench entry
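Commit 88d19ce refers to retrying delete_directory when Lustre returns a transient ENOTEMPTY during recursive deletes. That change is not part of the diff shown on this page, so the sketch below is only an illustration of the idea; the helper body, retry count, backoff, and path are assumptions, not the PR's code.

#!/usr/bin/env bash
# Illustrative sketch: Lustre can briefly report ENOTEMPTY while directory
# entries are still being released, so retry the recursive delete a few times.
delete_directory() {
    local dir="$1"
    local attempt
    for attempt in 1 2 3 4 5; do
        rm -rf -- "$dir" 2>/dev/null || true
        [ -e "$dir" ] || return 0          # directory is gone, success
        echo "delete of $dir failed (attempt $attempt), retrying..." >&2
        sleep "$attempt"                   # simple linear backoff
    done
    rm -rf -- "$dir"                       # final attempt, let the error surface
}

delete_directory "/scratch/ci-workspace-old"   # hypothetical path

The diff that follows is the SLURM job monitor script (monitor_slurm_job.sh, per commit 9cf00d3) that several of the commits above modify.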
@@ -4,11 +4,17 @@
 set -euo pipefail

-# Cleanup handler to prevent orphaned tail processes
+# Cleanup handler to prevent orphaned tail processes and cancel orphaned jobs
 cleanup() {
     if [ -n "${tail_pid:-}" ]; then
         kill "${tail_pid}" 2>/dev/null || true
     fi
+    # Cancel the SLURM job if the monitor is exiting due to an error
+    # (e.g., the CI runner is being killed). Don't cancel on success.
+    if [ "${monitor_success:-0}" -ne 1 ] && [ -n "${job_id:-}" ]; then
+        echo "Monitor exiting abnormally — cancelling SLURM job $job_id"
+        scancel "$job_id" 2>/dev/null || true
+    fi
 }
 trap cleanup EXIT
@@ -23,30 +29,78 @@ output_file="$2"
 echo "Submitted batch job $job_id"
 echo "Monitoring output file: $output_file"

-# Wait for file to appear with retry logic for transient squeue failures
+# Robustly check SLURM job state using squeue with sacct fallback.
+# Returns the state string (PENDING, RUNNING, COMPLETED, FAILED, etc.)
+# or "UNKNOWN" if both commands fail.
+get_job_state() {
+    local jid="$1"
+    local state
+
+    # Try squeue first (fast, works for active jobs)
+    state=$(squeue -j "$jid" -h -o '%T' 2>/dev/null | head -n1 | tr -d ' ' || true)
+    if [ -n "$state" ]; then
+        echo "$state"
+        return
+    fi
+
+    # Fallback to sacct (works for completed/historical jobs)
+    if command -v sacct >/dev/null 2>&1; then
+        state=$(sacct -j "$jid" -n -X -P -o State 2>/dev/null | head -n1 | cut -d'|' -f1 || true)
+        if [ -n "$state" ]; then
+            echo "$state"
+            return
+        fi
+    fi
+
+    echo "UNKNOWN"
+}
+
+# Check if a state is terminal (job is done, for better or worse)
+is_terminal_state() {
+    case "$1" in
+        COMPLETED|FAILED|CANCELLED|CANCELLED+|TIMEOUT|OUT_OF_MEMORY|NODE_FAIL|BOOT_FAIL|DEADLINE)
+            return 0 ;;
+        *)
+            return 1 ;;
+    esac
+}
+
+# Wait for file to appear, using robust state checking.
+# Never give up due to transient squeue/sacct failures — the CI job timeout
+# is the ultimate backstop.
 echo "Waiting for job to start..."
-squeue_retries=0
-max_squeue_retries=5
+unknown_count=0
 while [ ! -f "$output_file" ]; do
-    # Check if job is still queued/running
-    if squeue -j "$job_id" &>/dev/null; then
-        squeue_retries=0 # Reset on success
-        sleep 5
-    else
-        squeue_retries=$((squeue_retries + 1))
-        if [ $squeue_retries -ge $max_squeue_retries ]; then
-            # Job not in queue and output file doesn't exist
-            if [ ! -f "$output_file" ]; then
-                echo "ERROR: Job $job_id not in queue and output file not created"
+    state=$(get_job_state "$job_id")
+
+    case "$state" in
+        PENDING|CONFIGURING)
+            unknown_count=0
+            sleep 5
+            ;;
+        RUNNING|COMPLETING)
+            unknown_count=0
+            # Job is running but output file not yet visible (NFS delay)
+            sleep 2
+            ;;
+        UNKNOWN)
+            unknown_count=$((unknown_count + 1))
+            # Only print warning periodically to avoid log spam
+            if [ $((unknown_count % 12)) -eq 1 ]; then
+                echo "Warning: Could not query job $job_id state (SLURM may be temporarily unavailable)..."
+            fi
+            sleep 5
+            ;;
+        *)
+            # Terminal state — job finished without creating output
+            if is_terminal_state "$state"; then
+                echo "ERROR: Job $job_id reached terminal state ($state) without creating output file"
                 exit 1
             fi
-            break
-        fi
-        # Exponential backoff
-        sleep_time=$((2 ** squeue_retries))
-        echo "Warning: squeue check failed, retrying in ${sleep_time}s..."
-        sleep $sleep_time
-    fi
+            # Unrecognized state, keep waiting
+            sleep 5
+            ;;
+    esac
 done

 echo "=== Streaming output for job $job_id ==="
@@ -57,14 +111,13 @@ exec 3< <(stdbuf -oL -eL tail -f "$output_file" 2>&1)
 tail_pid=$!

A contributor left a review suggestion on this line. New proposed code:

 # Start tail and redirect its output to file descriptor 3 for multiplexing
 # This allows us to stream tail output while also printing heartbeat messages
-exec 3< <(stdbuf -oL -eL tail -f "$output_file" 2>&1)
-tail_pid=$!
+coproc TAIL_PROC { stdbuf -oL -eL tail -f "$output_file" 2>&1; }
+exec 3<&"${TAIL_PROC[0]}"
+tail_pid="${TAIL_PROC_PID}"
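For context on this suggestion: bash's coproc starts the command asynchronously and publishes both the child's PID (TAIL_PROC_PID) and a read file descriptor for its output (${TAIL_PROC[0]}), so the tail PID can be captured without relying on $! after a process substitution. A small standalone illustration, not taken from the PR; the temp-file name and timeout are arbitrary:

#!/usr/bin/env bash
# Demonstrates the coproc pattern from the suggestion above.
logfile=$(mktemp)
coproc TAIL_PROC { tail -f "$logfile" 2>&1; }
exec 3<&"${TAIL_PROC[0]}"                      # duplicate the coproc's stdout onto fd 3
echo "hello from the log" >> "$logfile"
IFS= read -r -t 5 line <&3 && echo "read: $line"
kill "${TAIL_PROC_PID}" 2>/dev/null || true    # the PID is available directly
exec 3<&-
rm -f "$logfile"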
 # Monitor job status and stream output simultaneously
-squeue_failures=0
 last_heartbeat=$(date +%s)

 while true; do
     # Try to read from tail output (non-blocking via timeout)
     # Read multiple lines if available to avoid falling behind
     lines_read=0
-    while IFS= read -r -t 0.1 line <&3 2>/dev/null; do
+    while IFS= read -r -t 1 line <&3 2>/dev/null; do
         echo "$line"
         lines_read=$((lines_read + 1))
         last_heartbeat=$(date +%s)
@@ -73,49 +126,30 @@ while true; do
             break
         fi
     done

     # Check job status
     current_time=$(date +%s)
-    if ! squeue -j "$job_id" &>/dev/null; then
-        squeue_failures=$((squeue_failures + 1))
-        # Check if job actually completed using sacct (if available)
-        if [ $squeue_failures -ge 3 ]; then
-            if command -v sacct >/dev/null 2>&1; then
-                state=$(sacct -j "$job_id" --format=State --noheader 2>/dev/null | head -n1 | awk '{print $1}')
-                # Consider job done only if it reached a terminal state
-                case "$state" in
-                    COMPLETED|FAILED|CANCELLED|TIMEOUT|OUT_OF_MEMORY)
-                        echo "[$(date +%H:%M:%S)] Job $job_id reached terminal state: $state"
-                        break
-                        ;;
-                    *)
-                        # treat as transient failure, reset failures and continue polling
-                        squeue_failures=0
-                        ;;
-                esac
-            else
-                # No sacct: assume job completed after 3 failures
-                echo "[$(date +%H:%M:%S)] Job $job_id no longer in queue"
-                break
-            fi
-        fi
+    state=$(get_job_state "$job_id")
+
+    if is_terminal_state "$state"; then
+        echo "[$(date +%H:%M:%S)] Job $job_id reached terminal state: $state"
+        break
     else
-        squeue_failures=0
         # Print heartbeat if no output for 60 seconds
         if [ $((current_time - last_heartbeat)) -ge 60 ]; then
-            echo "[$(date +%H:%M:%S)] Job $job_id still running (no new output for 60s)..."
+            echo "[$(date +%H:%M:%S)] Job $job_id state=$state (no new output for 60s)..."
             last_heartbeat=$current_time
         fi
     fi

     # Sleep briefly between status checks
     sleep 1
 done

 # Drain any remaining output from tail after job completes
 echo "Draining remaining output..."
 drain_count=0
-while IFS= read -r -t 0.5 line <&3 2>/dev/null; do
+while IFS= read -r -t 1 line <&3 2>/dev/null; do
     echo "$line"
     drain_count=$((drain_count + 1))
     # Safety limit to avoid infinite loop
@@ -128,8 +162,9 @@ done
 # Close the file descriptor and kill tail
 exec 3<&-
 kill "${tail_pid}" 2>/dev/null || true
+tail_pid=""

-# Wait for output file to finish growing (stabilize) before stopping tail
+# Wait for output file to stabilize (NFS flush) before final read
 if [ -f "$output_file" ]; then
     last_size=-1
     same_count=0
@@ -149,9 +184,6 @@ if [ -f "$output_file" ]; then
     done
 fi

-# Stop tailing (trap will also handle this on exit)
-kill "${tail_pid}" 2>/dev/null || true
-
 echo ""
 echo "=== Final output ==="
 cat "$output_file"
@@ -187,6 +219,6 @@ if [ "$exit_code" != "0:0" ]; then
     exit 1
 fi

+monitor_success=1
 echo "Job $job_id completed successfully"
 exit 0
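The monitor_success flag set here pairs with the EXIT trap added in the first hunk: the trap cancels the SLURM job only if the flag was never set, so a normal exit leaves the job alone while an abnormal exit triggers scancel. A stripped-down sketch of that pattern; only the variable names come from the diff, the rest is illustrative:

#!/usr/bin/env bash
set -euo pipefail

monitor_success=0
job_id=123456          # illustrative job id

cleanup() {
    # Runs on every exit; cancel only if the success path was never reached.
    if [ "${monitor_success:-0}" -ne 1 ]; then
        echo "abnormal exit, would cancel SLURM job $job_id here"
        # scancel "$job_id" 2>/dev/null || true
    fi
}
trap cleanup EXIT

# ... monitoring work would happen here ...

monitor_success=1      # reached only if everything above succeeded
echo "done"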