Add challenge 80: Grouped Query Attention (Medium) #215
Merged
kunal-mansukhani merged 2 commits into main on Mar 11, 2026
Conversation
Implements a GQA forward pass challenge inspired by real-world LLM inference (LLaMA-3, Mistral, Gemma). Solvers must correctly handle Q/K/V tensors with different head counts and implement scaled dot-product attention with softmax over grouped KV heads.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
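For concreteness, here is a minimal sketch of the kind of GQA forward pass the challenge describes, assuming PyTorch tensors in a `[batch, heads, seq, dim]` layout. The function name and layout are illustrative assumptions, not the challenge's actual `reference_impl`:

```python
# Minimal GQA forward-pass sketch (illustrative, not the challenge's code).
import math
import torch

def gqa_forward(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    # q: [B, num_q_heads, S, D]; k, v: [B, num_kv_heads, S, D]
    num_q_heads, num_kv_heads = q.shape[1], k.shape[1]
    group_size = num_q_heads // num_kv_heads  # consecutive Q heads per KV head

    # Expand each KV head so it is shared by its group of consecutive Q heads.
    k = k.repeat_interleave(group_size, dim=1)  # -> [B, num_q_heads, S, D]
    v = v.repeat_interleave(group_size, dim=1)

    # Standard scaled dot-product attention, now with aligned head counts.
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.shape[-1])  # [B, H, S, S]
    weights = torch.softmax(scores, dim=-1)
    return weights @ v  # [B, num_q_heads, S, D]
```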
shxjames previously approved these changes on Mar 10, 2026
- Add missing output matrices to the Example section (required by checklist)
- Convert example from <pre> notation to LaTeX \begin{bmatrix} for all Q, K, V, and output head matrices (required for 2D/3D data per CLAUDE.md)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
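For illustration, the `<pre>`-to-LaTeX conversion described in this commit presumably renders each head matrix in this form; the label and entries below are made up, not the challenge's actual example values:

```latex
Q_{\text{head } 0} =
\begin{bmatrix}
  0.1 & 0.2 \\
  0.3 & 0.4
\end{bmatrix}
```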
shxjames approved these changes on Mar 11, 2026
Summary
Groups of num_q_heads / num_kv_heads consecutive Q heads attend to the same K/V head, requiring correct understanding of memory layout, head grouping, and softmax normalization.
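A tiny sketch of that grouping, using the sizes from the checklist below (illustrative names, not the challenge's code):

```python
# Map each Q head to the KV head its group shares, assuming the
# checklist sizes num_q_heads=32 and num_kv_heads=8.
num_q_heads, num_kv_heads = 32, 8
group_size = num_q_heads // num_kv_heads       # 4 consecutive Q heads per KV head
kv_head_for_q = [q // group_size for q in range(num_q_heads)]
assert kv_head_for_q[:5] == [0, 0, 0, 0, 1]    # Q heads 0-3 share KV head 0
```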
Checklist
- challenge.html starts with <p>, has <h2> sections for Implementation Requirements, Example, Constraints
- Example values match generate_example_test(); <pre> used for 1D data (consistent)
- num_q_heads=32, num_kv_heads=8, seq_len=1024, head_dim=128 (#222 background)
- challenge.py inherits ChallengeBase, all 6 methods present
- reference_impl has assertions on shape, dtype, device (hypothetical sketch after this list)
- generate_functional_test returns 10 cases covering edge cases, powers-of-2, non-powers-of-2, MQA, MHA-equivalent, zero inputs, realistic sizes
- run_challenge.py --action submit: all tests pass on NVIDIA Tesla T4

🤖 Generated with Claude Code
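As a rough illustration of the shape/dtype/device assertions the checklist refers to (the actual reference_impl is not shown on this page, so the function and messages below are assumptions):

```python
# Hypothetical input checks in the style the checklist describes.
import torch

def check_inputs(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> None:
    assert q.dim() == k.dim() == v.dim() == 4, "expect [B, H, S, D] tensors"
    assert k.shape == v.shape, "K and V must share a shape"
    assert q.shape[1] % k.shape[1] == 0, "num_q_heads must divide evenly into KV groups"
    assert q.dtype == k.dtype == v.dtype, "dtypes must match"
    assert q.device == k.device == v.device, "devices must match"
```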