Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in your CodeRabbit settings.
Pre-merge checks: 2 passed ✅, 1 failed ❌ (1 warning).
Caution
Changes requested ❌
Reviewed everything up to 987d252 in 2 minutes and 27 seconds.

- Reviewed 132 lines of code in 3 files
- Skipped 0 files when reviewing
- Skipped posting 2 draft comments; view those below
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. semantic_conventions_ai/lib/opentelemetry/semantic_conventions.rb:30
   - Draft comment: New GEN_AI constants are added. Please ensure documentation is updated to clarify when to use these over the existing LLM constants.
   - Reason this comment was not posted: Comment did not seem useful. Confidence: useful = 0% <= threshold 50%. The comment is asking the PR author to ensure that documentation is updated, which violates the rule against asking the author to ensure something is done. It doesn't provide a specific code suggestion or point out a specific issue with the code itself.
2. traceloop-sdk/traceloop-sdk.gemspec:20
   - Draft comment: Dependency versions for opentelemetry-exporter-otlp and opentelemetry-sdk have been updated. Ensure these versions are compatible with all of the newly introduced GEN_AI features and any related semantic changes.
   - Reason this comment was not posted: Comment did not seem useful. Confidence: useful = 0% <= threshold 50%. This comment is asking the PR author to ensure compatibility of dependency versions with new features, which violates the rule against asking for confirmation or verification of intentions. It does not provide a specific suggestion or point out a specific issue.
Workflow ID: wflow_mzK0dcLcsfaIhT8h
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
traceloop-sdk/lib/traceloop/sdk.rb
Outdated
```ruby
def log_ruby_llm_halt(response)
  @span.add_attributes({
    OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_RESPONSE_MODEL => @model,
    # …
```
In log_ruby_llm_halt, the response model key uses LLM_RESPONSE_MODEL, whereas log_ruby_llm_message uses GEN_AI_RESPONSE_MODEL. For consistency, consider switching to GEN_AI_RESPONSE_MODEL.
Actionable comments posted: 3
🧹 Nitpick comments (2)
traceloop-sdk/lib/traceloop/sdk.rb (2)
62-66: Consider using `is_a?` instead of `instance_of?` for type checking.

The code uses `instance_of?`, which only matches the exact class and not subclasses. If RubyLLM has any subclasses of `Message` or `Tool::Halt`, they won't be caught by this check. Unless there's a specific reason to exclude subclasses, consider using `is_a?` for more flexible type checking:

```diff
- elsif response.instance_of?(::RubyLLM::Message)
+ elsif response.is_a?(::RubyLLM::Message)
    log_ruby_llm_message(response)
- elsif response.instance_of?(::RubyLLM::Tool::Halt)
+ elsif response.is_a?(::RubyLLM::Tool::Halt)
    log_ruby_llm_halt(response)
```
87-95: RubyLLM message logging looks good, but consider nil handling.

The implementation correctly uses the new `GEN_AI_*` semantic conventions and extracts the relevant attributes from the RubyLLM message. Consider adding nil checks or safe navigation for robustness, especially for optional fields like token counts:

```diff
  def log_ruby_llm_message(response)
    @span.add_attributes({
      OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_RESPONSE_MODEL => response.model_id,
-     OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_USAGE_COMPLETION_TOKENS => response.output_tokens,
-     OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_USAGE_PROMPT_TOKENS => response.input_tokens,
+     OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_USAGE_COMPLETION_TOKENS => response.output_tokens || 0,
+     OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_USAGE_PROMPT_TOKENS => response.input_tokens || 0,
      "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_COMPLETIONS}.0.role" => response.role.to_s,
      "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_COMPLETIONS}.0.content" => response.content
    })
  end
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (3)
- semantic_conventions_ai/lib/opentelemetry/semantic_conventions.rb (1 hunks)
- traceloop-sdk/lib/traceloop/sdk.rb (4 hunks)
- traceloop-sdk/traceloop-sdk.gemspec (1 hunks)
🔇 Additional comments (4)
traceloop-sdk/lib/traceloop/sdk.rb (3)
69-69: Good defensive check for Gemini responses.

Adding `respond_to?(:has_key?)` before calling `has_key?` prevents potential NoMethodError exceptions for response types that don't support hash-like access.
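The guard pattern reduces to a one-line predicate. This is a minimal sketch; the method name and the `"candidates"` key are illustrative, not taken from the SDK:

```ruby
# Only treat the response as hash-like when it actually supports has_key?.
def hash_like_with?(response, key)
  response.respond_to?(:has_key?) && response.has_key?(key)
end

hash_like_with?({ "candidates" => [] }, "candidates")  # => true
hash_like_with?(Object.new, "candidates")              # => false, no NoMethodError
```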
152-158: Good migration to Gen AI semantic conventions in llm_call.

The migration from `LLM_REQUEST_MODEL` to `GEN_AI_REQUEST_MODEL` and the addition of `GEN_AI_SYSTEM` for the provider align with the new semantic conventions being introduced.
43-57: Code under review is correct; verify if broader `LLM_*`/`GEN_AI_*` consistency is in scope for this PR.

The `log_prompt` method (lines 43-57) correctly uses `GEN_AI_PROMPTS` constants. However, the verification reveals mixed usage across other methods in the same file:

- `log_gemini_response`: still uses `LLM_*` constants only (lines 78, 82–83)
- `log_bedrock_response` and `log_openai_response`: use both `LLM_*` and `GEN_AI_*` constants (partial migration)

This indicates an incomplete refactoring. Confirm whether this PR should address the full consistency migration to `GEN_AI_*` across all methods, or if the mixed usage is intentional for this phase.

traceloop-sdk/traceloop-sdk.gemspec (1)
20-21: Dependency versions verified: both are valid and safe.

Both `opentelemetry-sdk` 1.10.0 and `opentelemetry-exporter-otlp` 0.31.1 are legitimate releases on rubygems with no security advisories. No breaking API changes were found between the updated versions (1.9.0 → 1.10.0 and 0.30.x → 0.31.1). Both gems require Ruby ≥ 3.1 and are compatible with the existing codebase.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
traceloop-sdk/lib/traceloop/sdk.rb (2)
36-44: Inconsistent semantic convention - migrate to GEN_AI_PROMPTS.

The `log_messages` method uses `LLM_PROMPTS` (lines 40-41) while `log_prompt` (lines 49-57) uses `GEN_AI_PROMPTS`. This inconsistency may complicate telemetry querying and analysis. Apply this diff to align with the GEN_AI conventions:

```diff
  def log_messages(messages)
    messages.each_with_index do |message, index|
      content = message[:content].is_a?(Array) ? message[:content].to_json : (message[:content] || "")
      @span.add_attributes({
-       "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_PROMPTS}.#{index}.role" => message[:role] || "user",
-       "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_PROMPTS}.#{index}.content" => content,
+       "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_PROMPTS}.#{index}.role" => message[:role] || "user",
+       "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_PROMPTS}.#{index}.content" => content,
      })
    end
  end
```
79-89: Inconsistent semantic conventions and awkward formatting.

This method uses `LLM_RESPONSE_MODEL` and `LLM_COMPLETIONS` while the new RubyLLM methods use `GEN_AI_RESPONSE_MODEL` and `GEN_AI_COMPLETIONS`. Additionally, the `dig` call formatting (lines 86-87) is awkward and reduces readability. Apply this diff to align with GEN_AI conventions and improve formatting:

```diff
  def log_gemini_response(response)
    @span.add_attributes({
-     OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_RESPONSE_MODEL => @model,
+     OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_RESPONSE_MODEL => @model,
    })
    @span.add_attributes({
-     "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_COMPLETIONS}.0.role" => "assistant",
-     "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_COMPLETIONS}.0.content" => response.dig(
-       "candidates", 0, "content", "parts", 0, "text")
+     "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_COMPLETIONS}.0.role" => "assistant",
+     "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_COMPLETIONS}.0.content" => response.dig("candidates", 0, "content", "parts", 0, "text")
    })
  end
```
🧹 Nitpick comments (2)
traceloop-sdk/lib/traceloop/sdk.rb (2)
109-133: Consider migrating to GEN_AI semantic conventions.

This method uses legacy `LLM_*` constants (`LLM_RESPONSE_MODEL`, `LLM_USAGE_*`, `LLM_COMPLETIONS`) while the new RubyLLM methods use `GEN_AI_*` constants. For consistency across providers and to align with current best practices, consider migrating to the GEN_AI conventions.
135-157: Consider migrating to GEN_AI conventions and improving formatting.

Similar to the Bedrock and Gemini methods, this uses legacy `LLM_*` constants. Additionally, the formatting with line breaks (lines 141-146, 151-154) reduces readability. Consider refactoring for consistency and clarity. Example refactor for cleaner formatting:

```ruby
def log_openai_response(response)
  @span.add_attributes({
    OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_RESPONSE_MODEL => response.dig("model"),
  })
  if response.has_key?("usage")
    @span.add_attributes({
      OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_USAGE_TOTAL_TOKENS => response.dig("usage", "total_tokens"),
      OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_USAGE_COMPLETION_TOKENS => response.dig("usage", "completion_tokens"),
      OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_USAGE_PROMPT_TOKENS => response.dig("usage", "prompt_tokens"),
    })
  end
  if response.has_key?("choices")
    @span.add_attributes({
      "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_COMPLETIONS}.0.role" => response.dig("choices", 0, "message", "role"),
      "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_COMPLETIONS}.0.content" => response.dig("choices", 0, "message", "content")
    })
  end
end
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
traceloop-sdk/lib/traceloop/sdk.rb (5 hunks)
🔇 Additional comments (7)
traceloop-sdk/lib/traceloop/sdk.rb (7)
9-10: LGTM! Good defensive validation.

The API key validation provides a clear error message early in initialization, preventing downstream failures with unclear error messages.
14-21: LGTM! Proper production configuration.

The switch to `BatchSpanProcessor` with string-keyed headers is appropriate for production use and addresses previous review concerns.
46-60: LGTM! Proper migration to GEN_AI conventions.

The migration from `LLM_PROMPTS` to `GEN_AI_PROMPTS` is correctly implemented for both the system+user and user-only prompt paths.
62-77: LGTM! Proper type detection for RubyLLM responses.

The RubyLLM type checks use fully-qualified names and `is_a?` for robust type detection. The added `has_key?` guard for Gemini (line 72) prevents potential errors with hash-like objects.
91-99: LGTM! Well-implemented RubyLLM message logging.

The method correctly uses GEN_AI semantic conventions and includes defensive fallbacks for token counts. The use of `response.model_id` is appropriate for capturing the actual model used.
101-107: LGTM! Correct implementation of halt response logging.

The method appropriately uses GEN_AI semantic conventions and logs the halt response with a "tool" role. The use of `@model` (instead of `response.model_id`) is likely correct if `Halt` objects don't expose model information.
160-168: LGTM! Proper migration to GEN_AI conventions.

The migration to `GEN_AI_REQUEST_MODEL` and the addition of `GEN_AI_SYSTEM` for provider tracking are appropriate improvements for telemetry.
Actionable comments posted: 1
🧹 Nitpick comments (6)
semantic_conventions_ai/lib/opentelemetry/semantic_conventions.rb (1)
34-43: Avoid introducing new usages of deprecated GenAI attributes (`gen_ai.completion`, `gen_ai.prompt`, `gen_ai.system`).

The new constants:

- `GEN_AI_COMPLETIONS = "gen_ai.completion"`
- `GEN_AI_PROMPTS = "gen_ai.prompt"`
- `GEN_AI_SYSTEM = "gen_ai.system"`

are all wired to attributes that the current OpenTelemetry GenAI spec explicitly marks as deprecated, with `gen_ai.system` replaced by `gen_ai.provider.name` and `gen_ai.prompt`/`gen_ai.completion` removed with no attribute replacement (content should go via events or `gen_ai.input.messages`/`gen_ai.output.messages`). (opentelemetry.io)

Using these for new instrumentation will make your traces look "legacy" to GenAI-aware backends and risks future incompatibility.
I'd recommend:

- Prefer `GEN_AI_PROVIDER` ("gen_ai.provider.name") over `GEN_AI_SYSTEM`, and drop or clearly deprecate `GEN_AI_SYSTEM`.
- For prompts/completions, either:
  - keep these constants but mark them as deprecated/compat-only, or
  - migrate toward `gen_ai.input.messages`/`gen_ai.output.messages` (and/or GenAI content events) for new features.

If you want, I can propose a concrete mapping (including potential `GEN_AI_INPUT_MESSAGES`/`GEN_AI_OUTPUT_MESSAGES` constants) that aligns with the current spec while keeping backward compatibility.

traceloop-sdk/lib/traceloop/sdk.rb (5)
9-23: Initialization & OTLP exporter configuration look solid, with one small ergonomics note.

- Pre-validating `TRACELOOP_API_KEY` and raising early avoids obscure failures during exporter setup.
- Switching to `BatchSpanProcessor` and using a string `"Authorization"` header key are appropriate for production use.
- Using `TRACELOOP_AUTH_SCHEME` to parameterize the auth scheme is a nice touch.

Only minor consideration: `puts "Traceloop exporting traces to ..."` will print on every SDK initialization; if this is used in long-running server apps or CLIs, consider switching to a proper logger or gating behind an env flag so users can silence it if needed.
49-57: GEN_AI_PROMPTS constant is backed by deprecated `gen_ai.prompt` attribute.

The move from `LLM_PROMPTS` to `GEN_AI_PROMPTS` in `log_prompt` is consistent with the new constants you added, but note that the underlying attribute value is still `"gen_ai.prompt"`, which the current GenAI spec classifies as deprecated (with content recommended to be sent via events or `gen_ai.input.messages`). (opentelemetry.io)

This is fine as a compatibility path, but you may want to:

- Explicitly document that `GEN_AI_PROMPTS` (and the structured `*.N.role`/`content` pattern) is legacy / compat-oriented.
- Plan a follow-up that emits prompt content via the newer conventions (`gen_ai.input.messages`/`gen_ai.system_instructions` or GenAI content events), especially if you're targeting tools that expect the latest GenAI semantics.
91-99: RubyLLM message logging is well structured; consider when to emit token attributes.

The mapping here looks good:

- `GEN_AI_RESPONSE_MODEL` ← `response.model_id`
- `GEN_AI_USAGE_OUTPUT_TOKENS` ← `response.output_tokens`
- `GEN_AI_USAGE_INPUT_TOKENS` ← `response.input_tokens`
- `GEN_AI_COMPLETIONS.*` is populated from the RubyLLM message role/content.

This aligns with GenAI conventions for `gen_ai.request/response.model` and `gen_ai.usage.{input,output}_tokens`. (opentelemetry.io)

One small nuance: you coerce missing token counts to zero:

```ruby
GEN_AI_USAGE_OUTPUT_TOKENS => response.output_tokens || 0,
GEN_AI_USAGE_INPUT_TOKENS => response.input_tokens || 0,
```

If nil means "provider didn't report tokens", emitting 0 could be misleading in downstream analytics (distinguishing "no tokens" from "unknown"). Consider only setting these attributes when the value is non-nil:

```diff
 attributes = {
   OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_RESPONSE_MODEL => response.model_id,
-  OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_USAGE_OUTPUT_TOKENS => response.output_tokens || 0,
-  OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_USAGE_INPUT_TOKENS => response.input_tokens || 0,
-}
+}.tap do |attrs|
+  attrs[OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_USAGE_OUTPUT_TOKENS] = response.output_tokens if response.output_tokens
+  attrs[OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_USAGE_INPUT_TOKENS] = response.input_tokens if response.input_tokens
+end
```

Not a blocker, but it will make telemetry more faithful.
101-107: RubyLLM halt logging is consistent; potential enhancement if model_id is available.

Using:

- `GEN_AI_RESPONSE_MODEL` => `@model`
- `GEN_AI_COMPLETIONS.*` with role "tool" and `response.content`

is consistent with your RubyLLM message logging and provides enough information to understand the tool halt result.

If `RubyLLM::Tool::Halt` ever carries a `model_id` (now or in future RubyLLM versions), it might be slightly more accurate to prefer that over `@model` when present, e.g.:

```ruby
model_id = response.respond_to?(:model_id) && response.model_id ? response.model_id : @model
@span.add_attributes({
  OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_RESPONSE_MODEL => model_id,
  # ...
})
```

Not required now, but worth keeping in mind as RubyLLM evolves.
160-172: llm_call: consider GenAI-required attributes, deprecated `gen_ai.system`, and API compatibility.

This block is a good step toward GenAI conventions, but there are a few points to tighten:

- Missing `gen_ai.operation.name`: the GenAI span spec treats `gen_ai.operation.name` as a required attribute (e.g., "chat", "generate_content", "text_completion"). (opentelemetry.io) Since the span name is `"#{provider}.chat"`, you can safely set:

  ```ruby
  attributes[:"gen_ai.operation.name"] = "chat"
  ```

  (via a constant if you add one in the semantic conventions module).

- `gen_ai.system` vs `gen_ai.provider.name`: you currently set both:

  ```ruby
  GEN_AI_SYSTEM => provider,
  GEN_AI_PROVIDER => provider,
  ```

  but `gen_ai.system` is deprecated in favor of `gen_ai.provider.name` per the registry. (opentelemetry.io) I'd suggest dropping `GEN_AI_SYSTEM` here (or clearly marking it legacy) and relying on `GEN_AI_PROVIDER` as the canonical attribute.

- Potential breaking change in method signature: changing from a positional third argument to a keyword:

  ```ruby
  def llm_call(provider, model, conversation_id: nil)
  ```

  will break any existing callers that previously passed `conversation_id` positionally (`llm_call("openai", "gpt-4o", conv_id)`). If this method is part of the public SDK surface, consider either:

  - keeping a positional third arg for backward compatibility, or
  - documenting this as a breaking change in release notes / bumping the major version.

Overall, the direction is good; these tweaks will make the spans more in line with the official GenAI conventions and reduce surprises for existing users.
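One way to keep both calling styles working is to accept an optional positional argument alongside the keyword. This is a sketch only; the method body is illustrative, not the SDK's actual implementation:

```ruby
# Accept conversation_id positionally (old callers) or as a keyword (new
# callers); the keyword wins when both are given.
def llm_call(provider, model, positional_id = nil, conversation_id: nil)
  { provider: provider, model: model, conversation_id: conversation_id || positional_id }
end

llm_call("openai", "gpt-4o", "conv-1")                  # old positional style
llm_call("openai", "gpt-4o", conversation_id: "conv-1") # new keyword style
```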
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)
- semantic_conventions_ai/lib/opentelemetry/semantic_conventions.rb (1 hunks)
- traceloop-sdk/lib/traceloop/sdk.rb (4 hunks)
🔇 Additional comments (2)
traceloop-sdk/lib/traceloop/sdk.rb (2)
141-146: OpenAI usage token attributes reformatting is fine.

These lines just reflow the existing `LLM_USAGE_*` attributes over multiple lines without changing keys or values. Behavior remains the same and is consistent with how Bedrock/Gemini usage is reported in this file.

151-154: OpenAI completion attributes reformatting is fine.

Same here: the `LLM_COMPLETIONS.*` attributes are unchanged apart from formatting, matching the existing pattern used for Bedrock and Gemini completions.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
traceloop-sdk/lib/traceloop/sdk.rb (4)
36-44: Migrate to GEN_AI_PROMPTS for consistency.

The `log_messages` method still uses `LLM_PROMPTS` while `log_prompt` (lines 49-57) uses `GEN_AI_PROMPTS`. This creates inconsistent telemetry: different code paths will report prompt attributes under different names, complicating trace analysis. Apply this diff to align with the GEN_AI conventions:

```diff
  def log_messages(messages)
    messages.each_with_index do |message, index|
      content = message[:content].is_a?(Array) ? message[:content].to_json : (message[:content] || "")
      @span.add_attributes({
-       "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_PROMPTS}.#{index}.role" => message[:role] || "user",
-       "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_PROMPTS}.#{index}.content" => content,
+       "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_PROMPTS}.#{index}.role" => message[:role] || "user",
+       "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_PROMPTS}.#{index}.content" => content,
      })
    end
  end
```
79-89: Migrate to GEN_AI_* constants for consistency.

The `log_gemini_response` method uses legacy `LLM_RESPONSE_MODEL` and `LLM_COMPLETIONS` constants, while the new RubyLLM methods use `GEN_AI_RESPONSE_MODEL` and `GEN_AI_COMPLETIONS`. This inconsistency means Gemini traces will report under different attribute names, fragmenting telemetry data. Apply this diff to align with the new conventions:

```diff
  def log_gemini_response(response)
    @span.add_attributes({
-     OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_RESPONSE_MODEL => @model,
+     OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_RESPONSE_MODEL => @model,
    })
    @span.add_attributes({
-     "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_COMPLETIONS}.0.role" => "assistant",
-     "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_COMPLETIONS}.0.content" => response.dig(
+     "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_COMPLETIONS}.0.role" => "assistant",
+     "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_COMPLETIONS}.0.content" => response.dig(
        "candidates", 0, "content", "parts", 0, "text")
    })
  end
```
109-133: Migrate to GEN_AI_* constants for consistency.

The `log_bedrock_response` method uses legacy `LLM_*` constants throughout (lines 113, 120-122, 127-128), while the new RubyLLM methods use `GEN_AI_*` conventions. This fragmentation means Bedrock, RubyLLM, OpenAI, and Gemini traces will all report under different attribute names, severely impacting trace analysis and observability. Apply this diff to align with the new conventions:

```diff
  def log_bedrock_response(response)
    body = JSON.parse(response.body.read())
    @span.add_attributes({
-     OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_RESPONSE_MODEL => body.dig("model"),
+     OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_RESPONSE_MODEL => body.dig("model"),
    })
    if body.has_key?("usage")
      input_tokens = body.dig("usage", "input_tokens")
      output_tokens = body.dig("usage", "output_tokens")
      @span.add_attributes({
-       OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_USAGE_TOTAL_TOKENS => input_tokens + output_tokens,
-       OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_USAGE_COMPLETION_TOKENS => output_tokens,
-       OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_USAGE_PROMPT_TOKENS => input_tokens,
+       OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_USAGE_TOTAL_TOKENS => input_tokens + output_tokens,
+       OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_USAGE_OUTPUT_TOKENS => output_tokens,
+       OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_USAGE_INPUT_TOKENS => input_tokens,
      })
    end
    if body.has_key?("content")
      @span.add_attributes({
-       "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_COMPLETIONS}.0.role" => body.dig("role"),
-       "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_COMPLETIONS}.0.content" => body.dig("content").first.dig("text")
+       "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_COMPLETIONS}.0.role" => body.dig("role"),
+       "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_COMPLETIONS}.0.content" => body.dig("content").first.dig("text")
      })
    end
    response.body.rewind()
  end
```

Note: Also aligned token attribute names to match the GEN_AI conventions (e.g., `GEN_AI_USAGE_OUTPUT_TOKENS` instead of `LLM_USAGE_COMPLETION_TOKENS`).
135-157: Migrate to GEN_AI_* constants for consistency.

The `log_openai_response` method uses legacy `LLM_*` constants throughout (lines 137, 141-146, 151-154). Since OpenAI is likely the most commonly used provider, this inconsistency with the new GEN_AI conventions (used by RubyLLM methods) is particularly impactful for telemetry uniformity. Apply this diff to align with the new conventions:

```diff
  def log_openai_response(response)
    @span.add_attributes({
-     OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_RESPONSE_MODEL => response.dig("model"),
+     OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_RESPONSE_MODEL => response.dig("model"),
    })
    if response.has_key?("usage")
      @span.add_attributes({
-       OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_USAGE_TOTAL_TOKENS => response.dig("usage", "total_tokens"),
-       OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_USAGE_COMPLETION_TOKENS => response.dig("usage", "completion_tokens"),
-       OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_USAGE_PROMPT_TOKENS => response.dig("usage", "prompt_tokens"),
+       OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_USAGE_TOTAL_TOKENS => response.dig("usage", "total_tokens"),
+       OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_USAGE_OUTPUT_TOKENS => response.dig("usage", "completion_tokens"),
+       OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_USAGE_INPUT_TOKENS => response.dig("usage", "prompt_tokens"),
      })
    end
    if response.has_key?("choices")
      @span.add_attributes({
-       "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_COMPLETIONS}.0.role" => response.dig("choices", 0, "message", "role"),
-       "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_COMPLETIONS}.0.content" => response.dig("choices", 0, "message", "content")
+       "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_COMPLETIONS}.0.role" => response.dig("choices", 0, "message", "role"),
+       "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_COMPLETIONS}.0.content" => response.dig("choices", 0, "message", "content")
      })
    end
  end
```

Note: Also aligned token attribute names to match the GEN_AI conventions (e.g., `GEN_AI_USAGE_OUTPUT_TOKENS` and `GEN_AI_USAGE_INPUT_TOKENS`).
🧹 Nitpick comments (3)
traceloop-sdk/lib/traceloop/sdk.rb (3)
62-77: LGTM! RubyLLM constant guards prevent NameError.

The `defined?` checks on lines 66 and 68 properly guard against NameError when RubyLLM is not loaded, addressing the critical issue from previous reviews. Optional refinement: line 68 could additionally check `defined?(::RubyLLM::Tool)` before `defined?(::RubyLLM::Tool::Halt)` for extra safety, though the current implementation should work in practice.
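The guard works because `defined?` returns nil (rather than raising) when a constant path does not resolve, and `&&` short-circuits before the constant is ever referenced. A minimal sketch, runnable without RubyLLM installed:

```ruby
# Reference an optional dependency's constant safely: when RubyLLM is not
# loaded, defined? yields nil and the right-hand side is never evaluated,
# so no NameError is raised.
def ruby_llm_message?(response)
  defined?(::RubyLLM::Message) && response.is_a?(::RubyLLM::Message)
end

# With RubyLLM absent, the check is simply falsy instead of raising:
ruby_llm_message?("anything")  # => nil
```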
91-99: LGTM! Proper use of GEN_AI_* conventions with defensive nil handling.

The method correctly uses the new semantic conventions and provides fallback values for token counts (lines 94-95). The `.to_s` conversion on `response.role` (line 96) adds safety. Optional: consider adding nil fallbacks for `model_id` and `content` similar to the token attributes, e.g., `response.model_id || "unknown"` and `response.content || ""`, for extra defensive coding.
101-107: Consider using response.model_id for consistency.

While `log_ruby_llm_message` uses `response.model_id` (line 93), this method uses `@model` for the response model attribute. If `RubyLLM::Tool::Halt` provides a `model_id` method, using it would improve consistency. Optional diff if `RubyLLM::Tool::Halt` supports `model_id`:

```diff
  def log_ruby_llm_halt(response)
    @span.add_attributes({
-     OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_RESPONSE_MODEL => @model,
+     OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_RESPONSE_MODEL => response.respond_to?(:model_id) ? response.model_id : @model,
      "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_COMPLETIONS}.0.role" => "tool",
      "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_COMPLETIONS}.0.content" => response.content
    })
  end
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
traceloop-sdk/lib/traceloop/sdk.rb (4 hunks)
🔇 Additional comments (3)
traceloop-sdk/lib/traceloop/sdk.rb (3)
8-27: LGTM! Initialization and configuration improvements are solid.

The switch to `BatchSpanProcessor` is appropriate for production use, and the API key validation provides clear feedback. The string-based headers and configurable auth scheme/base URL enhance flexibility.
46-60: LGTM! Proper migration to GEN_AI_PROMPTS.

The method correctly migrated to the new semantic conventions and handles both system+user and user-only prompt scenarios appropriately.
160-175: LGTM! Proper conversation tracking with GEN_AI_* conventions.

The addition of the optional `conversation_id` parameter enhances conversation tracking capabilities. The conditional attribute addition (lines 168-170) and consistent use of `GEN_AI_*` constants align well with the new semantic conventions.
nirga
left a comment
hey @aniamisiorek thanks for this - can you explain the changes here?
```ruby
# Deprecated
TRACELOOP_CORRELATION_ID = "traceloop.correlation.id"

# Gen AI
```
and remove references to llm semantics?
traceloop-sdk/lib/traceloop/sdk.rb
Outdated
```diff
  endpoint: "#{ENV.fetch("TRACELOOP_BASE_URL", "https://api.traceloop.com")}/v1/traces",
- headers: { "Authorization" => "Bearer #{ENV.fetch("TRACELOOP_API_KEY")}" }
+ headers: {
+   "Authorization" => "#{ENV.fetch("TRACELOOP_AUTH_SCHEME", "Bearer")} #{ENV.fetch("TRACELOOP_API_KEY")}"
```
Authorization headers may vary in type; e.g., for our Dynatrace exporter, they use Api-Token.
Vmad 3322
Adding guardrails
Actionable comments posted: 4
♻️ Duplicate comments (1)
semantic_conventions_ai/lib/opentelemetry/semantic_conventions.rb (1)
39-40: ⚠️ Potential issue | 🟠 Major

`GEN_AI_COMPLETIONS` and `GEN_AI_PROMPTS` still carry deprecated attribute values.

In the latest (v1.38.0) OpenTelemetry GenAI Semantic Conventions, both `gen_ai.prompt` and `gen_ai.completion` are explicitly marked as deprecated and removed from the specification. These constants should not be introduced as first-class API in new code.
gen_ai.promptandgen_ai.completionare explicitly marked as deprecated and removed from the specification. These constants should not be introduced as first-class API in new code.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@semantic_conventions_ai/lib/opentelemetry/semantic_conventions.rb` around lines 39 - 40, Remove the two deprecated constants GEN_AI_COMPLETIONS and GEN_AI_PROMPTS from semantic_conventions.rb (they introduce the removed attributes "gen_ai.completion" and "gen_ai.prompt"); delete their definitions and any references/usages of GEN_AI_COMPLETIONS or GEN_AI_PROMPTS in the codebase (or replace usages with the current supported semantic convention names), and run tests to ensure no remaining references break compilation.
🧹 Nitpick comments (3)
traceloop-sdk/lib/traceloop/sdk.rb (3)
289-304: Both the deprecated `GEN_AI_SYSTEM` and its replacement `GEN_AI_PROVIDER` are set to the identical `provider` value.

The `gen_ai.system` attribute has been deprecated in the OTel semconv releases. `gen_ai.provider.name` is the current discriminator and SHOULD be set consistently with provider-specific attributes. If emitting both is intentional for backward compatibility with older consumers, add a comment to that effect. Otherwise, drop `GEN_AI_SYSTEM`. The same redundancy applies in the `guardrail` method at lines 349–350.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@traceloop-sdk/lib/traceloop/sdk.rb` around lines 289 - 304, The llm_call method currently sets both deprecated GEN_AI_SYSTEM and the replacement GEN_AI_PROVIDER to the same provider value; remove the redundant GEN_AI_SYSTEM attribute (or if you intend backward compatibility, add an explicit comment stating that and why) and ensure only OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_PROVIDER is populated with the provider name; apply the same change in the guardrail method so GEN_AI_PROVIDER is the authoritative attribute and GEN_AI_SYSTEM is either removed or documented as kept for backward compatibility.
40-41: `puts` in library/SDK code pollutes callers' stdout unconditionally.

There is no way for users to suppress this output. Replace with a proper `Logger` or at minimum `OpenTelemetry.logger`.

♻️ Proposed fix

```diff
- puts "Traceloop exporting traces to #{ENV.fetch("TRACELOOP_BASE_URL", "https://api.traceloop.com")}"
- puts "Service name: #{@service_name}"
+ logger = OpenTelemetry.logger
+ logger.debug "Traceloop exporting traces to #{ENV.fetch("TRACELOOP_BASE_URL", "https://api.traceloop.com")}"
+ logger.debug "Service name: #{@service_name}"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@traceloop-sdk/lib/traceloop/sdk.rb` around lines 40 - 41, The two unconditional puts calls in traceloop-sdk/lib/traceloop/sdk.rb (the lines that print TRACELOOP_BASE_URL and `@service_name`) should be replaced with a proper logger call so library code doesn't pollute callers' stdout; use OpenTelemetry.logger.info (or an internal Logger instance) to emit these messages (e.g., OpenTelemetry.logger.info("Traceloop exporting traces to ...") and OpenTelemetry.logger.info("Service name: #{@service_name}")), ensure any required logger is required/initialized where the SDK is initialized and preserve the same message text and interpolation but no longer call puts directly.
229-286: RuboCop flags excessive cyclomatic complexity on all three private guardrail helpers.
`guardrail_activation` (8/7), `guardrail_blocked_words` (12/7), and `guardrail_content_filtered` (8/7) all exceed the threshold. The main driver in `guardrail_blocked_words` is the duplicated block that processes both `custom_words` and `managed_word_lists`; extract a shared helper to reduce both ABC size and cyclomatic complexity.

♻️ Proposed extract for `guardrail_blocked_words`

```diff
 def guardrail_blocked_words(r)
   assessments = r["assessments"] || []
   total = 0
   blocked_words = []
   assessments.each do |a|
     word_policy = a["word_policy"] || {}
-    # custom_words: [{ "match" => "API", "action" => "BLOCKED", "detected" => true }]
-    custom_words = word_policy["custom_words"] || []
-    custom_words.each do |cw|
-      if cw["detected"] == true || cw["action"] == "BLOCKED"
-        total += 1
-        blocked_words.append(cw["match"])
-      end
-    end
-
-    managed_lists = word_policy["managed_word_lists"] || []
-    managed_lists.each do |entry|
-      if entry["detected"] == true || entry["action"] == "BLOCKED"
-        total += 1
-        blocked_words.append(entry["match"])
-      end
-    end
+    entries = (word_policy["custom_words"] || []) + (word_policy["managed_word_lists"] || [])
+    entries.each do |e|
+      next unless e["detected"] == true || e["action"] == "BLOCKED"
+      total += 1
+      blocked_words << e["match"]
+    end
   end
   [total, blocked_words]
 end
```
Verify each finding against the current code and only fix it if needed. In `@traceloop-sdk/lib/traceloop/sdk.rb` around lines 229 - 286, RuboCop flags high cyclomatic complexity in guardrail_activation, guardrail_blocked_words, and guardrail_content_filtered; reduce complexity by extracting a shared helper used by guardrail_blocked_words to process word entries and by simplifying boolean checks in the other methods. Create a private helper (e.g., process_word_entries or tally_blocked_words) that accepts an array of entries and returns [count, matches] or yields increments so guardrail_blocked_words can call it twice (once for "custom_words" and once for "managed_word_lists") instead of duplicating logic; then update guardrail_blocked_words to aggregate the returned counts and matches. Also simplify guardrail_activation to a concise boolean expression (use .positive? on the summed units or Enumerable#any?) and simplify guardrail_content_filtered by using Enumerable#each/any with dig and an early return to lower complexity.
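As a standalone sketch of the suggested extraction; the helper name (`tally_blocked_words`) and the entry shape follow the review's example data and are not part of the actual SDK:

```ruby
# Hypothetical shared helper: tally entries that were detected or blocked.
# Works for both "custom_words" and "managed_word_lists" entries, which share
# the { "match" => ..., "action" => ..., "detected" => ... } shape.
def tally_blocked_words(entries)
  blocked = entries.select { |e| e["detected"] == true || e["action"] == "BLOCKED" }
  [blocked.length, blocked.map { |e| e["match"] }]
end

count, words = tally_blocked_words([
  { "match" => "API",   "action" => "BLOCKED", "detected" => true },
  { "match" => "hello", "action" => "NONE",    "detected" => false }
])
# count == 1, words == ["API"]
```

The caller can then invoke this once per list (or on the concatenation of both lists) and sum the results, which removes the duplicated branch logic that drives the complexity score.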
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@semantic_conventions_ai/lib/opentelemetry/semantic_conventions.rb`:
- Line 41: The constant GEN_AI_SYSTEM = "gen_ai.system" is deprecated in favor
of GEN_AI_PROVIDER = "gen_ai.provider.name"; either remove GEN_AI_SYSTEM or mark
it as deprecated for backward compatibility by adding a comment like "#
Deprecated: use GEN_AI_PROVIDER (gen_ai.provider.name) as of OpenTelemetry
v1.37.0" following the same style used for TRACELOOP_CORRELATION_ID so reviewers
can see the replacement and rationale.
In `@traceloop-sdk/lib/traceloop/sdk.rb`:
- Around line 192-212: The span attribute for the prompt filter result is using
the wrong namespace: change the key using
OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_PROMPTS in
log_guardrail_response to use
OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_BEDROCK_GUARDRAILS
so the prompt_filter_results attribute is recorded under the guardrails
namespace consistent with the other guardrail keys (leave the value [type,
confidence].to_s as-is and keep other keys like activation, words, content
unchanged).
- Around line 115-123: In log_ruby_llm_message, guard against nil span
attributes so they aren't silently dropped by OpenTelemetry: ensure
response.model_id and response.content are converted to safe defaults before
calling `@span.add_attributes` (e.g., use a fallback like an empty string for
content and a sensible fallback or stringified value for model_id), and keep the
existing keys (GEN_AI_RESPONSE_MODEL and the completions content key) but pass
non-nil values so the attributes are recorded.
- Around line 346-361: The guardrail method creates a span but yields nothing so
callers cannot call tracer.log_guardrail_response(response); change guardrail to
yield the tracer the same way llm_call does (i.e., inside def guardrail(...)
`@tracer.in_span`("#{name}.guardrails") do |tracer| ... yield tracer end) so
blocks receive the Tracer and can call log_guardrail_response and other tracer
methods within the span context.
---
Duplicate comments:
In `@semantic_conventions_ai/lib/opentelemetry/semantic_conventions.rb`:
- Around line 39-40: Remove the two deprecated constants GEN_AI_COMPLETIONS and
GEN_AI_PROMPTS from semantic_conventions.rb (they introduce the removed
attributes "gen_ai.completion" and "gen_ai.prompt"); delete their definitions
and any references/usages of GEN_AI_COMPLETIONS or GEN_AI_PROMPTS in the
codebase (or replace usages with the current supported semantic convention
names), and run tests to ensure no remaining references break compilation.
---
Nitpick comments:
In `@traceloop-sdk/lib/traceloop/sdk.rb`:
- Around line 289-304: The llm_call method currently sets both deprecated
GEN_AI_SYSTEM and the replacement GEN_AI_PROVIDER to the same provider value;
remove the redundant GEN_AI_SYSTEM attribute (or if you intend backward
compatibility, add an explicit comment stating that and why) and ensure only
OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_PROVIDER is
populated with the provider name; apply the same change in the guardrail method
so GEN_AI_PROVIDER is the authoritative attribute and GEN_AI_SYSTEM is either
removed or documented as kept for backward compatibility.
- Around line 40-41: The two unconditional puts calls in
traceloop-sdk/lib/traceloop/sdk.rb (the lines that print TRACELOOP_BASE_URL and
`@service_name`) should be replaced with a proper logger call so library code
doesn't pollute callers' stdout; use OpenTelemetry.logger.info (or an internal
Logger instance) to emit these messages (e.g.,
OpenTelemetry.logger.info("Traceloop exporting traces to ...") and
OpenTelemetry.logger.info("Service name: #{`@service_name`}")), ensure any
required logger is required/initialized where the SDK is initialized and
preserve the same message text and interpolation but no longer call puts
directly.
- Around line 229-286: RuboCop flags high cyclomatic complexity in
guardrail_activation, guardrail_blocked_words, and guardrail_content_filtered;
reduce complexity by extracting a shared helper used by guardrail_blocked_words
to process word entries and by simplifying boolean checks in the other methods.
Create a private helper (e.g., process_word_entries or tally_blocked_words) that
accepts an array of entries and returns [count, matches] or yields increments so
guardrail_blocked_words can call it twice (once for "custom_words" and once for
"managed_word_lists") instead of duplicating logic; then update
guardrail_blocked_words to aggregate the returned counts and matches. Also
simplify guardrail_activation to a concise boolean expression (use .positive? on
the summed units or Enumerable#any?) and simplify guardrail_content_filtered by
using Enumerable#each/any with dig and an early return to lower complexity.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
`sample-app/Gemfile.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (6)
- README.md
- sample-app/bedrock.rb
- sample-app/gemini.rb
- sample-app/openai.rb
- semantic_conventions_ai/lib/opentelemetry/semantic_conventions.rb
- traceloop-sdk/lib/traceloop/sdk.rb
✅ Files skipped from review due to trivial changes (2)
- sample-app/bedrock.rb
- README.md
```ruby
GEN_AI_USAGE_INPUT_TOKENS = "gen_ai.usage.input_tokens"
GEN_AI_COMPLETIONS = "gen_ai.completion"
GEN_AI_PROMPTS = "gen_ai.prompt"
GEN_AI_SYSTEM = "gen_ai.system"
```
🧩 Analysis chain
🌐 Web query:
OpenTelemetry semantic conventions gen_ai.system deprecated gen_ai.provider.name
💡 Result:
gen_ai.system was deprecated and renamed to gen_ai.provider.name in OpenTelemetry semantic-conventions v1.37.0 (GenAI “system-specific naming policy” breaking change). [1]
What to use instead:
- Use
gen_ai.provider.nameto identify the GenAI telemetry “flavor” / provider discriminator (e.g.,openai,azure.ai.openai,aws.bedrock,gcp.vertex_ai, etc.). [2] - The spec notes
gen_ai.provider.nameshould be set based on the instrumentation’s best knowledge and may differ from the actual underlying model provider (proxies/gateways);gen_ai.request.model,gen_ai.response.model, andserver.addresscan help disambiguate. [2]
Sources:
[1] OpenTelemetry semantic-conventions GitHub releases (v1.37.0): rename gen_ai.system → gen_ai.provider.name
[2] OpenTelemetry GenAI spans semantic conventions: definition + well-known values for gen_ai.provider.name
🏁 Script executed:
```shell
# Check the file context around lines 31-32 and 41-42
cat -n semantic_conventions_ai/lib/opentelemetry/semantic_conventions.rb | sed -n '25,50p'
```

Repository: traceloop/openllmetry-ruby
Mark GEN_AI_SYSTEM = "gen_ai.system" as deprecated or remove it.
gen_ai.system was deprecated and renamed to gen_ai.provider.name in OpenTelemetry semantic-conventions v1.37.0. Line 42 already defines GEN_AI_PROVIDER = "gen_ai.provider.name" as the replacement. If keeping GEN_AI_SYSTEM for backward compatibility, add a # Deprecated comment consistent with the pattern at line 31–32 (TRACELOOP_CORRELATION_ID); otherwise, remove it.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@semantic_conventions_ai/lib/opentelemetry/semantic_conventions.rb` at line
41, The constant GEN_AI_SYSTEM = "gen_ai.system" is deprecated in favor of
GEN_AI_PROVIDER = "gen_ai.provider.name"; either remove GEN_AI_SYSTEM or mark it
as deprecated for backward compatibility by adding a comment like "# Deprecated:
use GEN_AI_PROVIDER (gen_ai.provider.name) as of OpenTelemetry v1.37.0"
following the same style used for TRACELOOP_CORRELATION_ID so reviewers can see
the replacement and rationale.
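The suggested deprecation-comment pattern might look like the following; the module wrapper and comment wording here are illustrative:

```ruby
module SpanAttributes
  # Deprecated: use GEN_AI_PROVIDER ("gen_ai.provider.name") instead,
  # per OpenTelemetry semantic-conventions v1.37.0.
  GEN_AI_SYSTEM   = "gen_ai.system"
  GEN_AI_PROVIDER = "gen_ai.provider.name"
end
```

Keeping the old constant with an explicit comment lets existing consumers migrate without a breaking change while making the replacement obvious to reviewers.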
```ruby
def log_ruby_llm_message(response)
  @span.add_attributes({
    OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_RESPONSE_MODEL => response.model_id,
    OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_USAGE_OUTPUT_TOKENS => response.output_tokens || 0,
    OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_USAGE_INPUT_TOKENS => response.input_tokens || 0,
    "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_COMPLETIONS}.0.role" => response.role.to_s,
    "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_COMPLETIONS}.0.content" => response.content
  })
end
```
response.model_id and response.content can be nil, silently dropping span attributes.
nil values are filtered out by the OTel Ruby SDK's add_attributes, so no crash occurs, but the attributes won't be recorded. Add nil guards to at least preserve an empty string for content.
🛡️ Proposed fix
```diff
 def log_ruby_llm_message(response)
   @span.add_attributes({
-    OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_RESPONSE_MODEL => response.model_id,
+    OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_RESPONSE_MODEL => response.model_id || @model,
     OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_USAGE_OUTPUT_TOKENS => response.output_tokens || 0,
     OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_USAGE_INPUT_TOKENS => response.input_tokens || 0,
     "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_COMPLETIONS}.0.role" => response.role.to_s,
-    "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_COMPLETIONS}.0.content" => response.content
+    "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_COMPLETIONS}.0.content" => response.content.to_s
   })
 end
```
Verify each finding against the current code and only fix it if needed.
In `@traceloop-sdk/lib/traceloop/sdk.rb` around lines 115 - 123, In
log_ruby_llm_message, guard against nil span attributes so they aren't silently
dropped by OpenTelemetry: ensure response.model_id and response.content are
converted to safe defaults before calling `@span.add_attributes` (e.g., use a
fallback like an empty string for content and a sensible fallback or stringified
value for model_id), and keep the existing keys (GEN_AI_RESPONSE_MODEL and the
completions content key) but pass non-nil values so the attributes are recorded.
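The nil-guard idea can be illustrated outside the SDK; the attribute keys are written out literally and `fallback_model` is a hypothetical stand-in for `@model`:

```ruby
# Hypothetical nil-safe builder: a nil model_id falls back to a default and
# nil content becomes "" so the OTel SDK does not silently drop the attribute.
def safe_llm_attributes(model_id:, content:, role:, fallback_model: "unknown")
  {
    "gen_ai.response.model"       => model_id || fallback_model,
    "gen_ai.completion.0.role"    => role.to_s,
    "gen_ai.completion.0.content" => content.to_s
  }
end

attrs = safe_llm_attributes(model_id: nil, content: nil, role: :assistant)
# attrs["gen_ai.response.model"] == "unknown"; content is "" rather than nil
```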
```ruby
def log_guardrail_response(response)
  r = deep_stringify_keys(response || {})

  activation = guardrail_activation(r)
  words_blocked, blocked_words = guardrail_blocked_words(r)
  content_filtered, type, confidence = guardrail_content_filtered(r)

  attrs = {
    "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_PROMPTS}.prompt_filter_results" => [type, confidence].to_s,

    "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_BEDROCK_GUARDRAILS}.activation" => activation,
    "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_BEDROCK_GUARDRAILS}.words" => words_blocked,
    "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_BEDROCK_GUARDRAILS}.content" => content_filtered,

    "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_BEDROCK_GUARDRAILS}.action" => r["action"] || "NONE",
    "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_BEDROCK_GUARDRAILS}.action_reason" => r["action_reason"] || "No action.",
    "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_BEDROCK_GUARDRAILS}.words.blocked_words" => blocked_words.to_s,
  }

  @span.add_attributes(attrs)
end
```
GEN_AI_PROMPTS is the wrong namespace for a guardrail content-filter result.
Line 200 resolves to "gen_ai.prompt.prompt_filter_results", placing a guardrail/content-filter result under the prompt attribute namespace. All other guardrail attributes in this method consistently use GEN_AI_BEDROCK_GUARDRAILS — this one should too.
🐛 Proposed fix
```diff
- "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_PROMPTS}.prompt_filter_results" => [type, confidence].to_s,
+ "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_BEDROCK_GUARDRAILS}.prompt_filter_results" => [type, confidence].to_s,
```
+ "#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_BEDROCK_GUARDRAILS}.prompt_filter_results" => [type, confidence].to_s,🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@traceloop-sdk/lib/traceloop/sdk.rb` around lines 192 - 212, The span
attribute for the prompt filter result is using the wrong namespace: change the
key using OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_PROMPTS
in log_guardrail_response to use
OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_BEDROCK_GUARDRAILS
so the prompt_filter_results attribute is recorded under the guardrails
namespace consistent with the other guardrail keys (leave the value [type,
confidence].to_s as-is and keep other keys like activation, words, content
unchanged).
```ruby
def guardrail(name, provider, conversation_id: nil)
  @tracer.in_span("#{name}.guardrails") do |span|
    attributes = {
      OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_SYSTEM => provider,
      OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_PROVIDER => provider,
    }

    if conversation_id
      attributes[OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_CONVERSATION_ID] =
        conversation_id
    end

    span.add_attributes(attributes)
    yield
  end
end
```
🧩 Analysis chain
🏁 Script executed:
```shell
# First, let's find the guardrail method and nearby llm_call for comparison
fd "sdk.rb" --type f
```
🏁 Script executed:
```shell
# Look at the full context of the guardrail and llm_call methods
cat -n traceloop-sdk/lib/traceloop/sdk.rb | sed -n '300,380p'
```
🏁 Script executed:
```shell
# Search for how guardrail is called in the codebase
rg "guardrail" --type rb -B 2 -A 2
```
🏁 Script executed:
```shell
# Look for log_guardrail_response method
rg "log_guardrail_response" --type rb
```
🏁 Script executed:
```shell
# Search for guardrail usage without type flag
rg "guardrail" -B 2 -A 2
```
🏁 Script executed:
```shell
# Search for log_guardrail_response
rg "log_guardrail_response" -B 2 -A 2
```
🏁 Script executed:
```shell
# Find the Tracer class definition to verify the constructor signature
rg "class Tracer" -A 10
```
🏁 Script executed:
```shell
# Let me also look at the broader context to understand workflow, task, agent, tool patterns
cat -n traceloop-sdk/lib/traceloop/sdk.rb | sed -n '275,305p'
```
🏁 Script executed:
```shell
# Find guardrail usage examples
find . -type f -name "*.rb" | xargs grep -l "guardrail"
```
🏁 Script executed:
```shell
# Look for log_guardrail_response usage
rg "log_guardrail_response" -B 5 -A 5
```
🏁 Script executed:
```shell
# Check bedrock examples specifically
find . -type d -name "*bedrock*" -o -name "*example*" | head -20
```
guardrail yields no Tracer, making log_guardrail_response unreachable.
llm_call yields a Tracer so callers can log prompts/responses within the span's context. guardrail just calls bare yield, so there is no way for the block to call tracer.log_guardrail_response(response) on the correct span. The guardrail span ends up with only system/provider attributes and no response data.
🐛 Proposed fix — yield a Tracer from guardrail
```diff
 def guardrail(name, provider, conversation_id: nil)
   @tracer.in_span("#{name}.guardrails") do |span|
     attributes = {
       OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_SYSTEM => provider,
       OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_PROVIDER => provider,
     }
     if conversation_id
       attributes[OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_CONVERSATION_ID] =
         conversation_id
     end
     span.add_attributes(attributes)
-    yield
+    yield Tracer.new(span, provider, nil)
   end
 end
```
Verify each finding against the current code and only fix it if needed.
In `@traceloop-sdk/lib/traceloop/sdk.rb` around lines 346 - 361, The guardrail
method creates a span but yields nothing so callers cannot call
tracer.log_guardrail_response(response); change guardrail to yield the tracer
the same way llm_call does (i.e., inside def guardrail(...)
`@tracer.in_span`("#{name}.guardrails") do |tracer| ... yield tracer end) so
blocks receive the Tracer and can call log_guardrail_response and other tracer
methods within the span context.
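The yield-a-helper pattern can be sketched independently of the SDK; the `GuardrailTracer` struct and the hash standing in for a span are illustrative, not the SDK's real `Tracer` class:

```ruby
# Stand-in Tracer: records guardrail responses onto its span.
GuardrailTracer = Struct.new(:span, :provider) do
  def log_guardrail_response(response)
    span[:guardrail_response] = response
  end
end

# Sketch of guardrail yielding a tracer, mirroring the llm_call pattern.
def guardrail(name, provider)
  span = {} # stand-in for an OTel span
  yield GuardrailTracer.new(span, provider)
  span
end

span = guardrail("chat", "aws.bedrock") do |tracer|
  tracer.log_guardrail_response({ "action" => "NONE" })
end
# span[:guardrail_response] now holds the guardrail response
```

Because the block receives the tracer bound to the active span, everything it logs lands on the correct span, which is exactly what the bare `yield` version cannot provide.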
Changes:

`say` / `call`

Notes:

Important

Adds RubyLLM support, GenAI semantic conventions, and updates dependencies in the Traceloop SDK.

- `RubyLLM::Message` and `RubyLLM::Tool::Halt` in `log_response()` in `sdk.rb`.
- `TRACELOOP_AUTH_SCHEMA` environment variable for trace exporting.
- GenAI constants in `semantic_conventions.rb`.
- `opentelemetry-exporter-otlp` to `~> 0.31.1` and `opentelemetry-sdk` to `~> 1.10.0` in `traceloop-sdk.gemspec`.

This description was created by
for 987d252. You can customize this summary. It will automatically update as commits are pushed.
Summary by CodeRabbit

- New Features
- Improvements
- Documentation
- Chores