Observability

ask-forge instruments all LLM interactions with OpenTelemetry spans following the GenAI semantic conventions.

Setup

Install the OpenTelemetry SDK and configure a tracer provider before creating sessions:

import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { SimpleSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";

// Export spans over OTLP/HTTP (defaults to http://localhost:4318/v1/traces).
const provider = new NodeTracerProvider();
provider.addSpanProcessor(
  new SimpleSpanProcessor(new OTLPTraceExporter())
);
provider.register();

Once registered, all session.ask() calls automatically emit spans.
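During local development it is often easier to print spans to stdout than to run an OTLP collector. A minimal sketch swapping in OpenTelemetry's ConsoleSpanExporter; the rest of the setup is unchanged:

```typescript
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import {
  SimpleSpanProcessor,
  ConsoleSpanExporter,
} from "@opentelemetry/sdk-trace-base";

// Print every finished span to stdout instead of shipping it over OTLP.
const provider = new NodeTracerProvider();
provider.addSpanProcessor(
  new SimpleSpanProcessor(new ConsoleSpanExporter())
);
provider.register();
```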

Trace Structure

ask_forge.session.ask
├── gen_ai.chat              (one per LLM call)
│   └── gen_ai.tool_call     (one per tool invocation)
├── gen_ai.chat
│   └── gen_ai.tool_call
└── ...
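Each child span records its parent's span ID, so the tree above can be reconstructed from any flat span export. An illustrative sketch, using a simplified SpanRecord shape rather than the full OpenTelemetry ReadableSpan:

```typescript
interface SpanRecord {
  name: string;
  spanId: string;
  parentSpanId?: string; // undefined for the root span
}

// Render a flat span list as an indented tree, parents before children.
function renderTree(
  spans: SpanRecord[],
  parentId?: string,
  depth = 0
): string[] {
  return spans
    .filter((s) => s.parentSpanId === parentId)
    .flatMap((s) => [
      "  ".repeat(depth) + s.name,
      ...renderTree(spans, s.spanId, depth + 1),
    ]);
}

// Example: a minimal trace with one LLM call and one tool invocation.
const spans: SpanRecord[] = [
  { name: "ask_forge.session.ask", spanId: "a" },
  { name: "gen_ai.chat", spanId: "b", parentSpanId: "a" },
  { name: "gen_ai.tool_call", spanId: "c", parentSpanId: "b" },
];
console.log(renderTree(spans).join("\n"));
// ask_forge.session.ask
//   gen_ai.chat
//     gen_ai.tool_call
```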

Captured Attributes

Attribute                        Description
gen_ai.system                    Model provider name
gen_ai.request.model             Model identifier requested
gen_ai.response.model            Model identifier returned in the response
gen_ai.response.finish_reasons   Why the model stopped generating
gen_ai.usage.input_tokens        Prompt token count
gen_ai.usage.output_tokens       Completion token count
ask_forge.iteration.count        Number of tool-use iterations
ask_forge.tool_call.count        Total tool calls in the session
ask_forge.repo.url               Repository URL
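The usage attributes make it straightforward to aggregate token spend across a trace. A hypothetical post-processing sketch over exported spans; the SpanData shape here is a simplification, and only the attribute names come from the table above:

```typescript
interface SpanData {
  name: string;
  attributes: Record<string, string | number | string[]>;
}

// Sum prompt and completion tokens across all gen_ai.chat spans in a trace.
function totalTokens(spans: SpanData[]): { input: number; output: number } {
  return spans
    .filter((s) => s.name === "gen_ai.chat")
    .reduce(
      (acc, s) => ({
        input: acc.input + Number(s.attributes["gen_ai.usage.input_tokens"] ?? 0),
        output: acc.output + Number(s.attributes["gen_ai.usage.output_tokens"] ?? 0),
      }),
      { input: 0, output: 0 }
    );
}

// Example: two LLM calls within one session.ask trace.
const exported: SpanData[] = [
  { name: "gen_ai.chat", attributes: { "gen_ai.usage.input_tokens": 120, "gen_ai.usage.output_tokens": 40 } },
  { name: "gen_ai.chat", attributes: { "gen_ai.usage.input_tokens": 80, "gen_ai.usage.output_tokens": 60 } },
  { name: "gen_ai.tool_call", attributes: {} },
];
console.log(totalTokens(exported)); // { input: 200, output: 100 }
```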