Metadata-Version: 2.4
Name: langgraph-otel-topology-instrumentor
Version: 0.1.1
Summary: OpenTelemetry instrumentor for LangGraph — captures graph topology, state changes, and edge transitions
License: Apache-2.0
Requires-Python: >=3.9
Description-Content-Type: text/markdown
Requires-Dist: opentelemetry-api>=1.20.0
Requires-Dist: opentelemetry-sdk>=1.20.0
Requires-Dist: opentelemetry-instrumentation>=0.41b0
Provides-Extra: dev
Requires-Dist: langgraph>=0.2.0; extra == "dev"
Requires-Dist: langchain-core>=0.2.0; extra == "dev"
Requires-Dist: pytest>=7.0; extra == "dev"

# LangGraph OpenTelemetry Instrumentor

A custom OpenTelemetry instrumentor that captures **LangGraph-specific telemetry** beyond what standard LLM instrumentors provide.

## What it captures

| Feature | Standard LLM Instrumentors | This Instrumentor |
|---|---|---|
| LLM prompts & responses | ✅ | — (use alongside) |
| Token usage | ✅ | — |
| **Graph topology** (all nodes, edges, conditional edges) | ❌ | ✅ |
| **State snapshots** before/after each node | ❌ | ✅ |
| **State diffs** (added/modified/removed keys) | ❌ | ✅ |
| **Conditional edge routing decisions** | ❌ | ✅ |
| **Full execution path** with transitions | ❌ | ✅ |
| **Edge path maps** (all possible routes) | ❌ | ✅ |

## Installation

```bash
pip install langgraph-otel-topology-instrumentor
```

Or install from source:

```bash
cd langgraph-instrumentor
pip install -e .
```

## Quick Start

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

from langgraph_otel_instrumentor import LangGraphInstrumentor

# 1. Set up OpenTelemetry
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

# 2. Instrument LangGraph
LangGraphInstrumentor().instrument()

# 3. Use LangGraph as normal — telemetry is captured automatically
from langgraph.graph import StateGraph, END

# MyState is your state schema; agent_fn, tool_fn, should_continue are your own functions
graph = StateGraph(MyState)
graph.add_node("agent", agent_fn)
graph.add_node("tools", tool_fn)
graph.set_entry_point("agent")
graph.add_conditional_edges("agent", should_continue, {"tools": "tools", "end": END})
graph.add_edge("tools", "agent")

app = graph.compile()
result = app.invoke({"messages": [...]})  # All telemetry captured here
```
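The Quick Start leaves `MyState`, `agent_fn`, `tool_fn`, and `should_continue` up to you. A minimal sketch of the state schema and routing function — the implementations here are purely illustrative, not part of this package:

```python
from typing import TypedDict

class MyState(TypedDict):
    messages: list

def should_continue(state: MyState) -> str:
    """Routing function for the conditional edge: return "tools" while
    the last message still requests a tool call, otherwise "end"."""
    last = state["messages"][-1] if state["messages"] else None
    if isinstance(last, dict) and last.get("tool_calls"):
        return "tools"
    return "end"
```

Whatever `should_continue` returns is looked up in the path map passed to `add_conditional_edges` to pick the next node.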

## Span Attributes Reference

### Root span: `langgraph.graph.invoke`

| Attribute | Type | Description |
|---|---|---|
| `langgraph.graph.topology` | JSON string | Full graph structure: nodes, edges, conditional edges |
| `langgraph.graph.node_count` | int | Number of nodes in the graph |
| `langgraph.graph.edge_count` | int | Number of static edges |
| `langgraph.graph.conditional_edge_count` | int | Number of conditional edges |
| `langgraph.graph.input` | JSON string | Input to the graph invocation |
| `langgraph.graph.output` | JSON string | Output of the graph invocation |

### Node spans: `langgraph.node.{name}`

| Attribute | Type | Description |
|---|---|---|
| `langgraph.node.name` | string | Node name |
| `langgraph.node.state_before` | JSON string | State summary before node execution |
| `langgraph.node.state_after` | JSON string | State summary after node execution |
| `langgraph.node.state_diff` | JSON string | `{added_keys, removed_keys, modified_keys}` |
| `langgraph.node.output` | JSON string | Raw output of the node |

### Conditional edge spans: `langgraph.edge.conditional.{func_name}`

| Attribute | Type | Description |
|---|---|---|
| `langgraph.edge.type` | string | Always `"conditional"` |
| `langgraph.edge.source` | string | Source node name |
| `langgraph.edge.condition_func` | string | Name of the routing function |
| `langgraph.edge.path_map` | JSON string | All possible routes `{decision: target_node}` |
| `langgraph.edge.routing_decision` | string | The actual decision made |
| `langgraph.edge.target` | string | The target node selected |

### Stream events: `langgraph.transition`

| Attribute | Type | Description |
|---|---|---|
| `langgraph.transition` | JSON string | `{step, from_node, to_node, state_update_keys}` |
| `langgraph.node.name` | string | Current node |
| `langgraph.node.output` | JSON string | State update from this node |
| `langgraph.step` | int | Step number in the stream |
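Each transition record links one stream update to the node that produced the previous one. A hypothetical helper (not part of the package API) showing how records of this shape can be derived from a sequence of `(node, state_update)` pairs:

```python
def transitions_from_stream(updates):
    """Build langgraph.transition-style records from (node_name,
    state_update) pairs; __start__ opens the chain."""
    records = []
    prev = "__start__"
    for step, (node, update) in enumerate(updates, start=1):
        records.append({
            "step": step,
            "from_node": prev,
            "to_node": node,
            "state_update_keys": sorted(update),
        })
        prev = node
    return records
```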

## Using with LLM Instrumentors

This instrumentor captures **graph-level** telemetry. For **LLM-level** telemetry
(prompts, responses, token usage), use it alongside the appropriate LLM instrumentor:

```python
from langgraph_otel_instrumentor import LangGraphInstrumentor

# Graph-level: topology, state, transitions
LangGraphInstrumentor().instrument()

# LLM-level: prompts, responses, tokens
# Pick the one matching your LLM provider:

# For Anthropic:
# from opentelemetry.instrumentation.anthropic import AnthropicInstrumentor
# AnthropicInstrumentor().instrument()

# For Vertex AI (Gemini):
# from opentelemetry.instrumentation.vertexai import VertexAIInstrumentor
# VertexAIInstrumentor().instrument()

# For OpenAI:
# from opentelemetry.instrumentation.openai import OpenAIInstrumentor
# OpenAIInstrumentor().instrument()
```

## Exporting to Google Cloud

```python
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
import grpc

# Set up authenticated OTLP export to Google Cloud.
# auth_metadata_plugin is a grpc AuthMetadataPlugin that attaches your
# Google Cloud credentials to each request (e.g. built with google-auth).
channel_creds = grpc.composite_channel_credentials(
    grpc.ssl_channel_credentials(),
    grpc.metadata_call_credentials(auth_metadata_plugin),
)

exporter = OTLPSpanExporter(
    credentials=channel_creds,
    endpoint="https://telemetry.googleapis.com:443/v1/traces",
)

# Register the exporter on the tracer provider from the Quick Start
provider.add_span_processor(BatchSpanProcessor(exporter))
```

## Exporting to LangSmith

```python
import os
os.environ["LANGSMITH_OTEL_ENABLED"] = "true"
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_API_KEY"] = "<your key>"

# The LangGraph instrumentor's spans will appear in LangSmith
# alongside LangChain's built-in spans
```

## Topology JSON Schema

The `langgraph.graph.topology` attribute contains:

```json
{
  "nodes": [
    {"name": "__start__", "type": "entry"},
    {"name": "agent", "type": "node", "func": "call_model"},
    {"name": "tools", "type": "node", "func": "ToolNode"},
    {"name": "__end__", "type": "exit"}
  ],
  "edges": [
    {"source": "__start__", "target": "agent"},
    {"source": "tools", "target": "agent"}
  ],
  "conditional_edges": [
    {
      "source": "agent",
      "condition_func": "should_continue",
      "path_map": {"tools": "tools", "end": "__end__"}
    }
  ]
}
```

This is everything needed to visually reconstruct the full graph,
including edges that were *not* taken during a particular execution.
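Because the topology JSON lists every node and every possible route, it can be rendered directly. For instance, a small sketch (assuming only the schema above) that converts it to Graphviz DOT, drawing conditional routes as dashed edges:

```python
import json

def topology_to_dot(topology_json: str) -> str:
    """Render a langgraph.graph.topology attribute as Graphviz DOT.

    Dashed edges are conditional routes that may or may not be
    taken on a given run.
    """
    topo = json.loads(topology_json)
    lines = ["digraph G {"]
    for node in topo["nodes"]:
        lines.append(f'  "{node["name"]}";')
    for edge in topo["edges"]:
        lines.append(f'  "{edge["source"]}" -> "{edge["target"]}";')
    for ce in topo.get("conditional_edges", []):
        for decision, target in ce["path_map"].items():
            lines.append(
                f'  "{ce["source"]}" -> "{target}" [style=dashed, label="{decision}"];'
            )
    lines.append("}")
    return "\n".join(lines)
```

Feeding this function the example topology above yields a DOT graph with solid edges for `__start__ -> agent` and `tools -> agent`, plus dashed edges for the two routes out of `agent`.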
