Metadata-Version: 2.4
Name: llm-tracking
Version: 1.0.0
Summary: LLM observability and tracing platform - trace, evaluate, and optimize your LLM applications
Project-URL: Homepage, https://github.com/yourusername/llm-tracking
Project-URL: Documentation, https://docs.llmtracking.com
Project-URL: Repository, https://github.com/yourusername/llm-tracking
Project-URL: Issues, https://github.com/yourusername/llm-tracking/issues
Author-email: LLM Tracking Team <team@llmtracking.com>
License: MIT
License-File: LICENSE
Keywords: ai,anthropic,langchain,llm,observability,openai,tracing
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Requires-Python: >=3.9
Requires-Dist: httpx>=0.26.0
Provides-Extra: dev
Requires-Dist: build>=1.0.0; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.21.0; extra == 'dev'
Requires-Dist: pytest>=7.0.0; extra == 'dev'
Requires-Dist: twine>=4.0.0; extra == 'dev'
Description-Content-Type: text/markdown

# LLM Tracking SDK

A Python SDK for LLM observability and tracing. Monitor, debug, and optimize your LLM applications.

## Features

- **Automatic Tracing** - Decorate your functions to automatically trace LLM calls
- **LLM Client Wrappers** - Wrap OpenAI and Anthropic clients for automatic instrumentation
- **Context Manager** - Manual tracing with the `TraceContext` context manager
- **Token Usage Tracking** - Automatic tracking of prompt/completion/total tokens

## Installation

```bash
pip install llm-tracking

# With the optional development extras (pytest, build, twine):
pip install "llm-tracking[dev]"
```

## Quick Start

### 1. Configure the client

```python
from llm_tracking import traceable, set_client
from llm_tracking.client import LLMRtrackingClient

# Initialize with your API key
client = LLMRtrackingClient(
    api_key="your-api-key",
    base_url="http://localhost:8000/v1"  # or your deployed server URL
)
set_client(client)
```

### 2. Use the decorator

```python
import openai

from llm_tracking import traceable

@traceable(project="my-app", name="chat")
def chat(prompt: str):
    client = openai.OpenAI()
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

result = chat("Hello, world!")
```

### 3. Or wrap your OpenAI client

```python
from llm_tracking import wrap_openai
import openai

client = wrap_openai(openai.Client())

# All calls are automatically traced
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}]
)
```

## Configuration

### Environment Variables

- `LLM_TRACKING_API_KEY` - Your API key
- `LLM_TRACKING_PROJECT` - Default project name
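For shell-based setups, you can export these before launching your application (the values below are placeholders; substitute your own key and project name):

```bash
# Placeholder values - replace with your own key and project name
export LLM_TRACKING_API_KEY="your-api-key"
export LLM_TRACKING_PROJECT="my-app"
```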

### Client Options

```python
client = LLMRtrackingClient(
    api_key="your-api-key",
    base_url="http://localhost:8000/v1",
    timeout=30.0  # Request timeout in seconds
)
```

## API Reference

### `traceable(project=None, name=None, run_type='generic')`

Decorator for automatic function tracing.

```python
@traceable(project="my-app", name="my-function", run_type="llm")
def my_function(prompt):
    # Your LLM code
    pass
```

### `wrap_openai(client)`

Wraps an OpenAI client for automatic tracing.

```python
from llm_tracking import wrap_openai
client = wrap_openai(openai.Client())
```

### `wrap_anthropic(client)`

Wraps an Anthropic client for automatic tracing.

```python
from llm_tracking import wrap_anthropic
client = wrap_anthropic(anthropic.Client())
```

### `TraceContext`

Manual tracing via a context manager.

```python
from llm_tracking import TraceContext

with TraceContext(project="my-app") as ctx:
    ctx.log_input({"prompt": "Hello"})
    # ... do something ...
    ctx.log_output({"response": "Hi!"})
```

## Requirements

- Python 3.9+
- httpx

## License

MIT
