Metadata-Version: 2.4
Name: lf-ai
Version: 0.1.0
Summary: LF AI platform SDK
Author-email: Joan Reyero <joan@reyero.io>
Requires-Python: >=3.12
Requires-Dist: agno
Requires-Dist: alembic
Requires-Dist: anthropic
Requires-Dist: boto3
Requires-Dist: click
Requires-Dist: fastapi
Requires-Dist: httpx
Requires-Dist: jinja2
Requires-Dist: nest-asyncio>=1.6.0
Requires-Dist: prompt-toolkit>=3.0.52
Requires-Dist: psycopg2-binary
Requires-Dist: pydantic-settings>=2.13.1
Requires-Dist: python-dotenv>=1.2.2
Requires-Dist: python-multipart
Requires-Dist: pyyaml
Requires-Dist: questionary>=2.1.1
Requires-Dist: rich
Requires-Dist: sqlalchemy
Requires-Dist: uvicorn
Provides-Extra: all
Requires-Dist: cryptography; extra == 'all'
Requires-Dist: langwatch; extra == 'all'
Requires-Dist: openinference-instrumentation-agno; extra == 'all'
Requires-Dist: pyjwt; extra == 'all'
Requires-Dist: snowflake-connector-python; extra == 'all'
Provides-Extra: observability
Requires-Dist: langwatch; extra == 'observability'
Requires-Dist: openinference-instrumentation-agno; extra == 'observability'
Provides-Extra: snowflake
Requires-Dist: cryptography; extra == 'snowflake'
Requires-Dist: pyjwt; extra == 'snowflake'
Requires-Dist: snowflake-connector-python; extra == 'snowflake'
Description-Content-Type: text/markdown

# lf-ai

Platform SDK for building AI agent projects on [Agno](https://docs.agno.com). Provides a CLI, convention-based auto-discovery, database utilities, an evaluation framework, and project scaffolding — so you can focus on writing agents.

## Prerequisites

- Python >= 3.12
- [uv](https://docs.astral.sh/uv/) package manager
- Docker (for local database and containerized deployment)
- AWS credentials (for Bedrock model access)

## Installation

`lf-ai` is currently installed as a local editable dependency. Clone the repo and add it to your project:

```bash
# 1. Create a new project
uv init my-project && cd my-project

# 2. Add lf-ai as an editable dependency (adjust path to where you cloned lf-ai)
uv add --editable ../lf-ai
```

## Quick start

```bash
# 1. Scaffold the project (interactive)
uv run lfai init

# 2. Start services
./lfai docker up --build

# 3. Interact with your agents
./lfai agent list
./lfai agent run my-project-sample -s
```

`lfai init` walks you through project name, database setup (Docker PostgreSQL or an existing instance), AWS Bedrock region, and optional integrations (Snowflake, LangWatch). It generates the full project layout, with your code living in an `ai/` directory:

```
my-project/
├── pyproject.toml
├── compose.yaml
├── Dockerfile
├── .env / .env.example
├── lfai                       # Wrapper script (runs uv run lfai)
└── ai/
    ├── agent_os.py            # Entry point with auto-discovery
    ├── agents/
    │   ├── sample_agent.py    # get_sample_agent() factory
    │   └── sample-inputs/
    │       └── sample_agent.json
    ├── api/                   # FastAPI routes (optional)
    ├── db/                    # Models + migrations (optional)
    ├── evals/                 # Evaluation suites (optional)
    ├── models/                # Pydantic models
    └── tools/                 # Agent tools
```

## Convention-based auto-discovery

No manifest files. The SDK discovers project components by naming conventions:

| Component  | Convention                                    | How it's found     |
| ---------- | --------------------------------------------- | ------------------ |
| Project ID | `pyproject.toml` → `[project].name`           | `tomllib.load()`   |
| Agents     | `agents/*_agent.py` with `get_*_agent()`      | Glob + importlib   |
| Routers    | `api/routes.py` with `router` attribute       | Import + getattr   |
| Database   | `db/migrations/` directory exists             | Path check         |
| Evals      | `evals/*/runner.py`                           | Glob               |

The generated `agent_os.py` calls `discover_all()` to find and register everything with AgentOS automatically.
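The agent convention in the table above can be sketched in plain Python. This is a simplified illustration of the naming rule only, not the SDK's actual implementation, which also imports each module and registers the agent with AgentOS:

```python
import re
from pathlib import Path


def find_agent_factories(agents_dir: str) -> dict[str, str]:
    """Map each agents/*_agent.py module to its expected get_*_agent() factory name."""
    factories = {}
    for path in Path(agents_dir).glob("*_agent.py"):
        name = path.stem                      # e.g. "sample_agent"
        if re.fullmatch(r"\w+_agent", name):
            factories[name] = f"get_{name}"   # e.g. "get_sample_agent"
    return factories
```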

## Creating agents

Create a file matching `agents/*_agent.py` with a `get_*_agent()` factory function:

```python
# ai/agents/my_agent.py
from agno.agent import Agent
from agno.models.aws import Claude

def get_my_agent(debug_mode: bool = False) -> Agent:
    return Agent(
        id="my-project-my",
        name="My Agent",
        model=Claude(id="us.anthropic.claude-sonnet-4-20250514-v1:0"),
        instructions=["You are a helpful assistant."],
        markdown=True,
        debug_mode=debug_mode,
    )
```

The SDK discovers it automatically — no registration needed. The agent ID follows the pattern `{project-id}-{agent-name}`.

### Adding tools

Give agents access to tools by placing them in `ai/tools/` and passing them to the agent:

```python
# ai/tools/search.py
from agno.tools import tool

@tool(name="search_docs")
async def search_docs(query: str) -> str:
    """Search the documentation for relevant information."""
    results = ...  # Your implementation here
    return results
```

```python
# ai/agents/research_agent.py
from agno.agent import Agent
from agno.models.aws import Claude

from ai.tools.search import search_docs

def get_research_agent(debug_mode: bool = False) -> Agent:
    return Agent(
        id="my-project-research",
        name="Research Agent",
        model=Claude(id="us.anthropic.claude-sonnet-4-20250514-v1:0"),
        tools=[search_docs],
        debug_mode=debug_mode,
    )
```

### Sample inputs

Place JSON test inputs alongside your agents for quick testing:

```
ai/agents/sample-inputs/
└── my_agent.json    # Matches my_agent.py
```

```json
{
  "message": "What can you help me with?"
}
```

When you run `lfai agent run my-project-my` without `-m`, the CLI automatically finds and uses the matching sample input.

## API routes

Create `ai/api/routes.py` to add custom FastAPI endpoints. They are auto-discovered and included in the app:

```python
# ai/api/routes.py
from lf_ai.core.security import get_secured_router

router = get_secured_router(prefix="/my-project", tags=["my-project"])

@router.post("/my-endpoint")
async def my_endpoint():
    return {"message": "Automatically secured with OS_SECURITY_KEY"}
```

All endpoints on a secured router require a valid `Authorization: Bearer <OS_SECURITY_KEY>` header.
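From Python, such an endpoint could be called with the standard library as follows. The URL path and empty payload are placeholders for illustration:

```python
import json
import os
import urllib.request

# Placeholder fallback; in practice OS_SECURITY_KEY comes from your .env
key = os.environ.get("OS_SECURITY_KEY", "dev-key")

req = urllib.request.Request(
    "http://localhost:8000/my-project/my-endpoint",
    data=json.dumps({}).encode(),
    headers={
        "Authorization": f"Bearer {key}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) sends the request once the service is running.
```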

## Database

Each project gets its own PostgreSQL schema, fully isolated from other projects sharing the same database.

### Define models

```python
# ai/db/models.py
from sqlalchemy import Column, Integer, String
from lf_ai.db.base import create_project_base, TimestampMixin

Base = create_project_base("my_project")

class Document(TimestampMixin, Base):
    __tablename__ = "documents"

    id = Column(Integer, primary_key=True)
    title = Column(String, nullable=False)
    content = Column(String)
```

`TimestampMixin` adds `created_at` and `updated_at` columns automatically.

### Create sessions

```python
# ai/db/session.py
from lf_ai.db.project_session import create_project_session

engine, SessionLocal, get_db = create_project_session("my_project")
```

Use `get_db` as a FastAPI dependency or call it directly. Each session's `search_path` is set to the project schema, so queries are automatically scoped.
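The dependency follows the standard session-per-request generator pattern. Conceptually it behaves like the sketch below (a simplified stand-in, not the SDK source; the real `create_project_session` also sets the schema `search_path`):

```python
def make_get_db(session_factory):
    """Build a FastAPI-style dependency that yields a session and always closes it."""
    def get_db():
        db = session_factory()
        try:
            yield db       # the request handler runs while the generator is suspended
        finally:
            db.close()     # runs after the response, even on error
    return get_db
```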

### Manage migrations

```bash
lfai db create-migration my-project "Add documents table"
lfai db migrate my-project
lfai db status my-project
lfai db rollback my-project        # Roll back one migration
lfai db migrate-all                # Apply migrations for all projects
```

Migrations use [Alembic](https://alembic.sqlalchemy.org/) under the hood. The schema is created automatically on first migration.

## Evaluations

The SDK includes an evaluation framework with two modes — **exact** (string comparison) and **fuzzy** (LLM-judged) — plus DB storage, CSV/JSONL I/O, and interactive review.

### Structure

```
ai/evals/
└── my_eval/
    ├── runner.py        # Eval runner implementation
    ├── input.csv        # Test cases
    └── output/          # Results
```

### Input format

CSV files with standard columns:

| Column        | Required | Description                                     |
| ------------- | -------- | ----------------------------------------------- |
| `input`       | Always   | The query or input to evaluate                  |
| `section`     | No       | Section number for grouping (defaults to 1)     |
| `annotation`  | Exact only | Expected output for exact comparison          |
| `expectation` | No       | Evaluation criteria for fuzzy judging           |
| `context`     | No       | JSON string with additional context             |

Eval type is auto-detected: if the CSV has an `annotation` column, it's an exact eval; otherwise, it's fuzzy.
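The detection rule above reduces to a header check. A self-contained sketch of that rule (not the SDK's actual detection code):

```python
import csv
import io


def detect_eval_type(csv_text: str) -> str:
    """Infer the eval mode from the CSV header: 'annotation' column means exact."""
    reader = csv.DictReader(io.StringIO(csv_text))
    columns = reader.fieldnames or []
    return "exact" if "annotation" in columns else "fuzzy"
```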

### Running evals

```bash
lfai eval list                     # Discover available evals
lfai eval exec my-eval             # Run evaluation
lfai eval review my-eval           # Interactive review and annotation
lfai eval rows my-eval             # View results in a table
lfai eval export my-eval           # Export results to CSV
```

## Docker

`lfai init` generates a `compose.yaml` and `Dockerfile` ready to go. The compose file includes PostgreSQL (with pgvector) and the AgentOS service.

```bash
./lfai docker up --build           # Build and start services
./lfai docker logs -f              # Follow agent-os logs
./lfai docker status               # Check service health
./lfai docker restart              # Restart services
./lfai docker down                 # Stop services
./lfai docker clean-volumes        # Wipe database (with confirmation)
```

The API is available at `http://localhost:8000` with interactive docs at `http://localhost:8000/docs`.

## CLI reference

### Project setup

```
lfai init                                Scaffold a new project (interactive)
lfai version                             Show SDK version
lfai info                                Show system info and status
```

### Agents

```
lfai agent list                          List discovered agents
lfai agent run <id> [-s]                 Run an agent (with optional streaming)
lfai agent run <id> -m "message"         Run with a custom message
lfai agent run <id> -i input.json        Run with a JSON input file
lfai agent describe <id>                 Show agent details
lfai agent curl <id> [--prod]            Generate a curl command
```

### Workflows

```
lfai workflow list                       List discovered workflows
lfai workflow run <id> [-s]              Run a workflow (with optional streaming)
lfai workflow run <id> -m "message"      Run with a custom message
lfai workflow describe <id>              Show workflow details
lfai workflow curl <id> [--prod]         Generate a curl command
```

### Docker

```
lfai docker up [--build] [-s service]    Start services
lfai docker down                         Stop services
lfai docker restart [-s service]         Restart services
lfai docker logs [-f] [-s service]       View logs
lfai docker status                       Check service status
lfai docker clean-volumes [--force]      Remove all volumes (destructive)
```

### Database

```
lfai db list                             List projects with database configs
lfai db status <project>                 Show migration status and tables
lfai db migrate <project>                Apply pending migrations
lfai db rollback <project> [-n N]        Roll back N migrations (default: 1)
lfai db create-migration <project> "msg" Create a new migration
lfai db migrate-all                      Apply migrations for all projects
```

### Evaluations

```
lfai eval list                           List available evals
lfai eval exec <id>                      Execute an evaluation
lfai eval review <id>                    Interactive review of results
lfai eval rows <id>                      View results in a table
lfai eval export <id>                    Export results to CSV
```

### Development

```
lfai dev validate                        Run linting + type checking
lfai dev format                          Format code with ruff
lfai dev check                           Run all checks (format + validate)
lfai test                                Run project tests
```

## Optional integrations

Install extras for additional capabilities:

```bash
uv add --editable '../lf-ai[snowflake]'        # Snowflake connector with JWT key-pair auth
uv add --editable '../lf-ai[observability]'    # LangWatch tracing
uv add --editable '../lf-ai[all]'              # Everything
```

### Snowflake

The `lf_ai.tools.snowflake` module provides a connector with key-pair authentication and ready-made Agno tools for querying Snowflake from agents:

- `execute_query` — Run read-only SQL queries (with automatic safety validation)
- `find_project_facets` — Fuzzy match project names
- `find_filter_facets` — Fuzzy match filter values in any column

Required environment variables: `SNOWFLAKE_ACCOUNT`, `SNOWFLAKE_USER`, `SNOWFLAKE_ROLE`, `SNOWFLAKE_WAREHOUSE`, `SNOWFLAKE_PRIVATE_KEY_PATH`.

### LangWatch observability

When enabled, all agent runs are traced and sent to [LangWatch](https://langwatch.ai/) for monitoring and debugging. Requires the `LANGWATCH_API_KEY` environment variable.

## Environment variables

| Variable                    | Required | Description                                    |
| --------------------------- | -------- | ---------------------------------------------- |
| `OS_SECURITY_KEY`           | Yes      | Bearer token for API authentication            |
| `AWS_ACCESS_KEY_ID`         | Yes      | AWS credentials for Bedrock                    |
| `AWS_SECRET_ACCESS_KEY`     | Yes      | AWS credentials for Bedrock                    |
| `AWS_DEFAULT_REGION`        | Yes      | AWS region (e.g., `us-east-2`)                 |
| `DB_HOST`                   | Docker   | PostgreSQL host (set automatically with Docker)|
| `DB_PORT`                   | Docker   | PostgreSQL port (default: `5432`)              |
| `DB_USER`                   | Docker   | PostgreSQL user (default: `ai`)                |
| `DB_PASS`                   | Docker   | PostgreSQL password (default: `ai`)            |
| `DB_DATABASE`               | Docker   | PostgreSQL database (default: `ai`)            |
| `AGENT_OS_PORT`             | No       | Server port (default: `8000`)                  |
| `SNOWFLAKE_ACCOUNT`         | Snowflake| Snowflake account identifier                   |
| `SNOWFLAKE_USER`            | Snowflake| Snowflake username                             |
| `SNOWFLAKE_ROLE`            | Snowflake| Snowflake role                                 |
| `SNOWFLAKE_WAREHOUSE`       | Snowflake| Snowflake warehouse                            |
| `SNOWFLAKE_PRIVATE_KEY_PATH`| Snowflake| Path to RSA private key (.p8)                  |
| `LANGWATCH_API_KEY`         | LangWatch| LangWatch API key for observability            |
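A representative `.env` combining the variables above might look like this. All values are placeholders, and `DB_HOST=db` assumes the Docker compose service name; `lfai init` generates the authoritative `.env.example`:

```bash
OS_SECURITY_KEY=change-me
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
AWS_DEFAULT_REGION=us-east-2
DB_HOST=db
DB_PORT=5432
DB_USER=ai
DB_PASS=ai
DB_DATABASE=ai
AGENT_OS_PORT=8000
```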

## SDK internals

```
lf_ai/
├── discovery.py               # Convention-based auto-discovery engine
├── core/
│   ├── security.py            # FastAPI auth (OS_SECURITY_KEY bearer tokens)
│   └── observability/         # LangWatch tracing integration
├── db/
│   ├── base.py                # create_project_base(), TimestampMixin
│   ├── session.py             # Shared engine + SessionLocal
│   ├── project_session.py     # Per-project schema isolation
│   ├── url.py                 # DB URL from environment variables
│   ├── migration_utils.py     # Alembic migration helpers
│   └── migration_env_template.py
├── evals/
│   ├── schema.py              # EvalType, column validation
│   ├── discovery.py           # Eval package discovery
│   ├── io.py                  # CSV/JSONL I/O
│   ├── sync.py                # Bidirectional DB ↔ JSONL sync
│   └── db/                    # EvalRun model + CRUD operations
├── tools/
│   └── snowflake.py           # Snowflake connector + Agno tools
├── cli/                       # Full CLI implementation
└── templates/                 # Jinja2 templates for lfai init
```
