Metadata-Version: 2.4
Name: lipas
Version: 0.3.1
Summary: Layered Invariant-Preserving Agent System
Project-URL: Homepage, https://github.com/lipworld/lipas
License: MIT
License-File: LICENSE
Requires-Python: >=3.10
Description-Content-Type: text/markdown

# LIPAS

**An agent framework built on algebra, not architecture.**

> Most frameworks ask you to wire components together.
>
> LIPAS asks you to declare intent — and lets one operation, **⊕**, handle the rest.

---

## The 30-second version

```python
import asyncio
from lipas.flow import Step, Pipeline
from lipas.llm import LLMAdapter, OllamaBackend

llm = LLMAdapter(OllamaBackend(model="gemma4", timeout=300.0), verbose=True)
the_text = "AI will not replace humans, but humans who use AI will replace those who don't."

@Step(needs=['english_text'], produces=['chinese_draft'])
async def translate(english_text: str) -> str:
    return await llm.ask(
        f'Please translate the following into natural Chinese: "{english_text}"',
        system="You are a translator providing only the most appropriate translation. "
               "Produce a focused, accurate translation. Every sentence must be relevant.",
    )

@Step(needs=['chinese_draft'], produces=['polished'])
async def polish(chinese_draft: str) -> str:
    return await llm.ask(
        f'Please polish the following Chinese so it reads naturally: "{chinese_draft}"',
        system="You are an editor providing only the most appropriate polished version. "
               "Produce a focused, accurate revision. Every sentence must be relevant.",
    )

pipeline = Pipeline(steps=[translate, polish], done_when='polished')
asyncio.run(pipeline.run(english_text=the_text))
```

That is a real, complete, runnable agent. No state machine. No edge list. No router config.

Each function declares what it reads and what it writes.

`Pipeline` threads context through, handles retries, and converges.
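To make that concrete, here is a minimal, self-contained sketch of the idea (not the lipas internals): a driver that runs any step whose declared `needs` are already in the context and stops once the `done_when` key appears. Every name below is illustrative, not part of the lipas API.

```python
# Toy dependency-driven pipeline: steps declare inputs and outputs,
# the driver figures out the order. Illustrative only, not lipas code.

def run_pipeline(steps, done_when, **context):
    """steps: list of (needs, produces, fn) tuples."""
    pending = list(steps)
    while done_when not in context:
        # Pick any step whose declared needs are all satisfied.
        ready = next(
            (s for s in pending if all(k in context for k in s[0])), None
        )
        if ready is None:
            raise RuntimeError("no runnable step; pipeline cannot converge")
        needs, produces, fn = ready
        context[produces] = fn(*(context[k] for k in needs))
        pending.remove(ready)
    return context[done_when]

# Two one-line "steps" standing in for the LLM calls above.
steps = [
    (["english_text"], "chinese_draft", lambda t: f"draft({t})"),
    (["chinese_draft"], "polished", lambda d: f"polished({d})"),
]
print(run_pipeline(steps, "polished", english_text="hello"))
# → polished(draft(hello))
```

Note that the step order was never written down: it falls out of the `needs`/`produces` declarations.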

---

## Why another framework?

Most agent frameworks give you a box of components — planners, memory stores, tool routers — and ask you to wire them together.

The wiring is where bugs live.

LIPAS starts from a different place: a single algebraic operation, **⊕** (claim merge).

The practical consequences:

- **You declare what the agent wants, not how it gets there.**
  A pipeline is a set of steps with declared inputs and outputs. There is no control flow to wire, no state machine to draw.

- **Failure handling writes itself.**
  Every outcome — success or failure — is a Claim that merges into belief via ⊕. The agent cannot forget a lesson, and it provably stops repeating the same mistake.

- **Complexity stays flat.**
  Adding a new capability is one function and one `@Step` decorator. The algebra composes them; you don't reason about interactions.

- **The LLM is just another claim source.**
  Its output is unreliable — that's fine. ⊕ treats an LLM response exactly like a sensor reading: a Claim with a priority. Hallucinations are absorbed the same way as any other low-confidence signal.

---

## A fuller example — research pipeline with quality gate

See [`examples/research_flow/research_app.py`](examples/research_flow/research_app.py):
a search → summarise → validate → refine pipeline whose conditional branching is expressed
entirely through `when=` guards, two plain functions of one line each. There is no router,
no conditional edge, no explicit branch node.
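A hedged sketch of what such a quality gate amounts to: a plain predicate over the pipeline context that decides whether a step fires. The guard shape, field names, and threshold below are assumptions for illustration, not the documented lipas API.

```python
# Toy quality gate: a predicate gates the refine step, so no router or
# branch node is needed. Names and threshold are illustrative only.

def quality_too_low(ctx: dict) -> bool:
    return ctx.get("quality_score", 0.0) < 0.8

def maybe_refine(ctx: dict) -> dict:
    """Fire the refine step only when the guard holds."""
    if quality_too_low(ctx):
        return {**ctx, "summary": f"refined({ctx['summary']})"}
    return ctx

out = maybe_refine({"summary": "draft", "quality_score": 0.6})
print(out["summary"])  # → refined(draft)
```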

---

## How it works — the algebra in brief

**⊕ (claim merge)** is a join on a field-indexed product semilattice.

```
You supply:  deliberation  (what to want — your strategy, your intelligence)
⊕ supplies:  everything else
```

From this one operation, the entire architecture is *derived*, not designed:

| Concept | What it is |
|---|---|
| **Belief** | The cumulative fold of all Claims under ⊕. Monotonicity is a theorem. |
| **Commitment** | A conative Claim with a priority field. Arbitration falls out of the semilattice order. |
| **Effect** | An action that, when executed, produces new Claims — closing the perception-action loop. |
| **Learning** | What happens when failure records fold back into belief via ⊕. Not a module — a consequence. |
| **Adaptation** | The agent progresses through levels (REACTIVE → CAUTIOUS → STRATEGIC → REFLECTIVE) as belief accumulates. Each level unlocks richer deliberation; all share the same ⊕ machinery. |
| **Multi-agent** | Coordination *is* claim merging across agent boundaries. No new algebra required. |

The one thing ⊕ does *not* give you is **deliberation** — the creative act of deciding what to want. That is the true primitive you supply.

It is where your agent's intelligence lives, and it belongs outside the algebra.
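For readers who want the semilattice claim spelled out, here is a toy model, assuming each field carries its own join (numeric `max` here) and belief is the fold of claims under ⊕. The field set and per-field joins are assumptions for the sketch, not the lipas types.

```python
# Toy ⊕: a join on a field-indexed product semilattice. Each field has
# its own join (max), and belief is the fold of all claims under it.
from functools import reduce

def join(a: dict, b: dict) -> dict:
    """Field-wise join: the product of per-field max-semilattices."""
    return {k: max(a.get(k, 0), b.get(k, 0)) for k in a.keys() | b.keys()}

claims = [{"evidence": 1}, {"evidence": 3, "risk": 2}, {"risk": 1}]
belief = reduce(join, claims, {})
print(sorted(belief.items()))  # → [('evidence', 3), ('risk', 2)]

# Monotonicity is a property of the join itself: folding in another
# claim can never lower a field that belief already holds.
assert join(belief, {"evidence": 2}) == belief
```

Because `join` is idempotent, commutative, and associative, the fold is insensitive to duplicate or reordered claims, which is what makes "belief accumulation" well defined.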

---

## Design philosophy

| Framework | Core metaphor | The question it asks |
|---|---|---|
| LangGraph  | Control flow — orchestrate the world | *How do I wire steps correctly?* |
| CrewAI     | Social structure — roles and crews | *How do I assign roles?* |
| AutoGen    | Conversation — dialogue as intelligence | *How do I design the dialogue?* |
| **LIPAS**  | **Algebra — intelligence is convergence under ⊕** | ***Under what conditions must a system become competent?*** |

LIPAS is built on a single assumption: you will make mistakes, your agents will make mistakes, and the world will respond unpredictably. The only reliable path is to accumulate experience monotonically, never repeat the same failure blindly, and let convergence do the rest.

You cannot "turn off" learning in LIPAS any more than you can remove addition from arithmetic — it is belief accumulation, and belief accumulation is what ⊕ does.

---

## Installation

```bash
pip install lipas
```

Requires Python 3.10+. For local LLM support, install [Ollama](https://ollama.com) and pull a model:

```bash
ollama pull gemma4
```

---

## License

MIT
