Metadata-Version: 2.1
Name: quarterbit
Version: 20.0.0
Summary: Memory-efficient LLM training. AXIOM enables 70B on single H100, 7B on consumer GPUs.
Home-page: https://quarterbit.dev
Author: Clouthier Simulation Labs
Author-email: Clouthier Simulation Labs <info@quarterbit.dev>
Project-URL: Homepage, https://quarterbit.dev
Project-URL: Documentation, https://quarterbit.dev/docs
Keywords: optimizer,adam,deep-learning,pytorch,gpu,memory-efficient,compression,axiom
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Science/Research
Classifier: Operating System :: Microsoft :: Windows
Classifier: Operating System :: POSIX :: Linux
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.11
Description-Content-Type: text/markdown

# QuarterBit - AXIOM Optimizer

**Memory-efficient LLM training. Train 34B on a single A100, 7B on consumer GPUs.**

## Features

- **5x Larger Models** - Train models 5x larger than AdamW allows on the same GPU
- **1365x Optimizer Compression** - Proprietary compression with zero quality loss
- **341x Gradient Compression** - Enabled via the `register_hooks()` API
- **Simple API** - Drop-in AdamW replacement
- **Production Ready** - Gradient checkpointing, AMP, early stopping, NaN detection

## Requirements

- Python 3.11+ (Windows/Linux)
- PyTorch 2.0+ with CUDA
- NVIDIA GPU (Pascal or newer)

## Installation

```bash
pip install quarterbit
```

## Quick Start

```python
from quarterbit import AXIOM_Trainer

# One line - handles everything with optimized defaults
trainer = AXIOM_Trainer(model, train_loader, val_loader)
results = trainer.fit(steps=2000)

print(f"Val PPL: {results['final_val_ppl']:.1f}")
print(f"Peak VRAM: {results['peak_vram_gb']:.1f} GB")
```
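The results dictionary above reports validation perplexity. By the standard definition (not anything QuarterBit-specific), perplexity is the exponential of the mean per-token cross-entropy loss, so you can convert between the two yourself:

```python
import math

def perplexity(mean_ce_loss: float) -> float:
    """Perplexity = exp(mean per-token cross-entropy, in nats).

    This is the standard language-modeling definition; a lower
    cross-entropy loss maps to a lower (better) perplexity.
    """
    return math.exp(mean_ce_loss)

print(perplexity(0.0))  # 1.0 (a perfect model)
print(perplexity(2.0))  # ~7.39
```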

## CLI Commands

Manage your QuarterBit license from the command line:

```bash
# Login to your QuarterBit account
quarterbit login

# Activate license on this machine
quarterbit activate

# Check license status
quarterbit status

# Deactivate (free up machine slot)
quarterbit deactivate
```

## Memory Comparison

| Model | AdamW | AXIOM | Reduction |
|-------|-------|-------|-----------|
| Mistral-7B | 84 GB | 14 GB | 6x |
| Yi-1.5-9B | 108 GB | 17 GB | 6.4x |
| Yi-34B | 413 GB | 78 GB | 5.3x |
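The AdamW column is consistent with a common back-of-envelope estimate of roughly 12 bytes per parameter: bf16 weights (2) + bf16 gradients (2) + two fp32 Adam moments (4 + 4). That breakdown is an assumption about how the table was computed, not a documented formula, but it reproduces the numbers (with "GB" meaning 10^9 bytes, and Yi-34B at roughly 34.4B parameters):

```python
def adamw_train_gb(n_params_billion: float, bytes_per_param: int = 12) -> float:
    """Rough AdamW training footprint in GB (1 GB = 1e9 bytes).

    12 bytes/param assumes bf16 weights (2) + bf16 grads (2)
    + fp32 exp_avg (4) + fp32 exp_avg_sq (4). Activations and
    framework overhead are ignored, so treat this as a floor.
    """
    return n_params_billion * bytes_per_param

print(adamw_train_gb(7))     # 84  -> matches the Mistral-7B row
print(adamw_train_gb(9))     # 108 -> matches the Yi-1.5-9B row
print(adamw_train_gb(34.4))  # ~413 -> matches the Yi-34B row
```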

## Verified Results (Feb 25, 2026)

Yi-34B trained on a **single A100-80GB**:

| Metric | Value |
|--------|-------|
| Peak VRAM | 77.95 GB |
| Optimizer Compression | 1365x |
| Gradient Compression | 341x |
| PPL Improvement | 9.5% |
| Throughput | 95 tok/s |

## Advanced: Manual Optimizer

For custom training loops, use AXIOM directly:

```python
from quarterbit import AXIOM

opt = AXIOM(model.parameters(), lr=5e-4)
opt.register_hooks()  # Enable 341x gradient compression

for batch in dataloader:
    loss = model(batch).loss
    loss.backward()
    opt.step()
    opt.zero_grad()
```
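AXIOM's compression algorithm is proprietary and is not shown here. As a purely illustrative sketch of the general idea behind lossy state compression (store low-precision codes plus a shared scale, reconstruct on use), here is a toy symmetric int8 quantizer in plain Python. It achieves only 4x over fp32 and has nothing to do with QuarterBit's actual method or its quoted ratios:

```python
def quantize_int8(values):
    """Toy symmetric 8-bit quantization: returns (codes, scale).

    Each value is stored as an integer in [-127, 127] times a
    single shared scale, i.e. 1 byte/value instead of 4 (fp32).
    """
    scale = max(abs(v) for v in values) / 127 or 1.0
    codes = [round(v / scale) for v in values]
    return codes, scale

def dequantize_int8(codes, scale):
    """Reconstruct approximate float values from codes."""
    return [c * scale for c in codes]

grads = [0.5, -1.27, 0.003, 1.27]
codes, scale = quantize_int8(grads)
recon = dequantize_int8(codes, scale)
# Reconstruction error is bounded by half a quantization step.
```

Real systems typically quantize blockwise with one scale per block to keep error low; again, whether AXIOM does anything resembling this is not documented here.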

## Supported Models

- LLaMA 3, LLaMA 2, LLaMA
- Yi-34B, Yi-1.5
- Mistral, Mixtral
- Phi-3, Phi-2
- Gemma, Gemma 2
- GPT-J, GPT-NeoX
- Any HuggingFace causal LM

## License Tiers

| Tier | Price | GPU Hours | Features |
|------|-------|-----------|----------|
| Free | $0 | 5 hrs/mo | Full 1365x compression, personal use |
| Academic | $0 | 10 hrs/mo | Full features, .edu/university email |
| Pro | $49/mo | Unlimited | Commercial license |
| Team | $299/mo | Unlimited | Multi-GPU, DDP support |
| Enterprise | Custom | Unlimited | Custom SLA |

## Documentation

**https://quarterbit.dev/docs**

---
Copyright 2026 Clouthier Simulation Labs
