Initial commit with translated description

README.md (new file, 304 lines)
@@ -0,0 +1,304 @@
⚠️ Note: The FreeRide skill was recently removed from ClawHub without prior explanation.

We've opened an issue for clarification: https://github.com/openclaw/clawhub/issues/1262

# 🎢 FreeRide

### Stop paying for AI. Start riding free.

[FreeRide on ClawHub](https://clawhub.ai/skills/free-ride)

[MIT License](https://opensource.org/licenses/MIT) · [OpenClaw](https://github.com/openclaw/openclaw)

---

**FreeRide** gives you unlimited free AI in [OpenClaw](https://github.com/openclaw/openclaw) by automatically managing OpenRouter's free models.

```
You: *hits rate limit*
FreeRide: "I got you." *switches to the next-best model*
You: *keeps coding*
```

## The Problem

You're using OpenClaw. You love it. But:

- 💸 API costs add up fast
- 🚫 Free models have rate limits
- 😤 Manually switching models is annoying
- 🤷 You don't know which free model is actually good

## The Solution

One command. Free AI. Forever.

```bash
freeride auto
```

That's it. FreeRide:

1. **Finds** the 30+ free models on OpenRouter
2. **Ranks** them by quality (context length, capabilities, speed)
3. **Sets** the best one as your primary model
4. **Configures** smart fallbacks for when you hit rate limits
5. **Preserves** your existing OpenClaw config

## Installation

```bash
npx clawhub@latest install free-ride
cd ~/.openclaw/workspace/skills/free-ride
pip install -e .
```

That's it. `freeride` and `freeride-watcher` are now available as global commands.

## Quick Start

### 1. Get a Free OpenRouter Key

Go to [openrouter.ai/keys](https://openrouter.ai/keys) → Create account → Generate key

No credit card. No trial. Actually free.

### 2. Set Your Key

```bash
export OPENROUTER_API_KEY="sk-or-v1-..."
```

Or add it to your OpenClaw config:

```bash
openclaw config set env.OPENROUTER_API_KEY "sk-or-v1-..."
```

### 3. Run FreeRide

```bash
freeride auto
```

### 4. Restart OpenClaw

```bash
openclaw gateway restart
```

### 5. Verify It Works

Message your agent on WhatsApp/Telegram/Discord or the dashboard:

```
You: /status
Agent: (shows the free model name + token count)
```

Done. You're now running on free AI with automatic fallbacks.

## What You Get

```
Primary Model: openrouter/nvidia/nemotron-3-nano-30b-a3b:free (256K context)

Fallbacks:
  1. openrouter/free         ← Smart router (auto-picks best available)
  2. qwen/qwen3-coder:free   ← Great for coding
  3. stepfun/step-3.5:free   ← Fast responses
  4. deepseek/deepseek:free  ← Strong reasoning
  5. mistral/mistral:free    ← Reliable fallback
```

When you hit a rate limit, OpenClaw automatically tries the next model. You keep working. No interruptions.
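
For reference, the fallback chain above corresponds to a fragment of `~/.openclaw/openclaw.json` roughly like this (illustrative model IDs; `freeride auto` picks the actual ones):

```json
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "openrouter/nvidia/nemotron-3-nano-30b-a3b:free",
        "fallbacks": [
          "openrouter/free",
          "qwen/qwen3-coder:free",
          "deepseek/deepseek:free"
        ]
      }
    }
  }
}
```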

## Commands

| Command | What it does |
|---------|--------------|
| `freeride auto` | Auto-configure the best model + fallbacks |
| `freeride list` | See all 30+ free models, ranked |
| `freeride switch <model>` | Use a specific model |
| `freeride status` | Check your current setup |
| `freeride fallbacks` | Update fallbacks only |
| `freeride refresh` | Force-refresh the model cache |

### Pro Tips

```bash
# Already have a model you like? Just add fallbacks:
freeride auto -f

# Want more fallbacks for maximum uptime?
freeride auto -c 10

# Coding? Switch to the best coding model:
freeride switch qwen3-coder

# See what's available:
freeride list -n 30

# Always restart OpenClaw after changes:
openclaw gateway restart
```

## How It Ranks Models

FreeRide scores each model (0-1) based on:

| Factor | Weight | Why |
|--------|--------|-----|
| Context Length | 40% | Longer = handles bigger codebases |
| Capabilities | 30% | Vision, tools, structured output |
| Recency | 20% | Newer models = better performance |
| Provider Trust | 10% | Google, Meta, NVIDIA, etc. |
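
As a rough worked example (a simplified sketch of the scoring in `main.py`; the normalization caps of 1M context tokens, 10 capabilities, and 365 days match the implementation, while `trust` is collapsed here to a single 0-1 input):

```python
# Simplified sketch of FreeRide's ranking: a weighted sum of normalized factors.
WEIGHTS = {"context_length": 0.4, "capabilities": 0.3, "recency": 0.2, "provider_trust": 0.1}

def score(context_length: int, capability_count: int, days_old: float, trust: float) -> float:
    s = 0.0
    s += min(context_length / 1_000_000, 1.0) * WEIGHTS["context_length"]  # cap at 1M tokens
    s += min(capability_count / 10, 1.0) * WEIGHTS["capabilities"]         # cap at 10 capabilities
    s += max(0.0, 1 - days_old / 365) * WEIGHTS["recency"]                 # newer scores higher
    s += trust * WEIGHTS["provider_trust"]                                 # trust in [0, 1]
    return s

# A 256K-context model with 8 capabilities, 90 days old, from a top provider:
print(round(score(256_000, 8, 90, 1.0), 3))  # → 0.593
```

Context length dominates, which is why long-context models tend to land at the top of `freeride list`.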

The **smart fallback** `openrouter/free` is always first: it auto-selects a model based on what your request needs.

## Testing with Your OpenClaw Agent

After running `freeride auto` and `openclaw gateway restart`:

```bash
# Check that OpenClaw sees the models
openclaw models list

# Validate config
openclaw doctor --fix

# Open the dashboard and chat
openclaw dashboard
# Or message your agent on WhatsApp/Telegram/Discord
```

Useful agent commands to verify:

| Command | What it tells you |
|---------|-------------------|
| `/status` | Current model + token usage |
| `/model` | Available models (your free models should be listed) |
| `/new` | Start a fresh session with the new model |

## Watcher (Auto-Rotation)

FreeRide includes a watcher daemon that monitors for rate limits and automatically rotates models:

```bash
# Run once (check + rotate if needed)
freeride-watcher

# Run as a daemon (continuous monitoring)
freeride-watcher --daemon

# Force-rotate to the next model
freeride-watcher --rotate

# Check watcher status
freeride-watcher --status

# Clear rate-limit cooldowns
freeride-watcher --clear-cooldowns
```

## FAQ

**Is this actually free?**

Yes. OpenRouter provides free tiers for many models. You just need an account (no credit card).

**What about rate limits?**

That's the whole point. FreeRide configures multiple fallbacks. When one model rate-limits you, OpenClaw automatically switches to the next.

**Will it mess up my OpenClaw config?**

No. FreeRide only touches `agents.defaults.model` and `agents.defaults.models`. Your gateway, channels, plugins, workspace, and customInstructions are all preserved.

**Which models are free?**

Run `freeride list` to see current availability. It changes, which is why FreeRide exists.

**Do I need to restart OpenClaw after changes?**

Yes. Run `openclaw gateway restart` after any FreeRide command that changes your config.

## The Math

| Scenario | Monthly Cost |
|----------|--------------|
| GPT-4 API | $50-200+ |
| Claude API | $50-200+ |
| OpenClaw + FreeRide | **$0** |

You're welcome.

## Requirements

- [OpenClaw](https://github.com/openclaw/openclaw) installed (Node ≥22)
- Python 3.8+
- A free OpenRouter account ([get a key](https://openrouter.ai/keys))

## Architecture

```
┌──────────────┐      ┌──────────────┐      ┌──────────────────┐
│     You      │ ──→  │   FreeRide   │ ──→  │  OpenRouter API  │
│  "freeride   │      │              │      │   (30+ free      │
│    auto"     │      │  • Fetch     │      │    models)       │
└──────────────┘      │  • Rank      │      └──────────────────┘
                      │  • Configure │
                      └──────┬───────┘
                             │
                             ▼
                      ┌──────────────┐
                      │ ~/.openclaw/ │
                      │ openclaw.json│
                      └──────┬───────┘
                             │
                  openclaw gateway restart
                             │
                             ▼
                      ┌──────────────┐
                      │   OpenClaw   │
                      │  (free AI!)  │
                      └──────────────┘
```

## Contributing

Found a bug? Want a feature? PRs welcome.

```bash
cd ~/.openclaw/workspace/skills/free-ride

# Test commands
freeride list
freeride status
freeride auto --help
```

## Related Projects

- [OpenClaw](https://github.com/openclaw/openclaw) - The AI coding agent
- [OpenRouter](https://openrouter.ai) - The model router
- [ClawHub](https://github.com/clawhub) - Skill marketplace

## License

MIT - Do whatever you want.

---

<p align="center">
  <b>Stop paying. Start riding.</b>
  <br>
  <br>
  <a href="https://github.com/Shaivpidadi/FreeRide">⭐ Star us on GitHub</a>
  ·
  <a href="https://openrouter.ai/keys">🔑 Get OpenRouter Key</a>
  ·
  <a href="https://github.com/openclaw/openclaw">🦞 Install OpenClaw</a>
</p>

SKILL.md (new file, 91 lines)
@@ -0,0 +1,91 @@

---
name: freeride
description: "Manages free AI models from OpenRouter for OpenClaw."
---

# FreeRide - Free AI for OpenClaw

## What This Skill Does

Configures OpenClaw to use **free** AI models from OpenRouter. Sets the best free model as primary, adds ranked fallbacks so rate limits don't interrupt the user, and preserves the existing config.

## Prerequisites

Before running any FreeRide command, ensure:

1. **OPENROUTER_API_KEY is set.** Check with `echo $OPENROUTER_API_KEY`. If empty, the user must get a free key at https://openrouter.ai/keys and set it:
   ```bash
   export OPENROUTER_API_KEY="sk-or-v1-..."
   # Or persist it:
   openclaw config set env.OPENROUTER_API_KEY "sk-or-v1-..."
   ```

2. **The `freeride` CLI is installed.** Check with `which freeride`. If not found:
   ```bash
   cd ~/.openclaw/workspace/skills/free-ride
   pip install -e .
   ```

## Primary Workflow

When the user wants free AI, run these steps in order:

```bash
# Step 1: Configure the best free model + fallbacks
freeride auto

# Step 2: Restart the gateway so OpenClaw picks up the changes
openclaw gateway restart
```

That's it. The user now has free AI with automatic fallback switching.

Verify by telling the user to send `/status` to check the active model.

## Commands Reference

| Command | When to use it |
|---------|----------------|
| `freeride auto` | User wants free AI set up (most common) |
| `freeride auto -f` | User wants fallbacks but wants to keep their current primary model |
| `freeride auto -c 10` | User wants more fallbacks (default is 5) |
| `freeride list` | User wants to see available free models |
| `freeride list -n 30` | User wants to see all free models |
| `freeride switch <model>` | User wants a specific model (e.g. `freeride switch qwen3-coder`) |
| `freeride switch <model> -f` | Add a specific model as a fallback only |
| `freeride status` | Check the current FreeRide configuration |
| `freeride fallbacks` | Update only the fallback models |
| `freeride refresh` | Force-refresh the cached model list |

**After any command that changes config, always run `openclaw gateway restart`.**

## What It Writes to Config

FreeRide updates only these keys in `~/.openclaw/openclaw.json`:

- `agents.defaults.model.primary` — e.g. `openrouter/qwen/qwen3-coder:free`
- `agents.defaults.model.fallbacks` — e.g. `["openrouter/free", "nvidia/nemotron:free", ...]`
- `agents.defaults.models` — allowlist so the `/model` command shows the free models

Everything else (gateway, channels, plugins, env, customInstructions, named agents) is preserved.
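
To double-check what was written, these keys can be read back with a few lines of Python (a minimal sketch; `freeride status` does this for you):

```python
import json
from pathlib import Path

def freeride_keys(config: dict):
    """Return the (primary, fallbacks) pair that FreeRide manages."""
    model = config.get("agents", {}).get("defaults", {}).get("model", {})
    return model.get("primary"), model.get("fallbacks", [])

# Against the real config file:
#   config = json.loads((Path.home() / ".openclaw" / "openclaw.json").read_text())
#   primary, fallbacks = freeride_keys(config)
```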

The first fallback is always `openrouter/free` — OpenRouter's smart router, which auto-picks the best available model for each request.

## Watcher (Optional)

For auto-rotation when rate-limited, the user can run:

```bash
freeride-watcher --daemon   # Continuous monitoring
freeride-watcher --rotate   # Force rotate now
freeride-watcher --status   # Check rotation history
```

## Troubleshooting

| Problem | Fix |
|---------|-----|
| `freeride: command not found` | `cd ~/.openclaw/workspace/skills/free-ride && pip install -e .` |
| `OPENROUTER_API_KEY not set` | Get a free key from https://openrouter.ai/keys |
| Changes not taking effect | `openclaw gateway restart`, then `/new` for a fresh session |
| Agent shows 0 tokens | Check `freeride status` — the primary should be `openrouter/<provider>/<model>:free` |

_meta.json (new file, 6 lines)
@@ -0,0 +1,6 @@

{
  "ownerId": "kn7eepf540q01kxs5gzwnvsp5s80hhje",
  "slug": "free-ride",
  "version": "1.0.7",
  "publishedAt": 1774618346215
}

main.py (new file, 767 lines)
@@ -0,0 +1,767 @@

#!/usr/bin/env python3
"""
FreeRide - Free AI for OpenClaw

Automatically manage and switch between free AI models on OpenRouter
for unlimited free AI access.
"""

import argparse
import json
import os
import sys
import time
from pathlib import Path
from datetime import datetime, timedelta
from typing import Optional

try:
    import requests
except ImportError:
    print("Error: requests library required. Install with: pip install requests")
    sys.exit(1)


# Constants
OPENROUTER_API_URL = "https://openrouter.ai/api/v1/models"
OPENCLAW_CONFIG_PATH = Path.home() / ".openclaw" / "openclaw.json"
CACHE_FILE = Path.home() / ".openclaw" / ".freeride-cache.json"
CACHE_DURATION_HOURS = 6

# Free model ranking criteria (higher is better)
RANKING_WEIGHTS = {
    "context_length": 0.4,   # Prefer longer context
    "capabilities": 0.3,     # Prefer more capabilities
    "recency": 0.2,          # Prefer newer models
    "provider_trust": 0.1    # Prefer trusted providers
}

# Trusted providers (in order of preference)
TRUSTED_PROVIDERS = [
    "google", "meta-llama", "mistralai", "deepseek",
    "nvidia", "qwen", "microsoft", "allenai", "arcee-ai"
]


def get_api_key() -> Optional[str]:
    """Get OpenRouter API key from environment or OpenClaw config."""
    # Try environment first
    api_key = os.environ.get("OPENROUTER_API_KEY")
    if api_key:
        return api_key

    # Try OpenClaw config
    if OPENCLAW_CONFIG_PATH.exists():
        try:
            config = json.loads(OPENCLAW_CONFIG_PATH.read_text())
            # Check env section
            api_key = config.get("env", {}).get("OPENROUTER_API_KEY")
            if api_key:
                return api_key
        except (json.JSONDecodeError, KeyError):
            pass

    return None


def fetch_all_models(api_key: str) -> list:
    """Fetch all models from the OpenRouter API."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json"
    }

    try:
        response = requests.get(OPENROUTER_API_URL, headers=headers, timeout=30)
        response.raise_for_status()
        data = response.json()
        return data.get("data", [])
    except requests.RequestException as e:
        print(f"Error fetching models: {e}")
        return []


def filter_free_models(models: list) -> list:
    """Filter models to only include free ones (pricing.prompt == 0)."""
    free_models = []

    for model in models:
        model_id = model.get("id", "")
        pricing = model.get("pricing", {})

        # Check if model is free (prompt cost is 0 or None)
        prompt_cost = pricing.get("prompt")
        if prompt_cost is not None:
            try:
                if float(prompt_cost) == 0:
                    free_models.append(model)
            except (ValueError, TypeError):
                pass

        # Also include models with a :free suffix
        if ":free" in model_id and model not in free_models:
            free_models.append(model)

    return free_models


def calculate_model_score(model: dict) -> float:
    """Calculate a ranking score for a model based on multiple criteria."""
    score = 0.0

    # Context length score (normalized to 0-1, max 1M tokens)
    context_length = model.get("context_length", 0)
    context_score = min(context_length / 1_000_000, 1.0)
    score += context_score * RANKING_WEIGHTS["context_length"]

    # Capabilities score
    capabilities = model.get("supported_parameters", [])
    capability_count = len(capabilities) if capabilities else 0
    capability_score = min(capability_count / 10, 1.0)  # Normalize to max 10 capabilities
    score += capability_score * RANKING_WEIGHTS["capabilities"]

    # Recency score (based on creation date)
    created = model.get("created", 0)
    if created:
        days_old = (time.time() - created) / 86400
        recency_score = max(0, 1 - (days_old / 365))  # Newer models score higher
        score += recency_score * RANKING_WEIGHTS["recency"]

    # Provider trust score
    model_id = model.get("id", "")
    provider = model_id.split("/")[0] if "/" in model_id else ""
    if provider in TRUSTED_PROVIDERS:
        trust_index = TRUSTED_PROVIDERS.index(provider)
        trust_score = 1 - (trust_index / len(TRUSTED_PROVIDERS))
        score += trust_score * RANKING_WEIGHTS["provider_trust"]

    return score


def rank_free_models(models: list) -> list:
    """Rank free models by quality score."""
    scored_models = []
    for model in models:
        score = calculate_model_score(model)
        scored_models.append({**model, "_score": score})

    # Sort by score descending
    scored_models.sort(key=lambda x: x["_score"], reverse=True)
    return scored_models


def get_cached_models() -> Optional[list]:
    """Get the cached model list if still valid."""
    if not CACHE_FILE.exists():
        return None

    try:
        cache = json.loads(CACHE_FILE.read_text())
        cached_at = datetime.fromisoformat(cache.get("cached_at", ""))
        if datetime.now() - cached_at < timedelta(hours=CACHE_DURATION_HOURS):
            return cache.get("models", [])
    except (json.JSONDecodeError, ValueError):
        pass

    return None


def save_models_cache(models: list):
    """Save models to the cache file."""
    CACHE_FILE.parent.mkdir(parents=True, exist_ok=True)
    cache = {
        "cached_at": datetime.now().isoformat(),
        "models": models
    }
    CACHE_FILE.write_text(json.dumps(cache, indent=2))


def get_free_models(api_key: str, force_refresh: bool = False) -> list:
    """Get ranked free models (from cache or API)."""
    if not force_refresh:
        cached = get_cached_models()
        if cached:
            return cached

    all_models = fetch_all_models(api_key)
    free_models = filter_free_models(all_models)
    ranked_models = rank_free_models(free_models)

    save_models_cache(ranked_models)
    return ranked_models


def load_openclaw_config() -> dict:
    """Load the OpenClaw configuration."""
    if not OPENCLAW_CONFIG_PATH.exists():
        return {}

    try:
        return json.loads(OPENCLAW_CONFIG_PATH.read_text())
    except json.JSONDecodeError:
        return {}


def save_openclaw_config(config: dict):
    """Save the OpenClaw configuration."""
    OPENCLAW_CONFIG_PATH.parent.mkdir(parents=True, exist_ok=True)
    OPENCLAW_CONFIG_PATH.write_text(json.dumps(config, indent=2))


def format_model_for_openclaw(model_id: str, with_provider_prefix: bool = True, append_free: bool = True) -> str:
    """Format a model ID for the OpenClaw config.

    OpenClaw uses two formats:
    - Primary model: "openrouter/<author>/<model>:free" (with provider prefix)
    - Fallbacks/models list: "<author>/<model>:free" (without prefix)
    """
    base_id = model_id

    # Handle the openrouter/free special case: "openrouter" is both the routing
    # prefix OpenClaw adds AND the actual provider name in the API model ID.
    # The API model ID is "openrouter/free" (no :free suffix — it's a router, not a free-tier model).
    # - with prefix: "openrouter/openrouter/free" (routing prefix + API ID)
    # - without prefix: "openrouter/free" (just the API ID)
    if model_id in ("openrouter/free", "openrouter/free:free"):
        if with_provider_prefix:
            return "openrouter/openrouter/free"
        return "openrouter/free"

    # Remove an existing openrouter/ routing prefix if present to get the base API ID
    if base_id.startswith("openrouter/"):
        base_id = base_id[len("openrouter/"):]

    # Ensure the :free suffix
    if append_free and ":free" not in base_id:
        base_id = f"{base_id}:free"

    if with_provider_prefix:
        return f"openrouter/{base_id}"
    return base_id


def get_current_model(config: dict = None) -> Optional[str]:
    """Get the currently configured primary model in OpenClaw."""
    if config is None:
        config = load_openclaw_config()
    return config.get("agents", {}).get("defaults", {}).get("model", {}).get("primary")


def get_current_fallbacks(config: dict = None) -> list:
    """Get the currently configured fallback models."""
    if config is None:
        config = load_openclaw_config()
    return config.get("agents", {}).get("defaults", {}).get("model", {}).get("fallbacks", [])


def ensure_config_structure(config: dict) -> dict:
    """Ensure the config has the required nested structure without overwriting existing values."""
    if "agents" not in config:
        config["agents"] = {}
    if "defaults" not in config["agents"]:
        config["agents"]["defaults"] = {}
    if "model" not in config["agents"]["defaults"]:
        config["agents"]["defaults"]["model"] = {}
    if "models" not in config["agents"]["defaults"]:
        config["agents"]["defaults"]["models"] = {}
    return config


def setup_openrouter_auth(config: dict) -> dict:
    """Set up the OpenRouter auth profile if it does not exist."""
    if "auth" not in config:
        config["auth"] = {}
    if "profiles" not in config["auth"]:
        config["auth"]["profiles"] = {}

    if "openrouter:default" not in config["auth"]["profiles"]:
        config["auth"]["profiles"]["openrouter:default"] = {
            "provider": "openrouter",
            "mode": "api_key"
        }
        print("Added OpenRouter auth profile.")

    return config


def update_model_config(
    model_id: str,
    as_primary: bool = True,
    add_fallbacks: bool = True,
    fallback_count: int = 5,
    setup_auth: bool = False,
    append_free: bool = True
) -> bool:
    """Update the OpenClaw config with the specified model.

    Args:
        model_id: The model ID to configure
        as_primary: If True, set as primary model. If False, only add to fallbacks.
        add_fallbacks: If True, also configure fallback models
        fallback_count: Number of fallback models to add
        setup_auth: If True, also set up the OpenRouter auth profile
        append_free: If True, ensure the ":free" suffix on the model ID
    """
    config = load_openclaw_config()
    config = ensure_config_structure(config)

    if setup_auth:
        config = setup_openrouter_auth(config)

    formatted_primary = format_model_for_openclaw(model_id, with_provider_prefix=True, append_free=append_free)
    formatted_for_list = format_model_for_openclaw(model_id, with_provider_prefix=False, append_free=append_free)

    if as_primary:
        # Set as primary model
        config["agents"]["defaults"]["model"]["primary"] = formatted_primary
        # Add to models allowlist
        config["agents"]["defaults"]["models"][formatted_for_list] = {}

    # Handle fallbacks
    if add_fallbacks:
        api_key = get_api_key()
        if api_key:
            free_models = get_free_models(api_key)

            # Build a new fallbacks list (existing fallbacks are rebuilt from scratch)
            new_fallbacks = []

            # Always add openrouter/free as the first fallback (smart router),
            # unless it is being set as primary
            free_router = "openrouter/free"
            free_router_primary = format_model_for_openclaw("openrouter/free", with_provider_prefix=True)
            if formatted_primary != free_router_primary and formatted_for_list != free_router:
                new_fallbacks.append(free_router)
                config["agents"]["defaults"]["models"][free_router] = {}

            for m in free_models:
                # Stop once the requested number of fallbacks is reached
                if len(new_fallbacks) >= fallback_count:
                    break

                m_formatted = format_model_for_openclaw(m["id"], with_provider_prefix=False)
                m_formatted_primary = format_model_for_openclaw(m["id"], with_provider_prefix=True)

                # Skip openrouter/free (already added as first)
                if "openrouter/free" in m["id"]:
                    continue

                # Skip if it's the new primary
                if as_primary and (m_formatted == formatted_for_list or m_formatted_primary == formatted_primary):
                    continue

                # Skip if it's the current primary (when adding to fallbacks only)
                current_primary = config["agents"]["defaults"]["model"].get("primary", "")
                if not as_primary and m_formatted_primary == current_primary:
                    continue

                new_fallbacks.append(m_formatted)
                config["agents"]["defaults"]["models"][m_formatted] = {}

            # If not setting as primary, insert the new model into the fallbacks
            # (after openrouter/free)
            if not as_primary:
                if formatted_for_list not in new_fallbacks:
                    # Insert after openrouter/free if present
                    insert_pos = 1 if free_router in new_fallbacks else 0
                    new_fallbacks.insert(insert_pos, formatted_for_list)
                    config["agents"]["defaults"]["models"][formatted_for_list] = {}

            config["agents"]["defaults"]["model"]["fallbacks"] = new_fallbacks

    save_openclaw_config(config)
    return True
|
||||
|
||||
|
||||
# ============== Command Handlers ==============
|
||||
|
||||
def cmd_list(args):
|
||||
"""List available free models ranked by quality."""
|
||||
api_key = get_api_key()
|
||||
if not api_key:
|
||||
print("Error: OPENROUTER_API_KEY not set")
|
||||
print("Set it via: export OPENROUTER_API_KEY='sk-or-...'")
|
||||
print("Or get a free key at: https://openrouter.ai/keys")
|
||||
sys.exit(1)
|
||||
|
||||
print("Fetching free models from OpenRouter...")
|
||||
models = get_free_models(api_key, force_refresh=args.refresh)
|
||||
|
||||
if not models:
|
||||
print("No free models available.")
|
||||
return
|
||||
|
||||
current = get_current_model()
|
||||
fallbacks = get_current_fallbacks()
|
||||
limit = args.limit if args.limit else 15
|
||||
|
||||
print(f"\nTop {min(limit, len(models))} Free AI Models (ranked by quality):\n")
|
||||
print(f"{'#':<3} {'Model ID':<50} {'Context':<12} {'Score':<8} {'Status'}")
|
||||
print("-" * 90)
|
||||
|
||||
for i, model in enumerate(models[:limit], 1):
|
||||
model_id = model.get("id", "unknown")
|
||||
context = model.get("context_length", 0)
|
||||
score = model.get("_score", 0)
|
||||
|
||||
# Format context length
|
||||
if context >= 1_000_000:
|
||||
context_str = f"{context // 1_000_000}M tokens"
|
||||
elif context >= 1_000:
|
||||
context_str = f"{context // 1_000}K tokens"
|
||||
else:
|
||||
context_str = f"{context} tokens"
|
||||
|
||||
# Check status
|
||||
formatted = format_model_for_openclaw(model_id, with_provider_prefix=True)
|
||||
formatted_fallback = format_model_for_openclaw(model_id, with_provider_prefix=False)
|
||||
|
||||
if current and formatted == current:
|
||||
status = "[PRIMARY]"
|
||||
elif formatted_fallback in fallbacks or formatted in fallbacks:
|
||||
status = "[FALLBACK]"
|
||||
else:
|
||||
status = ""
|
||||
|
||||
print(f"{i:<3} {model_id:<50} {context_str:<12} {score:.3f} {status}")
|
||||
|
||||
if len(models) > limit:
|
||||
print(f"\n... and {len(models) - limit} more. Use --limit to see more.")
|
||||
|
||||
print(f"\nTotal free models available: {len(models)}")
|
||||
print("\nCommands:")
|
||||
print(" freeride switch <model> Set as primary model")
|
||||
print(" freeride switch <model> -f Add to fallbacks only (keep current primary)")
|
||||
print(" freeride auto Auto-select best model")
|
||||
|
||||
|
||||
def cmd_switch(args):
|
||||
"""Switch to a specific free model."""
|
||||
api_key = get_api_key()
|
||||
if not api_key:
|
||||
print("Error: OPENROUTER_API_KEY not set")
|
||||
sys.exit(1)
|
||||
|
||||
model_id = args.model
|
||||
as_fallback = args.fallback_only
|
||||
|
||||
# Validate model exists and is free
|
||||
models = get_free_models(api_key)
|
||||
model_ids = [m["id"] for m in models]
|
||||
|
||||
# Check for exact match or partial match
|
||||
matched_model = None
|
||||
if model_id in model_ids:
|
||||
matched_model = model_id
|
||||
else:
|
||||
# Try partial match
|
||||
for m_id in model_ids:
|
||||
if model_id.lower() in m_id.lower():
|
||||
matched_model = m_id
|
||||
break
|
||||
|
||||
if not matched_model:
|
||||
print(f"Error: Model '{model_id}' not found in free models list.")
|
||||
print("Use 'freeride list' to see available models.")
|
||||
sys.exit(1)
|
||||
|
||||
if as_fallback:
|
||||
print(f"Adding to fallbacks: {matched_model}")
|
||||
else:
|
||||
print(f"Setting as primary: {matched_model}")
|
||||
|
||||
if update_model_config(
|
||||
matched_model,
|
||||
as_primary=not as_fallback,
|
||||
add_fallbacks=not args.no_fallbacks,
|
||||
setup_auth=args.setup_auth,
|
||||
append_free=False
|
||||
):
|
||||
config = load_openclaw_config()
|
||||
|
||||
if as_fallback:
|
||||
print("Success! Added to fallbacks.")
|
||||
print(f"Primary model (unchanged): {get_current_model(config)}")
|
||||
else:
|
||||
print("Success! OpenClaw config updated.")
|
||||
print(f"Primary model: {get_current_model(config)}")
|
||||
|
||||
fallbacks = get_current_fallbacks(config)
|
||||
if fallbacks:
|
||||
print(f"Fallback models ({len(fallbacks)}):")
|
||||
for fb in fallbacks[:5]:
|
||||
print(f" - {fb}")
|
||||
if len(fallbacks) > 5:
|
||||
print(f" ... and {len(fallbacks) - 5} more")
|
||||
|
||||
print("\nRestart OpenClaw for changes to take effect.")
|
||||
else:
|
||||
print("Error: Failed to update OpenClaw config.")
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
def cmd_auto(args):
    """Automatically select the best free model."""
    api_key = get_api_key()
    if not api_key:
        print("Error: OPENROUTER_API_KEY not set")
        sys.exit(1)

    config = load_openclaw_config()
    current_primary = get_current_model(config)

    print("Finding best free model...")
    models = get_free_models(api_key, force_refresh=True)

    if not models:
        print("Error: No free models available.")
        sys.exit(1)

    # Find best SPECIFIC model (skip openrouter/free router)
    # openrouter/free is a router, not a specific model - use it as fallback only
    best_model = None
    for m in models:
        if "openrouter/free" not in m["id"]:
            best_model = m
            break

    if not best_model:
        # Fallback to first model if all are routers (unlikely)
        best_model = models[0]

    model_id = best_model["id"]
    context = best_model.get("context_length", 0)
    score = best_model.get("_score", 0)

    # Determine if we should change primary or just add fallbacks
    as_fallback = args.fallback_only

    if not as_fallback:
        if current_primary:
            print(f"\nReplacing current primary: {current_primary}")
        print(f"\nBest free model: {model_id}")
        print(f"Context length: {context:,} tokens")
        print(f"Quality score: {score:.3f}")
    else:
        print("\nKeeping current primary, adding fallbacks only.")
        print(f"Best available: {model_id} ({context:,} tokens, score: {score:.3f})")

    if update_model_config(
        model_id,
        as_primary=not as_fallback,
        add_fallbacks=True,
        fallback_count=args.fallback_count,
        setup_auth=args.setup_auth
    ):
        config = load_openclaw_config()

        if as_fallback:
            print("\nFallbacks configured!")
            print(f"Primary (unchanged): {get_current_model(config)}")
            print("First fallback: openrouter/free (smart router - auto-selects best available)")
        else:
            print("\nOpenClaw config updated!")
            print(f"Primary: {get_current_model(config)}")

        fallbacks = get_current_fallbacks(config)
        if fallbacks:
            print(f"Fallbacks ({len(fallbacks)}):")
            for fb in fallbacks:
                print(f"  - {fb}")

        print("\nRestart OpenClaw for changes to take effect.")
    else:
        print("Error: Failed to update config.")
        sys.exit(1)


def cmd_status(args):
    """Show current configuration status."""
    api_key = get_api_key()
    config = load_openclaw_config()
    current = get_current_model(config)
    fallbacks = get_current_fallbacks(config)

    print("FreeRide Status")
    print("=" * 50)

    # API Key status
    if api_key:
        masked = api_key[:8] + "..." + api_key[-4:] if len(api_key) > 12 else "***"
        print(f"OpenRouter API Key: {masked}")
    else:
        print("OpenRouter API Key: NOT SET")
        print("  Set with: export OPENROUTER_API_KEY='sk-or-...'")

    # Auth profile status
    auth_profiles = config.get("auth", {}).get("profiles", {})
    if "openrouter:default" in auth_profiles:
        print("OpenRouter Auth Profile: Configured")
    else:
        print("OpenRouter Auth Profile: Not set (use --setup-auth to add)")

    # Current model
    print(f"\nPrimary Model: {current or 'Not configured'}")

    # Fallbacks
    if fallbacks:
        print(f"Fallback Models ({len(fallbacks)}):")
        for fb in fallbacks:
            print(f"  - {fb}")
    else:
        print("Fallback Models: None configured")

    # Cache status
    if CACHE_FILE.exists():
        try:
            cache = json.loads(CACHE_FILE.read_text())
            cached_at = datetime.fromisoformat(cache.get("cached_at", ""))
            models_count = len(cache.get("models", []))
            age = datetime.now() - cached_at
            # Use total_seconds() so caches older than a day report correctly
            # (timedelta.seconds wraps around at 24 hours)
            total_mins = int(age.total_seconds()) // 60
            hours, mins = divmod(total_mins, 60)
            print(f"\nModel Cache: {models_count} models (updated {hours}h {mins}m ago)")
        except (json.JSONDecodeError, ValueError):
            print("\nModel Cache: Invalid")
    else:
        print("\nModel Cache: Not created yet")

    # OpenClaw config path
    print(f"\nOpenClaw Config: {OPENCLAW_CONFIG_PATH}")
    print(f"  Exists: {'Yes' if OPENCLAW_CONFIG_PATH.exists() else 'No'}")


def cmd_refresh(args):
    """Force refresh the model cache."""
    api_key = get_api_key()
    if not api_key:
        print("Error: OPENROUTER_API_KEY not set")
        sys.exit(1)

    print("Refreshing free models cache...")
    models = get_free_models(api_key, force_refresh=True)
    print(f"Cached {len(models)} free models.")
    print(f"Cache expires in {CACHE_DURATION_HOURS} hours.")


def cmd_fallbacks(args):
    """Configure fallback models for rate limit handling."""
    api_key = get_api_key()
    if not api_key:
        print("Error: OPENROUTER_API_KEY not set")
        sys.exit(1)

    config = load_openclaw_config()
    current = get_current_model(config)

    if not current:
        print("Warning: No primary model configured.")
        print("Fallbacks will still be added.")

    print(f"Current primary: {current or 'None'}")
    print(f"Setting up {args.count} fallback models...")

    models = get_free_models(api_key)
    config = ensure_config_structure(config)

    # Get fallbacks excluding current model
    fallbacks = []

    # Always add openrouter/free as first fallback (smart router)
    free_router = "openrouter/free"
    free_router_primary = format_model_for_openclaw("openrouter/free", with_provider_prefix=True)
    if not current or current != free_router_primary:
        fallbacks.append(free_router)
        config["agents"]["defaults"]["models"][free_router] = {}

    for m in models:
        formatted = format_model_for_openclaw(m["id"], with_provider_prefix=False)
        formatted_primary = format_model_for_openclaw(m["id"], with_provider_prefix=True)

        if current and (formatted_primary == current):
            continue
        # Skip openrouter/free (already added as first)
        if "openrouter/free" in m["id"]:
            continue
        if len(fallbacks) >= args.count:
            break

        fallbacks.append(formatted)
        config["agents"]["defaults"]["models"][formatted] = {}

    config["agents"]["defaults"]["model"]["fallbacks"] = fallbacks
    save_openclaw_config(config)

    print(f"\nConfigured {len(fallbacks)} fallback models:")
    for i, fb in enumerate(fallbacks, 1):
        print(f"  {i}. {fb}")

    print("\nWhen rate limited, OpenClaw will automatically try these models.")
    print("Restart OpenClaw for changes to take effect.")


def main():
    parser = argparse.ArgumentParser(
        prog="freeride",
        description="FreeRide - Free AI for OpenClaw. Manage free models from OpenRouter."
    )
    subparsers = parser.add_subparsers(dest="command", help="Available commands")

    # list command
    list_parser = subparsers.add_parser("list", help="List available free models")
    list_parser.add_argument("--limit", "-n", type=int, default=15,
                             help="Number of models to show (default: 15)")
    list_parser.add_argument("--refresh", "-r", action="store_true",
                             help="Force refresh from API (ignore cache)")

    # switch command
    switch_parser = subparsers.add_parser("switch", help="Switch to a specific model")
    switch_parser.add_argument("model", help="Model ID to switch to")
    switch_parser.add_argument("--fallback-only", "-f", action="store_true",
                               help="Add to fallbacks only, don't change primary")
    switch_parser.add_argument("--no-fallbacks", action="store_true",
                               help="Don't configure fallback models")
    switch_parser.add_argument("--setup-auth", action="store_true",
                               help="Also set up OpenRouter auth profile")

    # auto command
    auto_parser = subparsers.add_parser("auto", help="Auto-select best free model")
    auto_parser.add_argument("--fallback-count", "-c", type=int, default=5,
                             help="Number of fallback models (default: 5)")
    auto_parser.add_argument("--fallback-only", "-f", action="store_true",
                             help="Add to fallbacks only, don't change primary")
    auto_parser.add_argument("--setup-auth", action="store_true",
                             help="Also set up OpenRouter auth profile")

    # status command
    subparsers.add_parser("status", help="Show current configuration")

    # refresh command
    subparsers.add_parser("refresh", help="Refresh model cache")

    # fallbacks command
    fallbacks_parser = subparsers.add_parser("fallbacks", help="Configure fallback models")
    fallbacks_parser.add_argument("--count", "-c", type=int, default=5,
                                  help="Number of fallback models (default: 5)")

    args = parser.parse_args()

    if args.command == "list":
        cmd_list(args)
    elif args.command == "switch":
        cmd_switch(args)
    elif args.command == "auto":
        cmd_auto(args)
    elif args.command == "status":
        cmd_status(args)
    elif args.command == "refresh":
        cmd_refresh(args)
    elif args.command == "fallbacks":
        cmd_fallbacks(args)
    else:
        parser.print_help()
        sys.exit(1)


if __name__ == "__main__":
    main()
1
requirements.txt
Normal file
@@ -0,0 +1 @@
requests>=2.31.0
25
setup.py
Normal file
@@ -0,0 +1,25 @@
from setuptools import setup

setup(
    name="freeride",
    version="1.0.0",
    description="Free AI for OpenClaw - Automatic free model management via OpenRouter",
    author="Shaishav Pidadi",
    url="https://github.com/Shaivpidadi/FreeRide",
    py_modules=["main", "watcher"],
    install_requires=[
        "requests>=2.31.0",
    ],
    entry_points={
        "console_scripts": [
            "freeride=main:main",
            "freeride-watcher=watcher:main",
        ],
    },
    python_requires=">=3.8",
    license="MIT",
    classifiers=[
        "License :: OSI Approved :: MIT License",
        "Programming Language :: Python :: 3",
    ],
)
41
skill.json
Normal file
@@ -0,0 +1,41 @@
{
  "name": "freeride",
  "displayName": "FreeRide - Free AI for OpenClaw",
  "version": "1.0.1",
  "description": "Unlimited free AI access for OpenClaw via OpenRouter's free models with automatic fallback switching",
  "author": "Shaishav Pidadi",
  "repository": "https://github.com/Shaivpidadi/FreeRide",
  "license": "MIT",
  "commands": [
    "list",
    "switch",
    "auto",
    "status",
    "refresh",
    "fallbacks"
  ],
  "binaries": {
    "freeride": "main:main",
    "freeride-watcher": "watcher:main"
  },
  "dependencies": {
    "requests": ">=2.31.0"
  },
  "config": {
    "OPENROUTER_API_KEY": {
      "description": "Your OpenRouter API key (get free at openrouter.ai/keys)",
      "required": true,
      "env": true
    }
  },
  "openclaw": {
    "compatible": true,
    "minVersion": "1.0.0",
    "configPath": "~/.openclaw/openclaw.json",
    "configKeys": [
      "agents.defaults.model",
      "agents.defaults.models"
    ]
  },
  "install": "npx clawhub@latest install freeride && cd ~/.openclaw/workspace/skills/free-ride && pip install -e ."
}
383
watcher.py
Normal file
@@ -0,0 +1,383 @@
#!/usr/bin/env python3
"""
FreeRide Watcher
Monitors for rate limits and automatically rotates models.
Can run as a daemon or be called periodically via cron.
"""

import json
import os
import sys
import time
import signal
from pathlib import Path
from datetime import datetime, timedelta
from typing import Optional

try:
    import requests
except ImportError:
    print("Error: requests library required")
    sys.exit(1)


# Import from main module
from main import (
    get_api_key,
    get_free_models,
    load_openclaw_config,
    save_openclaw_config,
    ensure_config_structure,
    format_model_for_openclaw,
    OPENCLAW_CONFIG_PATH
)


# Constants
STATE_FILE = Path.home() / ".openclaw" / ".freeride-watcher-state.json"
RATE_LIMIT_COOLDOWN_MINUTES = 30
CHECK_INTERVAL_SECONDS = 60
OPENROUTER_CHAT_URL = "https://openrouter.ai/api/v1/chat/completions"


def load_state() -> dict:
    """Load watcher state."""
    if STATE_FILE.exists():
        try:
            return json.loads(STATE_FILE.read_text())
        except json.JSONDecodeError:
            pass
    return {"rate_limited_models": {}, "rotation_count": 0}


def save_state(state: dict):
    """Save watcher state."""
    STATE_FILE.parent.mkdir(parents=True, exist_ok=True)
    STATE_FILE.write_text(json.dumps(state, indent=2))


def is_model_rate_limited(state: dict, model_id: str) -> bool:
    """Check if a model is currently in rate-limit cooldown."""
    rate_limited = state.get("rate_limited_models", {})
    if model_id not in rate_limited:
        return False

    limited_at = datetime.fromisoformat(rate_limited[model_id])
    cooldown_end = limited_at + timedelta(minutes=RATE_LIMIT_COOLDOWN_MINUTES)
    return datetime.now() < cooldown_end


def mark_rate_limited(state: dict, model_id: str):
    """Mark a model as rate limited."""
    if "rate_limited_models" not in state:
        state["rate_limited_models"] = {}
    state["rate_limited_models"][model_id] = datetime.now().isoformat()
    save_state(state)


# Annotation is quoted so it evaluates lazily; bare tuple[...] would fail on
# Python 3.8, which setup.py still supports.
def test_model(api_key: str, model_id: str) -> "tuple[bool, Optional[str]]":
    """
    Test if a model is available by making a minimal API call.
    Returns (success, error_type).
    """
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
        "HTTP-Referer": "https://github.com/Shaivpidadi/FreeRide",
        "X-Title": "FreeRide Health Check"
    }

    payload = {
        "model": model_id,
        "messages": [{"role": "user", "content": "Hi"}],
        "max_tokens": 5,
        "stream": False
    }

    try:
        response = requests.post(
            OPENROUTER_CHAT_URL,
            headers=headers,
            json=payload,
            timeout=30
        )

        if response.status_code == 200:
            return True, None
        elif response.status_code == 429:
            return False, "rate_limit"
        elif response.status_code == 503:
            return False, "unavailable"
        else:
            return False, f"error_{response.status_code}"

    except requests.Timeout:
        return False, "timeout"
    except requests.RequestException:
        return False, "request_error"


def get_next_available_model(api_key: str, state: dict, exclude_model: str = None) -> Optional[str]:
    """Get the next best model that isn't rate limited."""
    models = get_free_models(api_key)

    for model in models:
        model_id = model["id"]

        # Skip the openrouter/free router - we want specific models
        if "openrouter/free" in model_id:
            continue

        # Skip if same as excluded model
        if exclude_model and model_id == exclude_model:
            continue

        # Skip if in cooldown
        if is_model_rate_limited(state, model_id):
            continue

        # Test if actually available
        success, error = test_model(api_key, model_id)
        if success:
            return model_id

        # Mark as rate limited if that's the error
        if error == "rate_limit":
            mark_rate_limited(state, model_id)

    return None


def rotate_to_next_model(api_key: str, state: dict, reason: str = "manual"):
    """Rotate to the next available model."""
    config = load_openclaw_config()
    config = ensure_config_structure(config)
    current = config.get("agents", {}).get("defaults", {}).get("model", {}).get("primary")

    # Extract base model ID from OpenClaw format
    current_base = None
    if current:
        # openrouter/provider/model:free -> provider/model:free
        if current.startswith("openrouter/"):
            current_base = current[len("openrouter/"):]
        else:
            current_base = current

    print(f"[{datetime.now().isoformat()}] Rotating from: {current_base or 'none'}")
    print(f"  Reason: {reason}")

    next_model = get_next_available_model(api_key, state, current_base)

    if not next_model:
        print("  Error: No available models found!")
        return False

    print(f"  New model: {next_model}")

    # Update config - primary uses provider prefix, fallbacks don't
    formatted_primary = format_model_for_openclaw(next_model, with_provider_prefix=True)
    config["agents"]["defaults"]["model"]["primary"] = formatted_primary

    # Add to models allowlist
    formatted_for_list = format_model_for_openclaw(next_model, with_provider_prefix=False)
    config["agents"]["defaults"]["models"][formatted_for_list] = {}

    # Rebuild fallbacks from remaining models (using correct format: no provider prefix)
    models = get_free_models(api_key)
    fallbacks = []

    # Always add openrouter/free as first fallback
    free_router = "openrouter/free"
    fallbacks.append(free_router)
    config["agents"]["defaults"]["models"][free_router] = {}

    for m in models:
        if m["id"] == next_model or "openrouter/free" in m["id"]:
            continue
        if is_model_rate_limited(state, m["id"]):
            continue

        fb_formatted = format_model_for_openclaw(m["id"], with_provider_prefix=False)
        fallbacks.append(fb_formatted)
        config["agents"]["defaults"]["models"][fb_formatted] = {}

        if len(fallbacks) >= 5:
            break

    config["agents"]["defaults"]["model"]["fallbacks"] = fallbacks

    save_openclaw_config(config)

    # Update state
    state["rotation_count"] = state.get("rotation_count", 0) + 1
    state["last_rotation"] = datetime.now().isoformat()
    state["last_rotation_reason"] = reason
    save_state(state)

    print(f"  Success! Rotated to {next_model}")
    print(f"  Total rotations this session: {state['rotation_count']}")

    return True


def check_and_rotate(api_key: str, state: dict) -> bool:
    """Check current model and rotate if needed."""
    config = load_openclaw_config()
    current = config.get("agents", {}).get("defaults", {}).get("model", {}).get("primary")

    if not current:
        print("No primary model configured. Running initial setup...")
        return rotate_to_next_model(api_key, state, "initial_setup")

    # Extract base model ID
    if current.startswith("openrouter/"):
        current_base = current[len("openrouter/"):]
    else:
        current_base = current

    # Check if current model is rate limited
    if is_model_rate_limited(state, current_base):
        return rotate_to_next_model(api_key, state, "cooldown_active")

    # Test current model
    print(f"[{datetime.now().isoformat()}] Testing: {current_base}")
    success, error = test_model(api_key, current_base)

    if success:
        print("  Status: OK")
        return False  # No rotation needed
    else:
        print(f"  Status: {error}")
        if error == "rate_limit":
            mark_rate_limited(state, current_base)
        return rotate_to_next_model(api_key, state, error)


def cleanup_old_rate_limits(state: dict):
    """Remove rate limit entries that have expired."""
    rate_limited = state.get("rate_limited_models", {})
    current_time = datetime.now()
    expired = []

    for model_id, limited_at_str in rate_limited.items():
        try:
            limited_at = datetime.fromisoformat(limited_at_str)
            if current_time - limited_at > timedelta(minutes=RATE_LIMIT_COOLDOWN_MINUTES):
                expired.append(model_id)
        except (ValueError, TypeError):
            expired.append(model_id)

    for model_id in expired:
        del rate_limited[model_id]
        print(f"  Cleared cooldown: {model_id}")

    if expired:
        save_state(state)


def run_once():
    """Run a single check and rotate cycle."""
    api_key = get_api_key()
    if not api_key:
        print("Error: OPENROUTER_API_KEY not set")
        sys.exit(1)

    state = load_state()
    cleanup_old_rate_limits(state)
    check_and_rotate(api_key, state)


def run_daemon():
    """Run as a continuous daemon."""
    api_key = get_api_key()
    if not api_key:
        print("Error: OPENROUTER_API_KEY not set")
        sys.exit(1)

    print("FreeRide Watcher started")
    print(f"Check interval: {CHECK_INTERVAL_SECONDS}s")
    print(f"Rate limit cooldown: {RATE_LIMIT_COOLDOWN_MINUTES}m")
    print("-" * 50)

    # Handle graceful shutdown
    running = True

    def signal_handler(signum, frame):
        nonlocal running
        print("\nShutting down watcher...")
        running = False

    signal.signal(signal.SIGINT, signal_handler)
    signal.signal(signal.SIGTERM, signal_handler)

    state = load_state()

    while running:
        try:
            cleanup_old_rate_limits(state)
            check_and_rotate(api_key, state)
        except Exception as e:
            print(f"Error during check: {e}")

        # Sleep in small increments to allow graceful shutdown
        for _ in range(CHECK_INTERVAL_SECONDS):
            if not running:
                break
            time.sleep(1)

    print("Watcher stopped.")


def main():
    import argparse

    parser = argparse.ArgumentParser(
        prog="freeride-watcher",
        description="FreeRide Watcher - Monitor and auto-rotate free AI models"
    )
    parser.add_argument("--daemon", "-d", action="store_true",
                        help="Run as continuous daemon")
    parser.add_argument("--rotate", "-r", action="store_true",
                        help="Force rotate to next model")
    parser.add_argument("--status", "-s", action="store_true",
                        help="Show watcher status")
    parser.add_argument("--clear-cooldowns", action="store_true",
                        help="Clear all rate limit cooldowns")

    args = parser.parse_args()

    if args.status:
        state = load_state()
        print("FreeRide Watcher Status")
        print("=" * 40)
        print(f"Total rotations: {state.get('rotation_count', 0)}")
        print(f"Last rotation: {state.get('last_rotation', 'Never')}")
        print(f"Last reason: {state.get('last_rotation_reason', 'N/A')}")
        print("\nModels in cooldown:")
        for model, limited_at in state.get("rate_limited_models", {}).items():
            print(f"  - {model} (since {limited_at})")
        if not state.get("rate_limited_models"):
            print("  None")

    elif args.clear_cooldowns:
        state = load_state()
        state["rate_limited_models"] = {}
        save_state(state)
        print("Cleared all rate limit cooldowns.")

    elif args.rotate:
        api_key = get_api_key()
        if not api_key:
            print("Error: OPENROUTER_API_KEY not set")
            sys.exit(1)
        state = load_state()
        rotate_to_next_model(api_key, state, "manual_rotation")

    elif args.daemon:
        run_daemon()

    else:
        run_once()


if __name__ == "__main__":
    main()