The Distributed Open Llama Network

Your GPU is idle.
Someone's agent needs it.

Use Llama 3.1 8B for free in Claude Code, Continue, and other agentic tools. Contribute your compute when you're not using it — your machine stays fully available, and you build priority for faster inference. No tokens, no fees, no downside.


Folding@home, but for LLM inference

01

Install & choose your role

Grab the dollama CLI. Run as a user to consume inference, as a contributor to share compute, or both to do it all.

02

Smart routing, not random

  • Nodes report hardware benchmarks (tokens/sec, RAM, VRAM)
  • Coordinator routes to the best available node
  • When busy, you queue by contribution balance
  • Heartbeat monitoring drops stale nodes automatically
03

Free inference, fair priority

No billing, no blockchain. A simple token ledger tracks what you've served minus what you've consumed. Contribute more and your requests jump the queue. When the network is idle, everyone gets instant service.
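The whole ledger fits in a few lines of bookkeeping. A minimal sketch, assuming per-user token counters — the class and method names here are illustrative, not dollama's actual schema:

```python
class Ledger:
    """Tracks tokens served to the network minus tokens consumed from it."""

    def __init__(self) -> None:
        self.served: dict[str, int] = {}    # tokens a user's node generated for others
        self.consumed: dict[str, int] = {}  # tokens a user pulled from the network

    def record_served(self, user: str, tokens: int) -> None:
        self.served[user] = self.served.get(user, 0) + tokens

    def record_consumed(self, user: str, tokens: int) -> None:
        self.consumed[user] = self.consumed.get(user, 0) + tokens

    def balance(self, user: str) -> int:
        """Priority balance: higher means your requests jump the queue."""
        return self.served.get(user, 0) - self.consumed.get(user, 0)
```

A user who has served 5,000 tokens and consumed 1,200 sits at a balance of 3,800 — ahead of pure consumers whenever the network is saturated.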

Your machine (Claude Code / IDE + dollama proxy)
  Has: your files, context, repo
        │ prompt (HTTPS)
        ▼
Coordinator relay
  Routes requests, manages queue
  Sees: prompt plaintext (v1)
        │ prompt (HTTPS)
        ▼
Contributor node (Ollama runtime, Llama 3.1 8B)
  Sees: raw prompt only

v1: all traffic flows through the coordinator. Direct peer connections with end-to-end encryption are planned for Phase 4.

Install once, pick how you participate

After installing, dollama lives in your system tray. Switch modes anytime from the menu — or use the terminal commands below.

USE
I need inference

Use the network

Runs a local proxy that your coding tools connect to. Get LLM inference from the network while your code and context stay on your machine — only the inference prompt is sent.
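To make "only the inference prompt is sent" concrete, here is roughly what a forwarded request body looks like. The port and endpoint path are assumptions for illustration (check `dollama --help` for the real values); the payload follows the Anthropic-style Messages shape the page's supported tools speak:

```python
import json

# Hypothetical local proxy address — illustrative, not dollama's documented port.
PROXY_URL = "http://localhost:8080/v1/messages"

payload = {
    "model": "llama3.1:8b",
    "max_tokens": 512,
    "messages": [
        {"role": "user", "content": "Explain this function: def add(a, b): return a + b"}
    ],
}

# This serialized string is everything that leaves your machine —
# no repo contents, no file tree, no editor state.
body = json.dumps(payload)
```

Your tool talks to the proxy on localhost; the proxy forwards only this body upstream.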

GIVE
Almost zero downside

Contribute cycles

Runs quietly in the background on your idle GPU/CPU. Your machine stays fully yours — when you need it, it's there. The only cost is a little extra power when you're not using it. Your contribution builds your priority balance so you get faster inference in return.

BOTH
Best of both worlds

Use & contribute

The recommended choice. Contribute your idle cycles and use the network for inference — when you need a burst of compute, you've already earned it. The more people run both, the faster and more reliable the network becomes for everyone.

Terminal commands

$ dollama connect
$ dollama serve
$ dollama both

Add --auto-start to launch on boot. See dollama --help for all options.

Up and running in 2 minutes

1

Install Ollama + dollama

curl -fsSL https://ollama.com/install.sh | sh
curl -fsSL https://dollama.net/install.sh | sh
2

Launch Claude Code

dollama launch claude
3

Start coding

You're running on community compute. Claude Code is configured automatically.

Install in one command

🦙 Ollama: required runtime
💾 8GB+ RAM: recommended
💻 macOS / Linux / Windows: cross-platform
🧠 llama3.1:8b: network model
Terminal
curl -fsSL https://dollama.net/install.sh | sh

If you plan to contribute, you'll also need Ollama installed with llama3.1:8b pulled.

PowerShell
irm https://dollama.net/install.ps1 | iex


What the installer does: downloads the latest dollama binary for your platform, verifies the checksum, and moves it to /usr/local/bin. That's it — no services, no daemons, no config changes. Read the script source.

Direct binary download

Verify checksums

curl -fsSL https://dollama.net/dl/latest/checksums.txt -o checksums.txt
sha256sum -c checksums.txt

Build from source

git clone https://github.com/notangrywaffle/dollama.net.git
cd dollama.net/cli && make build

The herd at a glance

Nodes online
Active contributors right now

Tokens processed
Total inference served

Requests completed
Successful inferences

Network load
Inference slots in use

Works with

Claude Code · Continue · Aider · any Anthropic-compatible tool

Open source

Full source code on GitHub. Coordinator, CLI, installer, and this website — all public.

Built on

Ollama · Llama 3.1 8B · Cloudflare Workers

Report a bug or request a feature