Docs
How the priority system works, what data flows where, and how to configure your tools.
Don't send secrets, credentials, or proprietary code you wouldn't share with a cloud API.
No tokens, no blockchain, no marketplace. Just a running tally that rewards generosity.
balance = tokens served − tokens consumed
| | Idle network | Busy network |
|---|---|---|
| Response time | Instant | Queued by balance |
| Balance needed? | No — everyone served | Higher balance = faster |
| New users | Full access | Lower priority |
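When the network is busy, the behavior in the table above amounts to a priority queue ordered by balance. A minimal sketch (the class and method names are illustrative, not the actual coordinator code):

```python
import heapq

class BalanceQueue:
    """Serve queued requests highest-balance-first; FIFO among equal balances."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # arrival counter, breaks ties in order of arrival

    def push(self, request_id, balance):
        # Negate balance so the highest balance pops first from the min-heap.
        heapq.heappush(self._heap, (-balance, self._seq, request_id))
        self._seq += 1

    def pop(self):
        return heapq.heappop(self._heap)[2]

q = BalanceQueue()
q.push("new-user", 0)         # balance = tokens served - tokens consumed
q.push("contributor", 5000)   # has served far more than it consumed
q.push("heavy-user", -1200)   # net consumer
order = [q.pop() for _ in range(3)]
# → ["contributor", "new-user", "heavy-user"]
```

New users start at zero, so they are served before net consumers but after contributors, which matches the "lower priority, not locked out" behavior described above.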
Transparency over marketing. Here's exactly how your data flows through the network today.
Your files, repository context, and working directory never leave your machine. The local proxy assembles context locally — only the final inference prompt is sent to the network.
Contributor nodes receive only the raw inference prompt and return generated tokens. No file access, no user identity, no conversation history beyond the current request. Nodes are stateless: they process a prompt and move on.
| Data | Your machine | Coordinator | Contributor node |
|---|---|---|---|
| Files & repo context | Local only | Never sent | Never sent |
| Inference prompt | Assembled here | Plaintext (v1) | Plaintext (v1) |
| User identity | Known | Token only | Anonymous |
| Conversation history | Full context | Per-request only | Per-request only |
Phase 4 introduces direct peer connections with end-to-end encryption. Until then, treat the network like any cloud API: don't send secrets you wouldn't send to a hosted LLM provider.
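The split described above can be pictured as a short sketch (the function and field names are illustrative, not the actual proxy API): context is assembled from local files on your machine, and only the flattened prompt string crosses the network.

```python
from pathlib import Path

def assemble_prompt(user_message: str, context_files: list) -> str:
    """Runs locally: reads repo files and flattens them into one prompt string."""
    parts = []
    for name in context_files:
        path = Path(name)
        if path.exists():
            parts.append(f"// {name}\n{path.read_text()}")
    parts.append(user_message)
    return "\n\n".join(parts)

def build_network_request(prompt: str) -> dict:
    """Only this payload leaves the machine: model + prompt, no files, no identity."""
    return {"model": "network:llama3.1:8b", "prompt": prompt}

payload = build_network_request(assemble_prompt("Explain this function", []))
# payload contains exactly two keys: "model" and "prompt"
```

Everything the functions read stays on disk; the dict returned by `build_network_request` is the entire network-visible surface of a request.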
Run `dollama launch claude`: it starts the local proxy, configures the environment, and opens Claude Code automatically. That's it.
Run `dollama both` first, then `dollama launch claude` in another terminal: you'll use the network and share your idle compute.
```
dollama launch claude
```
```json
{
  "env": {
    "ANTHROPIC_BASE_URL": "http://localhost:11435",
    "ANTHROPIC_API_KEY": "dollama-proxy"
  },
  "model": "network:llama3.1:8b"
}
```
```
ollama pull llama3.1:8b
```
Once `llama3.1:8b` is pulled, run `dollama serve`. The CLI handles registration, heartbeats, and routing automatically. Any machine that can run Ollama can contribute.

The proxy listens at `localhost:11435`. Any tool that supports a custom base URL can use it: Continue, Aider, or your own scripts.
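As a sketch of wiring your own script to the proxy, assuming it emulates the Anthropic Messages API implied by the `ANTHROPIC_BASE_URL` setting (the endpoint path and header names are assumptions; adjust if the proxy differs):

```python
import json
import urllib.request

BASE_URL = "http://localhost:11435"  # the local dollama proxy

def build_request(prompt: str) -> urllib.request.Request:
    """Build an Anthropic-style chat request aimed at the local proxy."""
    body = json.dumps({
        "model": "network:llama3.1:8b",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/v1/messages",
        data=body,
        headers={
            "content-type": "application/json",
            "x-api-key": "dollama-proxy",  # placeholder key, per the config above
        },
    )

req = build_request("Summarize this repo")
# urllib.request.urlopen(req) would send it once the proxy is running
```

Because the proxy speaks a standard API shape, swapping a tool from a hosted provider to the network is usually just a base-URL change.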