AI Setup

Configure local AI for your PC2 personal cloud. This guide covers Ollama installation, model selection, GPU acceleration, and connecting external providers.

With Ollama (local AI), your conversations stay 100% private: they never leave your machine.

Installing Ollama

curl -fsSL https://ollama.com/install.sh | sh

Verify installation:

ollama --version
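
On Linux, the install script also registers Ollama as a systemd service and starts it immediately. A quick way to confirm the API is listening on its default port (11434):

# Should return a JSON list of installed models (empty at first)
curl http://localhost:11434/api/tags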
Choosing a Model

Model             Size  RAM Needed  Best For
deepseek-r1:1.5b  1GB   4GB         Fast responses, basic tasks
llama3.2:3b       2GB   6GB         Good balance
phi3:mini         2GB   6GB         Microsoft’s efficient model
mistral:7b        4GB   8GB         Strong general purpose
llama3.1:8b       5GB   12GB        Complex reasoning
codellama:7b      4GB   8GB         Code generation
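
Not sure which one fits? Check how much memory is actually free first. On Linux:

# Show total and available RAM in human-readable units
free -h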

Install a Model

ollama pull deepseek-r1:1.5b

Or via PC2: Settings → AI Setup → Click “Install” on any model.
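
To confirm the model works, you can run a one-off prompt from the terminal:

# Loads the model, answers the prompt, then exits
ollama run deepseek-r1:1.5b "Say hello"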

List Installed Models

ollama list

GPU Acceleration

A GPU dramatically speeds up model responses.

NVIDIA GPUs (CUDA)

Ollama automatically uses NVIDIA GPUs if CUDA is available.

# Check GPU is being used
nvidia-smi
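
Recent Ollama versions can also report where each loaded model is running:

# The PROCESSOR column shows GPU vs CPU for each loaded model
ollama ps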

Apple Silicon (M1/M2/M3)

Ollama automatically uses Metal acceleration. No configuration needed.

No GPU?

CPU-only works for smaller models (1.5b-3b). Larger models will be slow but functional.

Connecting to Remote Ollama

If running Ollama on a different machine (like a powerful server):

On the Ollama Server

OLLAMA_HOST=0.0.0.0 ollama serve
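
From the PC2 machine, verify the server is reachable before saving the URL (replace server-ip with the server's actual address, and make sure port 11434 is open in its firewall):

# Should return the remote server's model list as JSON
curl http://server-ip:11434/api/tags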

In PC2 Settings

  1. Go to Settings → AI Setup
  2. Set Ollama URL to http://server-ip:11434
  3. Save

External AI Providers

PC2 also supports cloud providers for when you need more power:

Provider   Models          Get API Key
OpenAI     GPT-4, GPT-3.5  platform.openai.com
Anthropic  Claude 3        console.anthropic.com
Google     Gemini          aistudio.google.com
xAI        Grok            xAI dashboard
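
To sanity-check a key before entering it in PC2, most providers expose a simple authenticated endpoint. For OpenAI, for example (assuming the key is exported as OPENAI_API_KEY):

# A 200 response listing models means the key is valid
curl https://api.openai.com/v1/models -H "Authorization: Bearer $OPENAI_API_KEY"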
⚠️ Cloud providers send data to their servers. Use Ollama for maximum privacy.

Troubleshooting

“Ollama not available”

# Check if running
curl http://localhost:11434/api/tags
 
# Start Ollama
ollama serve
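
If the server still won't start, the logs usually say why. On Linux installs managed by systemd (the service the install script sets up is typically named ollama):

# Follow Ollama's service logs
journalctl -u ollama -f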

Slow Responses

  1. Use a smaller model (deepseek-r1:1.5b)
  2. Enable GPU (see above)
  3. Check system resources: htop

Out of Memory

  1. Use a smaller or quantized model
  2. Close other applications
  3. Add more RAM or swap (sketch below)
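
On Linux, a swap file is a quick stopgap; a minimal sketch (the 8GB size and /swapfile path are just examples):

# Create and enable an 8GB swap file
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile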

Model Recommendations by Use Case

Use Case        Recommended Model
General chat    llama3.2:3b, mistral:7b
Coding          codellama:7b, deepseek-coder:6.7b
Writing         mistral:7b, llama3.1:8b
Fast responses  deepseek-r1:1.5b, phi3:mini