# Ollama Setup
NeuroTerm uses Ollama to run AI locally on your machine. No API keys, no cloud, complete privacy.
## What is Ollama?
Ollama is an open-source tool that runs large language models (LLMs) locally on your computer. NeuroTerm connects to Ollama to power:
- Magic Input — Natural language to terminal commands
- Local RAG — Ask questions about your datasheets
- AI Transforms — Intelligent output filtering
## 1. Install Ollama
Download the installer for your platform from https://ollama.com/download and run it. After installation, Ollama runs as a background service, and you'll see the llama icon in your system tray.
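You can confirm the install from any terminal by checking the CLI version:

```
ollama --version
```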
## 2. Pull a Model
Open a terminal (cmd, PowerShell, or Windows Terminal) and pull a model:
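For example, to fetch the recommended default model:

```
ollama pull llama3.2
```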
### Recommended Models

- `llama3.2` — Best balance of speed and quality (2GB)
- `qwen2.5:3b` — Fast, good for Magic Input (2GB)
- `mistral` — Great instruction following (4GB)
The download may take a few minutes depending on your connection.
## 3. Verify Installation
Test that Ollama is running:
```
ollama list
```

Expected output:

```
NAME        ID              SIZE      MODIFIED
llama3.2    a80c4f17acd5    2.0 GB    2 minutes ago
```
You should see your downloaded model in the list.
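NeuroTerm talks to Ollama over HTTP, so it's also worth confirming the API itself is reachable. Ollama serves its installed-model list at `/api/tags`:

```
curl http://localhost:11434/api/tags
```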
## 4. Configure NeuroTerm
Open NeuroTerm and go to Settings (gear icon):
- Ollama URL: `http://localhost:11434`
- Model: `llama3.2`

Click "Test Connection" to verify NeuroTerm can reach Ollama.
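To sanity-check the endpoint and model name outside of NeuroTerm, you can send a one-off request to Ollama's generate API (the prompt here is just an example):

```
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Reply with the word OK.",
  "stream": false
}'
```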
## Troubleshooting

### "Connection refused" error

Make sure Ollama is running. Check your system tray for the llama icon, or run `ollama serve` manually.
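If the icon is there but connections still fail, running the server in the foreground makes any startup errors visible:

```
ollama serve
```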
### Slow responses

Try a smaller model like `qwen2.5:1.5b`. If you have an NVIDIA GPU, Ollama will use it automatically for faster inference.
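For example, pulling the smaller model (then update the model name in NeuroTerm's settings to match):

```
ollama pull qwen2.5:1.5b
```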
### Model not found

Ensure the model name in NeuroTerm's settings matches exactly what you pulled. Run `ollama list` to see available models.
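A quick way to reconcile the two, using this guide's default model:

```
# List installed models; the NAME column is what NeuroTerm expects
ollama list

# Re-pull under the exact tag if the model is missing
ollama pull llama3.2
```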