How to Install Ollama and Connect it to Definer
You can run many open-source AI models right on your computer or server.
Why Run AI Models Locally?
Running powerful AI models directly on your computer offers significant advantages:
- Complete privacy - Your data never leaves your device
- No costs - Forget about API fees and subscription charges
- No rate limits - Make unlimited requests without worrying about usage quotas
- Reliable performance - No dependency on internet connectivity
- Full customization - Fine-tune model parameters to optimize for your specific use cases
Definer seamlessly integrates with Ollama to give you access to these local models.
Note: Local models require adequate hardware resources. For optimal performance, you'll need sufficient RAM and ideally a dedicated GPU. If you experience slowdowns, try using a different model or adjusting model parameters to reduce resource usage.
Installing Ollama
Ollama is a lightweight application that lets you download and run sophisticated open-source AI models like Llama, Qwen, DeepSeek, and others directly on your Windows, Mac, or Linux machine.
- Download Ollama from the official website at ollama.com
- Follow the standard installation process for your operating system
- No account creation required
Running Your First Local AI Model
After installing Ollama, you can quickly get started with powerful models:
- Open your terminal or command prompt
- Run one of these commands to download and start a model:
# Latest Llama 3.2 model
ollama run llama3.2
# Lightweight but capable DeepSeek model
ollama run deepseek-r1:8b
Browse all available models at Ollama's model hub.
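If you'd like to double-check that everything is in place, you can inspect your local setup from the same terminal (the commands below assume the Ollama app is running and using its default port, 11434):
# List the models you've downloaded so far
ollama list
# The local API should answer on the default port
curl http://localhost:11434/api/tags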
Using Ollama with Definer
Setting up Definer to use your local models is straightforward:
- Right-click on the Definer icon located next to the address bar.
- Select "Definer Options".
- Go to the "Sources" page and find the "AI" source in the list.
- Click on "Settings".
- In the "Provider" field, choose "Ollama".
Now you're ready to enjoy private, cost-free AI assistance powered by your own computer through Definer! If you need help, feel free to create a new post with the "Help" flair in the r/lumetrium_definer subreddit.
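Behind the scenes, Definer sends its requests to Ollama's local HTTP API. If you ever want to sanity-check that API outside the extension, a request along these lines (assuming the default address and a model you've already pulled, such as llama3.2) should return a JSON response containing the generated text:
# Ask the local llama3.2 model for a short, non-streamed completion
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Say hello in one sentence.",
  "stream": false
}'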
Advanced: Connecting to Remote Ollama Instances
You can run Ollama on one machine (like a powerful desktop) and access it from another device (like a laptop) over your local network. This is ideal when you have a high-performance GPU on one machine but want to use AI from multiple devices.
Configuring Ollama for Remote Access
By default, Ollama only accepts local connections. To enable remote access, you need to configure two environment variables:
OLLAMA_HOST=0.0.0.0
OLLAMA_ORIGINS=*
macOS Configuration
- Open a terminal and enter the following commands:
launchctl setenv OLLAMA_HOST "0.0.0.0"
launchctl setenv OLLAMA_ORIGINS "*"
- Restart the Ollama application to apply the settings.
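If you want to confirm that the variables were registered, launchctl can read them back at any time:
# Should print 0.0.0.0 and * respectively
launchctl getenv OLLAMA_HOST
launchctl getenv OLLAMA_ORIGINS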
Windows Configuration
On Windows, Ollama inherits your user and system environment variables.
- Exit Ollama via the taskbar.
- Open Settings (Windows 11) or Control Panel (Windows 10), and search for "Environment Variables".
- Click to edit your account's environment variables.
- Edit or create a new variable OLLAMA_HOST for your user account, with the value 0.0.0.0.
- Edit or create a new variable OLLAMA_ORIGINS for your user account, with the value *.
- Click OK/Apply to save the settings.
- Launch the Ollama application from the Windows Start menu.
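If you prefer the command line over the Settings dialog, the same user-level variables can also be set from Command Prompt with the built-in setx tool (an equivalent alternative, not an extra step); exit and relaunch Ollama afterwards so it picks them up:
REM Set the variables for your user account; they apply to newly started programs
setx OLLAMA_HOST "0.0.0.0"
setx OLLAMA_ORIGINS "*"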
Linux Configuration (systemd)
If Ollama is running as a systemd service, use systemctl to set the environment variables:
- Invoke systemctl edit ollama.service to edit the systemd service configuration. This will open an editor.
- Under the [Service] section, add a line for each environment variable:
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_ORIGINS=*"
- Save and exit.
- Reload systemd and restart Ollama:
systemctl daemon-reload
systemctl restart ollama
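To verify that the change took effect, you can check the service status and confirm that Ollama is now listening on all interfaces rather than only 127.0.0.1 (shown here as an example; output details vary by distribution):
# The service should be active and report the new environment
systemctl status ollama
# Port 11434 should now be bound to 0.0.0.0 instead of 127.0.0.1
ss -tln | grep 11434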
Connecting from Another Device
After configuration, the Ollama service will be available on your current network (such as home WiFi). You can use Definer on other computers to connect to this service.
The IP address of the Ollama service is your computer's address on the current network, typically in the form:
192.168.XX.XX
In Definer, set the API Host to:
http://192.168.XX.XX:11434
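Before changing anything in Definer, it can help to test the connection from the second device with a quick request, replacing 192.168.XX.XX with the actual address of the machine running Ollama:
# Should return a JSON list of the models installed on the remote machine
curl http://192.168.XX.XX:11434/api/tags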
Security Considerations
- Configure your firewall to allow traffic on port 11434
- Only use this setup on trusted networks like your home WiFi
- Never expose your Ollama instance to the public internet without proper security measures
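As an illustration, if the machine running Ollama is a Linux box protected by ufw, a rule scoped to your local subnet keeps the port closed to everything else (the 192.168.1.0/24 range below is just an example; adjust it to match your network, and use your own firewall's equivalent on other systems):
# Allow connections to Ollama's port only from the local subnet
sudo ufw allow from 192.168.1.0/24 to any port 11434 proto tcp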
Troubleshooting
- Model runs slowly: Try a smaller model or adjust parameters
- Connection errors: Check firewall settings and ensure correct IP address
- Out of memory errors: Close other applications or try a more efficient model
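For example, if a full-size model is too heavy for your hardware, switching to a smaller variant of the same family is often the quickest fix (the 1B build of Llama 3.2 below is one such option listed on Ollama's model hub):
# Pull and run a much lighter variant that needs far less RAM/VRAM
ollama run llama3.2:1b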