
How to Install Ollama and Connect it to Definer

You can run many open-source AI models right on your computer or server.

Why Run AI Models Locally?

Running powerful AI models directly on your computer offers significant advantages:

  • Complete privacy - Your data never leaves your device
  • No costs - Forget about API fees and subscription charges
  • No rate limits - Make unlimited requests without worrying about usage quotas
  • Reliable performance - No dependency on internet connectivity
  • Full customization - Fine-tune model parameters to optimize for your specific use cases

Definer seamlessly integrates with Ollama to give you access to these local models.

Note: Local models require adequate hardware resources. For optimal performance, you'll need sufficient RAM and ideally a dedicated GPU. If you experience slowdowns, try using a different model or adjusting model parameters to reduce resource usage.

Installing Ollama

Ollama is a lightweight application that lets you download and run sophisticated open-source AI models like Llama, Qwen, DeepSeek, and others directly on your Windows, Mac, or Linux machine.

  1. Download Ollama from the official website (ollama.com)
  2. Follow the standard installation process for your operating system
  3. No account creation required

ollama-install.avif
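
Once the installer finishes, you can confirm that Ollama is available from the command line. This is just a quick sanity check; the version number printed will depend on the release you installed:

ollama --version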

Running Your First Local AI Model

After installing Ollama, you can quickly get started with powerful models:

  1. Open your terminal or command prompt
  2. Run one of these commands to download and start a model:
# Latest Llama 3.2 model
ollama run llama3.2

# Lightweight but capable DeepSeek model
ollama run deepseek-r1:8b

Browse all available models at Ollama's model hub.

ollama-run-model.avif
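
If you want to see which models are already downloaded on your machine, Ollama's list command prints everything stored locally:

# Show models downloaded to this machine
ollama list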

Using Ollama with Definer

Setting up Definer to use your local models is very straightforward:

  1. Right-click on the Definer icon located next to the address bar.
  2. Select "Definer Options".
  3. Go to the "Sources" page and find the "AI" source in the list.
  4. Click on "Settings".
  5. In the "Provider" field, choose "Ollama".

definer-ollama.avif
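
Before relying on the connection in Definer, you can check that the local Ollama server is reachable. By default it listens on port 11434, and the root endpoint replies with a short status message (an optional sanity check, not a required step):

curl http://localhost:11434/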

Now you're ready to enjoy private, cost-free AI assistance powered by your own computer through Definer! If you need help, feel free to create a new post with the "Help" flair in the r/lumetrium_definer subreddit.

Advanced: Connecting to Remote Ollama Instances

You can run Ollama on one machine (like a powerful desktop) and access it from another device (like a laptop) over your local network. This is ideal when you have a high-performance GPU on one machine but want to use AI from multiple devices.

Configuring Ollama for Remote Access

By default, Ollama only accepts local connections. To enable remote access, you need to configure two environment variables:

OLLAMA_HOST=0.0.0.0
OLLAMA_ORIGINS=*

macOS Configuration

  1. Open the command line terminal and enter the following commands:
launchctl setenv OLLAMA_HOST "0.0.0.0"
launchctl setenv OLLAMA_ORIGINS "*"
  2. Restart the Ollama application to apply the settings.
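
To confirm the variables were set before restarting, you can read them back with launchctl (an optional check; each command should print the value you just entered):

launchctl getenv OLLAMA_HOST
launchctl getenv OLLAMA_ORIGINS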

Windows Configuration

On Windows, Ollama inherits your user and system environment variables.

  1. Exit Ollama via the taskbar.
  2. Open Settings (Windows 11) or Control Panel (Windows 10), and search for "Environment Variables".
  3. Click to edit your account's environment variables.
  4. Edit or create a variable named OLLAMA_HOST for your user account with the value 0.0.0.0, and another named OLLAMA_ORIGINS with the value *. (A command-line alternative is shown after this list.)
  5. Click OK/Apply to save the settings.
  6. Launch the Ollama application from the Windows Start menu.
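
If you prefer the command line, the same variables can be set from a Command Prompt with setx instead of the Settings UI (a rough equivalent of steps 3-5 above; setx only affects processes started afterwards, so you still need to restart Ollama):

setx OLLAMA_HOST "0.0.0.0"
setx OLLAMA_ORIGINS "*"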

Linux Configuration (systemd)

If Ollama is running as a systemd service, use systemctl to set the environment variables:

  1. Invoke systemctl edit ollama.service to edit the systemd service configuration. This will open an editor.

  2. Under the [Service] section, add a line for each environment variable:

[Service]
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_ORIGINS=*"
  3. Save and exit.

  4. Reload systemd and restart Ollama:

systemctl daemon-reload
systemctl restart ollama
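
To verify that the service picked up the new environment, you can query systemd directly (an optional check; the output should list both variables):

systemctl show ollama --property=Environment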

Connecting from Another Device

After configuration, the Ollama service will be available on your current network (such as home WiFi). You can use Definer on other computers to connect to this service.

The IP address of the Ollama service is your computer's address on the current network, typically in the form:

192.168.XX.XX
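
If you are unsure what that address is, the usual OS tools will show it. The commands below are common defaults; the macOS interface name en0 is an assumption and may differ on your machine:

# Windows
ipconfig

# macOS
ipconfig getifaddr en0

# Linux
ip addr show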

In Definer, set the API Host to:

http://192.168.XX.XX:11434

remote-access.avif
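
To confirm the remote instance is reachable before troubleshooting Definer itself, you can query its API from a terminal on the other device (replace the placeholder with your machine's actual IP; /api/tags returns the models installed on the remote host):

curl http://192.168.XX.XX:11434/api/tags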

Security Considerations

  • Configure your firewall to allow traffic on port 11434 (see the example after this list)
  • Only use this setup on trusted networks like your home WiFi
  • Never expose your Ollama instance to the public internet without proper security measures
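
As an example of the firewall rule mentioned above, on a Linux host running ufw you could open the port like this (ufw is an assumption; use the equivalent rule in Windows Defender Firewall, firewalld, or whatever firewall your machine uses):

sudo ufw allow 11434/tcp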

Troubleshooting

  • Model runs slowly: Try a smaller model or adjust parameters
  • Connection errors: Check firewall settings and ensure correct IP address
  • Out of memory errors: Close other applications or try a more efficient model

Resources