A Bash Cheat Sheet: Adding a Local Ollama Model to watsonx Orchestrate

When working locally with IBM watsonx Orchestrate using the Agent Development Kit, I often want to test agents against local models instead of hosted ones.
For quick experiments, Ollama is usually the fastest path for this.

The pattern looks familiar and easy to automate:

  • start Ollama
  • verify the model endpoint
  • reset the local Orchestrate setup
  • register the model
  • start chatting

Instead of repeating this manually, I keep two small Bash scripts as cheat sheets: a simple, executable workflow.
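
Saved as two scripts (the file names below are just placeholders, not from the original post), the whole flow boils down to two terminals:

# Terminal 1: make Ollama reachable on an explicit host and port (placeholder file name)
bash start_ollama.sh

# Terminal 2: wire the model into watsonx Orchestrate and start chatting (placeholder file name)
bash register_ollama_model.sh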

The GIF below shows the interaction.

1. Why This Script Exists

Some of the setup can fail for very unexciting reasons:

  • Ollama not listening on the expected interface
  • wrong IP or port when running inside local orchestration
  • environment variables not exported
  • model registered, but not actually reachable

This script forces a known-good path:
First make Ollama reachable, then wire it into Orchestrate.
Nothing fancy — just disciplined automation.
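
A minimal preflight check in that spirit, assuming OLLAMA_CUSTOM_HOST is already exported as shown in the environment file further down:

# Fail fast if Ollama is not reachable before touching Orchestrate
if curl -sf "${OLLAMA_CUSTOM_HOST}/api/tags" > /dev/null; then
  echo "Ollama is reachable at ${OLLAMA_CUSTOM_HOST}"
else
  echo "Ollama is NOT reachable at ${OLLAMA_CUSTOM_HOST} - check host, port and OLLAMA_HOST" >&2
  exit 1
fi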

2. What the Script Does (High-Level)

The workflow runs as one linear flow, split across two scripts:

First script:

  • Starts Ollama on an explicit host and port

Second script:

  • Lists and sanity-checks available Ollama models
  • Verifies the Ollama /v1/chat/completions endpoint
  • Resets and starts the local Orchestrate server
  • Activates the local Orchestrate environment
  • Adds an Ollama-backed model (no connection required)
  • Starts an interactive chat
  • Streams relevant server logs

Colorized output is intentional — when something breaks, you want to see where immediately.
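
If you prefer not to repeat the echo -e banner lines, a tiny helper function along these lines (my own sketch, not part of the cheat sheets below) keeps the color handling in one place:

# Colors for output
BLUE='\033[0;34m'
YELLOW='\033[1;33m'
NC='\033[0m'

# Print a colorized section banner, e.g. step "Test Ollama connection"
step () {
  echo -e "\n${BLUE}========================================${NC}"
  echo -e "${YELLOW} $1 ${NC}"
}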

3. The Bash Cheat Sheets

  • Start Ollama (first terminal)
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'

echo -e "\n${BLUE}========================================${NC}"
echo -e "${YELLOW} Set Ollama port ${NC}"

# Stop a service-managed Ollama instance (systemd) so it can be restarted with an explicit bind address
systemctl stop ollama

# Bind Ollama to all interfaces on the default port so the local Orchestrate setup can reach it
export OLLAMA_HOST=0.0.0.0:11434
ollama serve
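
Before switching to the second terminal, it helps to confirm that Ollama really listens on the LAN address and not only on localhost. A quick check, assuming the same host and port as above:

# List the served models once via localhost and once via the LAN address
curl -s http://localhost:11434/api/tags
curl -s http://$(ipconfig getifaddr en0):11434/api/tags   # macOS; use your LAN IP on Linux
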
  • Register Ollama in watsonx Orchestrate (second terminal)
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'

export ENVIRONMENT="draft"

# Load the environment variables (OLLAMA_CUSTOM_HOST, OLLAMA_AGENT_MODEL, ...) before they are used below
source .env

echo -e "\n${BLUE}========================================${NC}"
echo -e "${YELLOW} List Ollama models ${NC}"
ollama list

echo -e "\n${BLUE}========================================${NC}"
echo -e "${YELLOW} Test Ollama connection ${NC}"
curl -X POST ${OLLAMA_CUSTOM_HOST}/v1/chat/completions \
  -H 'content-type: application/json' \
  -d '{ "model": "llama3.2", "messages": [{"content": "Hi","role": "user"}]}'
read -p "Press enter to continue"

echo -e "\n${BLUE}========================================${NC}"
echo -e "${YELLOW} Activating virtual environment... ${NC}"
source venv/bin/activate

# Reset and start the local Orchestrate server with the settings from .env
orchestrate server reset
orchestrate server start \
  --env-file .env \
  --with-connections-ui \
  --with-doc-processing \
  --accept-terms-and-conditions

# Switch to the local environment and show the models that are already registered
orchestrate env activate local
orchestrate models list

cd adk-project/model-gateway/
export AGENTMODEL=${OLLAMA_AGENT_MODEL}
echo "Model: ${AGENTMODEL}"

echo -e "\n${RED}========================================${NC}"
echo -e "${RED} Ollama does not require a connection ${NC}"

# Register the Ollama-backed model; quoting keeps the JSON intact as a single argument
PROVIDER_CONFIG="{\"api_key\":\"${OLLAMA_KEY}\",\"custom_host\":\"${OLLAMA_CUSTOM_HOST}\"}"
orchestrate models add --name "${AGENTMODEL}" --provider-config "${PROVIDER_CONFIG}"

# Chat with the new model and tail the AI gateway logs
orchestrate chat start
orchestrate server logs | grep "ai-gateway-1"
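
The provider-config JSON in the script above is assembled with escaped quotes; if jq is installed, a slightly more robust variant (a sketch, using the same variables from .env) would be:

# Assemble the provider config JSON with jq instead of manual escaping
PROVIDER_CONFIG=$(jq -cn \
  --arg api_key "${OLLAMA_KEY}" \
  --arg custom_host "${OLLAMA_CUSTOM_HOST}" \
  '{api_key: $api_key, custom_host: $custom_host}')
orchestrate models add --name "${AGENTMODEL}" --provider-config "${PROVIDER_CONFIG}"
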
  • Optional Model configuration file
name: ollama/OLLAMA_MODEL_TO_REPLACE
display_name: Ollama LLaMA 3.2
description: |
  Ollama-hosted LLaMA 3.2 model for local or edge deployments.
tags:
  - ollama
  - llama
model_type: chat
provider_config:
  api_key: ollama
  custom_host: OLLAMA_URL_TO_REPLACE

The command to import the model configuration file:

orchestrate models import -f ./adk-project/model-gateway/model_gateway_config_ollama.yaml
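
Because the YAML contains the placeholders OLLAMA_MODEL_TO_REPLACE and OLLAMA_URL_TO_REPLACE, it is handy to fill them from the .env values before importing. A sed-based sketch, assuming the placeholders should simply receive OLLAMA_AGENT_MODEL and OLLAMA_CUSTOM_HOST, writing to a temporary copy so the template stays untouched:

# Fill the placeholders from the environment and import the resulting file
sed -e "s|OLLAMA_MODEL_TO_REPLACE|${OLLAMA_AGENT_MODEL}|" \
    -e "s|OLLAMA_URL_TO_REPLACE|${OLLAMA_CUSTOM_HOST}|" \
    ./adk-project/model-gateway/model_gateway_config_ollama.yaml > /tmp/model_gateway_config_ollama.yaml
orchestrate models import -f /tmp/model_gateway_config_ollama.yaml
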
  • Environment file template

The ipconfig getifaddr en0 command is important in this context: it returns the machine's local network address (on macOS), which the local Orchestrate setup uses to reach Ollama. I use it directly when setting the environment variable: export OLLAMA_CUSTOM_HOST=http://$(ipconfig getifaddr en0):11434

# watsonx Orchestrate
export WO_DEVELOPER_EDITION_SOURCE=myibm
export WO_ENTITLEMENT_KEY=
export WATSONX_APIKEY=
export WATSONX_SPACE_ID=<ID>
export WXO_API_KEY=${WATSONX_APIKEY}
export WATSONX_REGION=us-south
export WATSONX_URL=https://${WATSONX_REGION}.ml.cloud.ibm.com
export ASSISTANT_LLM_API_BASE=${WATSONX_URL}
export ASSISTANT_EMBEDDINGS_API_BASE=${WATSONX_URL}
export ROUTING_LLM_API_BASE=${WATSONX_URL}
export ORCHESTRATE_API_TOKEN=blablub
# Ollama
export OLLAMA_AGENT_MODEL=llama3.2:latest
export OLLAMA_KEY=ollama
export OLLAMA_CUSTOM_HOST=http://$(ipconfig getifaddr en0):11434
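
ipconfig getifaddr en0 works on macOS only; on Linux, a rough equivalent for the same variable could be:

# Linux alternative: take the first address reported by hostname -I
export OLLAMA_CUSTOM_HOST=http://$(hostname -I | awk '{print $1}'):11434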

4. Additional Resources


I hope this was useful to you. Let's see what's next!

Greetings,

Thomas

#watsonxOrchestrate, #Ollama, #LocalLLM, #AgenticAI, #AIAgents, #AIOrchestration, #LLMOps, #BashAutomation, #LocalAI
