Integrating langchain_ibm with watsonx and LangChain for function calls: Example and Tutorial

This blog post walks through an example of using the ChatWatsonx class from langchain_ibm for “function calls” with LangChain, watsonx, and the mistralai/mixtral-8x7b-instruct-v01 model running in watsonx.ai.

Note: For a complete working weather-query example in an agent implementation, please visit the blog post Implementing LangChain AI Agent with WatsonxLLM for a Weather Queries application.

  • “LangChain is a framework for developing applications powered by large language models (LLMs).”
  • “IBM watsonx™ AI and data platform brings together new generative AI (gen AI) capabilities powered by foundation models and traditional machine learning (ML).”

You can find the complete source code in this GitHub repository: https://github.com/thomassuedbroecker/function_calling_langchain_watsonx_example.git

Table of contents

  1. Introduction
  2. Example “function call”: a chat to get temperature information for various cities
    1. Starting point: provide input, prompt, and function definitions
    2. Invoke the model using chat messages
    3. Use the model response, converted by LangChain to an AIMessage, to get the information needed to invoke the external weather service
    4. Extract the tool_calls from the AIMessage
    5. Invoke the external weather service using the arguments
    6. Show the result
  3. The example application
  4. Set up and run the example
    1. Clone the repository to your local machine
    2. Generate a virtual Python environment
    3. Install the needed libraries
    4. Generate a .env file for the needed environment variables
    5. Run the example
  5. Additional resources
  6. Summary

1. Introduction

To integrate watsonx with LangChain, you can use the langchain_ibm Python library to call an LLM. The langchain_ibm GitHub repository is part of the LangChain GitHub project. The ChatWatsonx class of langchain_ibm implements the “function calls” integration with a tools_prompt for the Mixtral model. Here is a copy of the prompt:

You are Mixtral Chat function calling, an AI language model developed by Mistral AI. 
You are a cautious assistant. You carefully follow instructions. You are helpful and 
harmless and you follow ethical guidelines and promote positive behavior. Here are a 
few of the tools available to you:
[AVAILABLE_TOOLS]
{json.dumps(tools[0], indent=2)}
[/AVAILABLE_TOOLS]
To use these tools you must always respond in JSON format containing `"type"` and 
`"function"` key-value pairs. Also `"function"` key-value pair always containing 
`"name"` and `"arguments"` key-value pairs. For example, to answer the question, 
"What is a length of word think?" you must use the get_word_length tool like so:

```json
{{
    "type": "function",
    "function": {{
        "name": "get_word_length",
        "arguments": {{
            "word": "think"
        }}
    }}
}}
```
</endoftext>

Remember, even when answering to the user, you must still use this JSON format! 
If you'd like to ask how the user is doing you must write:

```json
{{
    "type": "function",
    "function": {{
        "name": "Final Answer",
        "arguments": {{
            "output": "How are you today?"
        }}
    }}
}}
```
</endoftext>

Remember to end your response with '</endoftext>'

{chat_prompt}
(reminder to respond in a JSON blob no matter what and use tools only if necessary)

Here is a description of “function calling” in my own words: an LLM does not contain all the information needed to answer a question because it lacks the actual data, and the data cannot always be provided before the LLM invocation, as in a RAG use case where you first search for an initial set of answers in a search engine and then provide them as context for the prompt you send to the LLM.

Consider a scenario where you need to access weather data that changes throughout the day. In such cases, integrating an external service becomes essential to obtain the necessary data, such as the current weather at a specific location from a weather service. The LLM itself does not invoke this service, but it can identify the appropriate service and the required parameters for the invocation when you provide the relevant resources in the form of context, usually in a JSON format known as “tools”, as you see in this example.

To identify the right external service, we need these elements:

  • Identify the intention of the given text, which relates to the given system prompt in a chat situation.
  • Identify the external function needed to fulfill the intention given in the text.
  • Extract the required parameters from the text to invoke the external function.

2. Example “function call”: a chat to get temperature information for various cities

Step 1: Starting point: provide input, prompt, and function definitions

  • Input text. Example question:

"Which city is hotter today: LA or NY?"

  • LLM system prompt:

"You are a weather expert. If the question is not about the weather, say: I don't know."

  • A list of function definitions (the tools) provided to the model; here is the format used in this example:
[{
    "type": "function",
    "function": {
        "name": "weather_service",
        "description": "weather advisor api provide all of the weather needs and information in that api, it serve weather information",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The cities list e.g. [San Francisco, LA, New York]"
                }
            },
            "required": [
                "location"
            ]
        }
    }
 }]
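
As an aside, a LangChain tool does not have to be a raw JSON definition; bind_tools also accepts decorated Python functions, from which LangChain derives the same schema. A minimal sketch (the stub body is my assumption, not code from the example repository):

from langchain_core.tools import tool

@tool
def weather_service(location: str) -> str:
    """Weather advisor API that serves weather information for a list of cities, e.g. [San Francisco, LA, New York]."""
    # Stub only: the real data comes from the external weather service.
    return f"weather for {location}"

# Either form can be bound to the chat model:
# watsonx_chat.bind_tools([weather_service])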

Step 2: Invoke the model using chat messages

Chat message format:

weather_messages = [
    ("system", weather_system_prompt),
    ("human", weather_question)
]

weather_aimessage = watsonx_chat_with_tools.invoke(input=weather_messages)
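
For reference, the ("system", …) and ("human", …) tuples are shorthand that LangChain converts into message objects; the equivalent explicit form uses the message classes from langchain_core:

from langchain_core.messages import SystemMessage, HumanMessage

weather_messages = [
    SystemMessage(content=weather_system_prompt),
    HumanMessage(content=weather_question),
]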

Step 3: Use the model response, converted by LangChain to an AIMessage, to get the information needed to invoke the external weather service

The model provides the function name and the needed parameters to invoke an external weather service.

  • AI Message format:
content='' additional_kwargs={'tool_calls': {'type': 'function', 'function': {'name': 'weather_service', 'arguments': {'city': 'LA, NY'}}}} response_metadata={'token_usage': {'generated_token_count': 65, 'input_token_count': 723}, 'model_name': 'mistralai/mixtral-8x7b-instruct-v01', 'system_fingerprint': '', 'finish_reason': 'stop_sequence'} id='run-309c4371-da6f-4e28-9862-664fe1a140c2-0' tool_calls=[{'name': 'weather_service', 'args': {'city': 'LA, NY'}, 'id': '1723450585.585', 'type': 'tool_call'}] usage_metadata={'input_tokens': 723, 'output_tokens': 65, 'total_tokens': 788}

Step 4: Extract the tool_calls from the AIMessage

[{'name': 'weather_service', 'args': {'city': 'LA, NY'}, 'id': '1723450740.908', 'type': 'tool_call'}]

Step 5: Invoke the external weather service using the arguments

asyncio.run(getweather(weather_aimessage.tool_calls[0]['args']))
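
Hard-coding getweather works for a single tool; with several tools you would dispatch on the tool name returned by the model. A small sketch (the available_tools registry is my assumption, not code from the repository):

import asyncio

# Map tool names from the model response to the callables that implement them.
available_tools = {"weather_service": getweather}

tool_call = weather_aimessage.tool_calls[0]
result = asyncio.run(available_tools[tool_call["name"]](tool_call["args"]))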

Step 6: Show the result

[{'city': 'LA', 'temperature': '11 celsius'}, {'city': ' NY', 'temperature': '13 celsius'}]
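
An optional follow-up step, not part of this example: you can return the tool result to the model as a ToolMessage so the LLM phrases a final natural-language answer. A hedged sketch; whether the Mixtral tools prompt in watsonx handles this round trip cleanly is an assumption I have not verified:

import asyncio
from langchain_core.messages import ToolMessage

tool_call = weather_aimessage.tool_calls[0]
weather_result = asyncio.run(getweather(tool_call["args"]))

# Append the assistant message and the tool result, then ask the model again.
followup_messages = list(weather_messages) + [
    weather_aimessage,
    ToolMessage(content=str(weather_result), tool_call_id=tool_call["id"]),
]
final_aimessage = watsonx_chat_with_tools.invoke(followup_messages)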

3. The example application

The repository implements an example Python application that invokes a fictitious finance service and makes a real invocation using the open-source weather library python-weather.

The image below shows the simplified architecture:

The following code is an excerpt from the example application's source code for the weather service function call.

# Imports added for context; load_env, tools_load, and getweather are
# helper functions defined elsewhere in the example repository.
import asyncio
from langchain_ibm import ChatWatsonx

environment = load_env()
print(f"1. Load environment variables\n{environment}\n")

# Model parameters for the watsonx.ai inference call
parameters = {
    "decoding_method": "greedy",
    "max_new_tokens": 400,
    "min_new_tokens": 1,
    "temperature": 1.0
}
print(f"2. Prepare model parameters\n{parameters}\n")

print("3. Create a ChatWatsonx instance\n")
watsonx_chat = ChatWatsonx(
    model_id=environment['model_id'],
    url=environment['url'],
    project_id=environment['project_id'],
    apikey=environment['apikey'],
    params=parameters
)

print("4. Bind tools to chat\n")
watsonx_chat_with_tools = watsonx_chat.bind_tools(tools_load())

# Weather example
print("5. Run the weather example\n")
weather_system_prompt = """You are a weather expert. If the question is not about the weather, say: I don't know."""
weather_question = "Which city is hotter today: LA or NY?"
weather_messages = [
    ("system", weather_system_prompt),
    ("human", weather_question)
]
print(f"- Weather_messages:\n{weather_messages}\n")
weather_aimessage = watsonx_chat_with_tools.invoke(input=weather_messages)

print(f"- Weather_aimessage:\n{weather_aimessage}\n")
print(f"- Weather_tools:\n{weather_aimessage.tool_calls}\n")
print(f"- Invoke real weather endpoint:\n{asyncio.run(getweather(weather_aimessage.tool_calls[0]['args']))}\n")

4. Set up and run the example

The following steps describe how to run the example on your local machine.

Step 1: Clone the repository to your local machine

git clone https://github.com/thomassuedbroecker/function_calling_langchain_watsonx_example.git

Step 2: Generate a virtual Python environment

cd code
python3 -m venv --upgrade-deps venv
source venv/bin/activate

Step 3: Install the needed libraries

python3 -m pip install -qU langchain-ibm
python3 -m pip install python-weather

Step 4: Generate a .env file for the needed environment variables

cat env_example_template > .env

Insert the values for the two environment variables:

  • WATSONX_PROJECT_ID=YOUR_WATSONX_PROJECT_ID
  • IBMCLOUD_APIKEY=YOUR_KEY

The content of the environment file:

export IBMCLOUD_APIKEY=YOUR_KEY
export IBMCLOUD_URL="https://iam.cloud.ibm.com/identity/token"

# Watsonx
export WATSONX_URL="https://eu-de.ml.cloud.ibm.com"
export WATSONX_VERSION=2023-05-29
export WATSONX_PROJECT_ID=YOUR_PROJECT_ID

export WATSONX_MIN_NEW_TOKENS=1
export WATSONX_MAX_NEW_TOKENS=300
export WATSONX_LLM_NAME=mistralai/mixtral-8x7b-instruct-v01
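
The application reads these variables in its load_env helper; here is a hypothetical sketch of such a helper, matching the dictionary keys printed in the output below (the function body is my assumption, not the repository code):

import os

def load_env() -> dict:
    # Maps the exported shell variables to the keys used by the application.
    return {
        "project_id": os.environ["WATSONX_PROJECT_ID"],
        "url": os.environ["WATSONX_URL"],
        "model_id": os.environ["WATSONX_LLM_NAME"],
        "apikey": os.environ["IBMCLOUD_APIKEY"],
    }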

Step 5: Run the example

bash example_function_invocation.sh
  • Output of the invocation:
##########################
# 0. Load environments
##########################
# 1. Invoke application
1. Load environment
{'project_id': 'YOUR_PROJECT_ID', 'url': 'https://eu-de.ml.cloud.ibm.com', 'model_id': 'mistralai/mixtral-8x7b-instruct-v01', 'apikey': 'YOUR_APIKEY'}

2. Prepare model parameters
{'decoding_method': 'greedy', 'max_new_tokens': 400, 'min_new_tokens': 1, 'temperature': 1.0}

3. Create a ChatWatsonx instance

4. Bind tools to chat

5. Run the weather example

- Weather_messages:
[('system', "You are a weather expert. If the question is not about the weather, say: I don't know."), ('human', 'Which city is hotter today: LA or NY?')]

- Weather_aimessage:
content='' additional_kwargs={'tool_calls': {'type': 'function', 'function': {'name': 'weather_service', 'arguments': {'city': 'LA, NY'}}}} response_metadata={'token_usage': {'generated_token_count': 65, 'input_token_count': 723}, 'model_name': 'mistralai/mixtral-8x7b-instruct-v01', 'system_fingerprint': '', 'finish_reason': 'stop_sequence'} id='run-6893b0a4-85cb-44fd-8500-fd44c35def5e-0' tool_calls=[{'name': 'weather_service', 'args': {'city': 'LA, NY'}, 'id': '1723450740.908', 'type': 'tool_call'}] usage_metadata={'input_tokens': 723, 'output_tokens': 65, 'total_tokens': 788}

- Weather_tools:
[{'name': 'weather_service', 'args': {'city': 'LA, NY'}, 'id': '1723450740.908', 'type': 'tool_call'}]

- Invoke real weather endpoint:
[{'city': 'LA', 'temperature': '11 celsius'}, {'city': ' NY', 'temperature': '13 celsius'}]


6. Run the finance example

- Finance_messages:
[('system', 'You are a finance expert tasked with analyzing the questions and selecting the most relevant title from a specified table. Find the finance topic and finance category that best match the content of the sentence.\n        **Dictionary:**\n        {categories}\n        **Instructions:**\n        - Determine the correct table from the dictionary.\n        - Use this table to find the finance topic and finance category values that are most relevant to the finance sentence\n        - Ensure that the values retrieved are the best match to the content of the sentence.\n        **Conclusion:**\n        Provide the result in the following format, only return following information not add any other word or sentence in response, give answer only in JSON Object format, only return answer with the following format do not use different format:\n        {{"category": "found_category", "id": category_id}}\n        '), ('human', 'What percentage of total Debit Card and Credit Card expenditures were made in the Airlines and Accommodation sectors in 2023?')]

- Finance_aimessage:
content='' additional_kwargs={'tool_calls': {'type': 'function', 'function': {'name': 'finance_service', 'arguments': {'startdate': '01-01-2023', 'enddate': '31-12-2023'}}}} response_metadata={'token_usage': {'generated_token_count': 393, 'input_token_count': 902}, 'model_name': 'mistralai/mixtral-8x7b-instruct-v01', 'system_fingerprint': '', 'finish_reason': 'stop_sequence'} id='run-f5865bca-48b5-49ba-b285-adb15235d79d-0' tool_calls=[{'name': 'finance_service', 'args': {'startdate': '01-01-2023', 'enddate': '31-12-2023'}, 'id': '1723450746.65', 'type': 'tool_call'}] usage_metadata={'input_tokens': 902, 'output_tokens': 393, 'total_tokens': 1295}

- Finance_tools:
[{'name': 'finance_service', 'args': {'startdate': '01-01-2023', 'enddate': '31-12-2023'}, 'id': '1723450746.65', 'type': 'tool_call'}]

- Invoke example finance endpoint:
 Your finance request is from 01-01-2023 to 31-12-2023

5. Additional resources

6. Summary

From my point of view, this is a good introduction to getting familiar with function calling using LangChain and watsonx. At the moment (August 12, 2024) only the mistralai/mixtral-8x7b-instruct-v01 model is supported; let’s see how “function calling” with models in watsonx will be realized in the future with LangChain and other frameworks.


I hope this was useful to you, and let’s see what’s next!

Greetings,

Thomas

#llm, #langchain, #ai, #opensource, #ibm, #watsonx, #functioncalling, #mistralai, #functioncall
