In my last blog post on example weather agents using models running in watsonx.ai, I used LangChain to explore how the integration with watsonx.ai works.
The Bee Agent Framework is part of the Bee project on GitHub: “The developer stack for building and deploying production-grade agent applications.” The image below is a screenshot of the project taken on 2024-10-23 and shows its main repositories.
Bee Agent Framework: “The Bee Agent Framework makes it easy to build scalable agent-based workflows with your model of choice. The framework has been designed to perform robustly with IBM Granite and Llama 3.x models, and we’re actively working on optimizing its performance with other popular LLMs.”
When I started to reproduce the example, I ran into the situation where the Bee Agent Framework had been updated and changed, and the code didn’t work as expected.
This blog post is a guide to running the weather agent example with Bee Agent Framework v0.0.33 on a local macOS machine.
The execution output of this example illustrates how the agent retrieves current weather data for Las Vegas. The following GIF shows an execution of the weather agent example.
The GIF of an example execution in the introduction shows the initial system prompt, provided by the Bee Agent Framework, that realizes the agent. Here is an extract I formatted into plain text. The prompt is structured into the following sections: Available functions, Communication structure, Examples, Instructions, Your capabilities, Notes, and Role. You can see that the tool/function definition for OpenMeteo was inserted by the Bee Agent Framework.
# Available functions
You can only use the following functions. Always use all required parameters.
Function Name: OpenMeteo
Description: Retrieve current, past, or future weather forecasts for a location.
Parameters: {"type":"object","properties":{"location":{"anyOf":[{"type":"object","properties":{"name":{"type":"string"},"country":{"type":"string"},"language":{"type":"string","default":"English"}},"required":["name"],"additionalProperties":false},{"type":"object","properties":{"latitude":{"type":"number"},"longitude":{"type":"number"}},"required":["latitude","longitude"],"additionalProperties":false}]},"start_date":{"type":"string","format":"date","description":"Start date for the weather forecast in the format YYYY-MM-DD (UTC)"},"end_date":{"type":"string","format":"date","description":"End date for the weather forecast in the format YYYY-MM-DD (UTC)"},"temperature_unit":{"type":"string","enum":["celsius","fahrenheit"],"default":"celsius"}},"required":["location","start_date"],"additionalProperties":false}
# Communication structure
You communicate only in instruction lines. The format is: "Instruction: expected output". You must only use these instruction lines and must not enter empty lines or anything else between instruction lines.
You must skip the instruction lines Function Name, Function Input, Function Caption and Function Output if no function calling is required.
Message: User's message. You never use this instruction line.
Thought: A single-line step-by-step plan of how to answer the user's message. You can use the available functions defined above. This instruction line must be immediately followed by Function Name if one of the available functions defined above needs to be called, or by Final Answer. Do not provide the answer here.
Function Name: Name of the function. This instruction line must be immediately followed by Function Input.
Function Input: Function parameters. Empty object is a valid parameter.
Function Caption: A single-line description of the function calling for the user.
Function Output: Output of the function in JSON format.
Thought: Continue your thinking process.
Final Answer: Answer the user or ask for more information or clarification. It must always be preceded by Thought.
# Examples
Message: Can you translate "How are you" into French?
Thought: The user wants to translate a text into French. I can do that.
Final Answer: Comment vas-tu?
# Instructions
User can only see the Final Answer, all answers must be provided there.
You must always use the communication structure and instructions defined above. Do not forget that Thought must be immediately followed by either Function Name or Final Answer.
Functions must be used to retrieve factual or historical information to answer the message.
If the user suggests using a function that is not available, answer that the function is not available. You can suggest alternatives if appropriate.
When the message is unclear or you need more information from the user, ask in Final Answer.
# Your capabilities
Prefer to use these capabilities over functions.
- You understand these languages: English, Spanish, French.
- You can translate and summarize, even long documents.
# Notes
- If you don't know the answer, say that you don't know.
- The current time and date in ISO format can be found in the last message.
- When answering the user, use friendly formats for time and date.
- Use markdown syntax for formatting code snippets, links, JSON, tables, images, files.
- Sometimes, things don't go as planned. Functions may not provide useful information on the first few tries. You should always try a few different approaches before declaring the problem unsolvable.
- When the function doesn't give you what you were asking for, you must either use another function or a different function input.
- When using search engines, you try different formulations of the query, possibly even in a different language.
- You cannot do complex calculations, computations, or data manipulations without using functions.
# Role
You are a helpful assistant.
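To make the OpenMeteo parameter schema in the prompt above more concrete, here is a small TypeScript sketch. This is my own reconstruction for illustration, not code from the Bee Agent Framework (the framework validates inputs via the JSON Schema itself); the type names and the validator function are hypothetical:

```typescript
// Reconstruction of the OpenMeteo parameter schema from the prompt above.
// A location is given either by name or by coordinates.
type LocationByName = { name: string; country?: string; language?: string };
type LocationByCoords = { latitude: number; longitude: number };

interface OpenMeteoInput {
  location: LocationByName | LocationByCoords;
  start_date: string; // YYYY-MM-DD (UTC), required
  end_date?: string; // optional
  temperature_unit?: "celsius" | "fahrenheit"; // default: celsius
}

// Minimal runtime check mirroring the schema's "required" fields.
function isValidOpenMeteoInput(input: Partial<OpenMeteoInput>): boolean {
  const loc = input.location as any;
  const hasLocation =
    loc != null &&
    (typeof loc.name === "string" ||
      (typeof loc.latitude === "number" && typeof loc.longitude === "number"));
  const hasStartDate =
    typeof input.start_date === "string" &&
    /^\d{4}-\d{2}-\d{2}$/.test(input.start_date);
  return hasLocation && hasStartDate;
}

// The kind of input the agent produces for "How is the current weather in Las Vegas?"
const lasVegas: OpenMeteoInput = {
  location: { name: "Las Vegas", country: "US" },
  start_date: "2024-10-23",
};
console.log(isValidOpenMeteoInput(lasVegas)); // → true
console.log(isValidOpenMeteoInput({ start_date: "2024-10-23" })); // → false
```

Note how the `anyOf` in the schema translates to a union type: the model may pass either a place name or latitude/longitude coordinates, and `start_date` is always required.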
Use the following command to install the needed dependencies. Note: this is also documented in the README.md of the GitHub project you just created.
npm ci
Use the following command to execute the agent.
npm run start src/example-wx-agent.ts
3.2 Output
Here is the simplified output of the agent's execution. The executed steps are listed in a row below; keep the diagram above in mind.
input -> start -> thought -> select tool -> tool input -> tool caption -> tool_output -> thought -> final answer
Start Large Language Model (LLM) Input
Prompt:
How is the current weather in Las Vegas?
Agent 🤖 : starting new iteration
Thought
Agent (thought) 🤖 : The user wants to know the current weather in Las Vegas. I can use the OpenMeteo function to retrieve the current weather forecast.
Agent (thought) 🤖 : I have retrieved the current weather forecast for Las Vegas.
Final answer
Agent (final_answer) 🤖 : As of 2024-10-23 08:15, the current temperature in Las Vegas is 17.5°C (63.5°F) with an apparent temperature of 13.7°C (56.7°F). There is no rain currently.
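The trace above follows the "Instruction: expected output" communication structure defined in the system prompt, so it can be split into structured steps with a few lines of code. The following TypeScript sketch is my own illustration, not framework code, and the sample transcript is abbreviated from the run above:

```typescript
// The instruction line names defined in the communication structure.
const INSTRUCTIONS = [
  "Message",
  "Thought",
  "Function Name",
  "Function Input",
  "Function Caption",
  "Function Output",
  "Final Answer",
] as const;

type Instruction = (typeof INSTRUCTIONS)[number];
interface Step { instruction: Instruction; content: string }

// Split raw agent output into (instruction, content) pairs,
// cutting each line at its first colon.
function parseTranscript(raw: string): Step[] {
  const steps: Step[] = [];
  for (const line of raw.split("\n")) {
    const idx = line.indexOf(":");
    if (idx === -1) continue;
    const name = line.slice(0, idx).trim();
    if ((INSTRUCTIONS as readonly string[]).includes(name)) {
      steps.push({
        instruction: name as Instruction,
        content: line.slice(idx + 1).trim(),
      });
    }
  }
  return steps;
}

// Abbreviated transcript from the execution above.
const transcript = [
  "Thought: The user wants to know the current weather in Las Vegas. I can use the OpenMeteo function.",
  "Function Name: OpenMeteo",
  'Function Input: {"location":{"name":"Las Vegas"},"start_date":"2024-10-23"}',
  "Final Answer: The current temperature in Las Vegas is 17.5°C.",
].join("\n");

console.log(parseTranscript(transcript).map((s) => s.instruction));
// → [ 'Thought', 'Function Name', 'Function Input', 'Final Answer' ]
```

This mirrors the step sequence from the diagram: thought, tool selection, tool input, and final answer each arrive as one instruction line.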
This is an excellent small, transparent example of how the agent works and how to use a model running in watsonx.ai.
For more advanced information about how to use the integration, I recommend visiting the GitHub repository watsonx Platform Demos provided by Niklas Heidloff and the Bee project on GitHub.
I hope this was useful to you. Let's see what's next!