This blog post explores the Bee Agent Framework integration with watsonx.ai, detailing the setup process for a weather agent example on macOS. It discusses the necessary installations, environment variable configuration, and code updates needed due to framework changes. The execution output illustrates how the agent retrieves current weather data for Las Vegas.
IBM Granite for Code models are available on Hugging Face and ready to be used locally with “watsonx Code Assistant”
IBM Granite for Code models on Hugging Face let developers run the models locally and integrate them with VS Code. They support 116 programming languages and are released under the Apache 2.0 license.
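As a minimal sketch of what running one of these models locally can look like, the snippet below loads a Granite Code checkpoint with the Hugging Face transformers library; the model ID and prompt are assumptions, so check the ibm-granite organization on Hugging Face for the exact checkpoint names.

```python
# Hypothetical sketch: load a Granite Code model from Hugging Face with transformers.
# The model ID is an assumption; see the ibm-granite org for the exact checkpoint names.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-8b-code-instruct"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```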
Land of Confusion: Using Classifications and Metrics for a Nonspecific Ground Truth
This blog post examines the Confusion Matrix as a metric for evaluating the performance of large language models (LLMs) in classification tasks, especially in legal document analysis. It explains how key classification metrics such as Accuracy, Precision, Recall, and F1 score are calculated, emphasizing the challenges of working with a broadly defined Ground Truth.
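To make the metric definitions concrete, here is a minimal sketch of deriving Accuracy, Precision, Recall, and F1 from a confusion matrix with scikit-learn; the labels are illustrative stand-ins, not the post's legal dataset.

```python
# Illustrative sketch: compute classification metrics from a confusion matrix.
# The labels are made up for the example; "contract" is treated as the positive class.
from sklearn.metrics import confusion_matrix

y_true = ["contract", "contract", "ruling", "ruling", "contract"]
y_pred = ["contract", "ruling", "ruling", "ruling", "contract"]

# With labels ordered [negative, positive], ravel() yields tn, fp, fn, tp.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=["ruling", "contract"]).ravel()

accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(f"accuracy={accuracy:.2f} precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
```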
Integrating langchain_ibm with watsonx and LangChain for function calls: Example and Tutorial
The blog post demonstrates using the ChatWatsonx class of langchain_ibm for "function calls" with LangChain and IBM watsonx™ AI. It provides an example of a chat function call that retrieves weather information for various cities, along with instructions to set up and run the example and pointers to additional resources.
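A minimal sketch of such a tool call is shown below, assuming a tool-calling-capable model on watsonx.ai; the model ID, endpoint URL, and credentials are placeholders to replace with your own project values.

```python
# Hypothetical sketch of a weather function call with langchain_ibm's ChatWatsonx.
# The model_id, url, and credentials are placeholders, not values from the post.
from langchain_core.tools import tool
from langchain_ibm import ChatWatsonx

@tool
def get_weather(city: str) -> str:
    """Return the current weather for a city (stubbed for this example)."""
    return f"Sunny, 25 C in {city}"

chat = ChatWatsonx(
    model_id="mistralai/mistral-large",       # assumed tool-calling-capable model
    url="https://us-south.ml.cloud.ibm.com",  # region endpoint
    project_id="YOUR_PROJECT_ID",
    apikey="YOUR_API_KEY",
)

chat_with_tools = chat.bind_tools([get_weather])
response = chat_with_tools.invoke("What is the weather in Las Vegas?")
print(response.tool_calls)  # e.g. [{'name': 'get_weather', 'args': {'city': 'Las Vegas'}, ...}]
```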
InstructLab and Taxonomy tree: LLM Foundation Model Fine-tuning Guide | Musician Example
The blog post introduces InstructLab, a project by IBM and Red Hat, and outlines the fine-tuning process for the model models/merlinite-7b-lab-Q4_K_M.gguf. This involves data preparation, model training, testing, and conversion, and finally serving the model to verify its accuracy, using a personal musician example.
Fine-tune LLM foundation models with InstructLab, an open-source project introduced by IBM and Red Hat
This blog post provides a step-by-step guide to setting up the InstructLab CLI on an Apple laptop with an M3 chip, including an overview of InstructLab and its benefits. It also lists supported models, gives detailed setup instructions, points to a Red Hat YouTube demonstration, and highlights the project's potential impact.
Using CUDA and llama.cpp to Run a Phi-3-Small-128K-Instruct Model on IBM Cloud VSI with GPUs
The popularity of llama.cpp and its optimized GGUF model format is growing. This post outlines the steps to run "Phi-3-Small-128K-Instruct" in GGUF format with llama.cpp on an IBM Cloud VSI with GPUs and Ubuntu 22.04, covering VSI setup, the CUDA toolkit, compilation, the Python environment, model usage, and additional resources.
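Once the model is downloaded, a quick way to exercise it from Python is the llama-cpp-python bindings; the sketch below assumes the package was built with CUDA support and uses a placeholder GGUF filename.

```python
# Minimal sketch using the llama-cpp-python bindings; assumes a CUDA-enabled build.
from llama_cpp import Llama

llm = Llama(
    model_path="./Phi-3-small-128k-instruct.Q4_K_M.gguf",  # placeholder filename
    n_gpu_layers=-1,  # offload all layers to the GPU
    n_ctx=4096,       # context window for this session
)

out = llm("Explain the GGUF format in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```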
AI Prompt Engineering: Streamlining Automation for Large Language Models
This blog post focuses on the importance of prompt engineering for AI models, particularly large language models (LLMs), in reducing manual effort and automating validation processes. It emphasizes the need for automation to handle growing test data and variable combinations, discusses using the watsonx.ai Prompt Lab for manual work and initial automation, and highlights the value of integrating automation with version control for consistency and reproducibility.
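As a minimal sketch of moving from the Prompt Lab UI to a scripted, repeatable run, the snippet below batches prompts through the ibm-watsonx-ai SDK; the model ID, credentials, and test inputs are placeholders.

```python
# Hypothetical sketch: scripted prompt runs against watsonx.ai instead of the Prompt Lab UI.
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference

model = ModelInference(
    model_id="ibm/granite-13b-instruct-v2",  # assumed available model
    credentials=Credentials(
        url="https://us-south.ml.cloud.ibm.com",
        api_key="YOUR_API_KEY",
    ),
    project_id="YOUR_PROJECT_ID",
    params={"max_new_tokens": 100},
)

# Running a fixed prompt template over versioned test inputs keeps results reproducible.
test_inputs = ["What is an LLM?", "Define prompt engineering."]
for text in test_inputs:
    print(model.generate_text(prompt=f"Answer briefly: {text}"))
```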
Fine-tune a large language model (LLM) for multi-turn conversations and run it on a Text Generation Inference (TGI) server
This blog post delves into fine-tuning large language models (LLMs) for multi-turn conversations and deploying them on Text Generation Inference (TGI) servers. It covers use cases, data formats, training data preparation, server setup, and evaluation frameworks, guiding readers through the full fine-tuning and deployment process.
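One of the data-format questions such fine-tuning raises is how to flatten a multi-turn dialogue into a single training sample; the sketch below shows one common approach, with role tags that are illustrative assumptions rather than the post's exact template.

```python
# Illustrative sketch: flatten a multi-turn conversation into one training string.
# The <|role|> tags are assumptions; match whatever template your base model expects.
conversation = [
    {"role": "user", "content": "Book a table for two."},
    {"role": "assistant", "content": "For which day and time?"},
    {"role": "user", "content": "Friday at 7 pm."},
    {"role": "assistant", "content": "Done: table for two, Friday at 7 pm."},
]

def to_training_text(turns):
    """Concatenate turns so the model learns to continue the dialogue."""
    parts = [f"<|{t['role']}|>\n{t['content']}" for t in turns]
    return "\n".join(parts) + "\n<|endoftext|>"

print(to_training_text(conversation))
```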
Getting started with Text Generation Inference (TGI) using a container to serve your LLM model
This blog post outlines a bash automation for setting up and testing Text Generation Inference (TGI) using a container. It provides instructions for creating a Python test client, starting the TGI server, and troubleshooting common issues, and it emphasizes the benefits of containers while referencing Hugging Face and NVIDIA technologies.
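For reference, a running TGI container exposes a REST /generate endpoint, so a test client can be as small as the sketch below; it assumes the container's port is mapped to localhost:8080.

```python
# Minimal TGI test client; assumes the container port is mapped to localhost:8080.
import requests

resp = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": "What is Deep Learning?",
        "parameters": {"max_new_tokens": 50},
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["generated_text"])
```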
