The post outlines using the Evaluation Framework in the watsonx Orchestrate ADK to verify AI agent behavior through a practical example: Galaxium Travels, a fictional booking system. It details setting up the environment, defining user stories, generating synthetic test cases, and running evaluations, all of which are crucial for ensuring AI reliability and transparency.
Integrating watsonx Orchestrate Agent Chat in Web Apps
This blog post demonstrates how to use the web channel functionality in watsonx Orchestrate to embed conversational AI agents into custom web applications. It guides users through setting up a remote environment, generating the source code, and running a web server to invoke the chat features, and emphasizes the ease of use and customization options.
Supercharge Your Support: Example Build & Orchestrate AI Agents with watsonx.ai and watsonx Orchestrate
This post explains how to create, test, and integrate AI support agents using IBM's watsonx.ai and watsonx Orchestrate. It walks through an example that integrates a specialist support agent for DB2 into a multi-agent orchestration, and highlights best practices for building efficient agent workflows and accurate responses while anticipating potential complexities.
Deploying an InstructLab Fine-Tuned Model on IBM watsonx Inference: A SaaS Guide
This blog post explains how to deploy an InstructLab fine-tuned model to IBM watsonx on IBM Cloud. It highlights the advantages of using this platform, such as avoiding infrastructure management and ensuring enterprise security, and walks through detailed steps for configuring, deploying, and accessing the model from IBM watsonx.
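Once deployed, the model can be invoked like any other watsonx.ai inference endpoint. The snippet below is a minimal sketch, not taken from the post: the region host, version date, deployment ID, and response shape are assumptions, and the exact URL should be copied from the deployment's API reference in the watsonx UI.

```python
import os
import requests

# IAM bearer token for your IBM Cloud API key (placeholder environment variable).
token = os.environ["WATSONX_IAM_TOKEN"]

# Host, version date, and deployment ID are assumptions; copy the real values
# from the deployment details page in watsonx.
url = (
    "https://us-south.ml.cloud.ibm.com/ml/v1/deployments/"
    "<DEPLOYMENT_ID>/text/generation?version=2023-05-29"
)

resp = requests.post(
    url,
    headers={"Authorization": f"Bearer {token}"},
    json={
        "input": "What is InstructLab?",
        "parameters": {"max_new_tokens": 200},
    },
    timeout=60,
)
resp.raise_for_status()
# Response shape assumed to match the standard text generation API.
print(resp.json()["results"][0]["generated_text"])
```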
Create a Full-Screen Web-Chat with watsonx Assistant, IBM Cloud Code Engine and watsonx.ai
The blog post shows how to integrate watsonx Assistant and watsonx.ai to create a full-screen user interface for interacting with a large language model (LLM) with minimal coding. It outlines the motivation, architecture, setup process, and the specific actions needed to deploy the integration on IBM Cloud Code Engine.
Using CUDA and Llama-cpp to Run a Phi-3-Small-128K-Instruct Model on IBM Cloud VSI with GPUs
The popularity of llama.cpp and its optimized GGUF model format is growing. This post outlines the steps to run "Phi-3-Small-128K-Instruct" in GGUF format with llama.cpp on an IBM Cloud VSI with GPUs and Ubuntu 22.04. It covers the VSI setup, the CUDA toolkit, compilation, the Python environment, model usage, and additional resources.
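The post drives llama.cpp directly; as a complementary sketch (an assumption, not the post's code), the same GGUF file can also be used from the Python environment via the llama-cpp-python bindings, provided they were built with CUDA support. The file name below is a placeholder for whichever quantization was downloaded.

```python
# Assumes llama-cpp-python was installed with GPU support, e.g.:
#   CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python
# (the CMake flag name varies between llama.cpp releases).
from llama_cpp import Llama

llm = Llama(
    model_path="./Phi-3-small-128k-instruct-Q4_K_M.gguf",  # placeholder file name
    n_gpu_layers=-1,  # offload all layers to the GPU
    n_ctx=8192,       # context window; raise it if the long context is needed
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain the GGUF format in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```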
Create an IBM Cloud IAM access token in your Spring Boot Java application
This blog post provides an example of obtaining an IBM Cloud access token using the IBM Cloud IAM REST API and Spring Boot. It includes a Java RestClient implementation for getting the access token and a REST endpoint invocation in a sample application.
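The post's implementation is in Java with Spring's RestClient; purely to illustrate the IAM request it wraps, here is a minimal sketch of the same call in Python. The endpoint and grant type are the documented IAM values; the API key is a placeholder read from the environment.

```python
import os
import requests

# Exchange an IBM Cloud API key for a short-lived IAM access token.
resp = requests.post(
    "https://iam.cloud.ibm.com/identity/token",
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    data={
        "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
        "apikey": os.environ["IBM_CLOUD_API_KEY"],  # placeholder; use a secret store in real code
    },
    timeout=30,
)
resp.raise_for_status()
token = resp.json()["access_token"]  # response also includes token_type and expiration
```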
CheatSheet: How to add users to your watsonx project?
This cheat sheet provides a two-step guide for adding users to your watsonx project in IBM Cloud.
CheatSheet: Configure the Block Storage usage in Virtual Server Instances on IBM Cloud
This post introduces the use of Block Storage with Virtual Server Instances on IBM Cloud, particularly in relation to GPUs. It covers mounting and configuring block storage, including creating, formatting, and mounting the disk. It also provides steps for mounting the storage permanently and for attaching existing block storage to a new virtual server instance.
How to define a custom Open API specification for a Watson Machine Learning deployment to integrate it into watsonx Assistant
This blog post explains how to define a custom OpenAPI specification for a Watson Machine Learning deployment on IBM Cloud so that it can be integrated into watsonx Assistant. Watson Machine Learning deployments make it easy for data scientists to build AI prototypes for integration into applications: they can use the Jupyter Notebooks and Python they are used to, without having to write containers or set up runtimes. Once deployed, developers can consume the implemented AI functionality via a REST API.
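The custom OpenAPI specification essentially documents the deployment's scoring endpoint so that watsonx Assistant can call it as an extension. As a rough sketch of the request and response shape such a specification has to describe (the host, version date, deployment ID, and field names below are assumptions, not taken from the post), a direct call to a Watson Machine Learning online deployment looks roughly like this in Python:

```python
import os
import requests

# IAM bearer token for your IBM Cloud API key (placeholder environment variable).
token = os.environ["WATSONX_IAM_TOKEN"]

# Region host, version date, and deployment ID are placeholders; the real values
# are exactly what the custom OpenAPI specification has to encode for watsonx Assistant.
url = (
    "https://us-south.ml.cloud.ibm.com/ml/v4/deployments/"
    "<DEPLOYMENT_ID>/predictions?version=2021-05-01"
)

# Field names and values are hypothetical and depend on the deployed prototype.
payload = {"input_data": [{"fields": ["question"], "values": [["What is watsonx?"]]}]}

resp = requests.post(
    url,
    headers={"Authorization": f"Bearer {token}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["predictions"])
```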
