AI Prompt Engineering: Streamlining Automation for Large Language Models

This blog post focuses on the importance of prompt engineering for AI models, particularly Large Language Models (LLMs), as a way to reduce manual effort and automate validation processes. It emphasizes the need for automation to handle growing amounts of test data and variable combinations, and discusses the use of the Watsonx.ai Prompt Lab for manual experimentation and initial automation. The post also highlights the significance of integrating automation with version control for consistency and reproducibility.
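To make the idea of automated prompt validation more concrete, here is a minimal sketch that loops over combinations of prompt variables and checks each model response against an expected keyword. The `send_prompt` function is a hypothetical placeholder for the actual inference call (for example against a watsonx.ai deployment) and is not taken from the post.

```python
# Minimal sketch of automated prompt validation over variable combinations.
# `send_prompt` is a hypothetical stand-in for the real model call.
from itertools import product

PROMPT_TEMPLATE = "Classify the sentiment of the following {language} {text_type}: {text}"

variables = {
    "language": ["English", "German"],
    "text_type": ["review", "tweet"],
}

test_cases = [
    {"text": "I really love this product!", "expected": "positive"},
    {"text": "This was a complete waste of money.", "expected": "negative"},
]

def send_prompt(prompt: str) -> str:
    """Placeholder: replace with the real inference request (e.g. a watsonx.ai call)."""
    return "positive"

def run_validation():
    results = []
    for language, text_type in product(*variables.values()):
        for case in test_cases:
            prompt = PROMPT_TEMPLATE.format(
                language=language, text_type=text_type, text=case["text"]
            )
            response = send_prompt(prompt)
            results.append({
                "prompt": prompt,
                "response": response,
                "passed": case["expected"] in response.lower(),
            })
    return results

if __name__ == "__main__":
    for result in run_validation():
        print(result["passed"], "-", result["prompt"])
```

The prompt template, test cases, and results can then be committed alongside each other, which mirrors the post's point about combining automation with version control for reproducibility.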

Fine-tune a large language model (LLM) for multi-turn conversations and run it on a Text Generation Inference (TGI) server

This blog post delves into the initial process of fine-tuning large language models (LLMs) for multi-turn conversations and deploying them on a Text Generation Inference (TGI) server. It covers topics such as use cases, data formats, training data preparation, server setup, and evaluation frameworks. The goal is to guide readers through the end-to-end process of fine-tuning and deploying LLMs.
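As a rough illustration of the training data preparation step, the sketch below flattens multi-turn conversations into prompt/response pairs and writes them as JSONL. The role markers and file layout are illustrative assumptions; the actual chat template depends on the base model and is not taken from the post.

```python
# Minimal sketch: flatten multi-turn conversations into prompt/response pairs
# for supervised fine-tuning. Role markers below are illustrative only; real
# chat templates depend on the base model's tokenizer.
import json

conversations = [
    {
        "messages": [
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Go to Settings and choose 'Reset password'."},
            {"role": "user", "content": "And if I no longer have access to my email?"},
            {"role": "assistant", "content": "Contact support to verify your identity."},
        ]
    }
]

def to_training_example(messages):
    """Use all turns except the final assistant reply as the prompt; the reply is the target."""
    prompt_parts = [f"<|{msg['role']}|> {msg['content']}" for msg in messages[:-1]]
    prompt_parts.append("<|assistant|>")
    return {"prompt": "\n".join(prompt_parts), "response": messages[-1]["content"]}

with open("train.jsonl", "w", encoding="utf-8") as f:
    for conv in conversations:
        f.write(json.dumps(to_training_example(conv["messages"]), ensure_ascii=False) + "\n")
```

Once the fine-tuned model is served with TGI, the same flattening logic can be reused at inference time when sending requests to the server's generation endpoint.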

Some fun with “Watson Text to Speech” and voice model customization

My last blog post was about Watson Speech to Text language model customization; this blog post is about IBM Cloud Watson Text to Speech (TTS) custom voice model configuration, because now it's time to have some fun with the Watson TTS service. I created a playful customization of the service so that the German pronunciation sounds a little bit like the Palatinate dialect. You can hear the differences in two WAV files I created with a custom Watson Text to Speech voice model.
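For readers who want to try a similar customization, here is a minimal sketch assuming the ibm-watson Python SDK. The API key, service URL, voice name, and the "sounds-like" spellings are placeholders and illustrative guesses at a Palatinate-flavoured pronunciation, not the values used for the recordings in the post, and method names may differ slightly between SDK versions.

```python
# Minimal sketch of a custom Watson Text to Speech voice model with the
# ibm-watson Python SDK. Credentials, voice name, and "sounds-like"
# translations are placeholders / illustrative assumptions.
from ibm_watson import TextToSpeechV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")
tts = TextToSpeechV1(authenticator=authenticator)
tts.set_service_url("YOUR_SERVICE_URL")

# Create a custom model for a German voice.
custom_model = tts.create_custom_model(
    name="palatinate-fun",
    language="de-DE",
    description="Playful Palatinate-flavoured pronunciations",
).get_result()
customization_id = custom_model["customization_id"]

# Add "sounds-like" translations that nudge the pronunciation toward dialect.
tts.add_word(customization_id, word="habe", translation="hab")
tts.add_word(customization_id, word="nicht", translation="net")

# Synthesize the same sentence once without and once with the custom model.
for cid, filename in [(None, "standard.wav"), (customization_id, "custom.wav")]:
    audio = tts.synthesize(
        "Ich habe das nicht gewusst.",
        voice="de-DE_BirgitV3Voice",
        accept="audio/wav",
        customization_id=cid,
    ).get_result().content
    with open(filename, "wb") as f:
        f.write(audio)
```

Comparing the two WAV files side by side makes it easy to hear how the custom translations shift the pronunciation.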
