Prompt engineering is the practice of crafting prompts to obtain better results from large language models (LLMs), such as OpenAI's GPT-4. This lab illustrates several prompt engineering tactics that can improve the quality of the results returned by the OpenAI Chat Completions API. The goal of this lab is to prepare you to apply prompt engineering to your own LLM use cases.
This lab is built around applying AI to content categorization: given a piece of content, the model assigns it a category from a list of possible categories.
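As a preview of the kind of request the lab works with, the following is a minimal sketch of zero-shot categorization with the Chat Completions API. It assumes the openai Python package (v1 or later), a gpt-4 model, and an illustrative category list and article; the prompt wording and names here are placeholders, not the lab's exact solution.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Hypothetical categories and content, used only for illustration.
categories = ["Business", "Technology", "Sports", "Entertainment"]
content = "The startup raised $50 million to expand its cloud analytics platform."

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a content categorization assistant. "
                f"Assign the user's content exactly one category from: {', '.join(categories)}. "
                "Respond with the category name only."
            ),
        },
        {"role": "user", "content": content},
    ],
    temperature=0,  # deterministic output is preferable for classification
)

print(response.choices[0].message.content)  # e.g. "Business"
```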
Learning objectives
Upon completion of this intermediate-level lab, you will be able to:
- Understand how to apply prompt engineering in practice
- Use the OpenAI Chat Completions API to perform content categorization
- Apply one-shot and few-shot learning to improve the quality of the results obtained from LLMs
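To illustrate the last objective, here is a hedged sketch of how few-shot learning can be layered onto the same categorization prompt: labeled examples are supplied as prior user/assistant message pairs before the content to classify. The example articles and labels below are invented for illustration and are not taken from the lab.

```python
from openai import OpenAI

client = OpenAI()

categories = ["Business", "Technology", "Sports", "Entertainment"]

# Hypothetical labeled examples; in few-shot prompting they are passed as
# prior user/assistant turns so the model can infer the expected behavior.
examples = [
    ("The quarterback threw for 300 yards in the season opener.", "Sports"),
    ("The chipmaker unveiled a new GPU architecture at its annual conference.", "Technology"),
]

messages = [
    {
        "role": "system",
        "content": (
            "Assign each piece of content exactly one category from: "
            f"{', '.join(categories)}. Respond with the category name only."
        ),
    },
]
for text, label in examples:
    messages.append({"role": "user", "content": text})
    messages.append({"role": "assistant", "content": label})

# The content to categorize comes last, after the worked examples.
messages.append(
    {"role": "user", "content": "The studio announced a sequel to its award-winning film."}
)

response = client.chat.completions.create(model="gpt-4", messages=messages, temperature=0)
print(response.choices[0].message.content)  # e.g. "Entertainment"
```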
Intended audience
- Software Developers
- Machine Learning Engineers
- Anyone interested in learning about applications of generative AI
Prerequisites
Familiarity with the following will ensure the most beneficial lab experience:
- Python
- OpenAI Chat Completions API basics
The following content can be used to fulfill the prerequisites:
Logan has been involved in software development and research since 2007 and has been working in the cloud since 2012. He is an AWS Certified DevOps Engineer - Professional, AWS Certified Solutions Architect - Professional, Microsoft Certified Azure Solutions Architect Expert, MCSE: Cloud Platform and Infrastructure, Google Cloud Certified Associate Cloud Engineer, Certified Kubernetes Security Specialist (CKS), Certified Kubernetes Administrator (CKA), Certified Kubernetes Application Developer (CKAD), and Certified OpenStack Administrator (COA). He earned his Ph.D. studying design automation and enjoys all things tech.