Course Overview
This course introduces generative artificial intelligence (AI) to software developers interested in using large language models (LLMs) without fine-tuning. It covers an overview of generative AI, how to plan a generative AI project, getting started with Amazon Bedrock, the foundations of prompt engineering, and the architecture patterns used to build generative AI applications with Amazon Bedrock and LangChain.
Who Should Attend
This course is intended for:
- Software developers interested in using LLMs without fine-tuning
Prerequisites
We recommend that attendees of this course have:
- Completed AWS Technical Essentials (AWSE)
- Intermediate-level proficiency in Python
Course Objectives
In this course, you will learn to:
- Describe generative AI and how it aligns with machine learning
- Explain the importance of generative AI and describe its potential risks and benefits
- Identify business value from generative AI use cases
- Discuss the technical foundations and key terminology for generative AI
- Explain the steps for planning a generative AI project
- Identify some of the risks and mitigations when using generative AI
- Understand how Amazon Bedrock works
- Familiarize yourself with basic concepts of Amazon Bedrock
- Recognize the benefits of Amazon Bedrock
- List typical use cases for Amazon Bedrock
- Describe the typical architecture associated with an Amazon Bedrock solution
- Understand the cost structure of Amazon Bedrock
- Implement a demonstration of Amazon Bedrock in the AWS Management Console
- Define prompt engineering and apply general best practices when interacting with foundation models (FMs)
- Identify the basic types of prompt techniques, including zero-shot and few-shot learning
- Apply advanced prompt techniques when necessary for your use case
- Identify which prompt techniques are best suited for specific models
- Identify potential prompt misuses
- Analyze potential bias in FM responses and design prompts that mitigate that bias
- Identify the components of a generative AI application and how to customize an FM
- Describe Amazon Bedrock foundation models, inference parameters, and key Amazon Bedrock APIs (see the short sketch after this list)
- Identify Amazon Web Services (AWS) offerings that help with monitoring, securing, and governing your Amazon Bedrock applications
- Describe how to integrate LangChain with LLMs, prompt templates, chains, chat models, text embedding models, document loaders, retrievers, and Agents for Amazon Bedrock
- Describe architecture patterns that you can implement with Amazon Bedrock for building generative AI applications
- Apply the concepts to build and test sample use cases that use the various Amazon Bedrock models, LangChain, and the Retrieval Augmented Generation (RAG) approach
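For orientation, the following is a minimal sketch of the kind of API interaction these objectives refer to: invoking an Amazon Bedrock foundation model through the AWS SDK for Python (boto3) and passing inference parameters. The region, model ID, and parameter values are illustrative assumptions; substitute whatever is enabled in your own account.

    import json
    import boto3

    # Assumed region and model ID -- use values available in your own account.
    bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

    # Request body for an Anthropic Claude model, with sample inference parameters.
    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,     # maximum length of the generated response
        "temperature": 0.5,    # sampling randomness
        "messages": [
            {"role": "user",
             "content": "Summarize the benefits of prompt engineering in two sentences."},
        ],
    })

    response = bedrock_runtime.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        body=body,
    )

    result = json.loads(response["body"].read())
    print(result["content"][0]["text"])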
Course Content
- Introduction to Generative AI – Art of the Possible
- Planning a Generative AI Project
- Getting Started with Amazon Bedrock
- Foundations of Prompt Engineering
- Amazon Bedrock Application Components
- Amazon Bedrock Foundation Models
- LangChain (see the sketch following this list)
- Architecture Patterns
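As a preview of the LangChain module, here is a minimal sketch that connects a prompt template to an Amazon Bedrock chat model and parses the text output. It assumes the langchain-aws and langchain-core packages are installed; class names, the model ID, and parameter values may differ depending on your LangChain version and account setup.

    from langchain_aws import ChatBedrock
    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import ChatPromptTemplate

    # Assumed model ID and inference parameters -- adjust to your account and use case.
    llm = ChatBedrock(
        model_id="anthropic.claude-3-haiku-20240307-v1:0",
        model_kwargs={"temperature": 0.2, "max_tokens": 256},
    )

    # Reusable prompt template; {product} is supplied when the chain is invoked.
    prompt = ChatPromptTemplate.from_template(
        "Write a one-paragraph product description for {product}."
    )

    # Compose template, model, and output parser into a chain (LangChain Expression Language).
    chain = prompt | llm | StrOutputParser()

    print(chain.invoke({"product": "a serverless photo-sharing app"}))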