{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"vscode": {
"languageId": "python"
}
},
"source": [
"# Getting started with the Bedrock SDK"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Hi! How can I help you today?'"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import boto3\n",
"\n",
"def generate_conversation(messages):\n",
"    bedrock_client = boto3.client(service_name='bedrock-runtime', region_name=\"us-west-2\")\n",
"    model_id = \"anthropic.claude-3-5-sonnet-20241022-v2:0\"\n",
"\n",
"    # Send the message.\n",
"    response = bedrock_client.converse(\n",
"        modelId=model_id,\n",
"        messages=messages\n",
"    )\n",
"\n",
"    return response[\"output\"][\"message\"][\"content\"][0][\"text\"]\n",
"\n",
"messages = [{\n",
"    \"role\": \"user\",\n",
"    \"content\": [{\"text\": \"hello world\"}]\n",
"}]\n",
"\n",
"generate_conversation(messages)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Dr Pepper's exact formula is a trade secret, but it's known to contain 23 different flavors. While the complete list isn't public, some commonly believed ingredients include:\\n\\n1. Cherry\\n2. Cola\\n3. Vanilla\\n4. Amaretto/Almond\\n5. Blackberry\\n6. Caramel\\n7. Pepper\\n8. Root Beer\\n9. Prune juice (though the company has denied this)\\n10. Licorice\\n\\nThe actual combination and proportions of flavors remain confidential, and there may be other flavors not commonly known or speculated about. The unique blend of these flavors creates Dr Pepper's distinctive taste that sets it apart from other soft drinks.\""
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [{\n",
"    \"role\": \"user\",\n",
"    \"content\": [{\"text\": \"What flavors are used in Dr. Pepper?\"}]\n",
"}]\n",
"\n",
"generate_conversation(messages)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"scrolled": false
},
"outputs": [
{
"data": {
"text/plain": [
"{'ResponseMetadata': {'RequestId': '67f892c6-6dcc-4296-a61c-d67449183ab9',\n",
"  'HTTPStatusCode': 200,\n",
"  'HTTPHeaders': {'date': 'Thu, 14 Nov 2024 19:19:03 GMT',\n",
"   'content-type': 'application/json',\n",
"   'content-length': '209',\n",
"   'connection': 'keep-alive',\n",
"   'x-amzn-requestid': '67f892c6-6dcc-4296-a61c-d67449183ab9'},\n",
"  'RetryAttempts': 0},\n",
" 'output': {'message': {'role': 'assistant',\n",
"   'content': [{'text': 'Hi! How can I help you today?'}]}},\n",
" 'stopReason': 'end_turn',\n",
" 'usage': {'inputTokens': 9, 'outputTokens': 12, 'totalTokens': 21},\n",
" 'metrics': {'latencyMs': 643}}"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"bedrock_client = boto3.client(service_name='bedrock-runtime', region_name=\"us-west-2\")\n",
"model_id = \"anthropic.claude-3-5-sonnet-20241022-v2:0\"\n",
"\n",
"# Send the message.\n",
"response = bedrock_client.converse(\n",
"    modelId=model_id,\n",
"    messages=[{\n",
"        \"role\": \"user\",\n",
"        \"content\": [{\"text\": \"hello world\"}]\n",
"    }],\n",
")\n",
"\n",
"response"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In addition to `content`, the response contains some other pieces of information:\n",
"\n",
"* `stopReason` - The reason the model stopped generating. We'll learn more about this later.\n",
"* `usage` - Token counts relevant to billing and rate limits:\n",
"    * `inputTokens` - The number of input tokens that were used.\n",
"    * `outputTokens` - The number of output tokens that were used.\n",
"    * `totalTokens` - The sum of input and output tokens.\n",
"\n",
"It's useful to know that we have access to these pieces of information, but if you only remember one thing, make it this: `content` contains the actual model-generated content."
]
},
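{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sketch, here's how you might pull the generated text and the token usage out of a response shaped like the one above. The `response` dict below is a trimmed, hard-coded copy of the earlier output, so this snippet runs on its own without calling Bedrock:\n",
"\n",
"```py\n",
"# Hard-coded sample mirroring the Converse API response shown above\n",
"response = {\n",
"    \"output\": {\"message\": {\"role\": \"assistant\",\n",
"                           \"content\": [{\"text\": \"Hi! How can I help you today?\"}]}},\n",
"    \"stopReason\": \"end_turn\",\n",
"    \"usage\": {\"inputTokens\": 9, \"outputTokens\": 12, \"totalTokens\": 21},\n",
"}\n",
"\n",
"text = response[\"output\"][\"message\"][\"content\"][0][\"text\"]\n",
"total_tokens = response[\"usage\"][\"totalTokens\"]\n",
"print(text)          # Hi! How can I help you today?\n",
"print(total_tokens)  # 21\n",
"```"
]
},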
{
"cell_type": "markdown",
"metadata": {
"vscode": {
"languageId": "python"
}
},
"source": [
"## Putting words in Claude's mouth\n",
"\n",
"If the last message in `messages` has the `assistant` role, Claude treats it as the beginning of its own response and continues from there. This lets us \"prefill\" the start of Claude's output, as the next two cells demonstrate."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\ncherry blossoms dance and drift\\npeace follows the wind'"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [{\n",
"    \"role\": \"user\",\n",
"    \"content\": [{\"text\": \"Generate a beautiful haiku\"}]\n",
"}, {\n",
"    \"role\": \"assistant\",\n",
"    \"content\": [{\"text\": \"calming mountain air\"}]\n",
"}]\n",
"\n",
"generate_conversation(messages)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"calming mountain air \n",
"cherry blossoms dance in dew \n",
"Spring calls in a dream\n"
]
}
],
"source": [
"print(\"calming mountain air\" + generate_conversation(messages))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Few-shot prompting\n",
"\n",
"One of the most useful prompting strategies is \"few-shot prompting\", which involves providing a model with a small number of **examples**. These examples help guide Claude's generated output, and the `messages` conversation history is an easy way to provide them.\n",
"\n",
"For example, suppose we want to use Claude to analyze the sentiment in tweets. We could start by simply asking Claude to \"please analyze the sentiment in this tweet:\" and see what sort of output we get:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'This tweet has a positive sentiment. The use of enthusiastic language (\"happy dance\"), the positive context of enjoying the product, and the cheerful emoji usage (🌶️🥒) all indicate the user had a favorable experience with the spicy pickles. The inclusion of upbeat hashtags like #pickleslove also reinforces the positive tone.'"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [{\n",
"    \"role\": \"user\",\n",
"    \"content\": [{\"text\": \"Analyze the sentiment in this tweet: Just tried the new spicy pickles from @PickleCo, and my taste buds are doing a happy dance! 🌶️🥒 #pickleslove #spicyfood\"}]\n",
"}]\n",
"\n",
"generate_conversation(messages)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is a great response, but it's probably way more information than we need from Claude, especially if we're trying to automate the sentiment analysis of a large number of tweets.\n",
"\n",
"We might prefer that Claude respond with a standardized output format like a single word (POSITIVE, NEUTRAL, NEGATIVE) or a numeric value (1, 0, -1). For readability and simplicity, let's get Claude to respond with either \"POSITIVE\" or \"NEGATIVE\". One way of doing this is through few-shot prompting: we can provide Claude with a conversation history that shows exactly how we want it to respond:\n",
"\n",
"```py\n",
"messages = [\n",
"    {\"role\": \"user\", \"content\": [{\"text\": \"Unpopular opinion: Pickles are disgusting. Don't @ me\"}]},\n",
"    {\"role\": \"assistant\", \"content\": [{\"text\": \"NEGATIVE\"}]},\n",
"    {\"role\": \"user\", \"content\": [{\"text\": \"I think my love for pickles might be getting out of hand. I just bought a pickle-shaped pool float\"}]},\n",
"    {\"role\": \"assistant\", \"content\": [{\"text\": \"POSITIVE\"}]},\n",
"    {\"role\": \"user\", \"content\": [{\"text\": \"Seriously why would anyone ever eat a pickle? Those things are nasty!\"}]},\n",
"    {\"role\": \"assistant\", \"content\": [{\"text\": \"NEGATIVE\"}]},\n",
"    {\"role\": \"user\", \"content\": [{\"text\": \"Just tried the new spicy pickles from @PickleCo, and my taste buds are doing a happy dance! 🌶️🥒 #pickleslove #spicyfood\"}]}\n",
"]\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'POSITIVE'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
"    {\"role\": \"user\", \"content\": [{\"text\": \"Unpopular opinion: Pickles are disgusting. Don't @ me\"}]},\n",
"    {\"role\": \"assistant\", \"content\": [{\"text\": \"NEGATIVE\"}]},\n",
"    {\"role\": \"user\", \"content\": [{\"text\": \"I think my love for pickles might be getting out of hand. I just bought a pickle-shaped pool float\"}]},\n",
"    {\"role\": \"assistant\", \"content\": [{\"text\": \"POSITIVE\"}]},\n",
"    {\"role\": \"user\", \"content\": [{\"text\": \"Seriously why would anyone ever eat a pickle? Those things are nasty!\"}]},\n",
"    {\"role\": \"assistant\", \"content\": [{\"text\": \"NEGATIVE\"}]},\n",
"    {\"role\": \"user\", \"content\": [{\"text\": \"Just tried the new spicy pickles from @PickleCo, and my taste buds are doing a happy dance! 🌶️🥒 #pickleslove #spicyfood\"}]}\n",
"]\n",
"\n",
"generate_conversation(messages)"
]
},
{
"cell_type": "markdown",
"metadata": {
"vscode": {
"languageId": "python"
}
},
"source": [
"***\n",
"\n",
"## Exercise\n",
"\n",
"### Your task: build a chatbot\n",
"\n",
"Build a simple multi-turn command-line chatbot script. The messages format lends itself to building chat-based applications. To build a chatbot with Claude, it's as simple as:\n",
"\n",
"1. Keep a list to store the conversation history\n",
"2. Ask the user for a message using `input()` and add the user input to the messages list\n",
"3. Send the message history to Claude\n",
"4. Print out Claude's response to the user\n",
"5. Add Claude's assistant response to the history\n",
"6. Go back to step 2 and repeat! (Use a loop, and provide a way for users to quit!)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<details>\n",
"  <summary>View exercise solution</summary>\n",
"\n",
"  ```py\n",
"  conversation_history = []\n",
"\n",
"  while True:\n",
"      user_input = input(\"User: \")\n",
"\n",
"      if user_input.lower() == \"quit\":\n",
"          print(\"Conversation ended.\")\n",
"          break\n",
"\n",
"      conversation_history.append({\n",
"          \"role\": \"user\",\n",
"          \"content\": [{\"text\": user_input}]\n",
"      })\n",
"\n",
"      assistant_response = generate_conversation(conversation_history)\n",
"      print(f\"Assistant: {assistant_response}\")\n",
"\n",
"      conversation_history.append({\n",
"          \"role\": \"assistant\",\n",
"          \"content\": [{\"text\": assistant_response}]\n",
"      })\n",
"  ```\n",
"</details>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"***\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "py311",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}